Nonconvex compressive sensing and reconstruction of gradient-sparse images: random vs. tomographic Fourier sampling

Fast algorithms for nonconvex compressive sensing: MRI reconstruction from very few data

Proceedings - International Symposium on Biomedical Imaging, 2009

Compressive sensing is the reconstruction of sparse images or signals from very few samples, by means of solving a tractable optimization problem. In the context of MRI, this can allow reconstruction from many fewer k-space samples, thereby reducing scanning time. Previous work has shown that nonconvex optimization further reduces the number of samples required for reconstruction, while remaining tractable. In this work, we extend recent Fourier-based algorithms for convex optimization to the nonconvex setting, and obtain methods that combine the reconstruction abilities of previous nonconvex approaches with the computational speed of state-of-the-art convex methods.
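The nonconvex formulation replaces the l1 norm of convex compressive sensing with an lp quasi-norm, p < 1. As a minimal sketch of that idea (not the paper's Fourier-based algorithm), the following uses iteratively reweighted least squares with epsilon-continuation for equality-constrained lp minimization; the function name, problem sizes, and schedule are illustrative choices:

```python
import numpy as np

def irls_lp(A, b, p=0.5, n_outer=8, n_inner=5):
    """Approximate min ||x||_p^p subject to Ax = b via iteratively
    reweighted least squares with a decreasing smoothing parameter eps."""
    x = A.T @ np.linalg.solve(A @ A.T, b)        # minimum-l2 starting point
    eps = 1.0
    for _ in range(n_outer):
        for _ in range(n_inner):
            q = (x ** 2 + eps) ** (1.0 - p / 2.0)     # inverse weights
            # weighted least-norm solution: x = Q A^T (A Q A^T)^{-1} b
            x = q * (A.T @ np.linalg.solve((A * q) @ A.T, b))
        eps /= 10.0                              # sharpen toward the l_p penalty
    return x

# demo: recover a 5-sparse vector from 40 random measurements of length 100
rng = np.random.default_rng(0)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
support = rng.choice(n, k, replace=False)
x0[support] = rng.choice([-1.0, 1.0], k) * rng.uniform(1.0, 2.0, k)
x_hat = irls_lp(A, A @ x0)
err = np.linalg.norm(x_hat - x0) / np.linalg.norm(x0)
```

In this regime (40 measurements, 5 nonzeros), the lp-weighted iteration typically recovers the sparse vector to high accuracy from fewer samples than l1 minimization would need.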

Gradient-Based Image Recovery Methods From Incomplete Fourier Measurements

IEEE Transactions on Image Processing, 2012

A major problem in imaging applications such as Magnetic Resonance Imaging (MRI) and Synthetic Aperture Radar (SAR) is reconstructing an image from the smallest possible set of Fourier samples, every one of which has a potential time and/or power cost. The theory of Compressive Sensing (CS) points to ways of exploiting inherent sparsity in such images in order to achieve accurate recovery using sub-Nyquist sampling schemes. Traditional CS approaches to this problem consist of solving total-variation minimization programs with Fourier measurement constraints, or variations thereof. This paper takes a different approach: since the horizontal and vertical differences of a medical image are each more sparse or compressible than the corresponding total-variation image, CS methods will be more successful in recovering these differences individually. We develop an algorithm called GradientRec that uses a CS algorithm to recover the horizontal and vertical gradients and then estimates the original image from these gradients. We present two methods of solving the latter inverse problem: one based on least-squares optimization and the other based on a generalized Poisson solver. After a thorough derivation of our complete algorithm, we present the results of various experiments that compare the effectiveness of the proposed method against other leading methods.
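The second stage — estimating an image from its recovered horizontal and vertical differences — is a linear inverse problem. A minimal least-squares sketch of that integration step follows; the operator construction and names are illustrative, and a real implementation would use sparse matrices or the paper's Poisson-solver route:

```python
import numpy as np

def diff_ops(h, w):
    # Dense forward-difference operators (practical only for small images)
    n, idx = h * w, np.arange(h * w).reshape(h, w)
    Dx = np.zeros((h * (w - 1), n))
    for i in range(h):
        for j in range(w - 1):
            r = i * (w - 1) + j
            Dx[r, idx[i, j + 1]], Dx[r, idx[i, j]] = 1.0, -1.0
    Dy = np.zeros(((h - 1) * w, n))
    for i in range(h - 1):
        for j in range(w):
            r = i * w + j
            Dy[r, idx[i + 1, j]], Dy[r, idx[i, j]] = 1.0, -1.0
    return Dx, Dy

def image_from_gradients(gx, gy, h, w):
    # Least-squares integration; gradients determine the image only up to a constant
    Dx, Dy = diff_ops(h, w)
    D = np.vstack([Dx, Dy])
    g = np.concatenate([gx.ravel(), gy.ravel()])
    u = np.linalg.lstsq(D, g, rcond=None)[0]
    return u.reshape(h, w)

# round trip on an 8x8 test image (exact up to the lost mean value)
rng = np.random.default_rng(0)
u0 = rng.standard_normal((8, 8))
gx, gy = u0[:, 1:] - u0[:, :-1], u0[1:, :] - u0[:-1, :]
rec = image_from_gradients(gx, gy, 8, 8) + u0.mean()
```

The constant of integration is the nullspace of the difference operators, which is why the mean must be restored (or fixed by a boundary condition) after solving.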

Iterative tomographic image reconstruction by compressive sampling

2010 IEEE International Conference on Image Processing, 2010

Positron Emission Tomography (PET) and Single Photon Emission Computerized Tomography (SPECT) are essential medical imaging tools with the inherent drawback of a slow data acquisition process. Knowing that radionuclide images are sparse in a transform domain, we apply the concept of Compressive Sampling to them. The proposed approach aims to reconstruct images from fewer measurements than traditionally employed, significantly reducing scan time and radiopharmaceutical dose, with benefits for patients and health-care economics. The reconstruction of tomographic images is realized by compressed sensing of the 2-D Fourier projections. These 2-D projections, being sparse in a transform domain, are sensed with fewer samples in k-space and are reconstructed without loss of fidelity. The undersampled Fourier projections can then be backprojected, using an iterative reconstruction approach, into a complete 3-D volume. Our work focuses on the acquisition of 2-D SPECT/PET projections based on compressive sampling and their reconstruction using a non-linear recovery algorithm. Compressive sampling of a phantom image and a PET bone-scan scintigraph with radial Fourier samples is performed. The reconstructions are compared to conventionally sampled images using MSE, PSNR, and the Structural SIMilarity (SSIM) index. The results show high-quality image reconstruction using considerably fewer measurements.
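The radial k-space sampling geometry and one of the quality metrics used can be sketched compactly (the spoke count, phantom, and function names are illustrative, and SSIM is omitted for brevity; the zero-filled inverse FFT below stands in for the non-linear recovery algorithm):

```python
import numpy as np

def radial_mask(n, n_spokes):
    # Binary k-space mask: n_spokes lines through the DC point
    mask = np.zeros((n, n), dtype=bool)
    c = n // 2
    for k in range(n_spokes):
        theta = np.pi * k / n_spokes
        for t in np.linspace(-c, c, 2 * n):
            i = int(round(c + t * np.sin(theta)))
            j = int(round(c + t * np.cos(theta)))
            if 0 <= i < n and 0 <= j < n:
                mask[i, j] = True
    return mask

def psnr(ref, rec, peak=1.0):
    return 10 * np.log10(peak ** 2 / np.mean((ref - rec) ** 2))

# zero-filled reconstruction of a square phantom from 16 radial spokes
n = 64
img = np.zeros((n, n))
img[24:40, 24:40] = 1.0
mask = radial_mask(n, 16)
F = np.fft.fftshift(np.fft.fft2(img))
zf = np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

With 16 spokes the mask covers well under half of k-space, yet the zero-filled image already retains the low-frequency structure; a CS solver then suppresses the streak artifacts the missing samples cause.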

Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

Scientific Reports, 2016

Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging.

Sampling protocols have drastically changed with the discovery of compressive sensing (CS) data acquisition and signal recovery [1, 2]. Prior to the development of CS theory, the Shannon-Nyquist theorem determined the majority of sampling procedures for both audio signals and images, dictating the minimum rate, the Nyquist rate, with which a signal must be uniformly sampled to guarantee successful reconstruction [3]. Since the theorem specifically addresses minimal sampling rates corresponding to uniformly-spaced measurements, signals were typically sampled at equally-spaced intervals in space or time before the discovery of CS. However, using CS-type data acquisition, it is possible to reconstruct a broad class of sparse signals, containing a small number of dominant components in some domain, by employing a sub-Nyquist sampling rate [2]. Instead of applying uniformly-spaced signal measurements, CS theory demonstrates that several types of uniformly-random sampling protocols will yield successful reconstructions with high probability [4-6]. While CS signal recovery is relatively accurate for sufficiently high sampling rates, we demonstrate that, for the recovery of natural scenes, reconstruction quality can be further improved via localized random sampling. In this new protocol, each signal sample consists of a randomly centered local cluster of measurements, in which the probability of measuring a given pixel decreases with its distance from the cluster center. We show that the localized random sampling protocol consistently produces more accurate CS reconstructions of natural scenes than the uniformly-random sampling procedure using the same number of samples. For images containing a relatively large spread of dominant frequency components, the improvement is most pronounced, with localized random sampling yielding a higher-fidelity representation of both low and moderate frequency components containing the majority of image information. Moreover, the reconstruction improvements garnered by localized random sampling also extend to images with varying size and spectrum distribution, affording improved reconstruction
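The sampling rule described above — pick a random center, then measure nearby pixels with distance-decaying probability — can be sketched as a mask generator. The Gaussian fall-off and all parameter values below are illustrative choices, not the paper's exact protocol:

```python
import numpy as np

def localized_random_mask(h, w, n_clusters, sigma, rng):
    # Union of randomly centered clusters; each pixel joins a cluster
    # with probability exp(-d^2 / (2 sigma^2)), d = distance to the center
    mask = np.zeros((h, w), dtype=bool)
    ii, jj = np.mgrid[0:h, 0:w]
    for _ in range(n_clusters):
        ci, cj = rng.integers(0, h), rng.integers(0, w)
        p = np.exp(-((ii - ci) ** 2 + (jj - cj) ** 2) / (2.0 * sigma ** 2))
        mask |= rng.random((h, w)) < p
    return mask

# a 32x32 mask built from 10 localized clusters
mask = localized_random_mask(32, 32, n_clusters=10, sigma=2.0,
                             rng=np.random.default_rng(0))
```

Each cluster's center is always measured (its inclusion probability is 1), and the fall-off width sigma plays the role of the receptive-field size that the paper's parameter sweep optimizes.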

Average case recovery analysis of tomographic compressive sensing

Linear Algebra and its Applications, 2014

The reconstruction of three-dimensional sparse volume functions from few tomographic projections constitutes a challenging problem in image reconstruction, and turns out to be a particular instance of compressive sensing. The tomographic measurement matrix encodes the incidence relation of the imaging process, and therefore is not subject to design, up to small perturbations of its non-zero entries. We present an average-case analysis of the recovery properties and a corresponding tail bound to establish weak thresholds, in excellent agreement with numerical experiments. Our results improve the state of the art of tomographic imaging in experimental fluid dynamics by a factor of three.

Frequency extrapolation by nonconvex compressive sensing

Proceedings - International Symposium on Biomedical Imaging, 2011

Tomographic imaging modalities sample subjects with a discrete, finite set of measurements, while the underlying object function is continuous. Because of this, inversion of the imaging model, even under ideal conditions, necessarily entails approximation. The error incurred by this approximation can be important when there is rapid variation in the object function or when the objects of interest are small. In this work, we investigate this issue for the Fourier transform (FT), which can be taken as the imaging model for magnetic resonance imaging (MRI) or some forms of wave imaging. Compressive sensing has been successful for inverting this data model when only a sparse set of samples is available. We apply the compressive sensing principle to the related problem of frequency extrapolation, where the object function is represented by a super-resolution grid with many more pixels than FT measurements. The image on the super-resolution grid is obtained through nonconvex minimization. The method fully utilizes the available FT samples, while controlling aliasing and ringing. The algorithm is demonstrated with continuous FT samples of the Shepp-Logan phantom with additional small, high-contrast objects.
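The approximation error from inverting finitely many FT samples is easy to exhibit: naively truncating the spectrum of an object with a sharp edge produces Gibbs ringing even with noise-free data, which is exactly the artifact the frequency-extrapolation approach aims to control. A minimal 1-D illustration (grid size and cutoff are arbitrary choices):

```python
import numpy as np

n, kmax = 256, 16
x = np.zeros(n)
x[96:160] = 1.0                       # object with sharp edges

X = np.fft.fft(x)
Xlow = np.zeros(n, dtype=complex)     # keep only |frequency| <= kmax
Xlow[:kmax + 1] = X[:kmax + 1]
Xlow[-kmax:] = X[-kmax:]

x_lp = np.real(np.fft.ifft(Xlow))     # naive band-limited inversion
overshoot = x_lp.max() - 1.0          # Gibbs overshoot near the edges
```

The overshoot (roughly 9% of the jump height for an ideal truncation) does not shrink as the grid is refined; only extrapolating the unmeasured frequencies, e.g. via a sparsity-promoting nonconvex objective as in this paper, removes it.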

Compressive sensing in medical imaging

Applied Optics, 2015

The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

IJERT-Compressive Sensing Reconstruction for Sparse Signals with Convex Optimization

International Journal of Engineering Research and Technology (IJERT), 2014

https://www.ijert.org/compressive-sensing-reconstruction-for-sparse-signals-with-convex-optimization
https://www.ijert.org/research/compressive-sensing-reconstruction-for-sparse-signals-with-convex-optimization-IJERTV3IS090011.pdf

The theory of compressive sampling (CS), also known as compressed sensing, is a modern sensing scheme that goes against the common wisdom in data acquisition. CS theory claims that one can recover images or signals from fewer samples or measurements than traditional methods use. To achieve this recovery, CS theory depends on two basic principles: sparsity, which relates to the signals of interest, and incoherence, which relates to the sensing method. In this paper we give a brief review of CS theory, discuss the analog-to-information converter (AIC), and support the discussion with two examples of signal reconstruction from undersampled measurements. Simulation results show the power of CS reconstruction for both sparse-in-time and sparse-in-frequency signals.
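A concrete instance of convex l1 recovery of a sparse-in-frequency signal from undersampled time measurements can be sketched with the iterative shrinkage-thresholding algorithm (ISTA). This is a generic sketch, not the paper's experiment; all sizes, the continuation schedule, and the debiasing step are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 128, 40
t = np.arange(n)
# two sinusoids -> only 4 nonzero DFT coefficients
x = np.cos(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)

idx = np.sort(rng.choice(n, m, replace=False))
b = x[idx]                                    # m random time samples
# B maps DFT coefficients to sampled time points (rows of the inverse DFT)
B = np.exp(2j * np.pi * np.outer(idx, t) / n) / n

def soft(c, thr):
    mag = np.abs(c)
    return np.where(mag > thr, (1.0 - thr / np.maximum(mag, 1e-30)) * c, 0.0)

# ISTA for min 0.5||Bc - b||^2 + lam||c||_1, with a decreasing lam schedule
c = np.zeros(n, dtype=complex)
step = float(n)                               # valid since ||B||_2 <= 1/sqrt(n)
lam0 = np.max(np.abs(B.conj().T @ b))
for lam in lam0 * np.array([1e-1, 1e-2, 1e-3, 1e-4]):
    for _ in range(300):
        c = soft(c + step * (B.conj().T @ (b - B @ c)), step * lam)

# debias by least squares on the detected support, then invert the DFT
S = np.abs(c) > 0.1 * np.abs(c).max()
c_db = np.zeros(n, dtype=complex)
c_db[S] = np.linalg.lstsq(B[:, S], b, rcond=None)[0]
x_rec = np.real(np.fft.ifft(c_db))
err = np.linalg.norm(x_rec - x) / np.linalg.norm(x)
```

With 40 of 128 time samples, the four dominant DFT coefficients are recovered and the full signal is reconstructed accurately; a sparse-in-time signal sensed with random frequency measurements works the same way with the roles of the two domains exchanged.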

A Compressed Sensing Approach to Image Reconstruction

IJSRD, 2013

Compressed sensing is a new technique that sidesteps the Shannon-Nyquist theorem for reconstructing a signal. It uses far fewer random measurements than were traditionally needed to recover a signal or image. The need for this technique comes from the fact that most of the information is carried by only a few of the signal's coefficients; why acquire all the data if most of it is thrown away without being used? A number of review articles and research papers have been published in this area, but with the increasing interest of practitioners in this emerging field, it is worthwhile to take a fresh look at this method and its implementations. The main aim of this paper is to review compressive sensing theory and its applications.