Hybrid Compressive Sampling via a New Total Variation TVL1

Review of Algorithms for Compressive Sensing of Images

We provide a comprehensive review of leading algorithms for compressive sensing of images, focused on total variation (TV) methods, with a view to application in LiDAR systems. Our primary aim is to provide an accessible review for newcomers to the field, as well as to simulate the kind of noise found in real LiDAR systems. To this end, we provide an overview of the theoretical background, a brief discussion of various considerations that come into play in compressive sensing, and a standardized comparison of off-the-shelf methods, intended as a quick-start guide to choosing algorithms for compressive sensing applications.
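As a concrete illustration of the TV-based reconstruction this review surveys, the following minimal sketch recovers a piecewise-constant image from random Gaussian measurements with a proximal-gradient loop, using scikit-image's Chambolle TV denoiser as the proximal step. It assumes numpy and scikit-image are available; the matrix A, the step size, and the weight lam are illustrative choices, not parameters from the paper.

```python
# Illustrative sketch: TV-regularized CS reconstruction via proximal gradient,
# using skimage's Chambolle TV denoiser as an (approximate) proximal operator.
# All sizes and parameters are illustrative assumptions.
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
n = 32                                # image side length
x_true = np.zeros((n, n))
x_true[8:24, 8:24] = 1.0              # simple piecewise-constant test image

m = 400                               # number of compressive measurements
A = rng.standard_normal((m, n * n)) / np.sqrt(m)
b = A @ x_true.ravel()                # noiseless measurements y = A x

x = np.zeros((n, n))
step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe gradient step (1/Lipschitz)
lam = 0.05                               # TV regularization weight (illustrative)
for _ in range(100):
    grad = (A.T @ (A @ x.ravel() - b)).reshape(n, n)
    x = denoise_tv_chambolle(x - step * grad, weight=step * lam)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```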

Compressive sensing with modified Total Variation minimization algorithm

2010 IEEE International Conference on Acoustics, Speech and Signal Processing, 2010

In this paper, the reconstruction problem of a compressive sensing algorithm used for image compression is investigated. Starting from the Total Variation (TV) minimization algorithm and adding new constraints compatible with typical image properties, the performance of the reconstruction is improved. Using the DCT and contourlet transforms, sparse expansions of the image are exploited to provide new constraints that remove irrelevant vectors from the feasible set of the optimization problem, while keeping the problem a standard Second-Order Cone Program (SOCP). Experimental results show that the proposed method, with the new constraints, outperforms the conventional TV minimization method by up to 2 dB in PSNR.
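For reference, the conventional TV-minimization baseline that the paper improves upon can be written as a convex program (an SOCP after standard reformulation). The sketch below shows only this baseline in cvxpy, assuming a recent cvxpy and numpy; it does not include the paper's DCT/contourlet constraints, and all sizes are illustrative.

```python
# Illustrative sketch: the conventional TV-minimization problem as a convex
# program, solved with cvxpy. This is only the baseline the paper compares
# against, not the paper's constrained method.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n = 16                                   # small image for a quick demo
m = 160                                  # number of random measurements
A = rng.standard_normal((m, n * n))
x_true = np.zeros((n, n))
x_true[4:12, 4:12] = 1.0                 # piecewise-constant test image
b = A @ x_true.ravel(order="F")          # column-major to match cp.vec below

X = cp.Variable((n, n))
constraints = [A @ cp.vec(X, order="F") == b]   # data-consistency constraint
prob = cp.Problem(cp.Minimize(cp.tv(X)), constraints)
prob.solve()

print("relative error:", np.linalg.norm(X.value - x_true) / np.linalg.norm(x_true))
```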

Improved Compressive Sensing of Natural Scenes Using Localized Random Sampling

Scientific Reports, 2016

Compressive sensing (CS) theory demonstrates that by using uniformly-random sampling, rather than uniformly-spaced sampling, higher quality image reconstructions are often achievable. Considering that the structure of sampling protocols has such a profound impact on the quality of image reconstructions, we formulate a new sampling scheme motivated by physiological receptive field structure, localized random sampling, which yields significantly improved CS image reconstructions. For each set of localized image measurements, our sampling method first randomly selects an image pixel and then measures its nearby pixels with probability depending on their distance from the initially selected pixel. We compare the uniformly-random and localized random sampling methods over a large space of sampling parameters, and show that, for the optimal parameter choices, higher quality image reconstructions can be consistently obtained by using localized random sampling. In addition, we argue that the localized random CS optimal parameter choice is stable with respect to diverse natural images, and scales with the number of samples used for reconstruction. We expect that the localized random sampling protocol helps to explain the evolutionarily advantageous nature of receptive field structure in visual systems and suggests several future research areas in CS theory and its application to brain imaging.

Sampling protocols have drastically changed with the discovery of compressive sensing (CS) data acquisition and signal recovery [1,2]. Prior to the development of CS theory, the Shannon-Nyquist theorem determined the majority of sampling procedures for both audio signals and images, dictating the minimum rate, the Nyquist rate, with which a signal must be uniformly sampled to guarantee successful reconstruction [3]. Since the theorem specifically addresses minimal sampling rates corresponding to uniformly-spaced measurements, signals were typically sampled at equally-spaced intervals in space or time before the discovery of CS. However, using CS-type data acquisition, it is possible to reconstruct a broad class of sparse signals, containing a small number of dominant components in some domain, by employing a sub-Nyquist sampling rate [2]. Instead of applying uniformly-spaced signal measurements, CS theory demonstrates that several types of uniformly-random sampling protocols will yield successful reconstructions with high probability [4-6]. While CS signal recovery is relatively accurate for sufficiently high sampling rates, we demonstrate that, for the recovery of natural scenes, reconstruction quality can be further improved via localized random sampling. In this new protocol, each signal sample consists of a randomly centered local cluster of measurements, in which the probability of measuring a given pixel decreases with its distance from the cluster center. We show that the localized random sampling protocol consistently produces more accurate CS reconstructions of natural scenes than the uniformly-random sampling procedure using the same number of samples. For images containing a relatively large spread of dominant frequency components, the improvement is most pronounced, with localized random sampling yielding a higher fidelity representation of both low and moderate frequency components containing the majority of image information. Moreover, the reconstruction improvements garnered by localized random sampling also extend to images with varying size and spectrum distribution, affording improved reconstruction...
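A minimal sketch of the localized random sampling pattern as described in the abstract: choose a random center pixel, then include nearby pixels with a probability that decays with distance. The Gaussian decay profile and all parameters below are illustrative assumptions, not the paper's exact choices; only numpy is required.

```python
# Illustrative sketch of a localized random sampling pattern: random cluster
# centers, with measurement probability decaying with distance from each center.
import numpy as np

rng = np.random.default_rng(2)

def localized_random_mask(n, num_clusters, radius):
    """Return a boolean n-by-n mask of sampled pixels."""
    yy, xx = np.mgrid[0:n, 0:n]
    mask = np.zeros((n, n), dtype=bool)
    for _ in range(num_clusters):
        cy, cx = rng.integers(0, n, size=2)        # random cluster center
        dist2 = (yy - cy) ** 2 + (xx - cx) ** 2
        prob = np.exp(-dist2 / (2 * radius ** 2))  # probability decays with distance
        mask |= rng.random((n, n)) < prob
    return mask

mask = localized_random_mask(n=64, num_clusters=20, radius=2.0)
print("fraction of pixels sampled:", mask.mean())
```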

General Framework of Compressive Sampling and its Applications for Signal and Image Compression: A Random Approach

Journal of Technology Management for Growing Economies

Compressive sampling emerged as a very useful random protocol and has been an active research area for almost a decade. Compressive sampling allows us to sample a signal below the Shannon-Nyquist rate and assures its successful reconstruction if the signal is sparse. In this paper we used compressive sampling for arbitrary signal and image compression and successfully reconstructed them by solving an l1-norm optimization problem. We also showed through simulations that compressive sampling can be implemented if the signal is sparse and incoherent.
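The two prerequisites the abstract names, sparsity and incoherence, can be checked numerically. The sketch below estimates the mutual coherence between a random sensing basis and an orthonormal DCT sparsity basis, assuming numpy and scipy; sizes are illustrative.

```python
# Illustrative sketch: computing the mutual coherence between a random sensing
# basis and a DCT sparsity basis. Low coherence is what makes random sensing work.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(3)
n = 256

Phi = rng.standard_normal((n, n))
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)   # unit-norm sensing rows

Psi = dct(np.eye(n), norm="ortho", axis=0)          # orthonormal DCT sparsity basis

# Mutual coherence mu = sqrt(n) * max |<phi_i, psi_j>|; values near 1 (its
# minimum) indicate high incoherence, values near sqrt(n) indicate coherence.
mu = np.sqrt(n) * np.abs(Phi @ Psi).max()
print("mutual coherence:", mu)   # random rows vs. DCT are nearly incoherent
```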

Performance Evaluation of Total Variation based Compressed Sensing MRI for Different Sampling Patterns

Magnetic Resonance Imaging (MRI) has been utilized broadly for clinical purposes to portray human anatomy due to its non-intrusive nature. The data acquisition method in MRI naturally picks up Fourier-encoded signals rather than pixel values; this data is known as k-space information. Sparse reconstruction techniques can be employed in MRI to produce an image from fewer measurements. The compressive sensing (CS) technique samples the signals at a rate lower than the traditional Nyquist rate and thereby reduces the data acquisition time in MRI. This paper investigates a newly proposed sampling scheme along with radial sampling and 1D Cartesian variable-density sampling. For various sampling percentages, subjective and quantitative analyses are carried out on the reconstructed Magnetic Resonance image. Experimental results depict that a high sampling density near the center of k-space gives a better reconstruction in compressed sensing MRI.
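A minimal sketch of a 1D Cartesian variable-density undersampling mask of the kind compared in the paper, with sampling density highest near the k-space center. The polynomial density profile and parameters are illustrative assumptions; only numpy is required.

```python
# Illustrative sketch: a 1D Cartesian variable-density undersampling mask for
# k-space, densest near the center where most image energy lives.
import numpy as np

rng = np.random.default_rng(4)

def variable_density_mask(n_lines, sampling_fraction, power=3.0):
    """Select phase-encode lines with probability decaying away from the k-space center."""
    k = np.linspace(-1.0, 1.0, n_lines)            # normalized k-space coordinate
    density = (1.0 - np.abs(k)) ** power           # peaked at the center of k-space
    density *= sampling_fraction * n_lines / density.sum()
    mask = rng.random(n_lines) < np.clip(density, 0.0, 1.0)
    mask[n_lines // 2 - 2 : n_lines // 2 + 3] = True   # always keep the center lines
    return mask

mask = variable_density_mask(n_lines=256, sampling_fraction=0.3)
print("sampled lines:", mask.sum(), "of", mask.size)
```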

An Introduction to Compressive Sampling

The general trend of compressing signals only after they have been completely acquired is no longer the most effective approach in signal processing and communication. Since most signals are compressible, we can collect fewer measurements and recover only the necessary information to maximize the efficacy of the sampling and reconstruction system. Compressive Sampling uses the concept of signal sparsity for an n-dimensional signal and develops algorithms to reconstruct the signal from m < n measurements. This report describes various conditions on the sparsity S, the number of measurements m, and the restricted isometry property (RIP) of a sensing matrix A, and attempts to find a solution to the Basis Pursuit optimization program for exactly S-sparse and nearly S-sparse vectors under noisy conditions. A strong emphasis is also placed on random matrices and the incoherence of bases, and it is shown that random sensing is the key to acquiring fewer measurements. Based on these theoretical concepts, the report describes emerging applications along with two hardware architectures that utilize robust sensing design techniques. The content of this report is based on the study of the paper "An Introduction to Compressive Sampling" by Candès and Wakin, and thus its scope is limited to discrete-time signals.
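The RIP condition the report discusses can be probed empirically: a random Gaussian matrix should approximately preserve the norms of all S-sparse vectors. The sketch below, assuming only numpy and with illustrative sizes, samples random S-sparse vectors and checks the norm ratios.

```python
# Illustrative sketch: empirically probing the RIP-style behavior of a random
# Gaussian sensing matrix by checking how well it preserves norms of S-sparse vectors.
import numpy as np

rng = np.random.default_rng(5)
n, m, S = 512, 128, 10                         # ambient dim, measurements, sparsity

A = rng.standard_normal((m, n)) / np.sqrt(m)   # normalized Gaussian sensing matrix

ratios = []
for _ in range(1000):
    x = np.zeros(n)
    support = rng.choice(n, size=S, replace=False)
    x[support] = rng.standard_normal(S)        # random S-sparse vector
    ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))

ratios = np.array(ratios)
# If A satisfies RIP with constant delta_S, these ratios lie in
# [sqrt(1 - delta_S), sqrt(1 + delta_S)] for all S-sparse x.
print("min/max norm ratio:", ratios.min(), ratios.max())
```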

Sparse signal, image recovery in compressive sensing technique through l1 norm minimization

2012

The classical Shannon-Nyquist theorem tells us that the sampling rate required to reconstruct a signal must be at least twice the highest frequency of the signal of interest. This principle underlies virtually all signal processing applications. Unfortunately, in most practical cases we end up with far too many samples. For such cases a new sampling method has been developed, called Compressive Sensing (CS) or Compressive Sampling, in which one can reconstruct certain signals and images from far fewer samples or measurements than the classical theorem requires. CS theory relies primarily on the sparsity principle and exploits the fact that many natural signals or images are sparse, in the sense that they have concise representations when expressed in the proper basis. Since CS theory relies on sparsity, we focus on reconstructing a sparse signal or sparse-approximated image from its corresponding few measurements. In this document we focus on the l1-norm minimization problem (a convex optimization problem) and its importance in recovering a sparse signal or sparse-approximated image in CS. To sparse-approximate the image, we transform it from the standard pixel domain to the wavelet domain, because of its concise representation there. The algorithms we use to solve the l1-norm minimization problem are the primal-dual interior-point method and the barrier method. We provide Matlab examples to explain the differences between the barrier method and the primal-dual interior-point method in solving an l1-norm minimization problem, i.e., recovering a sparse signal or image from very few measurements. When recovering images, we use a block-wise approach, treating each block as a vector.
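The wavelet-domain sparse approximation step described above can be illustrated with PyWavelets: transform the image, keep only the largest coefficients, and invert. The sketch assumes numpy and pywt; the wavelet choice, level, and threshold are illustrative.

```python
# Illustrative sketch: sparse-approximating an image in the wavelet domain by
# hard-thresholding all but the largest coefficients.
import numpy as np
import pywt

yy, xx = np.mgrid[0:64, 0:64]
image = np.sin(xx / 8.0) + np.cos(yy / 11.0)      # smooth stand-in for a natural image

coeffs = pywt.wavedec2(image, wavelet="db4", level=3)
arr, slices = pywt.coeffs_to_array(coeffs)        # flatten coefficients to one array

# Keep only the 10% largest-magnitude coefficients (hard thresholding).
thresh = np.quantile(np.abs(arr), 0.90)
arr_sparse = pywt.threshold(arr, thresh, mode="hard")

coeffs_sparse = pywt.array_to_coeffs(arr_sparse, slices, output_format="wavedec2")
approx = pywt.waverec2(coeffs_sparse, wavelet="db4")

err = np.linalg.norm(approx - image) / np.linalg.norm(image)
print("relative error of 10% sparse approximation:", err)
```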

On the Use of Compressive Sensing for Image Enhancement

Compressed Sensing (CS), a rapidly growing research field, promises to effectively recover a sparse signal at a rate below the Nyquist rate. This revolutionary technology relies strongly on the sparsity of the signal and the incoherence between the sensing basis and the representation basis. Exact recovery of a sparse signal occurs when the signal of interest is sensed randomly and the number of measurements scales with the sparsity level and a log factor of the signal dimension. In this paper, a compressed sensing method is proposed to reduce noise and reconstruct the image signal. Noise reduction and image reconstruction are formulated in the theoretical framework of compressed sensing using the Basis Pursuit (BP) and Compressive Sampling Matching Pursuit (CoSaMP) algorithms, with a random measurement matrix used to acquire the data. In this research we evaluate the performance of the proposed image enhancement methods using the peak signal-to-noise ratio (PSNR) quality measure.
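For concreteness, here is a simplified sketch of the CoSaMP iteration mentioned in the abstract: form a signal proxy, merge the 2s largest entries into the current support, solve least squares on that support, then prune back to s entries. Only numpy is required; sizes are illustrative.

```python
# Illustrative sketch of a simplified CoSaMP iteration for recovering an
# s-sparse vector x from measurements y = A x.
import numpy as np

def cosamp(A, y, s, num_iters=30):
    """Recover an s-sparse x from y = A x (simplified CoSaMP)."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(num_iters):
        residual = y - A @ x
        proxy = A.T @ residual                       # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * s:]   # 2s largest proxy entries
        support = np.union1d(omega, np.flatnonzero(x))
        z, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        keep = np.argsort(np.abs(z))[-s:]            # prune to the s largest
        x[support[keep]] = z[keep]
    return x

rng = np.random.default_rng(7)
n, m, s = 256, 80, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
x_hat = cosamp(A, A @ x_true, s)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```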

A Study on Compressive Sensing and Reconstruction Approach

Journal of Emerging Technologies and Innovative Research, 2015

This paper reviews the conventional approach of reconstructing signals or images from measured data following the well-known Shannon sampling theorem. This principle underlies the majority of current technology, such as analog-to-digital conversion, medical imaging, and audio and video electronics. The primary objective of this paper is to establish the need for compressive sensing in the fields of signal and image processing. Compressive sensing (CS) is a novel kind of sampling theory, which predicts that sparse signals and images can be reconstructed from what was previously thought to be incomplete information. CS has two distinct major approaches to sparse recovery, each with different benefits and shortcomings. The first, l1-minimization methods such as Basis Pursuit, uses a linear optimization problem to recover the signal. This method provides strong guarantees and stability, but relies on Linear Programming, whose methods do not yet have strong polynomia...
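The Linear Programming connection noted above is direct: min ||x||_1 subject to Ax = b becomes an LP by splitting x = u - v with u, v >= 0. The sketch below, assuming numpy and scipy, solves this LP with SciPy's HiGHS interior-point solver; sizes are illustrative.

```python
# Illustrative sketch: Basis Pursuit recast as a Linear Program and solved
# with SciPy's HiGHS interior-point method.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(8)
n, m, s = 128, 50, 5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
b = A @ x_true

# LP variables: z = [u; v], objective sum(u) + sum(v), constraint A(u - v) = b.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs-ipm")
x_hat = res.x[:n] - res.x[n:]

print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```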

Effective Image Reconstruction Using Various Compressed Sensing Techniques

IEEE Conference, 2024

Prevailing information acquisition relies heavily on the principles articulated by Shannon and Nyquist. These theories assert that, to faithfully reconstruct a signal without distortion, the sampling frequency must exceed twice the signal's maximum frequency to prevent aliasing. Following acquisition, signals undergo compression to eliminate inherent redundancies before transmission over the channel. Despite the effectiveness of this approach, a significant drawback arises from the substantial processing overhead involved in sampling and compression. This limitation renders the scheme unsuitable for contemporary applications, given the constraints of current computational capabilities. Compressive Sensing (CS) introduces a novel framework, grounded in signal decomposition and approximation theory. Serving as an alternative to the Nyquist criterion, CS offers advantages such as reduced sensing time and sampling rate. In this approach, signals are sampled below the Nyquist rate through linear projection onto a random basis, while still permitting exact reconstruction of the original signal. CS thereby reduces power consumption and computational complexity in handling digital data. The extraction of information is facilitated by a sensing matrix. In the context of image restoration, the CS framework provides an innovative approach to recovering high-quality images from their compressed measurements. The success of image restoration in compressive sensing relies on the development of robust algorithms that efficiently exploit sparsity while addressing the inherent trade-off between accuracy and computational complexity. We survey state-of-the-art reconstruction algorithms, including iterative methods, convex optimization, and deep learning approaches, showcasing their strengths and limitations in restoring images from compressed measurements.
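The acquisition model the survey describes, sampling below the Nyquist rate by linear projection onto a random basis, reduces to a single matrix-vector product. The sketch below, assuming only numpy, shows the measurement step y = Phi x with illustrative sizes.

```python
# Illustrative sketch of the CS acquisition model: an image is sampled below
# the Nyquist rate by linear projection onto a random basis, y = Phi x.
import numpy as np

rng = np.random.default_rng(9)
n = 64                                      # image side length
x = rng.random((n, n)).ravel()              # stand-in image, flattened to a vector

m = n * n // 8                              # 8x fewer measurements than pixels
Phi = rng.standard_normal((m, n * n)) / np.sqrt(m)   # random sensing matrix

y = Phi @ x                                 # compressed measurements
print("pixels:", x.size, "-> measurements:", y.size)
```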