Robust Sparse Signal Recovery Based on Weighted Median Operator

Compressive sensing signal reconstruction by weighted median regression estimates

2010 IEEE International Conference on Acoustics, Speech and Signal Processing, 2010

In this paper, we address the compressive sensing signal reconstruction problem by solving an ℓ0-regularized Least Absolute Deviation (LAD) regression problem. A coordinate descent algorithm is developed to solve this ℓ0-LAD optimization problem, leading to a two-stage operation for signal estimation and basis selection. In the first stage, an estimate of the sparse signal is found by a weighted median operator acting on a shifted-and-scaled version of the measurement samples, with weights taken from the entries of the projection matrix. The resulting estimate is then passed to the second stage, which identifies whether the corresponding entry is relevant or not. This stage is realized by a hard-threshold operator with an adaptable thresholding parameter that is suitably tuned as the algorithm progresses.
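
The core of the first stage is the weighted median: restricted to one coordinate, the LAD objective is minimized by the weighted median of the shifted-and-scaled residuals, with the column magnitudes as weights. The Python sketch below is an illustrative rendering of this two-stage update under assumed function names (`weighted_median`, `l0_lad_coordinate_descent`) and a simplified thresholding rule; it is not the authors' exact algorithm.

```python
import numpy as np

def weighted_median(values, weights):
    """Weighted median: the beta minimizing sum_i weights[i] * |beta - values[i]|."""
    order = np.argsort(values)
    v, w = values[order], weights[order]
    cdf = np.cumsum(w)
    # first index where the cumulative weight reaches half of the total weight
    return v[np.searchsorted(cdf, 0.5 * cdf[-1])]

def l0_lad_coordinate_descent(A, y, lam=0.1, n_iter=50):
    """Toy coordinate descent for min_x ||y - A x||_1 + lam * ||x||_0 (illustrative)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for j in range(A.shape[1]):
            a_j = A[:, j]
            nz = a_j != 0
            if not np.any(nz):
                continue                        # skip all-zero columns
            # residual with the j-th contribution removed (the "shift"),
            # then scaled by the column entries before taking the weighted median
            r = y - A @ x + a_j * x[j]
            beta = weighted_median(r[nz] / a_j[nz], np.abs(a_j[nz]))
            # second stage: hard threshold -- keep the coordinate only if the
            # LAD cost drop outweighs the lam penalty for a nonzero entry
            keep = np.sum(np.abs(r - a_j * beta)) + lam < np.sum(np.abs(r))
            x[j] = beta if keep else 0.0
    return x
```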

Sparse and Robust Signal Reconstruction

Many problems in signal processing and statistical inference reduce to finding a sparse solution to an underdetermined linear system. The reference approach to finding sparse signal representations on overcomplete dictionaries leads to convex unconstrained optimization problems with a quadratic ℓ2 term for the fit to the observed signal and an ℓ1 norm on the coefficient vector. This work focuses on the development and experimental analysis of an algorithm for the solution of ℓq-ℓp optimization problems, where p ∈ ]0, 1] and q ∈ [1, 2], of which ℓ2-ℓ1 is an instance. The developed algorithm belongs to the majorization-minimization class, where the solution is obtained by minimizing a progression of majorizers of the original function. Each iteration corresponds to the solution of an ℓ2-ℓ1 problem, solved by the projected gradient algorithm. When tested on synthetic data and image reconstruction problems, the results show good performance of the implemented algorithm in both compressed sensing and signal restoration scenarios.
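
As a rough illustration of this majorization-minimization structure, the Python sketch below majorizes the ℓq data term by a weighted quadratic and the ℓp penalty by a weighted ℓ1 term, then runs a few ISTA steps on each resulting ℓ2-ℓ1 subproblem. The function names and the weight formulas follow standard MM/IRLS bounds and are my own choices, not necessarily the paper's exact scheme (which uses a projected gradient inner solver).

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft threshold, the proximal operator of a (weighted) l1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def mm_lq_lp(A, y, lam, q=1.5, p=0.5, outer=25, inner=50, eps=1e-6):
    """MM sketch for min_x ||y - A x||_q^q + lam * ||x||_p^p, q in [1,2], p in (0,1].
    Each outer step builds an l2-l1 majorizer at the current iterate and
    approximately minimizes it with a few ISTA iterations (illustrative only)."""
    x = np.zeros(A.shape[1])
    for _ in range(outer):
        r = y - A @ x
        # quadratic majorizer weights for |r_i|^q (constant when q == 2)
        wq = (q / 2.0) * np.maximum(np.abs(r), eps) ** (q - 2.0)
        # weighted-l1 majorizer weights for lam * |x_j|^p (constant when p == 1)
        wp = lam * p * np.maximum(np.abs(x), eps) ** (p - 1.0)
        W = np.sqrt(wq)                        # fold the quadratic weights into A and y
        Aw, yw = W[:, None] * A, W * y
        L = 2.0 * np.linalg.norm(Aw, 2) ** 2   # Lipschitz constant of grad ||Aw x - yw||^2
        for _ in range(inner):                 # ISTA on the weighted l2-l1 subproblem
            grad = 2.0 * Aw.T @ (Aw @ x - yw)
            x = soft_threshold(x - grad / L, wp / L)
    return x
```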

Sparse and robust signal reconstruction algorithm

Many problems in signal processing and statistical inference reduce to finding a sparse solution to an underdetermined linear system. The reference approach to finding sparse signal representations on overcomplete dictionaries leads to convex unconstrained optimization problems with a quadratic ℓ2 term for the fit to the observed signal and an ℓ1 norm on the coefficient vector. This work focuses on the development and experimental analysis of algorithms for the solution of ℓq-ℓp optimization problems, where p, q ∈ ]0, 2], of which ℓ2-ℓ1 is an instance. The ℓq norm, with q < 2, in the data term gives statistical robustness to the approximation criterion. The developed algorithms belong to the majorization-minimization class, where the solution is obtained by minimizing a progression of majorizers of the original function. Each iteration corresponds to the solution of an ℓ2-ℓ1 problem. These subproblems are reformulated as quadratic programming problems and solved by the projected gradient algorithm. When tested on synthetic data and image reconstruction problems, the implemented algorithms show good performance in both compressed sensing and signal restoration scenarios.
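
The quadratic-programming reformulation referred to above is, in the common GPSR-style splitting, x = u − v with u, v ≥ 0, which turns the ℓ2-ℓ1 subproblem into a bound-constrained QP that a projected gradient method can handle. The sketch below assumes that particular splitting and a fixed step size; the paper's QP formulation and step rules may differ.

```python
import numpy as np

def l2_l1_projected_gradient(A, y, tau, n_iter=300):
    """Projected gradient on the QP reformulation of
         min_x 0.5 * ||y - A x||_2^2 + tau * ||x||_1
    obtained through the split x = u - v with u, v >= 0 (GPSR-style sketch)."""
    grad0 = A.T @ y
    u = np.maximum(grad0, 0.0)                       # simple nonnegative initialization
    v = np.maximum(-grad0, 0.0)
    step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)   # 1 / Lipschitz constant of the joint gradient
    for _ in range(n_iter):
        r = A @ (u - v) - y
        grad_u = A.T @ r + tau                       # gradient of the QP objective w.r.t. u
        grad_v = -A.T @ r + tau                      # gradient of the QP objective w.r.t. v
        # gradient step followed by projection onto the nonnegative orthant
        u = np.maximum(u - step * grad_u, 0.0)
        v = np.maximum(v - step * grad_v, 0.0)
    return u - v
```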

Greedy Signal Space Recovery Algorithm with Overcomplete Dictionaries in Compressive Sensing

ArXiv, 2019

Compressive Sensing (CS) is a new paradigm for the efficient acquisition of signals that have sparse representations in a certain domain. Traditionally, CS has provided numerous methods for signal recovery over an orthonormal basis. However, modern applications have sparked the emergence of related methods for signals that are not sparse in an orthonormal basis but in some arbitrary, perhaps highly overcomplete, dictionary, particularly due to their potential to generate different kinds of sparse representations of signals. To this end, we apply a signal space greedy method, which relies on the ability to optimally project a signal onto a small number of dictionary atoms, to address signal recovery in this setting. We describe a generalized variant of the iterative recovery algorithm called Signal Space Subspace Pursuit (SSSP) for this more challenging setting. Here, using the Dictionary-Restricted Isometry Property (D-RIP) rather than the classical RIP, we derive a lower bound on the number of meas...
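
For orientation, the Python sketch below outlines a subspace-pursuit-style iteration in the dictionary setting. Its selection steps simply take the k largest correlations, which only approximates the optimal k-atom projection that SSSP actually requires for an overcomplete dictionary D, so this is a simplified stand-in rather than the algorithm analyzed in the paper.

```python
import numpy as np

def sssp_sketch(Phi, D, y, k, n_iter=10):
    """Simplified signal-space subspace pursuit: recover x = D a with k-sparse a
    from measurements y = Phi x. The k-atom projections are approximated by
    correlation picks, which is exact only when D is orthonormal."""
    A = Phi @ D                                    # effective sensing matrix on coefficients
    support = np.argsort(-np.abs(A.T @ y))[:k]
    for _ in range(n_iter):
        a_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ a_s
        # merge the current support with the k atoms best matching the residual
        candidates = np.union1d(support, np.argsort(-np.abs(A.T @ residual))[:k])
        a_c, *_ = np.linalg.lstsq(A[:, candidates], y, rcond=None)
        support = candidates[np.argsort(-np.abs(a_c))[:k]]   # prune back to size k
    a_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    a = np.zeros(D.shape[1])
    a[support] = a_s
    return D @ a                                   # recovered signal estimate
```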

Stable recovery of sparse overcomplete representations in the presence of noise

IEEE Transactions on Information Theory, 2006

Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. We prove the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system.

Recovering Compressively Sampled Signals Using Partial Support Information

IEEE Transactions on Information Theory, 2012

In this paper we study recovery conditions of weighted ℓ1 minimization for signal reconstruction from compressed sensing measurements when partial support information is available. We show that if at least 50% of the (partial) support information is accurate, then weighted ℓ1 minimization is stable and robust under weaker conditions than the analogous conditions for standard ℓ1 minimization. Moreover, weighted ℓ1 minimization provides better bounds on the reconstruction error in terms of the measurement noise and the compressibility of the signal to be recovered. We illustrate our results with extensive numerical experiments on synthetic data and real audio and video signals.
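
Partial support information typically enters the weighted ℓ1 program through per-coordinate weights that penalize the estimated support less. The Python sketch below shows that formulation with an unconstrained (Lagrangian) data-fit term solved by plain ISTA; the weight value omega, the regularization tau, and the solver are illustrative choices rather than the paper's experimental setup.

```python
import numpy as np

def weighted_l1_recovery(A, y, support_estimate, omega=0.5, tau=0.05, n_iter=500):
    """Weighted l1 sketch with partial support information:
         min_x 0.5 * ||y - A x||_2^2 + tau * sum_j w_j * |x_j|,
    with w_j = omega < 1 on the estimated support and w_j = 1 elsewhere,
    so coordinates believed to be active are penalized less."""
    n = A.shape[1]
    w = np.ones(n)
    w[support_estimate] = omega
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-fit gradient
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        thresh = tau * w / L                 # per-coordinate soft threshold
        x = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
    return x
```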

Compressive Sensing Using Symmetric Alpha-Stable Distributions For Robust Sparse Signal Reconstruction

IEEE Transactions on Signal Processing

Traditional compressive sensing (CS) primarily assumes light-tailed models for the underlying signal and/or noise statistics. Nevertheless, this assumption is not met in the case of highly impulsive environments, where non-Gaussian infinite-variance processes arise for the signal and/or noise components. This drives the traditional sparse reconstruction methods to failure, since they are incapable of suppressing the effects of heavy-tailed sampling noise. The family of symmetric alpha-stable (SαS) distributions, as a powerful tool for modeling heavy-tailed behaviors, is adopted in this paper to design a robust algorithm for sparse signal reconstruction from linear random measurements corrupted by infinite-variance additive noise. Specifically, a novel greedy reconstruction method is developed, which achieves increased robustness to impulsive sampling noise by solving a minimum dispersion (MD) optimization problem based on fractional lower-order moments. The MD criterion emerges naturally in the case of additive sampling noise modeled by SαS distributions, as an effective measure of the spread of reconstruction errors around zero, due to the lack of second-order moments. The experimental evaluation demonstrates the improved reconstruction performance of the proposed algorithm when compared against state-of-the-art CS techniques for a broad range of impulsive environments.
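
To make the MD criterion concrete: with SαS noise the residual has no finite variance, so the spread of the reconstruction error is measured through a fractional lower-order moment, the mean of |r_i|^p with p below the characteristic exponent α. The sketch below shows that criterion and a plain gradient update of the coefficients on a fixed support, which is the kind of coefficient step a greedy MD-based method might use; the paper's support-selection logic and actual update rules are not reproduced, and the step size is an arbitrary illustrative value.

```python
import numpy as np

def flom_dispersion(r, p=1.0):
    """Fractional lower-order moment of the residual, used as a minimum-dispersion
    (MD) criterion when second moments do not exist: (1/m) * sum_i |r_i|^p, p < alpha."""
    return np.mean(np.abs(r) ** p)

def md_coefficient_update(A_s, y, p=1.0, n_iter=200, step=1e-2, eps=1e-8):
    """Gradient descent on the MD criterion sum_i |y_i - (A_s x)_i|^p over a fixed
    support A_s (illustrative; gradient smoothed near zero for p <= 1)."""
    x = np.zeros(A_s.shape[1])
    for _ in range(n_iter):
        r = y - A_s @ x
        # gradient of sum_i |r_i|^p with respect to x
        g = -A_s.T @ (p * np.sign(r) * np.maximum(np.abs(r), eps) ** (p - 1.0))
        x -= step * g
    return x
```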

Sparse Signal Representation, Sampling, and Recovery in Compressive Sensing Frameworks

IEEE Access

Compressive sensing allows the reconstruction of original signals from a much smaller number of samples than the Nyquist sampling rate requires. The effectiveness of compressive sensing has motivated researchers to deploy it in a variety of application areas. The use of an efficient sampling matrix together with high-performance recovery algorithms improves the performance of the compressive sensing framework significantly. This paper presents the underlying concepts of compressive sensing as well as previous work done in targeted domains across the various application areas. To develop prospects within the available functional blocks of compressive sensing frameworks, a diverse range of application areas is investigated. The three fundamental elements of a compressive sensing framework (signal sparsity, subsampling, and reconstruction) are thoroughly reviewed in this work, with attention to the key research gaps previously identified by the research community. Similarly, the basic mathematical formulation is used to outline some primary performance evaluation metrics for 1D and 2D compressive sensing. INDEX TERMS Compressed sensing, compressive sampling, reconstruction algorithms, sensing matrix.

Sparse signal, image recovery in compressive sensing technique through l1 norm minimization

2012

The classical Shannon-Nyquist theorem tells us that the sampling rate required to reconstruct a signal must be at least twice the highest frequency present in the signal of interest. In fact, this principle underlies virtually all signal processing applications. Unfortunately, in most practical cases we end up with far too many samples. A newer sampling method, called Compressive Sensing (CS) or Compressive Sampling, allows certain signals and images to be reconstructed from far fewer samples or measurements than the classical theorem requires. CS theory relies primarily on the sparsity principle and exploits the fact that many natural signals or images are sparse, in the sense that they have concise representations when expressed in the proper basis. Since CS theory relies on sparsity, we focus on reconstructing a sparse signal or a sparse-approximated image from its corresponding few measurements. In this document we focus on the ℓ1-norm minimization problem (a convex optimization problem) and its importance in recovering a sparse signal or sparse-approximated image in CS. To sparse-approximate the image, we transform the image from the standard pixel domain to the wavelet domain, because of its concise representation. The algorithms we use to solve the ℓ1-norm minimization problem are the primal-dual interior point method and the barrier method. We provide examples in Matlab to illustrate the differences between the barrier method and the primal-dual interior point method in solving an ℓ1-norm minimization problem, i.e., recovering a sparse signal or image from very few measurements. When recovering images, we use a block-wise approach, treating each block as a vector.
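
As a point of reference for the LP view behind both the barrier and primal-dual interior point methods, the sketch below recasts basis pursuit as a standard linear program and hands it to SciPy's generic LP solver. The document's own experiments are in Matlab; this Python version is only meant to show the reformulation, not reproduce those solvers.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit_lp(A, b):
    """Basis pursuit  min ||x||_1  s.t.  A x = b,  written as the standard LP
         min 1^T t   s.t.  -t <= x <= t,  A x = b,
    over the stacked variable [x; t], and solved with SciPy's LP routine."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize the sum of t
    # inequality constraints:  x - t <= 0  and  -x - t <= 0
    A_ub = np.block([[np.eye(n), -np.eye(n)],
                     [-np.eye(n), -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])          # equality constraint A x = b
    bounds = [(None, None)] * n + [(0, None)] * n    # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=bounds)
    return res.x[:n]
```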