Support recovery in compressed sensing: An estimation theoretic approach
Performance of Jointly Sparse Support Recovery in Compressed Sensing
ita.ucsd.edu
The problem of jointly sparse support recovery is to determine the common support of jointly sparse signal vectors from multiple measurement vectors (MMV) related to the signals by a linear transformation. The fundamental limit of performance has been studied in terms of a so-called algebraic bound, relating the maximum recoverable sparsity level to the spark of the sensing matrix and the rank of the signal matrix. However, while the algebraic bound provides the necessary and sufficient condition for the success of joint sparse recovery, it is restricted to the noiseless case. We derive a sufficient condition for jointly sparse support recovery for the noisy case. We show that essentially the same deterministic condition as in the noiseless case suffices for perfect support recovery at finite signal-to-noise ratio (SNR). Furthermore, we perform an average case analysis of the recovery problem when the matrix of jointly sparse signal vectors has random left singular vectors, representing the case of signal vectors in general position. In this case, we provide a relaxed deterministic condition on the sensing matrix for support recovery with high probability at finite SNR. Finally, we quantify the improvements for an i.i.d. Gaussian sensing matrix.
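As a concrete illustration of the MMV setting described above, the common support can be estimated greedily with simultaneous orthogonal matching pursuit (SOMP), a standard baseline rather than the analysis technique of the paper itself. The matrix sizes, seed, and signal values below are illustrative assumptions.

```python
import numpy as np

def somp(A, Y, k):
    """Simultaneous OMP: greedily estimate the common support of the
    jointly k-sparse columns of X from the MMV observations Y = A @ X."""
    support = []
    R = Y.copy()
    for _ in range(k):
        # pick the column most correlated with all residual vectors at once
        scores = np.linalg.norm(A.T @ R, axis=1)
        if support:
            scores[support] = -np.inf          # do not reselect
        support.append(int(np.argmax(scores)))
        # project Y onto the chosen columns and update the residual
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, Y, rcond=None)
        R = Y - As @ coef
    return sorted(support)

# toy example (all values illustrative): 3-sparse common support, 4 snapshots
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
X = np.zeros((40, 4))
X[[3, 11, 27], :] = [[1.0, 2.0, -1.0, 1.0],
                     [2.0, -1.0, 1.0, -2.0],
                     [-1.0, 1.0, 2.0, 1.0]]
Y = A @ X
est = somp(A, Y, 3)
print(est)
```

Because all snapshots share one support, the per-column correlations are pooled into a single score per index, which is what makes the joint problem easier than k separate single-vector problems.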
Non-Convex Compressed Sensing Using Partial Support Information
In this paper we address the recovery conditions of weighted ℓp minimization for signal reconstruction from compressed sensing measurements when partial support information is available. We show that weighted ℓp minimization with 0 < p < 1 is stable and robust under weaker sufficient conditions compared to weighted ℓ1 minimization. Moreover, the sufficient recovery conditions of weighted ℓp are weaker than those of regular ℓp minimization if at least 50% of the support estimate is accurate. We also review some algorithms which exist to solve the non-convex ℓp problem and illustrate our results with numerical experiments.
Compressed Sensing with Non-Gaussian Noise and Partial Support Information
IEEE Signal Processing Letters, 2015
We study the problem of recovering sparse and compressible signals using weighted ℓp minimization from noisy compressed sensing measurements when part of the support is known a priori. To better model different types of non-Gaussian (bounded) noise, the minimization program is subject to a data-fidelity constraint expressed as a norm of the residual error. We show theoretically that the reconstruction error of this optimization is bounded (stable) if the sensing matrix satisfies an extended restricted isometry property. Numerical results show that the proposed method, which extends the range of p compared with previous works, outperforms other noise-aware basis pursuit programs. For p < 1, since the optimization is not convex, we use a variant of an iterative reweighted algorithm for computing a local minimum.
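The iterative reweighting idea mentioned in this abstract can be sketched generically: the non-convex ℓp objective is approximated by a sequence of weighted ℓ1 problems whose weights come from the previous iterate. This sketch uses a noiseless equality constraint rather than the paper's noise-aware fidelity constraint, and the problem sizes, p, and eps are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_lp(A, b, p=0.5, iters=5, eps=1e-3):
    """Approximate min ||x||_p^p s.t. A x = b by iteratively reweighted l1:
    each pass solves min sum_i w_i |x_i| s.t. A x = b as a linear program
    (split x = u - v with u, v >= 0), using w_i = (|x_i| + eps)**(p - 1)."""
    m, n = A.shape
    w = np.ones(n)                           # first pass = plain basis pursuit
    for _ in range(iters):
        c = np.concatenate([w, w])
        res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
                      bounds=(0, None), method="highs")
        x = res.x[:n] - res.x[n:]
        w = (np.abs(x) + eps) ** (p - 1)     # small entries -> large weights
    return x

# illustrative noiseless example: 3-sparse signal, 15 measurements in R^30
rng = np.random.default_rng(1)
A = rng.standard_normal((15, 30))
x_true = np.zeros(30)
x_true[[2, 7, 19]] = [1.5, -2.0, 1.0]
x_hat = reweighted_lp(A, A @ x_true)
print(np.round(x_hat[[2, 7, 19]], 3))
```

The reweighting step is what mimics the ℓp penalty: entries driven to zero receive weights of order eps^(p-1), which strongly discourages them from re-entering the solution.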
Recovering Compressively Sampled Signals Using Partial Support Information
IEEE Transactions on Information Theory, 2000
In this paper we study recovery conditions of weighted ℓ1 minimization for signal reconstruction from compressed sensing measurements when partial support information is available. We show that if at least 50% of the (partial) support information is accurate, then weighted ℓ1 minimization is stable and robust under weaker conditions than the analogous conditions for standard ℓ1 minimization. Moreover, weighted ℓ1 minimization provides better bounds on the reconstruction error in terms of the measurement noise and the compressibility of the signal to be recovered. We illustrate our results with extensive numerical experiments on synthetic data and real audio and video signals.
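A minimal version of the weighted ℓ1 program studied above can be written as a linear program: a weight ω < 1 is placed on the support estimate and weight 1 elsewhere. The solver choice and problem sizes below are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_bp(A, b, support_estimate, omega=0.5):
    """Weighted basis pursuit: min sum_i w_i |x_i| s.t. A x = b, with
    weight omega on the estimated support and 1 elsewhere, solved as an
    LP via the standard split x = u - v with u, v >= 0."""
    m, n = A.shape
    w = np.ones(n)
    w[list(support_estimate)] = omega
    c = np.concatenate([w, w])               # objective on [u; v]
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

# illustrative example: the support estimate is 2/3 accurate (> 50%)
rng = np.random.default_rng(3)
A = rng.standard_normal((15, 30))
x_true = np.zeros(30)
x_true[[4, 12, 25]] = [2.0, -1.0, 1.5]
x_hat = weighted_bp(A, A @ x_true, support_estimate=[4, 12, 9], omega=0.3)
print(np.round(x_hat[[4, 12, 25]], 2))
```

Setting ω = 1 recovers standard basis pursuit, and ω = 0 trusts the support estimate completely; intermediate values trade off the two, which is the regime the stability results above address.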
Compressed Sensing via Iterative Support Detection
Computing Research Repository, 2009
We present a novel sparse signal reconstruction method, "ISD", aiming to achieve fast reconstruction and a reduced requirement on the number of measurements compared to the classical ℓ1 minimization approach. ISD addresses failed reconstructions of ℓ1 minimization due to insufficient measurements. It estimates a support set I from a current reconstruction and obtains a new reconstruction by solving the minimization problem min{ Σ_{i∉I} |x_i| : Ax = b }, and it iterates these two steps for a small number of times. ISD differs from the orthogonal matching pursuit (OMP) method, as well as its variants, because (i) the index set I in ISD is not necessarily nested or increasing and (ii) the minimization problem above updates all the components of x at the same time. We generalize the Null Space Property to the Truncated Null Space Property and present our analysis of ISD based on the latter.
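The two alternating steps described above can be sketched directly: solve the truncated basis pursuit, then re-detect the support by thresholding the current solution. The threshold rule here is a simple fraction-of-maximum heuristic standing in for the paper's more refined rule, and all sizes and seeds are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def truncated_bp(A, b, I):
    """min sum_{i not in I} |x_i| s.t. A x = b: zero weight on the detected
    set I, solved as an LP via the split x = u - v with u, v >= 0."""
    m, n = A.shape
    w = np.ones(n)
    if I:
        w[list(I)] = 0.0
    c = np.concatenate([w, w])
    res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b,
                  bounds=(0, None), method="highs")
    return res.x[:n] - res.x[n:]

def isd(A, b, iters=4, beta=0.1):
    """Iterative Support Detection: alternate truncated basis pursuit with
    support re-detection (beta * max|x| threshold, an illustrative rule)."""
    I = []                                   # first pass = plain basis pursuit
    for _ in range(iters):
        x = truncated_bp(A, b, I)
        I = list(np.flatnonzero(np.abs(x) > beta * np.abs(x).max()))
    return x

# illustrative example: 4-sparse signal, 18 measurements in R^40
rng = np.random.default_rng(4)
A = rng.standard_normal((18, 40))
x_true = np.zeros(40)
x_true[[1, 8, 22, 30]] = [2.0, -1.0, 1.5, -2.5]
x_hat = isd(A, A @ x_true)
print(np.flatnonzero(np.abs(x_hat) > 1e-6))
```

Note how I is recomputed from scratch each pass, matching point (i) of the abstract: unlike OMP, the detected set can shrink as well as grow between iterations.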
Performance Analysis of Compressed Sensing Given Insufficient Random Measurements
Etri Journal, 2013
Most of the literature on compressed sensing has not paid enough attention to scenarios in which the number of acquired measurements is insufficient to satisfy minimal exact reconstruction requirements. In practice, encountering such scenarios is highly likely, either intentionally or unintentionally, that is, due to high sensing cost or to the lack of knowledge of signal properties. We analyze signal reconstruction performance in this setting. The main result is an expression of the reconstruction error as a function of the number of acquired measurements.
Local Recovery Bounds for Prior Support Constrained Compressed Sensing
Mathematical Notes, 2022
Prior support constrained compressed sensing has of late become popular due to its potential for applications. The existing results on recovery guarantees provide global recovery bounds in the sense that they deal with the full support. However, in some applications one might be interested in recovery guarantees limited to the given prior support; such bounds may be termed local recovery bounds. The present work proposes local recovery guarantees and analyzes the conditions on the associated parameters that make the recovery error small.
A Survey of Compressed Sensing
Compressed sensing was introduced some ten years ago as an effective way of acquiring signals which possess a sparse or nearly sparse representation in a suitable basis or dictionary. Due to its solid mathematical background, it quickly attracted the attention of mathematicians from several different areas, so that the most important aspects of the theory are nowadays very well understood. In recent years, its applications started to spread out through applied mathematics, signal processing, and electrical engineering. The aim of this chapter is to provide an introduction to the basic concepts of compressed sensing. In the first part of this chapter, we present the basic mathematical concepts of compressed sensing, including the Null Space Property, Restricted Isometry Property, their connection to basis pursuit and sparse recovery, and construction of matrices with small restricted isometry constants. This presentation is easily accessible, largely self-contained, and includes p...
Introduction to compressed sensing
In recent years, compressed sensing (CS) has attracted considerable attention in areas of applied mathematics, computer science, and electrical engineering by suggesting that it may be possible to surpass the traditional limits of sampling theory. CS builds upon the fundamental fact that we can represent many signals using only a few non-zero coefficients in a suitable basis or dictionary. Nonlinear optimization can then enable recovery of such signals from very few measurements. In this chapter, we provide an up-to-date review of the basic theory underlying CS. After a brief historical overview, we begin with a discussion of sparsity and other low-dimensional signal models. We then treat the central question of how to accurately recover a high-dimensional signal from a small set of measurements and provide performance guarantees for a variety of sparse recovery algorithms. We conclude with a discussion of some extensions of the sparse recovery framework. In subsequent chapters of the book, we will see how the fundamentals presented in this chapter are extended in many exciting directions, including new models for describing structure in both analog and discrete-time signals, new sensing design techniques, more advanced recovery results, and emerging applications.
On the Doubly Sparse Compressed Sensing Problem
A new variant of the Compressed Sensing problem is investigated in which the number of measurements corrupted by errors is upper bounded by some value l, but no further restrictions are placed on the errors. We prove that in this case it is enough to make 2(t + l) measurements, where t is the sparsity of the original data. Moreover, for this case a rather simple recovery algorithm is proposed. An analog of the Singleton bound from coding theory is derived, which proves the optimality of the corresponding measurement matrices.
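A short uniqueness argument sketches why 2(t + l) is the natural measurement count. This is a generic spark-style argument under a genericity assumption on the measurement matrix, not necessarily the paper's own proof:

```latex
% Suppose two consistent explanations of the same measurement vector y:
Ax_1 + e_1 \;=\; y \;=\; Ax_2 + e_2,
\qquad \|x_i\|_0 \le t,\quad \|e_i\|_0 \le l.
% Subtracting gives
A\,(x_1 - x_2) \;=\; e_2 - e_1,
% where z := x_1 - x_2 is 2t-sparse and e_2 - e_1 is 2l-sparse.  With
% m = 2(t + l) rows, at least m - 2l = 2t rows of A annihilate z.  If
% every 2t x 2t submatrix of A is invertible (a generic condition), this
% forces z = 0, so the t-sparse data vector is uniquely determined even
% when up to l measurements are arbitrarily corrupted.
```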