New Bounds for RIC in Compressed Sensing
Related papers
Deterministic Bounds for Restricted Isometry of Compressed Sensing Matrices
Computing Research Repository, 2011
Compressed Sensing (CS) is an emerging field that enables reconstruction of a sparse signal $x \in \mathbb{R}^n$ that has only $k \ll n$ non-zero coefficients from a small number $m \ll n$ of linear projections. The projections are obtained by multiplying $x$ by a matrix $\Phi \in \mathbb{R}^{m \times n}$ --- called a CS matrix --- where $k < m \ll n$. In this work, we ask the following question: given the triplet $\{k, m, n\}$ that defines the CS problem size, what are the deterministic limits on the performance of the best CS matrix in $\mathbb{R}^{m \times n}$? We select Restricted Isometry as the performance metric. We derive two deterministic converse bounds and one deterministic achievable bound on the Restricted Isometry for matrices in $\mathbb{R}^{m \times n}$ in terms of $n$, $m$, and $k$. The first converse bound (structural bound) is derived by exploiting the intricate relationships between the singular values of sub-matrices and the complete matrix. The second converse bound (packing bound) and the achievable bound (covering bound) are derived by recognizing the equivalence of CS matrices to codes on Grassmannian spaces. Simulations reveal that random Gaussian $\Phi$ provide far from optimal performance. The derivation of the three bounds offers several new geometric insights that relate optimal CS matrices to equi-angular tight frames, the Welch bound, codes on Grassmannian spaces, and the Generalized Pythagorean Theorem (GPT).
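For toy problem sizes, the restricted isometry constant $\delta_k$ that these bounds constrain can be computed exactly by enumerating every $k$-column submatrix and checking how far its squared singular values stray from 1. A minimal numpy sketch (the function name is ours, not the paper's):

```python
import itertools
import numpy as np

def restricted_isometry_constant(Phi, k):
    """Brute-force delta_k: the largest deviation of the squared singular
    values of any m x k column submatrix of Phi from 1. Exponential in n,
    so only usable for toy sizes."""
    n = Phi.shape[1]
    delta = 0.0
    for cols in itertools.combinations(range(n), k):
        s = np.linalg.svd(Phi[:, cols], compute_uv=False)
        delta = max(delta, abs(s[0] ** 2 - 1), abs(s[-1] ** 2 - 1))
    return delta

# Any k columns of an orthogonal matrix are orthonormal, so delta_k = 0.
print(restricted_isometry_constant(np.eye(4), 2))  # ~0.0 up to SVD round-off
```

Because the search is over all $\binom{n}{k}$ supports, this is only a ground-truth check for simulations like the ones the paper reports, not a practical algorithm.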
On the perturbation of measurement matrix in non-convex compressed sensing
Signal Processing, 2014
We study $\ell_p$ ($0 < p < 1$) minimization under both additive and multiplicative noise. Theorems are presented for completely perturbed $\ell_p$ ($0 < p < 1$) minimization; they reveal that under suitable conditions the stability of $\ell_p$ minimization with certain values of $0 < p < 1$ is limited by the noise level in the observation. The restricted isometry property condition and the worst-case reconstruction error bound are given in terms of the restricted isometry constant and the relative perturbations. Simulation results are presented and compared to state-of-the-art methods.
A Survey of Compressed Sensing
Compressed sensing was introduced some ten years ago as an effective way of acquiring signals which possess a sparse or nearly sparse representation in a suitable basis or dictionary. Due to its solid mathematical background, it quickly attracted the attention of mathematicians from several different areas, so that the most important aspects of the theory are nowadays very well understood. In recent years, its applications started to spread out through applied mathematics, signal processing, and electrical engineering. The aim of this chapter is to provide an introduction to the basic concepts of compressed sensing. In the first part of this chapter, we present the basic mathematical concepts of compressed sensing, including the Null Space Property, the Restricted Isometry Property, their connection to basis pursuit and sparse recovery, and the construction of matrices with small restricted isometry constants. This presentation is easily accessible, largely self-contained, and includes p...
Restricted isometry properties and nonconvex compressive sensing
2008
In previous work, numerical experiments showed that $\ell_p$ minimization with $0 < p < 1$ recovers sparse signals from fewer linear measurements than does $\ell_1$ minimization. It was also shown that a weaker restricted isometry property is sufficient to guarantee perfect recovery in the $\ell_p$ case. In this work, we generalize this result to an $\ell_p$ variant of the restricted isometry property, and then determine how many random Gaussian measurements are sufficient for the condition to hold with high probability. The resulting sufficient condition is met by fewer measurements for smaller $p$.
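A standard way to attack this nonconvex $\ell_p$ problem in practice is iteratively reweighted least squares (IRLS), which replaces the $\ell_p$ objective with a sequence of weighted $\ell_2$ problems that have closed-form solutions. The sketch below is a generic illustration with parameter choices of our own, not the specific algorithm analyzed in the paper:

```python
import numpy as np

def irls_lp(Phi, y, p=0.5, iters=50, eps=1.0):
    """Sketch of IRLS for min ||x||_p^p subject to Phi x = y (0 < p < 1).
    Each step solves a weighted least-squares problem in closed form;
    eps smooths the weights and is annealed toward zero."""
    x = np.linalg.lstsq(Phi, y, rcond=None)[0]  # minimum-norm start
    for _ in range(iters):
        d = (x ** 2 + eps) ** (1 - p / 2)       # inverse weights: big |x_i| -> less penalty
        # Closed form: x = D Phi^T (Phi D Phi^T)^{-1} y with D = diag(d)
        x = d * (Phi.T @ np.linalg.solve((Phi * d) @ Phi.T, y))
        eps = max(eps / 10, 1e-12)              # anneal the smoothing
    return x

rng = np.random.default_rng(0)
Phi = rng.standard_normal((10, 20))
x0 = np.zeros(20); x0[[3, 11]] = [1.5, -2.0]    # 2-sparse ground truth
x_hat = irls_lp(Phi, Phi @ x0)
```

Every iterate satisfies the measurement constraint by construction; the annealed smoothing is what pushes the weighted $\ell_2$ solutions toward the sparse $\ell_p$ minimizer.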
Compressed Sensing Using Binary Matrices of Nearly Optimal Dimensions
IEEE Transactions on Signal Processing
In this paper, we study the problem of compressed sensing using binary measurement matrices and $\ell_1$-norm minimization (basis pursuit) as the recovery algorithm. We derive new upper and lower bounds on the number of measurements to achieve robust sparse recovery with binary matrices. We establish sufficient conditions for a column-regular binary matrix to satisfy the robust null space property (RNSP) and show that the associated sufficient conditions for robust sparse recovery obtained using the RNSP are better by a factor of $3\sqrt{3}/2 \approx 2.6$ compared to the sufficient conditions obtained using the restricted isometry property (RIP). Next we derive universal lower bounds on the number of measurements that any binary matrix needs to have in order to satisfy the weaker sufficient condition based on the RNSP and show that bipartite graphs of girth six are optimal. Then we display two classes of binary matrices, namely parity check matrices of array codes and Euler squares, which have girth six and are nearly optimal in the sense of almost satisfying the lower bound. In principle, randomly generated Gaussian measurement matrices are "order-optimal." So we compare the phase transition behavior of the basis pursuit formulation using binary array codes and Gaussian matrices and show that (i) there is essentially no difference between the phase transition boundaries in the two cases and (ii) the CPU time of basis pursuit with binary matrices is hundreds of times faster than with Gaussian matrices and the storage requirements are lower. Therefore it is suggested that binary matrices are a viable alternative to Gaussian matrices for compressed sensing using basis pursuit.
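One of the two nearly optimal families named above, parity-check matrices of array codes, can be written down directly: for prime $q$, stack $r \times q$ blocks $P^{ij \bmod q}$, where $P$ is the $q \times q$ cyclic-shift permutation. A numpy sketch of this standard construction (our code, not the authors'):

```python
import numpy as np

def array_code_matrix(q, r):
    """Parity-check matrix of an array code: an (r*q) x (q*q) binary matrix
    whose (i, j) block is P^(i*j mod q), with P the q x q cyclic shift.
    Column-regular with column weight r; girth six for prime q."""
    P = np.roll(np.eye(q, dtype=int), 1, axis=1)  # cyclic-shift permutation
    return np.block([[np.linalg.matrix_power(P, (i * j) % q) for j in range(q)]
                     for i in range(r)])

H = array_code_matrix(5, 3)
print(H.shape)           # (15, 25)
print(H.sum(axis=0)[0])  # 3: every column has weight r
```

Column regularity falls out immediately: each block column stacks $r$ permutation matrices, so every column of $H$ contains exactly $r$ ones.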
New Restricted Isometry results for noisy low-rank recovery
2010 IEEE International Symposium on Information Theory, 2010
The problem of recovering a low-rank matrix consistent with noisy linear measurements is a fundamental problem with applications in machine learning, statistics, and control. Reweighted trace minimization, which extends and improves upon the popular nuclear norm heuristic, has been used as an iterative heuristic for this problem. In this paper, we present theoretical guarantees for the reweighted trace heuristic. We quantify its improvement over nuclear norm minimization by proving tighter bounds on the recovery error for low-rank matrices with noisy measurements. Our analysis is based on the Restricted Isometry Property (RIP) and extends some recent results from Compressed Sensing. As a second contribution, we improve the existing RIP recovery results for the nuclear norm heuristic, and show that recovery happens under a weaker assumption on the RIP constants.
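The nuclear norm heuristic that the reweighted scheme improves upon is typically implemented via singular value thresholding, the proximal operator of the nuclear norm. A generic matrix-completion sketch (our illustration, with sizes and threshold chosen arbitrarily; this is not the paper's reweighted trace algorithm):

```python
import numpy as np

def svt(Y, tau):
    """Singular-value thresholding: the prox of tau * (nuclear norm) at Y."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Toy completion: observe 60% of the entries of a rank-3 matrix.
rng = np.random.default_rng(1)
L = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank-3 target
mask = rng.random(L.shape) < 0.6
X = np.zeros_like(L)
for _ in range(300):
    # Reimpose observed entries, let the shrinkage fill in the rest.
    X = svt(np.where(mask, L, X), tau=0.2)
```

The reweighted trace heuristic analyzed in the paper iterates a related but data-adaptive surrogate in place of the single fixed nuclear-norm penalty shown here.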
Minimization of $\ell_{1-2}$ for Compressed Sensing
SIAM Journal on Scientific Computing, 2015
We study minimization of the difference of the $\ell_1$ and $\ell_2$ norms as a nonconvex and Lipschitz continuous metric for solving constrained and unconstrained compressed sensing problems. We establish exact (stable) sparse recovery results under a restricted isometry property (RIP) condition for the constrained problem, and a full-rank theorem of the sensing matrix restricted to the support of the sparse solution. We present an iterative method for $\ell_{1-2}$ minimization based on the difference of convex functions algorithm and prove that it converges to a stationary point satisfying the first-order optimality condition. We propose a sparsity-oriented simulated annealing procedure with non-Gaussian random perturbation and prove the almost sure convergence of the combined algorithm (DCASA) to a global minimum. Computational examples on success rates of sparse solution recovery show that if the sensing matrix is ill-conditioned (non-RIP-satisfying), then our method is better than existing nonconvex compressed sensing solvers in the literature. Likewise, in the magnetic resonance imaging (MRI) phantom image recovery problem, $\ell_{1-2}$ succeeds with eight projections. Irrespective of the conditioning of the sensing matrix, $\ell_{1-2}$ is better than $\ell_1$ in both the sparse signal and the MRI phantom image recovery problems.
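What makes $\ell_1 - \ell_2$ a sharp sparsity surrogate is that $\|x\|_2 \le \|x\|_1$ with equality iff $x$ has at most one non-zero entry, so the metric vanishes exactly on 1-sparse vectors while growing on dense ones. A quick numerical check (our illustration):

```python
import numpy as np

def l1_minus_l2(x):
    """The nonconvex metric ||x||_1 - ||x||_2; zero iff x is at most 1-sparse."""
    return np.abs(x).sum() - np.linalg.norm(x)

print(l1_minus_l2(np.array([0.0, 3.0, 0.0])))  # 0.0 on a 1-sparse vector
print(l1_minus_l2(np.ones(3)))                 # 3 - sqrt(3): penalizes dense x
```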
Deterministic Measurement Matrix in Compressed Sensing
Compressive sensing is a sampling method which provides a new approach to efficient signal compression and recovery by exploiting the fact that a sparse signal can be suitably reconstructed from very few measurements. One of the main concerns in compressive sensing is the construction of the sensing matrices. While random sensing matrices have been widely studied, only a few deterministic sensing matrices have been considered. Having originated as a technique for finding sparse solutions to underdetermined linear systems, compressed sensing (CS) has now found widespread applications in both the signal processing and communication communities, ranging from data compression and data acquisition to inverse problems and channel coding. An essential idea of CS is to exploit the fact that most natural phenomena are sparse or compressible in some appropriate basis. By acquiring a relatively small number of samples in the "sparse" domain, the signal of interest can be reconstructed with high accuracy through well-developed optimization procedures. Deterministic matrices are highly desirable on account of their structure, which allows fast implementation with reduced storage requirements. In this paper, a survey of deterministic sensing matrices for compressive sensing is presented. Some recent results on the construction of deterministic sensing matrices are discussed.
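A classical example of the deterministic constructions such surveys cover is the chirp sensing matrix, whose $m^2$ columns are the discrete chirps $t \mapsto e^{2\pi i (r t^2 + \ell t)/m}/\sqrt{m}$ for $r, \ell \in \{0, \dots, m-1\}$. A numpy sketch (illustrative; this specific code is ours, not from the paper):

```python
import numpy as np

def chirp_matrix(m):
    """m x m^2 deterministic sensing matrix whose column (r, l) is the
    unit-norm discrete chirp t -> exp(2j*pi*(r*t**2 + l*t)/m) / sqrt(m)."""
    t = np.arange(m)
    cols = [np.exp(2j * np.pi * (r * t ** 2 + l * t) / m) / np.sqrt(m)
            for r in range(m) for l in range(m)]
    return np.stack(cols, axis=1)

A = chirp_matrix(7)
print(A.shape)  # (7, 49)
```

The structure is exactly what the survey highlights as desirable: the matrix needs no stored entries beyond $m$, and its columns can be generated or applied on the fly.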
On Compressed Sensing Matrices Breaking the Square-Root Bottleneck
2020 IEEE Information Theory Workshop (ITW)
Compressed sensing is a celebrated framework in signal processing and has many practical applications. One of the challenging problems in compressed sensing is to construct deterministic matrices having the restricted isometry property (RIP). So far, there are only a few publications providing deterministic RIP matrices beating the square-root bottleneck on the sparsity level. In this paper, we investigate the RIP of certain matrices defined by higher power residues modulo primes. Moreover, we prove that the widely believed generalized Paley graph conjecture implies that these matrices have RIP breaking the square-root bottleneck.