Compressed sensing under matrix uncertainty: Optimum thresholds and robust approximate message passing

Message-passing algorithms for compressed sensing

Proceedings of the National Academy of Sciences, 2009

Compressed sensing aims to undersample certain high-dimensional signals yet accurately reconstruct them by exploiting signal characteristics. Accurate reconstruction is possible when the object to be recovered is sufficiently sparse in a known basis. Currently, the best known sparsity–undersampling tradeoff is achieved when reconstructing by convex optimization, which is expensive in important large-scale applications. Fast iterative thresholding algorithms have been intensively studied as alternatives to convex ...
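To make the iterative-thresholding idea concrete, here is a minimal NumPy sketch of an AMP-style iteration with soft thresholding; the threshold schedule (a scaled residual standard deviation), the iteration count, and the toy problem sizes are illustrative assumptions rather than the tuned choices analyzed in the paper.

import numpy as np

def soft_threshold(v, tau):
    """Entry-wise soft thresholding: eta(v; tau) = sign(v) * max(|v| - tau, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def amp(A, y, n_iter=50, tau_scale=1.0):
    """Approximate message passing with soft thresholding (illustrative sketch).

    A: (n, N) measurement matrix, y: (n,) measurements.
    The threshold tau is set from the empirical residual standard deviation,
    which is one common heuristic, not the only possibility.
    """
    n, N = A.shape
    delta = n / N                      # undersampling ratio
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        pseudo = x + A.T @ z           # pseudo-data passed to the denoiser
        tau = tau_scale * np.sqrt(np.mean(z ** 2))
        x_new = soft_threshold(pseudo, tau)
        # Onsager correction term distinguishes AMP from plain iterative thresholding
        onsager = (z / delta) * np.mean(np.abs(x_new) > 0)
        z = y - A @ x_new + onsager
        x = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N, n, k = 400, 200, 20
    A = rng.standard_normal((n, N)) / np.sqrt(n)
    x0 = np.zeros(N); x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
    y = A @ x0
    x_hat = amp(A, y)
    print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))

The Onsager correction in the residual update is what distinguishes AMP from plain iterative soft thresholding and is responsible for the improved sparsity–undersampling tradeoff discussed in the abstract.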

Compressed Sensing with uncertainty - the Bayesian estimation perspective

2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2015

The Compressed Sensing (CS) framework outperforms the sampling rate limits given by Shannon's theory. This gap is possible because the signal of interest is assumed to admit a linear decomposition over a few vectors of a given sparsifying basis (Fourier, wavelet, ...). Unfortunately, in realistic operating conditions, uncertain knowledge of the CS model is inevitable and must be evaluated. Typically, this uncertainty drastically degrades the estimation performance of sparsity-based estimators in the low noise variance regime. In this work, the Off-Grid (OG) and Basis Mismatch (BM) problems are compared from a Bayesian estimation perspective. At first glance, we are tempted to think that these two acronyms stand for the same problem. However, by comparing their Bayesian Cramér-Rao Bounds (BCRB) for the estimation of an L-sparse amplitude vector based on N measurements, it is shown that the BM problem has a lower BCRB than the OG one in a general context. To go further into the analysis, we provide, for i.i.d. Gaussian amplitudes and in the low noise variance regime, a closed-form expression of a normalized 2-norm criterion on the difference of the two BCRB matrices. Based on the analysis of this closed-form expression, we draw two conclusions. First, the two uncertainty problems cannot be confused for a non-zero mismatch error variance and finite N and L. Second, the two problems turn out to be similar for any mismatch error variance in the large-system regime, i.e., for N, L → ∞ with constant aspect ratio N/L → ρ.
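For reference, the baseline form of the Bayesian Cramér-Rao bound for a linear model y = Hx + n with noise variance σ² and an i.i.d. Gaussian prior x ~ N(0, σ_x² I_L) is sketched below in LaTeX; the paper's BCRBs additionally account for the off-grid / basis-mismatch error, so this expression is only the mismatch-free starting point.

% Baseline BCRB for y = Hx + n, n ~ N(0, sigma^2 I), prior x ~ N(0, sigma_x^2 I_L);
% the mismatch-aware bounds compared in the paper add further terms to this.
\mathbb{E}\!\left[(\hat{\mathbf{x}}-\mathbf{x})(\hat{\mathbf{x}}-\mathbf{x})^{\mathsf T}\right]
  \;\succeq\; \mathrm{BCRB}
  \;=\; \left( \frac{1}{\sigma^{2}}\,\mathbb{E}\!\left[\mathbf{H}^{\mathsf T}\mathbf{H}\right]
          \;+\; \frac{1}{\sigma_x^{2}}\,\mathbf{I}_{L} \right)^{-1}.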

Probabilistic reconstruction in compressed sensing: algorithms, phase diagrams, and threshold achieving matrices

Journal of Statistical Mechanics: Theory and Experiment, 2012

Compressed sensing is a signal processing method that acquires data directly in a compressed form. This allows one to make fewer measurements than were previously considered necessary to record a signal, enabling faster or more precise measurement protocols in a wide range of applications. Using an interdisciplinary approach, we have recently proposed in [1] a strategy that allows compressed sensing to be performed at acquisition rates approaching the theoretically optimal limits. In this paper, we give a more thorough presentation of our approach and introduce many new results. We present the probabilistic approach to reconstruction and discuss its optimality and robustness. We detail the derivation of the message passing algorithm for reconstruction and expectation-maximization learning of signal-model parameters. We further develop the asymptotic analysis of the corresponding phase diagrams with and without measurement noise, for different signal distributions, and discuss the best possible reconstruction performance regardless of the algorithm. We also present new efficient seeding matrices, test them on synthetic data, and analyze their performance asymptotically.
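The seeding matrices mentioned at the end are block matrices with a narrow band of coupled blocks: unit-variance blocks on the diagonal, weakly coupled blocks near the diagonal, and null blocks elsewhere. A minimal NumPy construction of such a block structure is sketched below; the block sizes, number of blocks L, coupling strength J, and coupling width w are illustrative assumptions, not the optimized designs of the paper.

import numpy as np

def seeded_matrix(n_block, N_block, L=8, J=0.2, w=1, seed=0):
    """Build a seeded (spatially coupled) Gaussian measurement matrix.

    The matrix is made of L x L blocks: unit-variance blocks on the diagonal,
    variance-J blocks on the first `w` sub/super-diagonals, zeros elsewhere.
    Block sizes and the coupling pattern here are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    F = np.zeros((L * n_block, L * N_block))
    for r in range(L):
        for c in range(L):
            if r == c:
                var = 1.0
            elif abs(r - c) <= w:
                var = J
            else:
                continue  # no coupling: null block
            block = rng.standard_normal((n_block, N_block)) * np.sqrt(var / (L * N_block))
            F[r * n_block:(r + 1) * n_block, c * N_block:(c + 1) * N_block] = block
    return F

F = seeded_matrix(n_block=25, N_block=50)
print(F.shape)  # (200, 400)

In practice the coupling is usually asymmetric and the first block is sampled at a higher rate to seed the reconstruction wave; the sketch only conveys the banded block layout.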

Bayesian Signal Recovery Under Measurement Matrix Uncertainty: Performance Analysis

IEEE Access

Compressive sensing (CS) has gained a lot of attention in recent years due to its benefits in saving measurement time and storage cost in many applications, including biomedical imaging, wireless communications, image reconstruction, and remote sensing. The CS framework enables signal recovery from a small number of linear measurements with acceptable fidelity by taking advantage of signal sparsity in some, potentially unknown, domain. The core idea of the different variants of CS methods is to incorporate prior knowledge about the input signal (e.g., a prior distribution, or the sparsity of the signal) into the recovery algorithm in order to restrict the search space and enhance signal recovery performance. However, the accuracy of signal reconstruction can be significantly compromised if the designed and implemented measurement matrices do not fully match. Oftentimes, the measurement matrix mismatch is treated as an additional noise term in the recovery algorithm, ignoring the fact that this mismatch is a learnable quantity that consists of random but constant or slowly varying terms over the lifetime of the measurement system. In this work, we consider this problem for the simple case of a Gaussian prior with a sparsity-driven diagonal covariance matrix, and find strict bounds on the deviation of the reconstructed signal from the optimal case of a fully known measurement matrix, based on the properties of the mismatch matrix. The obtained bounds are general and can hence be used to assess the performance of learning algorithms designed to learn the measurement matrix uncertainty and eliminate its effect from the signal recovery. We provide numerical results to illustrate this concept in real-world applications.
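As a concrete illustration of the setting (Gaussian prior with a sparsity-driven diagonal covariance, recovery under a mismatched measurement matrix), the sketch below compares linear MMSE reconstruction using the implemented matrix A with reconstruction using a perturbed "designed" matrix A + E; the mismatch level, covariance model, and problem sizes are assumptions for the example and do not reproduce the paper's bounds.

import numpy as np

def lmmse(Phi, y, C_x, sigma2):
    """Linear MMSE estimate for y = Phi x + n with x ~ N(0, C_x), n ~ N(0, sigma2 I)."""
    G = C_x @ Phi.T @ np.linalg.inv(Phi @ C_x @ Phi.T + sigma2 * np.eye(Phi.shape[0]))
    return G @ y

rng = np.random.default_rng(1)
N, n, sigma2 = 200, 80, 1e-3
# Sparsity-driven diagonal covariance: a few large-variance entries, the rest near zero.
diag = np.where(rng.random(N) < 0.1, 1.0, 1e-4)
C_x = np.diag(diag)
x = rng.standard_normal(N) * np.sqrt(diag)

A = rng.standard_normal((n, N)) / np.sqrt(n)          # implemented measurement matrix
E = 0.05 * rng.standard_normal((n, N)) / np.sqrt(n)   # mismatch (unknown to the recoverer)
y = A @ x + np.sqrt(sigma2) * rng.standard_normal(n)

x_known = lmmse(A, y, C_x, sigma2)         # oracle: knows the implemented matrix
x_mismatch = lmmse(A + E, y, C_x, sigma2)  # uses the designed (mismatched) matrix

for name, xh in [("known A", x_known), ("mismatched A", x_mismatch)]:
    print(name, np.linalg.norm(xh - x) / np.linalg.norm(x))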

How to Design Message Passing Algorithms for Compressed Sensing

2010

Finding fast first-order methods for recovering signals from compressed measurements is a problem of interest in applications ranging from biology to imaging. Recently, the authors proposed a class of low-complexity algorithms called approximate message passing, or AMP. The new algorithms were shown, through extensive simulations and mathematical analysis, to exhibit a very fast convergence rate and a high phase transition.
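The AMP iteration referred to here, together with the state evolution recursion that predicts its per-iteration mean-squared error and hence the phase transition, is commonly written as follows (standard notation with undersampling ratio δ = n/N, denoiser η_t, and ⟨·⟩ the empirical average over coordinates; this is the generic textbook form, not a transcription of the paper).

% AMP iteration with denoiser eta_t:
x^{t+1} = \eta_t\!\left(A^{\mathsf T} z^{t} + x^{t}\right),
\qquad
z^{t} = y - A x^{t}
      + \frac{1}{\delta}\, z^{t-1}
        \left\langle \eta_{t-1}'\!\left(A^{\mathsf T} z^{t-1} + x^{t-1}\right) \right\rangle,

% State evolution tracking the effective noise variance tau_t^2:
\tau_{t+1}^{2} = \sigma^{2}
  + \frac{1}{\delta}\, \mathbb{E}\!\left[ \left( \eta_t(X_{0} + \tau_t Z) - X_{0} \right)^{2} \right],
\qquad Z \sim \mathcal{N}(0,1).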

Sparse Model Uncertainties in Compressed Sensing with Application to Convolutions and Sporadic Communication

Applied and Numerical Harmonic Analysis, 2015

The success of the compressed sensing paradigm has shown that a substantial reduction in sampling and storage complexity can be achieved in certain linear and non-adaptive estimation problems. It is therefore an advisable strategy for noncoherent information retrieval in, for example, sporadic blind and semi-blind communication and sampling problems. However, the conventional model is not practical here, since the compressible signals have to be estimated from samples taken solely at the output of an uncalibrated system, which is unknown during measurement but often itself compressible. Conventionally, one either has to operate at suboptimal sampling rates, or the recovery performance suffers substantially from the dominance of model mismatch. In this work we discuss this type of estimation problem and focus on bilinear inverse problems. We link this problem to the recovery of low-rank and sparse matrices and establish stable low-dimensional embeddings of the uncalibrated receive signals, thereby also addressing efficient communication-oriented methods such as universal random demodulation. As an example, we investigate in more detail sparse convolutions serving as a basic communication channel model. Using some recent results from additive combinatorics, we show that such signals can be efficiently sampled at low rates by semi-blind methods. Finally, we present a further application of these results in the field of phase retrieval from intensity Fourier measurements.
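To make the bilinear structure concrete, the sketch below writes a circular convolution of a sparse channel h with a sparse signal x both directly (via the FFT) and as a linear function of the rank-one, doubly sparse "lifted" matrix h xᵀ, which is the low-rank-plus-sparsity viewpoint alluded to above; the sizes and sparsities are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
n = 64
h = np.zeros(n); h[rng.choice(n, 3, replace=False)] = rng.standard_normal(3)   # sparse channel
x = np.zeros(n); x[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)   # sparse signal

# Direct circular convolution (the bilinear measurement map).
y = np.real(np.fft.ifft(np.fft.fft(h) * np.fft.fft(x)))

# The same measurements as a *linear* map of the rank-one, doubly sparse matrix h x^T.
X = np.outer(h, x)
y_lifted = np.array([
    sum(X[i, (m - i) % n] for i in range(n))
    for m in range(n)
])

print(np.allclose(y, y_lifted))  # True: convolution is linear in the lifted variable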

Support recovery in compressed sensing: An estimation theoretic approach

2009 IEEE International Symposium on Information Theory, 2009

Compressed sensing (CS) deals with the reconstruction of sparse signals from a small number of linear measurements. One of the main challenges in CS is to find the support of a sparse signal from a set of noisy observations. In the CS literature, several information-theoretic bounds on the scaling law of the required number of measurements for exact support recovery have been derived, where the focus is mainly on random measurement matrices.
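As a minimal illustration of the support recovery task itself (using a simple correlation-and-thresholding estimator rather than the estimators whose scaling laws are analyzed in the paper), the sketch below checks whether the true support is recovered exactly; all sizes and the noise level are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)
N, n, k, sigma = 256, 128, 8, 0.05
A = rng.standard_normal((n, N)) / np.sqrt(n)
support = np.sort(rng.choice(N, k, replace=False))
x = np.zeros(N); x[support] = 1.0 + rng.random(k)   # nonzeros bounded away from zero
y = A @ x + sigma * rng.standard_normal(n)

# Naive estimator: pick the k columns most correlated with the observations.
scores = np.abs(A.T @ y)
support_hat = np.sort(np.argsort(scores)[-k:])

print("exact support recovery:", np.array_equal(support, support_hat))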

A User's Guide to Compressed Sensing for Communications Systems

IEICE Transactions on Communications, 2013

Kazunori Hayashi, Masaaki Nagahara, and Toshiyuki Tanaka. SUMMARY: This survey provides a brief introduction to compressed sensing, several major algorithms to solve it, and its various applications to communications systems. We first review linear simultaneous equations as ill-posed inverse problems, since the idea of compressed sensing is best understood in the context of linear equations. Then, we consider the problem of compressed sensing as an underdetermined linear system with the prior information that the true solution is sparse, and explain sparse signal recovery based on ℓ1 optimization, which plays the central role in compressed sensing, with some intuitive explanations of the optimization problem. Moreover, we introduce some important properties of the sensing matrix in order to establish the guarantee of exact recovery of sparse signals from the underdetermined system. After summarizing several major algorithms for obtaining a sparse solution, focusing on ℓ1 optimization and greedy approaches, we introduce applications of compressed sensing to communications systems, such as wireless channel estimation, wireless sensor networks, network tomography, cognitive radio, array signal processing, multiple access schemes, and networked control. Key words: compressed sensing, sparse signal, compressible signal, ℓ1-norm, underdetermined system. Footnotes: In the basic framework of compressed sensing, the reconstruction of a finite-dimensional vector is commonly considered. A norm must satisfy (i) ||x|| = 0 ⇔ x = 0, (ii) ||αx|| = |α| ||x|| for all α ∈ R, and (iii) ||x + y|| ≤ ||x|| + ||y||.
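Since ℓ1 optimization is central to the survey, the sketch below casts basis pursuit, min ||x||_1 subject to Ax = y, as a linear program using the standard splitting x = u - v with u, v ≥ 0, and solves it with scipy.optimize.linprog; the example data are illustrative.

import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1  s.t.  A x = y  via the LP split x = u - v, u, v >= 0."""
    n, N = A.shape
    c = np.ones(2 * N)                      # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([A, -A])               # A u - A v = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N), method="highs")
    u, v = res.x[:N], res.x[N:]
    return u - v

rng = np.random.default_rng(4)
N, n, k = 120, 60, 6
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N); x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x0

x_hat = basis_pursuit(A, y)
print("relative error:", np.linalg.norm(x_hat - x0) / np.linalg.norm(x0))

Greedy alternatives such as orthogonal matching pursuit, also covered in the survey, trade this global optimization for iterative support selection.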

Bayesian Inference and Compressed Sensing

2017

This chapter presents the use of Bayesian inference in compressive sensing (CS), a signal processing method. Among the recovery methods used in the CS literature, the convex relaxation methods are reformulated within the Bayesian framework, and this approach is applied to different CS applications such as magnetic resonance imaging (MRI), remote sensing, and wireless communication systems, specifically multiple-input multiple-output (MIMO) systems. The robustness of the Bayesian method in incorporating prior information, such as sparsity and structure among the sparse entries, is also shown in this chapter.

Compressed sensing with prior information: Optimal strategies, geometry, and bounds

We address the problem of compressed sensing (CS) with prior information: reconstruct a target CS signal with the aid of a similar signal that is known beforehand, our prior information. We integrate the additional knowledge of the similar signal into CS via ℓ1-ℓ1 and ℓ1-ℓ2 minimization. We then establish bounds on the number of measurements required by these problems to successfully reconstruct the original signal. Our bounds and geometrical interpretations reveal that if the prior information has good enough quality, ℓ1-ℓ1 minimization improves the performance of CS dramatically. In contrast, ℓ1-ℓ2 minimization has a performance very similar to classical CS and brings no significant benefits. All our findings are illustrated with experimental results.
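A direct way to prototype the ℓ1-ℓ1 program described above, min ||x||_1 + β||x − w||_1 subject to Ax = y with w the known similar signal, is via cvxpy, as sketched below; the weight β = 1 and the problem sizes are illustrative assumptions.

import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
N, n, k = 100, 40, 5
A = rng.standard_normal((n, N)) / np.sqrt(n)
x0 = np.zeros(N); x0[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
w = x0 + 0.05 * rng.standard_normal(N)   # prior information: a similar, known signal
y = A @ x0

beta = 1.0
x = cp.Variable(N)
objective = cp.Minimize(cp.norm1(x) + beta * cp.norm1(x - w))
problem = cp.Problem(objective, [A @ x == y])
problem.solve()

print("relative error:", np.linalg.norm(x.value - x0) / np.linalg.norm(x0))

The ℓ1-ℓ2 variant mentioned in the abstract would swap the second ℓ1 term for an ℓ2-based penalty on x − w.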