Recovery of Low Rank and Jointly Sparse Matrices with Two Sampling Matrices
Related papers
Subspace Methods for Joint Sparse Recovery
IEEE Transactions on Information Theory, 2000
We propose a robust and efficient algorithm for the recovery of the jointly sparse support in compressed sensing with multiple measurement vectors (the MMV problem). When the unknown matrix of the jointly sparse signals has full rank, MUSIC is a guaranteed algorithm for this problem, achieving the fundamental algebraic bound on the minimum number of measurements. We focus instead on the unfavorable but practically significant case of rank deficiency or bad conditioning. This situation arises with a limited number of measurements, or with highly correlated signal components. In this case MUSIC fails, and in practice none of the existing MMV methods can consistently approach the algebraic bounds. We propose iMUSIC, which overcomes these limitations by combining the advantages of both existing methods and MUSIC. It is a computationally efficient algorithm with a performance guarantee.
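A minimal NumPy sketch of the plain MUSIC criterion these papers build on (not the proposed iMUSIC; all dimensions are illustrative): in the full-rank case, the signal subspace of the measurements Y = AX coincides with the span of the support columns of A, so the support can be read off from projection residuals.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, L = 20, 60, 5, 12   # measurements, ambient dim, sparsity, snapshots
A = rng.standard_normal((m, n)) / np.sqrt(m)

support = np.sort(rng.choice(n, size=k, replace=False))
X = np.zeros((n, L))
X[support] = rng.standard_normal((k, L))      # full rank k: MUSIC's favorable case

Y = A @ X                                     # MMV observations

# MUSIC: the signal subspace of Y equals span{a_j : j in support}
U, s, _ = np.linalg.svd(Y, full_matrices=False)
Us = U[:, :k]                                 # signal subspace basis

# residual of each column of A outside the signal subspace
resid = np.linalg.norm(A - Us @ (Us.T @ A), axis=0) / np.linalg.norm(A, axis=0)
estimate = np.sort(np.argsort(resid)[:k])     # k columns best aligned with span(Us)

print(np.array_equal(estimate, support))
```

In the rank-deficient regime the abstract describes (rank of X below k), Us no longer spans all support columns and this criterion breaks down, which is precisely the gap iMUSIC targets.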
Subspace-augmented MUSIC for joint sparse recovery with any rank
2010
We propose robust and efficient algorithms for the joint sparse recovery problem in compressed sensing, which simultaneously recover the supports of jointly sparse signals from their multiple measurement vectors obtained through a common sensing matrix.
A simplified approach to recovery conditions for low rank matrices
2011 IEEE International Symposium on Information Theory Proceedings, 2011
Recovering sparse vectors and low-rank matrices from noisy linear measurements has been the focus of much recent research. Various reconstruction algorithms have been studied, including ℓ1 and nuclear norm minimization as well as ℓp minimization with p < 1. These algorithms are known to succeed if certain conditions on the measurement map are satisfied. Proofs of robust recovery for matrices have so far been much more involved than in the vector case. In this paper, we show how several robust classes of recovery conditions can be extended from vectors to matrices in a simple and transparent way, leading to the best known restricted isometry and nullspace conditions for matrix recovery. Our results rely on the ability to "vectorize" matrices through the use of a key singular value inequality.
Optimal Weighted Low-rank Matrix Recovery with Subspace Prior Information
arXiv: Information Theory, 2018
Matrix sensing is the problem of reconstructing a low-rank matrix from a few linear measurements. In many applications such as collaborative filtering, the famous Netflix prize problem, and seismic data interpolation, there exists some prior information about the column and row spaces of the ground-truth low-rank matrix. In this paper, we exploit this prior information by proposing a weighted optimization problem where its objective function promotes both rank and prior subspace information. Using the recent results in conic integral geometry, we obtain the unique optimal weights that minimize the required number of measurements. As simulation results confirm, the proposed convex program with optimal weights requires substantially fewer measurements than the regular nuclear norm minimization.
Low-Rank Matrix Recovery From Errors and Erasures
IEEE Transactions on Information Theory, 2013
This paper considers the recovery of a low-rank matrix from an observed version that simultaneously contains both (a) erasures: most entries are not observed, and (b) errors: values at a constant fraction of (unknown) locations are arbitrarily corrupted. We provide a new unified performance guarantee on when the natural convex relaxation of minimizing rank plus support succeeds in exact recovery. Our result allows for the simultaneous presence of random and deterministic components in both the error and erasure patterns. On the one hand, corollaries obtained by specializing this one single result in different ways recover (up to poly-log factors) all the existing works in matrix completion, and sparse and low-rank matrix recovery. On the other hand, our results also provide the first guarantees for (a) recovery when we observe a vanishing fraction of entries of a corrupted matrix, and (b) deterministic matrix completion.
Low-Rank Matrix Recovery from Row-and-Column Affine Measurements
We propose and study a row-and-column affine measurement scheme for low-rank matrix recovery. Each measurement is a linear combination of elements in one row or one column of a matrix X. This setting arises naturally in applications from different domains. However, current algorithms developed for standard matrix recovery problems do not perform well in our case, hence the need for developing new algorithms and theory for our problem. We propose a simple algorithm for the problem based on Singular Value Decomposition (SVD) and least squares (LS), which we term SVLS. We prove that (a simplified version of) our algorithm can recover X exactly with the minimum possible number of measurements in the noiseless case. In the general noisy case, we prove performance guarantees on the reconstruction accuracy under the Frobenius norm. In simulations, our row-and-column design and SVLS algorithm show improved speed, and comparable and in some cases better accuracy, compared to standard measurement designs and algorithms. Our theoretical and experimental results suggest that the proposed row-and-column affine measurement scheme, together with our recovery algorithm, may provide a powerful framework for affine matrix reconstruction.
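The SVD-plus-least-squares idea can be illustrated with a simplified noiseless sketch (this is not the paper's full SVLS algorithm; the dimensions and generic Gaussian A and B are assumptions): column measurements Y2 = XB reveal the column space of X, after which the row measurements Y1 = AX determine the remaining coefficients by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n1, n2, r = 30, 40, 3          # matrix size and rank
m_row, m_col = 6, 6            # number of row/column measurements (>= r)

X = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))  # ground truth

A = rng.standard_normal((m_row, n1))   # row measurements:    Y1 = A X
B = rng.standard_normal((n2, m_col))   # column measurements: Y2 = X B
Y1, Y2 = A @ X, X @ B

# col(Y2) = col(X): a generic B preserves the rank-r column space
U, _, _ = np.linalg.svd(Y2, full_matrices=False)
U = U[:, :r]                    # orthonormal basis for col(X)

# write X = U Z and solve Y1 = (A U) Z by least squares
Z, *_ = np.linalg.lstsq(A @ U, Y1, rcond=None)
X_hat = U @ Z

print(np.allclose(X_hat, X))    # exact recovery in the noiseless case
```

With noise, the same two steps go through but the subspace estimate and the least-squares fit are only approximate, which is where the paper's Frobenius-norm guarantees apply.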
A Greedy Algorithm for Matrix Recovery with Subspace Prior Information
2019
Matrix recovery is the problem of recovering a low-rank matrix from a few linear measurements. Recently, this problem has gained a lot of attention as it is employed in many applications such as the Netflix prize problem, seismic data interpolation, and collaborative filtering. In these applications, one might have access to additional prior information about the column and row spaces of the matrix. This extra information can potentially enhance the matrix recovery performance. In this paper, we propose an efficient greedy algorithm that exploits prior information in the recovery procedure. The performance of the proposed algorithm is measured in terms of the rank restricted isometry property (R-RIP). Our proposed algorithm with prior subspace information converges under a milder condition on the R-RIP compared with the case where we do not use prior information. Additionally, our algorithm performs much better than nuclear norm minimization in terms of both computational complexity a...
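The abstract does not spell out the algorithm, so as a rough stand-in here is a normalized iterative-hard-thresholding sketch — a standard greedy method whose analysis also runs through the R-RIP — without the paper's subspace prior (all dimensions and the step-size rule are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2, r = 20, 20, 2
m = 300                                   # < n1*n2 = 400 entries; ~4x the r(n1+n2-r) dof

X_true = rng.standard_normal((n1, r)) @ rng.standard_normal((r, n2))
A = rng.standard_normal((m, n1 * n2)) / np.sqrt(m)   # Gaussian measurement operator
y = A @ X_true.ravel()

def project_rank(M, r):
    """Hard-threshold to the best rank-r approximation (truncated SVD)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

X = np.zeros((n1, n2))
for _ in range(500):
    g = A.T @ (A @ X.ravel() - y)                 # gradient of 0.5*||A vec(X) - y||^2
    eta = (g @ g) / np.linalg.norm(A @ g) ** 2    # exact steepest-descent step
    X = project_rank((X.ravel() - eta * g).reshape(n1, n2), r)

print(np.linalg.norm(X - X_true) / np.linalg.norm(X_true))
```

A subspace prior, as in the paper, would bias the projection step toward the known column/row spaces; the plain version above is only the baseline such a method improves on.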
A fast majorize-minimize algorithm for the recovery of sparse and low-rank matrices
IEEE Transactions on Image Processing, 2012
We introduce a novel algorithm to recover sparse and low-rank matrices from noisy and undersampled measurements. We pose the reconstruction as an optimization problem, where we minimize a linear combination of data consistency error, nonconvex spectral penalty, and nonconvex sparsity penalty. We majorize the nondifferentiable spectral and sparsity penalties in the criterion by quadratic expressions to realize an iterative three-step alternating minimization scheme. Since each of these steps can be evaluated either analytically or using fast schemes, we obtain a computationally efficient algorithm. We demonstrate the utility of the algorithm in the context of dynamic magnetic resonance imaging (MRI) reconstruction from sub-Nyquist sampled measurements. The results show a significant improvement in signal-to-noise ratio and image quality compared with classical dynamic imaging algorithms. We expect the proposed scheme to be useful in a range of applications including video restoration and multidimensional MRI.
Recovering Low-Rank and Sparse Components of Matrices from Incomplete and Noisy Observations
SIAM Journal on Optimization, 2011
Many applications arising in a variety of fields can be well illustrated by the task of recovering the low-rank and sparse components of a given matrix. Recently, it was discovered that this NP-hard task can be well accomplished, both theoretically and numerically, by heuristically solving a convex relaxation problem in which the widely acknowledged nuclear norm and l1 norm are used to induce low rank and sparsity. In the literature, it is conventionally assumed that all entries of the matrix to be recovered are exactly known (via observation). To capture even more applications, this paper studies the recovery task in more general settings: only a fraction of the entries of the matrix can be observed, and the observation is corrupted by both impulsive and Gaussian noise. The resulting model falls into the applicable scope of the classical augmented Lagrangian method. Moreover, the separable structure of the new model enables us to solve the involved subproblems more efficiently by splitting the augmented Lagrangian function. Hence, some implementable numerical algorithms are developed in the spirit of the well-known alternating direction method and the parallel splitting augmented Lagrangian method. Preliminary numerical experiments verify that these augmented-Lagrangian-based algorithms are easily implementable and surprisingly efficient for tackling the new recovery model.
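A minimal fixed-penalty sketch of the alternating direction idea, for the fully observed noiseless special case min ||L||_* + lam*||S||_1 s.t. L + S = D (the penalty parameters below are common heuristics, not the paper's exact algorithms, and the problem sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40
L_true = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))   # rank 2
S_true = np.zeros((n, n))
idx = rng.random((n, n)) < 0.05
S_true[idx] = 10 * rng.standard_normal(idx.sum())                    # sparse gross errors
D = L_true + S_true

lam = 1 / np.sqrt(n)                     # standard robust-PCA weight
mu = n * n / (4 * np.abs(D).sum())       # common heuristic penalty parameter

def svt(M, tau):
    """Prox of tau * nuclear norm (singular-value soft-thresholding)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0)) @ Vt

def soft(M, t):
    """Prox of t * l1 norm (entrywise soft-thresholding)."""
    return np.sign(M) * np.maximum(np.abs(M) - t, 0)

L = np.zeros((n, n)); S = np.zeros((n, n)); Z = np.zeros((n, n))     # Z: scaled dual
for _ in range(300):
    L = svt(D - S + Z, 1 / mu)           # nuclear-norm prox step
    S = soft(D - L + Z, lam / mu)        # l1 prox step
    Z = Z + D - L - S                    # dual update on the constraint L + S = D

print(np.linalg.norm(L - L_true) / np.linalg.norm(L_true))
```

The paper's setting additionally masks unobserved entries and adds a Gaussian-noise fidelity term, which introduces the extra subproblems handled by the splitting and parallel variants it develops.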