Completing Low-Rank Matrices with Corrupted Samples from Few Coefficients in General Basis

On Deterministic Sampling Patterns for Robust Low-Rank Matrix Completion

IEEE Signal Processing Letters, 2018

In this letter, we study deterministic sampling patterns for the completion of a low-rank matrix corrupted by sparse noise, a problem also known as robust matrix completion. We extend recent results on deterministic sampling patterns in the noise-free setting, which are based on a geometric analysis on the Grassmannian manifold. We then consider the special case where each column contains a fixed number of noisy entries, for which our probabilistic analysis is particularly efficient. Furthermore, when the rank of the original matrix is not given, we provide verifiable conditions that determine whether the rank of a valid completion is indeed the actual rank of the data corrupted by sparse noise.
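
As a concrete illustration of the data model studied in this letter (the construction below is our own, not the authors'), one can generate a rank-r matrix, corrupt a fixed number of entries per column, and reveal only a subset of entries. The letter asks which deterministic patterns admit completion; the sketch simply uses a random mask as a placeholder, and the function name and noise magnitudes are assumptions:

```python
import numpy as np

def make_corrupted_observation(m, n, r, noisy_per_col, obs_frac, rng):
    """Rank-r matrix + sparse corruption (fixed count per column) + sampling mask.
    Illustrative only; the sampling model and noise scale are assumptions."""
    L = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # low-rank part
    S = np.zeros((m, n))
    for j in range(n):  # exactly `noisy_per_col` corrupted entries per column
        rows = rng.choice(m, size=noisy_per_col, replace=False)
        S[rows, j] = 10.0 * rng.standard_normal(noisy_per_col)
    mask = rng.random((m, n)) < obs_frac   # stand-in sampling pattern
    M_obs = np.where(mask, L + S, 0.0)     # observed, corrupted entries
    return L, S, mask, M_obs

rng = np.random.default_rng(0)
L, S, mask, M_obs = make_corrupted_observation(50, 40, 3, 2, 0.5, rng)
```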

Low-Rank Matrix Recovery From Errors and Erasures

IEEE Transactions on Information Theory, 2013

This paper considers the recovery of a low-rank matrix from an observed version that simultaneously contains both (a) erasures, where most entries are not observed, and (b) errors, where the values at a constant fraction of (unknown) locations are arbitrarily corrupted. We provide a new unified performance guarantee on when the natural convex relaxation of minimizing rank plus support succeeds in exact recovery. Our result allows for the simultaneous presence of random and deterministic components in both the error and erasure patterns. On the one hand, corollaries obtained by specializing this single result in different ways recover (up to poly-log factors) all existing results on matrix completion and on sparse and low-rank matrix recovery. On the other hand, our results also provide the first guarantees for (a) recovery when we observe a vanishing fraction of entries of a corrupted matrix, and (b) deterministic matrix completion.
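
The natural convex relaxation referenced here replaces rank by the nuclear norm and support size by the ℓ1 norm, subject to agreement on the observed entries. A minimal sketch with CVXPY, where the function and variable names and the trade-off weight `lam` are our assumptions:

```python
import cvxpy as cp

def errors_and_erasures(M_obs, mask, lam):
    """min ||L||_* + lam * ||S||_1  s.t.  L + S matches M_obs on observed entries."""
    L = cp.Variable(M_obs.shape)
    S = cp.Variable(M_obs.shape)
    W = mask.astype(float)  # 1 on observed entries, 0 elsewhere
    constraints = [cp.multiply(W, L + S) == cp.multiply(W, M_obs)]
    objective = cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S)))
    cp.Problem(objective, constraints).solve()
    return L.value, S.value
```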

Practical Matrix Completion and Corruption Recovery Using Proximal Alternating Robust Subspace Minimization

International Journal of Computer Vision, 2014

Low-rank matrix completion is a problem of immense practical importance. Recent works on the subject often use the nuclear norm as a convex surrogate of the rank function. Despite its solid theoretical foundation, the convex version of the problem often fails to work satisfactorily in real-life applications. Real data often suffer from very few observations whose support does not meet the randomness requirements, from the ubiquitous presence of noise, and from potentially gross corruptions, sometimes all occurring simultaneously. This paper proposes a Proximal Alternating Robust Subspace Minimization (PARSuMi) method to tackle these three problems. The proximal alternating scheme explicitly exploits the rank constraint on the completed matrix and uses the ℓ0 pseudo-norm directly in the corruption recovery step. We show that the proposed method for the non-convex and non-smooth model converges to a stationary point. Although it is not guaranteed to find the globally optimal solution, in practice we find that our algorithm typically arrives at a good local minimizer when it is supplied with a reasonably good starting point based on convex optimization. Extensive experiments with challenging synthetic and real data demonstrate that our algorithm succeeds in a much larger range of practical problems where convex optimization fails, and that it also outperforms various state-of-the-art algorithms.
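
PARSuMi itself uses proximal regularization and careful subspace steps; purely as an illustration of the two ingredients the abstract names, a hard rank constraint and a direct ℓ0-style corruption step, a bare-bones alternation might look like the following (all names and the fixed corruption budget `k` are our assumptions, not the authors' method):

```python
import numpy as np

def rank_r_project(X, r):
    """Nearest rank-r matrix in Frobenius norm, via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def alternate_rank_l0(M_obs, mask, r, k, iters=100):
    """Toy alternation (an illustration, not PARSuMi): fit a rank-r matrix,
    then explain the k largest observed residuals as corruptions."""
    L, S = np.zeros_like(M_obs), np.zeros_like(M_obs)
    for _ in range(iters):
        # Rank step: keep current L on unobserved entries, then project.
        L = rank_r_project(np.where(mask, M_obs - S, L), r)
        # l0 step: keep only the k largest-magnitude observed residuals.
        R = np.where(mask, M_obs - L, 0.0)
        thresh = np.partition(np.abs(R), -k, axis=None)[-k]
        S = np.where(np.abs(R) >= thresh, R, 0.0)
    return L, S
```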

Low-rank matrix completion with noisy observations: A quantitative comparison

2009 47th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2009

We consider a problem of significant practical importance, namely the reconstruction of a low-rank data matrix from a small subset of its entries. This problem appears in many areas such as collaborative filtering, computer vision, and wireless sensor networks. In this paper, we focus on the matrix completion problem in the case where the observed samples are corrupted by noise. We compare the performance of three state-of-the-art matrix completion algorithms (OptSpace, ADMiRA, and FPCA) on a single simulation platform and present numerical results. We show that in practice these efficient algorithms can accurately reconstruct both real data matrices and randomly generated ones.
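
A comparison of this kind rests on a shared data model and error metric; a minimal harness in that spirit (the relative Frobenius error and the Gaussian noise model below are common conventions, assumed rather than taken from the paper's exact protocol):

```python
import numpy as np

def relative_error(X_hat, X_true):
    """Relative Frobenius-norm reconstruction error, the usual comparison metric."""
    return np.linalg.norm(X_hat - X_true) / np.linalg.norm(X_true)

def noisy_observation(X, mask, sigma, rng):
    """Reveal entries under `mask`, perturbed by i.i.d. Gaussian noise."""
    return np.where(mask, X + sigma * rng.standard_normal(X.shape), 0.0)
```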

The augmented lagrange multiplier method for exact recovery of corrupted low-rank matrices

2010

This paper proposes scalable and fast algorithms for solving the Robust PCA problem, namely recovering a low-rank matrix with an unknown fraction of its entries being arbitrarily corrupted. This problem arises in many applications, such as image processing, web data ranking, and bioinformatic data analysis. It was recently shown that under surprisingly broad conditions, the Robust PCA problem can be exactly solved via convex optimization that minimizes a combination of the nuclear norm and the ℓ1-norm. In this paper, we apply the method of augmented Lagrange multipliers (ALM) to solve this convex program. As the objective function is non-smooth, we show how to extend the classical analysis of ALM to such objective functions, prove the optimality of the proposed algorithms, and characterize their convergence rates. Empirically, the proposed algorithms can be more than five times faster than the previous state-of-the-art algorithms for Robust PCA, such as the accelerated proximal gradient (APG) algorithm. Moreover, the new algorithms achieve higher precision while demanding less storage/memory. We also show that the ALM technique can be used to solve the (related but somewhat simpler) matrix completion problem, again with rather promising results. We further prove a necessary and sufficient condition for the inexact ALM to converge globally. MATLAB code for all the algorithms discussed is available online.
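
The ALM iteration for this convex program alternates two closed-form shrinkage steps with a dual update. Below is a compact NumPy sketch of the widely used inexact-ALM recursion; the parameter heuristics (`lam`, `mu`, and the penalty growth factor) follow common conventions and are not taken from the paper:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft thresholding: prox of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_ialm(M, lam=None, mu=None, iters=200):
    """Inexact ALM for  min ||L||_* + lam * ||S||_1  s.t.  L + S = M."""
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))   # standard RPCA weight
    mu = mu or 1.25 / np.linalg.norm(M, 2)  # common step-size heuristic
    Y = np.zeros_like(M)                    # Lagrange multiplier
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)   # minimize over L
        S = soft(M - L + Y / mu, lam / mu)  # minimize over S
        Y = Y + mu * (M - L - S)            # dual ascent step
        mu *= 1.05                          # slowly increase the penalty
    return L, S
```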

Exact Low-rank Matrix Completion via Convex Optimization

Suppose that one observes an incomplete subset of entries selected uniformly at random from a low-rank matrix. When is it possible to complete the matrix and recover the entries that have not been seen? We show that in very general settings, one can perfectly recover all of the missing entries from a sufficiently large random subset by solving a convex programming problem. This program finds the matrix with the minimum nuclear norm agreeing with the observed entries. The techniques used in this analysis draw upon parallels in the field of compressed sensing, demonstrating that objects other than signals and images can be perfectly reconstructed from very limited information.
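
The program described, finding the minimum-nuclear-norm matrix that agrees with the observed entries, is a few lines in a modeling language; a minimal CVXPY sketch (function and variable names are our own):

```python
import cvxpy as cp

def complete_nuclear_norm(M_obs, mask):
    """Minimize ||X||_* subject to agreement with the observed entries."""
    X = cp.Variable(M_obs.shape)
    W = mask.astype(float)  # 1 on observed entries, 0 elsewhere
    prob = cp.Problem(cp.Minimize(cp.normNuc(X)),
                      [cp.multiply(W, X) == cp.multiply(W, M_obs)])
    prob.solve()
    return X.value
```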

Robust Orthonormal Subspace Learning: Efficient Recovery of Corrupted Low-rank Matrices

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014

Low-rank matrix recovery from a corrupted observation has many applications in computer vision. Conventional methods address this problem by iterating between nuclear norm minimization and sparsity minimization. However, iterative nuclear norm minimization is computationally prohibitive for large-scale data (e.g., video) analysis. In this paper, we propose a Robust Orthonormal Subspace Learning (ROSL) method to achieve efficient low-rank recovery. Our key idea is a novel rank measure on the low-rank matrix that imposes group sparsity on its coefficients under an orthonormal subspace. We present an efficient sparse coding algorithm that minimizes this rank measure and recovers the low-rank matrix with complexity quadratic in the matrix size. We prove that this rank measure is lower bounded by the nuclear norm and has the same global minimum. To further accelerate ROSL to linear complexity, we also describe a faster version (ROSL+) empowered by random sampling. Our extensive experiments demonstrate that both ROSL and ROSL+ are substantially more efficient than state-of-the-art methods at the same level of recovery accuracy.
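
To see the claimed relation between this rank measure and the nuclear norm numerically: with an orthonormal basis D, the measure sums the ℓ2 norms of the rows of the coefficient matrix D.T @ X; it is never smaller than the nuclear norm of X and matches it when D is the left singular basis. A toy check (ours, not the ROSL algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 10))  # rank-3 matrix

nuc = np.linalg.svd(X, compute_uv=False).sum()  # nuclear norm of X

def group_sparsity_measure(D, X):
    """Sum of row l2-norms of the coefficients of X in the orthonormal basis D."""
    return np.linalg.norm(D.T @ X, axis=1).sum()

Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))  # a random orthonormal basis
U = np.linalg.svd(X, full_matrices=True)[0]       # the left singular basis

print(group_sparsity_measure(Q, X) >= nuc)            # True: bounded below by ||X||_*
print(np.isclose(group_sparsity_measure(U, X), nuc))  # True: tight at D = U
```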

Fast algorithms for recovering a corrupted low-rank matrix

2009 3rd IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2009

This paper studies algorithms for recovering a low-rank matrix with a fraction of its entries arbitrarily corrupted. This problem can be viewed as a robust version of classical PCA and arises in a number of application domains, including image processing, web data ranking, and bioinformatic data analysis. It was recently shown that under surprisingly broad conditions, it can be exactly solved via a convex programming surrogate that combines nuclear norm minimization and ℓ1-norm minimization. This paper develops and compares two complementary approaches for solving this convex program: the first is an accelerated proximal gradient algorithm applied directly to the primal, while the second is a gradient algorithm applied to the dual problem. Both are several orders of magnitude faster than the previous state-of-the-art algorithm for this problem, which was based on iterative thresholding. Simulations demonstrate the performance improvement that can be obtained via these two algorithms and clarify their relative merits.
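
For the primal approach, an accelerated proximal gradient step on the penalized objective mu*(||L||_* + lam*||S||_1) + 0.5*||L + S - M||_F^2 combines Nesterov momentum with the two shrinkage operators. A self-contained sketch under those assumptions (it omits the continuation strategy a practical APG solver would use):

```python
import numpy as np

def svt(X, tau):   # singular value thresholding: prox of tau*||.||_*
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):  # entrywise soft thresholding: prox of tau*||.||_1
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_apg(M, mu=1e-3, lam=None, iters=500):
    """APG on mu*(||L||_* + lam*||S||_1) + 0.5*||L + S - M||_F^2 (a sketch)."""
    lam = lam or 1.0 / np.sqrt(max(M.shape))
    L = S = L_prev = S_prev = np.zeros_like(M)
    t = t_prev = 1.0
    step = 0.5  # 1 / Lipschitz constant of the smooth term's gradient (= 2)
    for _ in range(iters):
        beta = (t_prev - 1.0) / t                 # Nesterov momentum weight
        YL = L + beta * (L - L_prev)
        YS = S + beta * (S - S_prev)
        G = YL + YS - M                           # gradient of the smooth term
        L_prev, S_prev = L, S
        L = svt(YL - step * G, step * mu)         # prox step on L
        S = soft(YS - step * G, step * mu * lam)  # prox step on S
        t_prev, t = t, (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    return L, S
```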

A two-phase rank-based algorithm for low-rank matrix completion

Optimization Letters

Matrix completion aims to recover an unknown low-rank matrix from a small subset of its entries. In many applications, the rank of the unknown target matrix is known in advance. In this paper, we first revisit a recently proposed rank-based heuristic for "known-rank" matrix completion and establish a condition under which the generated sequence is quasi-Fejér convergent to the solution set. Then, by including an acceleration mechanism similar to Nesterov's, we obtain a new heuristic. Even though the convergence of this heuristic cannot be guaranteed in general, it turns out to be very useful as a warm-start phase, providing a suitable estimate of the regularization parameter and a good starting point for an accelerated Soft-Impute algorithm. Numerical experiments with both synthetic and real data show that the resulting two-phase rank-based algorithm can recover low-rank matrices, with relatively high precision, faster than other well-established matrix completion algorithms.
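
The Soft-Impute iteration mentioned here has a particularly simple basic form: fill the missing entries with the current estimate, then apply singular value thresholding. A minimal non-accelerated sketch (the fixed `lam` and iteration count are assumptions; the paper's first phase is precisely what supplies a good `lam` and starting point `X0`):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def soft_impute(M_obs, mask, lam, iters=200, X0=None):
    """Basic Soft-Impute: X <- SVT_lam(observed entries of M, current X elsewhere)."""
    X = np.zeros_like(M_obs) if X0 is None else X0
    for _ in range(iters):
        X = svt(np.where(mask, M_obs, X), lam)
    return X
```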