Dictionary Learning for Blind One Bit Compressed Sensing

Adaptive Recovery of Dictionary-sparse Signals using Binary Measurements

2018

One-bit compressive sensing (CS) is an advanced version of sparse recovery in which the sparse signal of interest can be recovered from extremely quantized measurements; namely, only the sign of each measurement is available to us. In many applications, the ground-truth signal is not sparse itself, but can be sparsely represented in a redundant dictionary. A strong line of research has addressed conventional CS in this signal model, including its extension to one-bit measurements. However, one-bit CS suffers from the extremely large number of measurements required to achieve a predefined reconstruction error level. A common alternative to resolve this issue is to exploit adaptive schemes. Adaptive sampling acts on the acquired samples to trace the signal in an efficient way. In this work, we utilize an adaptive sampling strategy to recover dictionary-sparse signals from binary measurements. For this task, a multi-dimensional threshold is proposed to incorporate the previous signal estimates i...
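The sign-only measurement model, and the role of a threshold built from previous estimates, can be sketched in a few lines of numpy. This is a hypothetical illustration; the variable names, dimensions, and the zero initial estimate are assumptions, not the paper's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 256
x = np.zeros(n)
x[rng.choice(n, 4, replace=False)] = rng.standard_normal(4)
x /= np.linalg.norm(x)                 # one-bit CS can only recover direction

A = rng.standard_normal((m, n))
# Plain one-bit CS: only the sign of each measurement survives quantization.
y_plain = np.sign(A @ x)

# Adaptive scheme (hypothetical sketch): compare each measurement against a
# threshold derived from the current estimate x_est, so the acquired signs
# encode the residual direction rather than the raw signal.
x_est = np.zeros(n)                    # running estimate, e.g. from earlier batches
tau = A @ x_est                        # multi-dimensional threshold from the estimate
y_adapt = np.sign(A @ x - tau)
```

With a zero initial estimate the adaptive signs coincide with the plain ones; the gain appears once x_est starts tracking the signal.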

New Guarantees for Blind Compressed Sensing

Blind Compressed Sensing (BCS) is an extension of Compressed Sensing (CS) where the optimal sparsifying dictionary is assumed to be unknown and subject to estimation (in addition to the CS sparse coefficients). Since the emergence of BCS, dictionary learning, a.k.a. sparse coding, has been studied as a matrix factorization problem where its sample complexity, uniqueness and identifiability have been addressed thoroughly. However, in spite of the strong connections between BCS and sparse coding, recent results from the sparse coding problem area have not been exploited within the context of BCS. In particular, prior BCS efforts have focused on learning constrained and complete dictionaries that limit the scope and utility of these efforts. In this paper, we develop new theoretical bounds for perfect recovery for the general unconstrained BCS problem. These unconstrained BCS bounds cover the case of overcomplete dictionaries, and hence, they go well beyond the existing BCS theory. Our perfect recovery results integrate the combinatorial theories of sparse coding with some of the recent results from low-rank matrix recovery. In particular, we propose an efficient CS measurement scheme that results in practical recovery bounds for BCS. Moreover, we discuss the performance of BCS under polynomial-time sparse coding algorithms. (Footnote: that is, every 2k ≤ d columns of A are linearly independent.)
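The dictionary-learning factorization underlying BCS, Y = DX with sparse columns in X, can be illustrated with a crude alternating scheme. This is a minimal numpy sketch, not the paper's algorithm: per-column hard thresholding stands in for a proper sparse-coding solver, and all dimensions are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, N, k = 20, 30, 200, 3            # signal dim, atoms, samples, sparsity

# Ground truth: Y = D_true @ X_true with k-sparse columns in X_true.
D_true = rng.standard_normal((n, p))
D_true /= np.linalg.norm(D_true, axis=0)
X_true = np.zeros((p, N))
for j in range(N):
    X_true[rng.choice(p, k, replace=False), j] = rng.standard_normal(k)
Y = D_true @ X_true

# Alternating minimization: sparse coding step, then dictionary update step.
D = rng.standard_normal((n, p))
D /= np.linalg.norm(D, axis=0)
for _ in range(10):
    # Sparse coding (crude): keep the k largest correlations per column.
    C = D.T @ Y
    X = np.where(np.abs(C) >= np.sort(np.abs(C), axis=0)[-k], C, 0.0)
    # Dictionary update: least-squares fit, then renormalize the atoms.
    D = Y @ np.linalg.pinv(X)
    D /= np.linalg.norm(D, axis=0) + 1e-12
```

In full BCS the data Y itself is only observed through compressive measurements, which is the extra layer the paper's theory addresses.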

Dictionary learning-based distributed compressive video sensing

Picture Coding Symposium (PCS …, 2010

We address the important issue of fully low-cost, low-complexity video compression for use in extremely resource-limited sensors/devices. Conventional motion estimation-based video compression and distributed video coding (DVC) techniques all rely on a high-cost mechanism in which sensing/sampling and compression are performed disjointly, resulting in unnecessary consumption of resources. That is, most of the acquired raw video data are discarded in the (possibly) complex compression stage. In this paper, we propose a dictionary learning-based distributed compressive video sensing (DCVS) framework to "directly" acquire compressed video data. Embedded in the compressive sensing (CS)-based single-pixel camera architecture, DCVS can compressively sense each video frame in a distributed manner. At the DCVS decoder, video reconstruction can be formulated as an ℓ1-minimization problem, solving for the sparse coefficients with respect to some basis functions. We investigate adaptive dictionary/basis learning for each frame based on training samples extracted from previously reconstructed neighboring frames, and argue that a much better basis can be obtained to represent the frame, compared to fixed-basis representations and recent popular "CS-based DVC" approaches that do not rely on dictionary learning.

One-bit compressed sensing: Provable support and vector recovery

In this paper, we study the problem of one-bit compressed sensing (1-bit CS), where the goal is to design a measurement matrix A and a recovery algorithm such that a k-sparse unit vector x* can be efficiently recovered from the sign of its linear measurements, i.e., b = sign(Ax*). This is an important problem for signal acquisition and has several learning applications as well, e.g., multi-label classification. We study this problem in two settings: a) support recovery: recover the support of x*; b) approximate vector recovery: recover a unit vector x̂ such that ‖x̂ − x*‖₂ ≤ ε. For support recovery, we propose two novel and efficient solutions based on two combinatorial structures: union-free families of sets and expanders. In contrast to existing methods for support recovery, our methods are universal, i.e., a single measurement matrix A can recover all the signals. For approximate recovery, we propose the first method to recover a sparse vector using a near-optimal number of measurements. We also empirically validate our algorithms and demonstrate that they recover the true signal using fewer measurements than the existing methods.
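A minimal numpy sketch of the measurement model b = sign(Ax*). The decoder below is a well-known linear baseline (correlate the signs back, hard-threshold, renormalize), not the combinatorial methods proposed in the paper; dimensions are assumed.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k, m = 100, 5, 2000
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x /= np.linalg.norm(x)                  # only the direction of x is recoverable

A = rng.standard_normal((m, n))
b = np.sign(A @ x)                      # the only data the decoder sees

# Linear baseline estimator: signs correlate positively with x in expectation.
z = A.T @ b
support = np.argsort(np.abs(z))[-k:]    # keep the k strongest coordinates
x_hat = np.zeros(n)
x_hat[support] = z[support]
x_hat /= np.linalg.norm(x_hat)
err = np.linalg.norm(x_hat - x)         # shrinks as m grows
```

The error here decays only polynomially in m, which is why the paper's point about near-optimal measurement counts matters.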

A Fast Noniterative Algorithm for Compressive Sensing Using Binary Measurement Matrices

IEEE Transactions on Signal Processing

In this paper we present a new algorithm for compressive sensing that makes use of binary measurement matrices and achieves exact recovery of ultra-sparse vectors, in a single pass and without any iterations. Due to its noniterative nature, our algorithm is hundreds of times faster than ℓ1-norm minimization and methods based on expander graphs, both of which require multiple iterations. Our algorithm can accommodate nearly sparse vectors, in which case it recovers the index set of the largest components, and can also accommodate burst noise in the measurements. Compared to compressed sensing methods that are guaranteed to achieve exact recovery of all sparse vectors, our method requires fewer measurements. However, methods that achieve statistical recovery, that is, recovery of almost all but not all sparse vectors, can require fewer measurements than our method.

Blind Compressed Sensing Over a Structured Union of Subspaces

This paper addresses the problem of simultaneous signal recovery and dictionary learning based on compressive measurements. Multiple signals are analyzed jointly, with multiple sensing matrices, under the assumption that the unknown signals come from a union of a small number of disjoint subspaces. This problem is important, for instance, in image inpainting applications, in which the multiple signals are constituted by (incomplete) image patches taken from the overall image. This work extends standard dictionary learning and block-sparse dictionary optimization by considering compressive measurements (e.g., incomplete data). Previous work on blind compressed sensing is also generalized by using multiple sensing matrices and relaxing some of the restrictions on the learned dictionary. Drawing on results developed in the context of matrix completion, it is proven that both the dictionary and signals can be recovered with high probability from compressed measurements. The solution is...

Dictionary and Image Recovery from Incomplete and Random Measurements

This paper tackles algorithmic and theoretical aspects of dictionary learning from incomplete and random blockwise image measurements, and the performance of the adaptive dictionary for sparse image recovery. This problem is related to blind compressed sensing, in which the sparsifying dictionary or basis is viewed as an unknown variable and subject to estimation during sparse recovery. However, unlike existing guarantees for successful blind compressed sensing, our results do not rely on additional structural constraints on the learned dictionary or the measured signal. In particular, we rely on the spatial diversity of compressive measurements to guarantee that the solution is unique with high probability. Moreover, our distinguishing goal is to measure and reduce the estimation error with respect to the ideal dictionary that is based on the complete image. Using recent results from random matrix theory, we show that applying a slightly modified dictionary learning algorithm over compressive measurements results in accurate estimation of the ideal dictionary for large-scale images. Empirically, we experiment with both space-invariant and space-varying sensing matrices and demonstrate the critical role of spatial diversity in measurements. Simulation results confirm that the presented algorithm outperforms typical non-adaptive sparse recovery based on offline-learned universal dictionaries.

Compressive Sensing Algorithms for Signal Processing Applications: A Survey

International Journal of Communications, Network and System Sciences, 2015

In digital signal processing (DSP), Nyquist-rate sampling completely describes a signal by exploiting its bandlimitedness. Compressed Sensing (CS), also known as compressive sampling, is a DSP technique for efficiently acquiring and reconstructing a signal from a reduced number of measurements, by exploiting its compressibility. The measurements are not point samples but more general linear functions of the signal. CS can capture and represent sparse signals at a rate significantly lower than that ordinarily required by Shannon's sampling theorem. It is interesting to note that most signals in reality are sparse, especially when represented in some domain (such as the wavelet domain) where many coefficients are close to or equal to zero. A signal is called K-sparse if it can be exactly represented by a basis {ψ_i}, i = 1, …, N, and a set of coefficients x_k, where only K coefficients are nonzero. A signal is called approximately K-sparse if it can be represented up to a certain accuracy using K non-zero coefficients. As an example of a K-sparse signal, consider the class of signals that are the sum of K sinusoids chosen from the N harmonics of the observed time interval. Taking the DFT of any such signal would yield only K non-zero values x_k. An example of approximately sparse signals is when the coefficients x_k, sorted by magnitude, decrease following a power law. In this case the sparse approximation constructed by choosing the K largest coefficients is guaranteed to have an approximation error that decreases with the same power law as the coefficients. The main limitation of CS-based systems is that they employ iterative algorithms to recover the signal. These algorithms are slow, and hardware solutions have become crucial for higher performance and speed. This technique enables fewer data samples than traditionally required when capturing a signal with relatively high bandwidth but a low information rate.
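The sum-of-sinusoids example can be checked numerically with a small numpy sketch; note that each real sinusoid occupies two conjugate DFT bins, so a real-valued K-sparse signal of this kind produces 2K non-zero values (frequencies chosen here are arbitrary).

```python
import numpy as np

N, K = 128, 3
t = np.arange(N)
freqs = [5, 17, 40]                       # K harmonics of the observation window
x = sum(np.cos(2 * np.pi * f * t / N) for f in freqs)

X = np.fft.fft(x)
# Each real cosine occupies two conjugate DFT bins (f and N - f),
# so exactly 2*K bins are (numerically) nonzero.
nonzero = int(np.sum(np.abs(X) > 1e-8))
assert nonzero == 2 * K
```

Using complex exponentials instead of cosines would give exactly K non-zero bins, matching the abstract's count.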
As a main feature of CS, efficient algorithms such as ℓ1-minimization can be used for recovery. This paper gives a survey of both theoretical and numerical aspects of the compressive sensing technique and its applications. The theory of CS has many potential applications in signal processing, wireless communication, cognitive radio and medical imaging.
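ℓ1-minimization (basis pursuit) can be posed as a linear program by splitting x into non-negative positive and negative parts; a minimal sketch using scipy (assumed available), with dimensions chosen for illustration:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, m, k = 40, 20, 3
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
y = A @ x

# Basis pursuit:  min ||x||_1  s.t.  Ax = y.
# Split x = u - v with u, v >= 0; then ||x||_1 = sum(u) + sum(v).
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
x_hat = res.x[:n] - res.x[n:]
# For Gaussian A with enough measurements relative to k, x_hat
# typically matches the true sparse x exactly (up to solver tolerance).
```

The LP always returns a feasible point of minimal ℓ1 norm; exact recovery of x is the additional guarantee CS theory provides under suitable conditions on A.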

Reconstruction of Compressively Sensed Images using Convex Tikhonov Sparse Dictionary Learning and Adaptive Spectral Filtering

Cornell University - arXiv, 2018

Sparse representation using over-complete dictionaries has been shown to produce good-quality results in various image processing tasks. Dictionary learning algorithms have made it possible to engineer data-adaptive dictionaries which have promising applications in image compression and image enhancement. The most common sparse dictionary learning algorithms iteratively use matching pursuit for sparse coding and K-SVD for dictionary learning. While this technique produces good results, it requires a large number of iterations to converge to an optimal solution. In this article, we use a closed-form stabilized convex optimization technique for both sparse coding and dictionary learning. The approach provides the best possible dictionary and the sparsest representation, resulting in minimum reconstruction error. Once the image is reconstructed from the compressively sensed samples, we use adaptive frequency and spatial filtering techniques to move towards exact image recovery. The results clearly show that the proposed algorithm provides much better reconstruction than conventional sparse dictionary techniques for a fixed number of iterations. The number of iterations the proposed algorithm needs to reach the optimal solution is significantly lower, and varies inversely with the amount of detail present in the image. Consequently, high PSNR and low MSE are obtained using the proposed algorithm for our compressive sensing framework.
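The matching-pursuit step used for sparse coding can be sketched as follows. This is a generic textbook version of matching pursuit, not the exact variant used in the paper; the dictionary and signal here are synthetic.

```python
import numpy as np

def matching_pursuit(D, y, n_iter):
    """Greedy sparse coding: repeatedly peel off the best-matching atom.
    Assumes the columns of D are unit-norm."""
    r = y.copy()                          # residual
    coef = np.zeros(D.shape[1])
    for _ in range(n_iter):
        c = D.T @ r                       # correlations with the residual
        j = np.argmax(np.abs(c))          # best-matching atom
        coef[j] += c[j]
        r -= c[j] * D[:, j]               # residual norm decreases monotonically
    return coef

rng = np.random.default_rng(4)
n, p = 32, 64
D = rng.standard_normal((n, p))
D /= np.linalg.norm(D, axis=0)            # unit-norm atoms
y = 2.0 * D[:, 3] - 1.5 * D[:, 40]        # signal built from two atoms
coef = matching_pursuit(D, y, n_iter=20)
```

K-SVD would then update the atoms of D using the supports found by this step; the slow convergence of that alternation is the motivation for the paper's closed-form approach.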

Compressed Sensing with Rank Deficient Dictionaries

In compressed sensing it is generally assumed that the dictionary matrix constitutes a (possibly overcomplete) basis of the signal space. In this paper we consider dictionaries that do not span the signal space, i.e., rank-deficient dictionaries. We show that in this case the signal-to-noise ratio (SNR) in the compressed samples can be increased by selecting the rows of the measurement matrix from the column space of the dictionary.
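The claimed SNR gain can be illustrated numerically: every signal lies in the column space of a rank-deficient dictionary, so measurement rows drawn from that column space spend all of their sensing energy on the subspace that actually carries the signal. A minimal numpy sketch under assumed dimensions:

```python
import numpy as np

rng = np.random.default_rng(5)
n, r, m = 50, 5, 30                     # ambient dim, dictionary rank, measurements
D = rng.standard_normal((n, r))         # rank-deficient: col space has dimension r
x = D @ rng.standard_normal(r)          # every signal lies in col(D)

Q, _ = np.linalg.qr(D)                  # orthonormal basis of col(D), shape (n, r)

A_generic = rng.standard_normal((m, n))           # rows anywhere in R^n
A_aligned = rng.standard_normal((m, r)) @ Q.T     # rows drawn from col(D)
# Normalize rows so both matrices spend equal sensing energy per sample.
A_generic /= np.linalg.norm(A_generic, axis=1, keepdims=True)
A_aligned /= np.linalg.norm(A_aligned, axis=1, keepdims=True)

sig_generic = np.linalg.norm(A_generic @ x) ** 2
sig_aligned = np.linalg.norm(A_aligned @ x) ** 2
# With fixed noise power per sample, the aligned rows yield higher SNR:
# on average only r/n of a generic row's energy lies in col(D).
```

For fixed additive noise per sample, the measured signal energy (and hence the SNR) is larger by roughly a factor of n/r in expectation.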