Thresholded smoothed-ℓ0 (SL0) dictionary learning for sparse representations

2009

In this paper, we suggest using a modified version of the Smoothed-ℓ0 (SL0) algorithm in the sparse representation step of iterative dictionary learning algorithms. In addition, we use steepest descent for updating a non-unit column-norm dictionary instead of a unit column-norm dictionary. Moreover, to make the dictionary learning task more blind, we estimate the average number of active atoms in the sparse representation of the training signals, whereas previous algorithms assumed that it is known in advance. Our simulation results show the advantages of our method over K-SVD in terms of both complexity and performance.
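
For reference, here is a minimal NumPy sketch of the plain SL0 sparse-coding step that this modification builds on (not the authors' thresholded variant): gradient ascent on the smoothed-ℓ0 objective with a decreasing σ, projecting back onto the constraint after each step. Parameter values are illustrative.

```python
import numpy as np

def sl0(A, x, sigma_min=1e-3, sigma_decrease=0.5, mu=2.0, inner_iters=3):
    """Plain Smoothed-L0 (SL0): approximate the sparsest s with A @ s = x
    by maximizing sum(exp(-s_i^2 / (2 sigma^2))) for a decreasing sigma."""
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                       # minimum-L2-norm feasible start
    sigma = 2.0 * np.max(np.abs(s))      # start with a large sigma
    while sigma > sigma_min:
        for _ in range(inner_iters):
            delta = s * np.exp(-s**2 / (2.0 * sigma**2))
            s = s - mu * delta                    # gradient ascent on F_sigma
            s = s - A_pinv @ (A @ s - x)          # project back onto A s = x
        sigma *= sigma_decrease
    return s
```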

An Efficient Dictionary Learning Algorithm for Sparse Representation

2010 Chinese Conference on Pattern Recognition (CCPR), 2010

Sparse and redundant representation of data assumes an ability to describe signals as linear combinations of a few atoms from a dictionary. If the model of the signal is unknown, the dictionary can be learned from a set of training signals. Like K-SVD, many practical dictionary learning algorithms are composed of two main parts: sparse coding and dictionary update. This paper first proposes a stagewise least angle regression (St-LARS) method for performing the sparse-coding operation. St-LARS applies a hard-thresholding strategy to the original least angle regression (LARS) algorithm, which enables it to select many atoms at each iteration and thus yields fast solutions while still providing good results. Then, a dictionary update method named approximated singular value decomposition (ASVD) is used in the dictionary update stage. It is a quick approximation of the exact SVD computation and reduces its complexity. Experiments on both synthetic data and 3-D image denoising demonstrate the advantages of the proposed algorithm over other dictionary learning methods, not only in terms of a better-trained dictionary but also in terms of computation time.
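
A simplified sketch of the stagewise hard-thresholding idea follows (closer in spirit to StOMP than to the exact St-LARS update): every atom whose correlation with the residual exceeds a threshold joins the support, followed by a least-squares refit. The residual-scaled threshold rule is an assumption for illustration.

```python
import numpy as np

def stagewise_threshold_coding(D, y, n_stages=5, t=2.0):
    """Stagewise sparse coding sketch: each stage admits every atom whose
    correlation with the residual exceeds a hard threshold, then refits the
    coefficients by least squares on the enlarged support."""
    m, n = D.shape
    support = np.zeros(n, dtype=bool)
    x = np.zeros(n)
    r = y.copy()
    for _ in range(n_stages):
        c = D.T @ r
        thresh = t * np.linalg.norm(r) / np.sqrt(m)  # residual-scaled threshold
        new_atoms = np.abs(c) > thresh
        if not new_atoms.any():
            break
        support |= new_atoms
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coeffs
        r = y - D @ x
    return x
```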

Dictionary Learning for Sparse Representation: A Novel Approach

IEEE Signal Processing Letters, 2013

A dictionary learning problem is a matrix factorization in which the goal is to factorize a training data matrix, Y, as the product of a dictionary, D, and a sparse coefficient matrix, X, as follows: Y ≈ DX. Current dictionary learning algorithms minimize the representation error subject to a constraint on D (usually having unit column-norms) and sparseness of X. The resulting problem is not convex with respect to the pair (D, X). In this letter, we derive a first-order series expansion formula for the factorization DX. The resulting objective function is jointly convex with respect to D and X. We simply solve the resulting problem using alternating minimization and apply some of the previously suggested algorithms onto our new problem. Simulation results on recovery of a known dictionary and dictionary learning for natural image patches show that our new problem considerably improves performance with little additional computational load.
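
Below is an illustrative sketch of one outer iteration under this linearization, assuming an ℓ1 penalty (weight lam) on the coefficients; this is not the letter's exact algorithm. Note that alternating minimization of the linearized objective ‖Y − Dk X − D Xk + Dk Xk‖²_F reduces to the two familiar subproblems shown in the comments.

```python
import numpy as np

def linearized_dl_step(Y, D, X, lam=0.1):
    """One illustrative outer iteration on the linearized factorization
    D X  ~  Dk X + D Xk - Dk Xk  (first-order expansion around (Dk, Xk))."""
    Dk, Xk = D.copy(), X.copy()
    # Fixing D = Dk collapses the linearized model to Y ~ Dk X, so update X
    # with one proximal gradient (soft-thresholding) step for an L1 penalty:
    step = 1.0 / np.linalg.norm(Dk.T @ Dk, 2)
    G = Xk - step * (Dk.T @ (Dk @ Xk - Y))
    X = np.sign(G) * np.maximum(np.abs(G) - lam * step, 0.0)
    # Fixing X = Xk collapses the model to Y ~ D Xk; this Gauss-Seidel-style
    # variant reuses the freshly updated X in the least-squares solve:
    D = Y @ np.linalg.pinv(X)
    D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    return D, X
```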

Supervised Dictionary Learning and Sparse Representation-A Review

Dictionary learning and sparse representation (DLSR) is a recent and successful mathematical model for data representation that achieves state-of-the-art performance in various fields such as pattern recognition, machine learning, computer vision, and medical imaging. The original formulation for DLSR is based on the minimization of the reconstruction error between the original signal and its sparse representation in the space of the learned dictionary. Although this formulation is optimal for solving problems such as denoising, inpainting, and coding, it may not lead to optimal solutions in classification tasks, where the ultimate goal is to make the learned dictionary …
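
For context, the classical reconstructive formulation the review starts from can be written as follows (standard notation, not specific to this review):

```latex
% Y is the training data, D the dictionary with unit-norm atoms d_j,
% X the coefficient matrix with columns x_i, and T the sparsity level.
\min_{\mathbf{D},\mathbf{X}} \|\mathbf{Y}-\mathbf{D}\mathbf{X}\|_F^2
\quad \text{s.t.} \quad \|\mathbf{x}_i\|_0 \le T \;\; \forall i, \qquad
\|\mathbf{d}_j\|_2 = 1 \;\; \forall j
```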

Efficient Dictionary Learning with Sparseness-Enforcing Projections

International Journal of Computer Vision, 2015

Learning dictionaries suitable for sparse coding instead of using engineered bases has proven effective in a variety of image processing tasks. This paper studies the optimization of dictionaries on image data where the representation is enforced to be explicitly sparse with respect to a smooth, normalized sparseness measure. This involves the computation of Euclidean projections onto level sets of the sparseness measure. While previous algorithms for this optimization problem had at least quasi-linear time complexity, here the first algorithm with linear time complexity and constant space complexity is proposed. The key to this is the mathematically rigorous derivation of a characterization of the projection's result based on a soft-shrinkage function. This theory is applied in an original algorithm called Easy Dictionary Learning (EZDL), which learns dictionaries with a simple and fast-to-compute Hebbian-like learning rule. The new algorithm is efficient, expressive and particularly simple to implement. It is demonstrated that despite its simplicity, the proposed learning algorithm is able to generate a rich variety of dictionaries, in particular a topographic organization of atoms or separable atoms. Further, the dictionaries are as expressive as those of benchmark learning algorithms in terms of the reproduction quality on entire images, and result in an equivalent denoising performance. EZDL learns approximately 30% faster than the already very efficient Online Dictionary Learning algorithm, and is …
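
A minimal sketch of the soft-shrinkage idea behind these projections, using Hoyer's normalized sparseness measure: the paper derives an exact linear-time algorithm, whereas this bisection variant is the simple quasi-linear approach it improves on.

```python
import numpy as np

def hoyer_sparseness(x):
    """Hoyer's normalized sparseness in [0, 1]: 0 for a flat vector,
    1 for a 1-sparse vector."""
    n = x.size
    return (np.sqrt(n) - np.linalg.norm(x, 1) / np.linalg.norm(x, 2)) \
           / (np.sqrt(n) - 1.0)

def project_to_sparseness(x, target, iters=60):
    """Find the soft-shrinkage threshold whose shrunk copy of x attains the
    target sparseness, by bisection: shrinking raises sparseness monotonically."""
    lo, hi = 0.0, np.max(np.abs(x))
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        w = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
        if np.count_nonzero(w) == 0 or hoyer_sparseness(w) > target:
            hi = lam  # shrunk too far
        else:
            lo = lam  # not sparse enough yet
    lam = 0.5 * (lo + hi)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```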

Learned dictionaries for sparse image representation: properties and results

Wavelets and Sparsity XIV, 2011

Sparse representation of images using learned dictionaries has been shown to work well for applications like image denoising, inpainting, image compression, etc. In this paper dictionary properties are reviewed from a theoretical approach, and experimental results for learned dictionaries are presented. The main dictionary properties are the upper and lower frame (dictionary) bounds, and (mutual) coherence properties based on the angle between dictionary atoms. Both ℓ0 sparsity and ℓ1 sparsity are considered by using a matching pursuit method, order recursive matching pursuit (ORMP), and a basis pursuit method, i.e. LARS or Lasso. For dictionary learning the following methods are considered: iterative least squares (ILS-DLA or MOD), recursive least squares (RLS-DLA), K-SVD and online dictionary learning (ODL). Finally, it is shown how these properties relate to an image compression example.
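
Both properties are cheap to compute directly; a sketch for a dictionary D with atoms as columns:

```python
import numpy as np

def dictionary_properties(D):
    """Compute the properties reviewed above: lower/upper frame bounds
    (extreme eigenvalues of D D^T, i.e. squared extreme singular values of D)
    and mutual coherence (largest absolute inner product between distinct
    unit-norm atoms)."""
    s = np.linalg.svd(D, compute_uv=False)
    lower_bound, upper_bound = s[-1]**2, s[0]**2
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    coherence = G.max()
    return lower_bound, upper_bound, coherence
```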

Family of iterative LS-based dictionary learning algorithms, ILS-DLA, for sparse signal representation

Digital Signal Processing, 2007

The use of overcomplete dictionaries, or frames, for sparse signal representation has been given considerable attention in recent years. The major challenges are good algorithms for sparse approximations, i.e., vector selection algorithms, and good methods for choosing or designing dictionaries/frames. This work is concerned with the latter. We present a family of iterative least squares based dictionary learning algorithms (ILS-DLA), including algorithms for design of signal-dependent block-based dictionaries and overlapping dictionaries, as generalizations of transforms and filter banks, respectively. In addition, different constraints can be included in ILS-DLA; thus we present different constrained design algorithms. Experiments show that ILS-DLA is capable of reconstructing (most of) the generating dictionary vectors from a sparsely generated data set, with and without noise. The dictionaries are shown to be useful in applications like signal representation and compression, where experiments demonstrate that our ILS-DLA dictionaries substantially improve compression results compared to traditional signal expansions such as transforms and filter banks/wavelets.
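
The core ILS update for the block-based case is a single least-squares solve per iteration; a sketch (the small ridge term eps and the column normalization post-step are assumptions added here for numerical robustness):

```python
import numpy as np

def mod_update(Y, X, eps=1e-10):
    """ILS-DLA / MOD dictionary update for the block-based case: the
    least-squares optimum D = Y X^T (X X^T)^{-1} for fixed sparse
    coefficients X, followed by column renormalization."""
    K = X.shape[0]
    D = Y @ X.T @ np.linalg.inv(X @ X.T + eps * np.eye(K))
    D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
    return D
```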

Dictionary Learning Algorithms for Sparse Representation

Neural Computation, 2003

Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial "25 words or less"), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations.
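
A sketch of the basic FOCUSS reweighting that the representation step iterates, for a generic ℓp (p < 1) prior; the eps floor on the weights is an assumption added here to keep the reweighted system well behaved:

```python
import numpy as np

def focuss(A, y, p=0.5, n_iters=30, eps=1e-8):
    """Basic FOCUSS sketch: iteratively reweighted minimum-norm solutions.
    With W_k = diag(|x_k|^(1 - p/2)), the update x_{k+1} = W_k (A W_k)^+ y
    drives small coefficients toward zero, approximating an L_p sparsity
    prior for p < 1."""
    x = np.linalg.pinv(A) @ y               # minimum-L2 start
    for _ in range(n_iters):
        w = np.abs(x) ** (1.0 - p / 2.0) + eps
        AW = A * w                           # A @ diag(w): column scaling
        x = w * (np.linalg.pinv(AW) @ y)
    return x
```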

Dictionary Learning Based on Laplacian Score in Sparse Coding

Sparse coding, which produces a vector representation based on a sparse linear combination of dictionary atoms, has been widely applied in signal processing, data mining and neuroscience. Constructing a proper dictionary for sparse coding is a common challenging problem. In this paper, we treat dictionary learning as an unsupervised learning process, and propose a Laplacian score dictionary (LSD). This new learning method uses local geometry information to select atoms for the dictionary. Comparisons with alternative clustering-based dictionary learning methods are conducted. We also compare LSD with a full-training-data dictionary and other classic methods in the experiments. The classification performances on binary-class and multi-class datasets from the UCI repository demonstrate the effectiveness and efficiency of our method.
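
A hedged sketch of Laplacian-score-based atom selection: build a similarity graph W over the training signals, score each candidate, and keep the k lowest-scoring ones as atoms. How LSD forms candidate responses follows the paper; using each candidate's correlations with the training set (Y.T @ Y) is an assumption made here purely for illustration.

```python
import numpy as np

def laplacian_scores(F, W):
    """Standard Laplacian score (He et al.) that LSD builds on: F is an
    (n_samples, n_candidates) matrix of candidate responses, W an
    (n_samples, n_samples) similarity graph over the training signals.
    Lower scores mean a candidate better respects the local geometry."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                       # unnormalized graph Laplacian
    scores = np.empty(F.shape[1])
    for r in range(F.shape[1]):
        f = F[:, r]
        f = f - (f @ d) / d.sum()            # remove the degree-weighted mean
        scores[r] = (f @ L @ f) / max(f @ (d * f), 1e-12)
    return scores

def select_atoms(Y, W, k):
    """Keep the k training signals (columns of Y) with the lowest scores."""
    scores = laplacian_scores(Y.T @ Y, W)    # candidate r ~ signal y_r
    return Y[:, np.argsort(scores)[:k]]
```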