
Thresholded smoothed-ℓ0 (SL0) dictionary learning for sparse representations

2009

In this paper, we suggest using a modified version of the smoothed-ℓ0 (SL0) algorithm in the sparse representation step of iterative dictionary learning algorithms. In addition, we use steepest descent to update a non-unit column-norm dictionary instead of a unit column-norm one. Moreover, to make the dictionary learning task more blind, we estimate the average number of active atoms in the sparse representations of the training signals, whereas previous algorithms assumed it was known in advance. Our simulation results show the advantages of our method over K-SVD in terms of both complexity and performance.
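
For context on the sparse coding step, the core SL0 idea is to replace the ℓ0 "norm" by a smooth Gaussian surrogate whose width σ is shrunk gradually. Below is a minimal Python sketch of plain, noiseless SL0 (not the thresholded variant this paper proposes); the parameter defaults are illustrative assumptions.

```python
import numpy as np

def sl0(A, y, sigma_min=1e-3, sigma_decay=0.5, mu=2.0, n_inner=3):
    """Minimal smoothed-l0 (SL0) sketch for underdetermined A @ x = y.
    Approximates ||x||_0 by n - sum(exp(-x_i^2 / (2*sigma^2))) and
    decreases sigma geometrically. Parameter values are assumptions."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                                        # min-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(n_inner):
            delta = x * np.exp(-x**2 / (2.0 * sigma**2))  # gradient of smooth surrogate
            x = x - mu * delta                            # steepest-descent step
            x = x - A_pinv @ (A @ x - y)                  # project back onto A x = y
        sigma *= sigma_decay                              # shrink the smoothing width
    return x
```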

An Efficient Dictionary Learning Algorithm for Sparse Representation

2010 Chinese Conference on Pattern Recognition (CCPR), 2010

Sparse and redundant representation of data assumes an ability to describe signals as linear combinations of a few atoms from a dictionary. If the model of the signal is unknown, the dictionary can be learned from a set of training signals. Like K-SVD, many practical dictionary learning algorithms consist of two main parts: sparse coding and dictionary update. This paper first proposes a stagewise least angle regression (St-LARS) method for the sparse-coding step. St-LARS applies a hard-thresholding strategy to the original least angle regression (LARS) algorithm, which enables it to select many atoms at each iteration and thus yields fast solutions while still providing good results. A dictionary update method named approximated singular value decomposition (ASVD) is then used in the dictionary update stage; it is a fast approximation of the exact SVD computation that reduces its complexity. Experiments on both synthetic data and 3-D image denoising demonstrate the advantages of the proposed algorithm over other dictionary learning methods, not only in terms of a better trained dictionary but also in terms of computation time.
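
The effect of hard thresholding inside the atom-selection loop can be illustrated with a stagewise pursuit in the spirit of St-LARS (and of StOMP), sketched below; the threshold rule and stage count are assumptions, not the paper's exact criterion.

```python
import numpy as np

def stagewise_thresholded_pursuit(D, y, n_stages=10, t=2.0):
    """Sketch: at each stage, every atom whose correlation with the residual
    exceeds a threshold joins the active set at once, followed by a
    least-squares fit on the active set. Illustrative only."""
    n_atoms = D.shape[1]
    active = np.zeros(n_atoms, dtype=bool)
    x = np.zeros(n_atoms)
    r = y.astype(float).copy()
    for _ in range(n_stages):
        c = D.T @ r                                       # correlations with residual
        thr = t * np.linalg.norm(r) / np.sqrt(len(y))     # threshold scales with residual
        new = np.abs(c) >= thr
        if not new.any():
            break
        active |= new                                     # admit many atoms per stage
        x_active, *_ = np.linalg.lstsq(D[:, active], y, rcond=None)
        x[:] = 0.0
        x[active] = x_active
        r = y - D @ x
    return x
```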

Dictionary Learning for Sparse Representation: A Novel Approach

IEEE Signal Processing Letters, 2013

The dictionary learning problem is a matrix factorization problem in which the goal is to factorize a training data matrix, Y, as the product of a dictionary, D, and a sparse coefficient matrix, X, so that Y ≈ DX. Current dictionary learning algorithms minimize the representation error subject to a constraint on D (usually having unit column-norms) and sparseness of X. The resulting problem is not convex with respect to the pair (D, X). In this letter, we derive a first-order series expansion formula for the factorization DX. The resulting objective function is jointly convex with respect to D and X. We simply solve the resulting problem using alternating minimization and apply some of the previously suggested algorithms to our new problem. Simulation results on recovery of a known dictionary and on dictionary learning for natural image patches show that our new problem considerably improves performance with little additional computational load.
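
A plausible form of the expansion referred to here, consistent with how the 2019 follow-up below describes [Sadeghi et al., 2013] (the notation is an assumption): around the current iterate (D_n, X_n),

```latex
\mathbf{D}\mathbf{X} \;\approx\; \mathbf{D}_n \mathbf{X} + \mathbf{D}\mathbf{X}_n - \mathbf{D}_n \mathbf{X}_n .
```

The right-hand side is affine in the pair (D, X), so the representation error ‖Y − (D_n X + D X_n − D_n X_n)‖_F² is jointly convex in D and X.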

Supervised Dictionary Learning and Sparse Representation: A Review

Dictionary learning and sparse representation (DLSR) is a recent and successful mathematical model for data representation that achieves state-of-the-art performance in various fields such as pattern recognition, machine learning, computer vision, and medical imaging. The original formulation of DLSR is based on minimizing the reconstruction error between the original signal and its sparse representation in the space of the learned dictionary. Although this formulation is optimal for solving problems such as denoising, inpainting, and coding, it may not lead to an optimal solution in classification tasks, where the ultimate goal is to make the learned dictionary […]
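
For reference, the reconstructive formulation the review refers to is commonly written as follows (standard DLSR notation, assumed here rather than quoted from the paper):

```latex
\min_{\mathbf{D},\,\mathbf{X}} \;\|\mathbf{Y}-\mathbf{D}\mathbf{X}\|_F^2
\quad \text{subject to} \quad \|\mathbf{x}_i\|_0 \le T_0 \quad \text{for all } i,
```

where the columns of Y are the training signals and T0 caps the number of active atoms per signal; supervised variants augment this objective with a discrimination term.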

Efficient Dictionary Learning with Sparseness-Enforcing Projections

International Journal of Computer Vision, 2015

Learning dictionaries suitable for sparse coding, instead of using engineered bases, has proven effective in a variety of image processing tasks. This paper studies the optimization of dictionaries on image data where the representation is enforced to be explicitly sparse with respect to a smooth, normalized sparseness measure. This involves computing Euclidean projections onto level sets of the sparseness measure. While previous algorithms for this optimization problem had at least quasi-linear time complexity, here the first algorithm with linear time complexity and constant space complexity is proposed. The key to this is a mathematically rigorous characterization of the projection's result based on a soft-shrinkage function. This theory is applied in an original algorithm called Easy Dictionary Learning (EZDL), which learns dictionaries with a simple and fast-to-compute Hebbian-like learning rule. The new algorithm is efficient, expressive, and particularly simple to implement. It is demonstrated that despite its simplicity, the proposed learning algorithm is able to generate a rich variety of dictionaries, in particular a topographic organization of atoms or separable atoms. Further, the dictionaries are as expressive as those of benchmark learning algorithms in terms of reproduction quality on entire images, and result in equivalent denoising performance. EZDL learns approximately 30% faster than the already very efficient Online Dictionary Learning algorithm.
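
The two ingredients named here, a smooth normalized sparseness measure and a soft-shrinkage mapping, can be sketched as follows. Hoyer's measure is a common choice for such a normalized measure and is assumed below; the paper derives an exact linear-time projection, whereas this bisection is only an illustrative stand-in.

```python
import numpy as np

def hoyer_sparseness(x):
    """Normalized sparseness in [0, 1]: 0 for flat vectors, 1 for 1-sparse ones."""
    n = x.size
    return (np.sqrt(n) - np.abs(x).sum() / np.linalg.norm(x)) / (np.sqrt(n) - 1.0)

def shrink_to_sparseness(x, target, n_iter=60):
    """Reach a target sparseness by bisecting on a soft-shrinkage threshold,
    then rescale to the original l2 norm. Illustrative stand-in only."""
    lo, hi = 0.0, float(np.abs(x).max())
    for _ in range(n_iter):
        t = 0.5 * (lo + hi)
        s = np.sign(x) * np.maximum(np.abs(x) - t, 0.0)   # soft shrinkage
        if not s.any() or hoyer_sparseness(s) > target:
            hi = t                                        # shrunk too much
        else:
            lo = t                                        # not sparse enough yet
    s = np.sign(x) * np.maximum(np.abs(x) - lo, 0.0)
    return s * (np.linalg.norm(x) / max(np.linalg.norm(s), 1e-12))
```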

A New Algorithm for Dictionary Learning Based on Convex Approximation

2019

The purpose of the dictionary learning (DL) problem is to learn a dictionary D from a training data matrix Y such that Y ≈ DX and the coefficient matrix X is sparse. Many algorithms have been introduced to this end, minimizing the representation error subject to a sparseness constraint on X. However, the dictionary learning problem is non-convex with respect to the pair (D, X). In a previous work [Sadeghi et al., 2013], a convex approximation of the non-convex term DX was introduced, which makes the whole DL problem convex. This approach can be applied to almost any existing DL algorithm to obtain better algorithms. In the current paper, it is shown that a simple modification of that approach significantly improves its performance, in terms of both accuracy and speed. Simulation results on synthetic dictionary recovery are provided to confirm this claim.
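
A minimal sketch of how such a convexified objective could be used, assuming the first-order expansion shown earlier and plain proximal-gradient steps; the function name, step-size rule, and iteration counts are hypothetical, not the paper's algorithm.

```python
import numpy as np

def convexified_dl_step(Y, D, X, lam=0.1, n_inner=20):
    """One outer iteration of a hypothetical convexified DL scheme:
    fix (D0, X0) = current iterate, replace D @ X by the affine surrogate
    D0 @ X + D @ X0 - D0 @ X0, and run proximal-gradient steps on
    0.5 * ||Y - surrogate||_F^2 + lam * ||X||_1 (jointly convex)."""
    D0, X0 = D.copy(), X.copy()
    # step size from a crude Lipschitz bound of the quadratic part (assumption)
    L = (np.linalg.norm(D0, 2) + np.linalg.norm(X0, 2)) ** 2
    step = 1.0 / max(L, 1e-12)
    for _ in range(n_inner):
        R = D0 @ X + D @ X0 - D0 @ X0 - Y        # residual of the affine model
        Z = X - step * (D0.T @ R)                # gradient step on X ...
        X = np.sign(Z) * np.maximum(np.abs(Z) - step * lam, 0.0)  # ... then soft threshold
        D = D - step * (R @ X0.T)                # plain gradient step on D
    return D, X
```

An outer loop would refresh (D0, X0) with the returned iterates and repeat.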

Learned dictionaries for sparse image representation: properties and results

Wavelets and Sparsity XIV, 2011

Sparse representation of images using learned dictionaries has been shown to work well for applications like image denoising, inpainting, image compression, etc. In this paper, dictionary properties are reviewed from a theoretical perspective, and experimental results for learned dictionaries are presented. The main dictionary properties are the upper and lower frame (dictionary) bounds and (mutual) coherence properties based on the angle between dictionary atoms. Both ℓ0 sparsity and ℓ1 sparsity are considered, using a matching pursuit method, order recursive matching pursuit (ORMP), and a basis pursuit method, i.e., LARS or Lasso. For dictionary learning, the following methods are considered: iterative least squares (ILS-DLA or MOD), recursive least squares (RLS-DLA), K-SVD, and online dictionary learning (ODL). Finally, it is shown how these properties relate to an image compression example.
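
The frame bounds and mutual coherence discussed here are straightforward to compute for a given dictionary; a small sketch (atoms assumed to be the columns of D):

```python
import numpy as np

def dictionary_properties(D):
    """Lower/upper frame bounds and mutual coherence of a dictionary
    whose atoms are the columns of D."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm atoms
    s = np.linalg.svd(Dn, compute_uv=False)
    lower, upper = s[-1] ** 2, s[0] ** 2               # frame bounds A, B
    G = np.abs(Dn.T @ Dn)                              # normalized Gram matrix
    np.fill_diagonal(G, 0.0)
    mu = float(G.max())                                # mutual coherence
    return lower, upper, mu
```

For an overcomplete dictionary, a lower frame bound A > 0 requires the atoms to span the signal space, and a smaller μ means the atoms are closer to orthogonal.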

A Dictionary Learning Method for Sparse Representation Using a Homotopy Approach

Lecture Notes in Computer Science, 2015

In this paper, we address the problem of dictionary learning for sparse representation. Considering the regularized form of the dictionary learning problem, we propose a method based on a homotopy approach, in which the regularization parameter is gradually decreased over iterations. We estimate the value of the regularization parameter adaptively at each iteration, based on the current values of the dictionary and the sparse coefficients, such that it preserves the optimality conditions of both the sparse coefficients and the dictionary. This value is then gradually decreased for the next iteration, following a homotopy method. The results show that our method is faster than recent dictionary learning methods, while overall it outperforms them in recovering the dictionaries.
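
A compact sketch of the homotopy idea: solve the regularized problem for a gradually decreasing λ, alternating sparse coding with a dictionary update. The paper chooses λ adaptively from the optimality conditions of D and X; the fixed geometric decay and the MOD-style update below are assumptions.

```python
import numpy as np

def ista(D, y, x, lam, n_iter=50):
    """Plain ISTA for 0.5*||y - D @ x||^2 + lam*||x||_1 (inner solver)."""
    L = np.linalg.norm(D, 2) ** 2                     # Lipschitz constant
    for _ in range(n_iter):
        z = x - D.T @ (D @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

def homotopy_dl(Y, D, lam0=1.0, decay=0.7, n_outer=20):
    """Homotopy-style DL sketch: decrease lambda between outer iterations."""
    X = np.zeros((D.shape[1], Y.shape[1]))
    lam = lam0
    for _ in range(n_outer):
        for j in range(Y.shape[1]):                   # sparse coding, column-wise
            X[:, j] = ista(D, Y[:, j], X[:, j], lam)
        if np.any(X):
            D = Y @ np.linalg.pinv(X)                 # MOD-style dictionary update
            D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1e-12)
        lam *= decay                                  # homotopy: shrink lambda
    return D, X
```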

Multilevel dictionary learning for sparse representation of images

2011

Adaptive data-driven dictionaries for sparse approximations provide superior performance compared to predefined dictionaries in applications involving the representation and classification of data. In this paper, we propose a novel algorithm for learning global dictionaries particularly suited to the sparse representation of natural images. The proposed algorithm uses a hierarchical energy-based learning approach to learn a multilevel dictionary.
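
The abstract does not spell out the learning rule, so the sketch below only illustrates one plausible reading of "multilevel": each level fits a small dictionary to the residual left by the previous level via a 1-sparse (K-hyperline-style) clustering step. This is an assumption for illustration, not the paper's energy-based algorithm.

```python
import numpy as np

def multilevel_dictionary(Y, n_levels=4, atoms_per_level=16, n_iter=10, seed=0):
    """Illustrative multilevel sketch: level-by-level 1-sparse fits to residuals."""
    rng = np.random.default_rng(seed)
    R = Y.astype(float).copy()
    levels = []
    for _ in range(n_levels):
        D = rng.standard_normal((Y.shape[0], atoms_per_level))
        D /= np.linalg.norm(D, axis=0, keepdims=True)
        for _ in range(n_iter):
            idx = np.argmax(np.abs(D.T @ R), axis=0)       # best atom per column
            for k in range(atoms_per_level):
                cols = R[:, idx == k]
                if cols.shape[1]:
                    D[:, k] = np.linalg.svd(cols, full_matrices=False)[0][:, 0]
        idx = np.argmax(np.abs(D.T @ R), axis=0)
        coef = np.einsum('ij,ij->j', D[:, idx], R)         # 1-sparse coefficients
        R = R - D[:, idx] * coef                           # pass residual to next level
        levels.append(D)
    return levels
```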