Supervised Dictionary Learning and Sparse Representation: A Review
Related papers
Dictionary Learning for Sparse Representation: A Novel Approach
IEEE Signal Processing Letters, 2013
A dictionary learning problem is a matrix factorization in which the goal is to factorize a training data matrix, Y, as the product of a dictionary, D, and a sparse coefficient matrix, X, as follows: Y ≈ DX. Current dictionary learning algorithms minimize the representation error subject to a constraint on D (usually having unit column-norms) and sparseness of X. The resulting problem is not convex with respect to the pair (D, X). In this letter, we derive a first order series expansion formula for the factorization DX. The resulting objective function is jointly convex with respect to D and X. We simply solve the resulting problem using alternating minimization and apply some of the previously suggested algorithms to our new problem. Simulation results on recovery of a known dictionary and dictionary learning for natural image patches show that our new problem considerably improves performance with little additional computational load.
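To make the setup concrete, here is a minimal sketch of the standard alternating scheme for the factorization Y ≈ DX, with ISTA for the sparse-coding step and a least-squares dictionary update. It illustrates the non-convex baseline the letter starts from, not its convexified objective; all function names and parameter values are illustrative.

```python
# Minimal sketch of the standard alternating scheme for Y ~ D X
# (the non-convex baseline, not the letter's convexified variant).
import numpy as np

def sparse_code(Y, D, lam, n_iter=50):
    """ISTA: soft-thresholded gradient steps on 0.5*||Y - DX||_F^2 + lam*||X||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    X = np.zeros((D.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        G = X - D.T @ (D @ X - Y) / L      # gradient step
        X = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)  # soft-threshold
    return X

def dictionary_update(Y, X, eps=1e-12):
    """Least-squares dictionary, then rescale atoms to unit column norm."""
    D = Y @ np.linalg.pinv(X)
    return D / (np.linalg.norm(D, axis=0, keepdims=True) + eps)

def learn_dictionary(Y, n_atoms, lam=0.1, n_outer=30, seed=0):
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    for _ in range(n_outer):
        X = sparse_code(Y, D, lam)         # fix D, update X
        D = dictionary_update(Y, X)        # fix X, update D
    return D, X
```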
An Efficient Dictionary Learning Algorithm for Sparse Representation
2010 Chinese Conference on Pattern Recognition (CCPR), 2010
Sparse and redundant representation of data assumes an ability to describe signals as linear combinations of a few atoms from a dictionary. If the model of the signal is unknown, the dictionary can be learned from a set of training signals. Like K-SVD, many practical dictionary learning algorithms are composed of two main parts: sparse coding and dictionary update. This paper first proposes a stagewise least angle regression (St-LARS) method for performing the sparse-coding operation. St-LARS applies a hard-thresholding strategy to the original least angle regression (LARS) algorithm, which enables it to select many atoms at each iteration and thus yields fast solutions while still providing good results. Then, a dictionary update method named approximated singular value decomposition (ASVD) is used in the dictionary update stage. It is a quick approximation of the exact SVD computation and reduces its complexity. Experiments on both synthetic data and 3-D image denoising demonstrate the advantages of the proposed algorithm over other dictionary learning methods, both in the quality of the trained dictionary and in computation time.
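The sketch below illustrates the general idea of hard-thresholded, stagewise atom selection: every atom whose correlation with the residual clears a threshold is admitted in one pass, followed by a least-squares refit on the support. It is a simplification in the spirit of St-LARS, not the authors' exact algorithm; the names and the threshold rule are assumptions.

```python
# Illustrative stagewise pursuit: admit *all* atoms whose correlation with
# the residual clears a hard threshold, then refit on the active set.
import numpy as np

def stagewise_pursuit(y, D, n_stages=5, frac=0.7):
    support = []                            # indices of selected atoms
    residual = y.copy()
    for _ in range(n_stages):
        corr = np.abs(D.T @ residual)
        thresh = frac * corr.max()          # hard threshold, a fraction of the peak
        new = [j for j in np.flatnonzero(corr >= thresh) if j not in support]
        if not new:
            break
        support.extend(new)
        Ds = D[:, support]
        coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)   # refit on support
        residual = y - Ds @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```

Selecting several atoms per pass is what buys the speed over one-atom-at-a-time greedy methods.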
Learned dictionaries for sparse image representation: properties and results
Wavelets and Sparsity XIV, 2011
Sparse representation of images using learned dictionaries has been shown to work well for applications like image denoising, inpainting, image compression, etc. In this paper, dictionary properties are reviewed from a theoretical approach, and experimental results for learned dictionaries are presented. The main dictionary properties are the upper and lower frame (dictionary) bounds, and (mutual) coherence properties based on the angle between dictionary atoms. Both l0 sparsity and l1 sparsity are considered, using a matching pursuit method, order recursive matching pursuit (ORMP), and a basis pursuit method, i.e. LARS or Lasso. For dictionary learning the following methods are considered: iterative least squares (ILS-DLA or MOD), recursive least squares (RLS-DLA), K-SVD, and online dictionary learning (ODL). Finally, it is shown how these properties relate to an image compression example.
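Both properties are directly computable: the frame bounds are the extreme squared singular values of the (unit-norm) dictionary matrix, and the mutual coherence is the largest off-diagonal magnitude of its Gram matrix. A minimal sketch:

```python
# Sketch: the two dictionary properties the paper reviews, computed directly.
import numpy as np

def dictionary_properties(D):
    D = D / np.linalg.norm(D, axis=0, keepdims=True)   # unit-norm atoms
    s = np.linalg.svd(D, compute_uv=False)
    lower, upper = s[-1] ** 2, s[0] ** 2               # frame bounds A and B
    G = np.abs(D.T @ D)                                # Gram matrix of the atoms
    np.fill_diagonal(G, 0.0)
    mu = G.max()                                       # mutual coherence
    return lower, upper, mu
```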
Dictionary Learning Algorithms for Sparse Representation
Neural Computation, 2003
Algorithms for data-driven learning of domain-specific overcomplete dictionaries are developed to obtain maximum likelihood and maximum a posteriori dictionary estimates based on the use of Bayesian models with concave/Schur-concave (CSC) negative log priors. Such priors are appropriate for obtaining sparse representations of environmental signals within an appropriately chosen (environmentally matched) dictionary. The elements of the dictionary can be interpreted as concepts, features, or words capable of succinct expression of events encountered in the environment (the source of the measured signals). This is a generalization of vector quantization in that one is interested in a description involving a few dictionary entries (the proverbial "25 words or less"), but not necessarily as succinct as one entry. To learn an environmentally adapted dictionary capable of concise expression of signals generated by the environment, we develop algorithms that iterate between a representative set of sparse representations found by variants of FOCUSS and an update of the dictionary using these sparse representations.
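For orientation, here is a minimal sketch of a FOCUSS-style reweighted sparse-coding step, i.e. the update x <- Pi D^T (D Pi D^T)^{-1} y with Pi = diag(|x_i|^{2-p}), where p < 2 promotes sparsity; the paper alternates such sparse-coding steps with a dictionary update. The regularization constant and parameter values are illustrative.

```python
# Sketch of FOCUSS-style reweighted iterations for sparse coding (p < 2
# promotes sparsity); not the paper's full dictionary learning algorithm.
import numpy as np

def focuss(y, D, p=1.0, n_iter=30, eps=1e-8):
    x = np.linalg.pinv(D) @ y                  # minimum-norm initialization
    for _ in range(n_iter):
        w = np.abs(x) ** (2 - p) + eps         # Pi = diag(|x_i|^{2-p})
        DPi = D * w                            # equals D @ diag(w)
        z = np.linalg.solve(DPi @ D.T + eps * np.eye(D.shape[0]), y)
        x = w * (D.T @ z)                      # x = Pi D^T (D Pi D^T)^{-1} y
    return x
```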
Supervised dictionary learning
Advances in Neural Information Processing Systems (NIPS), 2008
It is now well established that sparse signal models are well suited for restoration tasks and can be effectively learned from audio, image, and video data. Recent research has been aimed at learning discriminative sparse models instead of purely reconstructive ones. This paper proposes a new step in that direction, with a novel sparse representation for signals belonging to different classes in terms of a shared dictionary and discriminative class models. The linear version of the proposed model admits a simple probabilistic interpretation, while its most general variant admits an interpretation in terms of kernels. An optimization framework for learning all the components of the proposed model is presented, along with experimental results on standard handwritten digit and texture classification tasks.
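To convey the flavour of such a model, the sketch below evaluates an objective combining reconstruction error, an l1 sparsity penalty, and a logistic (discriminative) loss on the sparse codes for binary labels. The weights and names are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a supervised dictionary learning objective: reconstructive term
# plus sparsity plus a discriminative (logistic) term on the codes.
import numpy as np

def supervised_objective(Y, labels, D, W, X, lam=0.15, mu=1.0):
    """Y: (dim, n) signals; labels: (n,) in {-1, +1}; W: (n_atoms,) class model."""
    recon = 0.5 * np.linalg.norm(Y - D @ X, "fro") ** 2   # reconstructive term
    sparsity = lam * np.abs(X).sum()                      # l1 penalty on codes
    margins = labels * (X.T @ W)                          # per-sample margins
    discrim = mu * np.logaddexp(0.0, -margins).sum()      # logistic loss
    return recon + sparsity + discrim
```

Minimizing such an objective jointly over D, W, and X is what ties the dictionary to the classification task rather than to reconstruction alone.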
Sparse coding and dictionary learning for image understanding
Proceedings of the British Machine Vision Conference, 2010
Sparse coding, that is, modeling data vectors as sparse linear combinations of dictionary elements, is widely used in machine learning, neuroscience, signal processing, and statistics. This talk addresses the problem of learning the dictionary, adapting it to specific data and image understanding tasks. In particular, I will present a fast online approach to unsupervised dictionary learning and, more generally, sparse matrix factorization, and demonstrate its applications in image restoration tasks such as denoising, demosaicking, and inpainting. I will also present a general formulation of supervised dictionary learning adapted to tasks such as classification and regression. We have developed an efficient algorithm for solving the corresponding optimization problem, and I will demonstrate its application to handwritten digit classification, image deblurring and digital zooming, inverse halftoning, and the detection of fake artworks.
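As a usage sketch, online dictionary learning of this kind is available, for instance, in scikit-learn's MiniBatchDictionaryLearning. The patch size, number of atoms, and other parameter values below are illustrative, and the random image is a stand-in for real data.

```python
# Usage sketch: learning a patch dictionary with an online (mini-batch) method.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d

image = np.random.rand(128, 128)                 # stand-in for a real image
patches = extract_patches_2d(image, (8, 8), max_patches=5000, random_state=0)
P = patches.reshape(len(patches), -1)
P -= P.mean(axis=1, keepdims=True)               # remove per-patch DC component

dl = MiniBatchDictionaryLearning(n_components=100, alpha=1.0,
                                 batch_size=256, random_state=0)
codes = dl.fit(P).transform(P)                   # sparse codes for each patch
D = dl.components_                               # learned dictionary atoms
```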
Efficient Dictionary Learning with Sparseness-Enforcing Projections
International Journal of Computer Vision, 2015
Learning dictionaries suitable for sparse coding instead of using engineered bases has proven effective in a variety of image processing tasks. This paper studies the optimization of dictionaries on image data where the representation is enforced to be explicitly sparse with respect to a smooth, normalized sparseness measure. This involves the computation of Euclidean projections onto level sets of the sparseness measure. While previous algorithms for this optimization problem had at least quasi-linear time complexity, here the first algorithm with linear time complexity and constant space complexity is proposed. The key to this is the mathematically rigorous derivation of a characterization of the projection's result based on a soft-shrinkage function. This theory is applied in an original algorithm called Easy Dictionary Learning (EZDL), which learns dictionaries with a simple and fast-to-compute Hebbian-like learning rule. The new algorithm is efficient, expressive, and particularly simple to implement. It is demonstrated that despite its simplicity, the proposed learning algorithm is able to generate a rich variety of dictionaries, in particular a topographic organization of atoms or separable atoms. Further, the dictionaries are as expressive as those of benchmark learning algorithms in terms of the reproduction quality on entire images, and result in equivalent denoising performance. EZDL learns approximately 30% faster than the already very efficient Online Dictionary Learning algorithm.
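For concreteness, the sketch below shows a smooth, normalized sparseness measure of the kind in question (Hoyer's measure, which maps any nonzero vector to [0, 1]) and the soft-shrinkage function on which the projection's characterization is based. The paper's actual contribution is computing the projection, i.e. finding the right shrinkage level, in linear time; the function names here are illustrative.

```python
# Sketch: Hoyer's normalized sparseness measure and the soft-shrinkage
# function underlying the projection characterization.
import numpy as np

def hoyer_sparseness(x):
    """1 for a 1-sparse vector, 0 for a constant vector; assumes x != 0."""
    n = x.size
    return (np.sqrt(n) - np.abs(x).sum() / np.linalg.norm(x)) / (np.sqrt(n) - 1)

def soft_shrinkage(x, lam):
    """Shrink magnitudes by lam, zeroing entries below the threshold."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)
```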
An Overview on Dictionary and Sparse Representation in Image Denoising
The goal of natural image denoising is to estimate a clean version of a given noisy image, utilizing prior knowledge of the statistics of natural images. Noise removal from natural images is a challenging task, and image denoising is a relevant problem in image processing and computer vision. Several methods exist for image denoising. A desirable property of an image denoising method is that it should remove the noise completely while preserving edges. This paper presents a review of the parameters and algorithms available for image denoising. Index terms: image noise, sparse, over-complete dictionary, redundancy parameters.
IJRTE, 2019
Digital image processing uses efficient computer algorithms for image denoising and to improve image quality. Noisy images are produced for various reasons during image acquisition, compression, preprocessing, segmentation, etc. Over the last decade, various methods have shown promising results in removing zero-mean Gaussian noise from images. Apart from the different strategies implemented for noise reduction, this paper proposes a method for reducing noise and improving metric parameters. Rather than using a pre-chosen set of basis functions to represent the image, this paper discusses performing image denoising using dictionary learning and sparse representation. Instead of removing the coefficients of noise, shrinking the sparse coefficients is implemented to eliminate the noise while retaining image quality.
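A minimal sketch of the coefficient-shrinkage idea: code the noisy patches over a learned dictionary, soft-shrink the codes rather than zeroing them, and reconstruct. The sparse coder is left as a placeholder (code_fn), and the threshold value is an illustrative assumption.

```python
# Sketch: denoising by shrinking (not removing) sparse coefficients.
import numpy as np

def denoise_patches(P_noisy, D, code_fn, lam=0.1):
    """P_noisy: (n_patches, dim); D: (dim, n_atoms); code_fn: any sparse coder
    returning codes of shape (n_atoms, n_patches), e.g. an ISTA routine."""
    X = code_fn(P_noisy.T, D)                           # sparse codes
    X = np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)   # shrink, don't remove
    return (D @ X).T                                    # reconstructed patches
```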
Structured dictionary learning for classification
Sparsity-driven signal processing has gained tremendous popularity in the last decade. At its core is the assumption that the signal of interest is sparse with respect to either a fixed transformation or a signal-dependent dictionary. To better capture the data characteristics, various dictionary learning methods have been proposed for both reconstruction and classification tasks. For classification in particular, most approaches proposed so far have focused on designing explicit constraints on the sparse code to improve classification accuracy, while simply adopting the l0-norm or l1-norm for sparsity regularization. Motivated by the success of structured sparsity in the area of compressed sensing, we propose a structured dictionary learning framework (StructDL) that incorporates structure information at both the group and task levels in the learning process. Its benefits are twofold: (i) the label consistency between dictionary atoms and training data is implicitly enforced; and (ii) the classification performance is more robust than that of other techniques when the dictionary size is small or training data are limited. Using the subspace model, we derive the conditions for StructDL to guarantee performance and show theoretically that StructDL is superior to l0-norm or l1-norm regularized dictionary learning for classification. Extensive experiments have been performed on both synthetic simulations and real-world applications, such as face recognition and object classification, to demonstrate the validity of the proposed DL framework.
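As an illustration of the structured-sparsity ingredient, the sketch below implements the group soft-thresholding (proximal) operator used in group-sparse regularization, under which whole groups of coefficients survive or vanish together, unlike plain l0-norm or l1-norm penalties. It is a generic building block, not StructDL itself.

```python
# Sketch: proximal operator of the group-lasso penalty; whole groups of
# coefficients are shrunk or zeroed together.
import numpy as np

def group_soft_threshold(x, groups, lam):
    """x: coefficient vector; groups: list of index arrays partitioning x."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            out[g] = (1.0 - lam / norm) * x[g]   # shrink the whole group
    return out                                   # groups below lam vanish
```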