$\ell^0$-norm based algorithms have numerous potential applications in which a sparse signal is recovered from a small number of measurements. The direct $\ell^0$-norm optimization problem is NP-hard. In this paper we work with the smoothed $\ell^0$ (SL0) approximation algorithm for sparse representation. We give an upper bound on the run-time estimation error that is tighter than the previously known bound. Subsequently, we develop a reliable stopping criterion, which helps avoid the problems caused by the underlying discontinuities of the $\ell^0$ cost function. Furthermore, we propose an alternative optimization strategy, which results in a Newton-like algorithm.
An Improved Smoothed $\ell^0$ Approximation Algorithm for Sparse Representation
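For context, the baseline SL0 algorithm referenced above can be sketched as follows. This is a minimal illustration of the standard SL0 iteration (projected gradient steps on a Gaussian approximation of the $\ell^0$ norm, with a decreasing smoothing parameter $\sigma$), not the improved variant proposed in the paper; all parameter values (`sigma_min`, `sigma_decrease`, `mu`, `inner_iters`) are illustrative choices, not taken from the source.

```python
import numpy as np

def sl0(A, x, sigma_min=1e-4, sigma_decrease=0.5, mu=1.0, inner_iters=3):
    """Sketch of baseline SL0: approximate ||s||_0 by
    n - sum_i exp(-s_i^2 / (2 sigma^2)) and minimize it over the
    feasible set {s : A s = x} by gradient steps followed by
    projection, while gradually shrinking sigma."""
    A_pinv = np.linalg.pinv(A)
    s = A_pinv @ x                          # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(s))         # common heuristic initialization
    while sigma > sigma_min:
        for _ in range(inner_iters):
            # Gradient of the smoothed cost: shrinks entries small
            # relative to sigma, leaves large entries nearly untouched.
            delta = s * np.exp(-s**2 / (2.0 * sigma**2))
            s = s - mu * delta
            s = s - A_pinv @ (A @ s - x)    # project back onto A s = x
        sigma *= sigma_decrease
    return s

# Example: recover a 2-sparse vector from 10 random measurements.
rng = np.random.default_rng(0)
n, m = 20, 10
A = rng.standard_normal((m, n))
s0 = np.zeros(n)
s0[[3, 11]] = [1.0, -2.0]
x = A @ s0
s_hat = sl0(A, x)
```

Note that the final projection step keeps every iterate exactly feasible, which is why a naive stopping rule based on the residual $\|As - x\|$ is uninformative here; the paper's stopping criterion instead addresses the behavior of the smoothed cost itself as $\sigma \to 0$.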