Serena Papi - Academia.edu
Papers by Serena Papi
Journal of Computational and Applied Mathematics, 2006
In this work we describe a method for removing Gaussian noise from digital images, based on the combination of the wavelet packet transform and the principal component analysis. In particular, since the aim of denoising is to retain the energy of the signal while discarding the energy of the noise, our basic idea is to construct powerful tailored filters by applying the Karhunen–Loève transform in the wavelet packet domain, thus obtaining a compaction of the signal energy into a few principal components, while the noise is spread over all the transformed coefficients. This allows us to act with a suitable shrinkage function on these new coefficients, removing the noise without blurring the edges and the important characteristics of the images. The results of an extensive numerical experimentation encourage us to continue our studies in this direction.
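The idea described above can be sketched in a few lines: estimate a Karhunen–Loève basis from the sample covariance of the coefficients, soft-shrink the principal components, and transform back. This is an illustrative sketch only, not the paper's algorithm: the wavelet packet transform is omitted (a generic coefficient array stands in for its output), and the function name, data layout, and universal-threshold choice are assumptions.

```python
import numpy as np

def klt_shrink(coeffs, sigma):
    """Sketch of KLT-based shrinkage: project coefficient rows onto the
    principal components of their sample covariance, soft-threshold, and
    project back. `coeffs` is an (n_blocks, block_len) array standing in
    for wavelet-packet coefficients; `sigma` is the noise level."""
    mean = coeffs.mean(axis=0)
    centered = coeffs - mean
    cov = centered.T @ centered / len(centered)   # sample covariance
    _, vecs = np.linalg.eigh(cov)                 # Karhunen-Loeve basis
    basis = vecs[:, ::-1]                         # descending eigenvalue order
    proj = centered @ basis                       # signal energy compacts here
    thr = sigma * np.sqrt(2 * np.log(coeffs.size))          # universal threshold
    proj = np.sign(proj) * np.maximum(np.abs(proj) - thr, 0.0)  # soft shrink
    return proj @ basis.T + mean

rng = np.random.default_rng(0)
clean = np.outer(np.ones(64), np.sin(np.linspace(0, 3, 16)))
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = klt_shrink(noisy, sigma=0.1)
```

With the noise spread thinly over all principal components, the threshold removes it while the compacted signal energy survives the shrinkage.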
IEEE Transactions on Signal Processing, 2009
The problem of recovering sparse signals and sparse gradient signals from a small collection of linear measurements is one that arises naturally in many scientific fields. The recently developed compressed sensing framework states that such problems can be solved by searching for the signal of minimum L1-norm, or minimum total variation, that satisfies the given acquisition constraints. While L1 optimization algorithms based on linear programming techniques are highly effective at generating excellent signal reconstructions, their complexity is still too high and renders them impractical for many real applications. In this paper, we propose a novel approach to solve the L1 optimization problems, based on the use of suitable nonlinear filters widely applied for signal and image denoising. The corresponding algorithm has two main advantages: low computational cost and reconstruction capabilities similar to those of linear programming optimization methods. We illustrate the effectiveness of the proposed approach with many numerical examples and comparisons.
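A generic iterated-shrinkage loop in the spirit of filter-based L1 recovery can be sketched as below. This is the textbook ISTA scheme, shown only to illustrate the shrinkage-based alternative to linear programming; it is not the paper's actual algorithm, and the names, step size, and test signal are illustrative.

```python
import numpy as np

def ista(A, y, lam, n_iter):
    """Iterative soft-thresholding for min ||Ax - y||^2 / 2 + lam * ||x||_1:
    a gradient (forward) step followed by a shrinkage (backward) step."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # shrink step
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)  # 40 random measurements
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]            # 3-sparse signal
y = A @ x_true
x_hat = ista(A, y, lam=0.05, n_iter=1000)
```

Each iteration costs only two matrix-vector products and a componentwise shrinkage, which is the source of the low computational cost compared with linear programming solvers.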
IEEE Transactions on Medical Imaging, 2011
The problem of high-resolution image volume reconstruction from reduced frequency acquisition sequences has drawn significant attention from the scientific community because of its practical importance in medical diagnosis. To address this issue, several reconstruction strategies have been recently proposed, which aim to recover the missing information either by exploiting the spatio-temporal correlations of the image series, or by imposing suitable constraints on the reconstructed image volume. The main contribution of this paper is to combine both these strategies in a compressed sensing framework by exploiting the gradient sparsity of the image volume. The resulting constrained 3D minimization problem is then solved using a penalized forward-backward splitting approach that leads to a convergent iterative two-step procedure. In the first step, the updating rule accords with the sequential nature of the data acquisitions; in the second step, a truly 3D filtering strategy exploits the spatio-temporal correlations of the image sequences. The resulting NFCS-3D algorithm is very general and suitable for several kinds of medical image reconstruction problems. Moreover, it is fast, stable, and yields very good reconstructions, even in the case of highly undersampled image sequences. The results of several numerical experiments highlight the performance of the proposed algorithm and confirm that it is competitive with state-of-the-art algorithms.
IEEE Transactions on Image Processing, 2011
Compressed sensing is a new paradigm for signal recovery and sampling. It states that a relatively small number of linear measurements of a sparse signal can contain most of its salient information and that the signal can be exactly reconstructed from these highly incomplete observations. The major challenge in practical applications of compressed sensing consists in providing efficient, stable and fast recovery algorithms which, in a few seconds, evaluate a good approximation of a compressible image from highly incomplete and noisy samples. In this paper, we propose to approach the compressed sensing image recovery problem using adaptive nonlinear filtering strategies in an iterative framework, and we prove the convergence of the resulting two-step iterative scheme. The results of several numerical experiments confirm that the corresponding algorithm possesses the required properties of efficiency, stability and low computational cost and that its performance is competitive with those of the state-of-the-art algorithms.
BIT Numerical Mathematics, 2003
Thresholding estimators in an orthonormal wavelet basis are well established tools for Gaussian noise removal. However, the universal threshold choice, suggested by Donoho and Johnstone, sometimes leads to over-smoothed approximations. For the denoising problem this paper uses the deterministic approach proposed by Chambolle et al., which handles it as a variational problem, whose solution can be formulated in terms of wavelet shrinkage. This allows us to use wavelet shrinkage successfully for more general denoising problems and to propose a new criterion for the choice of the shrinkage parameter, which we call the H-curve criterion. It is based on the plot, for different parameter values, of the B_1^1(L_1)-norm of the computed solution versus the L_2-norm of the residual, both considered on a logarithmic scale. Extensive numerical experimentation shows that this new choice of shrinkage parameter yields good results both for Gaussian and other kinds of noise.
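The H-curve construction, i.e. plotting a solution norm against a residual norm in log scale over a range of shrinkage parameters, can be sketched with plain soft thresholding. This is a simplified illustration, not the paper's procedure: the l1-norm of the shrunk coefficients stands in for the B_1^1(L_1)-norm, the corner-detection step is omitted, and all names and test data are invented for the example.

```python
import numpy as np

def h_curve(coeffs, thresholds):
    """For each threshold, soft-shrink the coefficients and record
    (log residual L2-norm, log solution l1-norm); the shrinkage
    parameter is then chosen near the corner of this curve."""
    pts = []
    for t in thresholds:
        shrunk = np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
        residual = np.linalg.norm(coeffs - shrunk)        # what shrinkage removed
        solution = np.abs(shrunk).sum()                   # l1 stand-in for B_1^1(L_1)
        pts.append((np.log(max(residual, 1e-12)), np.log(max(solution, 1e-12))))
    return pts

rng = np.random.default_rng(2)
c = np.concatenate([0.1 * rng.standard_normal(500),   # small, noise-like coefficients
                    np.array([5.0, -4.0, 6.0])])      # a few large signal ones
curve = h_curve(c, thresholds=[0.05, 0.1, 0.2, 0.5, 1.0])
```

As the threshold grows, the residual norm can only increase and the solution norm can only decrease, which is what gives the curve its characteristic L-shape with a corner near the noise level.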
Expert Systems with Applications
2016 International Conference of the Biometrics Special Interest Group (BIOSIG), 2016
In this paper we propose some techniques to generate synthetic altered fingerprints and prove the utility of the generated datasets for developing, tuning and evaluating algorithms for altered fingerprint detection/matching. Due to the lack of public databases of altered fingerprints, the proposed generation tool (made freely available) can be a valid instrument to boost research on these challenging problems.
Lecture Notes in Computer Science, 2015
In this paper we show that saliency-based keypoint selection makes natural landmark detection and object recognition quite effective and efficient, thus enabling augmented reality techniques in a plethora of applications in smart city contexts. As a case study we address a museum tour where a modern smart device such as a tablet or smartphone can be used to recognize paintings, retrieve their pose, and graphically overlay useful information.
Pattern Recognition Letters, 2015
In this paper we present a new approach to rank and select keypoints based on their saliency for object detection and matching under moderate viewpoint and lighting changes. Saliency is defined in terms of detectability, repeatability and distinctiveness by considering both the keypoint strength (as returned by the detector algorithm) and the discriminating power of the associated local descriptor. Our experiments prove that selecting a small fraction of the available keypoints (e.g., 10%) not only boosts efficiency but can also lead to better detection/matching accuracy, thus making the proposed method attractive for real-time applications (e.g., augmented reality).
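The ranking-and-selection step can be sketched as follows. The product combination of detector strength and descriptor distinctiveness, along with all names and numbers, is an assumption for illustration; the paper's actual saliency measure is defined differently.

```python
import numpy as np

def select_salient(strength, distinctiveness, keep_frac):
    """Sketch of saliency-based keypoint selection: combine detector
    strength with descriptor distinctiveness into one score per keypoint
    and keep only the top fraction, ranked by that score."""
    score = strength * distinctiveness          # illustrative combination rule
    k = max(1, int(len(score) * keep_frac))     # how many keypoints survive
    return np.argsort(score)[::-1][:k]          # indices of the top-k keypoints

strength = np.array([0.9, 0.2, 0.8, 0.1, 0.7])  # detector responses
distinct = np.array([0.5, 0.9, 0.8, 0.2, 0.1])  # descriptor discriminating power
idx = select_salient(strength, distinct, keep_frac=0.4)
```

Matching then runs only on the selected subset, which is where the efficiency gain comes from: descriptor comparison cost drops roughly in proportion to `keep_frac`.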
Signal Processing, 2015
We consider the problem of recovering a sparse signal when its nonzero coefficients tend to cluster into blocks, whose number, dimension and position are unknown. We refer to this problem as blind cluster structured sparse recovery. For its solution, differently from the existing methods, which consider the problem in a statistical context, we propose a deterministic neighborhood-based approach characterized by the use of both a nonconvex, nonseparable sparsity-inducing function and a penalized version of the iterative reweighted ℓ1 method. Despite the high nonconvexity of the approach, a suitable integration of these building elements led to the development of MB-NFCS (Model-Based Nonlinear Filtering for Compressed Sensing), a fast, self-adaptive, and efficient iterative algorithm that, without requiring any information on the sparsity pattern, adjusts at each iteration the action of the sparsity-inducing function in order to strongly encourage the emerging cluster structure. The effectiveness of the proposed approach is demonstrated by a large set of numerical experiments that show the superior performance of MB-NFCS over state-of-the-art algorithms.
Numerical Algorithms, 2003
Vector thresholding is a recently proposed technique for the denoising of one-dimensional signals by means of multiwavelet shrinkage. It is well suited both to dealing with the multiwavelet vector coefficients and to taking into account the correlations which can be introduced among the starting vector coefficients by the use of a suitable prefilter. Motivated by the successful results of the multiwavelet transform when used in image processing, the aim of this paper is to extend vector thresholding to the two-dimensional case by introducing the notion of matrix thresholding. This new method allows us to easily exploit the "matrix" nature of the two-dimensional multiwavelet transform, and represents the natural extension of vector thresholding to the 2-D case. Moreover, as the choice of the threshold level is very important in the practical application of thresholding methods, we propose a first attempt to extend the recently introduced H-curve method to a multiple wavelet setting. The results of extensive numerical simulations confirm the effectiveness of our proposals and encourage further studies in this direction.
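The contrast with componentwise shrinkage can be sketched with a simple block rule: each coefficient vector is kept or shrunk as a whole according to its Euclidean norm. This is a generic vector (block) thresholding sketch under an assumed (n, r) data layout, not the paper's exact rule or its matrix extension.

```python
import numpy as np

def vector_threshold(vectors, thr):
    """Sketch of vector thresholding: each r-channel coefficient vector
    is shrunk as a whole according to its Euclidean norm, instead of
    thresholding each component separately. `vectors` is (n, r)."""
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    scale = np.maximum(1.0 - thr / np.maximum(norms, 1e-12), 0.0)
    return vectors * scale                       # vectors with norm <= thr vanish

v = np.array([[3.0, 4.0],    # norm 5: survives, shrunk toward zero
              [0.1, 0.2],    # norm below thr: zeroed entirely
              [-1.0, 0.0]])  # norm equal to thr: zeroed
out = vector_threshold(v, thr=1.0)
```

Treating the vector as one unit is what lets the method respect the cross-channel correlations introduced by the prefilter, which componentwise thresholding would ignore.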
Signal Processing, 2013
This paper addresses the problem of sparse signal recovery from fewer measurements than required by the classical compressed sensing theory. This problem is formalized as a constrained minimization problem, where the objective function is nonconvex and singular at the origin. Several algorithms have recently been proposed which rely on iterative reweighting schemes that produce better estimates at each new minimization step. Two such methods are iterative reweighted ℓ2 and ℓ1 minimization, which have been shown to be effective and general, but very computationally demanding. The main contribution of this paper is the proposal of the algorithm WNFCS, where the reweighted schemes represent the core of a penalized approach to the solution of the constrained nonconvex minimization problem. The algorithm is fast, and succeeds in exactly recovering a sparse signal from a smaller number of measurements than ℓ1 minimization, and in a shorter time. WNFCS is very general, since it represents an algorithmic framework that can easily be adapted to different reweighting strategies and nonconvex objective functions. Several numerical experiments and comparisons with some of the most recent nonconvex minimization algorithms confirm the capabilities of the proposed algorithm.
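The reweighting core can be illustrated on the simplest separable denoising objective, where each weighted subproblem has a closed-form solution. This is a minimal sketch of the classic iterative reweighted ℓ1 idea (weights w_i = 1/(|x_i| + eps)), not WNFCS itself; the objective, constants, and names are all assumptions for the example.

```python
import numpy as np

def reweighted_shrink(y, lam, eps, n_iter):
    """Sketch of an iterative reweighted l1 scheme for the separable
    problem min_x 0.5 * ||x - y||^2 + lam * sum_i w_i * |x_i|: each outer
    iteration solves the weighted problem in closed form (weighted soft
    thresholding), then updates the weights so that large coefficients
    are penalized less at the next step."""
    x = y.copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(x) + eps)                            # reweighting rule
        x = np.sign(y) * np.maximum(np.abs(y) - lam * w, 0.0)  # weighted shrink
    return x

y = np.array([3.0, 0.3, -2.0, 0.1])
x = reweighted_shrink(y, lam=0.5, eps=0.1, n_iter=10)
```

After a few iterations the small entries acquire large weights and are driven exactly to zero, while the large entries keep small weights and survive almost unshrunk: this asymmetry is what lets reweighting outperform a single, uniform ℓ1 penalty.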
Journal of Computational and Applied Mathematics, 2004
When working with nonlinear filtering algorithms for image denoising problems, there are two crucial aspects, namely, the choice of the thresholding parameter and the use of a proper filter function. Both greatly influence the quality of the resulting denoised image. In this paper we propose two new filters, which are a piecewise quadratic and an exponential function of the wavelet coefficients, respectively, and we show how they can be successfully used instead of the classical Donoho and Johnstone soft thresholding filter. We exploit the increased regularity and flexibility of the new filters to improve the quality of the final results. Moreover, we prove that our filtered approximation is a near-minimizer of the functional which has to be minimized to solve the denoising problem. We also show that the quadratic filter, due to its shape, yields good results if we choose the thresholding parameter as the Donoho and Johnstone universal threshold, while the exponential one is more suitable if we use the recently proposed H-curve criterion. Encouraging results in extensive numerical experiments on several test images confirm the effectiveness of our proposal.
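For reference, the baseline that the two new filters replace is the classical soft thresholding rule: coefficients below the threshold are zeroed, the rest are shrunk toward zero by the threshold amount. The sketch below shows only this baseline; the paper's piecewise quadratic and exponential filter formulas are not reproduced here.

```python
import numpy as np

def soft_threshold(c, t):
    """Donoho and Johnstone's soft thresholding filter: kill coefficients
    with |c| <= t, shrink the survivors toward zero by t. Its kink at
    |c| = t is the lack of regularity that smoother filters address."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

c = np.array([-3.0, -0.5, 0.0, 0.8, 2.5])   # example wavelet coefficients
out = soft_threshold(c, t=1.0)
```

A smoother filter keeps the same two regimes (suppress small coefficients, preserve large ones) but joins them with a differentiable transition instead of the piecewise-linear kink.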
International Journal of Wavelets, Multiresolution and Information Processing, 2006
In recent years, many papers have been devoted to the topic of balanced multiwavelets, namely, multiwavelet bases which are especially designed to avoid the prefiltering step in the implementation of the multiwavelet transform. In this work, we give a simple algebraic proof of how scalar wavelets can be reinterpreted as the most natural balanced multiwavelets, which maintain the good properties of the wavelet bases they come from. We then show how these new bases can be successfully used to apply matrix thresholding for the denoising of images corrupted by Gaussian noise. In fact, this new approach discovers a balanced matrix nature in Daubechies bases, hence obtaining better numerical results than those achieved via scalar thresholding. In particular, this reinterpretation of scalar wavelets as balanced multiwavelets allows us to successfully use the thresholding filters, previously introduced in the scalar case, in a matrix setting.
Journal of Computational and Applied Mathematics, 2007
In this work we describe a method for removing Gaussian noise from digital images, based on the combination of the wavelet packet transform and the principal component analysis. In particular, since the aim of denoising is to retain the energy of the signal while discarding the energy of the noise, our basic idea is to construct powerful tailored filters by applying the Karhunen–Loève transform in the wavelet packet domain, thus obtaining a compaction of the signal energy into a few principal components, while the noise is spread over all the transformed coefficients. This allows us to act with a suitable shrinkage function on these new coefficients, removing the noise without blurring the edges and the important characteristics of the images. The results of an extensive numerical experimentation encourage us to continue our studies in this direction.