Bartomeu Coll | Universitat de les Illes Balears
Papers by Bartomeu Coll
Springer eBooks, 1996
Based on the phenomenological description of Gaetano Kanizsa, we discuss the physical generation process of images as a combination of basic operations: occlusions, transparencies and contrast changes. These operations generate the essential singularities, which we call junctions. We deduce a mathematical and computational model to detect the "atoms" of the image: level lines joining T- or X-junctions. Then we propose an adequate modification of morphological filtering algorithms so that they smooth the "atoms" without altering the junctions. Finally, we give some experiments on real and synthetic images.
Common satellite imagery products consist of a panchromatic image at high spatial resolution and several misregistered spectral bands at lower resolution. Pansharpening is the fusion process by which a high-resolution multispectral image is inferred. We propose a variational model in which pansharpening is defined as an optimization problem minimizing a cost function with nonlocal regularization. We incorporate a new term preserving the radiometric ratio between the panchromatic image and each spectral band. The resulting model is channel-decoupled, thus permitting its application to misregistered spectral data. The experimental results illustrate the superiority of the proposed method in preserving spatial details, reducing color artifacts, and avoiding aliasing.
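As a rough illustration of the kind of functional involved (the notation here is ours, not necessarily the paper's exact formulation), the energy couples a nonlocal regularizer, a fidelity term to the observed low-resolution band, and a term tying each band to the panchromatic image through a radiometric ratio:

```latex
% Illustrative channel-decoupled pansharpening energy for one band u_n.
% All symbols are hypothetical: P is the panchromatic image, f_n the observed
% low-resolution band, DB a blur-and-downsample operator, w(x,y) nonlocal
% self-similarity weights, and r_n a low-resolution estimate of f_n / P.
E(u_n) = \sum_{x,y} w(x,y)\,\lvert u_n(x) - u_n(y)\rvert
       + \lambda\,\lVert DB\,u_n - f_n \rVert_2^2
       + \mu\,\lVert u_n - r_n P \rVert_2^2
```

Because each band appears only in its own energy, the channels decouple and misregistration between bands does not propagate across channels.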
arXiv (Cornell University), Jan 18, 2017
We reconsider the classic problem of accurately estimating a 2D transformation from point matches between images containing outliers. RANSAC discriminates outliers by randomly generating hypotheses from minimal samples and verifying their consensus over the input data. Its response is based on the single hypothesis that obtained the largest inlier support. In this article we show that the resulting accuracy can be improved by aggregating all generated hypotheses. This yields RANSAAC, a framework that improves systematically over RANSAC and its state-of-the-art variants by statistically aggregating hypotheses. To this end, we introduce a simple strategy that allows 2D transformations to be averaged rapidly, at an almost negligible extra computational cost. We give practical applications on projective transforms and homography+distortion models and demonstrate a significant performance gain in both cases.
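A sketch of one way to average projective transforms rapidly, in the spirit described above (an illustration under our own assumptions, not necessarily the paper's exact procedure): map a few control points through every hypothesis, average the mapped points with weights such as each hypothesis' inlier count, and refit a homography to the averaged correspondences.

```python
import numpy as np

def apply_h(H, pts):
    """Apply a 3x3 homography to an (N, 2) array of points."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def fit_homography(src, dst):
    """Direct linear transform (DLT) fit of a homography mapping src to dst."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def aggregate_homographies(hypotheses, weights, ctrl_pts):
    """Average homographies through the images of control points (weighted,
    e.g., by inlier support), then refit a single aggregate transform."""
    mapped = np.stack([apply_h(H, ctrl_pts) for H in hypotheses])  # (K, N, 2)
    w = np.asarray(weights, dtype=float)
    mean_pts = np.tensordot(w / w.sum(), mapped, axes=1)           # (N, 2)
    return fit_homography(ctrl_pts, mean_pts)
```

With four or more control points spread over the image domain, the refit is well posed and the cost of aggregation is negligible next to hypothesis generation.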
Springer eBooks, 1997
Models for image smoothing have not fully incorporated the restrictions imposed by the physics of image generation. In particular, any image smoothing process should respect the essential singularities of images (which we call junctions) and should be invariant with respect to contrast changes. We conclude that local image smoothing is possible provided singular points of the image have been previously detected and preserved. We define the associated degenerate partial differential equation and sketch the proof of its mathematical validity. Our analysis uses the phenomenological theory of image perception of Gaetano Kanizsa, whose mathematical translation yields an algorithm for the detection of the discontinuity points of digital images.
Inverse Problems and Imaging, 2015
Image restoration is the problem of recovering an original image from an observation of it in order to extract the most meaningful information. In this paper, we study this problem from a variational point of view through the minimization of energies composed of a quadratic data-fidelity term and a nonsmooth nonconvex regularization term. In the discrete setting, the existence of a minimizer is proved for arbitrary linear operators. For this kind of problem, fully segmented solutions can be found by minimizing objective nonconvex functionals. We propose a dual formulation of the model by introducing an auxiliary variable that plays a double role. On the one hand, it marks the edges and ensures their preservation from smoothing. On the other hand, it makes the criterion half-linear in the sense that the dual energy depends linearly on the gradient of the image to be recovered. This leads to the design of an efficient optimization algorithm with wide applicability to several image restoration tasks such as denoising and deconvolution. Finally, we present experimental results and compare them with TV-based image restoration algorithms.
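Schematically, such a dual energy can be pictured as follows (a sketch with our own notation; the paper's exact functional may differ). The auxiliary variable b marks edges, and the criterion is half-linear because it is linear in the gradient magnitude:

```latex
% Illustrative dual energy; symbols hypothetical. A is a linear operator,
% f the observation, b(x) an edge variable (small b = preserved edge),
% and psi a penalty preventing b from vanishing everywhere.
E(u, b) = \lVert A u - f \rVert_2^2
        + \lambda \sum_x \bigl( b(x)\,\lvert \nabla u(x) \rvert + \psi(b(x)) \bigr)
```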
We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the non-local means (NL-means), based on a non-local averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.
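A minimal NumPy sketch of the NL-means idea, with patch similarity driving the averaging weights (window sizes and the bandwidth h are illustrative, and this plain reference version is slow):

```python
import numpy as np

def nl_means(u, patch=3, search=10, h=0.1):
    """Each pixel becomes a weighted average of pixels whose surrounding
    patches look similar, wherever they sit in the search window."""
    r = patch // 2
    up = np.pad(u, r, mode="reflect")
    out = np.zeros_like(u, dtype=float)
    H, W = u.shape
    for i in range(H):
        for j in range(W):
            p = up[i:i + patch, j:j + patch]              # reference patch
            acc = wsum = 0.0
            for k in range(max(0, i - search), min(H, i + search + 1)):
                for l in range(max(0, j - search), min(W, j + search + 1)):
                    q = up[k:k + patch, l:l + patch]      # candidate patch
                    d2 = np.mean((p - q) ** 2)            # patch distance
                    w = np.exp(-d2 / (h * h))             # similarity weight
                    acc += w * u[k, l]
                    wsum += w
            out[i, j] = acc / wsum
    return out
```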
Recherche, 2006
Abstract. Acoustic noise is increasingly regarded as an intolerable nuisance, but what about visual noise? All digital images and films contain noise, which hinders vision and imposes technological constraints of cost, size and energy on cameras. Recent progress in understanding the structure of images makes it possible to remove the noise from images and films without damaging them.
Séminaire Équations aux dérivées partielles (Polytechnique), 1996
Partial differential equations and image smoothing. Séminaire Équations aux dérivées partielles (Polytechnique) (1995-1996), exp. no. 21, p. 1-30. <http://www.numdam.org/item?id=SEDP_1995-1996____A21_0>
Communications of The ACM, May 1, 2011
The search for efficient image denoising methods is still a valid challenge at the crossroads of functional analysis and statistics. In spite of the sophistication of the recently proposed methods, most algorithms have not yet attained a desirable level of applicability. All show an outstanding performance when the image model corresponds to the algorithm assumptions but fail in general and create artifacts or remove fine image structures. The main focus of this paper is, first, to define a general mathematical and experimental methodology to compare and classify classical image denoising algorithms and, second, to describe the nonlocal means (NL-means) algorithm introduced in 2005 and its more recent extensions. The mathematical analysis is based on the "method noise," defined as the difference between a digital image and its denoised version. NL-means, which uses image self-similarities, is proven to be asymptotically optimal under a generic statistical image model. The denoising performance of all considered methods is compared in four ways: mathematical, the asymptotic order of magnitude of the method noise under regularity assumptions; perceptual-mathematical, the algorithms' artifacts and their explanation as a violation of the image model; perceptual-mathematical, the analysis of algorithms when applied to noise samples; and quantitative experimental, by tables of L² distances of the denoised version to the original image.
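The method noise itself is a one-line computation; a sketch (denoise stands for any denoising function under test, a hypothetical argument here):

```python
import numpy as np

def method_noise(image, denoise):
    """Difference between an image and its denoised version. For a good
    method applied to a nearly noise-free image, this difference should
    look like structureless noise rather than contain edges or texture."""
    return image - denoise(image)
```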
SIAM Journal on Numerical Analysis, Dec 1, 1996
In this paper, we propose a geometric partial differential equation (PDE) for tracking one or several moving objects from a sequence of images, which is based on a geometric model for active contours. The active contour approach permits us to simultaneously handle both aspects: finding the boundaries and tracking them. We also describe a numerical scheme to solve the geometric equation and we present some numerical experiments.
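For reference, geometric active contour models of this family evolve a level-set function u with a curvature-driven speed modulated by an edge detector; schematically (notation illustrative):

```latex
% g: decreasing edge-stopping function of the image gradient |grad I|,
% nu: constant (balloon) force; the contour is a level set of u.
\frac{\partial u}{\partial t}
  = g(\lvert \nabla I \rvert)\,\lvert \nabla u \rvert
    \left( \operatorname{div}\!\Bigl( \frac{\nabla u}{\lvert \nabla u \rvert} \Bigr) + \nu \right)
```

The evolution slows near image edges, where g is small, so the contour locks onto object boundaries while the level-set formulation handles topology changes, which is what makes simultaneous boundary finding and tracking possible.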
Numerische Mathematik, Sep 12, 2006
Denoising images can be achieved by a spatial averaging of nearby pixels. However, although this method removes noise, it creates blur. Hence, neighborhood filters are usually preferred. These filters perform an average of neighboring pixels, but only under the condition that their grey level is close enough to that of the pixel being restored. This very popular method unfortunately creates shocks and staircasing effects. In this paper, we perform an asymptotic analysis of neighborhood filters as the size of the neighborhood shrinks to zero. We prove that these filters are asymptotically equivalent to the Perona-Malik equation, one of the first nonlinear PDEs proposed for image restoration. As a solution, we propose an extremely simple variant of the neighborhood filter using a linear regression instead of an average. By analyzing its underlying PDE, we prove that this variant does not create shocks: it is actually related to the mean curvature motion. We extend the study to more general local polynomial estimates of the image in a grey-level neighborhood and introduce two new fourth-order evolution equations.
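A 1D sketch contrasting the plain neighborhood filter with the regression variant discussed above (window radius and bandwidth are illustrative):

```python
import numpy as np

def neighborhood_filter(u, rho=5, h=0.1):
    """Average neighbors whose grey level is within roughly h of the center."""
    out = np.empty_like(u, dtype=float)
    for i in range(len(u)):
        lo, hi = max(0, i - rho), min(len(u), i + rho + 1)
        w = np.exp(-((u[lo:hi] - u[i]) ** 2) / h**2)
        out[i] = np.sum(w * u[lo:hi]) / np.sum(w)
    return out

def regression_neighborhood_filter(u, rho=5, h=0.1):
    """Same weights, but fit a weighted regression line over the window and
    evaluate it at the center, which avoids the shock/staircasing behavior."""
    out = np.empty_like(u, dtype=float)
    for i in range(len(u)):
        lo, hi = max(0, i - rho), min(len(u), i + rho + 1)
        x = np.arange(lo, hi, dtype=float)
        w = np.exp(-((u[lo:hi] - u[i]) ** 2) / h**2)
        slope, intercept = np.polyfit(x, u[lo:hi], 1, w=np.sqrt(w))
        out[i] = slope * i + intercept  # value of the fitted line at the center
    return out
```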
IEEE transactions on image processing, Jun 1, 2006
Many classical image denoising methods are based on a local averaging of the color, which increases the signal-to-noise ratio. One of the most used algorithms is the neighborhood filter by Yaroslavsky or sigma filter by Lee, also known in variants as "SUSAN" by Smith and Brady or the "bilateral filter" by Tomasi and Manduchi. These filters replace the actual value of the color at a point by an average of all values of points which are simultaneously close in space and in color. Unfortunately, these filters show a "staircase effect", that is, the creation in the image of flat regions separated by artifact boundaries. In this paper, we first explain the staircase effect by finding the underlying PDE of the filter. We show that this ill-posed PDE is a variant of another famous image processing model, the Perona-Malik equation, which suffers from the same artifacts. As we prove, a simple variant of the neighborhood filter solves the problem. We find the underlying stable PDE of this variant. Finally, we apply the same correction to the recently introduced NL-means algorithm, which had the same staircase effect, for the same reason.
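For concreteness, the space-and-color weighting these filters share, in a plain bilateral-type sketch (parameters illustrative):

```python
import numpy as np

def bilateral(u, rho=5, sigma_s=3.0, h=0.1):
    """Weight each neighbor by spatial closeness AND grey-level closeness."""
    H, W = u.shape
    out = np.empty_like(u, dtype=float)
    yy, xx = np.mgrid[-rho:rho + 1, -rho:rho + 1]
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))  # spatial kernel
    up = np.pad(u, rho, mode="reflect")
    for i in range(H):
        for j in range(W):
            win = up[i:i + 2 * rho + 1, j:j + 2 * rho + 1]
            w = spatial * np.exp(-((win - u[i, j]) ** 2) / h**2)
            out[i, j] = np.sum(w * win) / np.sum(w)
    return out
```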
International Journal of Computer Vision, Jul 4, 2007
Neighborhood filters are nonlocal image and movie filters which reduce the noise by averaging similar pixels. The first goal of the paper is to present a unified theory of these filters and reliable criteria to compare them to other filter classes. A CCD noise model will be presented, justifying the involvement of neighborhood filters. A classification of neighborhood filters will be proposed, including classical image and movie denoising methods, and further discussing a recently introduced neighborhood filter, NL-means. In order to compare denoising methods, three principles will be discussed. The first principle, "method noise", specifies that only noise must be removed from an image. A second principle will be introduced, "noise to noise", according to which a denoising method must transform a white noise into a white noise. Contrary to "method noise", this principle, which characterizes artifact-free methods, eliminates any subjectivity and can be checked by mathematical arguments and Fourier analysis. "Noise to noise" will be proven to rule out most denoising methods, with the exception of neighborhood filters. This is why a third and new comparison principle, "statistical optimality", is needed and will be introduced to compare the performance of all neighborhood filters. The three principles will be applied to compare ten different image and movie denoising methods. It will first be shown that only wavelet thresholding methods and NL-means give an acceptable method noise. Second, that neighborhood filters are the only ones to satisfy the "noise to noise" principle. Third, that among them NL-means is closest to statistical optimality. Particular attention will be paid to the application of the statistical optimality criterion to movie denoising methods. It will be pointed out that current movie denoising methods are motion-compensated neighborhood filters. This amounts to saying that they are neighborhood filters and that the ideal neighborhood of a pixel is its trajectory. Unfortunately, the aperture problem makes it impossible to estimate ground-truth trajectories. It will be demonstrated that computing trajectories and restricting the neighborhood to them is harmful for denoising purposes and that space-time NL-means preserves more movie details.
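The "noise to noise" principle lends itself to a direct numerical probe: feed a filter pure white noise and check that the output spectrum stays flat. A rough sketch (denoise is any filter under test; the frequency thresholds are illustrative):

```python
import numpy as np

def whiteness_ratio(denoise, shape=(256, 256), seed=0):
    """Apply a filter to white noise; a 'noise to noise' method returns
    white noise, i.e., a flat power spectrum. Compare average power at
    low vs high frequencies: a ratio near 1 indicates whiteness, while
    a ratio far above 1 reveals low-frequency coloring (artifacts)."""
    rng = np.random.default_rng(seed)
    out = denoise(rng.standard_normal(shape))
    spec = np.abs(np.fft.fftshift(np.fft.fft2(out - out.mean()))) ** 2
    H, W = shape
    yy, xx = np.mgrid[:H, :W]
    r = np.hypot(yy - H / 2, xx - W / 2)   # distance from the DC component
    low = spec[r < min(H, W) / 8].mean()
    high = spec[r > min(H, W) / 4].mean()
    return low / high
```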
HAL (Le Centre pour la Communication Scientifique Directe), 2006
We discuss the physical generation process of images as a combination of basic operations: occlusions, transparencies and contrast changes. These operations generate the essential singularities, which we call junctions. We deduce a mathematical and computational model for image analysis according to which the "atoms" of the image must be "pieces of level lines joining junctions", fitting the phenomenological description of Gaetano Kanizsa (1990). A parameter-free junction detection algorithm is proposed for the computation of the previously defined "atoms". Then we propose an adequate modification of the morphological filtering algorithms so that they smooth the "atoms" without altering the junctions. Finally, we give some experiments on real and synthetic images.
Image Processing On Line, Feb 28, 2014
This paper focuses on the implementation of the pansharpened image fusion technique proposed in the companion paper [A Nonlocal Variational Model for Pansharpening Image Fusion, SIAM Journal on Imaging Sciences, 2014, to appear]. Pansharpening refers to the process of inferring a high-resolution multispectral image from a high-resolution panchromatic image and a low-resolution multispectral one. Although quite successful in terms of relative error, state-of-the-art pansharpening methods still introduce relevant color artifacts. The variational pansharpening model proposed by Duran et al. incorporates a nonlocal regularization term that takes advantage of image self-similarity, leading to a significant reduction of the above-mentioned color artifacts. ANSI C source code to produce the same results as the demo is available at the IPOL web page of this article.
Lecture Notes in Computer Science, 1997
We call a "natural" image any photograph of an outdoor or indoor scene taken by a standard camera. In such images, most observed objects undergo occlusions, and the illumination conditions and contrast response of the camera are unknown. Current scale space theories do not incorporate obvious restrictions imposed by the physics of image generation. The heat equation (linear scale space) is not contrast invariant and destroys T-junctions. The same is true for the recently proposed curvature equations (mean curvature motion and affine shortening): they break the symmetry of junctions. Applying these models directly to natural-world images, with occlusions, is therefore irrelevant. Returning to the edge detection problem, in which scale space theory originates, we show how level lines can be found in an image without smoothing. As an alternative to edge detection/scale space, we propose to define the line structure in a natural image by its topographic map (the set of all level lines). We also show that a modification of morphological scale space can help visualize the topographic map.
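Computing the topographic map described above can be sketched with a standard contouring routine; here using scikit-image's find_contours with an illustrative grey-level step:

```python
import numpy as np
from skimage import measure

def topographic_map(u, step=8):
    """Collect the level lines of u at a ladder of grey levels. The set of
    all level lines is contrast invariant: an increasing contrast change
    relabels the levels but traces out the same curves."""
    lines = {}
    for level in np.arange(u.min() + step, u.max(), step):
        # find_contours returns each level line as an (N, 2) array of points
        lines[float(level)] = measure.find_contours(u, level)
    return lines
```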
We address the problem of extending topographic maps to color images. A topographic map gives a morphological and geometrical representation of the information contained in natural images. Two approaches are presented and discussed. The first one is new and consists in defining a total order in ℝ³ in accordance with the human visual perception of shapes. This allows color topographic maps to be defined in the same way as has been done for gray-level topographic maps. It has the advantage that all properties known in the gray-level case remain true in the color case. But the map contains such a huge quantity of data that it has to be drastically simplified. The second approach, based on a so far unpublished result [4], allows a simplified representation to be built by using the geometry given by the luminance component only. We present experiments which illustrate the advantages and drawbacks of each method.