Ingrid Daubechies | Duke University
Papers by Ingrid Daubechies
ABSTRACT The art of Jan van Eyck is renowned for its capacity to reveal with extreme accuracy the materiality of object surfaces. The interaction of light with the surface of a material produces reflected light that carries to the viewer information about the quality of the surface material. Jan van Eyck apparently understood that seeing a visual object has more to do with perceptually tracing what kind of light falls on the object, and where it comes from, than with any conceptual identification of the object. Tracing the light back to its source is known in vision studies as inverse optics. In a visual display, the light source is revealed in particular by the highlights that mirror it. In his pictures, Jan van Eyck uses these highlights as anchor points to indicate where in the visual scene the highest lightness is to be found. With further understanding of the physics involved, he carefully observes the specific fall-off of the intensity of reflected light around highlights to characterize particular materials. His special sensibility for highlights may have originated in his involvement with the production of miniatures in illuminated books, where, depending on the locations of the light source and the viewer and on the orientation of the illuminated page, gilded segments can change drastically in brightness. We will compare color and luminance distribution properties around highlights in some Eyckian pictures with comparable pictures by contemporaries (Campin, Van der Weyden), with special attention to metals and gold brocade.
Ghent University Academic Bibliography
IEEE Transactions on Signal Processing, 1997
There are many signal processing tasks for which convolution-based continuous signal representations such as splines and wavelets provide an interesting and practical alternative to the more traditional sinc-based methods. The coefficients of the corresponding signal approximations are typically obtained by direct sampling (interpolation or quasi-interpolation) or by using least squares techniques that apply a prefilter prior to sampling. Here, we compare the performance of these approaches and provide quantitative error estimates that can be used for the appropriate selection of the sampling step h. Specifically, we review several results in approximation theory with a special emphasis on the Strang-Fix conditions, which relate the general O(h^L) behavior of the error to the ability of the representation to reproduce polynomials of degree n = L − 1. We use this theory to derive pointwise error estimates for the various algorithms and to obtain the asymptotic limit of the L2 error as h tends to zero. We also propose a new improved L2 error bound for the least squares case. In the process, we provide all the relevant bound constants for polynomial splines. Some of our results suggest the existence of an intermediate range of sampling steps where the least squares method is roughly equivalent to an interpolator of twice the order. We present experimental examples that illustrate the theory and confirm the adequacy of our various bound and limit determinations.
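The O(h^L) behavior described above can be checked empirically. The following is an illustrative sketch (not the paper's estimators): piecewise-linear spline interpolation has order L = 2, so halving the sampling step h should divide the L2 error by roughly 2^L = 4. The test function and grids are arbitrary choices for demonstration.

```python
import numpy as np

# Empirical check that the L2 error of linear-spline interpolation (order
# L = 2) of a smooth function decays like O(h^L) as the sampling step shrinks.
f = np.cos  # any smooth test function

def l2_error(h):
    """Discrete L2 error of linear interpolation of f with sampling step h."""
    knots = np.arange(0.0, 2 * np.pi + h, h)       # samples at step h
    x = np.linspace(0.0, 2 * np.pi, 20001)         # fine evaluation grid
    return np.sqrt(np.mean((f(x) - np.interp(x, knots, f(knots))) ** 2))

e1, e2 = l2_error(0.1), l2_error(0.05)
print(e1 / e2)   # close to 2**L = 4 for L = 2
```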
The EMD algorithm, first proposed in [11] and made more robust as well as more versatile in [12], is a technique that aims to decompose into their building blocks functions that are the superposition of a (reasonably) small number of components, well separated in the time-frequency plane, each of which can be viewed as approximately harmonic locally, with slowly varying amplitudes.
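A minimal sifting sketch in the spirit of EMD (not the exact algorithm of [11]/[12]; the test signal and iteration count are assumptions for illustration): subtracting the mean of the upper and lower extremal envelopes peels off the fastest locally-harmonic component of a two-component superposition.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

t = np.linspace(0, 10, 4000)
fast = np.cos(2 * np.pi * 2.0 * t)          # fast, approximately harmonic part
slow = 2.0 * np.cos(2 * np.pi * 0.2 * t)    # slowly varying part
x = fast + slow

imf = x.copy()
for _ in range(3):                          # a few sifting iterations
    maxima = argrelextrema(imf, np.greater)[0]
    minima = argrelextrema(imf, np.less)[0]
    upper = CubicSpline(t[maxima], imf[maxima])(t)   # upper envelope
    lower = CubicSpline(t[minima], imf[minima])(t)   # lower envelope
    imf = imf - (upper + lower) / 2          # remove the local envelope mean

# Compare away from the boundaries, where envelope extrapolation is poor.
corr = np.corrcoef(imf[400:-400], fast[400:-400])[0, 1]
print(corr)   # close to 1: the sift isolates the fast component
```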
In the fast wavelet transform algorithm, m0 and m1 are the transfer functions of a low-pass and a high-pass filter that split the discrete signal into two channels. These channels are decimated (only one sample out of two is retained), and the process is iterated on the low-pass channel.
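One analysis step of this filter-and-decimate scheme can be sketched concretely; the Haar pair below is an assumed example choice for the filters behind m0 and m1, not tied to any particular paper.

```python
import numpy as np

# One analysis step of the fast wavelet transform: filter the signal through
# a low-pass (m0) and a high-pass (m1) filter, then keep every other sample.
h0 = np.array([1.0, 1.0]) / np.sqrt(2)   # low-pass impulse response (Haar)
h1 = np.array([1.0, -1.0]) / np.sqrt(2)  # high-pass impulse response (Haar)

def analysis_step(x):
    """Split x into decimated low-pass and high-pass channels."""
    low = np.convolve(x, h0)[1::2]   # decimate: one sample out of two
    high = np.convolve(x, h1)[1::2]
    return low, high

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
low, high = analysis_step(x)
# Iterating analysis_step on `low` yields the full wavelet decomposition.
```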
Applied and Computational Harmonic Analysis, 2000
Tree approximation is a new form of nonlinear approximation which appears naturally in applications such as image processing and adaptive numerical methods. It is somewhat more restrictive than the usual n-term approximation. We show that the restrictions of tree approximation cost little in terms of rates of approximation. We then use that result to design encoders for compression.
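The restriction relative to n-term approximation can be illustrated with a hypothetical greedy selector (not the paper's encoder): on a dyadic coefficient tree where node (j, k) has children (j + 1, 2k) and (j + 1, 2k + 1), tree approximation may only keep coefficient sets that are closed under taking parents.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic wavelet-like coefficients on a depth-4 dyadic tree, decaying in j.
coeffs = {(j, k): abs(rng.normal()) * 2.0 ** (-j)
          for j in range(4) for k in range(2 ** j)}

def parent(node):
    j, k = node
    return (j - 1, k // 2) if j > 0 else None

def tree_approx(coeffs, n):
    """Greedily keep the largest coefficients plus any missing ancestors."""
    kept = set()
    for node in sorted(coeffs, key=coeffs.get, reverse=True):
        branch = []
        while node is not None and node not in kept:
            branch.append(node)
            node = parent(node)
        if len(kept) + len(branch) > n:   # same budget of n nodes as n-term
            break
        kept.update(branch)
    return kept

kept = tree_approx(coeffs, 6)
# Unlike plain n-term selection, every kept node's parent is also kept,
# so `kept` is a subtree rooted at (0, 0).
print(sorted(kept))
```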
Journal of Electrocardiology
In this report we provide a method for automated detection of the J wave, defined as a notch or slur in the descending slope of the terminal positive wave of the QRS complex, using signal processing and functional data analysis techniques. Two different sets of ECG tracings were selected from the EPICARE ECG core laboratory, Wake Forest School of Medicine, Winston-Salem, NC. The first set was a training set comprising 100 ECGs, of which 50 had a J wave and the other 50 did not. The second set was a test set (n = 116 ECGs) in which the J-wave status (present/absent) was known only by the ECG Center staff. All ECGs were recorded using a GE MAC 1200 (GE Marquette, Milwaukee, Wisconsin) at 10 mm/mV calibration, a speed of 25 mm/s, and a 500 Hz sampling rate. All ECGs were initially inspected visually for technical errors and inadequate quality, and then automatically processed with the GE Marquette 12-SL program, 2001 version (GE Marquette, Milwaukee, WI). We excluded ECG tracings with major abnormalities ...
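As an illustrative sketch only (not the report's GE 12-SL / functional-data pipeline; the synthetic beat, window length, and threshold below are assumptions for demonstration): a J-wave-like notch shows up as extra sign changes of the first derivative on the descending slope of the R wave.

```python
import numpy as np

fs = 500                                   # 500 Hz sampling, as in the report
t = np.arange(0, 0.4, 1 / fs)
r_wave = np.exp(-((t - 0.15) ** 2) / (2 * 0.01 ** 2))          # clean R wave
notch = 0.15 * np.exp(-((t - 0.18) ** 2) / (2 * 0.004 ** 2))   # notch/slur bump

def has_notch(beat):
    """Flag a notch on the ~60 ms descending limb after the R peak."""
    peak = int(np.argmax(beat))
    d = np.diff(beat[peak:peak + int(0.06 * fs)])
    # A smooth monotone descent keeps a constant derivative sign; a notch
    # forces the sign to flip more than once.
    return np.count_nonzero(np.diff(np.sign(d))) > 1

print(has_notch(r_wave), has_notch(r_wave + notch))   # → False True
```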
IEEE Transactions on Information Theory
Revista Matemática Iberoamericana, 2000
Abstract We establish new results on the space BV of functions with bounded variation. While it is well known that this space admits no unconditional basis, we show that it is "almost" characterized by wavelet expansions in the following sense: if a function $f$ is in ...
Proceedings of the National Academy of Sciences, 2009
InfoMax and FastICA are the independent component analysis algorithms most used and apparently most effective for brain fMRI. We show that this is linked to their ability to handle sparse components effectively, rather than independent components as such. The mathematical design of better analysis tools for brain fMRI should thus emphasize mathematical characteristics other than independence.
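A toy sketch of this observation (the sources and mixing matrix are made up for illustration): a FastICA-style one-unit fixed-point iteration with a kurtosis nonlinearity locks onto a direction of a mixture whose sources are sparse (heavy-tailed), which is what makes it effective in this setting.

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.laplace(size=(2, 5000))             # sparse (heavy-tailed) sources
X = np.array([[1.0, 0.6], [0.4, 1.0]]) @ S  # observed mixtures

# Whiten the mixtures
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# One-unit FastICA iteration: w <- E[z g(w'z)] - E[g'(w'z)] w, with g(u) = u^3
w = rng.normal(size=2)
w /= np.linalg.norm(w)
for _ in range(50):
    u = w @ Z
    w = (Z * u ** 3).mean(axis=1) - 3 * w
    w /= np.linalg.norm(w)

est = w @ Z
corr = max(abs(np.corrcoef(est, S[0])[0, 1]), abs(np.corrcoef(est, S[1])[0, 1]))
print(corr)   # near 1: one sparse source recovered up to sign and scale
```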
Proceedings of the National Academy of Sciences, 2011
We describe new approaches for distances between pairs of 2-dimensional surfaces (embedded in 3-dimensional space) that use local structures and global information contained in inter-structure geometric relationships. We present algorithms to automatically determine these distances as well as geometric correspondences. This is motivated by the aspiration of students of natural science to understand the continuity of form that unites the diversity of life. At present, scientists using physical traits to study evolutionary relationships among living and extinct animals analyze data extracted from carefully defined anatomical correspondence points (landmarks). Identifying and recording these landmarks is time consuming and can be done accurately only by trained morphologists. This renders these studies inaccessible to non-morphologists and causes phenomics to lag behind genomics in elucidating evolutionary patterns. Unlike other algorithms presented for morphological correspondences, our approach does not require any preliminary marking of special features or landmarks by the user. It also differs from other seminal work in computational geometry in that our algorithms are polynomial in nature and thus faster, making pairwise comparisons feasible for significantly larger numbers of digitized surfaces. We illustrate our approach using three datasets representing teeth and different bones of primates and humans, and show that it leads to highly accurate results.

homology | phenomics | morphometrics | Procrustes | Möbius transformations | automatic species recognition

To document and understand physical and biological phenomena (e.g., geological sedimentation, chemical reactions, ontogenetic development, speciation, evolutionary adaptation), it is important to quantify the similarity or dissimilarity of objects affected or produced by the phenomena under study.
The grain size or elasticity of rocks, geographic distances between populations, or hormone levels and body masses of individuals: these can be readily measured, and the resulting numerical values can be used to compute similarities/distances that help build understanding. Other properties, like genetic makeup or gross anatomical structure, cannot be quantified by a single number; determining how to measure and compare these is more involved. Representing the structure of a gene (through sequencing) or quantifying an anatomical structure (through the digitization of its surface geometry) leads to more complex numerical representations; even though these are not measurements allowing direct comparison with their counterparts for other genes or anatomical structures, they represent an essential initial step for such quantitative comparisons. The 1-dimensional, sequential arrangement of genomes and the discrete variation (four nucleotide base types) at each of thousands of available correspondence points help reduce the computational complexity of determining the most likely alignment between genomes; alignment procedures are now increasingly automated. The resulting, rapidly generated and massive data sets, analyzed with increasing sophistication and explored flexibly and in depth thanks to advances in computing technology, have led to spectacular progress. For instance, phylogenetics has begun to unravel mysteries of large-scale evolutionary relationships that morphologists had experienced as extraordinarily difficult.
Linear Algebra and its Applications, 1992
An infinite product ∏_{i=1}^∞ M_i of matrices converges (on the right) if lim_{i→∞} M_1 ⋯ M_i exists. A set Σ = {A_i : i ≥ 1} of n × n matrices is called an RCP set (right-convergent product set) if all infinite products with each element drawn from Σ converge. Such sets of matrices arise in constructing self-similar objects like von Koch's snowflake curve, in various interpolation schemes, in constructing wavelets of compact support, and in studying nonhomogeneous Markov chains. This paper gives necessary conditions and also some sufficient conditions for a set Σ to be an RCP set. These are conditions on the eigenvalues and left eigenspaces of matrices in Σ and finite products of these matrices. Necessary and sufficient conditions are given for a finite set Σ to be an RCP set having a limit function M_∞(d) = ∏_{i=1}^∞ A_{d_i}, where d = (d_1, …, d_n, …), which is a continuous function on the space of all sequences d with the sequence topology. Finite RCP sets of column-stochastic matrices are completely characterized. Some results are given on the problem of algorithmically deciding if a given set Σ is an RCP set.
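A numerical sketch of the RCP property for the column-stochastic case mentioned above (the particular pair of matrices and the digit sequence are arbitrary choices for illustration): right products M_1 M_2 ⋯ M_i settle down as i grows.

```python
import numpy as np

# A pair of column-stochastic 2x2 matrices; every infinite right product
# drawn from this set converges, so the pair forms an RCP set.
A = [np.array([[1.0, 0.5], [0.0, 0.5]]),
     np.array([[0.5, 0.0], [0.5, 1.0]])]

def right_product(digits):
    """Right product A[d_1] A[d_2] ... A[d_i] for a digit sequence."""
    prod = np.eye(2)
    for d in digits:
        prod = prod @ A[d]
    return prod

digits = [0, 1, 1, 0, 1, 0, 0, 1] * 8        # an arbitrary digit sequence
early = right_product(digits[:40])
late = right_product(digits)                  # 24 more factors barely move it
print(np.max(np.abs(late - early)))           # ~0: the product has converged
```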
Linear Algebra and its Applications, 2001
This corrigendum/addendum supplies corrected statements and proofs of some results in our paper appearing in Linear Algebra Appl. 161 (1992) 227-263. These results concern special kinds of bounded semigroups of matrices. It also reports on progress on the topics of this paper made in the last eight years.
The Journal of Fourier Analysis and Applications, 1997
We study the existence and regularity of compactly supported solutions Φ = (φ_ν)_{ν=0}^{r−1} of vector refinement equations. The space spanned by the translates of Φ can only provide approximation order if the refinement mask P has certain particular factorization properties. We show how the factorization of P can lead to decay of |Φ̂(u)| as |u| → ∞. The results on decay are used in order to prove uniqueness of solutions and convergence of the cascade algorithm.
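The cascade algorithm mentioned above can be sketched in the simplest scalar case (the Haar mask is an assumed example, not the paper's vector setting): iterate v ↦ c ∗ (v upsampled by 2), which for the Haar mask converges to samples of the box function on [0, 1].

```python
import numpy as np

c = np.array([1.0, 1.0])        # Haar refinement mask, coefficients sum to 2

v = np.array([1.0])             # start the cascade from a delta
for _ in range(6):
    up = np.zeros(2 * len(v) - 1)
    up[::2] = v                 # upsample by 2
    v = np.convolve(c, up)      # one cascade step

# v now samples the limit scaling function (the box on [0, 1]) on a dyadic grid
print(v.min(), v.max())         # → 1.0 1.0 for the Haar mask
```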