Roberto Gallé | Real Academia de Ciencias y Artes de Barcelona

Papers by Roberto Gallé

Data for section 3.2 of Stellar Population Properties in the Stellar Streams Around SPRC047

Zenodo (CERN European Organization for Nuclear Research), Dec 18, 2023

Star-Image Centering with Deep Learning II: HST/WFPC2 Full Field of View

arXiv (Cornell University), Apr 25, 2024

Star-image Centering with Deep Learning: HST/WFPC2 Images

Publications of the Astronomical Society of the Pacific

A deep learning (DL) algorithm is built and tested for its ability to determine centers of star images in HST/WFPC2 exposures, in filters F555W and F814W. These archival observations hold great potential for proper-motion studies, but the undersampling in the camera’s detectors presents challenges for conventional centering algorithms. Two exquisite data sets of over 600 exposures of the cluster NGC 104 in these filters are used as a testbed for training and evaluating the DL code. Results indicate a single-measurement standard error from 8.5 to 11 mpix, depending on the detector and filter. This compares favorably to the ∼20 mpix achieved with the customary “effective point spread function (PSF)” centering procedure for WFPC2 images. Importantly, the pixel-phase error is largely eliminated when using the DL method. The current tests are limited to the central portion of each detector; in future studies, the DL code will be modified to allow for the known variation of the PSF across...
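
The centering challenge above comes from undersampling: starlight is integrated over coarse pixels, so naive centroids show pixel-phase bias. The following minimal sketch (not the paper's DL method; a toy pixel-integrated Gaussian model with hypothetical parameters) illustrates how a sub-pixel center can be recovered from an undersampled star image by fitting the same forward model that generated it:

```python
import numpy as np

def pixelated_star(cx, cy, sigma=0.7, size=9, oversample=8):
    """Render a Gaussian star integrated over coarse pixels (undersampled PSF).
    cx, cy are the true sub-pixel center coordinates in coarse-pixel units."""
    n = size * oversample
    ax = (np.arange(n) + 0.5) / oversample           # coarse-pixel coordinates
    xx, yy = np.meshgrid(ax, ax)
    fine = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
    # integrate the finely sampled profile over each coarse pixel
    return fine.reshape(size, oversample, size, oversample).sum(axis=(1, 3))

def fit_center(img, sigma=0.7):
    """Recover the center by matching the pixel-integrated model:
    coarse-then-fine grid search around the center of mass."""
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]] + 0.5
    cx, cy = (img * xx).sum() / img.sum(), (img * yy).sum() / img.sum()
    for step in (0.05, 0.005):                       # two refinement passes
        grid = np.arange(-10, 11) * step
        best = None
        for dx in grid:
            for dy in grid:
                m = pixelated_star(cx + dx, cy + dy, sigma, size=img.shape[0])
                err = ((m / m.sum() - img / img.sum()) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, dx, dy)
        cx, cy = cx + best[1], cy + best[2]
    return cx, cy

true = (4.37, 4.62)                                  # arbitrary sub-pixel center
img = pixelated_star(*true)
cx, cy = fit_center(img)
```

With a noiseless image and a matched model, the fitted center lands within a few millipixels of the truth; the paper's DL network learns an equivalent mapping directly from real WFPC2 data.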

Image reconstruction of extended objects: demonstration with the Starfire Optical Range 3.5m telescope

Optics in Atmospheric Propagation and Adaptive Systems XV, 2012

When collecting images through turbulence it is always useful to have an estimate of the turbulence strength during the time of the observations. This is particularly true when post-processing of the collected imagery is considered. For space-based objects, one usually resorts to observing a single star for this purpose. We show how this time-consuming procedure can be avoided by estimating the turbulence strength, and the corresponding transfer function, directly from the target observations. The images were collected with the 3.5 m telescope at the Starfire Optical Range USAF facility in Albuquerque, New Mexico.
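
The transfer function in question is, for long exposures, commonly modeled with Fried's formula: the atmospheric MTF falls off as exp[-3.44 (λf/r0)^(5/3)], where r0 is the Fried parameter. A small sketch (the textbook model, not the paper's estimator; the r0 values are illustrative assumptions) shows how turbulence strength sets the attenuation of high spatial frequencies:

```python
import numpy as np

def atmospheric_mtf(f, r0, lam=500e-9):
    """Fried's long-exposure atmospheric MTF, exp[-3.44 (lam * f / r0)^(5/3)],
    with f the angular spatial frequency in cycles per radian."""
    return np.exp(-3.44 * (lam * f / r0) ** (5.0 / 3.0))

f = np.linspace(0.0, 2e6, 201)          # cycles per radian
weak = atmospheric_mtf(f, r0=0.15)      # r0 = 15 cm: good seeing
strong = atmospheric_mtf(f, r0=0.05)    # r0 = 5 cm: strong turbulence
```

Estimating r0 fixes the whole curve, which is why a single turbulence-strength number yields the transfer function needed for deconvolution.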

Anisoplanatic Imaging Through Turbulence Using Principal Component Analysis

The performance of optical systems is highly degraded by atmospheric turbulence when observing either vertically (e.g., astronomy, remote sensing) or horizontally (e.g., long-range surveillance). This problem can be partially alleviated using adaptive optics (AO), but only for small fields of view (FOV), described by the isoplanatic angle, for which the turbulence-induced aberrations can be considered constant. Additionally, this problem can also be tackled using post-processing techniques such as deconvolution algorithms that take into account the variability of the point spread function (PSF) in anisoplanatic conditions. Variability of the PSF across the field of view in anisoplanatic imagery can be described using principal component analysis. Then, a certain number of variable PSFs can be used to create new basis functions, called principal components (PC), which can be considered constant across the FOV and, therefore, potentially be used to perform global deconvolution. Our appro...
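
The PCA step described above can be sketched on a synthetic stack of field-varying Gaussian PSFs (an assumption for illustration; real anisoplanatic PSFs are far richer): the centered PSF stack is decomposed by SVD, and a handful of principal components captures most of the PSF variability across the field:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_psf(shift, sigma, size=16):
    """Toy field-dependent PSF: a shifted Gaussian of variable width."""
    ax = np.arange(size) - size / 2
    xx, yy = np.meshgrid(ax, ax)
    p = np.exp(-((xx - shift[0]) ** 2 + (yy - shift[1]) ** 2) / (2 * sigma ** 2))
    return p / p.sum()

# stack of PSFs sampled across the field (anisoplanatism mimicked by
# randomly varying the shift and the width)
stack = np.array([
    gaussian_psf(rng.normal(0, 0.6, 2), 1.5 + 0.3 * rng.random()).ravel()
    for _ in range(50)
])

mean_psf = stack.mean(axis=0)
U, s, Vt = np.linalg.svd(stack - mean_psf, full_matrices=False)
var_ratio = s ** 2 / (s ** 2).sum()
# number of principal components ("eigen-PSFs", the rows of Vt) needed to
# capture 95% of the PSF variability across the field
k = int(np.searchsorted(np.cumsum(var_ratio), 0.95)) + 1
recon = mean_psf + (U[:, :k] * s[:k]) @ Vt[:k]     # stack rebuilt from k PCs
```

Because k is much smaller than the number of sampled PSFs, a global deconvolution can work with the few constant "eigen-PSFs" instead of one PSF per field position.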

Toward Long-Term and Archivable Reproducibility

Zenodo (CERN European Organization for Nuclear Research), May 10, 2022

Analysis pipelines commonly use high-level technologies that are popular when created, but are unlikely to be readable, executable, or sustainable in the long term. A set of criteria is introduced to address this problem: completeness (no execution requirement beyond a minimal Unix-like operating system, no administrator privileges, no network connection, and storage primarily in plain text); modular design; minimal complexity; scalability; verifiable inputs and outputs; version control; linking analysis with narrative; and free and open source software. As a proof of concept, we introduce "Maneage" (Managing data lineage), enabling cheap archiving, provenance extraction, and peer verification, which has been tested in several research publications. We show that longevity is a realistic requirement that does not sacrifice immediate or short-term reproducibility. The caveats (with proposed solutions) are then discussed, and we conclude with the benefits for the various stakeholders. This article is itself a Maneage'd project (project commit 54e4eb2). Appendices: two comprehensive appendices that review the longevity of existing solutions are available after the main body of this paper (Appendices A and B). Reproducibility: products are available in zenodo.6533902. The Git history of this paper is at git.maneage.org/paper-concept.git, which is also archived in Software Heritage.
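
One of the criteria above, verifiable inputs and outputs, can be illustrated with a tiny plain-text lineage record (a sketch only; Maneage itself is built on Make and shell, not on this hypothetical snippet): each analysis step records checksums of its exact input and output, so any re-run can be verified byte for byte:

```python
import hashlib
import json
import pathlib
import tempfile

def sha256(path):
    """Checksum used to make inputs and outputs verifiable."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

tmp = pathlib.Path(tempfile.mkdtemp())
raw = tmp / "input.txt"
raw.write_text("3 1 4 1 5\n")

# one deterministic analysis step, with its product stored in plain text
out = tmp / "output.txt"
out.write_text(str(sum(int(x) for x in raw.read_text().split())) + "\n")

# plain-text lineage record linking the exact input to the exact output
lineage = {"step": "sum", "input": sha256(raw), "output": sha256(out)}
(tmp / "lineage.json").write_text(json.dumps(lineage, indent=1))

# verification on re-run: recompute the checksums and compare
assert sha256(raw) == lineage["input"] and sha256(out) == lineage["output"]
```

Everything here is plain text, so the record stays readable and checkable decades later without any particular software stack.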

Extended PSF for Subaru Hyper-Suprime Camera

Contributions to the XIV.0 Scientific Meeting (virtual) of the Spanish Astronomical Society, Jul 1, 2020

Galaxy And Mass Assembly (GAMA): extended intragroup light in a group at z = 0.2 from deep Hyper Suprime-Cam images

Monthly Notices of the Royal Astronomical Society, Nov 24, 2022

We present a pilot study to assess the potential of Hyper Suprime-Cam Public Data Release 2 (HSC-PDR2) images for the analysis of extended faint structures within groups of galaxies. We examine the intragroup light (IGL) of the group 400138 (M_dyn = (1.3 ± 0.5) × 10^13 M_⊙, z ∼ 0.2) from the Galaxy And Mass Assembly (GAMA) survey using Hyper Suprime-Cam Subaru Strategic Program Public Data Release 2 (HSC-SSP PDR2) images in g, r, and i bands. We present the most extended IGL measurement to date, reaching down to μ_g^lim = 30.76 mag arcsec^-2 (3σ; 10 × 10 arcsec^2) at a semimajor axis of 275 kpc. The IGL shows mean colour values of g − i = 0.92, g − r = 0.60, and r − i = 0.32 (±0.01). The IGL stellar populations are younger (2-2.5 Gyr) and less metal-rich ([Fe/H] ∼ −0.4) than those of the host group galaxies. We find a range of IGL fractions as a function of total group luminosity of ∼2-36 per cent depending on the definition of IGL, with larger fractions the bluer the observation wavelength. The early-type to late-type galaxy ratio suggests that 400138 is a more evolved group, dominated by early-type galaxies, and the IGL fraction agrees with that of other similarly evolved groups. These results are consistent with tidal stripping of the outer parts of Milky Way-like galaxies as the main driver of the IGL build-up. This is supported by the detection of substructure in the IGL towards the galaxy member 1660615, suggesting a recent interaction (< 1 Gyr ago) of that galaxy with the core of the group.

Toward a jointly super-resolved and optically sectioned reconstruction for structured illumination retinal imaging

Le Centre pour la Communication Scientifique Directe - HAL - Ecole polytechnique, Aug 26, 2019

Structured illumination microscopy (SIM) is a wide-field imaging technique that achieves super-resolution (SR) and optical sectioning (OS) through dedicated reconstruction algorithms. In this communication we address the application of SIM to in-vivo retinal imaging, which opens new possibilities for the study of retinal structure and function. The SIM reconstruction methods commonly used in microscopy assume a static object and are therefore unsuitable for in-vivo retinal imaging because of eye movements. The few approaches proposed for retinal imaging achieve only SR. We propose a reconstruction method suited to retinal imaging that performs SR and OS jointly. It relies on a multi-layer physical model of image formation that accounts for object motion. The original idea of our approach is to reconstruct a two-layer object comprising the super-resolved optical section of the object and a slice into which all defocused contributions of the object are rejected. We validate the proposed method on simulations and on experimental microscopy data.

Effects of the curvelet transform over interferometric images

International Journal of Imaging Systems and Technology, 2010

One of the major challenges of current imaging techniques is to obtain good results from images acquired with interferometric techniques. The considerable complexity of these images, with numerous negative pixels (∼50%), undesired structures introduced by the sparse sampling of the frequency domain, noise, etc., argues for the use of multiresolution techniques to separate the different problems or features and isolate them at different scales at each resolution level. In this article, we introduce a new tool known as curvelets to work with these images. Its ability to classify the visual information in the image according to its elongated structures makes it an interesting tool for separating the real information from artifacts that belong to the PSF sidelobes at different scales. We have decomposed, using both the wavelet and the curvelet transform, several interferometric images, simulated and acquired with astronomical arrays of radiotelescopes, covering a wide range of situations, and compared each coefficient scale obtained with the two transforms. We have found that the shapes of celestial sources retain their symmetry better in each curvelet scale than in wavelet scales, and the capability of differentiating between extended sources and point sources is also higher. The identification of sources is also clearer with curvelets, owing to a strong enhancement of the target with respect to the background in the negative pixels of the image. Furthermore, it is possible to create scales containing only PSF sidelobe information. © 2010 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 20, 333–353, 2010
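
Curvelet transforms require a dedicated library, but the multiresolution idea can be sketched with the isotropic "a trous" wavelet commonly used on astronomical images (a standard transform, not the specific pipeline of the article): the image is split into detail planes plus a smooth residual whose sum reconstructs the input exactly, so features such as PSF sidelobes can be inspected scale by scale:

```python
import numpy as np

def atrous_decompose(img, levels=3):
    """Isotropic 'a trous' (starlet) decomposition with the B3-spline kernel:
    returns detail planes w_1..w_levels and a smooth residual such that
    img == sum(planes) + smooth, exactly, by construction."""
    kernel = np.array([1, 4, 6, 4, 1], float) / 16.0
    planes, smooth = [], img.astype(float)
    for j in range(levels):
        k = np.zeros(4 * 2 ** j + 1)
        k[::2 ** j] = kernel                       # dilate the kernel with holes
        s = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, smooth)
        s = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, s)
        planes.append(smooth - s)                  # detail (wavelet) plane at scale j
        smooth = s
    return planes, smooth

rng = np.random.default_rng(1)
img = rng.normal(size=(32, 32))                    # stand-in for a dirty map
planes, smooth = atrous_decompose(img)
```

Exact reconstruction is what makes such decompositions safe for isolating sidelobe structure in a given scale and then resynthesizing the cleaned image.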

Optical sectioning with Structured Illumination Microscopy for retinal imaging: inverse problem approach

Structured Illumination Microscopy (SIM) is an imaging technique for obtaining super-resolution and optical sectioning (OS) in wide-field fluorescence microscopy. The object sample is illuminated by sinusoidal fringe patterns at different orientations and phase shifts. This has the effect of introducing high-frequency information of the object into the support of the transfer function by aliasing. The resulting image is processed with dedicated reconstruction software that recovers high frequencies beyond the instrument cut-off and, simultaneously, removes the light coming from the out-of-focus slices of a 3D volume (which is called optical sectioning). Unfortunately, whereas for static samples the phase shifts of the sinusoids can be set by the user, thus providing analytical solutions, this is not possible for in-vivo samples, and in particular for retinal images, due to the uncontrolled eye movements. The aim of this communication is to demonstrate that SIM can be appli...
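
The aliasing mechanism described above can be verified in one dimension: multiplying the object by a sinusoidal fringe pattern places scaled, shifted copies of its spectrum at plus and minus the fringe frequency, moving otherwise inaccessible frequencies into the transfer-function support. A minimal sketch with assumed toy parameters:

```python
import numpy as np

n = 256
x = np.arange(n)
obj = np.exp(-0.5 * ((x - n / 2) / 6.0) ** 2)        # toy 1-D sample
f0 = 32                                              # fringe frequency, cycles/window
fringes = 0.5 * (1.0 + np.cos(2 * np.pi * f0 * x / n))
image = obj * fringes                                # structured-illumination image

O = np.fft.fft(obj)
I = np.fft.fft(image)
# modulation places shifted copies of the object spectrum at +/- f0:
#   I(f) = O(f)/2 + O(f - f0)/4 + O(f + f0)/4
shifted = 0.5 * O + 0.25 * np.roll(O, f0) + 0.25 * np.roll(O, -f0)
```

Reconstruction software inverts exactly this mixing; the difficulty for the retina is that the unknown eye motion scrambles the phases of the mixing coefficients.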

Automatic scattered field estimation for the Stripe82 survey

Contributions to the XIV.0 Scientific Meeting (virtual) of the Spanish Astronomical Society, Jul 1, 2020

Les mille et un visages de Segundo de Chomón

Segundo de Chomón (1871-1929) is one of the undisputed masters of early cinematographic trick effects and of the beginnings of color in moving images. Yet this Spanish pioneer is much more than that, and the importance of his work would no doubt have been better studied were it not for the shadow cast by Georges Méliès. If Chomón drew inspiration from the famous French conjurer in some of his trick films, he nevertheless stands clearly apart through his masterful use of frame-by-frame cranking, shadow play, and reverse motion. He also remains one of the few to have successfully made the transition from the "monstrative" cinema of the trick films of the 1900s to the institutionalized cinema of the 1910s. The tricks of his early trick scenes at Pathé frères would become special effects in the narrative films whose execution he oversaw, such as Giovanni Pastrone's Maciste alpino in 1916. This book revisits his work to understand the thousand and one faces of this remarkable pioneer of the cinematograph: trick-film maker, colorist, and cinematographer. It grew out of a conference organized in November 2017 by the Fondation Jérôme Seydoux-Pathé and "Les Arts trompeurs. Machines. Magie. Médias."

Long term and archivable reproducibility, a summary

Contributions to the XIV.0 Scientific Meeting (virtual) of the Spanish Astronomical Society, Jul 1, 2020

Differential Photometry in Adaptive Optics Imaging

One application of adaptive optics (AO) is high-resolution imaging of closely spaced objects. Determining differential photometry between two or more components of a system is essential for deducing their physical properties, such as mass and/or internal structure. The task has implications for (i) Space Situational Awareness, such as the monitoring of fainter microsatellites or debris near a larger object, and (ii) astronomy, such as observations of close, faint stellar companions. We have applied several algorithms to the task of determining the relative photometry of point sources with overlapping point spread functions in images collected with adaptive optics. These algorithms cover a wide range of approaches in the field of image processing. Specifically, we have tested: PSF fitting, multi-frame and single-frame blind deconvolution, a maximum-likelihood approach combined with wavelet decomposition, and a novel one-dimensional deconvolution technique which separates signal a...
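
The simplest of the tested approaches, PSF fitting, reduces to a linear least-squares problem once the positions and the PSF are known. A minimal sketch (noiseless Gaussian PSFs and known positions are simplifying assumptions; the flux values are arbitrary):

```python
import numpy as np

def psf(cx, cy, sigma=3.0, size=32):
    """Toy Gaussian PSF; real AO PSFs are far more structured."""
    ax = np.arange(size)
    xx, yy = np.meshgrid(ax, ax)
    p = np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / (2 * sigma ** 2))
    return p / p.sum()

# two blended sources, separated by well under the PSF footprint
psf_a = psf(14.0, 16.0)
psf_b = psf(19.0, 16.0)
scene = 1000.0 * psf_a + 80.0 * psf_b              # bright primary, faint companion

# with known positions and PSF, the fluxes follow from linear least squares
A = np.column_stack([psf_a.ravel(), psf_b.ravel()])
flux, *_ = np.linalg.lstsq(A, scene.ravel(), rcond=None)
dmag = -2.5 * np.log10(flux[1] / flux[0])          # differential photometry
```

The harder cases the paper addresses are those where the PSF is poorly known or the positions themselves must be solved for, which is where the blind-deconvolution and wavelet-based methods come in.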

Unsupervised blind deconvolution

To reduce the influence of atmospheric turbulence on images of space-based objects, we are developing a maximum a posteriori deconvolution approach. In contrast to techniques found in the literature, we focus on the statistics of the point-spread function (PSF) instead of the object. We incorporated statistical information about the PSF into multi-frame blind deconvolution. Theoretical constraints on the average PSF shape come from the work of D. L. Fried, while for the univariate speckle statistics we rely on the gamma distribution adopted from the radar/laser speckle studies of J. W. Goodman. Our aim is to develop a deconvolution strategy which is reference-less, i.e., no calibration PSF is required, extendable to longer exposures, and applicable to imaging with adaptive optics. The theory and resulting deconvolution framework were validated using simulations and real data from the 3.5 m telescope at the Starfire Optical Range (SOR) in New Mexico.
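
As context for the MAP framework above, a baseline multi-frame Richardson-Lucy deconvolution (maximum likelihood under Poisson noise, with the PSFs assumed known) can be sketched as follows; the paper's approach goes further by treating the PSFs statistically, via Fried's average shape and gamma speckle statistics, rather than as known inputs:

```python
import numpy as np

def fftconv(a, kernel_ft):
    """Circular convolution via FFT; kernel_ft is the kernel's 2-D FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * kernel_ft))

def rl_multiframe(frames, psfs, n_iter=100):
    """Multi-frame Richardson-Lucy: multiplicative updates averaged over all
    frames; the PSFs (centered kernels) are assumed known here."""
    psf_fts = [np.fft.fft2(np.fft.ifftshift(p)) for p in psfs]
    obj = np.full_like(frames[0], frames[0].mean())
    for _ in range(n_iter):
        update = np.zeros_like(obj)
        for img, pft in zip(frames, psf_fts):
            est = fftconv(obj, pft)                          # predicted frame
            update += fftconv(img / np.maximum(est, 1e-12), np.conj(pft))
        obj *= update / len(frames)
    return obj

def gaussian_psf(sigma, size=32):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    p = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return p / p.sum()

truth = np.zeros((32, 32))
truth[12, 12], truth[20, 18] = 1.0, 0.6                      # two point sources
psfs = [gaussian_psf(1.5), gaussian_psf(2.5)]                # two frames, different seeing
frames = [fftconv(truth, np.fft.fft2(np.fft.ifftshift(p))) for p in psfs]
rec = rl_multiframe(frames, psfs)
```

In the blind setting the PSFs are unknown, so a prior on their statistics (as in the paper) is what keeps the joint estimation of object and PSFs well posed.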

Extended object reconstruction in adaptive-optics imaging: the multiresolution approach (submitted to Astronomy &amp; Astrophysics)

Aims. We propose the application of multiresolution transforms, such as wavelets (WT) and curvelets (CT), to the reconstruction of images of extended objects that have been acquired with adaptive optics (AO) systems. Such multichannel approaches normally make use of probabilistic tools in order to distinguish significant structures from noise and reconstruction residuals. Therefore, we also test the performance of two different probabilistic masks: one based on local correlation and the other on local standard deviation. Furthermore, we aim to check the historical assumption that image-reconstruction algorithms using static PSFs are not suitable for AO imaging. Methods. We convolve an image of Saturn taken with the Hubble Space Telescope (HST) with AO PSFs from the 5-m Hale telescope at the Palomar Observatory and add both shot and readout noise. Subsequently, we apply different approaches to the blurred and noisy data in order to recover the original object. The approaches include ...

Extended object reconstruction in adaptive-optics imaging: the multiresolution approach

We propose the application of multiresolution transforms, such as wavelets (WT) and curvelets (CT), to the reconstruction of images of extended objects that have been acquired with adaptive optics (AO) systems. Such multichannel approaches normally make use of probabilistic tools in order to distinguish significant structures from noise and reconstruction residuals. Furthermore, we aim to check the historical assumption that image-reconstruction algorithms using static PSFs are not suitable for AO imaging. We convolve an image of Saturn taken with the Hubble Space Telescope (HST) with AO PSFs from the 5-m Hale telescope at the Palomar Observatory and add both shot and readout noise. Subsequently, we apply different approaches to the blurred and noisy data in order to recover the original object. The approaches include multi-frame blind deconvolution (with the algorithm IDAC), myopic deconvolution with regularization (with MISTRAL) and wavelets- or curvelets-based static PSF deconvol...

Towards Long-term and Archivable Reproducibility

ArXiv, 2020

Reproducible workflow solutions commonly use high-level technologies that were popular when they were created, providing an immediate solution which is unlikely to be sustainable in the long term. We therefore introduce a set of criteria to address this problem and demonstrate their practicality and implementation. The criteria have been tested in several research publications and can be summarized as: completeness (no dependency beyond a POSIX-compatible operating system, no administrator privileges, no network connection and storage primarily in plain text); modular design; minimal complexity; scalability; verifiable inputs and outputs; temporal provenance; linking analysis with narrative; and free-and-open-source software. As a proof of concept, we have implemented "Maneage", a solution which stores the project in machine-actionable and human-readable plain-text, enables version-control, cheap archiving, automatic parsing to extract data provenance, and peer-reviewable ...

Research paper thumbnail of Data for section 3.2 of Stellar Population Properties in the Stellar Streams Around SPRC047

Zenodo (CERN European Organization for Nuclear Research), Dec 18, 2023

Research paper thumbnail of Star-Image Centering with Deep Learning II: HST/WFPC2 Full Field of View

arXiv (Cornell University), Apr 25, 2024

Research paper thumbnail of Star-image Centering with Deep Learning: HST/WFPC2 Images

Publications of the Astronomical Society of the Pacific

A deep learning (DL) algorithm is built and tested for its ability to determine centers of star i... more A deep learning (DL) algorithm is built and tested for its ability to determine centers of star images in HST/WFPC2 exposures, in filters F555W and F814W. These archival observations hold great potential for proper-motion studies, but the undersampling in the camera’s detectors presents challenges for conventional centering algorithms. Two exquisite data sets of over 600 exposures of the cluster NGC 104 in these filters are used as a testbed for training and evaluating the DL code. Results indicate a single-measurement standard error from 8.5 to 11 mpix, depending on the detector and filter. This compares favorably to the ∼20 mpix achieved with the customary “effective point spread function (PSF)” centering procedure for WFPC2 images. Importantly, the pixel-phase error is largely eliminated when using the DL method. The current tests are limited to the central portion of each detector; in future studies, the DL code will be modified to allow for the known variation of the PSF across...

Research paper thumbnail of Image reconstruction of extended objects: demonstration with the Starfire Optical Range 3.5m telescope

Optics in Atmospheric Propagation and Adaptive Systems XV, 2012

When collecting images through turbulence it is always useful to have an estimate of the turbulen... more When collecting images through turbulence it is always useful to have an estimate of the turbulence strength during the time of the observations. This is particularly true when post-processing of the collected imagery is considered. For space-based objects, one usually resorts to observing a single star for this purpose. We show how this time-consuming procedure can be avoided by estimating the turbulence strength, and the corresponding transfer function, directly from the target observations. The images were collected with the 3.5 telescope at the Starfire Optical Range USAF facility in Albuquerque, New Mexico.

Research paper thumbnail of Anisoplanatic Imaging Through Turbulence Using Principal Component Analysis

The performance of optical systems is highly degraded by atmospheric turbulence when observing bo... more The performance of optical systems is highly degraded by atmospheric turbulence when observing both vertically (e.g., astronomy, remote sensing) or horizontally (e.g. long-range surveillance). This problem can be partially alleviated using adaptive optics (AO) but only for small fields of view (FOV), described by the isoplanatic angle, for which the turbulence-induced aberrations can be considered constant. Additionally, this problem can also be tackled using post-processing techniques such as deconvolution algorithms which take into account the variability of the point spread function (PSF) in anisoplanatic conditions. Variability of the PSF across the field of view in anisoplanatic imagery can be described using principal component analysis. Then, a certain number of variable PSFs can be used to create new basis functions, called principal components (PC), which can be considered constant across the FOV and, therefore, potentially be used to perform global deconvolution. Our appro...

Research paper thumbnail of Toward Long-Term and Archivable Reproducibility

Zenodo (CERN European Organization for Nuclear Research), May 10, 2022

Analysis pipelines commonly use high-level technologies that are popular when created, but are un... more Analysis pipelines commonly use high-level technologies that are popular when created, but are unlikely to be readable, executable, or sustainable in the long term. A set of criteria is introduced to address this problem: Completeness (no execution requirement beyond a minimal Unix-like operating system, no administrator privileges, no network connection, and storage primarily in plain text); modular design; minimal complexity; scalability; verifiable inputs and outputs; version control; linking analysis with narrative; and free and open source software. As a proof of concept, we introduce "Maneage" (Managing data lineage), enabling cheap archiving, provenance extraction, and peer verification that has been tested in several research publications. We show that longevity is a realistic requirement that does not sacrifice immediate or short-term reproducibility. The caveats (with proposed solutions) are then discussed and we conclude with the benefits for the various stakeholders. This article is itself a Maneage'd project (project commit 54e4eb2). Appendices-Two comprehensive appendices that review the longevity of existing solutions; available after the main body of this paper (Appendices A and B). Reproducibility-Products available in zenodo.6533902. Git history of this paper is at git.maneage.org/paper-concept.git, which is also archived in Software Heritage 1 .

Research paper thumbnail of Extended PSF for Subaru Hyper-Suprime Camera

Contributions to the XIV.0 Scientific Meeting (virtual) of the Spanish Astronomical Society, Jul 1, 2020

Research paper thumbnail of Galaxy And Mass Assembly (GAMA): extended intragroup light in a group at <i>z</i> = 0.2 from deep Hyper Suprime-Cam images

Monthly Notices of the Royal Astronomical Society, Nov 24, 2022

We present a pilot study to assess the potential of Hyper Suprime-Cam Public Data Release 2 (HSC-... more We present a pilot study to assess the potential of Hyper Suprime-Cam Public Data Release 2 (HSC-PDR2) images for the analysis of extended faint structures within groups of galaxies. We examine the intragroup light (IGL) of the group 400138 (M dyn = 1.3 ± 0.5 × 10 13 M , z ∼ 0.2) from the Galaxy And Mass Assembly (GAMA) surv e y using Hyper Suprime-Cam Subaru Strategic Program Public Data Release 2 (HSC-SSP PDR2) images in g , r , and i bands. We present the most extended IGL measurement to date, reaching down to μ lim g = 30. 76 mag arcsec −2 (3 σ ; 10 × 10 arcsec 2) at a semimajor axis of 275 kpc. The IGL shows mean colour values of g − i = 0.92, g − r = 0.60, and r − i = 0.32 (±0.01). The IGL stellar populations are younger (2-2.5 Gyr) and less metal rich ([Fe/H] ∼ −0.4) than those of the host group galaxies. We find a range of IGL fractions as a function of total group luminosity of ∼2-36 per cent depending on the definition of IGL, with larger fractions the bluer the observation wavelength. The early-type to late-type galaxy ratio suggests that 400138 is a more evolved group, dominated by early-type galaxies, and the IGL fraction agrees with that of other similarly evolved groups. These results are consistent with tidal stripping of the outer parts of Milky Way-like galaxies as the main driver of the IGL build-up. This is supported by the detection of substructure in the IGL towards the galaxy member 1660615 suggesting a recent interaction (< 1 Gyr ago) of that galaxy with the core of the group.

Research paper thumbnail of Toward a jointly super-resolved and optically sectioned reconstruction for structured illumination retinal imaging

Le Centre pour la Communication Scientifique Directe - HAL - Ecole polytechnique, Aug 26, 2019

La microscopie par illumination structurée (SIM en anglais) est une technique d'imagerie utilisée... more La microscopie par illumination structurée (SIM en anglais) est une technique d'imagerie utilisée en microscopie plein champ qui permet d'obtenir super-résolution (SR) et sectionnement optique (SO) grâce à des algorithmes de reconstruction dédiés. Dans cette communication, nous nous intéressons à l'application de la SIM à l'imagerie rétinienne in-vivo qui offre de nouvelles possibilités pour l'étude des structures et fonctions rétiniennes. Les méthodes de reconstruction SIM couramment utilisées en microscopie supposent un objet statique et donc ne conviennent pas au cas de l'imagerie rétinienne in-vivo à cause des mouvements oculaires. Les quelques approches proposées pour l'imagerie rétinienne ne permettent de réaliser que de la SR. Nous proposons une méthode de reconstruction adaptée à l'imagerie rétinienne qui réalise SR et SO conjointement. Elle repose sur un modèle physique multi-couches de la formation d'images, qui prend en compte les mouvements de l'objet. L'idée originale de notre approche est de reconstruire un objet bi-couche comprenant la section optique super-résolue de l'objet et une tranche où sont rejetées toutes les contributions défocalisées de l'objet. Nous validons la méthode proposée par simulations et sur des données expérimentales de microscopie.

Research paper thumbnail of Effects of the curvelet transform over interferometric images

International Journal of Imaging Systems and Technology, 2010

One of the major challenges of the current imaging techniques is to obtain good results from imag... more One of the major challenges of the current imaging techniques is to obtain good results from images acquired with interferometric techniques. The huge complexity of these images—presence of numerous negative pixels (∼50%), undesired structures introduced by the sparse sampling of the frequencial domain, noise, etc.—advises us to use multiresolution techniques to separate the different problems or features and isolate them in different scales at each resolution level. In this article, we introduce a new tool known as curvelets to work with these images. Its good properties, oriented to classify the visual information in the image depending on its elongated structures, make it an interesting tool to separate the real information from artifacts that belong to the psf sidelobes in different scales. We have decomposed, using both the Wavelet and the Curvelet transform, several interferometric images simulated and acquired with astronomical arrays of radiotelescopes, with which we cover a wide range of situations, and compared each coefficients scale obtained with both transforms. We have found that celestial sources shape keeps better its symmetry in each curvelet scale than in wavelets, and the capability of differentiation between extended sources and point ones is also higher. The identification of the sources is more clear with curvelets as well, because better with curvelets as well because of a high increase of the target enhancement respect the background in the negative pixels of the image. Furthermore, it is possible to create scales with only psf sidelobe information. © 2010 Wiley Periodicals, Inc. Int J Imaging Syst Technol, 20, 333–353, 2010

Research paper thumbnail of Optical sectioning with Structured Illumination Microscopy for retinal imaging : inverse problem approach

Structured Illumination Microscopy (SIM) is an imaging technique for obtaining super-resolution and optical sectioning (OS) in wide-field fluorescence microscopy. The object sample is illuminated by sinusoidal fringe patterns at different orientations and phase shifts. This has the effect of introducing high-frequency information of the object into the support of the transfer function by aliasing. The resulting image is processed with dedicated reconstruction software that allows recovering high frequencies beyond the instrument cut-off and, simultaneously, removing the light coming from the out-of-focus slices of a 3D volume (which is called optical sectioning). Unfortunately, whereas for static samples the phase shifts of the sinusoids can be set by the user, thus providing analytical solutions, this is not possible for in-vivo samples, and in particular for retinal images, due to uncontrolled eye movements. The aim of this communication is to demonstrate that SIM can be applied…
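The frequency mixing that SIM exploits can be illustrated in one dimension: multiplying the object by a sinusoidal pattern creates copies of its spectrum shifted by the fringe frequency. A minimal sketch (the function name and parameters are illustrative, not from the paper):

```python
import numpy as np

def sim_raw_image(obj, freq, phase, modulation=1.0):
    """Multiply a 1-D object by a sinusoidal illumination pattern.

    This mimics one SIM raw acquisition: the product mixes object
    frequencies with +/- freq, aliasing high frequencies into the
    passband (illustrative sketch only).
    """
    x = np.arange(obj.size)
    illum = 1.0 + modulation * np.cos(2.0 * np.pi * freq * x + phase)
    return obj * illum

# For a flat object, the raw-image spectrum shows the fringe frequency.
n = 256
raw = sim_raw_image(np.ones(n), freq=10 / n, phase=0.0)
spectrum = np.abs(np.fft.rfft(raw))
```

With a real object, each of its spatial frequencies appears shifted by the fringe frequency, which is what the reconstruction software later unmixes using the different orientations and phase shifts.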

Research paper thumbnail of Automatic scattered field estimation for the Stripe82 survey

Contributions to the XIV.0 Scientific Meeting (virtual) of the Spanish Astronomical Society, Jul 1, 2020

Research paper thumbnail of Les mille et un visages de Segundo de Chomón

Segundo de Chomón (1871-1929) is one of the undisputed masters of early cinematographic trick effects and of the beginnings of coloring moving images. Yet this Spanish pioneer is much more than that, and the importance of his work would doubtless have been better studied without the shadow cast by Georges Méliès. While Chomón may have drawn inspiration from the famous French conjurer in some of his trick films, he nonetheless clearly distinguishes himself through his masterful use of the stop-crank technique (tour de manivelle), shadow plays (ombres chinoises), and reverse motion. Moreover, he remains one of the few who successfully made the transition from the monstrative cinema of the trick films of the 1900s to the institutionalized cinema of the 1910s. The tricks of his first trick scenes at Pathé frères would become special effects in the narrative films whose execution he oversaw, such as Giovanni Pastrone's Maciste alpino in 1916. This book revisits his work in order to understand the thousand and one faces of this remarkable pioneer of the cinematograph: trick-effect artist, colorist, and cinematographer. The book grew out of a conference organized in November 2017 by the Fondation Jérôme Seydoux-Pathé and "Les Arts trompeurs. Machines. Magie. Médias".

Research paper thumbnail of Long term and archivable reproducibility, a summary

Contributions to the XIV.0 Scientific Meeting (virtual) of the Spanish Astronomical Society, Jul 1, 2020

Research paper thumbnail of Differential Photometry in Adaptive Optics Imaging

One application of adaptive optics (AO) is high-resolution imaging of closely spaced objects. Determining differential photometry between the two or more components of a system is essential for deducing their physical properties, such as mass and/or internal structure. The task has implications for (i) Space Situational Awareness, such as the monitoring of fainter microsatellites or debris near a larger object, and (ii) astronomy, such as observations of close, faint stellar companions. We have applied several algorithms to the task of determining the relative photometry of point sources with overlapping point spread functions in images collected with adaptive optics. These algorithms cover a wide range of approaches in the field of image processing. Specifically, we have tested: PSF fitting, multi-frame and single-frame blind deconvolution, a maximum-likelihood approach combined with wavelet decomposition, and a novel one-dimensional deconvolution technique which separates signal a…
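PSF fitting, the first of the approaches listed, reduces to a linear least-squares problem once the source positions and the PSF are known. A minimal sketch with a Gaussian PSF stand-in (the actual work uses measured AO PSFs; the names below are ours):

```python
import numpy as np

def gaussian_psf(shape, center, fwhm):
    # Circular Gaussian PSF stand-in, centered at (x, y)
    y, x = np.indices(shape)
    sigma = fwhm / 2.3548
    r2 = (x - center[0]) ** 2 + (y - center[1]) ** 2
    return np.exp(-r2 / (2.0 * sigma ** 2))

def relative_photometry(image, centers, fwhm):
    """Least-squares fluxes of overlapping sources with a known PSF.

    Solves image ~= sum_i f_i * PSF_i for the fluxes f_i; the
    magnitude difference follows from the flux ratio.
    """
    basis = np.stack(
        [gaussian_psf(image.shape, c, fwhm).ravel() for c in centers], axis=1
    )
    fluxes, *_ = np.linalg.lstsq(basis, image.ravel(), rcond=None)
    return fluxes

# Two blended sources with a 5:1 flux ratio
shape, fwhm = (64, 64), 6.0
blend = 5.0 * gaussian_psf(shape, (30, 32), fwhm) + 1.0 * gaussian_psf(shape, (36, 32), fwhm)
fluxes = relative_photometry(blend, [(30, 32), (36, 32)], fwhm)
```

On noise-free data the fluxes are recovered exactly; the paper's comparison concerns how the different algorithms degrade under noise and PSF uncertainty.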

Research paper thumbnail of Unsupervised blind deconvolution

To reduce the influence of atmospheric turbulence on images of space-based objects, we are developing a maximum a posteriori deconvolution approach. In contrast to techniques found in the literature, we focus on the statistics of the point-spread function (PSF) instead of the object. We incorporated statistical information about the PSF into multi-frame blind deconvolution. Theoretical constraints on the average PSF shape come from the work of D. L. Fried, while for the univariate speckle statistics we rely on the gamma distribution adopted from the radar/laser speckle studies of J. W. Goodman. Our aim is to develop a deconvolution strategy which is reference-less, i.e., no calibration PSF is required, extendable to longer exposures, and applicable to imaging with adaptive optics. The theory and resulting deconvolution framework were validated using simulations and real data from the 3.5m telescope at the Starfire Optical Range (SOR) in New Mexico.
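The gamma model for speckle intensity adopted from Goodman can be checked numerically: the summed intensity of N independent, fully developed speckle patterns is gamma-distributed with shape N, which a method-of-moments estimate recovers. A small illustrative sketch (not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)

def summed_speckle_intensity(n_frames, n_samples):
    """Intensity of `n_frames` summed, fully developed speckle patterns.

    Each frame is |circular complex Gaussian|^2 (exponential intensity);
    the sum follows a gamma distribution with shape n_frames.
    """
    field = rng.normal(size=(n_frames, n_samples)) \
        + 1j * rng.normal(size=(n_frames, n_samples))
    return np.sum(np.abs(field) ** 2 / 2.0, axis=0)

# Method-of-moments shape estimate: k = mean^2 / variance
intensity = summed_speckle_intensity(4, 200_000)
k_hat = intensity.mean() ** 2 / intensity.var()
```

In the deconvolution framework this distribution serves as the per-pixel prior on the PSF, replacing the calibration PSF that reference-based methods require.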

Research paper thumbnail of Extended object reconstruction in adaptive-optics imaging: the multiresolution approach (submitted to Astronomy & Astrophysics)

Aims. We propose the application of multiresolution transforms, such as wavelets (WT) and curvelets (CT), to the reconstruction of images of extended objects that have been acquired with adaptive optics (AO) systems. Such multichannel approaches normally make use of probabilistic tools to distinguish significant structures from noise and reconstruction residuals. Therefore, we also test the performance of two different probabilistic masks: one based on local correlation and the other on local standard deviation. Furthermore, we aim to check the historical assumption that image-reconstruction algorithms using static PSFs are not suitable for AO imaging. Methods. We convolve an image of Saturn taken with the Hubble Space Telescope (HST) with AO PSFs from the 5-m Hale telescope at the Palomar Observatory and add both shot and readout noise. Subsequently, we apply different approaches to the blurred and noisy data in order to recover the original object. The approaches include…

Research paper thumbnail of Extended object reconstruction in adaptive-optics imaging: the multiresolution approach

We propose the application of multiresolution transforms, such as wavelets (WT) and curvelets (CT), to the reconstruction of images of extended objects that have been acquired with adaptive optics (AO) systems. Such multichannel approaches normally make use of probabilistic tools to distinguish significant structures from noise and reconstruction residuals. Furthermore, we aim to check the historical assumption that image-reconstruction algorithms using static PSFs are not suitable for AO imaging. We convolve an image of Saturn taken with the Hubble Space Telescope (HST) with AO PSFs from the 5-m Hale telescope at the Palomar Observatory and add both shot and readout noise. Subsequently, we apply different approaches to the blurred and noisy data in order to recover the original object. The approaches include multi-frame blind deconvolution (with the algorithm IDAC), myopic deconvolution with regularization (with MISTRAL), and wavelet- or curvelet-based static-PSF deconvolution…

Research paper thumbnail of Towards Long-term and Archivable Reproducibility

ArXiv, 2020

Reproducible workflow solutions commonly use high-level technologies that were popular when they were created, providing an immediate solution that is unlikely to be sustainable in the long term. We therefore introduce a set of criteria to address this problem and demonstrate their practicality and implementation. The criteria have been tested in several research publications and can be summarized as: completeness (no dependency beyond a POSIX-compatible operating system, no administrator privileges, no network connection, and storage primarily in plain text); modular design; minimal complexity; scalability; verifiable inputs and outputs; temporal provenance; linking analysis with narrative; and free-and-open-source software. As a proof of concept, we have implemented "Maneage", a solution that stores the project in machine-actionable and human-readable plain text, enables version control, cheap archiving, automatic parsing to extract data provenance, and peer-reviewable…
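The "verifiable inputs and outputs" criterion can be illustrated with plain checksums: record a digest for every input and product once, and any later run can confirm it is operating on identical bytes. A minimal sketch (Maneage itself implements this with Make and plain-text configuration; the helper names below are ours):

```python
import hashlib
import pathlib
import tempfile

def sha256sum(path):
    """Hex SHA-256 digest of a file, for pinning inputs and outputs."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def verify(path, expected):
    """True when a file matches its recorded checksum."""
    return sha256sum(path) == expected

# Record a checksum once; later runs verify they use the same input.
with tempfile.NamedTemporaryFile(delete=False, suffix=".txt") as f:
    f.write(b"input data, version 1\n")
    name = f.name
recorded = sha256sum(name)
```

Storing such digests in version-controlled plain text keeps the verification itself free of dependencies beyond a POSIX environment, in line with the completeness criterion.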