Experimental digital Gabor hologram rendering by a model-trained convolutional neural network
Digital hologram rendering can be performed by a convolutional neural network trained with image pairs calculated by numerical wave propagation from sparse generating images. 512-by-512-pixel digital Gabor magnitude holograms are successfully estimated from experimental interferograms by a standard UNet trained with 50,000 synthetic image pairs over 70 epochs.

Convolutional neural networks have already demonstrated their potential for digital hologram rendering from optically acquired interferograms in free-space propagation conditions [1-4] and through scattering media [5-7]. Our aim here is to determine whether an auto-encoder convolutional neural network, a UNet [8], can be trained on a synthetic database for digital hologram rendering from experimental interferograms. A model of wave propagation is used to create synthetic Gabor interferograms and synthetic Gabor magnitude holograms from random images. This image formation model is based on angular spectrum propagation and magnitude calculation of the wave field from the object to the sensor array, and from the sensor back to the object. In contrast with previously reported computational image rendering schemes with convolutional neural networks, where image formation is statistically inferred from experimental data [1-4], in our approach it is inferred from synthetic data created by physical modeling of wave interference and propagation. Since the UNet training strategy relies heavily on a large and diverse database [8], training on synthetic data alleviates the need for numerous experimental data and for data augmentation.

The convolutional neural network used in this study, sketched in Fig. 1, is a standard UNet [8] with an input image of 512 x 512 pixels, a depth of 7 downsampling blocks, and 7 upsampling blocks. Convolution kernels are 3-by-3-by-n pixels, where n is the number of channels of the input feature map.
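The feature-map bookkeeping of this architecture can be made explicit with a short sketch. The function below is illustrative (not the authors' code); it assumes the convention stated above: the first block maps the 1-channel input to 16 channels, each downsampling block then halves the lateral size and doubles the channel count, and each upsampling block does the reverse.

```python
def unet_shapes(size=512, base_channels=16, depth=7):
    """Return (encoder, decoder) lists of (lateral_size, channels) per block."""
    encoder = [(size, base_channels)]  # after the first set of 16 kernels
    s, c = size, base_channels
    for _ in range(depth):
        s //= 2   # lateral size divided by two between downsampling blocks
        c *= 2    # number of channels multiplied by two
        encoder.append((s, c))
    decoder = []
    for _ in range(depth):
        s *= 2    # lateral size multiplied by two between upsampling blocks
        c //= 2   # number of channels divided by two
        # mirror features from the encoder are concatenated channel-wise here,
        # so each upsampling block's convolutions actually see 2*c input channels
        decoder.append((s, c))
    return encoder, decoder

enc, dec = unet_shapes()
print(enc[0])   # (512, 16): after the first convolution block
print(enc[-1])  # (4, 2048): bottleneck after 7 downsampling blocks
print(dec[-1])  # (512, 16): before the final 1-channel output convolution
```

This makes it easy to check that a depth-7 UNet on a 512 x 512 input bottoms out at a 4 x 4 feature map, which is why the input size and depth must be chosen together.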
The first set of 16 kernels generates a feature map of n = 16 channels from the input image, which has only n = 1 channel. In the downsampling part, the lateral size of the features is divided by two and the number of channels n is multiplied by two between blocks. In the upsampling part, the lateral size of the features is multiplied by two and the number of channels n is divided by two between blocks. Mirror features from the downsampling part are concatenated to their upsampling counterparts.

The UNet is trained with 50,000 image pairs (of which 15% are used for validation). The chosen loss function is the mean-square error between the predicted image H' and the actual training output H; it measures their inconsistency, and the optimization (or deep learning) of the network consists in finding the set of network weights for which this loss function is minimal. The learning rate controls how much the weights of the network are adjusted with respect to the gradient of the loss function.

We construct a database of training input and output image pairs by the procedure illustrated in the flowchart of Fig. 2. A square generating image A of 512 x 512 pixels, which describes the amplitude transmission function of a synthetic object, is constructed by setting a given number N of source points at random locations with random brightness on a black background, then spatially filtering the result by a circular aperture in the Fourier plane. The diameter of the aperture is one half of the diagonal of the reciprocal plane. The values of the array A are positive real numbers. A synthetic Gabor interferogram I is then calculated from this generating image A by angular spectrum propagation [10] of the wave field described by the transmission function A over a distance -z, followed by a rectification consisting of taking the magnitude of the complex-valued array points.
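The data-generation procedure above can be sketched in a few lines of NumPy. This is a minimal illustration of the described pipeline, not the authors' implementation; the wavelength, pixel pitch, propagation distance, and number of source points are placeholder values chosen for the example, not the paper's experimental parameters.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex wave field over distance z by the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)          # spatial frequencies [1/m]
    fx, fy = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - fx**2 - fy**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)   # evanescent components suppressed
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def generating_image(n=512, num_points=20, rng=None):
    """Source points at random locations/brightness on a black background,
    low-pass filtered by a circular aperture in the Fourier plane whose
    diameter is one half of the diagonal of the reciprocal plane."""
    if rng is None:
        rng = np.random.default_rng(0)
    a = np.zeros((n, n))
    idx = rng.integers(0, n, size=(num_points, 2))
    a[idx[:, 0], idx[:, 1]] = rng.random(num_points)
    f = np.fft.fftshift(np.fft.fft2(a))
    yy, xx = np.mgrid[:n, :n] - n / 2
    radius = n * np.sqrt(2) / 4              # aperture diameter = half the diagonal
    a = np.abs(np.fft.ifft2(np.fft.ifftshift(f * (xx**2 + yy**2 <= radius**2))))
    return a / a.max()                       # positive real transmission function A

# Synthetic Gabor interferogram: propagate A over -z, then take the magnitude.
A = generating_image()
wavelength, pitch, z = 650e-9, 5e-6, 0.05    # placeholder optical parameters
I = np.abs(angular_spectrum(A.astype(complex), wavelength, pitch, -z))
```

Pairing each interferogram I with the corresponding magnitude hologram (obtained by propagating back from the sensor plane with the opposite distance sign) yields one training input/output pair of the synthetic database.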