Superresolution Research Papers - Academia.edu

Recently, there has been a great deal of work developing super-resolution algorithms for combining a set of low-quality images to produce a set of higher-quality images. Either explicitly or implicitly, such algorithms must perform the joint task of registering and fusing the low-quality image data. While many such algorithms have been proposed, very little work has addressed the performance bounds for such problems. In this paper, we analyze the performance limits from statistical first principles using Cramér-Rao inequalities. Such analysis offers insight into the fundamental super-resolution performance bottlenecks as they relate to the subproblems of image registration, reconstruction, and image restoration.
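
For reference, the Cramér-Rao bound underlying this kind of analysis limits the covariance of any unbiased estimator of the unknown parameters (here, the registration shifts and high-resolution image samples) by the inverse Fisher information; a generic statement in our own notation, not the paper's, is

```latex
\operatorname{Cov}\!\left(\hat{\boldsymbol{\theta}}\right) \succeq \mathbf{I}(\boldsymbol{\theta})^{-1},
\qquad
\left[\mathbf{I}(\boldsymbol{\theta})\right]_{ij}
  = \mathbb{E}\!\left[
      \frac{\partial \ln p(\mathbf{y};\boldsymbol{\theta})}{\partial \theta_i}\,
      \frac{\partial \ln p(\mathbf{y};\boldsymbol{\theta})}{\partial \theta_j}
    \right],
```

where y denotes the observed low-resolution data and p(y; θ) its likelihood.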

We describe the change of the spatial distribution of the state of polarisation occurring during two-dimensional imaging through a multilayer and, in particular, through a layered metallic flat lens. Linear or circular polarisation of incident light is not preserved due to the difference in the amplitude transfer functions for the TM and TE polarisations. In effect, the transfer function and the point spread function that characterize 2D imaging through a multilayer both have a matrix form, and cross-polarisation coupling is observed for spatially modulated beams with a linear or circular incident polarisation. The point spread function in matrix form is used to characterise the resolution of the superlens for different polarisation states. We demonstrate how the 2D PSF may be used to design a simple diffractive nanoelement consisting of two radial slits. The structure assures the separation of non-diffracting radial beams originating from the two slits in the mask and exhibits the interesting property of a backward power flow between the two rings.

Deep convolutional neural network (CNN) based approaches are the state-of-the-art in various computer vision tasks, including face recognition. Considerable research effort is currently being directed towards further improving deep CNNs by focusing on more powerful model architectures and better learning techniques. However, studies systematically exploring the strengths and weaknesses of existing deep models for face recognition are still relatively scarce in the literature. In this paper, we try to fill this gap and study the effects of different covariates on the verification performance of four recent deep CNN models using the Labeled Faces in the Wild (LFW) dataset. Specifically, we investigate the influence of covariates related to image quality (blur, JPEG compression, occlusion, noise, image brightness, contrast, and missing pixels) and model characteristics (CNN architecture, color information, and descriptor computation), and analyze their impact on the face verification performance of AlexNet, VGG-Face, GoogLeNet, and SqueezeNet. Based on comprehensive and rigorous experimentation, we identify the strengths and weaknesses of the deep learning models, and present key areas for potential future research. Our results indicate that high levels of noise, blur, missing pixels, and brightness have a detrimental effect on the verification performance of all models, whereas the impact of contrast changes and compression artifacts is limited. We also find that the descriptor computation strategy and color information do not have a significant influence on performance.

We introduce the concept of the double-directional mobile radio channel. It is called this because it includes angular information at both link ends, e.g., at the base station and at the mobile station. We show that this angular information can be obtained with synchronized antenna arrays at both link ends. In wideband high-resolution measurements, we used a switched linear array at the receiver and a virtual-cross array at the transmitter. We evaluated the raw measurement data with a technique that alternately used estimation and beamforming, and that relied on ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) to obtain superresolution in both angular domains and in the delay domain. In sample microcellular scenarios (open and closed courtyard, line-of-sight and obstructed line-of-sight), up to 50 individual propagation paths were determined. The major multipath components could be matched precisely to the physical environment by geometrical considerations. Up to three reflection/scattering points per propagation path were identified and localized, lending insight into the multipath spreading properties in a microcell. The extracted multipath parameters allow unambiguous scatterer identification and channel characterization, independently of a specific antenna, its configuration (single/array), and its pattern. The measurement results demonstrated a considerable amount of power being carried via multiply reflected components, thus suggesting revisiting the popular single-bounce propagation models. It turned out that the wideband double-directional evaluation is the most complete method for separating multipath components. Due to its excellent spatial resolution, the double-directional concept provides accurate estimates of the channel's multipath richness, which is the important parameter for the capacity of multiple-input multiple-output (MIMO) channels.

The theme for this thesis is the application of the inverse problem framework with sparsity-enforcing regularization to passive source localization in sensor array processing. The approach involves reformulating the problem in an optimization framework by using an overcomplete basis, and applying sparsifying regularization, thus focusing the signal energy to achieve excellent resolution. We develop numerical methods for enforcing sparsity by using ℓ1 and ℓp regularization. We use the second-order cone programming framework for ℓ1 regularization, which allows efficient solutions using interior point methods. For the ℓp counterpart, the numerical solution is based on half-quadratic regularization. We propose several approaches of using multiple time samples of sensor outputs in synergy, and a method for the automatic choice of the regularization parameter. We conduct extensive numerical experiments analyzing the behavior of our approach and comparing it to existing source localization methods. This analysis demonstrates that our approach has important advantages such as superresolution, robustness to noise and limited data, robustness to correlation of the sources, and no need for accurate initialization. The approach is also extended to allow self-calibration of sensor position errors by using a procedure similar in spirit to block-coordinate descent on an augmented objective function including both the locations of the sources and the positions of the sensors.
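
As a rough numerical sketch of the sparsity-enforcing idea (an overcomplete steering dictionary plus ℓ1 regularization), the snippet below uses a plain ISTA iteration instead of the thesis's second-order cone programming and interior point solvers; the array geometry, angular grid, regularization weight, and function names are illustrative assumptions.

```python
import numpy as np

def steering_matrix(n_sensors, angles_deg, spacing=0.5):
    """Overcomplete far-field steering dictionary for a uniform linear array
    (half-wavelength sensor spacing by default)."""
    k = np.arange(n_sensors)[:, None]
    theta = np.deg2rad(angles_deg)[None, :]
    return np.exp(2j * np.pi * spacing * k * np.sin(theta))

def ista_l1(A, y, lam=0.1, n_iter=500):
    """Solve min_s ||y - A s||_2^2 + lam * ||s||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of the gradient
    s = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = s + (A.conj().T @ (y - A @ s)) / L         # gradient step
        mag = np.abs(g)
        s = g / np.maximum(mag, 1e-12) * np.maximum(mag - lam / (2 * L), 0)  # soft threshold
    return s

# Toy example: two sources at -10 and 23 degrees, single snapshot, 8 sensors.
rng = np.random.default_rng(0)
grid = np.arange(-90, 90.5, 0.5)                       # dense angular grid (overcomplete basis)
A = steering_matrix(8, grid)
x_true = np.zeros(len(grid))
x_true[np.searchsorted(grid, -10)] = 1.0
x_true[np.searchsorted(grid, 23)] = 0.8
y = A @ x_true + 0.05 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
spectrum = np.abs(ista_l1(A, y, lam=0.2))
print(grid[np.argsort(spectrum)[-5:]])                 # strongest grid angles (ideally clustered near -10 and 23)
```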

We compare the performance of video-rate Stimulated Emission Depletion (STED) and confocal microscopy in imaging the interior of living neurons. A lateral resolution of 65 nm is observed in STED movies recorded at 28 frames per second, a 4-fold improvement in spatial resolution over their confocal counterparts. STED microscopy, but not confocal microscopy, allows discrimination of single features at high spatial densities. Specific patterns of movement within the confined space of the axon are revealed in STED microscopy, while confocal imaging is limited to reporting gross motion. Further progress is to be expected, as we demonstrate that the use of continuous wave (CW) beams for excitation and STED is viable for video-rate STED recording of living neurons. Tentatively providing a larger photon flux, CW beams should facilitate extending fast STED imaging towards imaging fainter living samples.

A technique based on superresolution by digital holographic microscopic imaging is presented. We used a two-dimensional (2-D) vertical-cavity surface-emitting laser (VCSEL) array as spherical-wave illumination sources. The method is defined in terms of an incoherent superposition of tilted wavefronts. The tilted spherical wave originating from the 2-D VCSEL elements illuminates the target in transmission mode to obtain a hologram in a Mach-Zehnder interferometer configuration. Superresolved images of the input object above the common lens diffraction limit are generated by sequential recording of the individual holograms and numerical reconstruction of the image with the extended spatial frequency range. We have experimentally tested the approach for a microscope objective with an exact 2-D reconstruction image of the input object. The proposed approach has implementation advantages for applications in biological imaging or the microelectronic industry, in which structured targets are being inspected.

Recently, convolutional neural networks (CNNs) have been successfully applied to many remote sensing problems. However, deep learning techniques for multi-image super-resolution from multitemporal unregistered imagery have received little attention so far. This work proposes a novel CNN-based technique that exploits both spatial and temporal correlations to combine multiple images. This novel framework integrates the spatial registration task directly inside the CNN and exploits the representation learning capabilities of the network to enhance registration accuracy. The entire super-resolution process relies on a single CNN with three main stages: shared 2D convolutions to extract high-dimensional features from the input images; a subnetwork proposing registration filters derived from the high-dimensional feature representations; and 3D convolutions for slow fusion of the features from multiple images. The whole network can be trained end-to-end to recover a single high-resolution image from multiple unregistered low-resolution images. The method presented in this paper is the winner of the PROBA-V super-resolution challenge issued by the European Space Agency.

We present PiCam (Pelican Imaging Camera-Array), an ultra-thin high-performance monolithic camera array that captures light fields and synthesizes high-resolution images along with a range image (scene depth) through integrated parallax detection and superresolution. The camera is passive, supporting both stills and video, low-light capable, and small enough to be included in the next generation of mobile devices, including smartphones. Prior works [Rander et al. 1997; Yang et al. 2002; Zhang and Chen 2004; Tanida et al. 2001; Tanida et al. 2003; Duparré et al. 2004] in camera arrays have explored multiple facets of light field capture: viewpoint synthesis, synthetic refocus, computing range images, high-speed video, and micro-optical aspects of system miniaturization. However, none of these have addressed the modifications needed to achieve the strict form factor and image quality required to make array cameras practical for mobile devices. In our approach, we customize many ...

We describe an innovative methodology for determining the quality of digital images. The method is based on measuring the variance of the expected entropy of a given image upon a set of predefined directions. Entropy can be calculated on a local basis by using a spatial/spatial-frequency distribution as an approximation for a probability density function. The generalized Rényi entropy and the normalized pseudo-Wigner distribution (PWD) have been selected for this purpose. As a consequence, a pixel-by-pixel entropy value can be calculated, and therefore entropy histograms can be generated as well. The variance of the expected entropy is measured as a function of the directionality, and it has been taken as an anisotropy indicator. For this purpose, directional selectivity can be attained by using an oriented 1-D PWD implementation. Our main purpose is to show how such an anisotropy measure can be used as a metric to assess both the fidelity and quality of images. Experimental results show that an index such as this presents some desirable features that resemble those from an ideal image quality function, constituting a suitable quality index for natural images. Namely, in-focus, noise-free natural images have shown a maximum of this metric in comparison with other degraded, blurred, or noisy versions. This result provides a way of identifying in-focus, noise-free images from other degraded versions, allowing an automatic and nonreference classification of images according to their relative quality. It is also shown that the new measure is well correlated with classical reference metrics such as the peak signal-to-noise ratio.
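
For context, the generalized Rényi entropy of order α used for the pixel-wise measurements has the standard form (with the normalized PWD values playing the role of a probability distribution; the notation is ours):

```latex
R_{\alpha}(p) \;=\; \frac{1}{1-\alpha}\,\log_2\!\left(\sum_{n} p_n^{\alpha}\right),
\qquad \alpha > 0,\ \alpha \neq 1,
```

and the anisotropy indicator is then the variance of the expected entropy taken over the set of analysis directions.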

The authors describe the application of modern spectral analysis techniques to synthetic aperture radar data. The purpose is to improve the geometrical resolution of the image with respect to the numerical values related to the compressed coded waveform and the synthetic aperture, so that subsequent classification procedures will have improved performance as well. The classical spectral estimator, i.e. the FFT, produces an image with resolution in azimuth and range bounded by the Rayleigh limits. Super-resolved images are obtained by replacing the FFT with parametric spectral estimators such as those built around an autoregressive model of the dechirped signal. The proposed processing scheme is based on a two-dimensional covariance method. The expected improvement in resolution is discussed together with the results of a simulation analysis. The application of the technique to images captured by an airborne SAR resulted in a resolution gain factor of about two. The paper concludes with a perspective on future research and applications.
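
As a minimal 1-D illustration of replacing the FFT periodogram with a parametric autoregressive spectral estimate, the sketch below uses the Yule-Walker (autocorrelation) equations rather than the two-dimensional covariance method described in the paper; the model order, test signal, and function name are illustrative.

```python
import numpy as np
from scipy.linalg import toeplitz

def ar_psd_yule_walker(x, order, n_freq=1024):
    """Autoregressive PSD estimate via the Yule-Walker (autocorrelation) method."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    # Biased autocorrelation estimates r[0..order]
    r = np.array([np.sum(x[k:] * np.conj(x[:N - k])) / N for k in range(order + 1)])
    # Solve the Yule-Walker equations R a = -r[1:] for the AR coefficients a_1..a_p
    a = np.linalg.solve(toeplitz(r[:-1]), -r[1:])
    sigma2 = np.real(r[0] + np.dot(r[1:].conj(), a))          # driving-noise variance
    w = np.linspace(-np.pi, np.pi, n_freq, endpoint=False)
    A = 1 + np.exp(-1j * np.outer(w, np.arange(1, order + 1))) @ a
    return w, sigma2 / np.abs(A) ** 2

# Two complex exponentials spaced more closely than the Rayleigh (FFT) limit 1/N.
rng = np.random.default_rng(1)
n = np.arange(64)
x = np.exp(1j * 2 * np.pi * 0.20 * n) + np.exp(1j * 2 * np.pi * 0.21 * n)
x = x + 0.1 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
w, psd = ar_psd_yule_walker(x, order=12)
print(w[np.argmax(psd)] / (2 * np.pi))   # dominant normalized frequency (near 0.2)
```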

We present a novel approach to reconstruction-based super-resolution that uses aperiodic pixel tilings, such as a Penrose tiling or a biological retina, for improved performance. To this aim, we develop a new variant of the well-known error back projection super-resolution algorithm that makes use of the exact detector model in its back projection operator for better accuracy. Pixels in our model can vary in shape and size, and there may be gaps between adjacent pixels. The algorithm applies equally well to periodic or aperiodic pixel tilings. We present analysis and extensive tests using synthetic and real images to show that our approach using aperiodic layouts substantially outperforms existing reconstruction-based algorithms for regular pixel arrays. We close with a discussion of the feasibility of manufacturing CMOS or CCD chips with pixels arranged in Penrose tilings.

Much of the progress made in image processing in the past decades can be attributed to better modeling of image content and a wise deployment of these models in relevant applications. This path of models spans from the simple ℓ2-norm smoothness, through robust, thus edge-preserving, measures of smoothness (e.g. total variation), to the very recent models that employ sparse and redundant representations. In this paper, we review the role of this recent model in image processing, its rationale, and models related to it. As it turns out, the field of image processing is one of the main beneficiaries of the recent progress made in the theory and practice of sparse and redundant representations. We discuss ways to employ these tools for various image-processing tasks and present several applications in which state-of-the-art results are obtained.
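
In its standard form, the sparse and redundant representation model referred to here assumes that a signal y admits a representation y ≈ Dx over an overcomplete dictionary D with a sparse coefficient vector x, estimated for example (notation ours) by

```latex
\hat{\mathbf{x}} \;=\; \arg\min_{\mathbf{x}} \|\mathbf{x}\|_0
\quad \text{subject to} \quad \|\mathbf{y} - \mathbf{D}\mathbf{x}\|_2 \le \epsilon ,
```

or by its convex ℓ1 relaxation; the paper reviews how this prior is deployed across the image-processing tasks it surveys.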

Recognition of document images has important applications in restoring old and classical texts. The problem involves quality improvement before passing the image to a properly trained OCR to get accurate recognition of the text. The image enhancement and quality improvement constitute important steps, as subsequent recognition depends upon the quality of the input image. There are scenarios when high-resolution images are not available, and our experiments show that the OCR accuracy reduces significantly with decrease in the spatial resolution of document images. Thus the only option is to improve the resolution of such document images. The goal is to construct a high-resolution image, given a single low-resolution binary image, which constitutes the problem of single image super-resolution. Most of the previous work in super-resolution deals with natural images, which have more information content than document images. Here, we use a Convolutional Neural Network to learn the mapping between low- and the corresponding high-resolution images. We experiment with different numbers of layers, parameter settings, and non-linear functions to build a fast end-to-end framework for document image super-resolution. Our proposed model shows a very good PSNR improvement of about 4 dB on 75 dpi Tamil images, resulting in a 3% improvement of word-level accuracy by the OCR. It takes less time than the recent sparsity-based natural image super-resolution technique, making it useful for real-time document recognition applications.
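
As a rough illustration of the kind of low-to-high-resolution mapping such a network learns, here is a minimal three-layer, SRCNN-style model in PyTorch; the class name, layer widths, kernel sizes, and the assumption of a bicubically upsampled single-channel input are illustrative and not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DocSRNet(nn.Module):
    """Small CNN mapping an upsampled low-resolution document image to its
    high-resolution counterpart (feature extraction -> non-linear mapping -> reconstruction)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=9, padding=4),   # patch/feature extraction
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),             # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),   # reconstruction
        )

    def forward(self, x):
        return self.body(x)

# One training step: minimize pixel-wise MSE against the high-resolution ground truth.
model = DocSRNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

lr_up = torch.rand(8, 1, 64, 64)   # bicubically upsampled LR patches (dummy data)
hr = torch.rand(8, 1, 64, 64)      # corresponding HR patches (dummy data)

optimizer.zero_grad()
loss = loss_fn(model(lr_up), hr)
loss.backward()
optimizer.step()
print(float(loss))
```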

Gradient-based motion estimation techniques (GM) are considered to be at the heart of state-of-the-art registration algorithms, being able to account for both pixel and subpixel registration and to handle various motion models (translation, rotation, affine, projective). These methods estimate the motion between two images based on the local changes in the image intensities while assuming image smoothness. This paper offers two main contributions: (i) enhancement of the GM technique by introducing two new bidirectional formulations of the GM, which improves the convergence properties for large motions; (ii) an analytical convergence analysis of the GM and its properties. Experimental results demonstrate the applicability of these algorithms to real images.
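
For reference, a minimal numpy sketch of the basic (unidirectional) gradient-based translation estimator that such methods build on: linearize the intensity constancy assumption and solve the resulting least-squares system for a sub-pixel shift. The bidirectional formulations and iterative refinement of the paper are not reproduced here, and the toy images are illustrative.

```python
import numpy as np

def estimate_translation(img1, img2):
    """One Gauss-Newton step of gradient-based translation estimation:
    solve [sum Ix^2, sum IxIy; sum IxIy, sum Iy^2] d = -[sum Ix*It, sum Iy*It]."""
    Iy, Ix = np.gradient(img1.astype(float))        # per-pixel spatial derivatives
    It = img2.astype(float) - img1.astype(float)    # temporal difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)                    # (dx, dy) in pixels, valid for small shifts

# Toy check with a smooth image shifted by roughly (0.3, 0.2) pixels.
x, y = np.meshgrid(np.linspace(0, 4 * np.pi, 128), np.linspace(0, 4 * np.pi, 128))
img1 = np.sin(x) * np.cos(y)
img2 = np.sin(x - 0.03) * np.cos(y - 0.02)          # sub-pixel shift (grid spacing is ~0.1)
print(estimate_translation(img1, img2))
```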

The ability to improve the limited resolving power of optical imaging systems while approaching the theoretical diffraction limit has attracted growing interest in recent years due to its benefits in many applied optics systems. This paper presents a new approach to achieve transverse superresolution in far-field imaging systems, with direct application in both digital microscopy and digital holographic microscopy. Theoretical analysis and computer simulations show the validity of the presented approach.

Hyperspectral imaging is a continuously growing area of remote sensing. Hyperspectral data provide a wide spectral range, coupled with a very high spectral resolution, and are suitable for detection and classification of surfaces and chemical elements in the observed image. The main problem with hyperspectral data for these applications is the (relatively) low spatial resolution, which can vary from a few to tens of meters. For classification purposes, the major problem caused by low spatial resolution is related to mixed pixels, i.e., pixels in the image where more than one land cover class is present within the same pixel. In such a case, the pixel cannot be considered as belonging to just one class, and the assignment of the pixel to a single class will inevitably lead to a loss of information, no matter what class is chosen. In this paper, a new supervised technique exploiting the advantages of both probabilistic classifiers and spectral unmixing algorithms is proposed, in order to produce land cover maps of improved spatial resolution. The method consists of three steps. In the first step, a coarse classification is performed, based on the probabilistic output of a Support Vector Machine (SVM). Each pixel is either assigned to a class, if the probability value obtained in the classification process is greater than a chosen threshold, or left unclassified. In the proposed approach it is assumed that the pixels with a low probabilistic output are mixed pixels, and thus their classification is addressed in the second step. In the second step, spectral unmixing is performed on the mixed pixels by considering the preliminary results of the coarse classification step and applying a Fully Constrained Least Squares (FCLS) method to every unlabeled pixel, in order to obtain the abundance fractions of each land cover type. Finally, in the third step, spatial regularization by Simulated Annealing is performed to obtain the resolution improvement. Experiments were carried out on a real hyperspectral data set. The results are good both visually and numerically and show that the proposed method clearly outperforms common hard classification methods when the data contain mixed pixels.
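
Below is a minimal sketch of the fully constrained (non-negative, sum-to-one) unmixing step only, using the common trick of appending a heavily weighted sum-to-one row to the endmember matrix and solving with non-negative least squares; the endmember matrix and pixel are toy data, and the SVM classification and simulated-annealing regularization stages are not shown.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(endmembers, pixel, delta=1e3):
    """Fully Constrained Least Squares abundances for one mixed pixel.

    endmembers : (n_bands, n_classes) spectral signatures
    pixel      : (n_bands,) observed spectrum
    Non-negativity comes from NNLS; the sum-to-one constraint is enforced
    (approximately) by the heavily weighted extra row of delta's.
    """
    n_classes = endmembers.shape[1]
    A = np.vstack([endmembers, delta * np.ones((1, n_classes))])
    b = np.concatenate([pixel, [delta]])
    abundances, _ = nnls(A, b)
    return abundances

# Toy example: 3 land-cover endmembers over 6 spectral bands, pixel = 60/30/10 mixture.
rng = np.random.default_rng(1)
E = rng.uniform(0.1, 0.9, size=(6, 3))
true_fractions = np.array([0.6, 0.3, 0.1])
y = E @ true_fractions + 0.005 * rng.standard_normal(6)
print(fcls_unmix(E, y))   # expected to be close to [0.6, 0.3, 0.1]
```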

We study the effect of a kind of binary phase-only filters, the Toraldo filters, in two-color excitation fluorescence microscopy. We show that by simple insertion of a properly designed Toraldo filter in one of the illumination arms the axial resolution of the system is significantly improved. Specifically, the main peak of the point spread function is narrowed by 22% along the axial direction.

We propose a novel architecture that learns an end-to-end mapping function to improve the spatial resolution of the input natural images. The model is unique in forming a nonlinear combination of three traditional interpolation techniques using the convolutional neural network. Another proposed architecture uses a skip connection with nearest neighbor interpolation, achieving similar results. The architectures have been carefully designed to ensure that the reconstructed images lie precisely in the manifold of high-resolution images, thereby preserving the high-frequency components with fine details. We have compared with the state-of-the-art and recent deep learning based natural image super-resolution techniques and found that our methods are able to preserve the sharp details in the image, while also obtaining comparable or better PSNR. Since our methods use only traditional interpolations and a shallow CNN with fewer and smaller filters, the computational c...

We present an adaptively accelerated Lucy-Richardson (AALR) method for the restoration of an image from its blurred and noisy version. The conventional Lucy-Richardson (LR) method is nonlinear and therefore its convergence is very slow. We present a novel method to accelerate the existing LR method by using an exponent on the correction ratio of LR. This exponent is computed adaptively in each iteration, using first-order derivatives of the deblurred image from the previous two iterations. Upon using this exponent, the AALR method improves speed in the early stages and ensures stability in the later stages of iteration. An expression for the estimation of the acceleration step size in the AALR method is derived. The superresolution and noise amplification characteristics of the proposed method are investigated analytically. Our proposed AALR method shows better results in terms of lower root mean square error (RMSE) and higher signal-to-noise ratio (SNR), in approximately 43% fewer iterations than those required for the LR method. Moreover, the AALR method followed by wavelet-domain denoising yields a better result than the recently published state-of-the-art methods.
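
Below is a minimal sketch of the exponent-accelerated Richardson-Lucy idea: the multiplicative correction ratio is raised to a power q > 1, here chosen by a simple heuristic based on the change between the previous two iterates rather than the paper's exact first-order-derivative rule; the PSF, image, and parameter values are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def accelerated_rl(blurred, psf, n_iter=30):
    """Richardson-Lucy deconvolution with an adaptive exponent on the correction ratio."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1, ::-1]
    x_prev = np.full_like(blurred, blurred.mean())
    x = x_prev.copy()
    for _ in range(n_iter):
        est = fftconvolve(x, psf, mode="same")
        ratio = blurred / np.maximum(est, 1e-12)
        correction = fftconvolve(ratio, psf_flip, mode="same")
        # Heuristic exponent: accelerate while successive iterates still change a lot,
        # fall back towards plain RL (q -> 1) as the iteration stabilizes.
        change = np.linalg.norm(x - x_prev) / (np.linalg.norm(x_prev) + 1e-12)
        q = 1.0 + min(10.0 * change, 2.0)
        x_prev = x
        x = x * np.clip(correction, 1e-12, None) ** q
    return x

# Toy example: restore a Gaussian-blurred, noisy image with the accelerated iteration.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[20:44, 28:36] = 1.0
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
blurred = fftconvolve(img, psf / psf.sum(), mode="same") + 0.01 * rng.standard_normal((64, 64))
restored = accelerated_rl(np.clip(blurred, 1e-6, None), psf, n_iter=40)
print(float(np.abs(restored - img).mean()))
```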

This paper proposes a new micro-particle localization scheme in digital holography. Most conventional digital holography methods are based on the Fresnel transform and have several issues such as twin-image, border effects... To avoid these difficulties, we propose an inverse problem approach, which yields the optimal set of particles that best models the observed hologram image. We solve this global optimization problem by conventional particle detection followed by a local refinement for each particle. Results on both simulated and real digital holograms show strong improvements in the localization of the particles, particularly along the depth dimension. In our simulations, the position precision is about or better than 1 µm rms. Our results also show that the localization precision does not deteriorate for particles near the edges of the field of view.

Based on truncated inverse filtering, a theory for deconvolution of complex fields is studied. The validity of the theory is verified by comparing with experimental data from digital holographic microscopy (DHM) using a high-NA system (NA=0.95). Comparison with standard intensity deconvolution reveals that only complex deconvolution deals correctly with coherent cross-talk. With improved image resolution, complex deconvolution is demonstrated to exceed the Rayleigh limit. The gain in resolution arises by accessing the object's complex field - containing the information encoded in the phase - and deconvolving it with the reconstructed complex transfer function (CTF). Synthetic (based on Debye theory modeled with the experimental parameters of the microscope objective) and experimental amplitude point spread functions (APSF) are used for the CTF reconstruction and compared. Thus, the optical system used for microscopy is characterized quantitatively by its APSF. The role of noise is discussed in the context of complex field deconvolution. As further results, we demonstrate that complex deconvolution does not require any additional optics in the DHM setup while extending the limit of resolution with coherent illumination by a factor of at least 1.64.
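
A bare-bones numpy illustration of the truncated inverse filter applied to a complex field: divide the field spectrum by the coherent transfer function only where its magnitude is large enough to invert reliably, and discard the rest. The synthetic Gaussian-apodized CTF and the threshold are illustrative assumptions, not the experimental CTF of the paper.

```python
import numpy as np

def truncated_inverse_deconvolution(field, ctf, threshold=0.1):
    """Deconvolve a complex field by the coherent transfer function (CTF),
    truncating the inverse filter where |CTF| falls below the threshold."""
    F = np.fft.fft2(field)
    mask = np.abs(ctf) > threshold
    F_dec = np.zeros_like(F)
    F_dec[mask] = F[mask] / ctf[mask]
    return np.fft.ifft2(F_dec)

# Toy example: a complex object imaged through a Gaussian-apodized coherent transfer function.
n = 256
fx = np.fft.fftfreq(n)
FX, FY = np.meshgrid(fx, fx)
ctf = np.exp(-((FX**2 + FY**2) / 0.08**2)).astype(complex)
rng = np.random.default_rng(2)
obj = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
image_field = np.fft.ifft2(np.fft.fft2(obj) * ctf)        # coherent imaging model
recovered = truncated_inverse_deconvolution(image_field, ctf, threshold=0.2)
# Within the retained band the object spectrum is restored up to numerical error.
band = np.abs(ctf) > 0.2
print(float(np.abs(np.fft.fft2(recovered)[band] - np.fft.fft2(obj)[band]).max()))
```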

A new imaging technique that combines compressive sensing and super-resolution techniques is presented. Compressive sensing is accomplished by capturing optically a set of Radon projections. Superresolution measurements are simply taken by introducing a slanted two-dimensional array in the optical system. The goal of the technique is to overcome the resolution limitation that occurs in imaging scenarios where dense pixel sensors with a large number of pixels are not available or cannot be used. With the presented imaging technique, owing to the compressive sensing approach, we were able to reconstruct images with significantly more pixels than measured, and owing to the super-resolution design we were able to achieve resolution significantly beyond that limited by the sensor's pixel size.

Recently, new techniques for night vision cameras have been developed. So-called EMCCD cameras are able to record color information about the scene. However, in low-light situations this imagery becomes noisy. This is also the case for normal CCD cameras in dark situations or in shadowed areas. In this paper we present image enhancement techniques for noisy color imagery. The techniques are based on grey-value image enhancement techniques, in particular dynamic super-resolution reconstruction, which is used to enhance the lightness of the image, and local adaptive contrast enhancement. With the super-resolution technique, the temporal noise in the lightness channel of the imagery is removed. The color information of the images is spatially filtered using the edge information of the enhanced lightness image. The result is colored output imagery with reduced temporal noise.

We present the design and the experimental implementation of a new imaging set-up, based on Liquid Crystal technology, able to obtain super-resolved polarimetric images of polarimetric samples when the resolution is detector-limited. The proposed set-up is a combination of two modules. One of them is an imaging Stokes polarimeter, based on Ferroelectric Liquid Crystal cells, which is used to analyze the polarization spatial distribution of an incident beam. The other module is used to obtain highly resolved intensity images of the sample in an optical system whose resolution is mainly limited by the CCD pixel geometry. It contains a calibrated Parallel Aligned Liquid Crystal on Silicon display employed to introduce controlled linear phases. As a result, a set of different low-resolved intensity images with sub-pixel displacements are captured by the CCD. By properly combining these images and applying a deconvolution process, a super-resolved intensity image of the object is obtained. Finally, the combination of the two different optical modules makes it possible to employ super-resolved images during the polarimetric data reduction calculation, leading to a final polarization image with enhanced spatial resolution. The proposed optical set-up is implemented and experimentally validated by providing superresolved images of an amplitude resolution test and a birefringent resolution test. A significant improvement in the spatial resolution (by a factor of 1.4) of the obtained polarimetric images, in comparison with the images obtained with the regular imaging system, is clearly observed when applying our proposed technique.

In this paper, we propose an image super-resolution (resolution enhancement) algorithm that takes into account inaccurate estimates of the registration parameters and the point spread function. These inaccurate estimates, along with the additive Gaussian noise in the low-resolution (LR) image sequence, result in a different noise level for each frame. In the proposed algorithm, the LR frames are adaptively weighted according to their reliability and the regularization parameter is simultaneously estimated. A translational motion model is assumed. The convergence property of the proposed algorithm is analyzed in detail. Our experimental results using both real and synthetic data show the effectiveness of the proposed algorithm.
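
As a rough numpy sketch of the idea of weighting low-resolution frames by their reliability in a regularized reconstruction, the code below uses inverse-noise-variance weights, a fixed Tikhonov parameter instead of the paper's simultaneous estimation, integer shifts on the high-resolution grid, and an assumed known Gaussian blur; all names and parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(x, shift, r=2, sigma=1.0):
    """Forward model for one LR frame: HR-grid shift, blur, decimate."""
    return gaussian_filter(np.roll(x, shift, axis=(0, 1)), sigma, mode="wrap")[::r, ::r]

def degrade_T(y, shift, r=2, sigma=1.0, hr_shape=None):
    """Adjoint of degrade: zero-fill upsample, blur, shift back."""
    up = np.zeros(hr_shape)
    up[::r, ::r] = y
    return np.roll(gaussian_filter(up, sigma, mode="wrap"), (-shift[0], -shift[1]), axis=(0, 1))

def weighted_sr(frames, shifts, noise_var, lam=0.01, r=2, n_iter=200, step=0.5):
    """Gradient descent on sum_k w_k ||A_k x - y_k||^2 + lam ||x||^2,
    with each frame weighted by its inverse noise variance."""
    hr_shape = (frames[0].shape[0] * r, frames[0].shape[1] * r)
    w = 1.0 / np.asarray(noise_var)
    w = w / w.sum()
    x = r * r * degrade_T(frames[0], shifts[0], r, hr_shape=hr_shape)   # crude initialization
    for _ in range(n_iter):
        grad = lam * x
        for y, s, wk in zip(frames, shifts, w):
            grad += wk * degrade_T(degrade(x, s, r) - y, s, r, hr_shape=hr_shape)
        x -= step * grad
    return x

# Toy example: four shifted, blurred, decimated, differently noisy copies of a smooth scene.
rng = np.random.default_rng(3)
hr = gaussian_filter(rng.standard_normal((64, 64)), 2.0, mode="wrap")
shifts = [(0, 0), (1, 0), (0, 1), (1, 1)]
noise_var = [1e-4, 1e-4, 1e-3, 1e-2]
frames = [degrade(hr, s) + np.sqrt(v) * rng.standard_normal((32, 32))
          for s, v in zip(shifts, noise_var)]
sr = weighted_sr(frames, shifts, noise_var)
print(float(np.abs(sr - hr).mean()))
```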

We report the use of superresolution fluorescence microscopy for studying the nanoscale distribution of protein colocalization in living mammalian cells. Nanoscale imaging is attained both by a targeted and a stochastic fluorescence on-off switching superresolution method, namely by stimulated emission depletion (STED) and ground state depletion microscopy followed by individual molecular return (GSDIM), respectively. Analysis of protein colocalization is performed by bimolecular fluorescence complementation (BiFC). Specifically, a nonfluorescent fragment of the yellow fluorescent protein Citrine is fused to tubulin while a counterpart nonfluorescent fragment is fused to the microtubule-associated protein MAP2, such that fluorescence is reconstituted on contact of the fragment-carrying proteins. Images with resolution down to 65 nm provide a powerful new way for studying protein colocalization in living cells at the nanoscale.

Most structured illumination microscopes use a physical or synthetic grating that is projected into the sample plane to generate a periodic illumination pattern. Albeit simple and cost-effective, this arrangement hampers fast or multi-color acquisition, which is a critical requirement for time-lapse imaging of cellular and sub-cellular dynamics. In this study, we designed and implemented an interferometric approach allowing large-field, fast, dual-color imaging at an isotropic 100-nm resolution based on a subdiffraction fringe pattern generated by the interference of two colliding evanescent waves. Our all-mirror-based system generates illumination patterns of arbitrary orientation and period, limited only by the illumination aperture (NA = 1.45), the response time of a fast, piezo-driven tip-tilt mirror (10 ms), and the available fluorescence signal. At low µW laser powers suitable for long-period observation of live cells and with a camera exposure time of 20 ms, our system permits the acquisition of super-resolved 50 µm by 50 µm images at 3.3 Hz. The possibility it offers for rapidly adjusting the pattern between images is particularly advantageous for experiments that require multi-scale and multi-color information. We demonstrate the performance of our instrument by imaging mitochondrial dynamics in cultured cortical astrocytes. As an illustration of dual-color excitation and dual-color detection, we also resolve interaction sites between near-membrane mitochondria and the endoplasmic reticulum. Our TIRF-SIM microscope provides a versatile, compact and cost-effective arrangement for superresolution imaging, allowing the investigation of co-localization and dynamic interactions between organelles - important questions in both cell biology and neurophysiology.

Single-image super-resolution driven by multihypothesis prediction is considered. The proposed strategy exploits self-similarities existing between image patches within a single image. Specifically, each patch of a low-resolution image is represented as a linear combination of spatially surrounding hypothesis patches. The coefficients of this representation are calculated using Tikhonov regularization and then used to generate a high-resolution image. Experimental results reveal that the proposed algorithm offers significantly higher-quality super-resolution than bicubic interpolation without the cost of training on an extensive training set of imagery as is typical of competing single-image techniques.
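
A small numpy sketch of the core computation described here: express a (vectorized) patch as a linear combination of its spatially surrounding hypothesis patches, with coefficients given by Tikhonov-regularized least squares. The patch size, number of hypotheses, and regularization weight are illustrative; the subsequent step of applying the weights to generate the high-resolution patch is not shown.

```python
import numpy as np

def tikhonov_weights(patch, hypotheses, lam=0.1):
    """Coefficients w minimizing ||patch - H w||^2 + lam * ||w||^2, where the columns
    of H are the vectorized hypothesis patches surrounding the target patch."""
    H = hypotheses.reshape(hypotheses.shape[0], -1).T      # (n_pixels, n_hypotheses)
    p = patch.ravel()
    G = H.T @ H + lam * np.eye(H.shape[1])
    return np.linalg.solve(G, H.T @ p)

# Toy example: a 6x6 patch that is (mostly) a mixture of two of 20 candidate hypotheses.
rng = np.random.default_rng(4)
hyps = rng.standard_normal((20, 6, 6))
target = 0.7 * hyps[3] + 0.3 * hyps[11] + 0.01 * rng.standard_normal((6, 6))
w = tikhonov_weights(target, hyps, lam=0.05)
print(np.argsort(np.abs(w))[-2:])   # indices of the two dominant hypotheses (expect 3 and 11)
```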

Single-molecule localization microscopy methods offer high spatial resolution, but they are not always suitable for live cell imaging due to limited temporal resolution. One strategy is to increase the density of photoactivated molecules present in each image; however, suitable analysis algorithms for such data are still lacking. We present 3denseSTORM, a new algorithm for localization microscopy which is able to recover 2D or 3D super-resolution images from a sequence of diffraction-limited images with high densities of photoactivated molecules. The algorithm is based on sparse support recovery and uses a Poisson noise model, which becomes critical in low-light conditions. For 3D data reconstruction we use the astigmatism and biplane imaging methods. We derive the theoretical resolution limits of the method and show examples of image reconstructions in simulations and in real 2D and 3D biological samples. The method is suitable for fast image acquisition in densely labeled samples and helps facilitate live cell studies with single-molecule localization microscopy.

We present an approach that provides superresolution beyond the classical limit as well as image restoration in the presence of aberrations; in particular, the ability to obtain superresolution while simultaneously extending the depth of field (DOF) is tested experimentally. It is based on a recently proposed approach shown to increase the resolution significantly for in-focus images by speckle encoding and decoding. In our approach, an object multiplied by a fine binary speckle pattern may be located anywhere along an extended DOF region. Since the exact magnification is not known in the presence of defocus aberration, the acquired low-resolution image is electronically processed via a parallel-branch decoding scheme, where in each branch the image is multiplied by the same high-resolution synchronized time-varying binary speckle but with a different magnification. Finally, a hard-decision algorithm chooses the branch that provides the highest-resolution output image, thus achieving insensitivity to aberrations as well as DOF variations. Simulation and experimental results are presented, exhibiting significant resolution improvement factors.

Super-resolution fluorescence imaging based on single-molecule localization relies critically on the availability of efficient processing algorithms to distinguish, identify, and localize emissions of single fluorophores. In multiple current applications, such as three-dimensional, time-resolved, or cluster imaging, high densities of fluorophore emissions are common. Here, we provide an analytic tool to test the performance and quality of localization microscopy algorithms and demonstrate that common algorithms encounter difficulties for samples with high fluorophore density. We demonstrate that, for typical single-molecule localization microscopy methods such as dSTORM and the commonly used rapidSTORM scheme, computational precision limits the acceptable density of concurrently active fluorophores to 0.6 per square micrometer and that the number of successfully localized fluorophores per frame is limited to 0.2 per square micrometer.

This paper provides an overview of some time-reversal (TR) techniques for remote sensing and imaging using ultrawideband (UWB) electromagnetic signals in the microwave and millimeter wave range. The TR techniques explore the TR invariance of the wave equation in lossless and stationary media. They provide superresolution and statistical stability, and are therefore quite useful for a number of remote sensing applications. We first discuss the TR concept through a prototypal TR experiment with a discrete scatterer embedded in continuous random media. We then discuss a series of TR-based imaging algorithms employing UWB signals: DORT, space-frequency (SF) imaging and TR-MUSIC. Finally, we consider a dispersion/loss compensation approach for TR applications in dispersive/lossy media, where TR invariance is broken.

An algorithm to increase the spatial resolution of digital video sequences captured with a camera that is subject to mechanical vibration is developed. The blur caused by vibration of the camera is often the primary cause for image degradation. We address the degradation caused by low-frequency vibrations (vibrations for which the exposure time is less than the vibration period). The blur caused by low-frequency vibrations differs from other types by having a random shape and displacement. The different displacement of each frame makes the approach used in superresolution (SR) algorithms suitable for resolution enhancement. However, SR algorithms that were developed for general types of blur should be adapted to the specific characteristics of low-frequency vibration blur. We use the method of projection onto convex sets together with a motion estimation method specially adapted to low-frequency vibration blur characteristics. We also show that the random blur characterizing low-frequency vibration requires selection of the frames prior to processing. The restoration performance, as well as the frame selection criteria, depends mainly on the motion estimation precision.

This paper describes a hyperspectral image classification method to obtain classification maps at a finer resolution than the image's original resolution. We assume that a complementary color image of high spatial resolution is available. The proposed methodology consists of a soft classification procedure to obtain land-cover fractions, followed by a subpixel mapping of these fractions. While the main contribution of this article is in fact the complete multisource framework for obtaining a subpixel map, the major novelty of this subpixel mapping approach is the inclusion of contextual information, obtained from the color image. Experiments, conducted on two hyperspectral images and one real multisource data set, show excellent results when compared to classification of the hyperspectral data only. The advantage of the contextual approach, compared to conventional subpixel mapping approaches, is clearly demonstrated.

The optical frequency response of a perfect lens partially masked by a retarder has been studied. The optical transfer function (OTF) of such a system depends upon the orientation of the analyser, placed at the output, relative to the slow and fast axes of the mask, and ...

Hallucinating high frequency image details in single image super-resolution is a challenging task. Traditional super-resolution methods tend to produce oversmoothed output images due to the ambiguity in mapping between low and high resolution patches. We build on recent success in deep learning based texture synthesis and show that this rich feature space can facilitate successful transfer and synthesis of high frequency image details to improve the visual quality of super-resolution results on a wide variety of natural textures and images.

Localization-based super-resolution microscopy image quality depends on several factors, such as dye choice and labeling strategy, microscope quality, user-defined parameters such as frame rate and frame number, and the image processing algorithm. Experimental optimization of these parameters can be time-consuming and expensive, so we present TestSTORM, a simulator that can be used to optimize these steps. TestSTORM users can select from among four different structures with specific patterns, as well as dye and acquisition parameters. Example results are shown, and the results of the vesicle pattern are compared with experimental data. Moreover, image stacks can be generated for further evaluation using localization algorithms, offering a tool for further software developments.

It is shown that one can make use of local instabilities in turbulent video frames to enhance image resolution beyond the limit defined by the image sampling rate. We outline the processing algorithm, present its experimental verification on simulated and real-life videos, and discuss its potentials and limitations.

• Isolating the outlier image region by decreasing the corresponding coefficient
• Enhancing robustness in subpixel registration
Advantages of the proposed method:
• Performing well in the presence of motion outliers
• Relatively simple and fast mechanism
• Effective against Gaussian noise
Super-resolution - definition:
• Increasing the information content in the final image by exploiting additional spatio-temporal information, using each of the LR images
• Combining a set of LR images to reconstruct a high-resolution image
Necessity of super-resolution in mobile devices:
• Overcoming the limitations due to optics and sensor resolution (pricing constraints; computational and memory resources)

An optical setup to achieve superresolution in microscopy using holographic recording is presented. The technique is based on off-axis illumination of the object and a simple optical image processing stage after the imaging system for the interferometric recording process. The superresolution effect can be obtained either in one step by combining a spatial multiplexing process and an incoherent addition of different holograms or it can be implemented sequentially. Each hologram holds the information of each different frequency bandpass of the object spectrum. We have optically implemented the approach for a low-numerical-aperture commercial microscope objective. The system is simple and robust because the holographic interferometric recording setup is done after the imaging lens.

Objects that temporally vary slowly can be superresolved by the use of two synchronized moving masks such as pinholes or gratings. This approach to superresolution allows one to exceed Abbe's limit of resolution. Moreover, under coherent illumination, superresolution requires a certain approximation based on the time averaging of intensity rather than of field distribution. When extensive digital postprocessing can be incorporated into the optical system, a detector array and some postprocessing algorithms can replace the grating that is responsible for information decoding. In this way, no approximation is needed and the synchronization that is necessary when two gratings are used is simplified. Furthermore, we present two novel approaches for overcoming distortions when extensive digital postprocessing cannot be incorporated into the optical system. In the first approach, one of the gratings, in the input or at the output plane, is shifted at half the velocity of the other. In the second approach, various spectral regions are transmitted through the system's aperture to facilitate postprocessing. Experimental results are provided to demonstrate the properties of the proposed methods.

Nonlinear cellular neural filters (NCNF) were introduced recently. They are based on the complex non-linearity of multi-valued and universal binary neurons. NCNF include multi-valued filters and cellular neural Boolean filters. Applications of the NCNF to noise reduction, extraction of image details and precise edge detection have been considered recently. This paper develops the previous ideas and presents new results. The following problems are considered in the paper: (1) solution of the super-resolution problem using iterative extrapolation of the orthogonal spectra and final correction of the resulting image using NCNF; (2) precise edge detection using NCNF within a 5 × 5 window and precise edge detection for color images.
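
As a minimal 1-D sketch of the classical iterative spectral-extrapolation step referred to in problem (1), the code below alternately re-imposes the measured low-pass spectrum and the known finite spatial support (a Gerchberg-Papoulis style iteration); the NCNF-based correction stage is not reproduced, and the signal, band, and support are illustrative.

```python
import numpy as np

def iterative_extrapolation(lowpass_signal, band_mask, support_mask, n_iter=200):
    """Spectral extrapolation: keep the measured in-band spectrum fixed and let the
    out-of-band spectrum evolve under the finite spatial support constraint."""
    known_spectrum = np.fft.fft(lowpass_signal)
    x = lowpass_signal.copy()
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X[band_mask] = known_spectrum[band_mask]   # enforce the measured band
        x = np.real(np.fft.ifft(X))
        x[~support_mask] = 0.0                     # enforce the spatial support
    return x

# Toy example: a support-limited signal observed through an ideal low-pass filter.
n = 256
support = np.zeros(n, dtype=bool)
support[108:148] = True
rng = np.random.default_rng(5)
truth = np.zeros(n)
truth[support] = rng.standard_normal(support.sum())
band = np.abs(np.fft.fftfreq(n)) < 0.12            # measured (low-pass) band
observed = np.real(np.fft.ifft(np.fft.fft(truth) * band))
recovered = iterative_extrapolation(observed, band, support, n_iter=500)
print(float(np.abs(recovered - truth).mean()), float(np.abs(observed - truth).mean()))
```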