Image Reconstruction Research Papers - Academia.edu
A prototype breast tomosynthesis system has been developed, allowing a total angular view of ±25°. The detector used in this system is an amorphous selenium direct-conversion digital flat-panel detector suitable for digital tomosynthesis. The system is equipped with various readout sequences to allow the investigation of different tomosynthetic data acquisition modes. In this paper, we present basic physical properties, such as MTF, NPS, and DQE, measured for the full-resolution mode and a binned readout mode of the detector. From the measured projections, slices are reconstructed using a special version of the filtered backprojection algorithm. In a phantom study, we compare the binned and full-resolution acquisition modes with respect to image quality. At constant dose, we investigate the impact of the number of views on artifacts. Finally, we show tomosynthesis images reconstructed from first clinical data.
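The reconstruction step can be illustrated with a toy shift-and-add backprojection, the unfiltered core of tomosynthesis slice reconstruction (the paper's actual algorithm adds a filtering step; the array size, shifts, and point object below are invented for illustration):

```python
import numpy as np

def shift_and_add(projections, shifts):
    """Reconstruct one in-focus plane by shifting each projection back
    by its acquisition shift and averaging (unfiltered backprojection;
    a real tomosynthesis system applies a ramp-style filter first)."""
    acc = np.zeros_like(projections[0], dtype=float)
    for proj, s in zip(projections, shifts):
        acc += np.roll(proj, -s)
    return acc / len(projections)

# Toy example: a 1-D point object projected with per-view shifts
n = 16
shifts = [-2, -1, 0, 1, 2]            # stand-ins for tube angles up to ±25°
obj = np.zeros(n); obj[8] = 1.0
projections = [np.roll(obj, s) for s in shifts]

recon = shift_and_add(projections, shifts)
print(int(np.argmax(recon)))  # 8: the point is restored at its location
```

Structures in the in-focus plane add coherently while out-of-plane structures blur, which is why the number of views matters for artifact behaviour.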
A procedure to fuse the information of short-axis cine and late-enhanced magnetic resonance images is presented. First, a coherent 3D reconstruction of the images is obtained by object-based interpolation of the information of contiguous slices in stacked short-axis cine acquisitions and by correction of slice misalignments with the aid of a set of reference long-axis slices. Then, late-enhanced stacked images are also interpolated and aligned with the anatomical information. Thus, the complementary information provided by both modalities is combined in a common frame of reference and on a nearly isotropic grid, which is not possible with existing fusion procedures. Numerical improvement is established by comparing the distances between unaligned and aligned manual segmentations of the myocardium in both modalities. Finally, a set of snapshots illustrates the improvement in the information overlap and the ability to reconstruct the gradient in the long-axis direction.
Conoscopic holography is an interferometric technique that permits the recording of three-dimensional objects. A two-step scheme is presented to recover an opaque object's shape from its conoscopic hologram, consisting of a reconstruction algorithm to give a first estimate of the shape and an iterative restoration procedure that uses the object's support information to make the reconstruction more robust. The existence, uniqueness, and stability of the solution, as well as the convergence of the restoration algorithm, are studied. A preliminary experimental result is presented.
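The support-constrained iterative restoration step can be sketched with a Gerchberg-style alternating-projection loop: enforce the measured (band-limited) spectrum, then enforce the known object support. This is an analogous toy in 1-D with Fourier data, not the paper's conoscopic algorithm; the signal, support, and band are invented:

```python
import numpy as np

n = 32
true = np.zeros(n); true[10:14] = [1.0, 2.0, 2.0, 1.0]
support = np.zeros(n, dtype=bool); support[10:14] = True

freqs = np.fft.fftfreq(n) * n
keep = np.abs(freqs) <= 10                 # the "measured" low-pass band
measured = np.fft.fft(true) * keep

def restore(measured, keep, support, iters=1000):
    """Alternate between the measured-spectrum constraint and the known
    object support (an error-reduction / Gerchberg-style sketch of
    support-based iterative restoration)."""
    x = np.real(np.fft.ifft(measured))
    for _ in range(iters):
        x = np.where(support, x, 0.0)      # support constraint
        X = np.fft.fft(x)
        X = np.where(keep, measured, X)    # data (spectrum) constraint
        x = np.real(np.fft.ifft(X))
    return np.where(support, x, 0.0)

x0 = np.real(np.fft.ifft(measured))        # band-limited estimate alone
x = restore(measured, keep, support)
print(np.max(np.abs(x - true)) < np.max(np.abs(x0 - true)))  # True
```

The support projection is what makes the recovery robust: both constraint sets contain the true object, so alternating projections drive the estimate toward their intersection.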
Electrical capacitance tomography (ECT) is considered a promising process tomography (PT) technology. Image reconstruction algorithms play an important role in the successful application of ECT. In this paper, a generalized objective functional, developed by combining minimax estimation with a generalized stabilizing functional, is proposed. The Newton algorithm is employed to minimize the proposed objective functional.
Computer vision systems attempt to recover useful information about the three-dimensional world from huge image arrays of sensed values. Since direct interpretation of large amounts of raw data by computer is difficult, it is often convenient to partition (segment) image arrays into low-level entities (groups of pixels with similar properties) that can be compared to higher-level entities derived from representations of world knowledge. Solving the segmentation problem requires a mechanism for partitioning the image array into low-level entities based on a model of the underlying image structure. Using a piecewise-smooth surface model for image data that possesses surface coherence properties, we have developed an algorithm that simultaneously segments a large class of images into regions of arbitrary shape and approximates image data with bivariate functions so that it is possible to compute a complete, noiseless image reconstruction based on the extracted functions and regions. Surface curvature sign labeling provides an initial coarse image segmentation, which is refined by an iterative region growing method based on variable-order surface fitting. Experimental results show the algorithm's performance on six range images and three intensity images.
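The variable-order surface fitting at the heart of the refinement step can be sketched as follows: fit bivariate polynomials of increasing order to a window of image data and accept the lowest order whose residual is small enough. The window size, maximum order, and tolerance below are illustrative, not the paper's settings:

```python
import numpy as np

def fit_surface(z, max_order=4, tol=1e-6):
    """Variable-order bivariate polynomial fitting: raise the order until
    the RMS residual drops below tol, a simplified sketch of the
    region-refinement step of surface-based segmentation."""
    h, w = z.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x, y, t = xs.ravel().astype(float), ys.ravel().astype(float), z.ravel()
    rms = np.inf
    for order in range(1, max_order + 1):
        cols = [x**i * y**j for i in range(order + 1)
                            for j in range(order + 1 - i)]
        A = np.stack(cols, axis=1)
        coef, *_ = np.linalg.lstsq(A, t, rcond=None)
        rms = np.sqrt(np.mean((A @ coef - t) ** 2))
        if rms < tol:
            return order, rms
    return max_order, rms

ys, xs = np.mgrid[0:8, 0:8]
z = 0.5 * xs**2 - xs * ys + 3.0        # an exactly quadratic patch
order, rms = fit_surface(z.astype(float))
print(order)  # 2: the lowest order that fits the patch
```

Extending the fit region while the residual stays low is what drives the iterative region growing described above.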
Multiple view 3D video reconstruction of actor performance captures a level of detail for body and clothing movement which is time-consuming to produce using existing animation tools. In this paper we present a framework for concatenative synthesis from multiple 3D video sequences according to user constraints on movement, position and timing. Multiple 3D video sequences of an actor performing different movements are automatically constructed into a surface motion graph which represents the possible transitions with similar shape and motion between sequences without unnatural movement artefacts. Shape similarity over an adaptive temporal window is used to identify transitions between 3D video sequences. Novel 3D video sequences are synthesized by finding the optimal path in the surface motion graph between user-specified key-frames for control of movement, location and timing. The optimal path which satisfies the user constraints whilst minimizing the total transition cost between 3D video sequences is found using integer linear programming. Results demonstrate that this framework allows flexible production of novel 3D video sequences which preserve the detailed dynamics of the captured movement for an actress with loose clothing and long hair, without visible artefacts.
Patient motion during brain SPECT studies can degrade resolution and introduce distortion. We have developed a correction method which incorporates a motion-tracking system to monitor the position and orientation of the patient's head during acquisition. Correction is achieved by spatially repositioning projections according to measured head movements and reconstructing these projections with a fully three-dimensional (3D) algorithm. The method has been evaluated in SPECT studies of the Hoffman 3D brain phantom performed on a triple-head camera with fan-beam collimation. Movements were applied to the phantom and recorded by a head tracker during SPECT acquisition. Fully 3D reconstruction was performed using the motion data provided by the tracker. Correction accuracy was assessed by comparing the corrected and uncorrected studies with a motion-free study, visually and by calculating mean squared error (MSE). In all studies, motion correction reduced distortion and improved MSE by a factor of 2 or more. We conclude that this method can compensate for head motion under clinical SPECT imaging conditions.
This work describes a new approach for the computation of 3D Fourier descriptors, which are used for characterization, classification, and recognition of 3D objects. The method starts with a polygonized surface which is mapped onto a unit sphere using an inflation algorithm, after which the polyhedron is expanded in spherical harmonic functions. Homogeneous distribution of the vertices is achieved by applying an iterative watershed algorithm to the surface graph.
Fractional-pixel accuracy Motion Estimation (ME) has been shown to result in higher quality reconstructed image sequences in hybrid video coding systems. However, the higher quality is achieved by notably increased Motion Field (MF) bitrate and more complex computations. In this paper, new half-pixel block matching ME algorithms are proposed to improve the rate-distortion characteristics of low bitrate video communications. The proposed methods tend to decrease the required video bandwidth, while improving the motion compensation quality. The key idea is to put a deeper focus on the search origin of the ME process, based on center-bias characteristics of low bitrate video MFs. To employ the benefits of Mesh-based ME (MME), the introduced algorithms are also examined in the framework of a fast MME scheme. Experimental results show the efficiency of the proposed schemes, especially when employed in the MME approach, so that a reduction of more than 20% in the MF bitrate is achieved when employing typical QCIF-formatted image sequences.
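The center-biased half-pixel search idea can be sketched as block matching in which candidate vectors are ordered by distance from the (0, 0) search origin, with half-pixel samples formed by averaging neighbours. This is a minimal toy, not the paper's algorithms; the images, search radius, and SAD cost are illustrative:

```python
import numpy as np

def sad(a, b):
    return float(np.sum(np.abs(a - b)))

def shift(img, dy2, dx2):
    """Shift by half-pixel units (dy2/dx2 count halves of a pixel);
    .5 positions use the average of the two nearest rows/columns."""
    a = np.roll(np.roll(img, dy2 // 2, axis=0), dx2 // 2, axis=1)
    if dy2 % 2:
        a = 0.5 * (a + np.roll(a, 1, axis=0))
    if dx2 % 2:
        a = 0.5 * (a + np.roll(a, 1, axis=1))
    return a

def half_pixel_me(ref, cur, radius=2):
    """Center-biased half-pixel block matching: candidates are ordered by
    distance from the (0, 0) origin, so cost ties keep the vector closest
    to the origin, matching low-bitrate center-bias statistics."""
    cands = sorted(((dy, dx)
                    for dy in range(-2 * radius, 2 * radius + 1)
                    for dx in range(-2 * radius, 2 * radius + 1)),
                   key=lambda v: abs(v[0]) + abs(v[1]))
    best = min(cands, key=lambda v: sad(shift(ref, *v), cur))
    return best[0] / 2.0, best[1] / 2.0

ref = np.arange(64, dtype=float).reshape(8, 8)
cur = shift(ref, 1, 0)                 # true motion: half a pixel down
print(half_pixel_me(ref, cur))         # (0.5, 0.0)
```

Favouring near-origin vectors on ties is one way the MF bitrate drops: shorter vectors cost fewer bits under typical motion-vector entropy codes.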
The Algebraic Reconstruction Technique (ART) is an iterative image reconstruction algorithm. During the development of the Clear-PEM device, a PET scanner designed for the evaluation of breast cancer, multiple tests were done in order to optimise the reconstruction process. A comparison between ART, MLEM, and OSEM indicates that ART can perform faster and with better image quality than these more common algorithms. This paper argues that if ART's relaxation parameter is carefully adjusted to the reconstruction procedure, it can produce high-quality images in short computational time. This is confirmed by showing that, with the relaxation parameter evolving as a logarithmic function, ART can match the MLEM and OSEM algorithms in image quality and surpass them in computational time. However, this study was performed only with simulated data, and the level of noise with real data may be different.
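ART's core update is a Kaczmarz-style row projection scaled by the relaxation parameter. The sketch below applies a logarithmically decaying schedule to a tiny consistent system; the specific schedule and system are illustrative, not Clear-PEM's:

```python
import numpy as np

def art(A, b, iters=200, lam0=1.0):
    """Kaczmarz-style ART sweeps with a relaxation parameter that decays
    as a logarithmic function of the sweep index (the exact schedule used
    for Clear-PEM is not given here; this one is an assumed example)."""
    x = np.zeros(A.shape[1])
    for k in range(iters):
        lam = lam0 / (1.0 + np.log(1.0 + k))   # assumed log-type decay
        for a_i, b_i in zip(A, b):
            x = x + lam * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Consistent toy system standing in for the projection equations
A = np.array([[2.0, 1.0], [1.0, 3.0], [1.0, -1.0]])
x_true = np.array([1.0, 2.0])
b = A @ x_true

x = art(A, b)
print(np.allclose(x, x_true, atol=1e-6))  # True
```

A larger early relaxation speeds initial convergence while the decaying tail damps the noise amplification that fixed large relaxation would cause, which is the trade-off the logarithmic schedule is tuning.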
This paper presents the mathematical framework of radial Tchebichef moment invariants, and investigates their feature representation capabilities for pattern recognition applications. The radial Tchebichef moments are constructed using the discrete ...
We consider the problem of estimating a single unblurred, noise-reduced image from a sequence of P randomly translated images corrupted with Poisson noise. We develop a new algorithm based on maximum-likelihood (ML) estimation for two unknown parameters: the reconstructed image itself and the set of translations of the low-light-level images. We demonstrate that the ML reconstructed image is proportional to the sum of the low-light-level images after correcting for the unknown movement and that its entropy is minimal. The images of the sequence are matched together by means of an iterative minimum-entropy algorithm, in which a systematic search over displacements of the images is performed. We develop a fast version of this algorithm, and we present results for simulated images and experimental data. The probability of good matching of a low-level image sequence is estimated numerically as the light level of the images in the sequence decreases, corresponding to small numbers of photons detected (down to 20) in each image of the sequence. We compare these results with those obtained when the low-light-level images are matched to a known reference, i.e., the linear correlation method, and with those from the method that is optimal when the noise has a Poisson distribution. This approach is applied to astronomical images that are acquired by photocounting from a balloon-borne ultraviolet imaging telescope.
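The minimum-entropy matching loop can be sketched in 1-D: each frame is shifted so that the entropy of the accumulated sum is minimal. Correct alignment keeps the sum concentrated (low entropy) while misalignment spreads it. The object, shifts, and greedy search below are invented for illustration:

```python
import numpy as np

def entropy(img):
    p = img / img.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def align_min_entropy(frames, max_shift=3):
    """Greedy minimum-entropy matching: shift each frame so that the
    running sum of aligned frames has minimal entropy (a 1-D sketch of
    the iterative minimum-entropy algorithm described above)."""
    acc = frames[0].astype(float).copy()
    shifts = [0]
    for f in frames[1:]:
        best = min(range(-max_shift, max_shift + 1),
                   key=lambda s: entropy(acc + np.roll(f, -s)))
        shifts.append(best)
        acc += np.roll(f, -best)
    return acc, shifts

n = 16
obj = np.zeros(n); obj[5] = 4.0; obj[6] = 2.0   # toy low-light object
true_shifts = [0, 2, -1, 3]
frames = [np.roll(obj, s) for s in true_shifts]

acc, shifts = align_min_entropy(frames)
print(shifts)  # [0, 2, -1, 3]: the applied shifts are recovered
```

Because entropy is invariant to overall scaling of the summed image, a perfectly aligned sum keeps the entropy of the object itself, which is why the minimum identifies the true displacement.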
We advocate the use of point sets to represent shapes. We provide a definition of a smooth manifold surface from a set of points close to the original surface. The definition is based on local maps from differential geometry, which are approximated by the method of moving least squares (MLS). The computation of points on the surface is local, which results in an out-of-core technique that can handle any point set. We show that the approximation error is bounded and present tools to increase or decrease the density of the points, thus allowing an adjustment of the spacing among the points to control the error. To display the point set surface, we introduce a novel point rendering technique. The idea is to evaluate the local maps according to the image resolution. This results in high quality shading effects and smooth silhouettes at interactive frame rates.
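The moving-least-squares idea can be sketched in 2-D: weight nearby samples with a Gaussian centred on the query point and fit a local weighted least-squares line. This is a simplified degree-1 sketch with vertical projection; the full MLS surface definition projects along the local reference plane and iterates. Point set, bandwidth, and query are invented:

```python
import numpy as np

def mls_project(points, q, h=0.5):
    """Project q vertically onto a locally weighted least-squares line
    through the point set (a degree-1, 1-D-manifold sketch of MLS)."""
    x, y = points[:, 0], points[:, 1]
    w = np.exp(-((x - q[0]) ** 2) / h ** 2)      # Gaussian MLS weights
    A = np.stack([np.ones_like(x), x], axis=1)
    AtW = A.T * w
    c = np.linalg.solve(AtW @ A, AtW @ y)        # weighted normal equations
    return np.array([q[0], c[0] + c[1] * q[0]])

t = np.linspace(-1.0, 2.0, 12)
pts = np.stack([t, 2.0 * t + 1.0], axis=1)       # samples on y = 2x + 1
proj = mls_project(pts, np.array([0.5, 3.0]))
# the off-surface query projects back to (0.5, 2.0) on the sampled line
```

The computation touches only samples with non-negligible weight, which is the locality property that makes the out-of-core evaluation described above possible.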
Aims. We aim to explore the photosphere of the very cool late-type star VX Sgr and, in particular, the existence and characterization of molecular layers above the continuum-forming photosphere. Methods. We obtained interferometric observations with the VLTI/AMBER interferometer using the fringe tracker FINITO in the spectral domain 1.45-2.50 micron with a spectral resolution of about 35 and baselines ranging from 15 to 88 meters. We perform independent image reconstruction for different wavelength bins and fit the interferometric data with a geometrical toy model. We also compare the data to 1D dynamical models of Mira atmospheres and to 3D hydrodynamical simulations of red supergiant (RSG) and asymptotic giant branch (AGB) stars. Results. Reconstructed images and visibilities show a strong wavelength dependence. The H-band images display two bright spots whose positions are confirmed by the geometrical toy model. The inhomogeneities are qualitatively predicted by 3D simulations. At about 2.00 micron and in the region 2.35-2.50 micron, the photosphere appears extended and the radius is larger than in the H band. In this spectral region, the geometrical toy model locates a third bright spot outside the photosphere that can be a feature of the molecular layers. The wavelength dependence of the visibility can be qualitatively explained by 1D dynamical models of Mira atmospheres. The best-fitting photospheric models show a good match with the observed visibilities and give a photospheric diameter of theta = 8.82 ± 0.50 mas. The H2O molecule seems to be the dominant absorber in the molecular layers. Conclusions. We show that the atmosphere of VX Sgr resembles Mira/AGB star model atmospheres more than RSG model atmospheres. In particular, we see molecular (water) layers that are typical of Mira stars.
In this paper, we present a 3-D localization method for a magnetically actuated soft capsule endoscope (MASCE). The proposed localization scheme consists of three steps. First, the MASCE is oriented to be coaxially aligned with an external permanent magnet (EPM). Second, the MASCE is axially contracted by the enhanced magnetic attraction of the approaching EPM. Third, the MASCE recovers its initial shape as the EPM retracts and the magnetic attraction weakens. The combination of the direction estimated in the coaxial alignment step and the distance estimated in the shape deformation (recovery) step provides the position of the MASCE in 3-D. It is experimentally shown that the proposed localization method provides a distance error of 2.0-3.7 mm in 3-D. This study also introduces two new applications of the proposed localization method. First, based on the trace of contact points between the MASCE and the surface of the stomach, the 3-D geometrical model of a synthetic stomach was reconstructed. Next, the relative tissue compliance at each local contact point in the stomach was characterized by measuring the local tissue deformation at each point due to the preloading force. Finally, the characterized relative tissue compliance parameter was mapped onto the geometrical model of the stomach toward future use in disease diagnosis.
PSF (point spread function) based image reconstruction causes an overshoot at sharp intensity transitions (edges) of the object. This edge artifact, or ringing, has not been fully studied. In this work, we analyze the properties of edge artifacts in PSF-based reconstruction in an effort to develop mitigation methods. Our study is based on 1D and 2D simulation experiments. Two approaches are adopted to analyze the artifacts. In the system theory approach, we relate the presence of edge artifacts to the null space and conditioning of the imaging operator. We show that edges cannot be accurately recovered with a practical number of image updates when the imaging matrices are poorly conditioned. In the frequency-domain analysis approach, we calculate the object-specific modulation transfer function (OMTF) of the system, defined as the spectrum of the reconstruction divided by the spectrum of the object. We observe an amplified frequency band in the OMTF of PSF-based reconstruction and find that this band is directly related to the presence of ringing. Further analysis shows the amplified band is linearly related to the kernel frequency support (the reciprocal of the reconstruction kernel FWHM), and that this relation holds for different objects. Based on these properties, we develop a band-suppression filter to mitigate edge artifacts. We apply the filter to simulation and patient data, and compare its performance with other mitigation methods. Analysis shows the band-suppression filter provides a better tradeoff between resolution and ringing suppression than a low-pass filter.
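The OMTF definition above is straightforward to compute with FFTs. The 1-D sketch below uses a toy object and smoothing kernel (both invented) and checks the ratio only where the object spectrum is non-negligible, since the OMTF is undefined at the object's spectral zeros:

```python
import numpy as np

def omtf(recon, obj, eps=1e-9):
    """Object-specific MTF: spectrum of the reconstruction divided by the
    spectrum of the object, evaluated only where the object spectrum is
    non-negligible (a 1-D sketch of the definition given above)."""
    R, O = np.fft.rfft(recon), np.fft.rfft(obj)
    mask = np.abs(O) > eps
    out = np.zeros(R.shape)
    out[mask] = np.abs(R[mask]) / np.abs(O[mask])
    return out, mask

n = 64
obj = np.zeros(n); obj[20:40] = 1.0                            # sharp edges
kernel = np.zeros(n); kernel[[0, 1, -1]] = [0.5, 0.25, 0.25]   # toy PSF
recon = np.real(np.fft.ifft(np.fft.fft(obj) * np.fft.fft(kernel)))

m, mask = omtf(recon, obj)
K = np.abs(np.fft.rfft(kernel))
# For pure circular convolution the OMTF reproduces the kernel's MTF
print(np.allclose(m[mask], K[mask]))  # True
```

In an iterative PSF-based reconstruction the OMTF would instead show values above 1 in the amplified band; locating that band is what drives the band-suppression filter design.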
Coronary calcified plaque (CP) is both an important marker of atherosclerosis and a major determinant of the success of coronary stenting. Intracoronary optical coherence tomography (OCT) with high spatial resolution can provide detailed volumetric characterization of CP. We present a semiautomatic method for segmentation and quantification of CP in OCT images. Following segmentation of the lumen, guide wire, and arterial wall, the CP was localized by edge detection and traced using a combined intensity and gradient-based level-set model. From the segmented regions, quantification of the depth, area, angle fill fraction, and thickness of the CP was demonstrated. Validation by comparing the automatic results to expert manual segmentation of 106 in vivo images from eight patients showed an accuracy of 78 ± 9%. For a variety of CP measurements, the bias was insignificant (except for depth measurement) and the agreement was adequate when the CP has a clear outer border and no guide-wire overlap. These results suggest that the proposed method can be used for automated CP analysis in OCT, thereby facilitating our understanding of coronary artery calcification in the process of atherosclerosis and helping guide complex interventional strategies in coronary arteries with superficial calcification.
In order to construct a 3D model from a collection of 2D images of an object, an energy function is defined between the object's images and corresponding images of an articulated mesh in three dimensions. Repeated adjustment of the mesh to minimize the energy function results in a mesh that produces images which closely approximate the input images; that is to say, under the appropriate conditions it realizes a preconceived object. It has implications for model building, reverse engineering, and computer vision. Minimization of the energy function is a large-scale multivariate problem with many local minima. We give an approach for solving this problem. For certain restricted but useful applications, intuitive solutions to the minimization are consistently obtained.
Chapter 10, "High-Dynamic Range Imaging for Dynamic Scenes" by Celine Loscos and Katrien Jacobs, covers an introduction (10.1), the definition of high-dynamic range images (10.2), and HDR image creation from multiple exposures (10.3).
The aim of this work is the presentation and comparison of state-of-the-art dedicated PET systems currently available on the market, in terms of physical performance and technical features. Particular attention has been given to evaluating whole-body performance via sensitivity, spatial resolution, dead time, noise equivalent count rate (NECR), and scatter fraction. PET/CT systems were also included as new proposals to improve the diagnostic accuracy of PET, allowing effective integration of anatomic and functional data. An overview of currently implemented reconstruction algorithms is also reported to fully understand all of the factors that contribute to image quality.
This work implements a high-performance JPEG-LS encoder. The encoding process follows the principles of the JPEG-LS lossless mode. The proposed implementation consists of an efficient pipelined JPEG-LS encoder, which operates at a significantly higher encoding rate than any other JPEG-LS hardware or software implementation while keeping the area small.
Videokeratometers and Scheimpflug cameras permit accurate estimation of corneal surfaces. From height data, it is possible to fit analytical surfaces that are later used for aberration calculation. Zernike polynomials are often used as fitting polynomials, but they have been shown to be imprecise when describing highly irregular surfaces. We propose a combined zonal and modal method that allows an accurate reconstruction of corneal surfaces from height data, diminishing the influence of smooth areas over irregular zones and vice versa. The surface fitting error is decreased in the considered cases, mainly in the central region, which is the most important optically. Therefore, the method can be established as an accurate resampling technique.
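The combined modal-plus-zonal idea can be sketched in 1-D: a global (modal) polynomial captures the smooth shape, and per-zone corrections absorb local irregularity so rough areas do not bias the smooth fit. The height profile, zone layout, and per-zone constants below are a deliberately crude stand-in for the paper's richer zonal terms:

```python
import numpy as np

def modal_plus_zonal(x, z, order=3, nzones=4):
    """Two-stage fit: a global (modal) polynomial, then per-zone offsets
    on the residual (a 1-D sketch of combining modal and zonal fitting;
    the paper's zonal part is far richer than per-zone constants)."""
    A = np.vander(x, order + 1)
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = z - A @ coef
    zones = np.minimum((x * nzones).astype(int), nzones - 1)
    local = np.array([resid[zones == k].mean() for k in range(nzones)])
    return A @ coef + local[zones]

x = np.linspace(0.0, 1.0, 40)
z = 1.0 - 0.5 * x**2                   # smooth "corneal" height profile
z[x > 0.75] += 0.2                     # local irregularity in one zone

fit = modal_plus_zonal(x, z)
rms = np.sqrt(np.mean((fit - z) ** 2))
coef, *_ = np.linalg.lstsq(np.vander(x, 4), z, rcond=None)
rms_modal = np.sqrt(np.mean((np.vander(x, 4) @ coef - z) ** 2))
print(rms < rms_modal)  # True: zonal correction reduces the fit error
```

Confining the irregularity to its zone is what keeps it from distorting the smooth modal fit elsewhere, which is the decoupling the abstract describes.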
Checking railway status is critical to guaranteeing high operating safety, a proper maintenance schedule, and low maintenance and operating costs. This operation consists of the analysis of the rail profile and level as well as overall geometry and undulation. Traditional detection systems are based on mechanical devices in contact with the track. Innovative approaches are based on laser scanning and image analysis. This paper presents an efficient composite technique for track profile extraction with real-time image processing. High throughput is obtained by algorithmic prefiltering to restrict the image area containing the track profile, while high accuracy is achieved by neural reconstruction of the profile itself.
Electrical impedance tomography (EIT) is a promising imaging modality for the early detection of lung disease. EIT imaging has several advantages: it is safe, simple, and low-cost. However, the best reconstruction method is still under investigation. This paper proposes filtered backprojection to produce an image of an anomaly in the lungs. The method is applied to reconstruct an image from relative potential data of expiration and inspiration, and also from relative potential data of a normal lung and a lung with an anomaly. The simulation shows that filtered backprojection from the relative potential data of expiration and inspiration is not able to detect anomalies, but reconstruction from the relative potential data of normal lungs and lungs with an anomaly detects the presence and position of the anomaly, although it cannot yet distinguish the anomaly's size.
We present an image-based 3D reconstruction pipeline for acquiring geo-referenced semi-dense 3D models. Multiple overlapping images captured from a micro aerial vehicle platform provide a highly redundant source for multiview reconstructions. Publicly available geo-spatial information sources are used to obtain an approximation to a digital surface model (DSM). Models obtained by the semi-dense reconstruction are automatically aligned to the DSM to allow the integration of highly detailed models into the original DSM and to provide geographic context.
In this work, we present a method for approximating constrained maximum entropy (ME) reconstructions of SPECT data with modifications to a block-iterative maximum a posteriori (MAP) algorithm. Maximum likelihood (ML)-based reconstruction algorithms require some form of noise smoothing. Constrained ME provides a more formal method of noise smoothing without requiring the user to select parameters. In the context of SPECT, constrained ME seeks the minimum-information image estimate among those whose projections are a given distance from the noisy measured data, with that distance determined by the magnitude of the Poisson noise. Images that meet the distance criterion are referred to as feasible images. We find that modeling of all principal degrading factors (attenuation, detector response, and scatter) in the reconstruction is critical because feasibility is not meaningful unless the projection model is as accurate as possible. Because the constrained ME solution is the same as a MAP solution for a particular value of the MAP weighting parameter, beta, the constrained ME solution can be found with a MAP algorithm if the correct value of beta is found. We show that the RBI-MAP algorithm, if used with a dynamic scheme for estimating beta, can approximate constrained ME solutions in 20 or fewer iterations. We compare results for various methods of achieving feasible images on a simulation of Tl-201 cardiac SPECT data. Results show that the RBI-MAP ME approximation provides images and quantitative estimates close to those from a slower algorithm that gives the true ME solution. Also, we find that the ME results have higher spatial resolution and greater high-frequency noise content than a feasibility-based stopping rule, feasibility-based low-pass filtering, and a quadratic Gibbs prior with beta selected according to the feasibility criterion.
We conclude that fast ME approximation is possible using either RBI-MAP with the dynamic procedure or a feasibility-based stopping rule, and that such reconstructions may be particularly useful in applications where resolution is critical.
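The feasibility criterion can be sketched as a chi-square-style test: the summed squared projection mismatch, normalised by the Poisson variance (which equals the mean for Poisson data), should be close to the number of measured bins. The projection values, tolerance, and bias factor below are invented for illustration:

```python
import numpy as np

def feasible(model_proj, noisy_proj, tol=0.2):
    """Feasibility test behind the constrained-ME distance criterion: the
    per-bin normalised squared mismatch should average to about 1 when
    the mismatch is pure Poisson noise (tol is an illustrative choice)."""
    chi2 = np.sum((noisy_proj - model_proj) ** 2
                  / np.maximum(model_proj, 1e-9))
    return abs(chi2 / noisy_proj.size - 1.0) < tol

rng = np.random.default_rng(0)
truth = np.full(1000, 50.0)                  # toy projection means
noisy = rng.poisson(truth).astype(float)     # Poisson-noisy measurement

print(feasible(truth, noisy))        # True: the correct model is feasible
print(feasible(truth * 1.5, noisy))  # False: a biased model is not
```

In the algorithm above, beta is adapted until the reconstruction's projections pass exactly this kind of test, which is why an accurate projection model is a precondition for feasibility to be meaningful.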
Non-quadratic regularization based image formation is a recently proposed framework for feature-enhanced radar imaging. Specific image formation techniques in this framework have so far focused on enhancing one type of feature, such as strong point scatterers, or smooth regions. However, many scenes contain a number of such features. We develop an image formation technique that simultaneously enhances multiple types of features by posing the problem as one of sparse signal representation based on overcomplete dictionaries. Due to the complex-valued nature of the reflectivities in SAR, our new approach is designed to sparsely represent the magnitude of the complex-valued scattered field in terms of multiple features, which turns the image reconstruction problem into a joint optimization problem over the representation of the magnitude and the phase of the underlying field reflectivities. We formulate the mathematical framework needed for this method and propose an iterative solution for the corresponding joint optimization problem. We demonstrate the effectiveness of this approach on various SAR images.
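Sparse representation over an overcomplete dictionary can be sketched with iterative soft-thresholding (ISTA) on a real-valued toy: a dictionary of point-scatterer atoms plus smooth-region atoms, and a scene containing one of each. The dictionary, scene, and l1 weight are invented, and the sketch omits the paper's joint optimization over the complex phase:

```python
import numpy as np

def ista(D, y, lam=0.05, iters=500):
    """Iterative soft-thresholding for the sparse-representation problem
    min 0.5 * ||D x - y||^2 + lam * ||x||_1 over an overcomplete
    dictionary D (real-valued sketch; the SAR method above additionally
    optimizes over the phase of the complex reflectivities)."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of grad
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        g = x - D.T @ (D @ x - y) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

n = 32
spikes = np.eye(n)                           # point-scatterer atoms
blocks = np.zeros((n, 4))                    # smooth-region atoms
for k in range(4):
    blocks[k * 8:(k + 1) * 8, k] = 1.0
D = np.hstack([spikes, blocks])              # overcomplete: 36 atoms

y = np.zeros(n); y[8:16] = 1.0; y[3] = 5.0   # one region + one point
x = ista(D, y)
top = sorted(int(i) for i in np.argsort(np.abs(x))[-2:])
print(top)  # [3, 33]: the spike atom at 3 and the block atom for 8..15
```

The l1 penalty prefers the single block atom over eight individual spikes for the smooth region, which is exactly how an overcomplete dictionary lets one reconstruction enhance both feature types at once.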
Hough transform (HT) is a classical method for detecting or segmenting geometric objects in images. In this paper, we study the principle of the Hough transform and its mathematical expressions, and apply a new Hough-transform-based approach for fast line and circle detection in image processing. Our method accurately detected simple graphics such as straight lines of different directions and circles of different thickness and number. The results show that our method consumes less memory and computes faster, and could be applied to line detection and segmentation in 3D ultrasonic images.
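The line-detection half of the method rests on the standard Hough accumulator: every edge pixel (x, y) votes for all parameters (rho, theta) satisfying rho = x cos(theta) + y sin(theta), and collinear points pile their votes into one cell. A minimal sketch (the array sizes and vertical-line test case are illustrative, not from the paper):

```python
import numpy as np

def hough_lines(edge_points, shape, n_theta=180):
    """Accumulate votes in (rho, theta) space for a set of edge pixels.
    Each point (x, y) votes for every line rho = x*cos(theta) + y*sin(theta)."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))  # rho offset so indices are non-negative
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for x, y in edge_points:
        rho = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rho, np.arange(n_theta)] += 1
    return acc, thetas, diag

# A vertical line x = 5 concentrates all 20 votes into one accumulator cell.
points = [(5, y) for y in range(20)]
acc, thetas, diag = hough_lines(points, (20, 20))
peak_rho, peak_theta = np.unravel_index(np.argmax(acc), acc.shape)
print(acc.max(), abs(peak_rho - diag))
```

Circle detection works the same way with a three-parameter accumulator (center and radius), which is why memory use is the method's main cost.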
A lattice structure of multidimensional (MD) linear-phase paraunitary filter banks (LPPUFBs) is proposed, which makes it possible to design such systems in a systematic manner. Our proposed structure can produce MD-LPPUFBs whose filters all have the region of support N(MΛ), where M and Λ are the decimation matrix and a positive integer diagonal matrix, respectively, and N(N) denotes the set of integer vectors in the fundamental parallelepiped of a matrix N. It is shown that if N(M) is reflection invariant with respect to some center, then the reflection invariance of N(MΛ) is guaranteed. This fact is important in constructing MD linear-phase filter banks because reflection invariance is necessary for any linear-phase filter. Since our proposed structure guarantees both the paraunitary and linear-phase properties by construction, an unconstrained optimization process can be used to design MD-LPPUFBs. Our proposed structure is developed for both an even and an odd number of channels and includes the conventional 1-D system as a special case. It is also shown to be minimal, and the no-DC-leakage condition is presented. Design examples show the significance of our proposed structure for both the rectangular and nonrectangular decimation cases.
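The set N(N) used above can be made concrete: for an integer matrix M, N(M) contains exactly |det M| integer vectors n with M^{-1} n in [0, 1)^D. A brute-force enumeration (the bounding box is a crude assumption that suffices for small matrices):

```python
import numpy as np
from itertools import product

def fundamental_parallelepiped(M):
    """Integer vectors n with M^{-1} n in [0, 1)^D, i.e. the set N(M).
    Brute-force search over a bounding box; |N(M)| equals |det M|."""
    M = np.asarray(M, dtype=float)
    D = M.shape[0]
    bound = int(np.ceil(np.abs(M).sum()))  # crude but sufficient box
    Minv = np.linalg.inv(M)
    pts = []
    for n in product(range(-bound, bound + 1), repeat=D):
        t = Minv @ np.array(n, dtype=float)
        if np.all(t >= -1e-12) and np.all(t < 1 - 1e-12):
            pts.append(n)
    return pts

# Quincunx decimation matrix: |det M| = 2, so N(M) has two elements.
M = [[1, 1], [1, -1]]
print(sorted(fundamental_parallelepiped(M)))  # [(0, 0), (1, 0)]
```

The quincunx example is a standard nonrectangular decimation case; a diagonal matrix such as diag(2, 2) gives the familiar rectangular grid of 4 points.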
A three-dimensional (3-D) optical-scanning technique is proposed based on spatial optical phase code activation on an input beam. This code-multiplexed optical scanner (C-MOS) relies on holographically stored 3-D beam-forming information. A proof-of-concept C-MOS experiment, using a photorefractive crystal as the holographic medium, generates eight beams representing a basic 3-D voxel element via a binary code matrix of the Hadamard type. The experiment demonstrates the C-MOS features of no moving parts, beam-forming flexibility, and large centimeter-size apertures. A novel application of the C-MOS as an optical security lock is highlighted.
In this paper, we present a hybrid chain-coding-based scheme for contour and binary image compression and reconstruction. The proposed scheme comprises a lossless part and a lossy part. It is designed to generate a large number of replicate links in the contours, which can be assembled according to our (n10, 5) rule and thereby highly compressed. Furthermore, for the lossy part, we introduce a new line-processing technique that smooths the contours while maintaining high image quality. The experimental results show that the proposed method surpasses all published chain-coding methods, including FCC, DCC, DCC-8, VCC, CRCC, and L&Z. In addition, the scheme produced a significantly higher compression ratio than the WinZip, G3, G4, JBIG1, and JBIG2 standards.
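The chain-coding idea underlying the scheme can be sketched with the classic Freeman 8-directional code, on which FCC and its variants are built; the paper's (n10, 5) assembly rule itself is not reproduced here:

```python
# Freeman 8-directional chain code: each link encodes the step to the next
# boundary pixel as a direction 0-7 (0 = +x, numbered counter-clockwise).
DIRS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
CODE = {d: i for i, d in enumerate(DIRS)}

def chain_code(contour):
    """Encode an 8-connected pixel contour (list of (x, y)) as direction codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        codes.append(CODE[(x1 - x0, y1 - y0)])
    return codes

# A unit square traced through its four corners and back to the start.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(chain_code(square))  # [0, 2, 4, 6]
```

Runs of repeated codes are exactly the "replicate links" that a scheme like the one above can compress aggressively.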
We propose a unification framework for three-dimensional shape reconstruction using physically based models. A variety of 3D shape reconstruction techniques have been developed in the past two decades, such as shape from stereopsis, from shading, from texture gradient, and from structured lighting. However, the lack of a general theory that unifies these shape reconstruction techniques into one framework hinders the effort toward a synergistic image interpretation scheme using multiple sensors/information sources. Most shape-from-X techniques use an "observable" (e.g., the stereo disparity, intensity, or texture gradient) and a model based on specific domain knowledge (e.g., the triangulation principle, reflectance function, or texture distortion equation) to predict the observable in 3D shape reconstruction. We show that all these "observable-prediction-model" types of techniques can be incorporated into our framework of energy constraints on a flexible, deformable image frame. In our algorithm, if the observable does not conform to the predictions obtained using the corresponding model, a large "error" potential results. The error potential gradient forces the flexible image frame to deform in space. The deformation brings the flexible image frame to "wrap" onto the surface of the imaged 3D object. Surface reconstruction is thus achieved through a "package wrapping" or "shape deformation" process that minimizes the discrepancy between the observable and the model prediction. The dynamics of such a wrapping process are governed by the least action principle, which is physically correct. A physically based model is essential in this general shape reconstruction framework because of its capability to recover the desired 3D shape, to provide an animation sequence of the reconstruction, and to include the regularization principle in the theory of surface reconstruction.
Morphological associative memories (MAMs) are a special type of associative memory exhibiting optimal absolute storage capacity and one-step convergence. This associative model replaces the additions and multiplications used in classical models with additions/subtractions and maximums/minimums, depending on the variant. MAMs have been applied to different pattern recognition problems, including face localization and gray-scale image restoration. Despite their power, they have not been applied to problems involving true-color patterns. In this paper we show how a morphological auto-associative memory (MAAM) can be applied to restore true-color patterns. We present a study of the behavior of this associative model on a benchmark of 14400 images altered by different types of noise.
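The substitution of max-plus arithmetic for the usual multiply-accumulate can be made concrete with the morphological autoassociative min-memory, which stores patterns via elementwise differences and recalls them in one step (the pattern values below are illustrative):

```python
import numpy as np

def train_min_memory(X):
    """Morphological autoassociative min-memory W: w_ij = min_k (x_i^k - x_j^k),
    with one stored pattern per column of X."""
    return np.min(X[:, None, :] - X[None, :, :], axis=2)

def recall(W, x):
    """One-step max-plus recall: y_i = max_j (w_ij + x_j)."""
    return np.max(W + x[None, :], axis=1)

# Two stored patterns; each is recalled perfectly in a single step.
X = np.array([[1.0, 4.0],
              [2.0, 0.0],
              [3.0, 5.0]])
W = train_min_memory(X)
print(recall(W, X[:, 0]))  # [1. 2. 3.]
```

Perfect one-step recall of every stored pattern is the "optimal absolute storage capacity and one-step convergence" property the abstract refers to.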
A new FFT-accelerated projection matching method is presented and tested. The electron microscopy images are represented by their Fourier-Bessel transforms, and the 3D model by its expansion in spherical harmonics, or more specifically in terms of symmetry-adapted functions. The rotational and translational properties of these representations are used to quickly access all possible 2D projections of the 3D model, allowing an exhaustive inspection of the whole five-dimensional domain of parameters associated with each particle.
The problem of reconstructing locations, shapes, and dielectric permittivity distributions of two-dimensional (2-D) dielectric objects from measurements of the scattered electric field is addressed in this paper. A numerical approach is proposed which is based on a multi-illumination multiview processing. In particular, the inverse problem is recast as a global nonlinear optimization problem, which is solved by a genetic algorithm (GA). The final objective of the approach is the image reconstruction of highly contrasted bodies.
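The global optimization step can be sketched with a minimal real-coded genetic algorithm; the selection, crossover, and mutation settings below are generic textbook choices, not those of the paper, and the quadratic misfit stands in for the actual mismatch between measured and computed scattered fields:

```python
import numpy as np

def genetic_minimize(cost_fn, bounds, pop_size=40, n_gen=80, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism. cost_fn is the functional to minimize."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))

    def tournament(cost):
        i, j = rng.integers(pop_size, size=2)
        return pop[i] if cost[i] < cost[j] else pop[j]

    for _ in range(n_gen):
        cost = np.array([cost_fn(p) for p in pop])
        new = [pop[np.argmin(cost)].copy()]          # elitism: keep the best
        while len(new) < pop_size:
            w = rng.random()
            child = w * tournament(cost) + (1 - w) * tournament(cost)
            child += rng.normal(0.0, 0.05, size=child.shape)  # mutation
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    cost = np.array([cost_fn(p) for p in pop])
    return pop[np.argmin(cost)]

# Toy "inverse problem": recover two unknown parameters from a quadratic misfit.
target = np.array([0.3, -0.7])
best = genetic_minimize(lambda p: np.sum((p - target) ** 2), [(-1, 1), (-1, 1)])
print(np.round(best, 2))
```

In the paper, the candidate vectors would encode object locations, shapes, and permittivity values, and the cost function would require a forward electromagnetic solve per evaluation.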
We use the generalized Landweber iteration with a variable shaping matrix to solve the large linear system of equations arising in the image reconstruction problem of emission tomography. Our method is based on the property that once a spatial frequency image component is almost recovered to within ε in the generalized Landweber iteration, this component stays within ε during subsequent iterations with a different shaping matrix, as long as that shaping matrix satisfies the convergence criterion for the component. Two different shaping matrices are used: the first recovers low-frequency image components; the second may be used either to accelerate the reconstruction of high-frequency image components or to attenuate these components to filter the image. The variable shaping matrix gives results similar to truncated inverse filtering, but requires much less computation and memory, since it does not rely on the singular value decomposition.
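The generalized Landweber iteration itself is compact: the estimate is repeatedly corrected by a shaped backprojection of the residual. The sketch below uses a scalar shaping matrix C = alpha*I, the classical special case; the paper's method would switch C between stages to control low- and high-frequency components:

```python
import numpy as np

def landweber(A, b, C, n_iter=500, x0=None):
    """Generalized Landweber iteration: x <- x + C A^T (b - A x).
    The shaping matrix C controls how fast each spectral component of the
    solution converges; C = alpha*I recovers the classical iteration."""
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    for _ in range(n_iter):
        x = x + C @ (A.T @ (b - A @ x))
    return x

# Small consistent system with a known solution.
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = A @ np.array([1.0, -2.0])
alpha = 1.0 / np.linalg.norm(A, 2) ** 2  # step size satisfying the convergence bound
x = landweber(A, b, alpha * np.eye(2))
print(np.round(x, 4))  # [ 1. -2.]
```

Replacing C mid-run, as the paper proposes, is cheap because no singular value decomposition of A is ever formed.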
Electrical capacitance tomography (ECT) is a so-called 'soft-field' tomography technique. The linear back-projection (LBP) method is used widely for image reconstruction in ECT systems. It is numerically simple and computationally fast because it involves only a single matrix-vector multiplication. However, the images produced by the LBP algorithm are generally qualitative rather than quantitative. This paper presents an image-reconstruction algorithm based on a modified Landweber iteration method that can greatly enhance the quality of the image when two distinct phases are present. In this algorithm a simple constraint is used as a regularization for computing a stabilized solution, with a better immunity to noise and faster convergence. Experimental results are presented.
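The constrained iteration can be sketched as a projected Landweber update in which each estimate is clipped to the physically meaningful range [0, 1] for a two-phase mixture; the toy sensitivity matrix and data below are illustrative, not a real ECT model:

```python
import numpy as np

def projected_landweber(S, c, alpha, n_iter=300):
    """Landweber iteration with a [0, 1] constraint applied after each update,
    a simple regularization when only two phases are present:
    g <- clip(g + alpha * S^T (c - S g), 0, 1)."""
    g = np.zeros(S.shape[1])
    for _ in range(n_iter):
        g = np.clip(g + alpha * S.T @ (c - S @ g), 0.0, 1.0)
    return g

# Toy sensitivity matrix and capacitance data from a known two-phase image.
S = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
g_true = np.array([1.0, 0.0, 1.0])
c = S @ g_true
alpha = 1.0 / np.linalg.norm(S, 2) ** 2
g = projected_landweber(S, c, alpha)
print(np.round(g, 2))  # [1. 0. 1.]
```

The clipping step plays the stabilizing role described in the abstract: it suppresses noise-driven excursions outside the admissible permittivity range at every iteration.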
GT Herman. ... optimized (on a sufficiently large data set so as to lead to statistically significant results) without outrageous demands on computational resources. In the rest of this section we discuss how the image reconstruction problem arises in a number of biomedical areas. ...
This article presents a review of the fundamentals of computed tomography, beginning with an account of the origins and progress of this technique over time, and continuing with a description of the physical principles governing the production of X-rays. The article also discusses the mathematical foundations for reconstructing images from projections using analytical or iterative methods. In a separate section, the most important concepts related to the risks of ionizing radiation are reviewed, and recent research, some of it controversial, on the benefits and risks associated with computed tomography and how these affect image acquisition protocols is discussed. Finally, based on the most recent scientific advances and trends, the article identifies the areas that will presumably remain the focus of X-ray computed tomography in the coming years.
This paper presents an on-line calibration method for the absolute extrinsic parameters of a stereovision system suited to vision-based vehicle applications. The method uses as prior knowledge the intrinsic parameters and the relative extrinsic parameters (relative position and orientation) of the two cameras, which are calibrated using off-line procedures. These parameters remain unchanged as long as the two cameras are mounted on a rigid frame (stereo-rig). The absolute extrinsic parameters define the position and orientation of the stereo system relative to a world coordinate system. They must be calibrated every time the stereo-rig is mounted in the vehicle and are subject to changes due to static (variable load) and dynamic (acceleration, bumpy road) factors. The proposed method estimates the absolute extrinsic parameters on-line by driving the car on a flat and straight road, parallel to the longitudinal lane markers. The edge points of the longitudinal lane markers are extracted after a 2D image classification process and reconstructed by stereovision in the stereo-rig coordinate system. After filtering out the noisy 3D points, the normal vectors of the world coordinate system axes are estimated in the stereo-rig coordinate system by 3D data fitting. The output of the method is the height and orientation of the stereo rig relative to the world coordinate system.
A signal-processing algorithm has been developed in which a filter function is extracted from degraded data through mathematical operations. The filter function can then be used to restore much of the degraded content of the data by means of a deconvolution algorithm. This process can be performed without prior knowledge of the detection system, a technique known as blind deconvolution. The extraction process, designated the self-deconvolving data reconstruction algorithm, has been used successfully to restore digitized photographs, digitized acoustic waveforms, and other forms of data. The process is noniterative, computationally efficient, and requires little user input. Implementation is straightforward, allowing inclusion in many types of signal-processing software and hardware. The novelty of the invention is the application of a power law and smoothing function to the degraded data in frequency space. Two methods for determining the value of the power law are discussed. The first method assumes the power law is frequency dependent; the function is derived by comparing the frequency spectrum of the degraded data with the spectrum of a signal having the desired frequency response. The second method assumes the power law is constant with frequency. This approach requires little knowledge of the original data or the degradation.
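Once a filter function is in hand, the restoration step is a regularized frequency-domain division. The sketch below uses a known kernel and a Wiener-style regularizer as a stand-in for the article's blind, power-law-derived filter (which is estimated from the degraded data itself):

```python
import numpy as np

def deconvolve(degraded, kernel, eps=1e-3):
    """Frequency-domain restoration: divide by the filter's spectrum with a
    small regularization term to avoid amplifying frequencies where the
    filter response is near zero."""
    H = np.fft.fft(kernel, n=len(degraded))
    Y = np.fft.fft(degraded)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft(X))

# Blur a known signal with a known smoothing kernel, then restore it.
x = np.zeros(64)
x[20] = 1.0
kernel = np.array([0.25, 0.5, 0.25])
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(kernel, n=64)))  # circular blur
restored = deconvolve(y, kernel)
print(np.argmax(restored))  # 20
```

The regularization constant eps plays the same stabilizing role as the article's smoothing function: it limits noise amplification at frequencies the degradation has suppressed.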
X-ray-computed tomography (CT) successfully underwent a transition from slice-by-slice imaging to volume imaging in the decade after 1990 due to the introduction of spiral scan modes. Later, the transition from single-slice to multislice scanning followed. With the advent of new detector technologies we are now looking forward to circular and spiral scanning using area detectors and the respective reconstruction approaches.
Reconstructing a three-dimensional (3D) object from a set of its two-dimensional (2D) X-ray projections requires that the source position and image plane orientation in 3D space be obtained with high accuracy. We present a method for estimating the geometrical parameters of an X-ray imaging chain, based on the minimization of the mean quadratic reprojection error measured on reference points of a calibration phantom. This error is explicitly calculated with respect to the geometrical parameters of the conic projection, and a conjugate gradient technique is used for its minimization. Compared with the classical unconstrained method, better results were obtained in simulation with our method, especially when only a few reference points are available. This method may be adapted to different X-ray systems and may also be extended to estimating the geometrical parameters of the imaging chain trajectory in the case of dynamic acquisitions.
Phishing is an attempt by an individual or a group to steal personal confidential information, such as passwords and credit card information, from unsuspecting victims for identity theft, financial gain, and other fraudulent activities. In this paper we propose a new approach, named "A Novel Anti-phishing Framework Based on Visual Cryptography", to solve the problem of phishing. Here, image-based authentication using visual cryptography (VC) is used. Visual cryptography is explored to preserve the privacy of an image captcha by decomposing the original image captcha into two shares that are stored on separate database servers, such that the original image captcha can be revealed only when both are simultaneously available; the individual sheet images do not reveal the identity of the original image captcha. Once the original image captcha is revealed to the user, it can be used as the password.
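The two-share decomposition can be illustrated with an XOR-based secret split: either share alone is uniform noise, and only combining both reveals the captcha. (Classical visual cryptography uses pixel expansion and OR-stacking of printed shares; the XOR scheme below is a simplified computational analogue, not the paper's exact construction.)

```python
import numpy as np

def make_shares(secret_bits, rng):
    """Split a binary image into two shares: share1 is uniformly random and
    share2 = secret XOR share1. Each share alone carries no information about
    the secret; XOR-ing the two shares recovers it exactly."""
    share1 = rng.integers(0, 2, size=secret_bits.shape)
    share2 = secret_bits ^ share1
    return share1, share2

rng = np.random.default_rng(42)
captcha = rng.integers(0, 2, size=(8, 8))  # stand-in for the image captcha
s1, s2 = make_shares(captcha, rng)
print(np.array_equal(s1 ^ s2, captcha))  # True
```

Storing s1 and s2 on separate servers mirrors the framework's design: compromising either database alone reveals nothing about the captcha.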
An algorithm for high-precision numerical computation of Zernike moments is presented. The algorithm, based on the introduced polar pixel tiling scheme, does not exhibit the geometric error and numerical integration error inherent in conventional methods based on Cartesian coordinates. This yields a dramatic improvement in the accuracy of the Zernike moments in terms of their reconstruction and invariance properties. The introduced image tiling requires an interpolation algorithm, which turns out to be of secondary importance compared to the discretization error. Various comparisons are made between the accuracy of the proposed method and that of commonly used techniques. The results reveal the great advantage of our approach.
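The Zernike moments in question are built on the radial polynomials R_n^m(r), which can be computed directly from the standard factorial formula (the polar pixel tiling itself is not reproduced in this sketch):

```python
from math import factorial

def radial_poly(n, m, r):
    """Zernike radial polynomial R_n^m(r), defined for n >= |m| with n - |m| even:
    R_n^m(r) = sum_s (-1)^s (n-s)! / (s! ((n+m)/2 - s)! ((n-m)/2 - s)!) r^(n-2s)."""
    m = abs(m)
    total = 0.0
    for s in range((n - m) // 2 + 1):
        coef = ((-1) ** s * factorial(n - s)
                / (factorial(s) * factorial((n + m) // 2 - s)
                   * factorial((n - m) // 2 - s)))
        total += coef * r ** (n - 2 * s)
    return total

# Low-order checks: R_0^0 = 1, R_1^1 = r, R_2^0 = 2r^2 - 1.
print(radial_poly(2, 0, 0.5))  # -0.5
```

The moment itself integrates the image against R_n^m(r) e^{-i m theta} over the unit disk; the paper's contribution is evaluating that integral on polar pixels so the disk boundary is represented without geometric error.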
A new image compression algorithm is proposed, based on independent Embedded Block Coding with Optimized Truncation of the embedded bit-streams (EBCOT). The algorithm exhibits state-of-the-art compression performance while producing a bit-stream with a rich set of features, including resolution and SNR scalability together with a "random access" property. The algorithm has modest complexity and is suitable for applications involving remote browsing of large compressed images. The algorithm lends itself to explicit optimization with respect to MSE as well as more realistic psychovisual metrics, capable of modeling the spatially varying visual masking phenomenon.