High resolution satellite image Research Papers

Patient motion during brain SPECT studies can degrade resolution and introduce distortion. We have developed a correction method which incorporates a motion tracking system to monitor the position and orientation of the patient's head during acquisition. Correction is achieved by spatially repositioning projections according to measured head movements and reconstructing these projections with a fully three-dimensional (3D) algorithm. The method has been evaluated in SPECT studies of the Hoffman 3D brain phantom performed on a triple head camera with fan beam collimation. Movements were applied to the phantom and recorded by a head tracker during SPECT acquisition. Fully 3D reconstruction was performed using the motion data provided by the tracker. Correction accuracy was assessed by comparing the corrected and uncorrected studies with a motion-free study, visually and by calculating mean squared error (MSE). In all studies, motion correction reduced distortion and improved MSE by a factor of 2 or more. We conclude that this method can compensate for head motion under clinical SPECT imaging conditions.
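
As a point of reference for the evaluation metric, a minimal sketch of the MSE comparison between a reconstructed study and the motion-free reference; the function name and NumPy formulation are illustrative, not from the paper:

```python
import numpy as np

def mse(volume, reference):
    """Mean squared error between a reconstructed volume and the motion-free reference."""
    v = np.asarray(volume, dtype=float)
    r = np.asarray(reference, dtype=float)
    return np.mean((v - r) ** 2)

# The improvement factor quoted in the abstract corresponds to
# mse(uncorrected, reference) / mse(corrected, reference) >= 2.
```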

An adaptive two-step paradigm for the super-resolution of optical images is developed in this paper. The procedure locally projects image samples onto a family of kernels that are learned from image data. First, an unsupervised feature extraction is performed on local neighborhood information from a training image. These features are then used to cluster the neighborhoods into disjoint sets for which an optimal mapping relating homologous neighborhoods across scales can be learned in a supervised manner. A super-resolved image is obtained through the convolution of a low resolution test image with the established family of kernels. Results demonstrate the effectiveness of the approach.
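
A minimal sketch of the two-step idea, assuming scikit-learn's KMeans for the unsupervised clustering step and a per-cluster least-squares mapping as the supervised step; patch extraction and the paper's exact feature and kernel definitions are not reproduced:

```python
import numpy as np
from sklearn.cluster import KMeans

def train_kernels(lr_patches, hr_pixels, n_clusters=16):
    """Cluster LR neighborhoods, then learn one linear kernel per cluster
    mapping an LR patch to the corresponding HR pixel value."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(lr_patches)
    kernels = []
    for k in range(n_clusters):
        idx = km.labels_ == k
        A, b = lr_patches[idx], hr_pixels[idx]
        w, *_ = np.linalg.lstsq(A, b, rcond=None)   # supervised per-cluster mapping
        kernels.append(w)
    return km, np.array(kernels)

def apply_kernels(km, kernels, lr_patches):
    """Super-resolve by applying the kernel of each patch's cluster."""
    labels = km.predict(lr_patches)
    return np.einsum('ij,ij->i', lr_patches, kernels[labels])
```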

We advocate the use of point sets to represent shapes. We provide a definition of a smooth manifold surface from a set of points close to the original surface. The definition is based on local maps from differential geometry, which are approximated by the method of moving least squares (MLS). The computation of points on the surface is local, which results in an out-of-core technique that can handle any point set. We show that the approximation error is bounded and present tools to increase or decrease the density of the points, thus allowing an adjustment of the spacing among the points to control the error. To display the point set surface, we introduce a novel point rendering technique. The idea is to evaluate the local maps according to the image resolution. This results in high quality shading effects and smooth silhouettes at interactive frame rates.
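
The local reference-plane step of an MLS-style projection can be sketched as follows; this is a simplified, single-step version (the paper's full MLS definition additionally fits a local polynomial over the plane and iterates), with the Gaussian weight and parameter names chosen for illustration:

```python
import numpy as np

def mls_plane_projection(points, r, h):
    """Project point r toward a locally weighted best-fit plane of the point set.
    points: (N, 3) array; r: (3,) query point; h: Gaussian weight radius."""
    d2 = np.sum((points - r) ** 2, axis=1)
    w = np.exp(-d2 / h ** 2)                       # weights favor nearby samples
    c = (w[:, None] * points).sum(axis=0) / w.sum()  # weighted centroid
    X = (points - c) * np.sqrt(w)[:, None]
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    n = Vt[-1]                                     # normal = least-variance direction
    return r - np.dot(r - c, n) * n                # drop r onto the local plane
```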

In this paper, we present an efficient coarse-to-fine multiresolution framework for multidimensional scaling and demonstrate its performance on a large-scale nonlinear dimensionality reduction and embedding problem in a texture feature extraction step for the unsupervised image segmentation problem. We demonstrate both the efficiency of our multiresolution algorithm and its practical value for learning a nonlinear low-dimensional representation of the texture feature set of an image, which can then be exploited in a simple clustering-based segmentation algorithm. The resulting segmentation procedure has been successfully applied on the Berkeley image database, demonstrating its efficiency compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.

Transmission of compressed medical images is becoming a vital tool in telemedicine. Thus new methods are needed for efficient image compression. This study discovers the best design parameters for a data compression scheme applied to digital magnetic resonance (MR) images. The proposed technique aims at reducing the transmission cost while preserving the diagnostic information. By selecting the wavelet packet filters, decomposition level, and subbands that are better adapted to the frequency characteristics of the image, one may achieve better image representation in the sense of lower entropy or minimal distortion. Experimental results show that the selection of the best parameters has a dramatic effect on the data compression rate of MR images. In all cases, decomposition at three or four levels with the Coiflet 5 wavelet (Coif 5) results in better compression performance than the other wavelets. Image resolution is found to have a remarkable effect on the compression rate.
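
A hedged sketch of how candidate decomposition settings could be scored by subband entropy using PyWavelets; the wavelet name 'coif5' and the entropy criterion come from the abstract, while the helper function itself is illustrative rather than the paper's implementation:

```python
import numpy as np
import pywt

def packet_entropy(image, wavelet='coif5', level=3):
    """Shannon entropy of normalized subband energies of a 2-D wavelet packet
    decomposition; lower entropy suggests a more compact (compressible) representation."""
    wp = pywt.WaveletPacket2D(data=image, wavelet=wavelet,
                              mode='periodization', maxlevel=level)
    energies = np.array([np.sum(node.data ** 2) for node in wp.get_level(level)])
    p = energies / energies.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Example: compare packet_entropy(img, 'coif5', 3) against packet_entropy(img, 'db4', 4)
# to pick the filter/level combination best adapted to the image.
```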

We describe a new interferometric configuration for optical coherence tomography that is based on the Mirau interferometer. It uses the photodetector included in a superluminescent diode package, which makes possible a highly miniaturized device. Other advantages of the configuration include its totally coaxial structure, confocal microscope operation, availability of the full working distance of the imaging objective, and no central obscuration. Fundamental characteristics such as resolution and dynamic range are discussed, and the result of measurement on a rough metallic surface is presented.

Image segmentation is not only hard and unnecessary for texture-based image retrieval, but can even be harmful. Images of either individual or multiple textures are best described by distributions of spatial frequency descriptors, rather than single descriptor vectors over presegmented regions. A retrieval method based on the Earth Mover's Distance with an appropriate ground distance is shown to handle both complete and partial multi-textured queries. As an illustration, different images of the same type of animal are easily retrieved together. At the same time, animals with subtly different coats, like cheetahs and leopards, are properly distinguished.
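
The distribution-matching idea can be illustrated with a much-simplified distance between descriptor sets; note this substitutes SciPy's 1-D Wasserstein distance, applied per descriptor dimension, for the full Earth Mover's Distance with a perceptual ground distance used in the paper:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def texture_distance(desc_a, desc_b):
    """Simplified stand-in for EMD between two sets of spatial-frequency
    descriptors: sum of 1-D Wasserstein distances over descriptor dimensions.
    desc_a, desc_b: (N, D) arrays of descriptors sampled over each image."""
    desc_a, desc_b = np.asarray(desc_a), np.asarray(desc_b)
    return sum(wasserstein_distance(desc_a[:, d], desc_b[:, d])
               for d in range(desc_a.shape[1]))
```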

This paper investigates the application of a new concept called the "Overcomplete Discrete Wavelet Transform" (ODWT) for image sequence compression. It has been demonstrated that "the translated function with any integer multiple of the sampling period is completely represented in the wavelet space by one of the ODWT members". This theoretical result leads to a new motion estimation and motion compensation scheme working in the wavelet transform domain. Our simulation experiments, performed on real image sequences, show high quality and low bit rate performance. By performing the motion estimation in the wavelet space, a quite modest computational complexity is ensured.

3D integration comes with the introduction of many new processes and materials that may affect the behavior and reliability of the overall system. For reliability testing of 3D integration technologies, a 3-level test chip has been designed that includes Through-Silicon Vias (TSVs) and assembly layers and that allows evaluation of yield and electrical parameters under steady state (DC) and RF signal conditions. Additionally, this (stacked) chip delivers reliability values when used within the standardized procedures defined by JEDEC.

We present the nonsubsampled contourlet transform and its application in image enhancement. The nonsubsampled contourlet transform is built upon nonsubsampled pyramids and nonsubsampled directional filter banks and provides a shift-invariant directional multiresolution image representation. Existing methods for image enhancement cannot capture the geometric information of images and tend to amplify noise when they are applied to noisy images, since they cannot distinguish noise from weak edges. In contrast, the nonsubsampled contourlet transform extracts the geometric information of images, which can be used to distinguish noise from weak edges. Experimental results show that the proposed method achieves better enhancement results than a wavelet-based image enhancement method.

The reliable detection of the three-dimensional position of arbitrary objects in a scene is a key capability of most animals and one of the most important tasks in machine vision. Today's preferred technical solution is optical time-of-flight (TOF) range imaging, due to its simplicity, its distance resolution, its large and adaptable measurement range, as well as the absence of shadowing problems. In order to significantly extend the application areas of TOF 3-D cameras, in particular for outdoor use, we show how their performance can be improved in all relevant respects: background light suppression is improved by an order of magnitude by the minimum charge transfer method. Multicamera operation is achieved by a binary pseudo-noise modulation/demodulation technique. This method also avoids all practical ambiguity problems typically encountered with harmonic modulation. Higher temporal demodulation resolution becomes possible with a pixel structure employing lateral electric fields. We have realized such pixels with a commercially available CCD/CMOS process, and our measurement results confirm that gigahertz demodulation imaging is possible. The practicality of all theoretical concepts is demonstrated with a miniaturized TOF 3-D camera platform whose LED array light source is modulated at a typical rate of 20 MHz. Our work therefore contributes to opening up new application domains for the soaring optical TOF range imaging techniques.
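
For context on the ambiguity problem mentioned above, a tiny sketch of the unambiguous range of conventional harmonic modulation (the pseudo-noise scheme described in the paper is designed to avoid exactly this limit); the helper is illustrative:

```python
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range(f_mod_hz):
    """Maximum unambiguous one-way distance for harmonic-modulation TOF: c / (2 f)."""
    return C / (2.0 * f_mod_hz)

print(unambiguous_range(20e6))  # ~7.5 m at the 20 MHz modulation rate quoted above
```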

Curved cross-sections extracted from medical volume images are useful for analyzing nonplanar anatomic structures such as the aortic arch or the pelvis. For visualization and for performing distance measurements, extracted surface sections need to be adequately flattened. We present two different distance-preserving surface flattening methods which preserve distances according to a user-specified center of interest and according to user-specified orientations. The first method flattens surface sections by preserving distances along surface curves located within planes having a user-specified constant orientation. The second method flattens surfaces along curves located within radial planes crossing the center of interest. We study and compare the properties of the two flattening methods by analyzing their distortion maps. Thanks to a multiresolution approach, we provide surface flattening at interactive rates, allowing users to displace their focus point while visualizing the resulting flattened surface. These distance-preserving flattening methods provide new means of inspecting curved cross-sections extracted from medical images.

X-ray-computed tomography (CT) successfully underwent a transition from slice-by-slice imaging to volume imaging in the decade after 1990 due to the introduction of spiral scan modes. Later, the transition from single-slice to multislice scanning followed. With the advent of new detector technologies we are now looking forward to circular and spiral scanning using area detectors and the respective reconstruction approaches.

An abandoned object detection system is presented and evaluated using benchmark datasets. The detection is based on a simple mathematical model and works efficiently at QVGA resolution, at which most CCTV cameras operate. The pre-processing involves a dual-time background subtraction algorithm which dynamically updates two background models, one after a very short interval (less than half a second) and the other after a relatively longer duration. The framework of the proposed algorithm is based on the Approximate Median model. An algorithm for tracking abandoned objects even under occlusion is also proposed. Results show that the system is robust to variations in lighting conditions and the number of people in the scene. In addition, the system is simple and computationally less intensive, as it avoids the use of expensive filters while achieving better detection results.
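
A minimal sketch of an approximate-median background update and a simple foreground test, under the assumption that "dual-time" means maintaining two such backgrounds with different update rates; thresholds, data types, and function names are illustrative:

```python
import numpy as np

def update_background(bg, frame, step=1):
    """Approximate-median update: nudge each background pixel one step toward
    the current frame. Keep two such backgrounds (short- and long-term) for
    the dual-time scheme described above."""
    b = bg.astype(np.int16)
    f = frame.astype(np.int16)
    b = b + step * np.sign(f - b)
    return np.clip(b, 0, 255).astype(np.uint8)

def foreground_mask(bg, frame, thresh=25):
    """Pixels that differ from the background by more than a threshold."""
    return np.abs(frame.astype(np.int16) - bg.astype(np.int16)) > thresh
```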

A new image compression algorithm is proposed, based on independent Embedded Block Coding with Optimized Truncation of the embedded bit-streams (EBCOT). The algorithm exhibits state-of-the-art compression performance while producing a bit-stream with a rich set of features, including resolution and SNR scalability together with a "random access" property. The algorithm has modest complexity and is suitable for applications involving remote browsing of large compressed images. The algorithm lends itself to explicit optimization with respect to MSE as well as more realistic psychovisual metrics, capable of modeling the spatially varying visual masking phenomenon.

Fast magnetic resonance imaging slice acquisition techniques such as single shot fast spin echo are routinely used in the presence of uncontrollable motion. Current applications involve fetal MRI and MRI of moving subjects and organs. Although high-quality slices are frequently acquired by these techniques, inter-slice motion leads to severe motion artifacts that appear in out-of-plane views. Slice sequential acquisitions do not enable 3D volume representation. In this study, we have developed a novel technique based on a slice acquisition model, which enables the reconstruction of a volumetric image from multiple-scan slice acquisitions. The super-resolution volume reconstruction is formulated as an inverse problem of finding the underlying structure generating the acquired slices. We have developed a robust M-estimation solution which minimizes a robust error norm function between the model-generated slices and the acquired slices. The accuracy and robustness of this novel technique have been quantitatively assessed through simulations with digital brain phantom images as well as high-resolution newborn images. We also report here successful application of our new technique for the reconstruction of volumetric fetal brain MRI from clinically acquired data.
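
A generic sketch of robust M-estimation via iteratively reweighted least squares for a linear acquisition model y = A x, using a Huber-type weight; the paper's actual slice acquisition model and robust error norm may differ, and all names here are illustrative:

```python
import numpy as np

def huber_weights(residual, delta):
    """Weights of a Huber-type M-estimator: 1 for small residuals, down-weighted otherwise."""
    a = np.abs(residual)
    return np.minimum(1.0, delta / np.maximum(a, 1e-12))

def irls_reconstruct(A, y, delta=1.0, iters=20):
    """Robust reconstruction of x from acquisitions y = A x + noise/outliers."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]          # ordinary LS initialization
    for _ in range(iters):
        w = huber_weights(y - A @ x, delta)           # re-weight model-vs-acquired residuals
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, A.T @ (w * y))  # weighted normal equations
    return x
```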

Background. Cone-beam computed tomography (CBCT) has been changing the way dental practitioners use imaging. The radiation dose to the patient and how to effectively reduce the dose are still not completely clear to most users of this technology. Objective. The objective of this study was to quantitate the change in radiation dose when using different CBCT settings. Methods. A CBCT machine was modified to allow different setting combinations. The variables consisted of 4 different mA choices (2, 5, 10, and 15), 2 kVp choices (100 and 120), and 3 fields of view (6 inches, 9 inches, and 12 inches). A radiation phantom with 10 thermoluminescent dosimeters (TLD) was used to measure radiation dose. One specific setting (15 mA, 120 kVp, and 12-inch FOV) was scanned 3 times to determine consistency. Results. The CBCT showed less than 5% variance in radiation dose values. Reducing the kVp from 120 to 100 lowered the overall dose to about 0.62 times its original value. Reducing the field size decreased the dose by 5% to 10%, while for organs that escaped the direct beam the reduction was far greater.

We describe an instrument for the measurement of surface flatness, parallelism, and size (thickness) of plane-parallel parts in a single measurement to 1 gauge capability of 0.02, 0.03, and 0.06 μm, respectively. A low-coherence IR profiler viewing both sides of the part simultaneously, believed to be novel, accommodates a wide variety of industrial surface finishes, including machined, ground, or lapped parts, with a 75-mm field of view and 15,000 pixels per side. A heterodyne laser displacement gauge together with an integrated zeroing system allows for a range of part sizes from 0 to 100 mm.

Orientation estimation based on image data is a key technique in many applications. Robust estimates are possible in case of omnidirectional images due to the large field of view of the camera. Traditionally, techniques based on local image features have been applied to this kind of problem. Another very efficient technique is to formulate the problem in terms of correlation on the sphere and to solve it in Fourier space. While both methods claim to provide accurate and robust estimates, a quantitative comparison has not been reported yet. In this paper we evaluate the two approaches in terms of accuracy, image resolution and robustness to noise by comparing the estimated rotations of virtual as well as real images to ground-truth data.

The surface topography of human hair, as defined by the outer layer of cellular sheets, termed cuticles, largely determines the cosmetic properties of the hair. The condition of the cuticles is of great cosmetic importance, but also has the potential to aid diagnosis in the medical and forensic sciences. Atomic force microscopy (AFM) has been demonstrated to offer unique advantages for analysis of the hair surface, mainly due to the high image resolution and the ease of sample preparation. This article presents an algorithm for the automatic analysis of AFM images of human hair. The cuticular structure is characterized using a series of descriptors, such as step height, tilt angle and cuticle density, allowing quantitative analysis and comparison of different images. The usefulness of this approach is demonstrated by a classification study. Thirty-eight AFM images were measured, consisting of hair samples from (a) untreated and bleached hair samples, and (b) the root and distal ends of the hair fibre. The multivariate classification technique partial least squares discriminant analysis is used to test the ability of the algorithm to characterize the images according to the properties of the hair samples. Most of the images (86%) were found to be classified correctly.

This paper presents an efficient architecture for a real-time traffic sign recognition system. The architecture is developed through a graphical design flow that combines MATLAB, Simulink and, primarily, Xilinx System Generator. In this work, the proposed architecture is implemented in hardware to meet real-time constraints. Co-simulation results demonstrate the superior performance of the hardware design in terms of speed, frequency and accuracy. The architecture is implemented on an FPGA device of the Xilinx Virtex-5 family.

In medical ultrasound, synthetic aperture (SA) imaging is regarded as a novel image formation technique that achieves superior resolution to that offered by existing scanners. However, its intensive processing load is known to be a challenging factor. To address this computational demand, this paper proposes a new parallel approach based on the design of OpenCL signal processing kernels that can compute SA image formation in real time. We demonstrate how these kernels can be ported onto different classes of parallel processors, namely multi-core CPUs and GPUs, whose multi-thread computing resources are able to process more than 250 fps. Moreover, they have strong potential to support the development of more complex algorithms, thus increasing the depth range of the inspected human volume and the final image resolution observed by the medical practitioner.

A flexible needle can be accurately steered by robotically controlling the orientation of the bevel tip as the needle is inserted into tissue. Here, we demonstrate the significant effect of friction between the long, flexible needle shaft and the tissue, which can cause a significant discrepancy between the orientation of the needle tip and the orientation of the base where the needle is controlled. Our experiments show that several common phantom tissues used in needle steering experiments impart substantial frictional forces to the needle shaft, resulting in a lag of over 45° for a 10 cm insertion depth in some phantoms; clinical studies have reported torques large enough to cause similar errors during needle insertions. Such angle discrepancies will result in poor performance or failure of path planners and image-guided controllers, since the needles used in percutaneous procedures are too small for state-of-the-art imaging to accurately measure the tip angle. To compensate...

The problem of creating artifact-free upscaled images appearing sharp and natural to the human observer is probably more interesting and less trivial than it may appear. The solution to the problem, often referred to also as "single image super-resolution", is related both to the statistical relationship between low resolution and high resolution image sampling and to the human perception of image quality.

Although gully erosion is generally considered a major process of land degradation, its contribution to total soil loss by erosion has recently been a subject of much discussion. The lack of adequate methods for the documentation and monitoring of gullies has resulted in a shortage of quantitative data. Therefore, a high-resolution remote sensing system for aerial surveys by blimp or kite has been developed which meets the spatial and temporal image resolutions required for short-term gully monitoring. The objective of this paper is to examine the potential of a method using non-metric digital photogrammetry and GIS for gully surface modelling and monitoring. Two bank gullies representing different morphological types of gullies (V-shaped and U-shaped) were chosen from a dataset of gully systems in semi-arid Spain. The considerable relief energy and complex topography of the gullies in a natural, vegetated landscape were found to be a challenge to digital photogrammetric techniques, introducing errors which inhibit fully automated DEM generation. Using a hybrid method combining stereomatching for mass-point extraction with manual 3D editing and digitizing, high-resolution DEMs (5 and 7.5 cm pixel size) were created for the study sites. GIS analysis of the DEMs for different monitoring periods (2 to 4 years) allowed the computation of gully area and volume, as well as their changes, with an accuracy and detail sufficient to represent the geomorphological forms and processes involved. Furthermore, the spatially continuous survey of the entire form offered the possibility of distinguishing different zones of activity both at the gully rim and within the gully interior, identifying patterns of erosion and deposition which indicate the limited use of headcut retreat rates for the assessment of sediment production on a short-term basis.
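
The volume computation from repeated DEMs can be sketched as simple cell-wise differencing of two co-registered grids; the cell size, dictionary keys, and erosion/deposition split below are illustrative, not the paper's GIS workflow:

```python
import numpy as np

def volume_change(dem_t0, dem_t1, cell_size_m):
    """Net volume change (m^3) between two co-registered DEMs of the same gully.
    Negative elevation changes indicate erosion, positive changes deposition."""
    dz = dem_t1 - dem_t0              # elevation change per cell (m)
    cell_area = cell_size_m ** 2      # e.g. 0.05 m pixels -> 0.0025 m^2 per cell
    return {
        "erosion":    float(dz[dz < 0].sum() * cell_area),
        "deposition": float(dz[dz > 0].sum() * cell_area),
        "net":        float(dz.sum() * cell_area),
    }
```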

SAR surveys from separate passes show relative shifts of the ground wavenumber spectra that depend on the local slope and the off-nadir angle. We discuss the exploitation of this spectral shift for different applications: 1) generation of "low noise" interferograms benefiting phase unwrapping, 2) generation of quick-look interferograms, 3) decorrelation reduction by means of tunable SAR systems (TINSAR), 4) range resolution enhancement, and 5) the combination of SAR data gathered by different platforms (airborne and satellite) for a "long-time coherence" study.

Recently, there has been a great deal of work developing super-resolution algorithms for combining a set of low-quality images to produce a set of higher quality images. Either explicitly or implicitly, such algorithms must perform the joint task of registering and fusing the low-quality image data. While many such algorithms have been proposed, very little work has addressed the performance bounds for such problems. In this paper, we analyze the performance limits from statistical first principles using Cramér-Rao inequalities. Such analysis offers insight into the fundamental super-resolution performance bottlenecks as they relate to the subproblems of image registration, reconstruction, and image restoration.
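
As a worked 1-D example of the kind of registration bound analyzed here, the Cramér-Rao lower bound for estimating the shift of a known signal in white Gaussian noise is sigma^2 / sum((ds/dx)^2); a small numeric sketch (not the paper's multi-image derivation):

```python
import numpy as np

def shift_crlb(signal, sigma):
    """CRLB on the variance of a 1-D sub-pixel shift estimate of a known
    signal observed in white Gaussian noise of standard deviation sigma."""
    ds = np.gradient(signal.astype(float))   # signal derivative
    return sigma ** 2 / np.sum(ds ** 2)      # sharper signals -> tighter bound
```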

The fundamentals of X-ray lithography are reviewed. Issues associated with resolution, wafer throughput, and process latitude are discussed. X-ray lithography is compared with other lithographic technologies; future advancements, such as X-ray projection lithography, are described. It is shown that the major barrier to the near-term success of X-ray lithography is the requirement for a defect-free one-to-one mask which satisfies the stringent image-placement needs of submicrometer patterning.

A new technique of high-resolution imaging through atmospheric turbulence is described. As in speckle interferometry, short-exposure images are recorded, but in addition the associated wave fronts are measured by a Hartmann-Shack wave-front sensor. The wave front is used to calculate the point-spread function. The object is then estimated from the correlation of images and point-spread functions by a deconvolution process. An experimental setup is described, and the first laboratory results, which prove the capabilities of the method, are presented. A signal-to-noise-ratio calculation, permitting a first comparison with the speckle interferometry, is also presented.
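
A hedged sketch of the deconvolution step, estimating the object in the Fourier domain from accumulated image/PSF correlations (a least-squares/Wiener-style estimator; the regularization constant, PSF centering, and normalization are illustrative choices, not the paper's exact processing):

```python
import numpy as np

def dwfs_estimate(images, psfs, eps=1e-3):
    """Object estimate from short-exposure images and the PSFs computed from the
    simultaneously measured wave fronts:
    O = sum_k I_k conj(P_k) / (sum_k |P_k|^2 + eps) in the Fourier domain."""
    num = np.zeros_like(np.fft.fft2(images[0]))       # accumulated image/PSF correlation
    den = np.zeros(images[0].shape)                   # accumulated PSF power
    for img, psf in zip(images, psfs):
        I = np.fft.fft2(img)
        P = np.fft.fft2(np.fft.ifftshift(psf))        # PSF assumed centered before shift
        num += I * np.conj(P)
        den += np.abs(P) ** 2
    return np.real(np.fft.ifft2(num / (den + eps)))
```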

The A-buffer (anti-aliased, area-averaged, accumulation buffer) is a general hidden surface mechanism suited to medium scale virtual memory computers. It resolves visibility among an arbitrary collection of opaque, transparent, and intersecting objects. Using an easy to compute Fourier window (box filter), it increases the effective image resolution many times over the Z-buffer, with a moderate increase in cost. The A-buffer is incorporated into the REYES 3-D rendering system at Lucasfilm and was used successfully in the "Genesis Demo" sequence in Star Trek II.

The JPEG baseline standard for image compression employs a block Discrete Cosine Transform (DCT) and uniform quantization. For a monochrome image, a single quantization matrix is allowed, while for a color image, distinct matrices are allowed for each color channel. Here we describe a method, called DCTune, for the design of color quantization matrices that is based on a model of the visibility of quantization artifacts. The model describes artifact visibility as a function of DCT frequency, color channel, and display resolution and brightness. The model also describes summation of artifacts over space and frequency, and masking of artifacts by the image itself. The DCTune matrices are different from the de facto JPEG matrices, and appear to provide superior visual quality at equal bit-rates.
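
For orientation, a minimal sketch of the baseline JPEG step that a DCTune-style matrix plugs into: an 8x8 block is transformed with a 2-D DCT and divided by the quantization matrix. The matrix design itself, which is the paper's contribution, is not reproduced here, and the helper below is an assumption-laden illustration:

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_block(block, qmatrix):
    """Quantize one 8x8 block as in baseline JPEG: level-shift, 2-D DCT,
    divide by the quantization matrix, round; also return the reconstruction."""
    coeffs = dctn(block - 128.0, norm='ortho')
    q = np.round(coeffs / qmatrix)                        # coarser qmatrix -> fewer bits, more artifacts
    recon = idctn(q * qmatrix, norm='ortho') + 128.0
    return q, recon

# Usage sketch: q, rec = quantize_block(img[:8, :8].astype(float), qmatrix)
```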

This project was carried out within the course Satellite Oceanography under the academic supervision of Konstantinos Topouzelis. The main objective of this study was to classify and map the underwater habitats in a shallow coastal area northeast of the Greek island of Lesvos. This was done by a combination of high spatial resolution WorldView-2 imagery and in-field observations, after applying atmospheric and water-column corrections. The focus was to survey the meadows of Posidonia oceanica, which form the major benthic ecosystem dominating the soft bottom of the sublittoral zone in the Mediterranean Sea.

Digital cameras are convenient image acquisition devices: they are fast, versatile, mobile, do not touch the object, and are relatively cheap. In OCR applications, however, digital cameras suffer from a number of limitations, such as geometrical distortions. In this paper, we deal with the preprocessing step before text recognition, specifically for images from a digital camera. Experiments, performed with the FineReader 7.0 software as the back-end recognition tool, confirm the importance of image preprocessing in OCR applications.

This paper demonstrates a super-resolution method for improving the resolution in clinical positron emission tomography (PET) scanners. Super-resolution images were obtained by combining four data sets with spatial shifts between consecutive acquisitions and applying an iterative algorithm. Super-resolution attenuation corrected PET scans of a phantom were obtained using the two-dimensional and three-dimensional (3-D) acquisition modes of a clinical PET/computed tomography (CT) scanner (Discovery LS, GEMS). In a patient study, following a standard 18F-FDG PET/CT scan, a super-resolution scan around one small lesion was performed using axial shifts without increasing the patient radiation exposure. In the phantom study, smaller features (3 mm) could be resolved axially with the super-resolution method than without (6 mm). The super-resolution images had better resolution than the original images and provided higher contrast ratios in coronal images and in 3-D acquisition transaxial images. The coronal super-resolution images had superior resolution and contrast ratios compared to images reconstructed by merely interleaving the data to the proper axial location. In the patient study, super-resolution reconstructions displayed a more localized 18F-FDG uptake. A new approach for improving the resolution of PET images using a super-resolution method has been developed and experimentally confirmed, employing a clinical scanner. The improvement in axial resolution requires no changes in hardware.

We present a method of recovering high dynamic range radiance maps from photographs taken with conventional imaging equipment. In our method, multiple photographs of the scene are taken with different amounts of exposure. Our algorithm uses these differently exposed photographs to recover the response function of the imaging process, up to a scale factor, using the assumption of reciprocity. With the known response function, the algorithm can fuse the multiple photographs into a single, high dynamic range radiance map whose pixel values are proportional to the true radiance values in the scene. We demonstrate our method on images acquired with both photochemical and digital imaging processes. We discuss how this work is applicable in many areas of computer graphics involving digitized photographs, including image-based modeling, image compositing, and image processing. Lastly, we demonstrate a few applications of having high dynamic range radiance maps, such as synthesizing realistic motion blur and simulating the response of the human visual system.
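
Assuming the inverse response function g (log exposure as a function of 8-bit pixel value) has already been recovered, the fusion step can be sketched as a weighted average of log-radiance estimates across exposures; the hat-shaped weighting and the variable names are illustrative:

```python
import numpy as np

def merge_exposures(images, exposure_times, response_inv, w=None):
    """Fuse differently exposed 8-bit images into a radiance map, given the
    inverse camera response response_inv (length-256 array of log exposure)."""
    if w is None:
        # hat weighting: trust mid-range pixel values most
        w = lambda z: np.minimum(z, 255 - z).astype(float) + 1e-3
    num = np.zeros(images[0].shape, dtype=float)
    den = np.zeros(images[0].shape, dtype=float)
    for img, t in zip(images, exposure_times):
        wi = w(img)
        num += wi * (response_inv[img] - np.log(t))   # per-exposure log-radiance estimate
        den += wi
    return np.exp(num / den)   # radiance up to a global scale factor
```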

Wavelet transforms and machine learning tools can be used to assist art experts in the stylistic analysis of paintings. A dual-tree complex wavelet transform, Hidden Markov Tree modeling and Random Forest classifiers are used here for a stylistic analysis of Vincent van Gogh's paintings, with results on two stylometry challenges concerning dating and, respectively, extracting distinguishing features.

The physical basis for breast tumor detection with microwave imaging is the contrast in dielectric properties of normal and malignant breast tissues. Confocal microwave imaging involves illuminating the breast with an ultra-wideband pulse from a number of antenna locations, then synthetically focusing reflections from the breast. The detection of malignant tumors is achieved by the coherent addition of returns from these strongly scattering objects. In this paper, we demonstrate the feasibility of detecting and localizing small (<1 cm) tumors in three dimensions with numerical models of two system configurations involving synthetic cylindrical and planar antenna arrays. Image formation algorithms are developed to enhance tumor responses and reduce early- and late-time clutter. The early-time clutter consists of the incident pulse and reflections from the skin, while the late-time clutter is primarily due to the heterogeneity of breast tissue. Successful detection of 6-mm-diameter spherical tumors is achieved with both planar and cylindrical systems, and similar performance measures are obtained. The influences of the synthetic array size and position relative to the tumor are also explored.

MIRI, the Mid Infrared Instrument, is one of the four scientific instruments on board the James Webb Space Telescope (JWST). MIRI combines spectroscopic and imaging capabilities, including coronagraphy, between 5 and 28 microns with unprecedented sensitivity and spatial and spectral resolution. The required cooling of the detector arrays to around 6 K is provided by a MIRI-dedicated multi-stage mechanical cooler. The JWST science payload is complemented at near-infrared wavelengths, between 0.4 and 5 microns, by a camera (NIRCam), a spectrometer (NIRSpec) and a Fine Guidance Sensor.

This paper proposes an edge-directed interpolation algorithm for natural images. The basic idea is to first estimate local covariance coefficients from a low-resolution image and then use these covariance estimates to adapt the interpolation at a higher resolution based on the geometric duality between the low-resolution covariance and the high-resolution covariance. The edge-directed property of covariance-based adaptation derives from its capability of tuning the interpolation coefficients to match an arbitrarily oriented step edge. A hybrid approach of switching between bilinear interpolation and covariance-based adaptive interpolation is proposed to reduce the overall computational complexity. Two important applications of the new interpolation algorithm are studied: resolution enhancement of grayscale images and reconstruction of color images from CCD samples. Simulation results demonstrate that our new interpolation algorithm substantially improves the subjective quality of the interpolated images over conventional linear interpolation.
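
A simplified single-pixel sketch of the covariance-based step: four interpolation weights are estimated by least squares from a local low-resolution window and then applied to the pixel's diagonal neighbors. The window size and boundary handling are illustrative (interior pixels assumed), and the paper's full scheme covers the remaining high-resolution lattice sites as well:

```python
import numpy as np

def nedi_diagonal_pixel(lr, i, j, win=4):
    """Estimate the HR pixel at the centre of lr[i, j], lr[i, j+1], lr[i+1, j],
    lr[i+1, j+1] using weights adapted to the local covariance of the LR image."""
    ys, Cs = [], []
    for m in range(i - win, i + win):
        for n in range(j - win, j + win):
            # training sample: an LR pixel and its four diagonal LR neighbours
            ys.append(lr[m, n])
            Cs.append([lr[m - 1, n - 1], lr[m - 1, n + 1],
                       lr[m + 1, n - 1], lr[m + 1, n + 1]])
    C, y = np.array(Cs, float), np.array(ys, float)
    a, *_ = np.linalg.lstsq(C, y, rcond=None)        # weights tuned to the local edge orientation
    neigh = np.array([lr[i, j], lr[i, j + 1], lr[i + 1, j], lr[i + 1, j + 1]], float)
    return float(a @ neigh)
```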

Individually visible scratches, also called isolated scratches, are very common in real-world surfaces. Although their microgeometry is not visible, they are individually perceptible by the human eye, lying in a representation scale between BRDF and texture. In order to simulate this kind of scratch in synthetic images, we need to know their position over the surface (texture scale), so we can determine where to use the specific scratch BRDF instead of the ordinary surface BRDF. Computing the BRDF of a scratch is difficult because it depends on the scratch's invisible microgeometry. In this paper, we propose a new physically based model to derive this microgeometry by simulating the formation process of scratches. We allow specifying intuitively the parameters involved in the process, such as the scratching tool, the penetration forces, and the material properties of the object. From these parameters, we derive the microgeometries of the scratches by taking into account the real behaviour of the process. This behaviour has been determined by analysing existing models in the field of materials engineering and some "scratch tests" that we performed on metals. Our method has the advantages of easily simulating scratches with a wide range of microgeometries and taking into account the variability of their microgeometry along the scratch path. Another contribution is related to the location of the scratches over the surface. Instead of using an image of the paths as in previous work, we present a new representation based on curves defining the paths. This offers independence from the image resolution or the distance from the observer and accurately provides the scratch direction in order to compute scratch BRDFs.

This paper presents the design and implementation of a vision-based navigation and landing algorithm for an autonomous helicopter. The vision system allows target areas to be defined from a high resolution aerial or satellite image in order to determine the waypoints of the navigation trajectory or the landing area. The helicopter is required to navigate from an initial position to a final position in a partially known environment using GPS and vision, to locate a landing target (a helipad of a known shape or a natural landmark) and to land on it. The vision system, using a feature-based image matching algorithm, finds the area and gives feedback to the control system for autonomous landing. Vision is used for accurate target detection, recognition and tracking. The helicopter updates its landing target parameters based on vision and uses an on-board behavior-based controller to follow a path to the landing site. Results show the appropriateness of the vision-based approach, which does not require any artificial landmark (e.g., a helipad) and is quite robust to occlusions, light variations and seasonal changes (e.g., brown or green leaves).

A critical issue in image restoration is the problem of Gaussian noise removal while keeping the integrity of relevant image information. Clinical magnetic resonance imaging (MRI) data is normally corrupted by Rician noise from the measurement process, which reduces the accuracy and reliability of any automatic analysis. The quality of ultrasound (US) imaging is degraded by the presence of signal-dependent noise known as speckle, which generally tends to reduce the resolution and contrast, thereby degrading the diagnostic accuracy of this modality. For these reasons, denoising methods are often applied to increase the signal-to-noise ratio (SNR) and improve image quality. This paper proposes a statistical filter, which is a modified version of the Hybrid Median filter for noise reduction: it computes the median of the diagonal elements and the mean of the diagonal, horizontal and vertical elements in a moving window, and finally the median of these two values becomes the new pixel value. The results show that our proposed method outperforms the classical implementations of the Mean, Median and Hybrid Median filters in terms of denoising quality. Comparisons with well-established methods, such as Total Variation, Wavelet and Wiener filters, show that the proposed filter produces better denoising results, preserving the main structures and details.
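
A direct, unoptimized sketch of the filter as described above (median of the diagonal elements, mean of the diagonal plus horizontal plus vertical elements, then the median of those two values); the window size, border handling, and the literal reading of the final combination are assumptions:

```python
import numpy as np

def modified_hybrid_median(img, k=5):
    """Modified hybrid median filter as described in the abstract, applied
    per pixel over an odd-sized k x k moving window (borders left unchanged)."""
    r = k // 2
    out = img.astype(float).copy()
    rows, cols = img.shape
    for y in range(r, rows - r):
        for x in range(r, cols - r):
            win = img[y - r:y + r + 1, x - r:x + r + 1].astype(float)
            diag = np.concatenate([np.diag(win), np.diag(np.fliplr(win))])  # both diagonals
            cross = np.concatenate([win[r, :], win[:, r]])                  # horizontal + vertical
            a = np.median(diag)                        # median of diagonal elements
            b = np.mean(np.concatenate([diag, cross])) # mean of diag + horiz + vert elements
            out[y, x] = np.median([a, b])              # final combination per the abstract
    return out
```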

S. Benton published a definitive taxonomy of the first one hundred and seventy years of 3D displays covering the field up to the year 2000. In this article we review how display technologies have advanced in the last ten years and update Benton's taxonomy to include the latest additions. Our aim is to produce a display taxonomy suitable for content producers highlighting which displays have common requirements for image delivery. We also analyze key technical characteristics of 3D displays and use these characteristics to suggest the future applications for each category of display.