Wiener filter Research Papers - Academia.edu
The use of frames, or overcomplete dictionaries, for sparse signal representation has been given considerable attention in recent years. The major challenges are good algorithms for sparse approximations, and good methods for choosing or designing frames. We are concerned with the latter, and have developed algorithms for training frames for a class of data and a specific application. The application presented in this paper is denoising of images with additive Gaussian noise. We present a method for training of constrained overlapping frames to be used for denoising of images. Experiments show that the proposed method improves denoising results compared to adaptive Wiener filtering and wavelet denoising.
- Wiener filter, Wavelet Denoising, Gaussian noise
The a priori signal-to-noise ratio (SNR) plays an important role in many speech enhancement algorithms. In this paper we present a data-driven approach to a priori SNR estimation. It may be used with a wide range of speech enhancement techniques, such as the minimum mean square error (MMSE) (log) spectral amplitude estimator, the super-Gaussian joint maximum a posteriori (JMAP) estimator, or the Wiener filter. The proposed SNR estimator employs two trained artificial neural networks, one for speech presence and one for speech absence. The classical decision-directed a priori SNR estimator by Ephraim and Malah is broken down into its two additive components, which serve as the two input signals to the neural networks. Both output nodes are combined to form the new a priori SNR estimate. As an alternative to the neural networks, simple lookup tables are also investigated. Employing these data-driven nonlinear a priori SNR estimators reduces speech distortion, particularly at speech onset, while retaining a high level of noise attenuation during speech absence.
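The decision-directed estimator that the paper decomposes can be sketched in a few lines. This is a minimal illustration of the classical Ephraim-Malah recursion, not the paper's neural-network variant; the function and parameter names are our own:

```python
import numpy as np

def decision_directed_snr(noisy_power, noise_power, prev_clean_power, alpha=0.98):
    """Classical decision-directed a priori SNR estimate (Ephraim & Malah).

    The two additive terms below are exactly the components the paper feeds
    to its two neural networks instead of summing them directly.
    """
    # maximum-likelihood component from the current a posteriori SNR
    ml_term = np.maximum(noisy_power / noise_power - 1.0, 0.0)
    # component carried over from the previous frame's clean-speech estimate
    dd_term = prev_clean_power / noise_power
    return alpha * dd_term + (1.0 - alpha) * ml_term
```

With the conventional alpha near 0.98 the estimate is heavily smoothed, which is precisely the behaviour that causes the speech-onset distortion the data-driven estimators aim to reduce.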
In this paper, we combine the well-established technique of Wiener filtering with an efficient method for robust smoothing: channel smoothing. The main parameters to choose in channel smoothing are the number of channels and the averaging filter. Whereas the number of channels has a natural lower bound given by the noise level and should, for the sake of speed, be as small as possible, the averaging filter is a less obvious choice. Based on the linear behavior of channel smoothing for inlier noise, we derive a Wiener filter applicable for averaging the channels of an image. We show in some experiments that our method compares favorably with established methods.
This paper focuses on microphone arrays for realizing distant-talking speech recognition in real environments. In distant-talking situations, users can speak at arbitrary positions while moving, so accurate talker localization is essential for high-quality speech acquisition with microphone arrays. However, it is very difficult to localize a moving talker in noisy and reverberant environments, and talker localization errors degrade speech recognition performance. One way to solve this problem is to integrate the speech recognition process and the talker localization into a unified framework.
We present an approach for separating two speech signals when only a single recording of their linear mixture is available. For this purpose, we derive a filter, which we call the soft mask filter, using minimum mean square error (MMSE) estimation of the log spectral vectors of the sources given the mixture's log spectral vectors. The soft mask filter's parameters are estimated using the mean and variance of the underlying sources, which are modeled using the Gaussian composite source modeling (CSM) approach. It is also shown that the binary mask filter, which has been used empirically and extensively in single-channel speech separation techniques, is in fact a simplified form of the soft mask filter. The soft mask filtering technique is compared with the binary mask and Wiener filtering approaches when the input consists of male+male, female+female, and male+female mixtures. The experimental results in terms of signal-to-noise ratio (SNR) and segmental SNR show that soft mask filtering outperforms binary mask and Wiener filtering.
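The relation between the two masks can be illustrated with a toy Wiener-style ratio mask. The paper's actual soft mask is derived as an MMSE estimator in the log-spectral domain with CSM source models, so the form below is only a simplified stand-in:

```python
import numpy as np

def soft_mask(p1, p2):
    # Ratio mask from estimated power spectra of the two sources:
    # values near 1 keep the bin for source 1, values near 0 suppress it.
    p1, p2 = np.asarray(p1, dtype=float), np.asarray(p2, dtype=float)
    return p1 / (p1 + p2)

def binary_mask(p1, p2):
    # Hard 0/1 decision per bin: the limiting, simplified case of the soft mask.
    return (np.asarray(p1) > np.asarray(p2)).astype(float)
```

The binary mask assigns each time-frequency bin entirely to the dominant source, which is exactly the hard-threshold simplification of the smooth ratio above.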
Extracting minutiae from fingerprint images is one of the most important steps in automatic fingerprint identification and classification. Minutiae are local discontinuities in the fingerprint pattern, mainly terminations and bifurcations. Most of the minutiae detection methods are based on image binarization while some others extract the minutiae directly from gray-scale images. In this work we compare these two approaches and propose two different methods for fingerprint ridge image enhancement. The first one is carried out using local histogram equalization, Wiener filtering, and image binarization. The second method uses a unique anisotropic filter for direct gray-scale enhancement. The results achieved are compared with those obtained through some other methods. Both methods show some improvement in the minutiae detection process in terms of time required and efficiency.
This paper studies the restoration of Gaussian-blurred images using four deblurring techniques: the Wiener filter, the regularized filter, the Lucy-Richardson deconvolution algorithm, and the blind deconvolution algorithm, with knowledge of the point spread function (PSF) that corrupted the blurred image, for different values of size and alpha, and with subsequent Gaussian noise corruption. The same techniques are applied to a remote sensing image and compared with one another in order to choose the best technique for image restoration. The paper also studies the restoration of Gaussian-blurred images with no information about the PSF, using the same four techniques after guessing the PSF, the number of iterations, and the weight threshold, in order to choose the best guesses for image restoration.
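When the PSF is known, the Wiener restoration compared above can be sketched as a frequency-domain deconvolution. The constant noise-to-signal ratio k below is a hand-tuned assumption, not a value from the paper:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener deconvolution with a known PSF.

    X_hat(f) = conj(H(f)) / (|H(f)|^2 + k) * Y(f), where k approximates the
    noise-to-signal power ratio and regularizes the inversion.
    """
    H = np.fft.fft2(psf, s=blurred.shape)   # transfer function of the blur
    Y = np.fft.fft2(blurred)
    G = np.conj(H) / (np.abs(H) ** 2 + k)   # Wiener restoration filter
    return np.real(np.fft.ifft2(G * Y))
```

With k = 0 this reduces to the pseudo-inverse filter, which amplifies noise wherever |H| is small; a positive k trades a little bias for much better noise behaviour.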
The classical solution to the noise removal problem is the Wiener filter, which utilizes the second-order statistics of the Fourier decomposition. Subband decompositions of natural images have significantly non-Gaussian higher-order point statistics; these statistics capture image properties that elude Fourier-based techniques. We develop a Bayesian estimator that is a natural extension of the Wiener solution, and that exploits these higher-order statistics. The resulting nonlinear estimator performs a "coring" operation. We provide a simple model for the subband statistics, and use it to develop a semi-blind noise-removal algorithm based on a steerable wavelet pyramid.
The problem of solving for the optimal (minimum-noise) error feedback coefficients of recursive digital filters is addressed in the general high-order case. It is shown that when minimum noise variance at the filter output is required, the optimization problem leads to a set of familiar Wiener-Hopf or Yule-Walker equations, demonstrating that optimal error feedback can be interpreted as a special case of Wiener filtering.
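The Wiener-Hopf / Yule-Walker system mentioned above has a Toeplitz structure and can be solved directly. This is a generic sketch of the normal equations, not the paper's error-feedback-specific formulation:

```python
import numpy as np

def wiener_hopf_solve(r, p):
    """Solve R w = p for the optimal coefficients w, where R is the Toeplitz
    autocorrelation matrix built from lags r[0..M-1] and p is the
    cross-correlation vector of length M."""
    M = len(p)
    R = np.array([[r[abs(i - j)] for j in range(M)] for i in range(M)])
    return np.linalg.solve(R, p)
```

For large M, a Levinson-Durbin recursion solves the same system in O(M^2) operations by exploiting the Toeplitz structure instead of generic Gaussian elimination.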
A speech enhancement technique is indispensable to achieve acceptable speech quality in VoIP systems. This paper proposes a Wiener filter optimized to the estimated SNR of noisy speech for speech enhancement. The proposed noise reduction method is applied as preprocessing before speech coding. The performance of the proposed method is evaluated by the PESQ in various noisy conditions. In this paper, the G.711, G.723.1, and G.729A VoIP speech codecs are used for the performance evaluation. The PESQ results show that our proposed noise reduction scheme outperforms the noise suppression in the IS-127 EVRC and the noise reduction in the ETSI standard for the advanced distributed speech recognition front-end.
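The core of an SNR-driven Wiener suppressor is a per-bin gain computed from the estimated a priori SNR. A minimal sketch, assuming the standard gain rule (the paper's exact optimization is not reproduced here):

```python
import numpy as np

def wiener_gain(snr_prior):
    # Standard Wiener gain G = xi / (1 + xi), applied per frequency bin.
    xi = np.asarray(snr_prior, dtype=float)
    return xi / (1.0 + xi)

def enhance_frame(noisy_spectrum, snr_prior):
    # Attenuate each bin of the noisy spectrum before speech coding.
    return wiener_gain(snr_prior) * np.asarray(noisy_spectrum)
```

Bins with high estimated SNR pass nearly unchanged (G near 1), while low-SNR bins are suppressed (G near 0), which is what preserves coded speech quality in noise.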
Ray tracing is a well-known method for photorealistic image synthesis, volume visualization, and rendering. Over the last decade the method has been adopted throughout the research community around the world, and with the advent of high-speed processing units it has been moving from offline rendering towards real-time rendering. The success of ray tracing algorithms lies in the use of acceleration data structures and the processing power of modern CPUs and GPUs. The kd-tree, typically built using the surface area heuristic (SAH), is one of the most widely used of these data structures. The major bottleneck in kd-tree construction is the time consumed in finding optimum split locations. In this paper, we propose a prediction algorithm for animated ray tracing based on Kalman filtering. The algorithm successfully predicts the split locations for the next consecutive frame in the animation sequence, giving good initial starting points for one-dimensional search algorithms to find optimum split locations, in our case parabolic interpolation combined with golden section search. With our technique implemented, we have reduced the running kd-tree construction time by between 78% and 87% for dynamic scenes with 16.8K and 252K polygons, respectively.
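The predictor can be pictured as a small per-split Kalman filter tracking a split plane's position over frames. The constant-velocity state model below is our assumption for illustration; the paper's exact state model is not reproduced:

```python
import numpy as np

def kalman_predict_splits(observations, q=1e-4, r=1e-2):
    """Constant-velocity Kalman filter over a sequence of per-frame split
    locations; returns the one-step-ahead prediction for the next frame,
    which seeds the 1-D search (parabolic interpolation / golden section)."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])      # state transition: position + velocity
    H = np.array([[1.0, 0.0]])                  # we observe position only
    Q = q * np.eye(2)                           # process noise covariance
    R = np.array([[r]])                         # measurement noise covariance
    x = np.array([[observations[0]], [0.0]])    # initial state: first observation, zero velocity
    P = np.eye(2)
    for z in observations[1:]:
        # predict step
        x = F @ x
        P = F @ P @ F.T + Q
        # update step
        y = np.array([[z]]) - H @ x             # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
    return float((F @ x)[0, 0])                 # predicted split for the next frame
```

On a split location that drifts smoothly between frames, the predicted position lands close enough to the optimum that the 1-D refinement converges in very few iterations.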
The paper deals with carrier recovery based on pilot symbols in single-carrier systems. The system model considered in the paper includes the channel's additive white noise and the phase noise that affects the local oscillators used for up/down-conversion. Wiener's method is used to determine the optimal filter for estimating the phase noise, assuming that a sequence of equally spaced pilot symbols is available. Our analysis allows us to capture the cyclostationary performance of the estimate, a phenomenon that is not considered in the previous literature. Closed-form formulas for the transfer function of the optimal filter and for the mean-square phase error are derived for the case where the phase noise is modelled as a random phase walk, and a suboptimal filter is proposed for this case. Numerical results are presented to substantiate the analysis.
A major application area of image processing is image restoration: improving the quality of recorded images. No imaging system produces images of perfect quality, because of degradation caused by various sources. Image restoration techniques deal with images that have been recorded in the presence of one or more sources of degradation. The prime focus of this paper is the restoration performed by different non-linear filters and their drawbacks, which can be removed to an extent using the discrete wavelet transform (DWT) with a chosen threshold. In this paper, pseudo-inverse techniques and the Wiener filter technique are used alongside restoration using the DWT. A comparative analysis shows that the DWT approach is superior to the pseudo-inverse and Wiener techniques.
This lab session aims to introduce the MATLAB software. The topics covered are the manipulation of random variables and vectors, the use of functions, graphical plotting, random number generation, computing the standard deviation and the variance, computing the correlation, and generating random signals using linear time-invariant systems. The second part studies the properties of adaptive filtering (the Wiener filter) using the LMS algorithm.
A critical issue in image restoration is the problem of Gaussian noise removal while keeping the integrity of relevant image information. Clinical magnetic resonance imaging (MRI) data is normally corrupted by Rician noise from the measurement process, which reduces the accuracy and reliability of any automatic analysis. The quality of ultrasound (US) imaging is degraded by the presence of signal-dependent noise known as speckle, which tends to reduce resolution and contrast and thereby degrade the diagnostic accuracy of this modality. For these reasons, denoising methods are often applied to increase the signal-to-noise ratio (SNR) and improve image quality. This paper proposes a statistical filter, a modified version of the hybrid median filter, for noise reduction: it computes the median of the diagonal elements and the mean of the diagonal, horizontal, and vertical elements in a moving window, and the median of these two values becomes the new pixel value. The results show that our proposed method outperforms the classical implementations of the mean, median, and hybrid median filters in terms of denoising quality. Comparison with well-established methods, such as total variation, wavelet, and Wiener filters, shows that the proposed filter produces better denoising results, preserving the main structures and details.
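As a reference point, the classical 3x3 hybrid median filter that the proposed filter modifies can be sketched as follows. This is the textbook baseline, not the paper's modified diagonal/mean variant:

```python
import numpy as np

def hybrid_median_3x3(img):
    """Classical 3x3 hybrid median filter: the output pixel is the median of
    {median of the '+' neighbours, median of the 'x' neighbours, centre pixel}.
    Border pixels are copied unchanged."""
    img = np.asarray(img, dtype=float)
    out = img.copy()
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            # '+' neighbourhood: horizontal and vertical neighbours plus centre
            cross = [img[i-1, j], img[i+1, j], img[i, j-1], img[i, j+1], img[i, j]]
            # 'x' neighbourhood: diagonal neighbours plus centre
            diag = [img[i-1, j-1], img[i-1, j+1], img[i+1, j-1], img[i+1, j+1], img[i, j]]
            out[i, j] = np.median([np.median(cross), np.median(diag), img[i, j]])
    return out
```

Splitting the window into '+' and 'x' sub-neighbourhoods is what lets the hybrid median preserve thin lines and corners that a plain 3x3 median would erode.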
In the diagnosis of medical images, operations such as feature extraction and object recognition play a key role. These operations become difficult if the images are corrupted with noise. Several types of noise are introduced into images during acquisition, transfer, and storage. The main objective is to remove the noise from the input image. Image denoising is a significant challenge for researchers: developing denoising algorithms is difficult, since fine details in a medical image that carry diagnostic information should not be destroyed during noise removal. This work focuses on speckle noise in medical images. Several denoising filters are implemented and their performance compared to find the optimum filter. Noise is added to the input image, and various filters are applied to remove it: the Wiener filter, an adaptive fuzzy filter, and a vector median filter. The frequency domain is used to improve denoising quality: as a first step, the contourlet transform is applied to the noisy image, and the filters are then applied to the transformed image for effective noise removal. The performance of the filters is compared based on PSNR, SSIM, and IQI.
This paper presents new Fibonacci Fourier-like transforms. The proposed transforms establish the relationship between Fibonacci numbers and the conventional discrete Fourier transform. Fast Fibonacci Fourier transforms are also introduced using the properties of the Kronecker product. The proposed transforms are applied to the problem of noise reduction with two new algorithms, sliding double window filtering and fusion sliding window filtering. The primary concept of sliding double window filtering is to process the noisy signals with non-overlapped windows, while the primary concept of fusion sliding window filtering is to process the noisy signals with various weighted filtering methods and overlapped signal values. The results and analysis show the noise reduction achieved on the given noisy gray-level images. The proposed methods are compared with well-known Wiener filtering using images that contain Gaussian noise with variance in the range 0 to 0.3. Visual inspection shows that the noisy parts are smoothed while natural edges are retained.
In this paper we present a novel noise reduction method using coordinate logic (CL) filters, applied to printed-text and handwriting images. CL filters and their associated coordinate logic operations (CLOs) are widely used in practical image processing applications such as noise removal, magnification, opening, closing, skeletonization, coding, edge detection, feature extraction, and fractal modeling. Using coordinate logic filters increases efficiency and simplicity and, above all, execution speed compared to morphological filters. Unlike methods such as Wiener or median filtering, whose effectiveness depends on the noise applied (Gaussian or salt-and-pepper) and on the type of input image (handwritten or printed text), coordinate logic filters are independent of these variations while remaining accurate and fast.
- by S. Mohammad Mostafavi I. and +1
- Image Processing, Pepper, Edge Detection, Text Analysis
Steganography is the science that involves communicating secret data in an appropriate multimedia carrier, e.g., image, audio, and video files. It comes under the assumption that if the feature is visible, the point of attack is evident, thus the goal here is always to conceal the very existence of the embedded data. Steganography has various useful applications. However, like any other science it can be used for ill intentions. It has been propelled to the forefront of current security techniques by the remarkable growth in computational power, the increase in security awareness by, e.g., individuals, groups, agencies, government and through intellectual pursuit. Steganography's ultimate objectives, which are undetectability, robustness (resistance to various image processing methods and compression) and capacity of the hidden data, are the main factors that separate it from related techniques such as watermarking and cryptography. This paper provides a state-of-the-art review and analysis of the different existing methods of steganography along with some common standards and guidelines drawn from the literature. This paper concludes with some recommendations and advocates for the object-oriented embedding mechanism. Steganalysis, which is the science of attacking steganography, is not the focus of this survey but nonetheless will be briefly discussed.
This paper presents several aspects of the application of regularization theory in image restoration. This is accomplished by extending the applicability of the stabilizing functional approach to 2-D ill-posed inverse problems. Image restoration is formulated as the constrained minimization of a stabilizing functional. The choice of a particular quadratic functional to be minimized is related to the a priori knowledge regarding the original object through a formulation of image restoration as a maximum a posteriori estimation problem. This formulation is based on image representation by certain stochastic partial differential equation image models. The analytical study and computational treatment of the resulting optimization problem are subsequently presented. As a result, a variety of regularizing filters and iterative regularizing algorithms are proposed. A relationship between the regularized solutions proposed and optimal Wiener estimation is also identified. The filters and algorithms proposed are evaluated through several experimental results.
We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the non local means (NL-means), based on a non local averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.
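The NL-means idea, averaging samples weighted by the similarity of their surrounding patches, can be sketched compactly in 1-D. The paper works on 2-D images; the 1-D simplification and the parameter values here are our own:

```python
import numpy as np

def nl_means_1d(signal, patch=3, h=0.5):
    """Naive NL-means on a 1-D signal: each sample is replaced by a weighted
    average of all samples, with weights driven by patch similarity."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    half = patch // 2
    padded = np.pad(x, half, mode='edge')
    # one patch per sample, centred on that sample
    patches = np.array([padded[i:i + patch] for i in range(n)])
    out = np.empty(n)
    for i in range(n):
        # mean squared distance between patch i and every other patch
        d2 = np.sum((patches - patches[i]) ** 2, axis=1) / patch
        w = np.exp(-d2 / (h * h))       # similar patches get large weights
        out[i] = np.sum(w * x) / np.sum(w)
    return out
```

Because weights depend on whole-patch similarity rather than spatial distance, repeated structure anywhere in the signal contributes to the average, which is what distinguishes NL-means from local smoothing filters.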
Image denoising is a challenging issue found in diverse image processing and computer vision problems, and various methods have been investigated for it. The essential characteristic of a successful denoising model is that it eliminates noise as far as possible while preserving edges and necessary image information, thereby improving visual quality. This paper reviews significant work in the field of image denoising. Denoising methods can be roughly classified as spatial domain methods, transform domain methods, or hybrids of both that combine their advantages; this work focuses on combining the wavelet transform with spatial domain filters. Numerous algorithms have been published, and each approach has its assumptions, advantages, and limitations depending on the image and the noise. A comparative study of the denoising algorithms, covering both filtering approaches and wavelet-based approaches, has been performed. Some studies use standard measurement parameters to evaluate the techniques, while others apply new measurement parameters.
This paper describes a novel technique for the cancellation of ventricular activity for applications such as P-wave or atrial fibrillation detection. The procedure was thoroughly tested and compared with a previously published method, using quantitative measures of performance. The novel approach estimates, by means of a dynamic time delay neural network (TDNN), a time-varying, nonlinear transfer function between two ECG leads. Best results were obtained using an Elman TDNN with nine input samples and 20 neurons, employing a hyperbolic tangent sigmoid activation in the hidden layer and one linear neuron in the output stage. The method does not require a previous stage of QRS detection. The technique was quantitatively evaluated using the MIT-BIH arrhythmia database and compared with an adaptive cancellation scheme proposed in the literature. Results show the advantages of the proposed approach, and its robustness during noisy episodes and QRS morphology variations.
A simple adaptive least mean square (LMS) type algorithm for channel estimation is developed based on certain modifications to finite-impulse response (FIR) Wiener filtering. The proposed algorithm is nearly blind since it does not require any training sequence or channel statistics, and it can be implemented using only noise variance knowledge. A condition guaranteeing the convergence of the algorithm and theoretical mean square error (MSE) values are also derived. Computer simulation results demonstrate that the proposed algorithm can yield a smaller MSE than existing techniques, and that its performance is close to that of optimal Wiener filtering.
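For reference, the plain trained-reference LMS that the nearly blind algorithm modifies looks like this; the signal lengths and step size are illustrative choices, not values from the paper:

```python
import numpy as np

def lms_channel_estimate(x, d, taps=4, mu=0.05):
    """Standard LMS adaptation of an FIR channel estimate w:
    e = d[n] - w^T x_vec, then w <- w + mu * e * x_vec.
    Unlike the paper's nearly blind variant, this form needs the
    reference (training) signal d."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        x_vec = x[n - taps + 1:n + 1][::-1]   # most recent sample first
        e = d[n] - w @ x_vec                  # instantaneous estimation error
        w = w + mu * e * x_vec                # stochastic-gradient update
    return w
```

Convergence requires mu small relative to the input power; in the noiseless case below the weights settle onto the true channel taps.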
Laser Doppler vibrometers (LDVs) have been widely used in industry inspection. One of the superior characteristics of an LDV is that it can detect and measure extremely tiny vibration of a target at a large distance, with sensitivity on the order of 1µm/s. On the other hand, we have found that most objects near audio sources are vibrated by the sound waves. These two aspects motivate our research into a new application of LDVs, namely remote voice detection from surrounding vibrating objects. However, the detected speech signals may be corrupted by many noise sources, such as laser photon noise, target movements, and background acoustic noise (wind, engine sound, etc.). Therefore, speech enhancement algorithms based on Gaussian bandpass and Wiener filters are designed to effectively improve the intelligibility of the noisy voice signals detected by the LDV system. Experimental results show that remote voice detection via an LDV is very promising when appropriate targets close to human subjects are chosen and the proposed enhancement techniques are used.
Magnetoencephalographic and electroencephalographic recordings are often contaminated by artifacts such as eye movements, blinks, cardiac or muscle activity. These artifacts, whose amplitude may exceed that of brain signals, may severely interfere with the detection and analysis of events of interest. In this paper, we consider a nonlinear approach for cardiac artifacts removal from magnetoencephalographic data, based on Wiener filtering. In recent works, nonlinear Wiener filtering based on reproducing kernel Hilbert spaces and the kernel trick has been proposed. However, the filter parameters are determined by the resolution of a linear system which may be ill-conditioned. To deal with this problem, we introduce three kernel methods which provide powerful tools for solving ill-conditioned problems, namely kernel principal component analysis, kernel partial least squares and kernel ridge regression. A common feature of these methods is that they regularize the solution by assuming an appropriate prior on the class of possible solutions. We avoid the use of QRS-synchronous averaging techniques which may induce distortions in brain signals if artifacts are not well detected. Moreover, our approach shows the nonlinear relation between magnetoencephalographic and electrocardiographic signals.
In the field of artificial intelligence, adaptive learning techniques refer to combinations of artificial neural networks. In this research paper an adaptive learning technique has been implemented to carry out the detection and localization of sound. Two methods are used to detect the pure sound: in the first method, a Wiener filter is used to reduce the amount of noise in a signal and minimize the mean square error (MSE); in the second method, a Wiener filter with bacterial foraging optimization is used for more effective sound enhancement. The two proposed methods are compared, and the results reveal the superiority of the latter.
Received 28 December 2006; accepted 2 March 2007. Available online 15 May 2007.
- by M. Kirkove and +1
- •
- Noise reduction, Oscillations, Wiener filter, Visual Quality
It is well known that encryption provides secure channels for communicating entities. However, due to lack of covertness on these channels, an eavesdropper can identify encrypted streams through statistical tests and capture them for further cryptanalysis. Hence, the communicating entities can use steganography to achieve covertness. In this paper we propose a new form of multimedia steganography called data masking.
Purpose: The quality of an image can be significantly improved by digital deconvolution with the (two-dimensional) point spread function (PSF) of the imaging system. We investigated the significance of this improvement for a projection radiograph of vertebral bone, using commercially available software. Methods: A magnified image of the PSF of a GE Advantx RFX system was obtained directly from a pinhole radiograph of the X-ray source and digitized. Images of vertebral bone obtained using similar technique factors were deconvolved with the PSF images, with due regard for magnification effects and using Wiener filtering to avoid amplifying the effects of noise. Results: The spatial resolution of these restored images was significantly better than the original images, and they were less noisy. A significant improvement in image quality could also be obtained by high-pass filtering using a Butterworth filter and a cut-off frequency matching that of the PSF.
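The restoration step can be sketched as follows. This is a generic frequency-domain Wiener deconvolution in numpy with an assumed Gaussian PSF, a synthetic test pattern, and an assumed noise-to-signal constant `nsr` — not the commercial software used in the paper.

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr):
    # Frequency-domain Wiener deconvolution:
    # F_hat = conj(H) * G / (|H|^2 + NSR); the noise-to-signal ratio NSR
    # keeps the filter from amplifying noise where |H| is small.
    H = np.fft.fft2(psf)
    G = np.fft.fft2(image)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + nsr)))

# Synthetic example: a sharp test pattern blurred by a Gaussian PSF.
rng = np.random.default_rng(2)
yy, xx = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
truth = ((xx // 8 + yy // 8) % 2).astype(float)
psf = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 8.0)
psf = np.fft.ifftshift(psf / psf.sum())          # centre the PSF at (0, 0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(psf)))
blurred += 0.005 * rng.standard_normal(truth.shape)
restored = wiener_deconvolve(blurred, psf, nsr=1e-3)
```

Setting `nsr=0` reduces this to the naive inverse filter, which amplifies noise wherever the PSF's spectrum is small — the effect the Wiener regularization avoids.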
This paper introduces a new multiscale speckle reduction method based on the extraction of wavelet interscale dependencies to visually enhance the medical ultrasound images and improve clinical diagnosis. The logarithm of the image is first transformed to the oriented dual-tree complex wavelet domain. It is then shown that the adjacent subband coefficients of the log-transformed ultrasound image can be successfully modeled using the general form of bivariate isotropic stable distributions, while the speckle coefficients can be approximated using a zero-mean bivariate Gaussian model. Using these statistical models, we design a new discrete bivariate Bayesian estimator based on minimizing the mean square error (MSE). To assess the performance of the proposed method, four image quality metrics, namely signal-to-noise ratio, MSE, coefficient of correlation, and edge preservation index, were computed on 80 medical ultrasound images. Moreover, a visual evaluation was carried out by two medical experts. The numerical results indicated that the new method outperforms the standard spatial despeckling filters, homomorphic Wiener filter, and new multiscale
As Cosmic Microwave Background (CMB) measurements are becoming more ambitious, the issue of foreground contamination is becoming more pressing. This is especially true at the level of sensitivity, angular resolution and for the sky coverage of the planned space experiments MAP and Planck. We present in this paper an indicator of the accuracy of the separation of the CMB anisotropies from those induced by foregrounds.
In this study, noise reduction in images using a noise-free signal estimate is proposed. The dyadic stationary wavelet transform is used both for the Wiener filter and for estimating the noise-free signal. Our goal is to find a suitable filter bank and to choose the other parameters of the Wiener filter with respect to the obtained signal-to-noise ratio (SNR). Testing was performed on standard images corrupted with noise. The artificial interference was created from generated white Gaussian noise whose power spectrum was modified according to a model of the power spectrum of the image. To improve filtering performance, the filter parameters are set adaptively according to the level of interference in the input signal. The average SNR over the whole test database is increased by about 10.6 dB. The proposed algorithm provides better results than the classic wavelet Wiener filter.
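The core idea — Wiener shrinkage of wavelet coefficients guided by a noise-free pilot estimate — can be sketched in numpy with a one-level Haar transform. The oracle pilot (the clean signal itself) and the 1-D setting are simplifying assumptions standing in for the paper's estimated pilot and 2-D stationary transform.

```python
import numpy as np

def haar(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail band
    return a, d

def ihaar(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(3)
n = 512
clean = np.cos(2 * np.pi * np.arange(n) / n)
sigma = 0.3
noisy = clean + sigma * rng.standard_normal(n)

# Empirical Wiener shrinkage of the detail band: the pilot supplies the
# per-coefficient signal power s^2, giving the gain s^2 / (s^2 + sigma^2).
# The Haar transform is orthonormal, so the noise level per coefficient
# is still sigma.
a_n, d_n = haar(noisy)
_, d_p = haar(clean)                   # oracle pilot, for illustration only
gain = d_p ** 2 / (d_p ** 2 + sigma ** 2)
denoised = ihaar(a_n, gain * d_n)

mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((denoised - clean) ** 2)
```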
Electroencephalogram (EEG) activity related to fast eye movements (saccades) has been the subject of application-oriented research by our group toward developing a brain-computer interface (BCI). Our goal is to develop a novel BCI based on the eye movement system, employing EEG signals on-line. Most analysis of saccade-related EEG data has been performed using ensemble averaging approaches. However, ensemble averaging is not suitable for BCI. In order to process raw EEG data in real time, we performed saccade-related EEG experiments and processed the data using the non-conventional Fast ICA with Reference signal (FICAR). Using the FICAR algorithm, we were able to successfully extract desired independent components (ICs) that are correlated with a reference signal. Visually guided saccade tasks were performed, and the EEG signal generated during the saccades was recorded. The EEG processing was performed in three stages: PCA preprocessing and noise reduction, extraction of the desired IC using a Wiener filter with a reference signal, and post-processing using higher-order-statistics Fast ICA based on maximization of kurtosis. From the experimental results and analysis we found that, using FICAR, it is possible to extract the saccade-related ICs from raw EEG data and to predict a saccade 4 ms before the real eye movement occurs. For single-trial EEG data we successfully extracted the desired ICs with a recognition rate of 72%.
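The Wiener-filter-with-reference stage can be sketched on synthetic multichannel data: solve w = Rxx⁻¹ rxd, where rxd is the cross-correlation with the reference. The three "channels", the mixing matrix, and the noisy reference below are illustrative assumptions, not the FICAR pipeline.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
s1 = np.sign(np.sin(2 * np.pi * 7 * np.arange(n) / n))   # saccade-like source
s2 = rng.standard_normal(n)                              # background activity
A = np.array([[1.0, 0.6], [0.4, 1.0], [0.8, 0.3]])       # mixing to 3 channels
X = A @ np.vstack([s1, s2]) + 0.05 * rng.standard_normal((3, n))
ref = s1 + 0.2 * rng.standard_normal(n)                  # noisy reference

# Wiener solution w = Rxx^{-1} rxd: the linear combination of channels
# whose output best matches the reference in the MMSE sense.
Rxx = X @ X.T / n
rxd = X @ ref / n
w = np.linalg.solve(Rxx, rxd)
y = w @ X
corr = abs(np.corrcoef(y, s1)[0, 1])
```

Even though the reference itself is noisy, the filter output tracks the underlying component, since the reference noise is uncorrelated with the channels.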
Digital watermarking is a vital process for protecting the copyright of images. This paper presents a method of embedding a private robust watermark into a digital image. The full complex form of the Wiener filter is used to extract the signal from the watermarked image. This is shown to outperform the more conventional approximate form. The results are shown to be extremely insensitive to noise.
This paper presents a novel adaptive reduced-rank multiple-input-multiple-output (MIMO) equalization scheme and algorithms based on alternating optimization design techniques for MIMO spatial multiplexing systems. The proposed reduced-rank equalization structure consists of a joint iterative optimization of the following two equalization stages: 1) a transformation matrix that performs dimensionality reduction and 2) a reduced-rank estimator that retrieves the desired transmitted symbol. The proposed reduced-rank architecture is incorporated into an equalization structure that allows both decision feedback and linear schemes to mitigate interantenna interference (IAI) and intersymbol interference (ISI). We develop alternating least squares (LS) expressions for the design of the transformation matrix and the reduced-rank estimator, along with computationally efficient alternating recursive least squares (RLS) adaptive estimation algorithms. We then present an algorithm that automatically adjusts the model order of the proposed scheme. An analysis of the LS algorithms is carried out, along with sufficient conditions for convergence and a proof of convergence of the proposed algorithms to the reduced-rank Wiener filter. Simulations show that the proposed equalization algorithms outperform existing reduced-rank and full-rank algorithms while requiring a comparable computational cost.
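To make the two-stage structure concrete, here is a simpler stand-in (not the authors' alternating LS/RLS design): a reduced-rank Wiener estimator whose transformation matrix is built from the top eigenvectors of the input covariance. The channel model and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
M, n, D = 8, 4000, 3                                 # channels, samples, rank
d = np.sign(rng.standard_normal(n))                  # desired symbols
A = rng.standard_normal((M, 2))                      # toy channel mixing
X = A @ np.vstack([d, rng.standard_normal(n)]) + 0.1 * rng.standard_normal((M, n))

Rxx = X @ X.T / n
rxd = X @ d / n

# Stage 1: rank-D transformation from the top D eigenvectors of Rxx.
# Stage 2: a D-dimensional Wiener estimator in the reduced space.
vals, vecs = np.linalg.eigh(Rxx)
S = vecs[:, -D:]                                     # M x D reduction matrix
w_bar = np.linalg.solve(S.T @ Rxx @ S, S.T @ rxd)    # reduced-rank estimator
d_hat = (S @ w_bar) @ X
ber = np.mean(np.sign(d_hat) != d)
```

The paper's contribution is precisely to replace this eigen-decomposition with a jointly optimized, adaptively updated pair (S, w_bar); the sketch only shows the structure being optimized.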
This research work proposes and explores different wavelet methods for digital image denoising, using several wavelet thresholding techniques such as SUREShrink, VisuShrink, and BayesShrink in search of an efficient image denoising method. In this paper, we extend the existing technique and provide a comprehensive evaluation of the proposed method. Wiener filtering is the proposed method that was compared and analysed, and the performance of all the techniques was compared to ascertain the most efficient method.
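One of the thresholding rules named here can be sketched in numpy: VisuShrink's universal threshold with soft thresholding, applied to a one-level Haar transform of a synthetic 1-D signal. The signal, noise level, and single-level transform are illustrative simplifications of the paper's image experiments.

```python
import numpy as np

def haar(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation band
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail band
    return a, d

def ihaar(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(6)
n = 1024
clean = np.where(np.arange(n) < n // 2, 1.0, -1.0)   # piecewise-constant signal
sigma = 0.2
noisy = clean + sigma * rng.standard_normal(n)

a, d = haar(noisy)
# VisuShrink universal threshold t = sigma * sqrt(2 ln n), applied to the
# detail band by soft thresholding (shrink each coefficient toward 0 by t).
t = sigma * np.sqrt(2 * np.log(n))
d_shrunk = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)
denoised = ihaar(a, d_shrunk)

mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((denoised - clean) ** 2)
```

SUREShrink and BayesShrink keep the same soft-thresholding operator but choose `t` adaptively per subband instead of using the universal value.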
This paper undertakes a study of restoring Gaussian-blurred images using four deblurring techniques: the Wiener filter, the regularized filter, the Lucy-Richardson deconvolution algorithm, and the blind deconvolution algorithm, given knowledge of the point spread function (PSF) of the blurred image with different values of size and alpha, further corrupted by Gaussian noise. The same is applied to a remote sensing image, and the techniques are compared with one another so as to choose the best technique for image restoration or deblurring. The paper also studies restoring a Gaussian-blurred image with no information about the PSF, using the same four techniques after guessing the PSF, the number of iterations, and its weight threshold, in order to choose the best guesses for restoring or deblurring an image with these techniques.
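Of the four techniques, the Lucy-Richardson algorithm can be sketched compactly in numpy. This is a minimal circular-convolution version with an assumed Gaussian PSF and a noise-free synthetic image, not the paper's experimental setup.

```python
import numpy as np

def fftconv(img, kernel):
    # Circular 2-D convolution via the FFT (kernel centred at index (0, 0)).
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

def richardson_lucy(blurred, psf, n_iter=50):
    # Multiplicative Richardson-Lucy updates; the mirrored PSF implements
    # the correlation step of the algorithm.
    psf_mirror = np.roll(psf[::-1, ::-1], (1, 1), axis=(0, 1))
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        ratio = blurred / (fftconv(estimate, psf) + 1e-12)
        estimate = estimate * fftconv(ratio, psf_mirror)
    return estimate

# Synthetic test: a nonnegative pattern blurred by a Gaussian PSF (no noise).
yy, xx = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
truth = 0.1 + ((xx // 8 + yy // 8) % 2).astype(float)
psf = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 4.0)
psf = np.fft.ifftshift(psf / psf.sum())
blurred = fftconv(truth, psf)
restored = richardson_lucy(blurred, psf, n_iter=50)
```

Unlike the Wiener filter, the update is nonlinear and preserves nonnegativity, which is why the iteration count becomes a tuning parameter when noise is present.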
In a fast growing industrial world, carriers are required to carry products from one manufacturing plant to another which are usually in different buildings or separate blocks. This study intends to automate this sector using vision controlled mobile robots instead of laying railway tracks which are both expensive and inconvenient. To achieve this purpose an autonomous robot with computer vision as its primary sensor for gaining information about its environment for path following is developed. The proposed Line Follower Robot (LFR) consists of web cam mounted on the vehicle and connected to Matlab platform. A PID control algorithm will be applied to adjust the robot on the line. The proposed LFR is accomplished through the following stages: Firstly, the image is acquired using the web cam. The acquired RGB image is converted to another color coordinates for testing and comparing to choose the best color space. After that, the image contrast is enhanced using histogram equalization ...
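The PID stage can be sketched with a toy kinematic model in which the controller drives the robot's lateral offset from the line toward zero. The gains and the one-line "kinematics" are illustrative assumptions, not values from the study.

```python
# Minimal PID controller; err is setpoint minus measurement.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid = PID(kp=2.0, ki=0.1, kd=0.4, dt=0.05)
offset = 1.0                      # initial lateral offset (arbitrary units)
history = [offset]
for _ in range(300):
    steer = pid.step(0.0 - offset)        # error = setpoint - measurement
    offset += steer * 0.05                # toy model: steering shifts offset
    history.append(offset)
```

In the real robot the "measurement" would be the line position extracted from the processed camera frame, and the output would drive the motors' differential speed.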