Wavelet Transforms Research Papers - Academia.edu
The aim of this work is to define a procedure to develop diagnostic systems for Printed Circuit Boards, based on Automated Optical Inspection, with low cost and easy adaptability to different features. A complete system to detect mounting defects in the circuits is presented in this paper. A low-cost image acquisition system with high accuracy has been designed to fit this application. Afterward, the resulting images are processed using the Wavelet Transform and Neural Networks, for low computational cost and acceptable precision. The wavelet space represents a compact support for efficient feature extraction with the localization property. The proposed solution is demonstrated on several defects in different kinds of circuits.
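Abstracts like the one above use wavelet sub-band energies as compact features feeding a classifier. A minimal sketch of that idea in NumPy (the Haar transform and the energy features are illustrative choices, not the paper's exact pipeline):

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform (illustrative sketch).
    Returns approximation (LL) and detail (LH, HL, HH) sub-bands."""
    a = img[0::2, 0::2].astype(float)
    b = img[0::2, 1::2].astype(float)
    c = img[1::2, 0::2].astype(float)
    d = img[1::2, 1::2].astype(float)
    ll = (a + b + c + d) / 4.0   # low-pass: local average
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, lh, hl, hh

def wavelet_features(img):
    """Compact feature vector: energy of each detail sub-band,
    as might feed a small neural-network defect classifier."""
    _, lh, hl, hh = haar_dwt2(img)
    return np.array([np.mean(lh**2), np.mean(hl**2), np.mean(hh**2)])
```

A defect-free flat region yields near-zero detail energies, while a misplaced or tilted component raises the energy in the sub-band matching its edge orientation.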
In this paper, we investigate the effectiveness of a financial time-series forecasting strategy which exploits the multiresolution property of the wavelet transform. A financial series is decomposed into an overcomplete, shift-invariant scale-related representation. In transform space, each individual wavelet series is modeled by a separate multilayer perceptron (MLP). To better utilize the detailed information in the lower scales of wavelet coefficients (high frequencies) and general (trend) information in the higher scales of wavelet coefficients (low frequencies), we applied the Bayesian method of automatic relevance determination (ARD) to choose short past windows (short-term history) for the inputs to the MLPs at lower scales and long past windows (long-term history) at higher scales. To form the overall forecast, the individual forecasts are then recombined by the linear reconstruction property of the inverse transform with the chosen autocorrelation shell representation.
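The decomposition step can be sketched with a shift-invariant (à trous) Haar transform; the additive recombination below mirrors the linear reconstruction the abstract mentions, though the actual autocorrelation shell filters differ:

```python
import numpy as np

def atrous_haar(x, levels=3):
    """Shift-invariant (a trous) Haar decomposition (illustrative sketch).
    Returns one detail series per scale plus the final smooth; their sum
    reconstructs the input exactly, so per-scale forecasts can simply be
    added back together."""
    c = np.asarray(x, dtype=float)
    details = []
    for j in range(levels):
        step = 2 ** j
        shifted = np.roll(c, step)      # periodic boundary for simplicity
        c_next = (c + shifted) / 2.0    # low-pass at scale j
        details.append(c - c_next)      # detail (high-pass) at scale j
        c = c_next
    return details, c
```

In the paper's scheme, a separate MLP would forecast each detail series (short input windows at low scales, long windows at high scales) before summing the per-scale forecasts.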
This paper presents a comparative performance analysis of various trend detection methods developed using fuzzy logic, statistical, regression, and wavelet techniques. The main contribution of this paper is the introduction of a new method that uses noise-rejection fuzzy clustering to enhance the performance of trend detection methodologies. A further contribution of this work is a comparative investigation that produced systematic guidelines for selecting a proper trend detection method for different application requirements. Representative physiological variables considered in this paper to examine the trend detection algorithms are: 1) blood pressure signals (diastolic and systolic); and 2) heartbeat rate based on RR intervals of the electrocardiography signal. Furthermore, synthetic physiological data intentionally contaminated with various types of real-life noise have been generated and used to test the performance of trend detection methods and to develop noise-insensitive trend-detection algorithms.
In this paper, we describe an intelligent signal analysis system employing the wavelet transform to solve vehicle engine diagnosis problems. Vehicle engine diagnosis often involves multiple-signal analysis. The developed system first partitions a leading signal into small segments representing physical events or states based on wavelet multi-resolution analysis. Second, by applying the segmentation result of the leading signal to the other signals, the detailed properties of each segment, including inter-signal relationships, are extracted to form a feature vector. Finally, a fuzzy intelligent system is used to learn diagnostic features from a training set containing feature vectors extracted from signal segments at various vehicle states. The fuzzy system applies its diagnostic knowledge to classify signals as abnormal or normal. The implementation of the system is described and experimental results are presented.
This paper presents a robust video watermarking scheme. It has three main characteristics: 1) a sinusoidal signal pattern is embedded as the watermark in selected sub-bands of the 3D dual-tree complex wavelet transform (3D-CWT), which is employed to preserve image quality and improve the robustness of the watermark; 2) at the detection end, the detected peak responses are used to estimate and rectify attacks; and 3) the watermark is confirmed simply and objectively by detecting the prominent peaks in the frequency domain. Experimental results conducted on several popular true-color video sequences demonstrate that the proposed watermarking scheme performs well under common video operations as well as geometric distortions.
An adaptive space-frequency quantization scheme, applied in scalar fashion to wavelet-based compression, is presented. Because of strong demands for detail preservation in lossy image archiving and transmission, as for example in medical applications, different modifications of uniform threshold quantization are considered. The main features of the elaborated procedure are: fitting the threshold value to local data characteristics in a backward way, and quantization step-size estimation for each subband as a forward and backward framework in the optimization procedure. Many tests conducted in a real wavelet compression scheme confirm the significant efficiency of the presented quantization procedures. The achieved total compression effectiveness is promising in spite of the simple coding algorithm.
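The baseline being modified here, uniform threshold (dead-zone) quantization, can be sketched as follows; the threshold and step values are illustrative, whereas the paper adapts them per subband and locally:

```python
import numpy as np

def utq(coeffs, step, threshold):
    """Uniform threshold quantization (sketch): coefficients inside the
    dead zone [-threshold, threshold] map to zero; the rest are indexed
    in uniform bins of width `step` beyond the threshold."""
    q = np.zeros(coeffs.shape, dtype=int)
    keep = np.abs(coeffs) > threshold
    levels = np.floor((np.abs(coeffs[keep]) - threshold) / step) + 1
    q[keep] = (np.sign(coeffs[keep]) * levels).astype(int)
    return q

def dequantize(q, step, threshold):
    """Midpoint reconstruction of each nonzero bin."""
    x = np.zeros(q.shape, dtype=float)
    nz = q != 0
    x[nz] = np.sign(q[nz]) * (threshold + (np.abs(q[nz]) - 0.5) * step)
    return x
```

Reconstruction error for surviving coefficients is bounded by half the step size, which is the quantity the paper's forward/backward optimization tunes per subband.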
The prediction algorithm is one of the most important factors in the quality of wind-power prediction. In this paper, based on the principles of wavelet transform and support vector machines (SVMs), as well as the characteristics of wind-turbine generation systems, two prediction methods are presented and discussed. In method 1, the time series of the model input are decomposed into different frequency modes, and models are set up separately based on SVM theory. The results are combined to forecast the final wind-power output. For comparison purposes, the wavelet kernel function is applied in place of the radial basis function (RBF) kernel function during SVM training in method 2. Operational data from one wind farm in Texas are used. Mean relative error and relative mean square error are used to evaluate the forecasting errors of the two proposed methods and the RBF SVM model. A means of evaluating prediction-algorithm precision is also proposed.
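Method 2 swaps the RBF kernel for a wavelet kernel. A common closed form for such a kernel is the translation-invariant Morlet wavelet kernel shown below; the paper's exact kernel and parameters are not given here, so treat this as an assumed, illustrative choice:

```python
import numpy as np

def wavelet_kernel(x, y, a=1.0):
    """Morlet wavelet kernel (one standard form used in wavelet SVMs):
    K(x, y) = prod_i cos(1.75 (x_i - y_i)/a) * exp(-(x_i - y_i)^2 / (2 a^2)).
    The dilation parameter `a` is illustrative."""
    d = (np.asarray(x) - np.asarray(y)) / a
    return float(np.prod(np.cos(1.75 * d) * np.exp(-d**2 / 2.0)))

def rbf_kernel(x, y, gamma=1.0):
    """Standard RBF kernel, for comparison with method 1's baseline."""
    d = np.asarray(x) - np.asarray(y)
    return float(np.exp(-gamma * np.dot(d, d)))
```

Both kernels equal 1 when the inputs coincide and are symmetric; the cosine factor gives the wavelet kernel an oscillatory similarity profile that can fit oscillating wind-power series.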
Pulmonary embolism is an avoidable cause of death if treated immediately, but delays in diagnosis and treatment lead to an increased risk. Computer-assisted image analysis of both unenhanced and contrast-enhanced computed tomography (CT) has proven useful for diagnosis of pulmonary embolism. Dual-energy CT provides additional information over the standard single-energy scan by generating four-dimensional (4D) data, in our case with 11 energy levels in 3D. In this paper, a 4D texture analysis method capable of detecting pulmonary embolism in dual-energy CT is presented. The method uses wavelet-based visual words together with an automatic geodesic-based region-of-interest detection algorithm to characterize the texture properties of each lung lobe. Results show an increase in performance with respect to the single-energy CT analysis, as well as an accuracy gain compared to preliminary work on a small dataset.
This paper investigates the application of a new concept called the "Overcomplete Discrete Wavelet Transform" (ODWT) to image sequence compression. It has been demonstrated that the translated function with any integer multiple of the sampling period is completely represented in the wavelet space by one of the ODWT members. This theoretical result leads to a new motion estimation and motion compensation scheme working in the wavelet transform domain. Our simulation experiments, performed on real image sequences, show high-quality, low-bit-rate performance. By performing the motion estimation in the wavelet space, quite modest computational complexity is ensured.
An evolutionary procedure for designing adaptive filters based on second-generation wavelet (lifting scheme) packet decomposition for industrial fault detection is presented. The proposed procedure is validated by an experimental case study of induction motor fault diagnosis in an elevator system. Preliminary results on two types of faults, broken rotor bars and static air-gap eccentricity, are discussed, showing encouraging performance.
This paper presents a method of deriving an optimal excitation signal that maximizes the probability of successful fault diagnosis. The approach uses an evolutionary algorithm and wavelet analysis. The diagnosis procedure is conducted by means of a specialized aperiodic excitation. Results are compared with fault diagnosis using unit-step excitation. The method belongs to the simulation-before-test (SBT) class of fault diagnosis procedures and focuses on the case where only the input and output nodes of the integrated circuit under test (CUT) are available.
Images with high resolution give better results for image processing applications. Image resolution enhancement is the process of manipulating an image so that the resultant image is more suitable for a specific application, such as medical, agricultural, or satellite image processing. This paper is based on image resolution enhancement by a combination of the stationary wavelet transform (SWT) and the discrete wavelet transform (DWT). In the DWT, the main loss is in the high-frequency components. The edge loss in the high-frequency components of the DWT is minimized by adding them to the high-frequency components of the SWT, which is a translation-invariant transform. The interpolated high-frequency sub-bands of the DWT are added to the high-frequency sub-bands of the SWT. Then the combined high-frequency sub-bands as well as the input image are interpolated by a factor of α/2 to enlarge the input image by a factor of α. Afterwards all these images are combined using the IDWT to generate a super-resolved image. The algorithm uses the Haar wavelet for the DWT and inverse DWT (IDWT) instead of db9/7. The obtained image is compared with the original high-resolution image. This technique has been tested on standard benchmark images. The quantitative (peak signal-to-noise ratio) and visual results are superior to conventional image resolution enhancement techniques.
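The core idea, enlarging an image while re-injecting shift-invariant high-frequency detail so edges survive, can be sketched as follows. The nearest-neighbour upsampling, single Haar-style detail bands, and fixed weight are simplifying assumptions, not the paper's SWT/DWT/bicubic pipeline:

```python
import numpy as np

def upsample2(img):
    """Nearest-neighbour 2x upsampling (stand-in for the paper's
    wavelet-domain interpolation)."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def swt_details(img):
    """Undecimated (shift-invariant) Haar-like detail bands at full
    resolution: horizontal, vertical and diagonal differences."""
    h = img - np.roll(img, 1, axis=1)
    v = img - np.roll(img, 1, axis=0)
    d = img - np.roll(np.roll(img, 1, axis=0), 1, axis=1)
    return h, v, d

def enhance(img, weight=0.5):
    """Sketch of the fusion idea: enlarge the image, then add back
    upsampled high-frequency detail to counter interpolation blur."""
    h, v, d = swt_details(img.astype(float))
    big = upsample2(img.astype(float))
    return big + weight * (upsample2(h) + upsample2(v) + upsample2(d)) / 3.0
```

A flat region has zero detail bands, so it passes through unchanged; only edge neighbourhoods receive the sharpening correction.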
Diagnosis classifies the present state of operation of the equipment, and prognosis predicts the next state of operation and its remaining useful life. In this paper, a prognosis method for gear faults in dc machines is presented. The proposed method uses the time-frequency features extracted from the motor current as machine health indicators and predicts the future state of fault severity using hidden Markov models (HMMs). Parameter training of HMMs generally requires large amounts of historical data, which are often not available in the case of electrical machines. Methods for computing the parameters from limited data are presented. The proposed prognosis method uses matching pursuit decomposition for estimating state-transition probabilities and experimental observations for computing state-dependent observation probability distributions. The proposed method is illustrated by examples using data collected from the experimental setup.
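The limited-data issue the abstract raises can be illustrated with the simplest estimator of HMM state-transition probabilities: count observed transitions and smooth so scarce run-to-failure histories never produce zero-probability transitions. This add-one smoothing is an assumed stand-in for the paper's matching-pursuit-based estimation:

```python
import numpy as np

def transition_matrix(state_seqs, n_states):
    """Estimate HMM state-transition probabilities from observed state
    sequences, with add-one (Laplace) smoothing so that limited data
    never yields a zero-probability transition."""
    counts = np.ones((n_states, n_states))   # Laplace prior: one pseudo-count
    for seq in state_seqs:
        for a, b in zip(seq[:-1], seq[1:]):  # consecutive state pairs
            counts[a, b] += 1
    # normalise each row into a probability distribution
    return counts / counts.sum(axis=1, keepdims=True)
```

Each row sums to one, so the matrix can drive forward prediction of fault-severity states even from a handful of labeled degradation sequences.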
We develop a new filter which combines spatially adaptive noise filtering in the wavelet domain and temporal filtering in the signal domain. For spatial filtering, we propose a new wavelet shrinkage method, which estimates how probable it is that a wavelet coefficient represents a "signal of interest" given its value, the locally averaged coefficient magnitude, and the global subband statistics. The temporal filter combines a motion detector and recursive time-averaging. The results show that this combination outperforms single-resolution spatio-temporal filters in terms of quantitative performance measures as well as visual quality. Even though our current implementation of the new filter does not allow real-time processing, we believe that an optimized software implementation could be used for real- or near-real-time filtering.
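The shrinkage step can be sketched as a soft mask: each coefficient is scaled by an estimate of the probability that it carries signal, driven by its own magnitude and the locally averaged magnitude. The likelihood-ratio form below is a simplified stand-in for the paper's actual estimator:

```python
import numpy as np

def prob_shrink(coeffs, sigma, win=3):
    """Probability-of-signal wavelet shrinkage (illustrative sketch).
    sigma is the noise standard deviation; `win` is the window for the
    locally averaged coefficient magnitude."""
    c = np.asarray(coeffs, dtype=float)
    # locally averaged magnitude via a moving window
    kernel = np.ones(win) / win
    local = np.convolve(np.abs(c), kernel, mode='same')
    # larger |coefficient| and larger local activity -> mask closer to 1
    r = (c**2 + local**2) / (2.0 * sigma**2)
    p = r / (1.0 + r)
    return p * c
```

Large coefficients (likely edges) pass almost unchanged, while isolated small coefficients (likely noise) are suppressed; the local average lets weak coefficients inside an active neighbourhood survive.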
We present the nonsubsampled contourlet transform and its application to image enhancement. The nonsubsampled contourlet transform is built upon nonsubsampled pyramids and nonsubsampled directional filter banks and provides a shift-invariant directional multiresolution image representation. Existing methods for image enhancement cannot capture the geometric information of images and tend to amplify noise when applied to noisy images, since they cannot distinguish noise from weak edges. In contrast, the nonsubsampled contourlet transform extracts the geometric information of images, which can be used to distinguish noise from weak edges. Experimental results show that the proposed method achieves better enhancement results than a wavelet-based image enhancement method.
Lifting-style implementations of particular wavelets are popular in image coders. We present a 2-D extension and modification to the prediction part of the lifting implementation of the Daubechies 5/3 wavelet. The 2-D prediction filter predicts the value of the next polyphase component according to an edge-orientation estimator of the image. Consequently, the prediction domain is allowed to rotate ±45 degrees in regions with diagonal gradient. The proposed structure can be implemented in horizontal and vertical directions, similar to a 1-D lifting applied to an image. The gradient estimator was inspired by a method for interpolating missing color sensor values in the CCD arrays of image sensors; it is computationally inexpensive, with additional costs of only 6 subtractions per lifting instruction and no multiplications required. We have observed promising coding results with conventional wavelet encoders.

I. Introduction

The 5/3 Daubechies biorthogonal wavelet has received a wide range of interest in various applications due to its filter tap coefficients, which are particularly useful in real-time implementations. Furthermore, the lifting implementation of this wavelet contains filters with coefficients that can be written as powers of two, leading to a multiplication-free realization of the filter bank [1], [2]. Several linear or nonlinear decomposition structures published in the literature report better performance than the 5/3 wavelet using signal-adapted filters, including [2]-[7]. Among these works, [2] shows how to achieve the lifting-style implementation of any DWT filter bank, whereas [3] extends the idea of linear filters in the lifting style to nonlinear filters. In [4], [8], and [7], the lifting prediction filter was made adaptive according to local signal properties, and in [6], the importance of the coder-nonlinear-transform strategy was emphasized. The idea of lifting adaptation was also applied to video processing [9], [10].
Finally, in [5], [11], and [12], 2-D extensions of the lifting structures were examined, which fundamentally resemble the idea of this work. Nevertheless, the 5/3 wavelet has an efficient set of filter coefficients which enables fast, simple, integer-shifts-only implementations, and due to these properties it was also adopted by the JPEG-2000 image coding standard [13], [14] in its lossless mode. 1 A. E. Cetin's work is partially funded by TUBITAK and TUBA (Turkish Academy of Sciences) GEBIP Programme, and O. N.
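The 1-D lifting form of the 5/3 wavelet that this paper extends to 2-D can be written in a few lines; the predict and update weights are 1/2 and 1/4 (powers of two), which is what enables the shift-and-add hardware realization the text describes. This sketch uses periodic boundaries for simplicity, not JPEG-2000's symmetric extension:

```python
import numpy as np

def lift53_forward(x):
    """1-D lifting implementation of the 5/3 wavelet (sketch).
    Returns the smooth (low-pass) and detail (high-pass) channels."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # predict step: detail = odd sample minus average of its even neighbours
    d = odd - (even + np.roll(even, -1)) / 2.0
    # update step: smooth = even sample plus a quarter of neighbouring details
    s = even + (d + np.roll(d, 1)) / 4.0
    return s, d

def lift53_inverse(s, d):
    """Perfect reconstruction: undo the lifting steps in reverse order."""
    even = s - (d + np.roll(d, 1)) / 4.0
    odd = d + (even + np.roll(even, -1)) / 2.0
    x = np.empty(2 * len(s))
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step is trivially invertible, reconstruction is exact; on a linear ramp the detail channel is zero away from the boundary, which is why the predictor is a good fit for smooth image regions.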
Neural network classifiers have been widely used in classification due to their adaptive and parallel processing ability. This paper concerns classification of underwater passive sonar signals radiated by ships using neural networks. The classification process can be divided into two stages: one is signal preprocessing and feature extraction, the other is the recognition process. In the preprocessing and feature extraction stage, the wavelet transform (WT) is used to extract tonal features from the average power spectral density (APSD) of the input data. In the classification stage, two kinds of neural network classifiers are used to evaluate the classification results: the hyperplane-based classifier, the multilayer perceptron (MLP), and the kernel-based classifier, the adaptive kernel classifier (AKC). The experimental results obtained from MLPs with different configurations and algorithms show that the bipolar continuous function permits a wider range and a higher value of the learning rate than the unipolar continuous function. Besides, the AKC with fixed radius (modified AKC) sometimes gives better performance than the AKC, but the former takes more training time in selecting the width of the receptive field. More importantly, networks trained with tonal features extracted by the WT achieve 96% or 94% correct classification rates, whereas training with the original APSDs achieves only an 80% rate. Keywords: Underwater signal classification, neural networks, wavelet transform, multilayer perceptron, adaptive kernel classifier.
A new image compression algorithm is proposed, based on independent Embedded Block Coding with Optimized Truncation of the embedded bit-streams (EBCOT). The algorithm exhibits state-of-the-art compression performance while producing a bit-stream with a rich set of features, including resolution and SNR scalability together with a "random access" property. The algorithm has modest complexity and is suitable for applications involving remote browsing of large compressed images. The algorithm lends itself to explicit optimization with respect to MSE as well as more realistic psychovisual metrics, capable of modeling the spatially varying visual masking phenomenon.
In this paper, we present a patient-adaptable algorithm for ECG heartbeat classification, based on a previously developed automatic classifier and a clustering algorithm. Both classifier and clustering algorithms include features from the RR interval series and morphology descriptors calculated from the wavelet transform. Integrating the decisions of both classifiers, the presented algorithm can work either automatically or with several degrees of assistance. The algorithm was comprehensively evaluated on several ECG databases for comparison purposes. Even in the fully automatic mode, the algorithm slightly improved the performance figures of the original automatic classifier; with fewer than two manually annotated heartbeats (MAHB) per recording, the algorithm obtained a mean improvement for all databases of 6.9% in accuracy A, of 6.5% in global sensitivity S, and of 8.9% in global positive predictive value P+. An assistance of just 12 MAHB per recording resulted in a mean improvement of 13.1% in A, of 13.9% in S, and of 36.1% in P+. In the assisted mode, the algorithm outperformed other state-of-the-art classifiers with less expert annotation effort. The results presented in this paper represent an improvement in the field of automatic and patient-adaptable heartbeat classification, showing that the performance of an automatic classifier can be improved with efficient handling of expert assistance.
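The RR-interval part of the feature set can be sketched directly from R-peak times; these particular features (previous interval, ratio to a local mean capturing prematurity, overall variability) are an illustrative assumption, not the paper's exact descriptors:

```python
import numpy as np

def rr_features(r_peaks_s):
    """RR-interval features of the kind used alongside wavelet morphology
    descriptors for heartbeat classification (feature choice illustrative).
    `r_peaks_s` are R-peak times in seconds."""
    rr = np.diff(np.asarray(r_peaks_s, dtype=float))   # RR intervals (s)
    local_mean = np.convolve(rr, np.ones(5) / 5.0, mode='same')
    return {
        'rr_prev': rr,                  # interval preceding each beat
        'rr_ratio': rr / local_mean,    # prematurity w.r.t. local rhythm
        'sdnn': float(np.std(rr)),      # overall variability
    }
```

A premature (e.g. ectopic) beat shows up as an rr_ratio well below 1 followed by a compensatory value above 1, which is the kind of cue the classifier combines with wavelet morphology.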
The classical solution to the noise removal problem is the Wiener filter, which utilizes the second-order statistics of the Fourier decomposition. Subband decompositions of natural images have significantly non-Gaussian higher-order point statistics; these statistics capture image properties that elude Fourier-based techniques. We develop a Bayesian estimator that is a natural extension of the Wiener solution, and that exploits these higher-order statistics. The resulting nonlinear estimator performs a "coring" operation. We provide a simple model for the subband statistics, and use it to develop a semi-blind noise-removal algorithm based on a steerable wavelet pyramid.
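The "coring" behaviour described above can be illustrated with a simple closed-form nonlinearity: small values (likely noise) are suppressed strongly, large values (likely signal) pass almost unchanged. The exponent mimics heavy-tailed subband statistics; this is an assumed sketch, not the paper's exact Bayesian estimator:

```python
import numpy as np

def coring(y, sigma, p=0.5):
    """Illustrative coring nonlinearity for noisy subband coefficients.
    sigma is the noise level; p < 2 makes the curve more aggressive near
    zero than a linear Wiener gain, reflecting non-Gaussian priors."""
    y = np.asarray(y, dtype=float)
    snr = (np.abs(y) / sigma) ** (2 - p)   # evidence that y carries signal
    return y * snr / (1.0 + snr)           # gain in (0, 1), monotone in |y|
```

Compared with the Wiener filter's constant gain per frequency, this gain depends on each coefficient's own magnitude, which is exactly what the higher-order statistics buy.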
Hand vein biometrics is a recent technology offering systems for identification and authentication, and it ranks among the best biometric modalities in reported results. Like any recognition system, it has four steps: acquisition, enhancement, feature extraction and classification. This paper presents the enhancement step for the SAB11 database, followed by a new adaptive feature extraction method for dorsal hand vein biometrics based on the discrete wavelet transform.
We have invented a new class of linear filters for the detection of spiculated masses and architectural distortions in mammography. We call these spiculation filters. They are narrow-band filters and form a new class of wavelet-type filter banks. In this paper, we show that unmodulated versions of these filters can be used to detect the central mass region of spiculated masses; we refer to these as toroidal Gaussian filters. We also show that the physical properties of spiculated masses can be extracted from the responses of the toroidal Gaussian filters without segmentation.
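A toroidal (ring-shaped) Gaussian kernel, whose response peaks on a circle of a chosen radius, can be constructed in a few lines; the size and radius parameters below are illustrative, not values from the paper:

```python
import numpy as np

def toroidal_gaussian(size, r0, sigma):
    """Ring-shaped Gaussian filter kernel: a narrow-band kernel whose
    weight peaks at radius r0 from the centre, matching the roughly
    circular central region of a spiculated mass."""
    c = (size - 1) / 2.0                       # kernel centre
    y, x = np.mgrid[0:size, 0:size]
    r = np.hypot(x - c, y - c)                 # radial distance map
    k = np.exp(-(r - r0) ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()                         # normalise to unit sum
```

Convolving a mammogram with a bank of such kernels at several radii gives a strong response where a bright, roughly circular central mass of matching size is present, without requiring segmentation first.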
Head injury is a major reason for morbidity and mortality worldwide and traumatic head injuries represent the major cause of neurological disability to a clot or hematoma caused by Haemorrhage (ICH) and is the The most common cause of ICH... more
Head injury is a major reason for morbidity and mortality worldwide and traumatic head injuries represent the major cause of neurological disability to a clot or hematoma caused by Haemorrhage (ICH) and is the The most common cause of ICH normally reported in our country are road traffic accidents (RTA) followed by falls and assaults. India is a populous country with over a billion every 100,000 population with deprived of these doctors. The unavailability of these specialists is a grave concern to the w care to the nation. The mainstay in the diagnosis of an ICH is the CT (Computed Tomography) scan of the head which is the definitive tool for accurate diagnosis of an ICH following trauma and provides an objective assessment of structural damage to brain. Accurate segmentation of the haemorrhage. This study is on segment Keywords: Intracranial decomposition; Brain haemorrhage segmentation is the first step before detecting the been done on the brain haemorrhage detection using methods like Convolutional neural network other efficient and advanced deep learning techniques. But that is resource intensive. It is also nec efficient when there is a large dataset Hssayeni and colleagues multiple slices and made it public. Second, used deep learning methods to perform segmentation and got a dice coefficient of 31% which is good compared to and colleagues [12] propose entropy based automatic unsupervised brain intracranial haemorrhage segmentation which comprises of FCM clustering, thresholding and edge based active contour methods and they get a better result with the combination than FCM clustering and active use deep learning to diagnose brain haemorrhage. They have used LeNet, GoogleNet and Inception dataset consisting of 100 cases collected from 115 hospitals and discovered LeNet is the among the three. 
Brain Haemorrhage Segmentation using Discrete Wavelet Transform
Head injury is a major reason for morbidity and mortality worldwide, and traumatic head injuries represent the major cause of neurological disability. A traumatic brain injury (TBI) is damage to the brain, secondary to a clot or hematoma caused by an accident or any other trauma. This hematoma is known as an intracranial haemorrhage (ICH) and is the most common and serious consequence of head injury, which can be life-threatening. The most common causes of ICH normally reported in our country are road traffic accidents (RTA), followed by falls and assaults. India is a populous country with over a billion people, and there is approximately one radiologist for every 100,000 population, most of them in the urban setup; the Indian rural population of more than 70% is deprived of these doctors. The unavailability of these specialists is a grave concern for the well-being of the health care of the nation. The mainstay in the diagnosis of an ICH is the CT (Computed Tomography) scan of the head, which is the definitive tool for accurate diagnosis of an ICH following trauma and provides an objective assessment of structural damage to the brain. Accurate segmentation of the haemorrhage is the first step before detecting it. This study is on segmentation of brain haemorrhage images using discrete wavelet transforms.
Keywords: Intracranial haemorrhage; Discrete wavelet transforms; Segmentation; Thresholding; Wavelet
I. RELATED WORK
Brain haemorrhage segmentation is the first step before detecting the haemorrhage in the brain. A lot of work has been done on brain haemorrhage detection using methods like convolutional neural networks [2][3][5][11] and other efficient and advanced deep learning techniques, but these are resource intensive and are only effective when a large dataset is available, which is not easy to obtain in the case of brain haemorrhage. Murtada D. Hssayeni and colleagues [1][2] have contributed in two ways: first, they collected a new dataset of 82 CT scans and made it public; second, they used deep learning methods to perform segmentation and obtained a Dice coefficient of 31%, which is good compared to other deep learning techniques on small datasets. Indrajeet Kumar proposes entropy-based automatic unsupervised brain intracranial haemorrhage segmentation, which comprises FCM clustering, thresholding and edge-based active contour methods; the combination gives better results than FCM clustering and active contour methods alone. Tong Duc Phong and colleagues [13] use deep learning to diagnose brain haemorrhage; they used LeNet, GoogLeNet and Inception-ResNet on a dataset consisting of 100 cases collected from 115 hospitals and found LeNet to be the most time-consuming model among the three. Arjun Majumdar and colleagues [8] use a modified version of U-Net to detect brain haemorrhage instead of traditional methods and achieve an overall specificity of 98.6% on the small dataset.
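As a minimal illustration of the segmentation idea, a CT slice can be passed through a single-level 2D wavelet transform and the approximation band thresholded to flag bright (hyperdense) regions. The sketch below uses an orthonormal Haar transform on a toy 4×4 image; the image values, threshold and helper names are illustrative, not taken from the paper.

```python
# Minimal sketch of DWT-based segmentation: single-level 2D Haar
# transform, then thresholding of the approximation (LL) band.

def haar_1d(row):
    """Single-level orthonormal Haar transform of an even-length list."""
    s = 2 ** 0.5
    approx = [(row[2*i] + row[2*i+1]) / s for i in range(len(row) // 2)]
    detail = [(row[2*i] - row[2*i+1]) / s for i in range(len(row) // 2)]
    return approx, detail

def haar_2d(img):
    """Apply Haar along rows, then columns; return the LL band."""
    rows = [haar_1d(r)[0] for r in img]        # low-pass each row
    cols = list(zip(*rows))                    # transpose
    ll = [haar_1d(list(c))[0] for c in cols]   # low-pass each column
    return list(zip(*ll))                      # transpose back: LL band

# Toy 4x4 "CT slice": the bright 2x2 block stands in for a hyperdense bleed.
slice_ = [
    [10, 10, 10, 10],
    [10, 10, 10, 10],
    [10, 10, 90, 90],
    [10, 10, 90, 90],
]
ll = haar_2d(slice_)                           # 2x2 approximation band
mask = [[1 if v > 50 else 0 for v in row] for row in ll]
print(mask)                                    # the bleed survives in LL
```

The bright block remains clearly separable in the LL band, which is why thresholding in wavelet space is an attractive low-cost alternative to data-hungry deep models on small datasets.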
The Joint Photographic Experts Group (JPEG) committee is a joint working group of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The word "Joint" in JPEG, however, does not refer to the joint efforts of ISO and IEC, but to the fact that the JPEG activities are the result of an additional collaboration with the International Telecommunication Union (ITU). Inspired by technology and market evolutions, i.e. the advent of wavelet technology and the need for additional functionality such as scalability, the JPEG committee launched a new standardization process in 1997 that resulted in 2000 in a new standard: JPEG 2000. JPEG 2000 is a collection of standard parts, which together shape the complete toolset. Currently, the JPEG 2000 standard is composed of 13 parts. In this paper, we review these parts and additionally address recent standardization initiatives within the JPEG committee such as JPSearch, JPEG-XR and AIC.
In this paper, a variable and controllable resistor-type fault current limiter (FCL) is introduced for enhancing the transient stability of a single-machine infinite-bus (SMIB) system with a double-circuit transmission line. The optimal value of the resistor during a fault, which maximizes power system stability, is computed. It is shown that this optimal value depends on the fault location and on the pre-fault active power of the synchronous generator, considering changing power demand. To show the effectiveness of the proposed FCL, analytical analysis including a transient stability study and the optimum resistor value calculation is presented. In addition, simulation results using PSCAD/EMTDC software are included to confirm the accuracy of the analytical analysis.
The acoustic signature of an internal combustion (IC) engine contains valuable information regarding the functioning of its components and can be used to detect incipient faults in the engine. Acoustics-based condition monitoring extracts the relevant information from the acoustic signal to identify the health of the system. In the automobile industry, fault diagnosis of engines is generally done by a set of skilled workers who, by merely listening to the sound produced by the engine, certify whether the engine is good or bad, primarily owing to their excellent sensory skills and cognitive capabilities. It would indeed be a challenging task to mimic the capabilities of those individuals in a machine. In the fault diagnosis setup developed here, the acoustic signal emanating from the engine is first captured and recorded; subsequently, the acoustic signal is transformed onto a domain where distinct patterns corresponding to the faults being investigated are visible. Traditionally, acoustic signals are mainly analyzed with spectral analysis, i.e., the Fourier transform, which is not a proper tool for the analysis of IC engine acoustic signals, as they are non-stationary and consist of many transient components. In the present work, an Empirical Mode Decomposition (EMD) and Hidden Markov Model (HMM)-based approach for IC engine fault diagnosis is proposed. EMD is a time-frequency analysis method for nonlinear and non-stationary signals. Using EMD, a complicated signal can be decomposed into a number of intrinsic mode functions (IMFs) based on the local characteristic time scale of the signal. Treating these IMFs as feature vectors, an HMM is applied to classify the IC engine acoustic signal. Experimental results show that the proposed method can be used as a tool in an intelligent autonomous system for condition monitoring and fault diagnosis of IC engines.
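The core of EMD is the sifting step: upper and lower envelopes are built through the local extrema of the signal, and their mean is subtracted. The toy sketch below does one sifting pass with piecewise-linear envelopes (full EMD uses cubic splines and iterates until stopping criteria are met); the signal and all helper names are illustrative.

```python
# A toy single sifting pass from Empirical Mode Decomposition (EMD).

def local_extrema(x):
    maxima = [i for i in range(1, len(x) - 1) if x[i-1] < x[i] > x[i+1]]
    minima = [i for i in range(1, len(x) - 1) if x[i-1] > x[i] < x[i+1]]
    return maxima, minima

def envelope(x, idx):
    """Piecewise-linear envelope through the points (idx, x[idx])."""
    pts = [0] + idx + [len(x) - 1]        # pin the ends of the signal
    env = [0.0] * len(x)
    for a, b in zip(pts, pts[1:]):
        for i in range(a, b + 1):
            t = (i - a) / (b - a) if b > a else 0.0
            env[i] = x[a] + t * (x[b] - x[a])
    return env

def sift_once(x):
    maxima, minima = local_extrema(x)
    upper = envelope(x, maxima)
    lower = envelope(x, minima)
    # subtract the local mean of the two envelopes
    return [xi - (u + l) / 2 for xi, u, l in zip(x, upper, lower)]

signal = [0.0, 2.0, 0.0, -2.0, 0.0, 2.0, 0.0, -2.0, 0.0]
imf_candidate = sift_once(signal)
```

Repeating this pass until the result oscillates symmetrically around zero yields the first IMF; subtracting it and sifting the residue yields the next.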
This paper presents a new method for image enhancement that includes both resolution enhancement and contrast enhancement. The proposed method merges SWT-DWT, which is for resolution enhancement, and CLAHE-SWT, which is for contrast enhancement. SWT is used in combination with DWT for enhancing the resolution of an image. Contrast limited adaptive histogram equalization (CLAHE) is a powerful method for contrast enhancement. SWT is used in combination with CLAHE to mitigate noise effects. The proposed method gives better results than existing techniques, as demonstrated with PSNR, standard deviation, RMSE and visual results. Keywords—Contrast limited adaptive histogram equalization (CLAHE), Discrete wavelet transform (DWT), Stationary wavelet transform (SWT), Bi-cubic interpolation, Peak signal to noise ratio (PSNR)
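The PSNR and RMSE metrics used to score the enhancement are standard and easy to state. A minimal sketch, assuming 8-bit images (peak value 255) flattened to 1D lists; the sample pixel values are illustrative:

```python
import math

def rmse(a, b):
    """Root mean squared error between two equal-length pixel lists."""
    n = len(a)
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / n) ** 0.5

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = rmse(a, b)
    return float("inf") if e == 0 else 20 * math.log10(peak / e)

original = [52, 55, 61, 59]
enhanced = [53, 54, 62, 58]
print(round(psnr(original, enhanced), 2))  # higher PSNR = closer to reference
```

A higher PSNR and lower RMSE against a reference indicate better fidelity, which is how the paper compares against existing techniques.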
This paper presents an overview of the methodologies and algorithms for statistical texture analysis of 2D images. Methods for digital-image texture analysis are reviewed based on available literature and research work either carried out or supervised by the authors.
Induction motors are widely used in transportation, mining, petrochemical, manufacturing and almost every other field dealing with electrical power. These motors are simple, efficient, highly robust and rugged, thus offering a very high degree of reliability. But like any other machine, they are vulnerable to faults which, if left unmonitored, might lead to catastrophic failure of the machine in the long run. On-line condition monitoring of induction motors has been widely used in the detection of faults. This paper delves into the various faults and the study of conventional and innovative techniques for induction motor fault diagnosis, with an identification of future research areas.
This paper describes the use of the wavelet transform for analyzing power system fault transients in order to determine the fault location. Traveling wave theory is utilized in capturing the travel time of the transients along the monitored lines between the fault point and the relay. Time resolution for the high frequency components of the fault transients is provided by the wavelet transform. This information is related to the travel time of the signals, which are first decomposed into their modal components. The aerial mode is used for all fault types, whereas the ground mode is used to resolve problems associated with certain special cases. The wavelet transform is found to be an excellent discriminant for identifying the traveling wave reflections from the fault irrespective of the fault type and impedance. EMTP simulations are used to test and validate the proposed fault location approach for typical power system faults.
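Once the wavelet transform has timestamped the first arrival of the fault transient and its reflection from the fault point, the distance follows from the round-trip time. The numbers below are illustrative, not from the paper:

```python
# Single-ended traveling-wave fault location from two wavelet-detected
# arrival times. All values are example figures.

v = 2.9e8            # aerial-mode propagation speed, m/s (near light speed)
t_first = 100.0e-6   # arrival of the initial wavefront at the relay, s
t_reflect = 300.0e-6 # arrival of the reflection from the fault point, s

# The reflection travels relay -> fault -> relay, hence the factor 2.
distance = v * (t_reflect - t_first) / 2
print(distance / 1000, "km")
```

In practice the arrival instants are taken as the maxima of the finest-scale wavelet detail coefficients, which is exactly the time localization the abstract credits to the transform.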
The measurement and analysis of leakage current (LC) for condition-based monitoring and as a means of predicting flashover of polluted insulators has attracted a lot of research in recent years. Leakage current plays an important role in the detection of an insulator's condition. This paper proposes a method for reducing the noise included in the current signal. The tests were carried out on clean and polluted glass insulators using the surface tracking and erosion test procedure of IEC 60587. A wavelet analysis method is used to compress the leakage current data. Experimental results show that the actual leakage current signals are related to the levels of insulator contamination.
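Wavelet compression of a record like the leakage current amounts to transforming, discarding small detail coefficients, and inverting. A minimal sketch with a single-level orthonormal Haar transform; the samples and the threshold are illustrative:

```python
# Sketch of wavelet compression by coefficient thresholding.
S = 2 ** 0.5

def haar(x):
    a = [(x[2*i] + x[2*i+1]) / S for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / S for i in range(len(x) // 2)]
    return a, d

def ihaar(a, d):
    out = []
    for ai, di in zip(a, d):
        out += [(ai + di) / S, (ai - di) / S]
    return out

lc = [4.0, 4.1, 4.0, 3.9, 9.0, 1.0, 4.0, 4.05]  # toy leakage-current samples
a, d = haar(lc)
d_kept = [di if abs(di) > 0.5 else 0.0 for di in d]  # drop small details
compressed = ihaar(a, d_kept)
kept = sum(1 for di in d_kept if di != 0.0)
print(kept, "of", len(d), "detail coefficients retained")
```

Only the large detail coefficient (the sharp 9.0/1.0 transition) survives, so the reconstruction preserves the discharge-like spike while smoothing the small noise wiggles — the compression/denoising trade the abstract refers to.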
Wavelet-based audio processing is used for sound event detection. The
low-level audio features (timbral or temporal features) are found to be effective in differentiating between different sound events, which is why frequency processing algorithms have become popular in recent times. Wavelet-based sound event detection is found to be effective in detecting sudden onsets in audio signals because it offers unique advantages compared to traditional frequency-based sound event detection using machine learning approaches. In this work, the wavelet transform is applied to the audio to extract features which can predict the occurrence of a sound event using a classical feedforward neural network. Additionally,
this work attempts to identify the optimal wavelet parameters to enhance classification performance. Three window sizes, six wavelet families, four wavelet levels, three decomposition levels and two classifier models are used for the experimental analysis. The UrbanSound8k data is used and a classification accuracy of up to 97% is obtained. Some major observations with regard to parameter estimation are as follows: the wavelet level and wavelet decomposition level should be low; it is desirable to have a large window; however, the window size is limited by the duration of the sound event, and a window size greater than the duration of the sound event will decrease classification performance. Most of the wavelet families can classify the sound events; however, using the Symlet, Daubechies, Reverse biorthogonal and Biorthogonal families will save computational resources (fewer epochs) because they yield better accuracy compared to Fejér-Korovkin and Coiflets. This work conveys that wavelet-based sound event detection seems promising, and can be extended to detect most of the common sounds and sudden events occurring in various environments.
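The feature-extraction step described above can be sketched as: decompose a windowed signal over several wavelet levels and use the energy of each detail band as one classifier input. The sketch uses a Haar transform; the window contents and level count are illustrative, not the paper's settings:

```python
# Wavelet energy features per decomposition level (Haar sketch).
S = 2 ** 0.5

def haar(x):
    a = [(x[2*i] + x[2*i+1]) / S for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / S for i in range(len(x) // 2)]
    return a, d

def wavelet_energy_features(window, levels):
    feats = []
    a = window
    for _ in range(levels):
        a, d = haar(a)
        feats.append(sum(di * di for di in d))  # energy of this detail band
    return feats

# A sharp onset produces energy in the finest detail band.
window = [0.0] * 8 + [1.0, -1.0] * 4
features = wavelet_energy_features(window, 3)
print([round(f, 3) for f in features])
```

Vectors like this, computed per analysis window, are what the feedforward network consumes; a sudden onset shows up as energy concentrated in the low-numbered (fine-scale) features.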
In moderate climates, short fluctuations in solar irradiance and their impact on the distribution grid will become an important issue with regard to the future large-scale application of embedded photovoltaic systems. Several related studies from the past are recalled. The approach that is presented here applies a localized spectral analysis to the solar irradiance and derived quantities in order to determine the power content of fluctuations, depending on their characteristic persistence. Pseudorandom time series of solar irradiance, based on measured values of the instantaneous clearness index, are applied as input data. Power-flow calculations are carried out in order to assess the impact of fluctuating solar irradiance on the grid voltage. The "fluctuation power index" is defined as a measure for the mean-square value of fluctuations of a specific persistence. A typical scenario is simulated, and the results are interpreted.
Spectrum sensing is a key function of cognitive radio to prevent harmful interference with licensed users and identify the available spectrum for improving spectrum utilization. However, detection performance in practice is often compromised by multipath fading, shadowing and receiver uncertainty issues. To mitigate the impact of these issues, cooperative spectrum sensing has been shown to be an effective method to improve detection performance by exploiting spatial diversity. While cooperative gain such as improved detection performance and a relaxed sensitivity requirement can be obtained, cooperative sensing can incur cooperation overhead. The overhead refers to any extra sensing time, delay, energy, and operations devoted to cooperative sensing and any performance degradation caused by cooperative sensing. In this paper, a state-of-the-art survey of cooperative sensing is provided to address the issues of cooperation method, cooperative gain, and cooperation overhead. Specifically, the cooperation method is analyzed by the fundamental components called the elements of cooperative sensing, including cooperation models, sensing techniques, hypothesis testing, data fusion, control channel and reporting, user selection, and knowledge base. Moreover, the impacting factors of achievable cooperative gain and incurred cooperation overhead are presented. The factors under consideration include sensing time and delay, channel impairments, energy efficiency, cooperation efficiency, mobility, security, and wideband sensing issues. The open research challenges related to each issue in cooperative sensing are also discussed.
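The data-fusion element mentioned above is simplest in its hard-decision form: each user reports a 1-bit local decision and the fusion centre combines them. The k-out-of-N rule below covers the classic OR (k=1), AND (k=N) and majority rules as special cases; the decision vector is illustrative:

```python
# Hard-decision fusion for cooperative spectrum sensing.

def k_out_of_n(decisions, k):
    """Declare the channel occupied if at least k users detected it."""
    return sum(decisions) >= k

local = [1, 0, 1, 1, 0]  # 1 = this user detected the primary signal

print(k_out_of_n(local, 1))                    # OR rule
print(k_out_of_n(local, len(local)))           # AND rule
print(k_out_of_n(local, len(local) // 2 + 1))  # majority rule
```

The choice of k trades detection probability against false-alarm probability, which is one concrete face of the cooperative gain versus overhead trade-off the survey analyzes.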
Soft computing is likely to play an important role in science and engineering in the future. The successful applications of soft computing and the rapid growth suggest that the impact of soft computing will be felt increasingly in coming years. Soft Computing encourages the integration of soft computing techniques and tools into both everyday and advanced applications. This Open access peer-reviewed journal serves as a platform that fosters new applications for all scientists and engineers engaged in research and development in this fast growing field.
Power distribution systems play an important role in modern society. The increasing size and capacity of power systems have rendered them more complex, which in turn has led to reduced reliability of such systems. Power distribution systems are always prone to faults, generally due to short circuits, lightning, etc. Fast and proper restoration of outages is crucial to improve system reliability. For quick and adequate recovery actions, such as determining the propriety of carrying out forced line charging, the necessity of network switching, and efficient patrolling, understanding the cause of a fault in an electric power system is essential for system operation. Moreover, unknown faults may add unnecessary costs if effective restoration and identification cannot be done quickly. So, identification of a fault on a transmission line needs to be correct and rapid. However, expert personnel able to accurately recognize the causes of distribution faults are generally lacking, and knowledge about the nature of these faults is not easily transferable from person to person. So, effective means of fault identification need to be encouraged. In this paper, some unconventional approaches for condition monitoring of power systems are studied, comprising the wavelet transform along with the application of soft computing techniques such as artificial neural networks, fuzzy logic, support vector machines, genetic algorithms and hybrid combinations of these.
Incremental training is commonly applied to training recurrent neural networks (RNNs) for applications involving prognosis. As the number of prognostic time-steps increases, the accuracy of prognosis generally decreases, as often seen in long-term prognosis. Revision of the training techniques is therefore necessary to improve the accuracy of long-term prognosis. This paper presents a competitive learning-based approach to long-term prognosis of machine health status. Specifically, vibration signals from a defect-seeded rolling bearing are preprocessed using the continuous wavelet transform (CWT). Statistical parameters computed from both the raw data and the preprocessed data are then utilized as candidate inputs to an RNN. Based on the principle of competitive learning, the input data were clustered for effective representation of similar stages of defect propagation of the bearing being monitored. Analysis has shown that the developed technique is more accurate in predicting bearing defect progression than the incremental training technique.
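The clustering step can be sketched with plain winner-take-all competitive learning: the prototype nearest to each input moves a little toward it, so inputs from similar defect stages end up represented by the same unit. The prototypes, data and learning rate below are illustrative, not the paper's values:

```python
# Winner-take-all competitive learning on 2D toy feature vectors.

def nearest(prototypes, x):
    """Index of the prototype with the smallest squared distance to x."""
    dists = [sum((pi - xi) ** 2 for pi, xi in zip(p, x)) for p in prototypes]
    return dists.index(min(dists))

def train(prototypes, data, lr=0.5, epochs=10):
    for _ in range(epochs):
        for x in data:
            w = nearest(prototypes, x)            # competition
            prototypes[w] = [pi + lr * (xi - pi)  # winner moves toward x
                             for pi, xi in zip(prototypes[w], x)]
    return prototypes

data = [[0.0, 0.1], [0.1, 0.0], [1.0, 0.9], [0.9, 1.0]]  # two defect stages
protos = train([[0.2, 0.2], [0.8, 0.8]], data)
print(nearest(protos, [0.05, 0.05]), nearest(protos, [0.95, 0.95]))
```

After training, each prototype sits near one cluster of inputs, so new vibration-feature vectors can be routed to the RNN configuration associated with their defect stage.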
This article addresses the development of and recent advances in the rapidly growing field of optical pattern recognition. In optical pattern recognition there are two basic approaches, namely matched filtering and associative memories. The first employs optical correlator architectures and the latter uses optical neural networks (NNs). This paper reviews various types of optical correlators and NNs applied to real-time pattern recognition and autonomous tracking. Techniques of scale- and rotation-invariant filtering are also given. Recent approaches using wavelet transform filtering, phase-only filtering, high-capacity composite filters, and phase representation for improvement in pattern discrimination are also provided.
We performed a comparative study to select the efficient mother wavelet (MWT)
basis functions that optimally represent the signal characteristics of the electrical activity
of the human brain during a working memory (WM) task recorded through
electro-encephalography (EEG). Nineteen EEG electrodes were placed on the scalp
following the 10–20 system. These electrodes were then grouped into five recording regions
corresponding to the scalp area of the cerebral cortex. Sixty-second WM task data were
recorded from ten control subjects. Forty-five MWT basis functions from orthogonal
families were investigated. These functions included Daubechies (db1–db20), Symlets
(sym1–sym20), and Coiflets (coif1–coif5). Using ANOVA, we determined the MWT basis
functions with the most significant differences in the ability of the five scalp regions to
maximize their cross-correlation with the EEG signals. The best results were obtained using “sym9” across the five scalp regions. Therefore, the most compatible MWT with the EEG
signals should be selected to achieve wavelet denoising, decomposition, reconstruction, and
sub-band feature extraction. This study provides a reference of the selection of efficient
MWT basis functions.
- by Al-Qazzaz Noor and +1
- •
- Dementia, Wavelet Transforms, Vascular dementia, Biosignals
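The selection criterion used above can be sketched as: for each candidate basis function, take the maximum normalized cross-correlation with the EEG segment and keep the basis that scores highest. The waveforms below are toy stand-ins, not actual sym9 or Daubechies filters:

```python
# Ranking candidate mother wavelets by maximum normalized
# cross-correlation with a signal segment.

def max_xcorr(signal, kernel):
    """Maximum |normalized cross-correlation| over all alignments."""
    n, m = len(signal), len(kernel)
    knorm = sum(k * k for k in kernel) ** 0.5
    best = 0.0
    for s in range(n - m + 1):
        seg = signal[s:s + m]
        snorm = sum(x * x for x in seg) ** 0.5 or 1.0  # avoid /0 on flat seg
        c = sum(x * k for x, k in zip(seg, kernel)) / (snorm * knorm)
        best = max(best, abs(c))
    return best

eeg = [0.0, 1.0, -1.0, 0.0, 0.0, 0.0]                 # toy EEG segment
candidates = {"spiky": [1.0, -1.0], "smooth": [1.0, 1.0]}
scores = {name: max_xcorr(eeg, k) for name, k in candidates.items()}
best = max(scores, key=scores.get)
print(best)
```

The basis whose shape best matches the signal's transients scores highest, mirroring the study's conclusion that the most compatible MWT should be chosen before denoising, decomposition and feature extraction.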
With the computational power available today, machine learning is becoming a very active field, finding applications in our everyday life. One of its biggest challenges is the classification task involving data representation (the preprocessing part of a machine learning algorithm). In fact, classifying linearly separable data is easy. The aim of the preprocessing part is to obtain well-represented data by mapping raw data into a feature space where simple classifiers can be used efficiently. For example, almost everything in audio processing has used MFCC until now. This toolbox provides the basic tools for audio representation in the C++ programming language by implementing the Scattering Network [4], which brings a new and powerful solution for these tasks. The toolkit of reference in scattering analysis is SCATNET from Mallat et al. This tool is an attempt to make some of the SCATNET features more tractable on large datasets. Furthermore, the use of this toolbox is not limited to machine learning preprocessing. It can also be used for more advanced biological analysis, such as animal communication behaviour analysis or any biological study related to signal analysis. One motivation for this work is the collaboration between DI ENS and the University of Toulon through the SABIOB Scaled Acoustic project [15][14]. This toolbox provides out-of-the-box executables that can be run with simple bash commands. Examples are given in the README file. Finally, for each presented algorithm, a graph is provided to summarize how the computation is done in this toolbox.
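The structure a Scattering Network computes — wavelet filtering, modulus, then averaging — can be shown in miniature. The sketch below uses a Haar detail filter as the wavelet (the real toolbox uses proper scattering filter banks, and is written in C++; Python is used here only for illustration):

```python
# First-order scattering coefficient, sketched with a Haar detail
# filter: wavelet filtering -> modulus -> low-pass averaging.
S = 2 ** 0.5

def haar_detail(x):
    return [(x[2*i] - x[2*i+1]) / S for i in range(len(x) // 2)]

def scatter_order1(x):
    d = [abs(v) for v in haar_detail(x)]  # modulus nonlinearity
    return sum(d) / len(d)                # averaging (low-pass)

# The modulus makes the coefficient invariant to a sign flip of the
# oscillation -- a translation/deformation-stability idea in miniature.
a = [1.0, -1.0, 1.0, -1.0]
b = [-1.0, 1.0, -1.0, 1.0]
print(scatter_order1(a), scatter_order1(b))  # identical
```

Cascading this filter-modulus-average block gives higher-order scattering coefficients, which is what makes the representation stable yet discriminative for audio classification.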
Powerful image editing tools like Adobe Photoshop are very common these days, and because of such tools, tampering with images has become very easy. Such tampering with digital images is known as image forgery. The most common type of digital image forgery is copy-move forgery, wherein a part of an image is cut/copied and pasted in another area of the same image. The aim behind this type of forgery may be to hide particularly important details in the image. A method has been proposed to detect copy-move forgery in images. We have developed an image-tamper detection algorithm based on the Discrete Wavelet Transform (DWT). DWT is used for dimension reduction, which in turn increases the accuracy of results. First, DWT is applied to a given image to decompose it into four parts: LL, LH, HL, and HH. Since the LL part contains most of the information, SIFT is applied to the LL part only to extract the key features, find the descriptor vectors of these key features, and then find similarities between the descriptor vectors to conclude whether the given image is forged. This method allows us to detect whether image forgery has occurred or not, and also localizes the forgery, i.e. it shows visually where the copy-move forgery has occurred.
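The pipeline's idea can be shown at toy scale: reduce the image to its Haar LL band, then search the band for duplicated regions. Exact value matching stands in here for the SIFT keypoint matching the paper uses; the 4×4 image is illustrative:

```python
# Toy copy-move detection: Haar LL band + duplicate-value search.

def haar_ll(img):
    """LL band of a single-level 2D Haar transform."""
    s = 2 ** 0.5
    rows = [[(r[2*i] + r[2*i+1]) / s for i in range(len(r) // 2)] for r in img]
    cols = list(zip(*rows))
    ll = [[(c[2*i] + c[2*i+1]) / s for i in range(len(c) // 2)] for c in cols]
    return [list(r) for r in zip(*ll)]

def duplicated_pixels(ll):
    """Coordinate pairs in LL holding the same (possibly copied) value."""
    seen, pairs = {}, []
    for y, row in enumerate(ll):
        for x, v in enumerate(row):
            key = round(v, 6)
            if key in seen:
                pairs.append((seen[key], (y, x)))
            else:
                seen[key] = (y, x)
    return pairs

# 4x4 toy image: the top-left 2x2 patch was copied to the bottom-right.
img = [
    [50, 50, 0, 0],
    [50, 50, 0, 0],
    [0,  0, 50, 50],
    [0,  0, 50, 50],
]
matches = duplicated_pixels(haar_ll(img))
print(matches)  # includes the copied-patch pair
```

Working in the LL band quarters the search space per level — the dimension reduction the abstract credits for both speed and accuracy — while SIFT descriptors (rather than raw values) make the real method robust to noise and recompression.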
In this work, a MatLab interface was developed to systematically quantify the performance of different algorithms for reconstructing medical images obtained by ultrasound. The selected filters apply hard and soft thresholds to the wavelet coefficients of the image exported from the ultrasound scanner. The chosen transforms are the discrete wavelet transform (DWT) [26], [59], symmetric Daubechies wavelets (SDW) [54], [55], dual-tree complex wavelets (DT-CWT) [47], [66], phaselets [28], [29] and curvelets [15][81]. According to the current state of the art, these transforms are among those that achieve the best speckle-noise reduction results.
The platform integrates several objective metrics that use an external reference (peak signal-to-noise ratio, PSNR), a reduced reference (contrast gain, CG), or no reference at all (lateral resolution, RL; axial resolution, RA). It was also designed to quantify these metrics not only over the whole image but also in user-defined regions, varying their position and number of pixels.
This work proposes a consistent method that involves obtaining the image by capturing a known phantom (in this case, a commercial multipurpose phantom) with zones of different impedances and sizes, and generating the reference image from the technical specifications provided by the manufacturer.
Tests were performed on images obtained by capturing a Gammex 403 GS LE phantom with an Esaote MyLab 25 ultrasound scanner, applying hard and soft thresholds to the wavelet coefficients. The different descriptors were compared in regions containing infinitesimal targets and cysts of different impedance. From these tests, it can be concluded that improvements in PSNR do not always reflect an improvement in other important indices.
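The hard and soft thresholding rules applied to the wavelet coefficients are one-liners; the threshold value below is illustrative:

```python
# Hard vs. soft thresholding of wavelet coefficients.

def hard_threshold(c, t):
    """Keep coefficients above the threshold, zero the rest."""
    return c if abs(c) > t else 0.0

def soft_threshold(c, t):
    """Zero small coefficients and shrink the rest toward zero by t."""
    if abs(c) <= t:
        return 0.0
    return (c - t) if c > 0 else (c + t)

coeffs = [0.2, -0.4, 1.5, -2.0]
print([hard_threshold(c, 0.5) for c in coeffs])  # [0.0, 0.0, 1.5, -2.0]
print([soft_threshold(c, 0.5) for c in coeffs])  # [0.0, 0.0, 1.0, -1.5]
```

Hard thresholding preserves the magnitude of the surviving coefficients (sharper edges, more artifacts); soft thresholding shrinks them (smoother result), which is why the platform compares both under several objective metrics.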
This article presents an extensive review of the existing state-of-the-art artifact detection and removal methods from scalp EEG for all potential EEG-based applications and analyses the pros and cons of each method. A general overview of the underlying mechanism of these methods is given, and the corresponding EEG applications are considered keeping the typical signal and artifact characteristics in mind. The limitations and challenges of the existing methods are discussed for particular applications, e.g. automated algorithms, real-time implementation, single-channel EEG recording, or unavailability of a reference channel. In addition, the methods are compared based on their ability to remove certain types of artifacts and their suitability for relevant applications. Finally, the future direction of the current research is proposed and the expected challenges of artifact-handling methods for a variety of applications are described. This review is therefore expected to be helpful for researchers who will develop and/or apply artifact-handling algorithms and techniques for their applications, as well as for those willing to improve existing algorithms or propose new solutions in this particular area of research.
The paper proposes a new image encryption scheme based on chaotic encryption. It provides a fast encryption algorithm based on a coupled chaotic map. The image is encrypted using a pseudorandom key-stream generator, and is partially encrypted by selecting the most important components of the image. To obtain the most important components of an image, the discrete wavelet transform is applied.
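The idea of partial, wavelet-selective encryption can be sketched as: decompose with a single-level Haar transform, then encrypt only the approximation coefficients with a key stream from a chaotic map. A simple logistic map stands in for the paper's coupled chaotic map; the pixel values, seed and map parameter are illustrative:

```python
# Partial image encryption: Haar approximation band XORed with a
# chaotic key stream (logistic map stands in for the coupled map).

def haar(x):
    s = 2 ** 0.5
    a = [(x[2*i] + x[2*i+1]) / s for i in range(len(x) // 2)]
    d = [(x[2*i] - x[2*i+1]) / s for i in range(len(x) // 2)]
    return a, d

def logistic_keystream(x0, n, r=3.99):
    """Byte key stream from iterates of the logistic map x -> r*x*(1-x)."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(int(x * 256) % 256)
    return out

pixels = [120, 130, 90, 95, 200, 210, 40, 45]
approx, detail = haar(pixels)                 # only approx gets encrypted
key = logistic_keystream(0.3141, len(approx))
cipher = [int(a) ^ k for a, k in zip(approx, key)]
plain = [c ^ k for c, k in zip(cipher, key)]  # XOR is its own inverse
print(plain == [int(a) for a in approx])      # decryption recovers the band
```

Encrypting only the approximation band scrambles the visually dominant content at a fraction of the cost of full-image encryption, which is the speed argument behind the partial scheme.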