Entropy Indices Based Fault Detection

Approximate Entropy as a diagnostic tool for machine health monitoring

Mechanical Systems and Signal Processing, 2007

This paper presents a new approach to machine health monitoring based on Approximate Entropy (ApEn), a statistical measure that quantifies the regularity of a time series, such as a vibration signal measured from an electrical motor or a rolling bearing. As the working condition of a machine deteriorates due to the initiation or progression of structural defects, the number of frequency components in the vibration signal increases, decreasing its regularity and increasing its corresponding ApEn value. After introducing the theoretical framework, a numerical simulation of an analytic signal is presented that establishes a quantitative relationship between the severity of signal degradation and the ApEn value. The simulation results are then verified experimentally through vibration measurements on a realistic bearing test bed. The study shows that ApEn can effectively characterise the severity of structural defects, with good computational efficiency and high robustness.
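As a concrete illustration of the idea (a minimal sketch, not the paper's exact implementation), ApEn can be computed with NumPy as follows; the embedding dimension m = 2 and tolerance r = 0.2·std are the conventional defaults:

```python
import numpy as np

def approximate_entropy(x, m=2, r_factor=0.2):
    """Approximate Entropy (ApEn) of a 1-D signal.
    m: embedding dimension; r_factor: tolerance as a fraction of the std."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    r = r_factor * np.std(x)

    def phi(m):
        # all overlapping templates of length m
        templates = np.array([x[i:i + m] for i in range(N - m + 1)])
        # Chebyshev distance between every pair of templates
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        # fraction of templates within tolerance r (self-matches included)
        C = np.mean(dist <= r, axis=1)
        return np.mean(np.log(C))

    return phi(m) - phi(m + 1)

# A regular (periodic) signal should yield a lower ApEn than broadband noise,
# mirroring the paper's regularity argument.
t = np.arange(300)
regular = np.sin(0.2 * t)
noisy = np.random.default_rng(0).standard_normal(300)
print(approximate_entropy(regular) < approximate_entropy(noisy))
```

The pairwise-distance matrix keeps the code short at the cost of O(N²) memory, which is acceptable for the short records typically used in condition monitoring.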

Enhanced diagnostic certainty using information entropy theory

Advanced Engineering Informatics, 2003

As is well known, information entropy is a basic notion in cybernetics. This paper summarizes several applications of information entropy from our machinery diagnostics research. First, we use it as a quantitative measure of equipment diagnosability and maintainability. Second, we introduce a frequency-domain complexity criterion to evaluate the complexity of diagnostic signals in rotating machinery. A new criterion, the index of orbit complexity, is then proposed to evaluate the dynamic quality of rotor systems during operation. Finally, the entropy distance is applied as an effective diagnostic feature to discriminate potential faults inside operating machinery. Practical case studies and experiments show its effectiveness.
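One plausible reading of a frequency-domain complexity criterion (an illustrative assumption, not necessarily the authors' exact definition) is the Shannon entropy of the normalized power spectrum: a pure tone concentrates its power in a few bins and scores low, while a complex broadband signal scores high:

```python
import numpy as np

def spectral_entropy(x):
    """Shannon entropy (nats) of the normalized power spectrum:
    low for a pure tone, high for broadband content."""
    psd = np.abs(np.fft.rfft(np.asarray(x, dtype=float))) ** 2
    p = psd / psd.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-np.sum(p * np.log(p)))

t = np.arange(1024)
tone = np.sin(0.3 * t)                             # simple spectrum
noise = np.random.default_rng(1).standard_normal(1024)  # complex spectrum
print(spectral_entropy(tone) < spectral_entropy(noise))
```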

Entropy measures for early detection of bearing faults

Physica A: Statistical Mechanics and its Applications, 2019

Highlights:
• We study 12 entropy-based features for monitoring and detection of bearing faults.
• The proposed methodology is tested on two real bearing vibration signal datasets.
• Entropy is shown to be a valuable tool for early detection of anomalies in bearings.
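The highlights do not list the 12 features, but permutation entropy is a representative entropy measure commonly applied to bearing vibration signals; the sketch below is illustrative and not necessarily one of the paper's features:

```python
import numpy as np
from math import factorial

def permutation_entropy(x, m=3, tau=1):
    """Normalized permutation entropy: Shannon entropy of ordinal patterns
    of length m (lag tau), scaled to [0, 1] by log(m!)."""
    x = np.asarray(x, dtype=float)
    counts = {}
    n = len(x) - (m - 1) * tau
    for i in range(n):
        # ordinal pattern = ranking of m lagged samples
        pattern = tuple(np.argsort(x[i:i + (m - 1) * tau + 1:tau]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float) / n
    return float(-np.sum(p * np.log(p)) / np.log(factorial(m)))

# A monotonic ramp has a single ordinal pattern (entropy 0);
# random noise uses all patterns (entropy near 1).
print(permutation_entropy(np.arange(100)))
print(permutation_entropy(np.random.default_rng(0).standard_normal(500)) > 0.5)
```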

A Maximum Entropy Based Approach to Fault Diagnosis Using Discrete and Continuous Features

This paper presents a new maximum entropy (ME) based hybrid inference engine to improve the accuracy of diagnostic decisions using mixed continuous-discrete variables. By fusing the complementary fault information provided by discrete and continuous fault features, false alarms due to misclassification and modeling uncertainty can be significantly reduced. Simulation results using a three-tank benchmark system have clearly illustrated the advantages of diagnostics based on mixed continuous-discrete variables. Moreover, in the presence of significant measurement noise, simulation results show that the proposed ME method achieves better performance than the support vector machine classifier.
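The hybrid inference engine itself is specific to the paper, but the maximum entropy principle it builds on can be sketched in a few lines: among all distributions satisfying a moment constraint, pick the one with greatest entropy, which has exponential form p_i ∝ exp(λ·v_i). The solver below (an illustrative sketch, not the authors' engine) finds the multiplier λ by bisection:

```python
import numpy as np

def max_entropy_pmf(values, target_mean, tol=1e-10):
    """Maximum-entropy pmf over discrete `values` subject to a mean
    constraint; p_i ∝ exp(lam * v_i), with lam found by bisection."""
    v = np.asarray(values, dtype=float)

    def mean_for(lam):
        w = np.exp(lam * (v - v.max()))   # shift exponent for stability
        p = w / w.sum()
        return float(p @ v)

    lo, hi = -50.0, 50.0                  # mean_for is increasing in lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(lam * (v - v.max()))
    return w / w.sum()

# With the target mean equal to the unconstrained mean, the
# maximum-entropy solution is the uniform distribution.
print(np.round(max_entropy_pmf([0, 1, 2, 3], target_mean=1.5), 4))
```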

Entropy principal component analysis and its application to nonlinear chemical process fault diagnosis

Asia-Pacific Journal of Chemical Engineering, 2014

Most chemical processes exhibit nonlinear variation. In this paper, an improved principal component analysis method using information entropy theory, called entropy principal component analysis (EPCA), is proposed for nonlinear chemical process fault diagnosis. This approach applies information entropy theory to build an explicit nonlinear transformation, which provides a convenient way to extend principal component analysis to nonlinear processes. The information entropy can capture the Gaussian and non-Gaussian information in the measured variables via probability density function estimation. From the entropies of the original measured variables, the entropy principal components are calculated using a simple eigenvalue decomposition, and two monitoring statistics are built for fault detection. Once a fault is detected, EPCA similarity factors between the detected fault dataset and historical fault pattern datasets are computed for fault recognition. Simulations on a continuous stirred tank reactor system show that EPCA performs well in terms of fault detection and recognition.
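EPCA inserts an entropy transform before the decomposition; the underlying linear monitoring step is standard PCA with the two usual statistics, Hotelling's T² (variation inside the model subspace) and SPE/Q (residual outside it). A minimal sketch of that linear step, without the entropy transform:

```python
import numpy as np

def fit_pca_monitor(X_train, n_comp=1):
    """Fit a linear PCA monitor: mean, retained loadings P and
    eigenvalues lam for the T^2 and SPE (Q) statistics."""
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train - mu, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]
    P = eigvec[:, order[:n_comp]]           # retained loadings
    lam = eigval[order[:n_comp]]
    return mu, P, lam

def t2_spe(x, mu, P, lam):
    xc = x - mu
    t = P.T @ xc                            # scores in the model subspace
    t2 = float(np.sum(t**2 / lam))          # Hotelling T^2
    resid = xc - P @ t                      # reconstruction residual
    return t2, float(resid @ resid)         # SPE (Q) statistic

rng = np.random.default_rng(2)
base = rng.standard_normal((500, 1))
# four sensors driven by one latent variable plus small noise
X = np.hstack([base + 0.1 * rng.standard_normal((500, 1)) for _ in range(4)])
mu, P, lam = fit_pca_monitor(X, n_comp=1)
normal = X[0]
faulty = normal + np.array([3.0, 0, 0, 0])  # break the correlation structure
print(t2_spe(faulty, mu, P, lam)[1] > t2_spe(normal, mu, P, lam)[1])
```

A sensor bias that violates the learned correlation structure inflates the SPE statistic even when each variable stays within its normal range.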

Intelligent Condition Indices in Fault Diagnosis

Automatic fault detection enables reliable condition monitoring even when long periods of continuous operation are required. Dimensionless indices provide useful information on different faults, and even more sensitive solutions can be obtained by selecting suitable features. These indices combine two or more features, e.g. root-mean-square values and peak values. Additional features can be introduced by analysing signal distributions, for example. The features are generated directly from the higher order derivatives of the acceleration signals, and the models can be based on data or expertise. Generalised moments and norms introduce efficient new features which even alone can provide good solutions with automation systems, but combining several easily calculated features is an efficient approach for intelligent sensors. The nonlinear scaling used in the linguistic equation approach extends the idea of dimensionless indices to nonlinear systems. Indices are obtained from these scaled values by means of linear equations. Indices detect differences between normal and faulty conditions and provide an indication of the severity of the faults. They can even classify different faults in case-based reasoning (CBR) type applications. Additional model complexity, e.g. response surface methods or neural networks, does not provide any practical improvements in these examples. The indices are calculated with problem-specific sample times, and variation with time is handled as uncertainty by presenting the indices as time-varying fuzzy numbers. The classification limits can also be considered fuzzy. Condition indices can be obtained from the degrees of membership which are produced by the reasoning system. Practical long-term tests have been performed e.g. for diagnosing faults in bearings, in supporting rolls of lime kilns and for the cavitation of water turbines. 
The indices obtained from short samples are intended to be used in the same way as process measurements in process control. The new indices are consistent with the measurement index MIT and the health index SOL developed for condition monitoring.
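The generalized norms mentioned above can be sketched as lp norms of the acceleration signal and its derivatives (a minimal illustration, not the chapter's full feature set): p = 2 recovers the rms value, and large p approaches the peak value, so a single parameter spans the features that dimensionless indices combine.

```python
import numpy as np

def lp_norm_feature(x, p):
    """Generalized norm (mean(|x|^p))^(1/p): p=2 gives the rms value,
    and increasing p weights impacts (peaks) more strongly."""
    x = np.asarray(x, dtype=float)
    return float(np.mean(np.abs(x) ** p) ** (1.0 / p))

rng = np.random.default_rng(3)
acc = rng.standard_normal(2000)          # acceleration sample
jerk = np.diff(acc)                      # higher-order derivative (first difference)
rms = lp_norm_feature(acc, 2)
peakish = lp_norm_feature(acc, 8)
jerk_rms = lp_norm_feature(jerk, 2)      # same feature on the derivative signal
# power-mean inequality: rms <= higher-order norm <= peak value
print(rms < peakish <= np.max(np.abs(acc)))
```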

Fault Detection and Diagnosis for Gas Turbines Based on a Kernelized Information Entropy Model

Gas turbines are among the most important devices in power engineering and are widely used in power generation, aircraft, naval ships, and oil drilling platforms. However, in most cases they are monitored without an operator on duty, so it is highly desirable to develop techniques and systems to remotely monitor their condition and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized to measure the uniformity of the exhaust temperatures, which reflects the overall state of the gas path of a gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps select informative features for a given recognition task. Finally, we introduce an information entropy based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms.
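The uniformity idea can be illustrated with plain Shannon entropy over a normalized exhaust-temperature profile (a sketch of the principle, not the paper's kernelized model; the temperature readings below are hypothetical): entropy is maximal when all thermocouples read the same, and drops when one gas path deviates.

```python
import numpy as np

def uniformity_entropy(temps):
    """Shannon entropy (nats) of the normalized temperature profile:
    maximal (log n) when all readings are equal, lower otherwise."""
    t = np.asarray(temps, dtype=float)
    p = t / t.sum()
    return float(-np.sum(p * np.log(p)))

healthy = [610, 612, 609, 611, 610, 612]   # hypothetical, nearly uniform readings
faulty  = [610, 612, 500, 611, 610, 612]   # one combustor path running cold
print(uniformity_entropy(healthy) > uniformity_entropy(faulty))
```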

Information theoretic fault detection

Proceedings of the 2005 American Control Conference, 2005

In this paper we propose a novel fault detection method based on a clustering algorithm developed in an information theoretic framework. A mathematical formulation for a multi-input multi-output (MIMO) system is developed to identify the most informative signals for fault detection, using mutual information (MI) as the measure of correlation among the various measurements on the system. This is a model-independent approach to fault detection. The effectiveness of the proposed method is demonstrated by employing the MI-based algorithm to isolate various faults in a 16-cylinder diesel engine in the form of distinct clusters.
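The MI measure at the core of the method can be estimated from data with a simple 2-D histogram (a minimal sketch, not the paper's estimator): strongly coupled channels score high, independent ones near zero.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information I(X;Y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                       # joint distribution
    px = pxy.sum(axis=1, keepdims=True)         # marginal of X
    py = pxy.sum(axis=0, keepdims=True)         # marginal of Y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(4)
x = rng.standard_normal(5000)
related = x + 0.1 * rng.standard_normal(5000)   # strongly coupled channel
unrelated = rng.standard_normal(5000)           # independent channel
print(mutual_information(x, related) > mutual_information(x, unrelated))
```

In practice the pairwise MI values form a similarity matrix over all measurement channels, which the clustering step then partitions.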

Monitoring gears by vibration measurements: Lempel-Ziv complexity and Approximate Entropy as diagnostic tools

MATEC Web of Conferences, 2015

Unexpected failures of industrial gearboxes may cause significant economic losses, so it is important to detect early fault symptoms. This paper introduces signal processing methods based on approximate entropy (ApEn) and Lempel-Ziv complexity (LZC) for defect detection in gears. Both are statistical measures of the regularity of a vibratory signal. Applied to gear signals, the parameter selection for the ApEn and LZC calculations is first investigated numerically, and appropriate parameters are suggested. An experimental study is then presented to investigate the effectiveness of these indicators. The results demonstrate that ApEn and LZC provide alternative features for signal processing. A new methodology combining kurtosis and LZC for early detection of faults is also presented; the results show that this proposed method may be used as an effective tool for early detection of gear faults.
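A minimal sketch of the LZC side (the classic LZ76 phrase-counting scheme, here with median binarization, a common preprocessing choice rather than necessarily the paper's): a regular signal reuses the same patterns and accumulates few phrases, while an irregular one keeps producing new phrases.

```python
import numpy as np

def lz_complexity(bits):
    """Lempel-Ziv (LZ76) complexity: the number of distinct phrases
    encountered while scanning the binary sequence left to right."""
    s = ''.join('1' if b else '0' for b in bits)
    i, c, n = 0, 0, len(s)
    while i < n:
        k = 1
        # extend the current phrase while it already occurs in the prefix
        while i + k <= n and s[i:i + k] in s[:i + k - 1]:
            k += 1
        c += 1
        i += k
    return c

def binarize(x):
    """Median binarization of a vibration signal before computing LZC."""
    x = np.asarray(x, dtype=float)
    return x > np.median(x)

regular = np.sin(0.2 * np.arange(1000))
noisy = np.random.default_rng(5).standard_normal(1000)
print(lz_complexity(binarize(regular)) < lz_complexity(binarize(noisy)))
```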