Differentiating features for the F distributions with different degrees of freedom through RBF network pruning with QLP

Using RBF reduced by QLP decomposition for probability density estimation

Proceedings of the 6th WSEAS International Conference on Applied Computer Science, Volume 6, 2007

This paper is intended as a simple example illustrating some capabilities of radial basis function (RBF) networks pruned with QLP decomposition. It examines the applicability of RBF-type artificial neural networks (ANNs) for re-estimating the box, triangle, Epanechnikov and normal kernel densities. We propose applying QLP decomposition to reduce the class of RBF neural models and thereby improve performance in density estimation. We find that this coupling leads to more precise extraction of the relevant information, even when applied heuristically. Specifically, the paper re-estimates these four densities by pruning a radial basis function network using the pivoted QLP decomposition. For comparison, each RBF-type function fitted to the same Gaussian mixture model as the sample data is superimposed on the plot. The tool can be used to identify a density estimate from empirical data among many candidate density estimators. The QLP method proves efficient at reducing the network size by pruning hidden nodes, resulting in a parsimonious model which identifies the multiquadric RBF type for re-estimating the box and normal kernel densities.
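The pruning strategy described in this abstract can be illustrated with a toy sketch (not the authors' implementation): a pivoted QR factorization, which is the first stage of Stewart's pivoted QLP decomposition, ranks candidate Gaussian hidden units by their column pivots, and the least informative units are dropped before refitting the output weights by least squares. The kernel bandwidth, RBF widths, and the choice of 10 retained nodes are all illustrative assumptions.

```python
import numpy as np
from scipy.linalg import qr, lstsq

rng = np.random.default_rng(0)

# Sample data from a normal density; we re-estimate its pdf with an RBF net.
x = rng.normal(size=400)
grid = np.linspace(-4, 4, 200)
true_pdf = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)

# Kernel density values on the grid serve as regression targets (bandwidth h).
h = 0.4
targets = np.exp(-(grid[:, None] - x[None, :])**2 / (2 * h**2)).mean(axis=1) \
          / (h * np.sqrt(2 * np.pi))

# Candidate Gaussian RBF centres spread over the data range.
centres = np.linspace(-4, 4, 40)
width = 0.6
Phi = np.exp(-(grid[:, None] - centres[None, :])**2 / (2 * width**2))

# Pivoted QR: the column pivot order ranks centres by contribution; a second
# QR pass on R^T would complete the full QLP factorization (Stewart, 1999).
_, R, piv = qr(Phi, mode='economic', pivoting=True)
keep = piv[:10]                          # prune to the 10 most informative nodes

w, *_ = lstsq(Phi[:, keep], targets)     # refit output weights on pruned basis
estimate = Phi[:, keep] @ w
```

With well-separated candidate centres the pruned 10-node network reproduces the kernel density estimate closely, which is the parsimony effect the abstract describes.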

A classification technique based on radial basis function neural networks

Advances in Engineering Software, 2006

In this paper, a new classification method is proposed based on the radial basis function (RBF) neural network architecture. The method is particularly useful for manufacturing processes, in cases where on-line sensors for classifying the product quality are not available. More specifically, the fuzzy means algorithm is employed on a set of training data, where the input data refer to variables that are measured on-line and the output data correspond to quality variables that are classified by human experts. The produced neural network model acts as an artificial sensor that is able to classify the product quality in real time. The proposed method is illustrated through an application to real data collected from a paper machine. The method produces successful results and outperforms a number of classifiers, which are based on the feedforward neural network (FNN) architecture.

Improving RBF Networks Classification Performance by Using K-Harmonic Means

2010

In this paper, a clustering algorithm named K-Harmonic Means (KHM) was employed in the training of Radial Basis Function Networks (RBFNs). KHM organizes the data into clusters and determines the centres of the basis functions. The popular clustering algorithms K-means (KM) and Fuzzy c-means (FCM) are highly dependent on the initial identification of elements that represent the clusters well; KHM avoids this problem, which leads to improved classification performance compared to the other clustering algorithms. A comparison of classification accuracy was performed between KM, FCM and KHM on the benchmark data sets Iris Plant, Diabetes and Breast Cancer. RBFN training with the KHM algorithm shows better accuracy on these classification problems.

Neural Networks for Classifying Probability Distributions

2021

Probability distribution fitting of an unknown stochastic process is an important preliminary step for any further analysis in science or engineering. However, it requires some background in statistics, prior consideration of the process or phenomenon under study and familiarity with several distributions. This paper therefore presents an alternative approach which requires neither prior knowledge of statistical methods nor prior assumptions about the available data. Instead, using deep learning, the best candidate distribution is extracted from the output of a neural network previously trained on a large suitable database to classify an array of observations into a matching distributional model. We find that our classifier performs this task comparably to maximum likelihood estimation combined with an Anderson-Darling goodness-of-fit test.
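The idea of classifying an array of observations into a distributional model can be sketched as follows (a toy stand-in, not the paper's architecture or database): each training example is one sorted, standardized sample, and a small multilayer network learns to tell the generating families apart by the shape of the order statistics. The three families and all hyperparameters are our assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_obs = 100   # observations per array

def sample(label, size):
    # 0: normal, 1: exponential, 2: uniform -- illustrative families only
    if label == 0:
        return rng.normal(size=size)
    if label == 1:
        return rng.exponential(size=size)
    return rng.uniform(-1, 1, size=size)

# Each training example: one sorted, standardized array of observations,
# so the classifier sees only the shape of the empirical distribution.
X, y = [], []
for label in (0, 1, 2):
    for _ in range(300):
        s = np.sort(sample(label, n_obs))
        X.append((s - s.mean()) / s.std())
        y.append(label)
X, y = np.array(X), np.array(y)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(Xtr, ytr)
acc = clf.score(Xte, yte)
```

Even this small network separates the three families reliably, since skewness and tail behaviour leave clear signatures in the sorted, standardized sample.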

An experimental study of the extended NRBF regression model and its enhancement for classification problem

Neurocomputing, 2008

As an extension of the traditional normalized radial basis function (NRBF) model, the extended normalized RBF (ENRBF) model was proposed by Xu [RBF nets, mixture experts, and Bayesian Ying-Yang learning, Neurocomputing 19 (1998) 223-257]. In this paper, we perform a supplementary study on ENRBF with several properly designed experiments and some further theoretical discussions. It is shown that ENRBF is able to efficiently improve the learning accuracies under some circumstances. Moreover, since the ENRBF model was initially proposed for regression and function approximation problems, a further step is taken in this work to modify the ENRBF model to deal with classification problems. Both the original ENRBF model and the newly proposed ENRBF classifier (ENRBFC) can be viewed as special cases of the mixture-of-experts (ME) model discussed in Xu et al. [An alternative model for mixtures of experts, in: Advances in Neural Information Processing Systems, MIT Press, Cambridge, MA, 1995]. Experimental results show the potential of ENRBFC compared to some other related classifiers.
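The NRBF-to-ENRBF extension can be sketched in one dimension (an illustrative least-squares fit, not the paper's learning procedure): NRBF attaches a constant output to each normalized hidden unit, while ENRBF replaces each constant with a local linear model a_j·x + b_j, which simply doubles the width of the design matrix. Centres, widths and the target function are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.05 * rng.normal(size=x.size)

centres = np.linspace(0, 1, 5)
width = 0.15
Phi = np.exp(-(x[:, None] - centres[None, :])**2 / (2 * width**2))
N = Phi / Phi.sum(axis=1, keepdims=True)   # normalized RBF activations

# NRBF: one constant output weight per hidden unit.
w_nrbf, *_ = np.linalg.lstsq(N, y, rcond=None)

# ENRBF: each unit carries a local linear model a_j*x + b_j, so the
# design matrix doubles in width: [b_j columns | a_j columns].
D = np.hstack([N, N * x[:, None]])
w_enrbf, *_ = np.linalg.lstsq(D, y, rcond=None)

err_nrbf = np.mean((N @ w_nrbf - y)**2)
err_enrbf = np.mean((D @ w_enrbf - y)**2)
```

Since the ENRBF basis contains the NRBF basis (set every a_j = 0), its least-squares training error can only be lower, which mirrors the accuracy improvements the paper reports under some circumstances.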

Introduction of the Radial Basis Function (RBF) Networks

2001

In this paper we provide a short overview of radial basis functions (RBFs), their properties, the motivations behind their use and some of their applications. RBFs have been employed for functional approximation in time-series modeling and in pattern classification. They have been shown to implement the Bayesian rule and to model any continuous input-output mapping. RBFs are embedded in a two-layer neural network topology. We present the physical and statistical significance of the elements composing the network. We introduce a few RBF training algorithms and we show how RBF networks can be used in real applications.
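The classic two-stage training mentioned across these overviews can be sketched briefly (a generic recipe, with all data and parameters assumed for illustration): place the hidden-layer centres with an unsupervised method such as k-means, then solve the linear output layer in one least-squares step.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (300, 1))
y = np.tanh(X[:, 0]) + 0.05 * rng.normal(size=300)

# Stage 1: unsupervised placement of the hidden-layer centres (k-means).
centres, _ = kmeans2(X, 8, seed=0, minit='++')
width = 1.0

# Stage 2: hidden activations are fixed Gaussians, so the two-layer
# network's output weights come from a single least-squares solve.
Phi = np.exp(-((X - centres.T)**2) / (2 * width**2))
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

mse = np.mean((Phi @ w - y)**2)
```

Because only the output layer is trained supervised, this scheme avoids backpropagation entirely, which is one of the motivations the overview gives for RBF networks.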

Enhancing performance of MLP/RBF neural classifiers via a multivariate data distribution scheme

In this study, the performance of two neural classifiers, namely the Multi-Layer Perceptron (MLP) and the Radial Basis Function (RBF) network, is compared on a multivariate classification problem. MLP and RBF are two of the most widely used neural network architectures in the literature for classification and have been successfully employed in a variety of applications. A nonlinear scaling scheme for multivariate data is proposed prior to the training process in order to improve the performance of both neural classifiers. The proposed scheme transforms Gaussian multivariate data into uniformly distributed multivariate data. It is shown that the proposed scaling scheme increases the performance of both neural classifiers.
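The abstract does not specify the scaling scheme, but one standard way to map Gaussian features to uniform ones is the probability integral transform: push each feature through its own fitted normal CDF. The sketch below illustrates that reading (an assumption on our part, not the paper's exact method).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
X = rng.normal(loc=2.0, scale=3.0, size=(1000, 4))   # Gaussian multivariate data

# Probability integral transform: each feature through its own fitted
# normal CDF yields (approximately) uniform features on [0, 1].
U = stats.norm.cdf((X - X.mean(axis=0)) / X.std(axis=0))

# A Kolmogorov-Smirnov test against U(0,1) should not reject uniformity.
pvals = [stats.kstest(U[:, j], 'uniform').pvalue for j in range(4)]
```

Bounded, uniformly spread inputs of this kind are often easier for both MLP and RBF classifiers to handle than raw unbounded Gaussian features.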

A probabilistic RBF network for classification

IJCNN, 2000

We present a probabilistic neural network model which is suitable for classification problems. This model constitutes an adaptation of the classical RBF network in which the outputs represent the class-conditional distributions. Since the network outputs correspond to probability density functions, the training process is treated as a maximum likelihood problem and an Expectation-Maximization (EM) algorithm is proposed for adjusting the network parameters. Experimental results show that the proposed architecture exhibits superior classification performance compared to the classical RBF network.
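The class-conditional-density view can be sketched with one EM-fitted Gaussian mixture per class, with scikit-learn's GaussianMixture standing in for the paper's PRBF training; the data and all parameters here are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two classes, each a blend of two blobs (so one Gaussian per class won't do).
X0 = np.vstack([rng.normal(0, .5, (100, 2)), rng.normal([3, 0], .5, (100, 2))])
X1 = np.vstack([rng.normal([0, 3], .5, (100, 2)), rng.normal([3, 3], .5, (100, 2))])

# One mixture of Gaussian "hidden units" per class; EM adjusts means,
# covariances and mixing weights, so each output models p(x | class).
g0 = GaussianMixture(2, random_state=0).fit(X0)
g1 = GaussianMixture(2, random_state=0).fit(X1)

def classify(X):
    # Equal class priors assumed: pick the class with the larger log-density.
    return (g1.score_samples(X) > g0.score_samples(X)).astype(int)

acc = np.mean(np.r_[classify(X0) == 0, classify(X1) == 1])
```

Because each class density is modeled explicitly, the decision rule falls out of Bayes' rule rather than from a discriminatively trained output layer.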

Radial Basis Function (RBF) Neural Network Classification based on Consistency Evaluation Measure

International Journal of Computer Applications, 2012

Many researchers have applied artificial neural networks to clinical diagnosis, image analysis, signal analysis, interpretation and various classification problems. Among artificial neural networks, the RBF neural network has a single hidden layer and is used to classify complex problems, whereas an MLP may have one or more hidden layers. Many feature selection methods have become important preprocessing steps to improve training performance and accuracy before classification. Consistency-based feature selection is an important category of feature selection research. This paper presents RBF neural network classification based on a consistency measure for medical datasets. Medical datasets contain irrelevant features, and removing unnecessary features makes the RBF network easier to train. The proposed RBF classifier with consistency-based feature selection therefore achieves higher accuracy, better network performance and lower time complexity.
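As a rough illustration of the consistency idea (not necessarily the paper's exact measure), the sketch below discretizes features and scores a feature subset by its inconsistency rate: the fraction of samples disagreeing with the majority label of their discretized pattern. An informative feature scores much lower than pure noise, which is the signal used to discard irrelevant features before RBF training. All names, data and parameters are hypothetical.

```python
import numpy as np

def inconsistency_rate(X, y, subset, bins=5):
    """Fraction of samples mislabeled if every discretized pattern over
    `subset` voted for its majority class (lower = more consistent)."""
    Xs = X[:, subset]
    Xd = np.floor(np.clip((Xs - Xs.min(axis=0)) / np.ptp(Xs, axis=0) * bins,
                          0, bins - 1))
    patterns = {}
    for row, label in zip(map(tuple, Xd), y):
        patterns.setdefault(row, []).append(label)
    bad = sum(len(ls) - max(ls.count(c) for c in set(ls))
              for ls in patterns.values())
    return bad / len(y)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 300)
informative = y + 0.2 * rng.normal(size=300)   # tracks the class label
noise = rng.normal(size=300)                   # irrelevant feature
X = np.column_stack([informative, noise])
```

Ranking features by this rate and dropping the worst ones yields the smaller, easier-to-train RBF input the abstract describes.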