Michael Manry | University of Texas at Arlington

Papers by Michael Manry

An optimal construction and training of second order RBF network for approximation and illumination invariant image segmentation

The 2011 International Joint Conference on Neural Networks, 2011

Fuzzy C-means clustering based construction and training for second order RBF network

2011 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2011), 2011

A modified hidden weight optimization algorithm for feedforward neural networks

Conference Record of the Thirty-Sixth Asilomar Conference on Signals, Systems and Computers, 2002

Sizing of the multilayer perceptron via modular networks

Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (Cat. No.98TH8468)

Near-optimal flight load synthesis using neural nets

Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (Cat. No.98TH8468)

Invariance of MLP Training to Input Feature De-correlation

In the neural network literature, input feature de-correlation is often referred to as a pre-processing technique used to improve MLP training speed. However, in this paper we find that de-correlation by the orthogonal Karhunen-Loève transform (KLT) may not improve training. Through detailed analysis, the effect of input de-correlation is shown to be equivalent to initializing the network with a different weight set. Thus, for a robust training algorithm, the benefit of input de-correlation is negligible. The theoretical results apply to several gradient training algorithms, e.g., back-propagation and conjugate gradient. Simulation results confirm the theoretical analysis.
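
The claimed equivalence is easy to check numerically. Below is a minimal sketch (NumPy, single linear hidden layer for illustration; all names are hypothetical): de-correlating the inputs with the KLT and absorbing the transform into the first-layer weights leaves the hidden net functions unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 8))  # correlated inputs

# Karhunen-Loeve transform: eigenvectors of the input covariance matrix
C = np.cov(X, rowvar=False)
_, K = np.linalg.eigh(C)      # columns of K are orthonormal eigenvectors
X_dec = X @ K                 # de-correlated features

# Hypothetical hidden weights trained on the de-correlated inputs ...
W_dec = rng.normal(size=(8, 16))

# ... produce the same net functions as the weights K @ W_dec applied to
# the raw inputs, so de-correlation amounts to a different weight set.
W_raw = K @ W_dec
assert np.allclose(X_dec @ W_dec, X @ W_raw)
```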

Prototype Based Classifier Design with Pruning

International Journal of Artificial Intelligence Tools

An algorithm is proposed to prune the prototype vectors (prototype selection) used in a nearest neighbor classifier, so that a compact classifier can be obtained with similar or even better performance. The pruning procedure is error based: a prototype is pruned if its deletion leads to the smallest classification error increase. Each pruning iteration is followed by one epoch of Learning Vector Quantization (LVQ) training. Simulation results show that the selected prototypes can approach optimal or near-optimal locations based on the training data distribution.
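
As a rough illustration of the procedure described above (a sketch under assumptions, not the paper's exact implementation; helper names are hypothetical), the code below removes the prototype whose deletion raises the 1-NN error least, then runs one LVQ1 epoch on the survivors.

```python
import numpy as np

def nn_error(P, L, X, y):
    """Error rate of a 1-nearest-neighbor rule with prototypes P, labels L."""
    d = ((X[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    return float(np.mean(L[d.argmin(axis=1)] != y))

def prune_prototypes(P, L, X, y, n_keep, lr=0.05):
    P, L = P.copy(), L.copy()
    while len(P) > n_keep:
        # Error-based pruning: drop the prototype whose deletion
        # leads to the smallest classification error increase.
        errs = [nn_error(np.delete(P, i, 0), np.delete(L, i), X, y)
                for i in range(len(P))]
        i = int(np.argmin(errs))
        P, L = np.delete(P, i, 0), np.delete(L, i)
        # One epoch of LVQ1: attract the winning prototype toward
        # same-class samples, repel it from other-class samples.
        for xi, yi in zip(X, y):
            w = int(((P - xi) ** 2).sum(axis=1).argmin())
            step = lr * (xi - P[w])
            P[w] += step if L[w] == yi else -step
    return P, L
```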

Fast Generation of a Sequence of Trained and Validated Feed-Forward Networks

In this paper, three approaches are presented for generating and validating sequences of neural nets of different sizes. First, a growing method is given, along with several weight initialization methods and their properties. Then a one-pass pruning method is presented that utilizes orthogonal least squares. Based upon this pruning approach, a one-pass validation method is discussed. Finally, a training method that combines growing and pruning is described. Several examples show that the combined approach is superior to growing or pruning alone.
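
The one-pass validation idea can be sketched as follows (a speculative reconstruction, not necessarily the paper's exact method; the function name is hypothetical): once the hidden units are ordered by usefulness, a single thin QR factorization yields the least-squares output weights, and hence the validation error, for every network size.

```python
import numpy as np

def one_pass_validation(O_tr, T_tr, O_va, T_va):
    """Validation MSE for every pruned network size from one QR pass.

    O_tr, O_va: hidden-unit output matrices (units already ordered by
    usefulness); T_tr, T_va: training and validation target matrices.
    """
    Q, R = np.linalg.qr(O_tr)   # orthonormal basis of the ordered units
    C = Q.T @ T_tr              # target coefficients in that basis
    errs = []
    for k in range(1, O_tr.shape[1] + 1):
        # Back-substitute to recover output weights of the k-unit net.
        W_k = np.linalg.solve(R[:k, :k], C[:k])
        E = T_va - O_va[:, :k] @ W_k
        errs.append(float((E ** 2).mean()))
    return errs
```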

Optimal pruning of feedforward neural networks based upon the Schmidt procedure

Asilomar Conference on Signals, Systems & Computers, 2002

A common way of designing feedforward networks is to obtain a large network and then to prune less useful hidden units. Here, two non-heuristic pruning algorithms are derived from the Schmidt procedure. In both, orthonormal systems of basis functions are found, ordered, pruned, and mapped back to the original network. In the first algorithm, the orthonormal basis functions are found …
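
A minimal sketch of the ordering step, assuming NumPy and a greedy classical Gram-Schmidt pass (the function name is hypothetical): at each stage the remaining hidden-unit outputs are orthonormalized against the chosen basis, and the unit capturing the most target energy is kept.

```python
import numpy as np

def schmidt_order(O, T, n_keep):
    """Greedily order hidden units by orthogonal least squares.

    O: (N, n_h) matrix of hidden-unit outputs; T: (N, n_out) targets.
    Returns the indices of the n_keep most useful units, in order.
    """
    remaining = list(range(O.shape[1]))
    chosen, basis = [], []
    for _ in range(n_keep):
        best_j, best_gain, best_q = None, -1.0, None
        for j in remaining:
            q = O[:, j].astype(float).copy()
            for b in basis:                  # orthogonalize against basis
                q -= (b @ q) * b
            norm = np.linalg.norm(q)
            if norm < 1e-12:                 # unit is linearly dependent
                continue
            q /= norm
            gain = float(((q @ T) ** 2).sum())  # target energy captured
            if gain > best_gain:
                best_j, best_gain, best_q = j, gain, q
        if best_j is None:
            break
        chosen.append(best_j)
        remaining.remove(best_j)
        basis.append(best_q)
    return chosen
```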

Sleep disordered breathing detection using heart rate variability and R-peak envelope spectrogram

2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2009

Iterative Improvement of trigonometric networks

IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)

Neural decision directed segmentation of silicon defects

The 2013 International Joint Conference on Neural Networks (IJCNN), 2013

Training multilayer perceptron by using optimal input normalization

2011 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2011), 2011

Enhanced robustness of multilayer perceptron training

Conference Record of the Thirty-Sixth Asilomar Conference on Signals, Systems and Computers, 2002

A Method to Detect Obstructive Sleep Apnea Using Neural Network Classification of Time-Frequency Plots of the Heart Rate Variability

2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2007

Cross correlation and scatter plots of the heart rate variability and R-peak Envelope as features in the detection of obstructive sleep apnea

2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2008

Hidden Layer Training via Hessian Matrix Information

The Florida AI Research Society Conference, 2004

The output weight optimization-hidden weight optimization (OWO-HWO) algorithm for training the multilayer perceptron alternately updates the output weights and the hidden weights. This layer-by-layer training strategy greatly improves convergence speed. However, in HWO, the desired net function actually evolves in the gradient direction, which inevitably reduces efficiency. In this paper, two improvements to the OWO-HWO algorithm are presented. New desired …
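
For orientation, here is a hedged NumPy sketch of one OWO-HWO epoch (the paper's exact update rules differ in detail; the function name is hypothetical). OWO solves the output weights exactly by linear least squares; HWO then fits a hidden-weight change to a desired change in each hidden unit's net function.

```python
import numpy as np

def owo_hwo_epoch(X, T, W_h, lr=0.1):
    """One epoch of alternating OWO-HWO training (illustrative sketch).

    X: (N, n_in) inputs including a bias column; T: (N, n_out) targets;
    W_h: (n_in, n_h) hidden weights. Output units see X and the hidden
    activations, as in an MLP with bypass weights.
    """
    O = np.tanh(X @ W_h)                 # hidden activations
    A = np.hstack([X, O])

    # OWO: output weights are the exact linear least-squares solution.
    W_o = np.linalg.lstsq(A, T, rcond=None)[0]

    # HWO: choose a desired change in each hidden net function (here the
    # negative error gradient with respect to the net function) and fit
    # hidden weights to it by least squares, rather than stepping in
    # raw weight space.
    E = T - A @ W_o
    delta_net = (E @ W_o[X.shape[1]:].T) * (1.0 - O ** 2)
    W_h = W_h + lr * np.linalg.lstsq(X, delta_net, rcond=None)[0]
    return W_h, W_o
```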

Iterative improvement of a nearest neighbor classifier

An efficient hidden layer training method for the multilayer perceptron

Upper bound on pattern storage in feedforward networks
