Michael Manry | University of Texas at Arlington
Papers by Michael Manry
The 2011 International Joint Conference on Neural Networks, 2011
2011 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2011), 2011
Conference Record of the Thirty-Sixth Asilomar Conference on Signals, Systems and Computers, 2002.
Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (Cat. No.98TH8468)
Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (Cat. No.98TH8468)
In the neural network literature, input feature de-correlation is often referred to as a pre-processing technique used to improve MLP training speed. However, in this paper we find that de-correlation by the orthogonal Karhunen-Loeve transform (KLT) may not help training. Through detailed analyses, the effect of input de-correlation is shown to be equivalent to initializing the network with a different weight set. Thus, for a robust training algorithm, the benefit of input de-correlation is negligible. The theoretical results apply to several gradient training algorithms, e.g., back-propagation and conjugate gradient. Simulation results confirm our theoretical analyses.
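As a rough illustration of the equivalence claimed in this abstract, the sketch below de-correlates inputs with the KLT (eigenvectors of the input covariance) and checks that the rotated inputs with hidden weights W produce the same net functions as the original inputs with re-parameterized weights V @ W. All variable names are illustrative, not taken from the paper.

```python
# Minimal sketch, assuming a linear first-layer net function and NumPy.
import numpy as np

def klt_decorrelate(X):
    """Return KLT-rotated inputs and the orthogonal rotation matrix."""
    Xc = X - X.mean(axis=0)            # zero-mean the features
    C = np.cov(Xc, rowvar=False)       # input covariance matrix
    _, V = np.linalg.eigh(C)           # orthogonal eigenvectors (KLT basis)
    return Xc @ V, V

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Z, V = klt_decorrelate(X)
W = rng.normal(size=(5, 8))            # hypothetical hidden-layer weights

# KLT de-correlation amounts to a different weight initialization:
net_rotated = Z @ W                                   # de-correlated inputs, weights W
net_original = (X - X.mean(axis=0)) @ (V @ W)         # original inputs, weights V @ W
assert np.allclose(net_rotated, net_original)
```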
International Journal of Artificial Intelligence Tools
An algorithm is proposed to prune the prototype vectors (prototype selection) used in a nearest neighbor classifier, so that a compact classifier can be obtained with similar or even better performance. The pruning procedure is error based: a prototype is pruned if its deletion leads to the smallest increase in classification error. Each pruning iteration is also followed by one epoch of Learning Vector Quantization (LVQ) training. Simulation results show that the selected prototypes can approach optimal or near-optimal locations based on the training data distribution.
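The sketch below is one way to read that loop: repeatedly drop the prototype whose removal increases 1-NN training error the least, then run one LVQ1 epoch. The LVQ1 update rule, learning rate, and function names are assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch of error-based prototype pruning with LVQ1 refinement.
import numpy as np

def nn_error(protos, labels, X, y):
    """1-NN classification error of prototypes (protos, labels) on (X, y)."""
    d = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(axis=2)
    return np.mean(labels[d.argmin(axis=1)] != y)

def prune_prototypes(protos, labels, X, y, n_keep, lr=0.05):
    protos, labels = protos.copy(), labels.copy()
    while len(protos) > n_keep:
        # Delete the prototype whose removal hurts training accuracy the least.
        errs = [nn_error(np.delete(protos, i, 0), np.delete(labels, i), X, y)
                for i in range(len(protos))]
        k = int(np.argmin(errs))
        protos, labels = np.delete(protos, k, 0), np.delete(labels, k)
        # One epoch of LVQ1: pull the winning prototype toward same-class
        # samples, push it away from different-class samples.
        for xi, yi in zip(X, y):
            w = ((protos - xi) ** 2).sum(axis=1).argmin()
            sign = 1.0 if labels[w] == yi else -1.0
            protos[w] += sign * lr * (xi - protos[w])
    return protos, labels
```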
In this paper, three approaches are presented for generating and validating sequences of different-size neural nets. First, a growing method is given, along with several weight initialization methods and their properties. Then a one-pass pruning method that utilizes orthogonal least squares is presented. Based upon this pruning approach, a one-pass validation method is discussed. Finally, a training method that combines growing and pruning is described. In several examples, the combined approach is shown to be superior to growing or pruning alone.
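A minimal growing sketch in the spirit of the first approach: add hidden units one at a time, re-solve the output weights by linear least squares after each addition, and keep the network size with the best validation error. The sigmoid hidden layer, random initialization, and validation criterion are assumptions for illustration only.

```python
# Sketch of a grow-and-validate loop for a single-hidden-layer net.
import numpy as np

def grow_network(Xtr, ytr, Xval, yval, max_hidden=20, seed=0):
    rng = np.random.default_rng(seed)
    n_in = Xtr.shape[1]
    W = np.empty((0, n_in))                    # hidden weights, one row per unit
    best = (np.inf, None)
    for _ in range(max_hidden):
        W = np.vstack([W, rng.normal(scale=0.5, size=(1, n_in))])   # grow by one unit
        Htr = 1.0 / (1.0 + np.exp(-Xtr @ W.T))                       # hidden activations
        wo, *_ = np.linalg.lstsq(Htr, ytr, rcond=None)               # output weights by least squares
        Hval = 1.0 / (1.0 + np.exp(-Xval @ W.T))
        val_err = np.mean((Hval @ wo - yval) ** 2)
        if val_err < best[0]:
            best = (val_err, (W.copy(), wo.copy()))
    return best[1]                             # (hidden weights, output weights) of best size
```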
Asilomar Conference on Signals, Systems & Computers, 2002
A common way of designing feedforward networks is to obtain a large network and then to prune its less useful hidden units. Here, two non-heuristic pruning algorithms are derived from the Schmidt procedure. In both, orthonormal systems of basis functions are found, ordered, pruned, and mapped back to the original network. In the first algorithm, the orthonormal basis functions are found …
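One way to picture the Schmidt-procedure ordering is a Gram-Schmidt pass over the hidden-unit output vectors: at each step, pick the unit whose orthogonal component explains the most remaining target energy, so the lowest-ranked units become pruning candidates. The sketch below is an illustration of that idea, not the paper's exact algorithm.

```python
# Hypothetical Gram-Schmidt (orthogonal least squares) ranking of hidden units.
import numpy as np

def rank_hidden_units(H, t):
    """H: (n_samples, n_units) hidden outputs; t: (n_samples,) target."""
    n_units = H.shape[1]
    remaining = list(range(n_units))
    basis, order = [], []
    r = t.astype(float)                           # residual target
    for _ in range(n_units):
        best_gain, best_j, best_q = -1.0, None, None
        for j in remaining:
            q = H[:, j].astype(float)
            for b in basis:                       # remove components along chosen basis
                q = q - (b @ q) * b
            norm = np.linalg.norm(q)
            if norm < 1e-12:
                continue
            q = q / norm
            gain = (q @ r) ** 2                   # target energy this unit explains
            if gain > best_gain:
                best_gain, best_j, best_q = gain, j, q
        if best_j is None:
            break
        order.append(best_j)
        basis.append(best_q)
        r = r - (best_q @ r) * best_q
        remaining.remove(best_j)
    return order + remaining                      # most useful units first
```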
2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2009
IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)
The 2013 International Joint Conference on Neural Networks (IJCNN), 2013
2011 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2011), 2011
Conference Record of the Thirty-Sixth Asilomar Conference on Signals, Systems and Computers, 2002.
2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2007
2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2008
The Florida AI Research Society Conference, 2004
The output weight optimization-hidden weight optimization (OWO-HWO) algorithm for training the multilayer perceptron alternately updates the output weights and the hidden weights. This layer-by-layer training strategy greatly improves convergence speed. However, in HWO the desired net function actually evolves in the gradient direction, which inevitably reduces efficiency. In this paper, two improvements to the OWO-HWO algorithm are presented. New desired …
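A rough sketch of the alternation this abstract describes: OWO solves the output weights by linear least squares given fixed hidden activations, and HWO then fits a hidden-weight update to desired net-function changes built from the output error (i.e., the gradient direction the abstract refers to). The sigmoid activation, learning rate, and single-output form are assumptions.

```python
# Minimal one-epoch sketch of an OWO-HWO style alternation, not the paper's exact algorithm.
import numpy as np

def owo_hwo_epoch(X, y, W_h, lr=0.01):
    """X: (N, n_in) inputs, y: (N,) targets, W_h: (n_hid, n_in) hidden weights."""
    net = X @ W_h.T                        # hidden-layer net functions
    O = 1.0 / (1.0 + np.exp(-net))         # hidden activations
    # OWO step: optimal output weights for the current hidden layer.
    w_o, *_ = np.linalg.lstsq(O, y, rcond=None)
    # HWO step: desired net-function changes from the output error (gradient
    # direction), then a least-squares fit of the hidden-weight update to them.
    err = y - O @ w_o                                 # output error per sample
    delta_net = np.outer(err, w_o) * O * (1.0 - O)    # desired change in each net function
    dW, *_ = np.linalg.lstsq(X, delta_net, rcond=None)
    return W_h + lr * dW.T, w_o
```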