Michael Manry - Profile on Academia.edu
Papers by Michael Manry
The 2011 International Joint Conference on Neural Networks, 2011
In this paper, we propose a hybrid optimal radial-basis function (RBF) neural network for approximation and illumination-invariant image segmentation. Unlike other RBF learning algorithms, the proposed paradigm introduces a new way to train RBF models, using optimal learning factors (OLFs) to train the network parameters, i.e., the spread parameter, the kernel vector, and a weighted distance measure (DM) factor used to calculate the activation function. An efficient second-order Newton's algorithm is proposed for obtaining multiple OLFs (MOLF) for the network parameters. The weights connected to the output layer are trained by a supervised learning algorithm based on orthogonal least squares (OLS). The error obtained is then back-propagated to tune the RBF parameters. Applying the RBF network to approximation on several real-life datasets and to classification for reducing illumination effects in image segmentation, the results show that the proposed RBF neural network combines fast convergence with low computational cost, making it a good choice for real-life applications such as image segmentation.
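As a rough illustration of the weighted distance measure and output-layer solve described above, here is a minimal sketch (all names hypothetical; plain least squares stands in for the paper's OLS procedure, and the Newton/MOLF training is omitted):

```python
import numpy as np

def rbf_activations(X, centers, beta, dm_weights):
    """Gaussian RBF activations with a weighted distance measure.

    X          : (N, D) input patterns
    centers    : (K, D) kernel (center) vectors
    beta       : scalar spread parameter
    dm_weights : (D,) per-dimension distance weights
    """
    diff = X[:, None, :] - centers[None, :, :]     # (N, K, D)
    d2 = np.sum(dm_weights * diff**2, axis=2)      # weighted squared distances
    return np.exp(-beta * d2)                      # (N, K)

def solve_output_weights(Phi, T):
    """Least-squares solve for output weights (stand-in for OLS)."""
    Phi1 = np.hstack([Phi, np.ones((Phi.shape[0], 1))])  # bias column
    W, *_ = np.linalg.lstsq(Phi1, T, rcond=None)
    return W
```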
2011 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2011), 2011
The paper presents a novel two-step approach for constructing and training an optimally weighted Euclidean-distance-based Radial-Basis Function (RBF) neural network. Unlike other RBF learning algorithms, the proposed paradigms use fuzzy C-means for initial clustering and optimal learning factors to train the network parameters (i.e., the spread parameter and the mean vector). We also introduce an optimized weighted distance measure (DM) to calculate the activation function. Newton's algorithm is proposed for obtaining multiple optimal learning factors for the network parameters (including the weighted DM). Simulation results show that, regardless of the input data dimension, the proposed algorithms are a significant improvement in convergence speed, network size, and generalization over conventional RBF models that use a single optimal learning factor. The generalization ability of the proposed algorithm is further substantiated using k-fold validation.
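A minimal fuzzy C-means sketch, assuming the standard FCM update equations rather than any paper-specific variant, shows how the initial clustering step could produce RBF centers:

```python
import numpy as np

def fuzzy_c_means(X, K, m=2.0, iters=100, seed=0):
    """Minimal fuzzy C-means for initializing RBF centers.

    X : (N, D) data; K : number of clusters; m : fuzzifier (> 1).
    Returns (K, D) cluster centers. A sketch, not the paper's code.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), K))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        Um = U**m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        U = d**(-p) / np.sum(d**(-p), axis=1, keepdims=True)  # standard FCM update
    return centers
```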
Conference Record of the Thirty-Sixth Asilomar Conference on Signals, Systems and Computers, 2002.
The OWO-HWO feedforward network training algorithm alternately solves linear equations for the output weights and reduces a separate hidden-layer error function with respect to the hidden-layer weights. Here, a new hidden-layer error function is proposed that de-emphasizes net-function errors corresponding to saturated activation values. In addition, an adaptive learning rate based on the local shape of the error surface is used in hidden-layer training. Faster learning convergence is verified experimentally.
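To make the saturation idea concrete, here is a hedged sketch (not the paper's exact error function) of a hidden-layer error weighted by the sigmoid derivative, so saturated units contribute little:

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

def weighted_net_error(net, net_desired):
    """Hidden-layer net-function error that de-emphasizes saturated units.

    net         : (N, Nh) hidden-unit net functions
    net_desired : (N, Nh) desired net functions
    Weighting the squared net error by f'(net) = f(net)(1 - f(net)) makes
    errors at saturated net values (where f' ~ 0) contribute little to
    the objective. A sketch of the idea only.
    """
    f = sigmoid(net)
    fprime = f * (1.0 - f)
    return np.sum(fprime * (net_desired - net)**2)
```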
Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (Cat. No.98TH8468)
A fast method for sizing the multilayer perceptron is proposed. The principal assumption is that a modular network with the same theoretical pattern storage as the multilayer perceptron has the same training error. This assumption is analyzed for the case of random patterns. The validity of the approach is demonstrated on several benchmark datasets.
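One widely used pattern-storage estimate divides the number of free network parameters by the number of outputs; the formula below (including the input-to-output bypass weights) is an assumption for illustration, not necessarily the paper's exact expression:

```python
def mlp_pattern_storage(N, Nh, M):
    """Estimated pattern storage of a single-hidden-layer MLP.

    N : inputs, Nh : hidden units, M : outputs.
    Counts total free weights, assuming biases and input-to-output
    bypass connections, and divides by the number of outputs.
    """
    Nw = (N + 1) * Nh + (Nh + N + 1) * M   # hidden weights + output weights
    return Nw / M
```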
Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (Cat. No.98TH8468)
This paper describes the use of neural networks for near-optimal helicopter flight load synthesis (FLS), the process of estimating mechanical loads during helicopter flight from cockpit measurements. First, modular neural networks are used to develop statistical signal models of the cockpit measurements as a function of the loads. Cramer-Rao maximum a posteriori bounds on the mean-squared error are then calculated. Finally, multilayer perceptrons for FLS are designed that approximately attain the bounds. All of the FLS networks are shown to generalize well.
In the neural network literature, input feature de-correlation is often cited as a pre-processing technique for improving MLP training speed. In this paper, however, we find that de-correlation by the orthogonal Karhunen-Loeve transform (KLT) may not help training. Through detailed analyses, the effect of input de-correlation is shown to be equivalent to initializing the network with a different weight set. Thus, for a robust training algorithm, the benefit of input de-correlation is negligible. The theoretical results apply to several gradient training algorithms, e.g., back-propagation and conjugate gradient. Simulation results confirm the theoretical analyses.
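The KLT decorrelation under discussion is the standard projection onto the eigenvectors of the input covariance matrix; a minimal sketch:

```python
import numpy as np

def klt_decorrelate(X):
    """De-correlate input features with the Karhunen-Loeve transform.

    Projects zero-mean data onto the eigenvectors of its covariance
    matrix, so the transformed features are uncorrelated. A standard
    construction, shown for illustration.
    """
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(C)    # orthogonal eigenbasis
    return Xc @ eigvecs                     # uncorrelated features
```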
International Journal of Artificial Intelligence Tools
An algorithm is proposed to prune the prototype vectors (prototype selection) used in a nearest neighbor classifier, so that a compact classifier can be obtained with similar or even better performance. The pruning procedure is error-based: a prototype is pruned if its deletion leads to the smallest increase in classification error. Each pruning iteration is followed by one epoch of Learning Vector Quantization (LVQ) training. Simulation results show that the selected prototypes can approach optimal or near-optimal locations with respect to the training data distribution.
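A hedged sketch of the error-based step (hypothetical names; the LVQ epoch that follows each pruning iteration in the paper is omitted):

```python
import numpy as np

def nnc_error(prototypes, proto_labels, X, y):
    """Error rate of a 1-nearest-neighbor classifier."""
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    return np.mean(proto_labels[d.argmin(axis=1)] != y)

def prune_one(prototypes, proto_labels, X, y):
    """Remove the prototype whose deletion increases training error least."""
    best_err, best_k = np.inf, None
    for k in range(len(prototypes)):
        keep = np.arange(len(prototypes)) != k
        err = nnc_error(prototypes[keep], proto_labels[keep], X, y)
        if err < best_err:
            best_err, best_k = err, k
    keep = np.arange(len(prototypes)) != best_k
    return prototypes[keep], proto_labels[keep]
```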
In this paper, three approaches are presented for generating and validating sequences of different-size neural nets. First, a growing method is given, along with several weight initialization methods and their properties. Then a one-pass pruning method is presented that utilizes orthogonal least squares. Based on this pruning approach, a one-pass validation method is discussed. Finally, a training method that combines growing and pruning is described. Several examples show that the combined approach is superior to growing or pruning alone.
Asilomar Conference on Signals, Systems & Computers, 2002
A common way of designing feedforward networks is to obtain a large network and then prune the less useful hidden units. Here, two non-heuristic pruning algorithms are derived from the Schmidt procedure. In both, orthonormal systems of basis functions are found, ordered, pruned, and mapped back to the original network. In the first algorithm, the orthonormal basis functions are found
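In the spirit of the Schmidt procedure (a sketch under assumptions, not the paper's algorithm), hidden units can be greedily ordered by how much target energy their orthonormal components explain, with late-ordered units being pruning candidates:

```python
import numpy as np

def order_hidden_units(H, t):
    """Greedy ordering of hidden units by error reduction.

    H : (N, Nh) hidden-unit outputs; t : (N,) target vector.
    Each candidate column is orthogonalized against the already chosen
    basis (a modified Gram-Schmidt step), and the unit whose orthonormal
    component explains the most target energy is picked next.
    """
    order, Q = [], []
    for _ in range(H.shape[1]):
        best_gain, best_j, best_q = -np.inf, None, None
        for j in range(H.shape[1]):
            if j in order:
                continue
            q = H[:, j].astype(float)
            for u in Q:                 # orthogonalize against chosen basis
                q -= (u @ q) * u
            norm = np.linalg.norm(q)
            if norm < 1e-10:            # linearly dependent unit
                continue
            q /= norm
            gain = (q @ t)**2           # target energy explained
            if gain > best_gain:
                best_gain, best_j, best_q = gain, j, q
        if best_j is None:
            break
        order.append(best_j)
        Q.append(best_q)
    return order                        # prune units late in the order
```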
2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2009
We report that combining the interbeat heart rate, as measured by the RR interval (RR), with the R-peak envelope (RPE) derived from the R-peaks of the ECG waveform may significantly improve the detection of sleep disordered breathing (SDB) from single-lead ECG recordings. The method uses textural features extracted from normalized gray-level co-occurrence matrices of the time-frequency plots of the HRV or RPE sequences. An optimal subset of textural features is selected for classifying the records. A multilayer perceptron (MLP) serves as the classifier. To evaluate the performance of the proposed method, single-lead ECG recordings from 7 normal subjects and 7 obstructive sleep apnea patients were used. Over 500 randomized Monte Carlo simulations, the average training sensitivity, specificity, and accuracy were 100.0%, 99.9%, and 99.9%, respectively. The mean testing sensitivity, specificity, and accuracy were 99.0%, 96.7%, and 97.8%, respectively.
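Gray-level co-occurrence texture features of this kind can be computed with scikit-image; the distances, angles, and properties below are illustrative choices, not necessarily those used in the paper:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

def glcm_texture_features(image, levels=32):
    """Texture features from a normalized gray-level co-occurrence matrix.

    image : 2-D array, e.g. a time-frequency plot, quantized below
    to `levels` gray levels before the co-occurrence counts.
    """
    img = (image.astype(float) / image.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```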
IJCNN'99. International Joint Conference on Neural Networks. Proceedings (Cat. No.99CH36339)
The trigonometric network, introduced in this paper, is a multilayer feedforward neural network with sinusoidal activation functions. Unlike the N-dimensional Fourier series, the basis functions of the proposed trigonometric network have no strict harmonic relationship. An effective training algorithm for the network is developed. The trigonometric network is shown to perform better than the sigmoidal neural network on some data sets. A pruning method based on the modified Gram-Schmidt orthogonalization procedure is presented to detect and prune useless hidden units. Other network architectures related to the trigonometric network, such as the sine network, are shown to be inferior to the proposed network.
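A minimal forward-pass sketch of the architecture (hypothetical names; the paper's training algorithm is not reproduced here):

```python
import numpy as np

def trig_net_forward(X, W_h, W_o):
    """Forward pass of a feedforward net with sinusoidal hidden units.

    X   : (N, D) inputs (a bias column is appended below)
    W_h : (D+1, Nh) hidden weights -- learned frequencies and phases,
          so the basis functions need not be harmonically related
    W_o : (Nh+1, M) output weights
    """
    Xb = np.hstack([X, np.ones((len(X), 1))])
    H = np.sin(Xb @ W_h)                        # sinusoidal activations
    Hb = np.hstack([H, np.ones((len(H), 1))])
    return Hb @ W_o
```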
The 2013 International Joint Conference on Neural Networks (IJCNN), 2013
2011 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2011), 2011
In this paper, we propose a novel second-order paradigm called optimal input normalization (OIN) to address the slow convergence and high complexity of MLP training. By optimizing a non-orthogonal transformation matrix of the input units in an equivalent network, OIN absorbs a separate optimal learning factor for each synaptic weight, as well as for each hidden-unit threshold, improving MLP training performance. Moreover, by applying a whitening transformation to the negative Jacobian matrix of the hidden weights, a modified version of OIN, called optimal input normalization with hidden weights optimization (OIN-HWO), is also proposed. The Hessian matrices in both OIN and OIN-HWO are computed using the Gauss-Newton method. All linear equations are solved via orthogonal least squares (OLS). Regression simulations on several real-life datasets show that the proposed OIN has not only a much better convergence rate and generalization ability than output weight optimization-back propagation (OWO-BP), optimal input gains (OIG), and even the Levenberg-Marquardt (LM) method, but also requires less computation time than OWO-BP. Although OIN-HWO carries a somewhat higher computational burden than OIN, its convergence rate is faster and often approaches or rivals that of LM. OIN-based algorithms are therefore potentially very good choices for practical applications.
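The Gauss-Newton construction mentioned above is standard; a generic sketch (np.linalg.solve stands in for the paper's OLS solver, and the damping term is an added safeguard, not part of the paper):

```python
import numpy as np

def gauss_newton_step(J, e, damping=1e-8):
    """One Gauss-Newton step for a least-squares objective.

    J : (N, P) Jacobian of the residuals w.r.t. the parameters
    e : (N,) residual vector
    Forms the Gauss-Newton Hessian approximation H = J^T J and solves
    H d = J^T e for the parameter update d.
    """
    H = J.T @ J + damping * np.eye(J.shape[1])  # guard against singular H
    g = J.T @ e
    return np.linalg.solve(H, g)
```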
Conference Record of the Thirty-Sixth Asilomar Conference on Signals, Systems and Computers, 2002.
Due to the chaotic nature of multilayer perceptron training, training error usually fails to be a monotonically non-increasing function of the number of hidden units. An initialization and training methodology is developed that significantly increases the probability that the training error is monotonically non-increasing. First, a structured initialization generates the random weights in a particular order. Second, larger networks are initialized using weights from smaller trained networks. Lastly, the required number of iterations is calculated as a function of network size.
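A hedged sketch of the second step, initializing a larger hidden layer from a smaller trained one (hypothetical names, not the paper's procedure):

```python
import numpy as np

def grow_hidden_layer(W_h_small, n_new, scale=0.1, seed=0):
    """Initialize a larger hidden layer from a smaller trained network.

    W_h_small : (D, Nh) trained hidden weights of the smaller network.
    Copies the trained columns and appends n_new randomly initialized
    units. If the new units' output weights start at zero, the larger
    network begins with exactly the smaller network's training error.
    """
    rng = np.random.default_rng(seed)
    W_new = scale * rng.standard_normal((W_h_small.shape[0], n_new))
    return np.hstack([W_h_small, W_new])
```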
2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2007
This paper presents a new method of analyzing time-frequency plots of heart rate variability to detect sleep disordered breathing from nocturnal ECG. Data are collected from 12 normal subjects (7 males, 5 females; age 46 ± 9.38 years; AHI 3.75 ± 3.11) and 14 apneic subjects (8 males, 6 females; age 50.28 ± 9.60 years; AHI 31.21 ± 23.89). The proposed algorithm uses textural features extracted from normalized gray-level co-occurrence matrices (NGLCM) of images generated by the short-time discrete Fourier transform (STDFT) of the HRV. Using feature selection, seventeen features extracted from 10 different NGLCMs, representing four characteristically different gray-level images, are used as inputs to a three-layer multilayer perceptron (MLP) classifier. Over 1000 randomized Monte Carlo simulations, the mean training classification sensitivity, specificity, and accuracy are 99.00%, 93.42%, and 96.42%, respectively. The mean testing classification sensitivity, specificity, and accuracy are 94.42%, 85.40%, and 90.16%, respectively.
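The time-frequency image generation step can be sketched with SciPy's spectrogram (parameter values are illustrative, not those of the paper; the RR-resampling step is omitted):

```python
import numpy as np
from scipy.signal import spectrogram

def hrv_time_frequency_image(rr_series, fs=4.0, nperseg=256):
    """Time-frequency image of an HRV sequence via the short-time DFT.

    rr_series : RR-interval series, assumed already evenly resampled
    at fs Hz. Returns a 2-D log-magnitude array that can be quantized
    to gray levels and fed to an NGLCM feature extractor.
    """
    f, t, Sxx = spectrogram(rr_series, fs=fs, nperseg=nperseg)
    return 10 * np.log10(Sxx + 1e-12)   # dB-scaled time-frequency plot
```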
2008 30th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2008
The Florida AI Research Society Conference, 2004
The output weight optimization-hidden weight optimization (OWO-HWO) algorithm for training the multilayer perceptron alternately updates the output weights and the hidden weights. This layer-by-layer training strategy greatly improves convergence speed. However, in HWO, the desired net function actually evolves in the gradient direction, which inevitably reduces efficiency. In this paper, two improvements to the OWO-HWO algorithm are presented. New desired
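The OWO half of the algorithm solves linear equations for the output weights with the hidden weights fixed; a minimal sketch (lstsq stands in for whatever linear solver the paper uses):

```python
import numpy as np

def owo_step(H, T):
    """Output weight optimization: linear solve for the output weights.

    H : (N, Nh) hidden-unit activations over the training set
    T : (N, M) desired outputs
    With the hidden weights fixed, the MSE-optimal output weights
    satisfy a linear system, solved here by least squares.
    """
    Hb = np.hstack([H, np.ones((len(H), 1))])     # bias column
    W_o, *_ = np.linalg.lstsq(Hb, T, rcond=None)
    return W_o
```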
Neural Networks, 1991
In practical pattern recognition applications, the nearest neighbor classifier (NNC) is often applied because it does not require a priori knowledge of the joint probability density of the input feature vectors. As the number of example vectors increases, the error probability of the NNC approaches that of the Bayesian classifier. At the same time, however, the computational complexity of the NNC increases. Also, for a small number of example vectors, the NNC is not optimal with respect to the training data. In this paper, we attack these problems by mapping the NNC to a sigma-pi neural network, to which it is partially isomorphic. A modified form of back-propagation (BP) learning is then developed and used to improve classifier performance. As examples, we apply our approach to hand-printed numeral recognition and geometric shape recognition. Significant improvements in classification error percentages are observed for both training and testing data.
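For reference, the baseline NNC being mapped (the sigma-pi network itself is not sketched here):

```python
import numpy as np

def nnc_predict(X, examples, labels):
    """Nearest neighbor classifier: label of the closest example vector.

    Needs no density model, but its cost grows with the number of
    examples -- the complexity problem the paper's mapping attacks.
    """
    d = np.linalg.norm(X[:, None, :] - examples[None, :, :], axis=2)
    return labels[d.argmin(axis=1)]
```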
Neurocomputing, 2006
Neurocomputing, 2008