A Global Optimum Approach for One-Layer Neural Networks


Published: June 01 2002


Enrique Castillo

Department of Applied Mathematics and Computational Sciences, University of Cantabria and University of Castilla-La Mancha, 39005 Santander, Spain, [email protected]

Oscar Fontenla-Romero

Department of Computer Science, Faculty of Informatics, University of A Coruña, 15071 A Coruña, Spain, [email protected]

Bertha Guijarro-Berdiñas

Department of Computer Science, Faculty of Informatics, University of A Coruña, 15071 A Coruña, Spain, [email protected]

Amparo Alonso-Betanzos

Department of Computer Science, Faculty of Informatics, University of A Coruña, 15071 A Coruña, Spain, [email protected]

Received: May 11 2001

Accepted: August 22 2001

Online ISSN: 1530-888X

Print ISSN: 0899-7667

© 2002 Massachusetts Institute of Technology


Neural Computation (2002) 14 (6): 1429–1449.

Abstract

This article presents a method for learning the weights of a one-layer feedforward neural network by minimizing either the sum of squared errors or the maximum absolute error, measured in the input scale. With this error measure, a global optimum exists and can be obtained easily by solving linear systems of equations or linear programming problems, requiring much less computational power than standard methods. A variant of the method computes a large set of weight estimates, yielding robust (mean or median) estimates together with their associated standard errors, which give a good measure of the quality of the fit. The standard one-layer algorithms are then further improved by learning the neural functions instead of assuming them known. A set of application examples illustrates the methods. Finally, a comparison with other high-performance learning algorithms shows that the proposed methods are at least 10 times faster than the fastest standard algorithm used in the comparison.
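To make the core idea concrete, here is a minimal sketch of the sum-of-squared-errors version described in the abstract: for a one-layer network y = f(Wx + b) with an invertible activation f, measuring the error before f (in the input scale) turns training into ordinary linear least squares, which has a unique global optimum. The sketch assumes a logistic activation whose inverse is the logit; all names and data below are illustrative, not taken from the paper, and the maximum-absolute-error variant would instead be posed as a linear program.

```python
# Illustrative sketch only: solve for the weights of y = f(W x + b) by
# minimizing squared error in the input scale, i.e. || [X 1] w - f_inv(D) ||^2,
# which is a linear least-squares problem with a global optimum.
import numpy as np

def fit_one_layer(X, D, f_inv):
    """Return (weights, bias) minimizing squared error before the activation."""
    n = X.shape[0]
    A = np.hstack([X, np.ones((n, 1))])      # augment inputs with a bias column
    Z = f_inv(D)                             # map desired outputs through f^{-1}
    w, *_ = np.linalg.lstsq(A, Z, rcond=None)
    return w[:-1], w[-1]

# Example with a logistic activation; the logit is its inverse.
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
logit = lambda y: np.log(y / (1.0 - y))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w, true_b = np.array([[1.5], [-2.0], [0.5]]), 0.3
D = sigmoid(X @ true_w + true_b)             # noiseless targets for illustration

w, b = fit_one_layer(X, D, logit)
print(w.ravel(), b)                          # recovers true_w and true_b
```

Because the problem is linear once the targets are mapped through f^{-1}, no iterative gradient descent is needed; a single solve of the normal equations (or, as here, a least-squares routine) yields the optimum, which is the source of the speed advantage the abstract reports.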

