Nonnegative Least Squares Learning for the Random Neural Network

Abstract

In this paper, a novel supervised batch learning algorithm for the Random Neural Network (RNN) is proposed. The RNN equations associated with training are deliberately approximated to obtain a linear Nonnegative Least Squares (NNLS) problem that is strictly convex and can therefore be solved to optimality. Following a review of selected NNLS algorithms, a simple and efficient approach, identified as capable of handling large-scale NNLS problems, is employed. The proposed algorithm is applied to a combinatorial optimization problem arising in disaster management, where it is shown to outperform the standard gradient descent algorithm for the RNN.
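The core computational step the abstract describes, solving a strictly convex linear NNLS problem to optimality, can be illustrated with a generic instance. The sketch below is not the paper's RNN-specific formulation; it simply shows the standard problem min_x ||Ax - b||_2 subject to x >= 0 solved with SciPy's active-set solver (the Lawson-Hanson algorithm), using random stand-in data for A and b:

```python
import numpy as np
from scipy.optimize import nnls

# Stand-in data only: in the paper, A and b would be derived from the
# approximated RNN training equations. With A of full column rank, the
# NNLS objective is strictly convex, so the minimizer is unique.
rng = np.random.default_rng(0)
A = rng.random((20, 5))   # overdetermined design matrix
b = rng.random(20)        # target vector

# Solve min_x ||Ax - b||_2 subject to x >= 0 (Lawson-Hanson active set)
x, residual = nnls(A, b)

print(x)        # optimal nonnegative solution
print(residual) # 2-norm of the residual at the optimum
```

Because the feasible set is the nonnegative orthant and the objective is strictly convex, any such solver terminates at the global optimum, which is what distinguishes this formulation from gradient descent on the nonconvex RNN training objective.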



Author information

Authors and Affiliations

  1. Intelligent Systems and Networks Group, Department of Electrical and Electronic Engineering, Imperial College London, London SW7 2BT, UK
    Stelios Timotheou

Editor information

Véra Kůrková, Roman Neruda, Jan Koutník

Rights and permissions

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Timotheou, S. (2008). Nonnegative Least Squares Learning for the Random Neural Network. In: Kůrková, V., Neruda, R., Koutník, J. (eds) Artificial Neural Networks - ICANN 2008. ICANN 2008. Lecture Notes in Computer Science, vol 5163. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87536-9_21


