Comparison of RBF Network Learning and Reinforcement Learning on the Maze Exploration Problem
Abstract
The emergence of intelligent behavior in a simple robotic agent is studied in this paper. Two control mechanisms for the agent are considered: a radial basis function (RBF) neural network trained by an evolutionary algorithm, and a traditional reinforcement learning algorithm over a finite agent state space. The two approaches are compared on the maze exploration problem.
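To make the two control mechanisms named in the abstract concrete, the following is a minimal illustrative sketch (not taken from the paper): an RBF network output computed from Gaussian basis functions, and one tabular Q-learning update. All function names and parameter values here are assumptions for illustration.

```python
import numpy as np

def rbf_output(x, centers, widths, weights):
    """RBF network output: y(x) = sum_i w_i * exp(-||x - c_i||^2 / (2 * sigma_i^2)).
    In the evolutionary setting, centers, widths, and weights would be
    encoded in an individual's genome and tuned by the evolutionary algorithm."""
    d2 = np.sum((centers - x) ** 2, axis=1)          # squared distances to each center
    return weights @ np.exp(-d2 / (2.0 * widths ** 2))

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step over a finite state space:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q
```

In the maze setting, `x` would be a sensor reading driving a continuous controller, while `s` and `a` would index discretized maze states and movement actions; both mappings depend on design choices the abstract does not specify.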
Author information
Authors and Affiliations
- Institute of Computer Science, Academy of Sciences of the Czech Republic, Pod vodárenskou věží 2, Prague 8, Czech Republic
Stanislav Slušný, Roman Neruda & Petra Vidnerová
Editor information
Věra Kůrková, Roman Neruda, Jan Koutník
Copyright information
© 2008 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Slušný, S., Neruda, R., Vidnerová, P. (2008). Comparison of RBF Network Learning and Reinforcement Learning on the Maze Exploration Problem. In: Kůrková, V., Neruda, R., Koutník, J. (eds) Artificial Neural Networks - ICANN 2008. ICANN 2008. Lecture Notes in Computer Science, vol 5163. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-87536-9\_74
- DOI: https://doi.org/10.1007/978-3-540-87536-9\_74
- Publisher Name: Springer, Berlin, Heidelberg
- Print ISBN: 978-3-540-87535-2
- Online ISBN: 978-3-540-87536-9
- eBook Packages: Computer Science, Computer Science (R0)