Learning algorithms for small mobile robots: case study on maze exploration
Related papers
Comparison of RBF Network Learning and Reinforcement Learning on the Maze Exploration Problem
Lecture Notes in Computer Science
The emergence of intelligent behavior within a simple robotic agent is studied in this paper. Two control mechanisms for the agent are considered: a radial basis function neural network trained by an evolutionary algorithm, and a traditional reinforcement learning algorithm over a finite agent state space. A comparison of these two approaches is presented on the maze exploration problem.
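The second approach this abstract names, tabular Q-learning over a finite state space, can be sketched as follows. This is a minimal illustration with hypothetical state, action, reward, and transition functions, not the paper's actual implementation:

```python
import random

def q_learning_step(Q, state, actions, reward_fn, next_state_fn,
                    alpha=0.1, gamma=0.9, epsilon=0.1):
    """One epsilon-greedy tabular Q-learning step over a finite state space.

    Q is a dict mapping (state, action) pairs to value estimates;
    reward_fn and next_state_fn stand in for the environment.
    """
    # Epsilon-greedy action selection.
    if random.random() < epsilon:
        action = random.choice(actions)
    else:
        action = max(actions, key=lambda a: Q.get((state, a), 0.0))
    reward = reward_fn(state, action)
    next_state = next_state_fn(state, action)
    # Standard Q-learning temporal-difference update.
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return next_state
```

Repeated over many episodes in a maze, the table converges toward action values from which a greedy exploration policy can be read off.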
Electronics, Robotics and Automotive Mechanics Conference (CERMA'06), 2006
This paper describes the use of soft computing techniques for acquiring adaptive behaviors for mobile robot exploration. Action-based Environment Modeling (AEM) is used for navigation within unknown environments, and unsupervised adaptive learning is used to obtain the dynamic behaviors. The investigation shows that this unsupervised adaptive method is capable of training a simple low-cost robot to develop highly fit behaviors within a diverse set of complex environments. The experiments supporting these claims were performed in the Khepera robot simulator. The robot uses a neural network to interpret the measurements from its sensors in order to determine its next behavior. This network was trained using a Genetic Algorithm (GA), where each individual is a neural network. Fitness evaluation scores the quality of robot behavior with respect to its exploration capability within its environment.
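The training scheme described, a GA where each individual encodes a neural network and fitness measures exploration quality, can be sketched generically. This is a simplified evolutionary loop with a caller-supplied fitness function; the paper's actual genome encoding and fitness differ:

```python
import random

def evolve(fitness, dim, pop_size=20, generations=50, sigma=0.1):
    """Evolve flat weight vectors (e.g. neural-network weights) by
    truncation selection plus Gaussian mutation.

    fitness maps a weight vector to a score (higher is better).
    """
    pop = [[random.uniform(-1, 1) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, refill with mutated copies of it.
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]
        pop = elite + [[w + random.gauss(0, sigma) for w in parent]
                       for parent in elite]
    return max(pop, key=fitness)
```

In the paper's setting the fitness call would run a simulated exploration episode and score coverage; here any scalar objective works.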
Rule-Based Analysis of Behaviour Learned by Evolutionary and Reinforcement Algorithms
Lecture Notes in Computer Science
We study behavioural patterns learned by a robotic agent by means of two different control and adaptive approaches: a radial basis function neural network trained by an evolutionary algorithm, and a traditional reinforcement Q-learning algorithm. In both cases, a set of rules controlling the agent is derived from the learned controllers, and these rule sets are compared. It is shown that both procedures lead to reasonable and compact, albeit rather different, rule sets.
Learning for intelligent mobile robots
2003
Unlike intelligent industrial robots, which often work in a structured factory setting, intelligent mobile robots must often operate in an unstructured environment cluttered with obstacles and with many possible action paths. Such machines nevertheless have many potential applications in medicine, defense, industry and even the home that make their study important. Sensors such as vision are needed, but in many applications some form of learning is also required. The purpose of this paper is to present a discussion of recent technical advances in learning for intelligent mobile robots.
Learning Performance in Evolutionary Behavior Based Mobile Robot Navigation
Lecture Notes in Computer Science
In this paper we use information theory to study the impact of various motivation and environmental configurations on learning performance. This study is done within the context of an evolutionary, fuzzy-motivation-based approach used for acquiring behaviors in mobile robot exploration of complex environments. Our robot uses a neural network to evaluate measurements from its sensors in order to establish its next behavior. Adaptive learning, fuzzy-based fitness, and Action-based Environment Modeling (AEM) are integrated and applied to training the robot. Using information theory we determine the conditions that lead the robot toward highly fit behaviors. The research also shows that information theory is a useful tool for analyzing robotic training methods.
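The abstract does not give the exact information-theoretic measures used, but the basic quantity involved, the Shannon entropy of an empirical distribution (for example over cells visited or actions emitted during exploration), is straightforward:

```python
import math

def entropy(counts):
    """Shannon entropy in bits of an empirical distribution given as
    raw event counts (e.g. visit counts per maze cell)."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total)
                for c in counts if c > 0)
```

A uniform distribution over n outcomes gives log2(n) bits, the maximum; a controller that always repeats the same action scores zero, so entropy serves as a simple diversity measure for behavior.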
Self-learning Mobile Robot Navigation in Unknown Environment Using Evolutionary Learning
J. Univers. Comput. Sci., 2014
An autonomous mobile robot operating in an unstructured environment must be able to learn and adapt to dynamic changes in that environment. Learning navigation and control of a mobile robot in an unstructured environment is one of the most challenging problems. Fuzzy logic control is a useful tool in the field of mobile robot navigation. In this research, we optimized the performance of a fuzzy logic controller using an evolutionary learning technique. Two approaches have been designed and implemented: a Fuzzy Logic Controller (FLC) and a Genetic-Fuzzy Controller (GA-FLC). The Genetic Algorithm is used to automatically learn and tune the membership function parameters for mobile robot motion control. The performance of the two approaches is compared through simulation.
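The abstract does not specify the membership function shape; assuming the common triangular form, the parameters a GA-FLC would tune for each fuzzy set are its three break points:

```python
def tri_mf(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b
    (a < b < c). The triple (a, b, c) is what a GA would tune per
    fuzzy set in a GA-FLC."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    # Rising edge left of the peak, falling edge right of it.
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)
```

A chromosome would then concatenate one (a, b, c) triple per fuzzy set, and fitness would score the resulting controller's navigation behavior in simulation.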
Robotour Solution as a Learned Behavior Based on Artificial Neural Networks
2010
Our contribution describes a mobile robot platform built for the Robotour contest (the robotika.cz outdoor delivery challenge). The robot is a standard differential-drive robot with a good-quality consumer-market digital video camera and a lightweight but high-performance laptop computer used as the main control board. A supplementary board controls the motors and sensors of the robot. The robot uses a behavior-based architecture, and its vision module, responsible for track following, uses an artificial neural network trained on a set of images. This is a novel solution that has not been used in the Robotour contest previously, and our early experiments demonstrate promising results.
Lecture Notes in Computer Science, 2003
Building intelligent systems that are capable of learning, acting reactively, and planning actions before their execution is a major goal of artificial intelligence. This paper presents two reactive and planning systems that contain important novelties with respect to previous neural-network planners and reinforcement-learning based planners: (a) the introduction of a new component ("matcher") allows both planners to execute genuine taskable planning, i.e. to re-use knowledge to pursue different goals (while previous reinforcement-learning based models have used planning only for speeding up learning; see the concept of "state anticipation" of Butz et al. in this volume); (b) the planners show for the first time that trained neural-network models of the world can generate long prediction chains that have an interesting robustness with regard to noise; (c) two novel algorithms that generate chains of predictions in order to plan, and control the flow of information between the systems' different neural components, are presented; (d) one of the planners uses backward "predictions" to exploit the knowledge of the pursued goal; (e) the two systems nicely integrate reactive behavior and planning on the basis of a measure of "confidence" in action. The soundness and potential of the two reactive and planning systems are tested and compared with a simulated robot engaged in a stochastic path-finding task. The paper also presents an extensive literature review on the relevant issues.
Control of a differentially driven mobile robot using radial basis function based neural networks
WSEAS Transactions on …, 2008
This paper proposes a radial basis function neural network approach to mobile robot orientation adjustment using reinforcement learning. In order to control the orientation of the mobile robot, a neural network control system has been constructed and implemented. The neural controller is charged with enhancing the control system by adding degrees of reward. Exploiting the ability of neural networks to learn relationships, the desired reference orientation and the position error of the mobile robot are used in training. The radial basis function based neural networks have been trained via reinforcement learning. The performance of the proposed controller and learning system has been evaluated using a mobile robot with two driving wheels mounted on the same axis and a free wheel at the front for balance.
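A Gaussian RBF network of the kind named here computes its output as a weighted sum of radial units. A minimal forward pass looks like the following; the centers, widths, and weights are placeholders, since the paper's exact architecture and training details are not given in the abstract:

```python
import math

def rbf_forward(x, centers, widths, weights, bias=0.0):
    """Forward pass of a radial basis function network with Gaussian
    units: y = bias + sum_i w_i * exp(-||x - c_i||^2 / (2 * s_i^2))."""
    y = bias
    for c, s, w in zip(centers, widths, weights):
        # Squared Euclidean distance from the input to this unit's center.
        dist2 = sum((xj - cj) ** 2 for xj, cj in zip(x, c))
        y += w * math.exp(-dist2 / (2 * s * s))
    return y
```

In a reinforcement-learning setting such as the one described, the output weights (and possibly centers and widths) would be adjusted from reward signals rather than from labeled targets.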