Neuro-evolved Agent-based Cooperative Controller for a Behavior-based Autonomous Robot
Related papers
Evolving cooperation of simple agents for the control of an autonomous robot
Proceedings of the 5th IFAC Symposium on …, 2004
A distributed and scalable architecture for the control of an autonomous robot is presented in this work. In our proposal, the whole robotic agent is divided into sub-agents. Each sub-agent is coded as a very simple neural network and controls one sensor/actuator element of the robot. Sub-agents learn by evolution how to handle their sensor/actuator and how to cooperate with the other sub-agents. Behaviors emerge when several sub-agents, embodied in the single robotic agent, are co-evolved. It is demonstrated that the proposed distributed controller learns faster and better than a neuro-evolved central controller.
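The per-device sub-agent scheme described above can be sketched minimally. This is an illustrative toy, not the paper's implementation: two hypothetical sub-agents (left and right), each a two-weight network reading its own sensor plus a message from its partner, are co-evolved as separate populations under a shared team fitness that rewards turning away from the closer obstacle. All names and parameters are assumptions.

```python
import math
import random

rng = random.Random(7)

# Two sub-agents, one per sensor/actuator pair (left and right).
# Each genome has two weights: one for its own sensor reading and one
# for the message received from the other sub-agent.

def team_output(gL, gR, sL, sR):
    msgL = math.tanh(gL[0] * sL)                 # first pass: sensing
    msgR = math.tanh(gR[0] * sR)
    outL = math.tanh(gL[0] * sL + gL[1] * msgR)  # second pass: cooperation
    outR = math.tanh(gR[0] * sR + gR[1] * msgL)
    return outL, outR

# Fixed sensor trials so fitness is deterministic across calls.
TRIALS = [(rng.random(), rng.random()) for _ in range(20)]

def team_fitness(gL, gR):
    # Toy objective standing in for a simulated run: speed up the wheel
    # on the side whose sensor reports the closer obstacle (lower reading).
    score = 0.0
    for sL, sR in TRIALS:
        outL, outR = team_output(gL, gR, sL, sR)
        score += (outL - outR) if sL < sR else (outR - outL)
    return score

def coevolve_team(gens=60, n=20, sigma=0.3):
    popL = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(n)]
    popR = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(n)]
    for _ in range(gens):
        pairs = sorted(zip(popL, popR),
                       key=lambda p: team_fitness(*p), reverse=True)
        popL = [p[0] for p in pairs[:n // 2]]      # truncation selection
        popR = [p[1] for p in pairs[:n // 2]]
        popL += [[w + rng.gauss(0, sigma) for w in g] for g in popL]
        popR += [[w + rng.gauss(0, sigma) for w in g] for g in popR]
        rng.shuffle(popR)                          # re-pair partners
    return max(zip(popL, popR), key=lambda p: team_fitness(*p))

gL, gR = coevolve_team()
```

Shuffling partners each generation is the relevant design point: a sub-agent scores well only if it cooperates with whatever partner the other population currently supplies, so robust cooperation, rather than a single lucky pairing, is what gets selected.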
Behavioral control through evolutionary neurocontrollers for autonomous mobile robot navigation
Robotics and Autonomous …, 2009
This paper studies the scaling up of behaviors in evolutionary robotics (ER): complex behaviors are obtained from simple ones. Each behavior is supported by an artificial neural network (ANN)-based controller, or neurocontroller. A method for generating a hierarchy of neurocontrollers, based on the paradigm of Layered Evolution (LE), is developed and verified experimentally through computer simulations and tests on a Khepera micro-robot. Several behavioral modules are first evolved using specialized neurocontrollers based on different ANN paradigms. The results show that coordinating simple behaviors through LE is a feasible strategy that gives rise to emergent complex behaviors, which can then solve real-world problems efficiently. From a purely evolutionary perspective, however, the methodology depends heavily on the user's prior knowledge of the problem, and evolution takes place in a rigid, prescribed framework. Mobile robot navigation in an unknown environment is used as a test bed for the proposed scaling strategies.
Neuro-Evolution of Mobile Robot Controller
MENDEL
We present a neuro-evolution design for the control of a mobile robot in a 2D simulation environment. The robot moves through an unknown environment with obstacles from a start position to a goal position. Its trajectory is governed by a neural-network-based controller whose inputs are the readings of several laser beam sensors. The neural network controller is trained with an evolutionary approach, implemented as a genetic algorithm.
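The scheme this abstract describes — a genetic algorithm learning the weights of a sensor-to-motor neural controller — can be sketched as follows. The network size, GA parameters, and fitness function are illustrative assumptions, with a toy obstacle-avoidance reward standing in for the 2D simulation:

```python
import math
import random

rng = random.Random(0)

N_SENSORS = 5            # simulated laser-beam distance readings
N_OUT = 2                # left/right wheel commands
N_W = N_SENSORS * N_OUT  # single linear layer, no hidden units

# Fixed evaluation trials so the fitness function is deterministic.
TRIALS = [[rng.random() for _ in range(N_SENSORS)] for _ in range(20)]

def control(weights, sensors):
    """Map sensor readings to wheel commands through a linear layer + tanh."""
    return [
        math.tanh(sum(weights[j * N_SENSORS + i] * sensors[i]
                      for i in range(N_SENSORS)))
        for j in range(N_OUT)
    ]

def fitness(weights):
    """Toy stand-in for a simulated run: reward turning away from the
    side whose sensor reports the closer obstacle (lower reading)."""
    score = 0.0
    for sensors in TRIALS:
        left, right = control(weights, sensors)
        score += (right - left) if sensors[0] < sensors[-1] else (left - right)
    return score

def evolve(pop_size=30, generations=40, sigma=0.3):
    pop = [[rng.gauss(0, 1) for _ in range(N_W)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, N_W)           # one-point crossover
            children.append([w + rng.gauss(0, sigma)  # Gaussian mutation
                             for w in a[:cut] + b[cut:]])
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In a real setup the `fitness` call would launch a full simulated navigation run and score the resulting trajectory; everything else in the loop stays the same.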
Evolving cooperative neural agents for controlling vision guided mobile robots
2010
We have studied and developed the behavior of two specific neural processors, used for vehicle driving and path planning, in order to control mobile robots. Each processor is an independent agent defined by a neural network trained for a specific task. Through simulated evolution, fully trained agents are encouraged to socialize by opening low-bandwidth, asynchronous channels between them. Under evolutionary pressure, the agents spontaneously develop communication skills (a protolanguage) that take advantage of the exchanged information, even under noisy conditions. The emergent cooperative behavior raises the level of competence of vision-guided mobile robots and allows convenient autonomous exploration of the environment. The system has been tested in a simulated setting and shows robust performance.
A general learning co-evolution method to generalize autonomous robot navigation behavior
2009
In this paper a new co-evolutionary method, called Uniform Coevolution, is introduced to learn the weights of a neural network controller in autonomous robots. An evolution strategy is used to learn high-performance reactive behavior for navigation and collision avoidance. The co-evolutionary method also evolves the environment, so that a general behavior able to solve the problem in different environments is learned. With a traditional evolution-strategy method, without coevolution, the learning process obtains only a specialized behavior. All the behaviors obtained, with and without coevolution, have been tested in a set of environments, and the generalization capability of each learned behavior is shown. A simulator based on the Khepera mini-robot has been used to learn each behavior. The results show that Uniform Coevolution obtains better generalized solutions to example-based problems.
Generalization capabilities of co-evolution in learning robot behavior
Journal of Robotic Systems, 2002
In this article, a co-evolutionary method is used to evolve neural controllers for general obstacle avoidance in a Braitenberg vehicle. In a first evolutionary process, Evolution Strategies were applied to generate the neural controllers; the generality of the resulting behaviors was quite poor. In a second evolutionary process, a new co-evolutionary method, called Uniform Co-evolution, is introduced to co-evolve both the controllers and the environment. A comparison of the two methods shows that the co-evolutionary approach improves the generality of the controllers.
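The idea behind co-evolving controllers with their environments — letting environment difficulty track controller competence so that generality, not memorization of one arena, is selected for — can be illustrated under heavily simplified assumptions. This is a toy, not the authors' formulation: the controller is a single gain `k`, the environment a single obstacle offset `o`, and success means the reactive turn `k * o` matches the offset closely.

```python
import random

rng = random.Random(1)

def succeeds(k, o):
    # Any k near 1 matches every offset; large |o| are the hard environments.
    return abs(k * o - o) < 0.2

def ctrl_fitness(k, envs):
    return sum(succeeds(k, o) for o in envs) / len(envs)

def env_fitness(o, ctrls):
    # A hard environment is one that defeats many current controllers.
    return sum(not succeeds(k, o) for k in ctrls) / len(ctrls)

def coevolve(gens=60, n=40, sigma=0.2):
    ctrls = [rng.gauss(0, 1) for _ in range(n)]
    envs = [rng.uniform(-1, 1) for _ in range(n)]
    for _ in range(gens):
        ctrls.sort(key=lambda k: ctrl_fitness(k, envs), reverse=True)
        envs.sort(key=lambda o: env_fitness(o, ctrls), reverse=True)
        # Keep the top half of each population, refill with mutants.
        ctrls = ctrls[:n // 2] + [k + rng.gauss(0, sigma)
                                  for k in ctrls[:n // 2]]
        envs = envs[:n // 2] + [max(-1.0, min(1.0, o + rng.gauss(0, sigma)))
                                for o in envs[:n // 2]]
    return max(ctrls, key=lambda k: ctrl_fitness(k, envs))

best_k = coevolve()
```

Because the environments are themselves selected for defeating the current controllers, the surviving controller must handle the full offset range rather than whichever arena it started in — the same pressure toward generality the article reports.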
Emergent behaviour evolution in collective autonomous mobile robots
This paper deals with genetic-algorithm-based methods for finding an optimal structure for a neural network (weights and biases) and for a fuzzy controller (rule set) to control a group of mobile autonomous robots. We have implemented a predator-and-prey pursuit environment as a test bed for our evolving agents. Using their sensory information and an evolution-based behavior-decision controller, the robots act to minimize the distance between themselves and the target locations. The proposed approach is capable of dealing with changing environments, and its effectiveness and efficiency are demonstrated by simulation studies. The goal of the robots, namely catching the targets, could be fulfilled only through the emergent social behavior observed in our experimental results.
Applying Evolution Strategies to Neural Networks Robot Controller
1999
In this paper an evolution strategy (ES) is introduced to learn the weights of a neural network controller in autonomous robots. The ES is used to learn high-performance reactive behavior for navigation and collision avoidance. The learned behavior is able to solve the problem in different environments, so the learning process has proven able to obtain a generalized behavior. All the behaviors obtained have been tested in a set of environments, and the generalization capability of each learned behavior is shown. No subjective information about "how to accomplish the task" has been included in the fitness function. A simulator based on the Khepera mini-robot has been used to learn each behavior.
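The ES-learns-weights loop this abstract outlines can be illustrated with a (1+1)-ES using the classic 1/5th success rule for step-size control. The target weight vector and the fitness function are hypothetical stand-ins: in the paper, fitness would come from scoring a Khepera simulation run, not from distance to known weights.

```python
import random

rng = random.Random(42)

# Hypothetical "ideal" reactive weights the trial happens to reward;
# a real fitness would instead score a simulated navigation run.
TARGET = [0.5, -1.2, 0.8, 0.0, 1.5]

def fitness(w):
    # Higher is better: negative squared error to the target behavior.
    return -sum((wi - ti) ** 2 for wi, ti in zip(w, TARGET))

def one_plus_one_es(steps=2000, sigma=0.5):
    parent = [rng.gauss(0, 1) for _ in TARGET]
    f_parent = fitness(parent)
    successes = 0
    for t in range(1, steps + 1):
        child = [wi + rng.gauss(0, sigma) for wi in parent]
        f_child = fitness(child)
        if f_child > f_parent:          # greedy (1+1) selection
            parent, f_parent = child, f_child
            successes += 1
        if t % 50 == 0:                 # 1/5th success rule: adapt sigma
            rate = successes / 50
            sigma *= 1.5 if rate > 0.2 else 0.82
            successes = 0
    return parent, f_parent

w, f = one_plus_one_es()
```

The step-size adaptation is what distinguishes an ES from a plain hill climber: when too few mutations succeed, the search radius shrinks to fine-tune; when many succeed, it grows to move faster.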
2006
This paper tackles the issue of designing homogeneous neuro-controllers through artificial evolution in order to control groups of robots that differ in their sensory capabilities. To accomplish a common goal, the agents have to complement the partial "view" each has of the environment. The results show that the agents are capable of cooperating and coordinating their actions to carry out a navigation task. A preliminary analysis of the mechanisms underlying the group behaviour is provided.