Evolving Formation Movement for a Homogeneous Multi-Robot System: Teamwork and Role-Allocation with Real Robots
Related papers
Evolving Team Behaviour for Real Robots
We report on recent work in which we employed artificial evolution to design neural network controllers for small, homogeneous teams of mobile autonomous robots. The robots are evolved to perform a formation movement task from random starting positions, equipped only with infrared sensors. The dual constraints of homogeneity and minimal sensors make this a non-trivial task. We describe the behaviour of a successful evolved team in which robots adopt and maintain functionally distinct roles in order to achieve the task. We believe this to be the first example of the use of artificial evolution to design coordinated, cooperative behaviour for real robots.
Philosophical Transactions of The Royal Society A: Mathematical, Physical and Engineering Sciences, 2003
We report on recent work in which we employed artificial evolution to design neural network controllers for small, homogeneous teams of mobile autonomous robots. The robots were evolved to perform a formation-movement task from random starting positions, equipped only with infrared sensors. The dual constraints of homogeneity and minimal sensors make this a non-trivial task. We describe the behaviour of a successful system in which robots adopt and maintain functionally distinct roles in order to achieve the task. We believe this to be the first example of the use of artificial evolution to design coordinated, cooperative behaviour for real robots.
Evolving Teamwork and Role-Allocation with Real Robots
2002
We report on recent work in which we employed artificial evolution to design neural network controllers for small, homogeneous teams of mobile autonomous robots. The robots are evolved to perform a formation movement task from random starting positions, equipped only with infrared sensors. The dual constraints of homogeneity and minimal sensors make this a non-trivial task. We describe the behaviour of a successful evolved team in which robots adopt and maintain functionally distinct roles in order to achieve the task. We believe this to be the first example of the use of artificial evolution to design coordinated, cooperative behaviour for real robots.
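The evolutionary setup these abstracts describe, a shared fixed-length genome decoded into an identical neural controller on every robot of the homogeneous team, can be sketched roughly as follows. The network size, selection scheme, mutation rate, and fitness stub are illustrative assumptions rather than the authors' actual parameters.

```python
# Rough sketch of evolving a clonal team controller: one genotype encodes
# the weights of a small feedforward network mapping infrared readings to
# two motor speeds; every robot in the team runs the same controller.
# Sizes, selection, mutation, and fitness below are illustrative only.
import random

N_IR, N_HIDDEN, N_MOTORS = 8, 4, 2
GENOME_LEN = N_IR * N_HIDDEN + N_HIDDEN * N_MOTORS

def controller(genome, ir_readings):
    """Feedforward pass: IR sensors -> hidden layer -> motor speeds."""
    w1 = genome[:N_IR * N_HIDDEN]
    w2 = genome[N_IR * N_HIDDEN:]
    hidden = [sum(ir_readings[i] * w1[i * N_HIDDEN + h] for i in range(N_IR))
              for h in range(N_HIDDEN)]
    hidden = [max(0.0, h) for h in hidden]              # simple ReLU
    return [sum(hidden[h] * w2[h * N_MOTORS + m] for h in range(N_HIDDEN))
            for m in range(N_MOTORS)]

def team_fitness(genome):
    """Placeholder: would simulate the clonal team on the formation-movement
    task from random start positions and return a score."""
    return -sum(g * g for g in genome)                  # dummy objective

def evolve(pop_size=30, generations=20, mut_sigma=0.1):
    pop = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=team_fitness, reverse=True)[:pop_size // 5]
        pop = [[g + random.gauss(0, mut_sigma) for g in random.choice(elite)]
               for _ in range(pop_size)]
    return max(pop, key=team_fitness)

best = evolve()
print(controller(best, [0.5] * N_IR))
```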
Evolving Neural Network Controllers for a Team of Self-Organizing Robots
2010
Self-organizing systems obtain a global system behavior through typically simple local interactions among a number of components or agents. The emergent service often displays properties such as adaptability, robustness, and scalability, which makes the self-organizing paradigm interesting for technical applications such as cooperative autonomous robots. The behavior governing the local interactions is usually simple, but it is often difficult to define the right set of interaction rules to achieve a desired global behavior. In this paper we describe a novel design approach using an evolutionary algorithm and artificial neural networks to automate the part of the design process that requires most of the effort. A simulated robot soccer game was implemented to test and evaluate the proposed method. A new approach to evolving competitive behavior is also introduced, using the Swiss system instead of a full tournament to cut down the number of necessary simulations.
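The Swiss-system evaluation mentioned above pairs individuals with similar running scores each round instead of playing a full round-robin, so the number of simulated games grows with rounds times population size rather than with the population squared. A minimal sketch, assuming a simple win/draw scoring scheme and a hypothetical play_match stub:

```python
# Swiss-system pairing for evaluating competitive controllers: each round,
# individuals are sorted by current score and neighbours are paired off.
# play_match and the number of rounds are illustrative assumptions.
import random

def play_match(a, b):
    """Placeholder for a simulated soccer game; returns the winner or None."""
    return random.choice([a, b, None])

def swiss_scores(population, rounds=5):
    scores = [0.0] * len(population)
    for _ in range(rounds):
        order = sorted(range(len(population)),
                       key=lambda i: scores[i], reverse=True)
        for a, b in zip(order[0::2], order[1::2]):      # pair neighbours by score
            winner = play_match(population[a], population[b])
            if winner is None:
                scores[a] += 0.5
                scores[b] += 0.5
            elif winner is population[a]:
                scores[a] += 1.0
            else:
                scores[b] += 1.0
    return scores

print(swiss_scores([f"ind{i}" for i in range(8)]))
```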
Strengths and synergies of evolved and designed controllers: A study within collective robotics
Artificial Intelligence, 2009
This paper analyses the strengths and weaknesses of self-organising approaches, such as evolutionary robotics, and direct design approaches, such as behaviour-based controllers, for the production of autonomous robots' controllers, and shows how the two approaches can be usefully combined. In particular, the paper proposes a method for encoding evolved neural-network-based behaviours into motor schema-based controllers and then shows how these controllers can be modified and combined to produce robots capable of solving new tasks. The method has been validated in the context of a collective robotics scenario in which a group of physically assembled simulated autonomous robots is required to produce different forms of coordinated behaviour (e.g., coordinated motion, walled-arena exiting, and light pursuing).
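A motor schema-based controller of the kind the evolved behaviours are encoded into combines the motion vectors proposed by individual schemas as a gain-weighted sum. The schemas and gains in this sketch are hypothetical illustrations, not the paper's actual behaviour set:

```python
# Generic motor schema combination: each schema proposes a motion vector
# from the robot's current (relative) perceptual state, and the commanded
# motion is the gain-weighted sum. Schemas and gains are hypothetical.
def move_toward_light(state):
    lx, ly = state["light"]
    return (lx, ly)

def avoid_wall(state):
    wx, wy = state["nearest_wall"]
    return (-wx, -wy)                                   # push away from the wall

def keep_formation(state):
    cx, cy = state["team_centroid"]
    return (cx, cy)

SCHEMAS = [(1.0, move_toward_light), (1.5, avoid_wall), (0.8, keep_formation)]

def motor_command(state):
    """Weighted vector sum of all active schemas."""
    vx = sum(gain * schema(state)[0] for gain, schema in SCHEMAS)
    vy = sum(gain * schema(state)[1] for gain, schema in SCHEMAS)
    return vx, vy

state = {"light": (0.2, 0.9), "nearest_wall": (0.0, -1.0),
         "team_centroid": (0.5, 0.1)}
print(motor_command(state))
```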
Evolving cooperation of simple agents for the control of an autonomous robot
Proceedings of the 5th IFAC Symposium on …, 2004
A distributed and scalable architecture for the control of an autonomous robot is presented in this work. In our proposal, the whole robotic agent is divided into sub-agents. Every sub-agent is encoded as a very simple neural network and controls one sensor/actuator element of the robot. Sub-agents learn by evolution how to handle their sensor/actuator and how to cooperate with the other sub-agents. Behaviours emerge from the co-evolution of the sub-agents embodied in the single robotic agent. It is demonstrated that the proposed distributed controller learns faster and better than a neuro-evolved central controller.
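The distributed arrangement described above gives each sensor/actuator element its own tiny network, with all sub-agents decoded from one genome and scored together on the embodied robot. A minimal sketch, assuming arbitrary sub-agent counts and a placeholder fitness:

```python
# One tiny network per sensor/actuator element, each taking its own slice
# of the genome; cooperation is rewarded only at the whole-robot level.
# Sub-agent counts, input sizes, and the fitness stub are assumptions.
class SubAgent:
    """One sensor/actuator element controlled by a single-layer network."""
    def __init__(self, weights):
        self.weights = weights

    def act(self, inputs):
        return sum(w * x for w, x in zip(self.weights, inputs))

def build_robot(genome, n_subagents=4, n_inputs=3):
    """Split one genome into per-sub-agent weight chunks."""
    return [SubAgent(genome[i * n_inputs:(i + 1) * n_inputs])
            for i in range(n_subagents)]

def robot_fitness(genome):
    """Placeholder for running the whole robot (all sub-agents together)
    in simulation and scoring the resulting behaviour."""
    robot = build_robot(genome)
    readings = [0.3, -0.1, 0.7]
    return sum(agent.act(readings) for agent in robot)  # dummy score

print(robot_fitness([0.1] * 12))
```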
Emergent behaviour evolution in collective autonomous mobile robots
This paper deals with genetic-algorithm-based methods for finding the optimal structure of a neural network (weights and biases) and of a fuzzy controller (rule set) to control a group of autonomous mobile robots. We have implemented a predator-and-prey pursuit environment as a test bed for our evolving agents. Using their sensory information and an evolution-based behaviour-decision controller, the robots act so as to minimize the distance between themselves and the target locations. The proposed approach is capable of dealing with changing environments, and its effectiveness and efficiency are demonstrated in simulation studies. The goal of the robots, namely catching the targets, can be fulfilled only through the emergent social behaviour observed in our experimental results.
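The distance-minimizing objective implied by this abstract can be expressed as a fitness function that penalizes the cumulative predator-target distance over an episode. The world model and episode length below are placeholder assumptions:

```python
# Distance-based fitness for a predator-prey pursuit task: a genome
# (network weights or fuzzy rule parameters) is scored by how small it
# keeps the predator-target distances over an episode. The simulation
# stub and episode length are illustrative assumptions.
import math, random

def step_world(genome, world):
    """Placeholder: move predators according to the evolved controller and
    move targets according to their own (fixed) escape policy."""
    for robot in world["predators"]:
        robot[0] += random.uniform(-0.1, 0.1)
        robot[1] += random.uniform(-0.1, 0.1)

def fitness(genome, steps=200):
    world = {"predators": [[0.0, 0.0], [1.0, 0.0]], "targets": [[5.0, 5.0]]}
    total = 0.0
    for _ in range(steps):
        step_world(genome, world)
        total += sum(math.dist(p, t)
                     for p in world["predators"] for t in world["targets"])
    return -total        # smaller cumulative distance -> higher fitness

print(fitness(genome=None))
```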
Artificial Life, 2013
Organisms that live in groups, from microbial symbionts to social insects and schooling fish, exhibit a number of highly efficient cooperative behaviours, often based on role taking and specialisation. These behaviours are relevant not only to the biologist but also to the engineer interested in decentralized collective robotics. We address these phenomena by carrying out experiments with groups of two simulated robots controlled by neural networks whose connection weights are evolved using genetic algorithms. Such algorithms and controllers are well suited to autonomously finding solutions to decentralized collective robotic tasks based on principles of self-organization. The paper first presents a taxonomy of role-taking and specialisation mechanisms related to evolved neural-network controllers. It then introduces two cooperation tasks that can be accomplished through either role taking or specialisation, and uses these tasks to compare four different genetic algorithms, evaluating their capacity to evolve a suitable behavioural strategy depending on the task demands.
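The distinction between role taking and specialisation maps onto how the genetic algorithm builds a team from a genome: a clonal team decodes the same genome on every robot and must differentiate roles at run time, whereas a specialised team splits the genome into per-robot controllers. A minimal illustration of that encoding choice (genome length is arbitrary):

```python
# Encoding difference behind role taking vs specialisation: identical
# controllers whose roles emerge from interaction, versus distinct
# controllers decoded from separate genome halves. Lengths are illustrative.
GENOME_LEN = 40

def clonal_team(genome):
    """Role taking: both robots share one controller; roles emerge online."""
    return [genome, genome]

def specialised_team(genome):
    """Specialisation: each robot decodes its own half of the genome."""
    half = GENOME_LEN // 2
    return [genome[:half], genome[half:]]
```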