Critical Mass in the Emergence of Collective Intelligence: A Parallelized Simulation of Swarms in Noisy Environments

Emergence of Signal-Based Swarming

Swarming behavior is common in biology, from cell colonies to insect swarms and bird flocks. However, the conditions leading to the emergence of such behavior are still an open research question. Since Reynolds' boids, many artificial models have reproduced swarming behavior, focusing on details ranging from obstacle avoidance to the introduction of fixed leaders. This paper presents a minimalistic agent-based model in which individuals develop swarming using only their ability to listen to each other's signals. The model simulates a population of agents looking for a vital resource they cannot directly detect, in a 3D environment. Each agent is controlled by an artificial neural network, whose weights are encoded in a genotype and evolved by an original asynchronous genetic algorithm. The results demonstrate that agents progressively become able to use the information exchanged via signaling to establish temporary leader-follower relations. These relations allow agents to form swarming patterns, emerging as a transient behavior that improves the agents' ability to forage for the resource. Once they have acquired the ability to swarm, the individuals outperform non-swarmers at finding the resource. The population hence reaches a neutral evolutionary space, which leads to a genetic drift of the genotypes. This reductionist approach to signal-based swarming not only contributes to shedding light on the minimal conditions for the evolution of swarming behavior, but also, more generally, exemplifies the effect communication can have on optimal search patterns in collective groups of individuals.
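
The abstract describes the setup only at a high level: each agent is steered by a small neural network whose weights form its genotype, it senses nothing but other agents' signals, and evolution runs asynchronously rather than in generations. The Python sketch below shows one way such a setup could be wired together; the 4-8-4 network, the 1/d² signal attenuation, the energy bookkeeping, and the replacement rule are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' implementation) of signal-driven foraging
# agents with evolved neural controllers and asynchronous replacement.
import numpy as np

N_AGENTS, DIM = 50, 3                   # population size and 3D environment
GENOME = 4 * 8 + 8 * 4                  # weights of a tiny 4-8-4 feed-forward net
rng = np.random.default_rng(0)

class Agent:
    def __init__(self, genome):
        self.genome = genome                        # evolved weight vector
        self.w1 = genome[:32].reshape(4, 8)         # input -> hidden
        self.w2 = genome[32:].reshape(8, 4)         # hidden -> output
        self.pos = rng.uniform(-1, 1, DIM)
        self.vel = rng.uniform(-0.1, 0.1, DIM)
        self.signal = 0.0                           # emitted signal level
        self.energy = 1.0

    def sense(self, agents):
        """The only inputs are signals heard from others (no resource sensor)."""
        heard, loudest = np.zeros(DIM), 0.0
        for other in agents:
            if other is self:
                continue
            d = other.pos - self.pos
            dist = np.linalg.norm(d) + 1e-9
            intensity = other.signal / dist**2      # signals attenuate with distance
            heard += intensity * d / dist           # direction-weighted sum
            loudest = max(loudest, intensity)
        return np.append(heard, loudest)            # 4 network inputs

    def step(self, agents):
        h = np.tanh(self.sense(agents) @ self.w1)
        out = np.tanh(h @ self.w2)
        self.vel = 0.9 * self.vel + 0.1 * out[:3]   # three outputs steer the agent
        self.signal = (out[3] + 1) / 2              # fourth output: signal to emit
        self.pos += self.vel

def asynchronous_replacement(agents, mutation=0.05):
    """Asynchronous evolution: dead agents are replaced one at a time by a
    mutated copy of a currently fit agent, with no generational synchronization."""
    for i, a in enumerate(agents):
        if a.energy <= 0:
            parent = max(agents, key=lambda x: x.energy)
            agents[i] = Agent(parent.genome + mutation * rng.normal(size=GENOME))

# Main loop: the resource position never enters the agents' sensors.
resource = rng.uniform(-1, 1, DIM)
agents = [Agent(rng.normal(size=GENOME)) for _ in range(N_AGENTS)]
for t in range(1000):
    for a in agents:
        a.step(agents)
        a.energy += 0.05 if np.linalg.norm(a.pos - resource) < 0.2 else -0.01
    asynchronous_replacement(agents)
```

Under this reading, signaling is the only channel through which information about the hidden resource can spread, which is what makes the evolved leader-follower relations measurable.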

Neural swarm intelligence?

2023

The synchronous firing of groups of neurons is reminiscent of swarm behavior in the animal world. In both cases, the question arises as to how this kind of coherent behavior occurs and, above all, how goal-oriented behavior arises. Research on swarm intelligence involves two questions: (1) how do swarms form, and (2) why do they exhibit intelligent behavior? Both questions remain unanswered to this day, although there are a number of candidate explanations. According to these, swarms form because individuals keep as equal a distance as possible from their neighbors, adapt to their speed, and strive toward the center of the swarm [1]; physiologically, so-called mirror neurons are held responsible [2]. The intelligent behavior consists in the higher effectiveness of the swarm at completing tasks (foraging, protection from predators). These explanations are derived exclusively from observations, which are then already taken to be the causal relationships. On the one hand, intelligence is attributed to the swarm, but it is then explained by the actions of the individual agents, i.e. the intelligence is pushed back onto the individual, a tautology. No approach from the realm of the living provides an ontology of swarm behavior. Such an ontology would be important, however, for example to explain the synchronous firing of neurons or to develop human-like artificial intelligence (AGI).
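
The distance-keeping, speed-matching, and center-seeking rules referenced above correspond to the classic separation, alignment, and cohesion rules of Reynolds-style flocking models. A minimal 2D sketch of those three rules follows; the weights and neighborhood radius are arbitrary placeholders, not values from [1].

```python
# Illustrative boids-style update: separation, alignment, cohesion.
import numpy as np

def boids_step(pos, vel, radius=1.0, w_sep=0.05, w_align=0.05, w_coh=0.01, dt=1.0):
    """pos, vel: (N, 2) arrays of positions and velocities. Returns updated copies."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = pos - pos[i]
        dist = np.linalg.norm(d, axis=1)
        mask = (dist > 0) & (dist < radius)                        # visible neighbors
        if not mask.any():
            continue
        separation = -(d[mask] / dist[mask, None]**2).sum(axis=0)  # keep distance from close neighbors
        alignment = vel[mask].mean(axis=0) - vel[i]                # adapt to neighbors' speed
        cohesion = pos[mask].mean(axis=0) - pos[i]                 # strive toward the local center
        new_vel[i] += w_sep * separation + w_align * alignment + w_coh * cohesion
    return pos + dt * new_vel, new_vel
```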

Emergence of Swarming Behavior: Foraging Agents Evolve Collective Motion Based on Signaling

Swarming behavior is common in biology, from cell colonies to insect swarms and bird flocks. However, the conditions leading to the emergence of such behavior are still an open research question. Since Reynolds' boids, many artificial models have reproduced swarming behavior, focusing on details ranging from obstacle avoidance to the introduction of fixed leaders. This paper presents a model of evolved artificial agents able to develop swarming using only their ability to listen to each other's signals. The model simulates a population of agents looking for a vital resource they cannot directly detect, in a 3D environment. Instead of a centralized algorithm, each agent is controlled by an artificial neural network, whose weights are encoded in a genotype and adapted by an original asynchronous genetic algorithm. The results demonstrate that agents progressively evolve the ability to use the information exchanged via signaling to establish temporary leader-follower relations. These relations allow agents to form swarming patterns, emerging as a transient behavior that improves the agents' ability to forage for the resource. Once they have acquired the ability to swarm, the individuals outperform non-swarmers at finding the resource. The population hence reaches a neutral evolutionary space, which leads to a genetic drift of the genotypes. This reductionist approach to signal-based swarming not only contributes to shedding light on the minimal conditions for the evolution of swarming behavior, but also, more generally, exemplifies the effect communication can have on optimal search patterns in collective groups of individuals.

Asynchronous Evolution: Emergence of Signal-Based Swarming

Since Reynolds' boids, swarming behavior has often been reproduced in artificial models, but the conditions leading to its emergence are still subject to research, with candidates ranging from obstacle avoidance to virtual leaders. In this paper, we present a multi-agent model in which individuals develop swarming using only their ability to listen to each other's signals. Our model uses an original asynchronous genetic algorithm to evolve a population of agents controlled by artificial neural networks, looking for an invisible resource in a 3D environment. The results demonstrate that agents use the information exchanged between them via signaling to form temporary leader-follower relations that allow them to flock together.

Biological Foundations of Swarm Intelligence

Why should a book on swarm intelligence start with a chapter on biology? Because swarm intelligence is biology. For millions of years, many biological systems have solved complex problems by sharing information with group members. By carefully studying the underlying individual behaviors and combining behavioral observations with mathematical or simulation modeling, we are now able to understand the underlying mechanisms of collective behavior in biological systems.

Unveiling Swarm Intelligence with Network Science - the Metaphor Explained

ArXiv, 2018

Self-organization is a natural phenomenon that emerges in systems with a large number of interacting components. Self-organized systems show robustness, scalability, and flexibility, which are essential properties when handling real-world problems. Swarm intelligence seeks to design nature-inspired algorithms with a high degree of self-organization. Yet we do not know why swarm-based algorithms work well, nor can we compare the different approaches in the literature. The lack of a common framework capable of characterizing these swarm-based algorithms, transcending their particularities, has led to a stream of publications inspired by different aspects of nature without much regard as to whether they are similar to already existing approaches. We address this gap by introducing a network-based framework, the interaction network, to examine computational swarm-based systems through the lens of social dynamics. We discuss the social dimension of several swarm classes a...
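
The abstract names the interaction network but does not define it here. One plausible reading is a directed graph in which an edge u → v is logged whenever agent u's information enters agent v's update; the graph, rather than the trajectories, is then the object of analysis. The sketch below generates such a graph from a generic PSO-like swarm on a ring topology; both the update rule and the logging convention are assumptions for illustration, not the paper's definitions.

```python
# Sketch: log influencer -> influenced edges during a generic swarm run,
# producing a weighted interaction network to be analyzed afterwards.
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
N, D, STEPS = 20, 2, 100
pos = rng.uniform(-5, 5, (N, D))
vel = np.zeros((N, D))
best_pos = pos.copy()
best_val = np.sum(pos**2, axis=1)            # toy objective: minimize ||x||^2

edges = Counter()                             # (influencer, influenced) -> count

for _ in range(STEPS):
    for i in range(N):
        neigh = [(i - 1) % N, i, (i + 1) % N]           # ring topology
        j = min(neigh, key=lambda k: best_val[k])       # best informer in neighborhood
        if j != i:
            edges[(j, i)] += 1                          # j influenced i this step
        vel[i] = (0.7 * vel[i]
                  + 1.5 * rng.random() * (best_pos[i] - pos[i])
                  + 1.5 * rng.random() * (best_pos[j] - pos[i]))
        pos[i] += vel[i]
        val = np.sum(pos[i]**2)
        if val < best_val[i]:
            best_val[i], best_pos[i] = val, pos[i].copy()

# The weighted edge list is the interaction network; e.g. its heaviest edges:
print(sorted(edges.items(), key=lambda e: -e[1])[:5])
```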

The Evolution of the Algorithms for Collective Behavior

Collective behavior is the outcome of a network of local interactions. Here, I consider collective behavior as the result of algorithms that have evolved to operate in response to a particular environment and physiological context. I discuss how algorithms are shaped by the costs of operating under the constraints that the environment imposes, the extent to which the environment is stable, and the distribution, in space and time, of resources. I suggest that a focus on the dynamics of the environment may provide new hypotheses for elucidating the algorithms that produce the collective behavior of cellular systems.

From Swarm Simulations to Swarm Intelligence

Proceedings of the 9th EAI International Conference on Bio-inspired Information and Communications Technologies (formerly BIONETICS), 2016

In self-organizing systems such as the collective intelligent behaviors of animal or insect groups (flocks of birds, colonies of ants, schools of fish, swarms of bees, etc.), there are always emergent patterns which cannot properly be reduced to a linear composition of elementary subsystems; such a reduction is possible only by introducing many repellents and an artificial environment. These emergent patterns are studied in so-called swarm intelligence. In this paper we show that any swarm can be represented as a conventional automaton, such as a Kolmogorov-Uspensky machine, but only with very low accuracy, because the emergent phenomena are lost. Furthermore, we show that unconventional algorithms based on p-adic arithmetic and logic are much more applicable than conventional automata. By using p-adic integers we can code different emergent patterns.
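
The abstract does not say what exactly is encoded as p-adic digits; as a toy illustration only, the sketch below packs a finite sequence of local states (digits 0..p-1, e.g. the states of neighboring cells) into the truncation of a p-adic integer, sum(d_i * p^i), and unpacks it again.

```python
# Toy example of coding a pattern as a (truncated) p-adic integer.
P = 5                                    # prime base

def encode(digits, p=P):
    """Pattern (least-significant digit first) -> truncated p-adic integer."""
    return sum(d * p**i for i, d in enumerate(digits))

def decode(n, length, p=P):
    """Inverse of encode for a fixed pattern length."""
    digits = []
    for _ in range(length):
        n, d = divmod(n, p)
        digits.append(d)
    return digits

pattern = [3, 0, 4, 1, 2]                # e.g. states of five neighboring cells
code = encode(pattern)
assert decode(code, len(pattern)) == pattern
```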

Emergence of Signal-Based Swarming

Artificial Life 14: Proceedings of the Fourteenth International Conference on the Synthesis and Simulation of Living Systems, 2014

Since Reynolds' boids, swarming behavior has often been reproduced in artificial models, but the conditions leading to its emergence are still subject to research, with candidates ranging from obstacle avoidance to virtual leaders. In this paper, we present a multi-agent model in which individuals develop swarming using only their ability to listen to each other's signals. Our model uses an original asynchronous genetic algorithm to evolve a population of agents controlled by artificial neural networks, looking for an invisible resource in a 3D environment. The results demonstrate that agents use the information exchanged between them via signaling to form temporary leader-follower relations that allow them to flock together.

Emergence of a coherent and cohesive swarm based on mutual anticipation

Scientific reports, 2017

Collective behavior emerging out of self-organization is one of the most striking properties of an animal group. Typically, it is hypothesized that each individual in an animal group tends to align its direction of motion with those of its neighbors. Most previous models for collective behavior assume an explicit alignment rule, by which an agent matches its velocity with that of neighbors in a certain neighborhood, to reproduce a collective order pattern through simple interactions. Recent empirical studies, however, suggest that there is no evidence for explicit matching of velocity, and that collective polarization arises from interactions other than those that follow the explicit alignment rule. Here we propose a new lattice-based computational model that does not incorporate the explicit alignment rule but is based instead on mutual anticipation and asynchronous updating. Moreover, we show that this model can realize dense collective motion with high polarity. Furthermore, we focu...
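
The abstract does not spell out the update rule, so the sketch below is only one possible reading of "mutual anticipation with asynchronous updating": each agent extrapolates its neighbors' next lattice cells from their last moves and steps toward that anticipated crowd, agents are updated one at a time in random order, and no velocity-matching term appears anywhere. It is an illustration of the idea, not the authors' model.

```python
# Illustrative lattice model: anticipation-based attraction, asynchronous updates,
# no explicit alignment rule.
import numpy as np

rng = np.random.default_rng(2)
N, L, RADIUS, STEPS = 40, 30, 5, 200
pos = rng.integers(0, L, (N, 2))             # lattice coordinates (toroidal L x L grid)
last_move = np.zeros((N, 2), dtype=int)      # previous displacement of each agent

MOVES = np.array([[1, 0], [-1, 0], [0, 1], [0, -1], [0, 0]])

for _ in range(STEPS):
    for i in rng.permutation(N):                         # asynchronous updating
        diff = (pos - pos[i] + L // 2) % L - L // 2      # displacement on the torus
        near = (np.abs(diff).sum(axis=1) <= RADIUS) & (np.arange(N) != i)
        if not near.any():
            step = MOVES[rng.integers(len(MOVES))]       # wander if alone
        else:
            anticipated = diff[near] + last_move[near]   # neighbors' anticipated next cells
            target = anticipated.mean(axis=0)            # centroid of the anticipated crowd
            step = MOVES[np.argmax(MOVES @ target)]      # lattice move best aligned with it
        pos[i] = (pos[i] + step) % L
        last_move[i] = step
```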