Phil Husbands - Academia.edu
Chapters by Phil Husbands
Artificial Life XII: Proceedings of the Twelfth International Conference on the Synthesis and Simulation of Living Systems, 2010
Does the dynamical regime in which a system engages when it is coping with a situation A change after adaptation to a new situation B? Is homeostatic instability a generic mechanism for flexible switching between dynamical regimes? We develop a model to approach these questions in which a simulated agent that is stable and performing phototaxis has its visual field inverted so that it becomes unstable; instability activates synaptic plasticity, changing the attractor landscape of the agent's simulated nervous system towards a configuration that accommodates stable dynamics under both normal and inverted vision. Our results show that: 1) the dynamical regime in which the agent engages under normal vision changes after adaptation to inverted vision; and 2) homeostatic instability is not necessary for switching between dynamical regimes. Additionally, the dynamical systems analyses show that: 3) qualitatively similar behaviours (phototaxis) can be generated by different dynamics; 4) the agent's simulated nervous system operates in a transient regime, moving towards an attractor that itself continuously moves in phase space; and 5) plasticity moves and reshapes the attractor landscape so as to accommodate a stable dynamical regime for dealing with inverted vision.
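The core loop described in this abstract — instability triggers plasticity, which reshapes dynamics until stability returns — can be illustrated with a deliberately minimal sketch. This is not the paper's model (which uses a simulated nervous system and an attractor-landscape analysis); here a one-neuron "agent" on a line performs phototaxis towards a light at x = 0, inverting vision flips the sign of its sensory input, and a toy homeostatic rule nudges its gain whenever the behaviour is unstable (the distance to the light fails to shrink):

```python
# Minimal hedged sketch, not the paper's model: a one-parameter agent doing
# phototaxis on a line, with homeostatic plasticity on its sensorimotor gain.

def run_agent(steps=1000, invert=False, gain=1.0, lr=0.1):
    """Return (final distance to the light at x=0, final gain)."""
    x = 5.0
    for _ in range(steps):
        sense = -x                       # gradient pointing toward the light
        if invert:
            sense = -sense               # inverted vision flips the input
        new_x = x + 0.1 * gain * sense
        # homeostatic plasticity: if the distance to the light did not shrink,
        # the current regime is unstable, so push the gain toward the
        # opposite sign until stable behaviour is restored
        if abs(new_x) >= abs(x):
            gain -= lr
        x = new_x
    return abs(x), gain

dist_normal, gain_normal = run_agent(invert=False)
dist_inverted, gain_inverted = run_agent(invert=True)
```

After adaptation the agent performs phototaxis under inverted vision, but with a gain of the opposite sign — a crude analogue of the abstract's point (1) that the post-adaptation dynamical regime differs from the original one.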
Papers by Phil Husbands
This paper describes a hitherto overlooked aspect of the information dynamics of embodied agents, which can be thought of as hidden information transfer. This phenomenon is demonstrated in a minimal model of an autonomous agent. While it is well known that information transfer is generally low between closely synchronised systems, here we show how it is possible that such close synchronisation may serve to "carry" signals between physically separated endpoints. This creates seemingly paradoxical situations where transmitted information is not visible at some intermediate point in a network, yet can be seen later after further processing. We discuss how this relates to existing theories relating information transfer to agent behaviour, and the possible explanation by analogy to communication systems.
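The claim that "information transfer is generally low between closely synchronised systems" can be made concrete with transfer entropy, a standard directed information measure (the specific measures and model in the paper may differ; this is only an illustration with invented data). A perfectly synchronised copy of a signal yields zero transfer entropy even though it plainly carries the signal, whereas a one-step delayed coupling yields high transfer entropy:

```python
# Hedged illustration: transfer entropy TE(X -> Y) with history length 1,
# computed from plug-in probability estimates over binary sequences.
from collections import Counter
from math import log2
import random

def transfer_entropy(x, y):
    """TE(X -> Y) in bits: how much x's past improves prediction of y."""
    n = len(y) - 1
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))      # (y_{t+1}, y_t, x_t)
    pairs_src = Counter(zip(y[:-1], x[:-1]))           # (y_t, x_t)
    pairs_self = Counter(zip(y[1:], y[:-1]))           # (y_{t+1}, y_t)
    singles = Counter(y[:-1])                          # y_t
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_cond_full = c / pairs_src[(y0, x0)]          # p(y1 | y0, x0)
        p_cond_self = pairs_self[(y1, y0)] / singles[y0]   # p(y1 | y0)
        te += (c / n) * log2(p_cond_full / p_cond_self)
    return te

random.seed(0)
x = [random.randint(0, 1) for _ in range(5000)]
y_sync = list(x)            # perfectly synchronised copy of x
y_delay = [0] + x[:-1]      # driven by x with a one-step delay

te_sync = transfer_entropy(x, y_sync)
te_delay = transfer_entropy(x, y_delay)
```

The synchronised copy "carries" all of x (it is identical to it), yet measured information transfer into it is zero — the kind of seeming paradox the paper builds on.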
The impressive ability of social insects to learn long foraging routes guided by visual information [1] provides proof that robust spatial behaviour can be produced with limited neural resources. As such, social insects have become an important model system for understanding the minimal cognitive requirements for navigation. This is a goal shared by biomimetic engineers and those studying animal cognition using a bottom-up approach to the understanding of natural intelligence. Models of visual navigation that have been successful in replicating place homing are dominated by snapshot-type models, in which a single view of the world memorized at the goal location is compared with the current view in order to drive a search for the goal [5]. Snapshot approaches only allow for navigation in the immediate vicinity of the goal, however, and do not achieve robust route navigation over longer distances.
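The snapshot idea mentioned here — store one view at the goal and descend the difference between the current view and that snapshot — can be sketched in a toy world (the world, view model, and parameters below are invented for illustration; published snapshot models use richer panoramic images):

```python
# Hedged sketch of snapshot homing: the "view" at a position is the vector of
# apparent sizes (1/distance) of three point landmarks; homing greedily
# descends the squared difference to a snapshot stored at the goal.
import math

LANDMARKS = [(10.0, 0.0), (0.0, 8.0), (-7.0, -5.0)]

def view(pos):
    return [1.0 / math.dist(pos, lm) for lm in LANDMARKS]

def view_diff(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b))

def home(start, snapshot, step=0.25, iters=100):
    pos = tuple(start)
    moves = [(step, 0), (-step, 0), (0, step), (0, -step),
             (step, step), (step, -step), (-step, step), (-step, -step),
             (0, 0)]                       # staying put is always an option
    for _ in range(iters):
        pos = min(((pos[0] + dx, pos[1] + dy) for dx, dy in moves),
                  key=lambda p: view_diff(view(p), snapshot))
    return pos

goal = (0.0, 0.0)
end = home((1.5, 1.0), view(goal))
```

As the abstract notes, this only works inside the goal's visual locale: start far enough away and the view difference no longer decreases smoothly towards the goal.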
In this paper, we provide an analysis of orientation flights in bumblebees, employing a novel technique based on simultaneous localisation and mapping (SLAM), a probabilistic approach from autonomous robotics. We use SLAM to determine what bumblebees might learn about the locations of objects in the world through the arcing behaviours that are typical of these flights. Our results indicate that while the bees are clearly influenced by the presence of a conspicuous landmark, there is little evidence that they structure their flights to specifically learn about the position of the landmark.
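The SLAM idea — jointly estimating self-position and landmark positions from noisy motion and noisy observations — can be reduced to its smallest possible instance for intuition. The toy below is not the paper's method (which applies a full probabilistic SLAM to bee flight paths); it is a 1-D robot and a single landmark tracked by a two-state Kalman filter, with all numbers invented:

```python
# Hedged toy SLAM: a robot on a line makes noisy moves and takes noisy range
# readings of one landmark; a 2-state Kalman filter jointly estimates
# [robot position, landmark position].
import random

random.seed(2)
TRUE_LANDMARK = 12.0
Q, R = 0.002, 0.25                   # motion / measurement noise variances

x = [0.0, 0.0]                       # estimate: [robot, landmark]
P = [[1e-6, 0.0], [0.0, 1e6]]        # robot well known, landmark unknown
robot = 0.0

for _ in range(50):
    u = 0.2                          # commanded move
    robot += u + random.gauss(0, Q ** 0.5)       # true (noisy) motion
    x[0] += u                                    # predict
    P[0][0] += Q
    z = (TRUE_LANDMARK - robot) + random.gauss(0, R ** 0.5)  # range reading
    # update with measurement model z = landmark - robot, i.e. H = [-1, 1]
    y = z - (x[1] - x[0])
    S = P[0][0] - P[0][1] - P[1][0] + P[1][1] + R            # H P H^T + R
    K0 = (-P[0][0] + P[0][1]) / S                            # Kalman gain
    K1 = (-P[1][0] + P[1][1]) / S
    x[0] += K0 * y
    x[1] += K1 * y
    p00, p01, p10, p11 = P[0][0], P[0][1], P[1][0], P[1][1]
    P[0][0] = (1 + K0) * p00 - K0 * p10          # P = (I - K H) P
    P[0][1] = (1 + K0) * p01 - K0 * p11
    P[1][0] = K1 * p00 + (1 - K1) * p10
    P[1][1] = K1 * p01 + (1 - K1) * p11
```

The point relevant to the abstract is that the filter's landmark covariance quantifies "what the agent might learn about the locations of objects" from a given movement pattern — the same question the authors ask of bee flight arcs.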
Adaptive Behavior, 2007
Insects are able to navigate reliably between food and nest using only visual information. This behavior has inspired many models of visual landmark guidance, some of which have been tested on autonomous robots. The majority of these models work by comparing the agent's current view with a view of the world stored when the agent was at the goal. The region from which agents can successfully reach home is therefore limited to the goal's visual locale, that is, the area around the goal where the visual scene is not radically different from that at the goal position. Ants are known to navigate over large distances using visually guided routes consisting of a series of visual memories. Taking inspiration from such route navigation, we propose a framework for linking together local navigation methods. We implement this framework on a robotic platform and test it in a series of environments in which local navigation methods fail. Finally, we show that the framework is robust to environments of varying complexity.
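The chaining framework described here can be sketched abstractly (the structure below is invented for illustration; the paper's implementation runs visual homing on a real robot): a route is a sequence of waypoints, each owned by a local method that is only reliable inside its own locale, and control hands over to the next waypoint as soon as the current locale is reached:

```python
# Hedged sketch of linking local navigation methods along a route.
import math

def local_home_step(pos, goal, step=0.3):
    """Stand-in for any local visual homing method: one step toward goal."""
    d = math.dist(pos, goal)
    if d <= step:
        return goal
    return (pos[0] + step * (goal[0] - pos[0]) / d,
            pos[1] + step * (goal[1] - pos[1]) / d)

def follow_route(start, waypoints, locale_radius=0.5, max_steps=1000):
    pos = tuple(start)
    for wp in waypoints:
        for _ in range(max_steps):
            if math.dist(pos, wp) <= locale_radius:
                break            # inside this waypoint's locale: hand over
            pos = local_home_step(pos, wp)
    return pos

route = [(2.0, 1.0), (4.0, -1.0), (6.0, 0.0)]
end = follow_route((0.0, 0.0), route)
```

No single local method ever has to work over the whole route — each only needs to be competent within its own waypoint's visual locale, which is exactly the limitation of snapshot-type homing the framework is designed around.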
Advances in Artificial Life, ECAL 2013, 2013
This abstract summarises a model of route navigation inspired by the behaviour of ants, presented fully in . The ant's embodiment, coupled with an innate scanning behaviour, means that robust route navigation can be achieved by a parsimonious, biologically plausible algorithm.
It is known that ants learn long visually-guided routes through complex terrain. However, the mechanisms by which visual information is first learnt and then used to control a route direction are not well understood. In this paper we investigate whether a simple approach, involving scanning the environment and moving in the direction that appears most familiar, can provide a model of visually guided route learning in ants. The specific embodiment of an ant's visual system means that movement and viewing direction are tightly coupled: a familiar view specifies a familiar direction of viewing, and thus a familiar movement to make. We show the feasibility of our approach as a model of ant-like route acquisition by learning non-trivial routes through a simulated environment, firstly using the complete set of views experienced during learning and secondly using an approximation to the distribution of these views.
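The scan-and-go strategy in this abstract can be sketched end to end in a toy world (landmarks, view model, and all parameters are invented; the paper uses a richer simulated visual environment). Views experienced while walking a training route are stored; at recall, the agent scans a range of headings, scores each heading by how familiar the resulting view is (distance to the closest stored view), and steps in the most familiar direction:

```python
# Hedged sketch: familiarity-based route recall with a scanning behaviour.
import math

LANDMARKS = [(5.0, 5.0), (5.0, -6.0), (12.0, 2.0), (-3.0, -4.0)]
N_PIX, SIGMA = 36, 0.3      # panorama resolution and angular blur (radians)

def wrap(a):
    return (a + math.pi) % (2 * math.pi) - math.pi

def view(pos, heading):
    """Panorama: landmark 'brightness' (1/distance) splatted by relative bearing."""
    img = [0.0] * N_PIX
    for lm in LANDMARKS:
        bearing = wrap(math.atan2(lm[1] - pos[1], lm[0] - pos[0]) - heading)
        dist = math.dist(pos, lm)
        for j in range(N_PIX):
            centre = -math.pi + (j + 0.5) * 2 * math.pi / N_PIX
            img[j] += math.exp(-(wrap(bearing - centre) / SIGMA) ** 2) / dist
    return img

def unfamiliarity(v, memory):
    return min(sum((a - b) ** 2 for a, b in zip(v, m)) for m in memory)

# training: walk east along y = 0, storing the views experienced en route
memory = [view((0.5 * k, 0.0), 0.0) for k in range(21)]

# recall: start slightly off the route; scan headings, step in the best one
pos, step = (0.0, 0.8), 0.5
for _ in range(16):
    best = min((math.radians(h) for h in range(-80, 81, 20)),
               key=lambda h: unfamiliarity(view(pos, h), memory))
    pos = (pos[0] + step * math.cos(best), pos[1] + step * math.sin(best))
```

Because movement and viewing direction are coupled, finding the rotation that makes the current panorama most familiar simultaneously recovers the direction to move — the key point of the model.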
cs.bham.ac.uk
A simple approach to route following is to scan the environment and move in the direction that appears most familiar. In this paper we investigate whether an approach such as this could provide a model of visually guided route learning in ants. As a proxy for familiarity we use the learning algorithm AdaBoost [6] with simple Haar-like features to classify views as either part of a learned route or not. We show the feasibility of our approach as a model of ant-like route acquisition by learning a non-trivial route through a real-world environment using a large gantry robot equipped with a panoramic camera.
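The familiarity proxy described here — AdaBoost over simple Haar-like features, classifying views as on-route or off-route — can be sketched in miniature. The data, features, and parameters below are invented stand-ins (the paper uses panoramic camera images); the boosting algorithm itself is the standard one, implemented with decision stumps:

```python
# Hedged sketch: AdaBoost with decision stumps on simple Haar-like features
# (differences of sums over regions of a 1-D "view"), labelling views as
# on-route (+1) or off-route (-1).
import math
import random

random.seed(3)

def haar_features(view):
    n = len(view)
    feats = [sum(view[:n // 2]) - sum(view[n // 2:])]        # two-rectangle
    feats += [view[i] - view[i + 1] for i in range(n - 1)]   # adjacent pairs
    return feats

ROUTE_PATTERN = [0.9, 0.1, 0.8, 0.2, 0.7, 0.3, 0.6, 0.4]     # invented
def route_view():
    return [p + random.gauss(0, 0.05) for p in ROUTE_PATTERN]
def off_route_view():
    return [random.random() for _ in ROUTE_PATTERN]

data = [(haar_features(route_view()), +1) for _ in range(40)] + \
       [(haar_features(off_route_view()), -1) for _ in range(40)]

def best_stump(data, w):
    best = None
    for j in range(len(data[0][0])):
        vals = sorted({x[j] for x, _ in data})
        for t in [(a + b) / 2 for a, b in zip(vals, vals[1:])]:
            for s in (+1, -1):
                err = sum(wi for (x, y), wi in zip(data, w)
                          if s * (1 if x[j] > t else -1) != y)
                if best is None or err < best[0]:
                    best = (err, j, t, s)
    return best

def adaboost(data, rounds=10):
    w = [1 / len(data)] * len(data)
    model = []
    for _ in range(rounds):
        err, j, t, s = best_stump(data, w)
        err = min(max(err, 1e-9), 1 - 1e-9)
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((alpha, j, t, s))
        # reweight: boost the examples the stump got wrong
        w = [wi * math.exp(-alpha * y * s * (1 if x[j] > t else -1))
             for (x, y), wi in zip(data, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return model

def predict(model, x):
    score = sum(a * s * (1 if x[j] > t else -1) for a, j, t, s in model)
    return 1 if score >= 0 else -1

model = adaboost(data)
accuracy = sum(predict(model, x) == y for x, y in data) / len(data)
```

The ensemble's weighted vote plays the role of a familiarity score: views the classifier accepts are treated as route views, which is all the scanning strategy needs.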
Lincoln Smith, Andrew Philippides, Paul Graham, and Phil Husbands. Biological Cybernetics 95(5), 413–430 (2006). Lambrinos, D., Möller, R., Pfeifer, R., Wehner, R., Labhart, T.: A mobile robot employing insect strategies for navigation.
In this paper we propose a model of visually guided route navigation in ants that captures the known properties of real behaviour whilst retaining mechanistic simplicity and thus biological plausibility. For an ant, the coupling of movement and viewing direction means that a familiar view specifies a familiar direction of movement. Since the views experienced along a habitual route will be more familiar, route navigation can be re-cast as a search for familiar views. This search can be performed with a simple scanning routine, a behaviour that ants have been observed to perform. We test this proposed route navigation strategy in simulation, by learning a series of routes through visually cluttered environments consisting of objects that are only distinguishable as silhouettes against the sky. In the first instance we determine view familiarity by exhaustive comparison with the set of views experienced during training. In further experiments we train an artificial neural network to perform familiarity discrimination using the training views. Our results indicate not only that the approach is successful, but also that the routes learnt show many of the characteristics of the routes of desert ants. As such, we believe the model represents the only detailed and complete model of insect route guidance to date. What is more, the model provides a general demonstration that visually guided routes can be produced with parsimonious mechanisms that do not specify when or what to learn, nor separate routes into sequences of waypoints.
To behave in a robust and adaptive way, animals must extract task-relevant sensory information efficiently. One way to understand how they achieve this is to explore regularities within the information animals perceive during natural behavior. In this chapter, we describe how we have used artificial neural networks (ANNs) to explore efficiencies in vision and memory that might underpin visually guided route navigation in complex worlds. Specifically, we use three types of neural network to learn the regularities within a series of views encountered during a single route traversal (the training route), in such a way that the networks output the familiarity of novel views presented to them. The problem of navigation is then reframed in terms of a search for familiar views, that is, views similar to those associated with the route. This approach has two major benefits. First, the ANN provides a compact holistic representation of the data and is thus an efficient way to encode a large set of views. Second, as we do not store the training views, we are not limited in the number of training views we use and the agent does not need to decide which views to learn.
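The key property claimed here — a network that encodes the regularities of the training views compactly, without storing the views themselves, and then outputs a familiarity score for novel views — can be demonstrated with the simplest possible network. The sketch below is not one of the chapter's three networks; it is a single linear neuron trained with Oja's Hebbian rule, whose reconstruction error serves as an unfamiliarity score (all data and parameters invented):

```python
# Hedged sketch: a one-neuron Hebbian familiarity detector. The weight vector
# learns the dominant direction of the training views; familiarity of a new
# view is read out as (negative) reconstruction error from that direction.
import random

random.seed(4)
N = 8
PATTERN = [0.5, 0.1, 0.4, 0.2, 0.5, 0.3, 0.4, 0.1]   # invented route regularity

def training_view():
    scale = 1.0 + random.gauss(0, 0.2)                # views vary along route
    return [scale * p + random.gauss(0, 0.02) for p in PATTERN]

w = [random.gauss(0, 0.1) for _ in range(N)]
for _ in range(1000):
    x = training_view()
    y = sum(wi * xi for wi, xi in zip(w, x))
    # Oja's rule: Hebbian growth with implicit weight normalisation
    w = [wi + 0.05 * y * (xi - y * wi) for wi, xi in zip(w, x)]

def unfamiliarity(v):
    y = sum(wi * vi for wi, vi in zip(w, v))
    return sum((vi - y * wi) ** 2 for wi, vi in zip(w, v))

route_err = sum(unfamiliarity(training_view()) for _ in range(50)) / 50
novel_err = sum(unfamiliarity([random.random() for _ in range(N)])
                for _ in range(50)) / 50
```

After training, the weight vector is the only thing kept — the training views are discarded, yet fresh route-like views score as far more familiar than arbitrary ones, which is the property the chapter exploits.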
Lecture Notes in Computer Science, 2007
robotics.estec.esa.int
We are working on the ESA Bionics and Space System Design contract AO/1-4469/03/NL/SFe. The project is to review the application of biomimetics to space missions. Our particular focus is on robotic activities, with an emphasis on planetary exploration. At the time of ASTRA ...
Kybernetes, 2011
The authors are particularly grateful to the surviving members of the Ratio Club, Horace Barlow, John Westcott, and Philip Woodward, who generously participated in the research for this paper, and to the late Jack Good, Harold Shipton, and Tommy Gold, all of whom we ...
Seal, 1998
This paper describes investigations into using evolutionary search for quantitative spectroscopy. Given the spectrum (intensity versus frequency) of a sample of material of interest, we would like to be able to infer the make-up of the material in terms of percentages by mass of its constituent compounds. The problem is usually tackled using regression methods. This approach can have various ...
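The inference task described here can be sketched under a simplifying assumption the abstract does not state: that the sample's spectrum is a linear combination of known pure-compound spectra. Under that assumption (and with invented spectra), a minimal evolutionary search — a (1+1) evolution strategy mutating a vector of mass fractions on the simplex — recovers the composition:

```python
# Hedged sketch: evolutionary search for mixture composition, assuming a
# linear mixing model over invented pure-compound spectra (5 channels each).
import random

random.seed(5)
COMPONENTS = [[1.0, 0.0, 0.0, 2.0, 1.0],
              [0.0, 1.0, 0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0, 1.0, 1.0]]

def mix(frac):
    return [sum(f * c[i] for f, c in zip(frac, COMPONENTS)) for i in range(5)]

def normalise(v):
    """Project onto the simplex: non-negative fractions summing to one."""
    v = [max(x, 0.0) for x in v]
    s = sum(v) or 1.0
    return [x / s for x in v]

def error(frac, observed):
    return sum((a - b) ** 2 for a, b in zip(mix(frac), observed))

observed = mix([0.5, 0.3, 0.2])          # spectrum of the "unknown" sample

best = normalise([random.random() for _ in range(3)])
for gen in range(600):
    sigma = 0.1 * 0.995 ** gen           # shrinking mutation size
    child = normalise([f + random.gauss(0, sigma) for f in best])
    if error(child, observed) <= error(best, observed):
        best = child
```

Unlike the regression methods the abstract mentions, the evolutionary search needs only a forward model (fractions to spectrum) and a fitness function, which makes it easy to swap in non-linear mixing models.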