Enda Howley | University of Galway

Papers by Enda Howley

The influence of random interactions and decision heuristics on norm evolution in social networks

In this paper we explore the effect that random social interactions have on the emergence and evolution of social norms in a simulated population of agents. In our model agents observe the behaviour of others and update their norms based on these observations. An agent's norm is influenced both by their own fixed social network and by a second random network composed of a subset of the remaining population. Random interactions are based on a weighted selection algorithm that uses an individual's path distance on the network to determine their chance of meeting a stranger. This means that friends-of-friends are more likely to randomly interact with one another than agents with a higher degree of separation. We then contrast the case where agents make rational, highest-utility decisions about which norm to adopt with one where they use a Markov Decision Process that associates a weight with the best choice. Finally we examine the effect that these random interactions have on the evolution of a more complex social norm as it propagates throughout the population. We discover that increasing the frequency and weighting of random interactions results in higher levels of norm convergence, reached more quickly, when agents have the choice between two competing alternatives. This can be attributed to more information passing through the population, thereby allowing for quicker convergence. When the norm is allowed to evolve we observe both global consensus formation and group splintering, depending on the cognitive agent model used.
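The path-distance weighting described above can be sketched as follows. This is a minimal illustration only: the inverse-distance form and the `alpha` decay parameter are assumptions for the sketch, not the paper's exact weighting scheme.

```python
import random

def path_weighted_partner(agent, distances, alpha=1.0):
    """Pick a random interaction partner for `agent`, weighting each
    candidate by the inverse of its shortest-path distance so that
    friends-of-friends are chosen more often than distant strangers.
    `distances` maps candidate -> hop count from `agent` on the fixed
    social network; `alpha` controls how quickly the chance decays."""
    candidates = [c for c in distances if c != agent]
    weights = [1.0 / (distances[c] ** alpha) for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]
```

With `alpha=1.0`, a friend-of-a-friend at distance 2 is half as likely to be met as a direct neighbour at distance 1; raising `alpha` localises random interactions further.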

A Learning Architecture for Scheduling Workflow Applications in the Cloud

The scheduling of workflow applications involves the mapping of individual workflow tasks to computational resources, based on a range of functional and non-functional quality-of-service requirements. Workflow applications have extensive computational requirements and often involve the processing of significant amounts of data. Furthermore, dependencies that exist amongst tasks require that schedules be generated strictly in accordance with defined precedence constraints. The emergence of cloud computing has introduced a utility-type market model, where computational resources of varying capacities can be procured on demand, in a pay-per-use fashion. In general, the two most important objectives of workflow schedulers are the minimisation of both cost and makespan. As well as computational costs incurred from processing individual tasks, workflow schedulers must also plan for data transmission costs where potentially large amounts of data must be transferred between compute and storage sites. This paper proposes a novel cloud workflow scheduling approach which employs a Markov Decision Process to optimally guide the workflow execution process depending on environmental state. In addition, the system employs a genetic algorithm to evolve workflow schedules. The overall architecture is presented, and initial results indicate the potential of this approach for developing viable workflow schedules on the Cloud.
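The genetic-algorithm component can be sketched in miniature: a chromosome assigns each task to a resource, and a user-supplied fitness function (e.g. a weighted sum of cost and makespan) is minimised. The operators and parameters below are generic GA choices, not the paper's specific design, and precedence constraints are omitted for brevity.

```python
import random

def evolve_schedule(tasks, resources, fitness, generations=60,
                    pop_size=20, mutation_rate=0.1):
    """Minimal GA sketch for workflow scheduling: a chromosome maps each
    task (by index) to a resource, and lower `fitness` is better."""
    pop = [[random.choice(resources) for _ in tasks] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]        # keep the better half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)  # parents from survivors
            cut = random.randrange(1, len(tasks))
            child = a[:cut] + b[cut:]           # one-point crossover
            if random.random() < mutation_rate:
                child[random.randrange(len(tasks))] = random.choice(resources)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)
```

A real scheduler would replace `fitness` with a simulation of the workflow under the candidate mapping, accounting for precedence constraints and data transfer costs.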

The Effects of Payoff Preferences on Agent Tolerance

An objective of multi-agent systems is to build robust intelligent systems capable of existing in complex environments. These environments are often open, noisy and subject to rapid, unpredictable changes. This paper explores how agents can bias their interactions and choices in these complex environments. Existing research has investigated how agents can bias their interactions based on factors such as similarity, trust or reputation. Unfortunately, much of this research has ignored how agents are influenced by their preferences for certain game payoffs. This paper shows that individual payoff preferences have a significant effect on the behaviours that emerge within an agent environment. We argue that agents must not only determine with whom to interact, but also the levels of benefit or risk these interactions should represent. This paper presents a series of game-theoretic simulations examining the effects of agent payoff preferences within an evolutionary setting. Our experiments show that these factors promote tolerance throughout the population. We provide an experimental benchmark using an almost identical game environment where payoffs are not considered by agents. Furthermore, we also present simulations involving noise, thereby demonstrating the ability of these more tolerant agents to cope with uncertainty in their environment.

The Effects of Evolved Sociability in a Commons Dilemma

This paper explores the evolution of strategies in an n-player dilemma game. These n-player dilemmas provide a formal representation of many real-world social dilemmas, including littering, voting and the sharing of common resources such as computer processing time. This paper explores the evolution of altruism using an n-player dilemma, and our results show the importance of sociability in these games. We propose a novel tag-mediated mechanism to allow for n-player interactions. This paper provides an examination of the interaction dynamics that occur in these n-player games when sociability is an evolved trait. Our results show how the agent population changes and evolves rapidly in response to the strategies of its peers.

Co-evolutionary Analysis: A Policy Exploration Method for System Dynamics Models

In system dynamics (SD), complex nonlinear systems can generate a wide range of possible behaviours that frequently require search and optimisation algorithms in order to explore optimal policies. Within the SD literature, the conventional approach to optimisation is the formulation of a single objective function with a targeted parameter list, and the entire model is simulated repeatedly in order to arrive at optimum values. However, many sector-based SD models contain heuristics of 'intended rationality', and a desired outcome is for modellers to be able to explore the policy implications of locally optimal behaviours. This can now be achieved through a method known as coevolution, which allows modellers to divide an unsolved problem into constituent parts, where each part can be solved with respect to its own fitness function. In this paper, we specify a solution for evolving locally rational strategies across a multi-sector SD structure. Using the beer distribution game (BDG) as an illustration, we demonstrate the utility of this approach in terms of the impact of two different order management strategies on the policy space of the BDG.

A search algorithm to identify the independent feedback loop set

System dynamics focuses on how feedback structures drive system behaviour. An established feedback loop analysis method is eigenvalue elasticity analysis (EEA), which analyses a complete set of independent feedback loops in a given system. A widely accepted loop selection method is the shortest independent loop set (SILS) algorithm. It is utilised in EEA to compute the loop elasticities that identify the dominant loops. However, this paper finds that in some scenarios SILS can only identify part of the complete independent loop set (ILS). In this case, SILS is no longer suitable for EEA, because it produces incorrect loop elasticities. An agent-based goal diffusion model is then produced to demonstrate this specific scenario. Subsequently, we specify a more robust algorithm using depth-first search to identify the complete set of independent loops. Finally, a summary is presented that suggests a potential area for extending EEA applications.
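The depth-first-search idea can be illustrated with a small cycle enumerator. This is only a sketch of the underlying loop search, assuming integer node labels; the paper's algorithm additionally selects an independent subset of the cycles found.

```python
def find_cycles(graph):
    """Depth-first enumeration of elementary cycles in a directed graph
    given as {node: [successors]} with orderable (e.g. integer) labels."""
    cycles = []

    def dfs(start, node, path, visited):
        for nxt in graph.get(node, []):
            if nxt == start:
                cycles.append(path[:])          # closed a cycle back to start
            elif nxt not in visited and nxt > start:
                # `nxt > start` reports each cycle once, rooted at its
                # smallest node, instead of once per member
                dfs(start, nxt, path + [nxt], visited | {nxt})

    for start in sorted(graph):
        dfs(start, start, [start], {start})
    return cycles
```

On the two-loop graph 1→2→1 and 1→2→3→1, this returns the cycles `[1, 2]` and `[1, 2, 3]`; an independent loop set would then be chosen from this complete enumeration.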

Particle Swarm Optimisation with Enhanced Memory Particles

Particle swarm optimisation (PSO) is a general-purpose optimisation algorithm in which a population of particles is attracted to their past success and the success of other particles. This paper introduces a new variant of the PSO algorithm, PSO with Enhanced Memory Particles (PSO EMP), in which the cognitive influence is enhanced by having particles remember multiple previous successes. The additional positions introduce diversity which aids exploration. Balancing the need for exploitation with this additional diversity is achieved through the use of a small memory and by using roulette selection to select a single position from memory when calculating particles' velocities. The research shows that PSO EMP performs better than the standard PSO in most cases and does not perform significantly worse in any case.
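The roulette-selection step over a particle's memory can be sketched as follows. The inverse-fitness weighting (for a minimisation problem) is an assumption for illustration; the paper's exact selection weights are not reproduced here.

```python
import random

def select_memory_position(memory):
    """Roulette-wheel selection over a particle's memory of past successes.
    `memory` is a list of (position, fitness) pairs with lower fitness
    better, so each entry's wheel slice is proportional to the inverse of
    its fitness. The chosen position replaces the single personal best in
    the cognitive term of the velocity update."""
    weights = [1.0 / (1e-9 + f) for _, f in memory]
    total = sum(weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for (pos, _), w in zip(memory, weights):
        acc += w
        if acc >= r:
            return pos
    return memory[-1][0]  # guard against floating-point rounding
```

Keeping the memory small, as the abstract notes, limits how much this extra diversity dilutes exploitation.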

Tag-Based Cooperation in N-Player Dilemmas

This paper studies the emergence of cooperation in the N-Player Prisoner's Dilemma (NPD) using a tag-mediated interaction model. Tags have been widely used to bias agents' pairwise interactions, which facilitates the emergence of cooperation. This paper identifies some of the key parameters that influence the emergence of cooperation in an evolutionary setting. The aim of this paper is to demonstrate the most vital factors that are commonly ignored in many existing NPD studies.

Particle Swarm Optimisation with Gradually Increasing Directed Neighbourhoods

Particle swarm optimisation (PSO) is an intelligent random search algorithm whose success hinges on effectively balancing exploration of the solution space in the early stages with exploitation of the solution space in the late stages. This paper presents a new dynamic topology called "gradually increasing directed neighbourhoods (GIDN)" that provides an effective way to balance exploration and exploitation throughout the iteration process. In our model, each particle begins with a small number of connections, and the many small isolated swarms that result improve the exploration ability. At each iteration, we gradually add a number of new connections between particles, which gradually improves the exploitation ability. Furthermore, these connections among particles are created randomly and have directions. We formalise this topology using random graph representations. Experiments are conducted on 31 benchmark test functions to validate the proposed topology. The results show that PSO with GIDN performs much better than a number of state-of-the-art algorithms on almost all of the 31 functions.
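The topology growth can be sketched as a function of the iteration count. The linear ramp from one informant per particle towards a fully connected swarm is an illustrative assumption; the paper's actual growth schedule and random-graph formalisation are not reproduced here.

```python
import random

def grow_neighbourhoods(n_particles, iteration, max_iter, rng=random):
    """Sketch of a GIDN-style topology: the number of directed informants
    per particle grows with the iteration count, from near-isolation early
    (favouring exploration) towards a densely connected swarm late
    (favouring exploitation)."""
    # linear ramp: 1 informant at iteration 0, all others at max_iter
    k = 1 + int((n_particles - 1) * iteration / max_iter)
    topology = {}
    for i in range(n_particles):
        others = [j for j in range(n_particles) if j != i]
        # directed edges: particle i draws information from these informants
        topology[i] = rng.sample(others, min(k, len(others)))
    return topology
```

Because the edges are directed, particle `i` may observe `j` without `j` observing `i`, which is what distinguishes this scheme from undirected ring or star topologies.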

Applying reinforcement learning towards automating resource allocation and application scalability in the cloud

Public Infrastructure as a Service (IaaS) clouds such as Amazon, GoGrid and Rackspace deliver computational resources by means of virtualisation technologies. These technologies allow multiple independent virtual machines to reside in apparent isolation on the same physical host. Dynamically scaling applications running on IaaS clouds can lead to varied and unpredictable results because of the performance interference effects associated with co-located virtual machines. Determining appropriate scaling policies in a dynamic non-stationary environment is non-trivial. One principal advantage exhibited by IaaS clouds over their traditional hosting counterparts is the ability to scale resources on demand. However, a problem arises concerning resource allocation: which resources should be added and removed when the underlying performance of the resource is in a constant state of flux? Decision-theoretic frameworks such as Markov Decision Processes are particularly suited to decision making under uncertainty. By applying a temporal-difference reinforcement learning algorithm known as Q-learning, optimal scaling policies can be determined. Additionally, reinforcement learning techniques typically suffer from the curse of dimensionality, where the state space grows exponentially with each additional state variable. To address this challenge, we also present a novel parallel Q-learning approach aimed at reducing the time taken to determine optimal policies whilst learning online.
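The core Q-learning update behind such a scaling policy can be sketched directly. The action set (add/remove/keep a VM) and the state labels are illustrative, not the paper's state model.

```python
def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One temporal-difference Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    `Q` is a dict mapping (state, action) pairs to value estimates;
    unseen pairs default to 0.0."""
    actions = ("add_vm", "remove_vm", "no_op")  # illustrative scaling actions
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q[(state, action)]
```

In an autoscaler, `reward` would encode observed performance against cost (e.g. penalising SLA violations and idle VMs), so repeated updates steer the policy despite the interference-induced noise in per-VM performance.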

A parallel framework for Bayesian reinforcement learning

Solving a finite Markov decision process using techniques from dynamic programming, such as value or policy iteration, requires a complete model of the environmental dynamics. The distributions of rewards, transition probabilities, states and actions all need to be fully observable, discrete and complete. For many problem domains, a complete model containing a full representation of the environmental dynamics may not be readily available. Bayesian reinforcement learning (RL) is a technique devised to make better use of the information observed through learning than simply computing Q-functions. However, this approach can often require extensive experience in order to build up an accurate representation of the true values. To address this issue, this paper proposes a method for parallelising a Bayesian RL technique aimed at reducing the time it takes to approximate the missing model. We demonstrate the technique on learning next-state transition probabilities without prior knowledge. The approach is general enough for approximating any probabilistically driven component of the model. The solution involves multiple learning agents learning in parallel on the same task. Agents share probability density estimates amongst each other in an effort to speed up convergence to the true values.
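The sharing step can be sketched with Dirichlet-style transition counts: each agent tallies observed next states for a given (state, action) pair, and pooling the tallies from all parallel agents yields a posterior estimate that converges faster than any single agent's. The exchange protocol itself is not reproduced here; this only shows the merge.

```python
from collections import Counter

def merge_counts(local_counts):
    """Pool next-state transition counts from parallel learning agents for
    one (state, action) pair and normalise them into an estimated
    transition distribution. `local_counts` is a list of per-agent dicts
    mapping next_state -> observation count."""
    merged = Counter()
    for counts in local_counts:
        merged.update(counts)          # Counter addition pools the tallies
    total = sum(merged.values())
    return {s: c / total for s, c in merged.items()}
```

With two agents that each saw 4 transitions, the merged estimate reflects all 8 observations, halving (roughly) the variance relative to either agent alone.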

The emergence of cooperation among agents using simple fixed bias tagging

The principle of cooperation influences our everyday lives. The conflict between individual and collective rationality can be modelled through the use of social dilemmas such as the prisoner's dilemma. Reflecting the reality that real-world autonomous agents are not chosen at random to interact, we acknowledge the role some structuring mechanisms can play in increasing cooperation. This paper examines one simple structuring technique which has been shown to increase cooperation among agents. Tagging mechanisms structure a population into subgroups and, as a result, reflect many aspects which are relevant to the domains of kin selection and trust. We outline some simulations involving a simple tagging system and the main factors which are vital to increasing cooperation.
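The fixed-bias tagging idea can be sketched in a few lines: each agent carries a tag, and a paired agent cooperates with high probability against a same-tag partner and with low probability otherwise. The `bias` parameter and agent representation are illustrative assumptions, not the paper's model.

```python
import random

def play_round(agents, bias=0.9):
    """One interaction under fixed-bias tagging: sample a pair from the
    population and return whether the first agent cooperates. Same-tag
    partners are met with cooperation probability `bias`; different-tag
    partners with probability 1 - bias."""
    a, b = random.sample(agents, 2)
    p_coop = bias if a["tag"] == b["tag"] else 1 - bias
    return random.random() < p_coop
```

Because cooperation concentrates inside same-tag subgroups, defectors gain little from exploiting strangers, which is the mechanism by which tagging raises population-level cooperation.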
