A multi-agent cooperative reinforcement learning model using a hierarchy of consultants, tutors and workers
Related papers
Cooperative reinforcement learning for independent learners
2014
A thesis submitted for the degree of Doctor of Philosophy. Machine learning in multi-agent domains poses several research challenges. One challenge is how to model cooperation between reinforcement learners. Cooperation between independent reinforcement learners is known to accelerate convergence to optimal solutions. In large state-space problems, independent reinforcement learners normally cooperate to accelerate the learning process using decomposition techniques or knowledge-sharing strategies. This thesis presents two techniques for multi-agent reinforcement learning and a comparison study. The first technique is a formal decomposition model and an algorithm for distributed systems. The second technique is a cooperative Q-learning algorithm for multi-goal decomposable systems. The comparison study compares the performance of some of the best-known cooperative Q-learning algorithms for independent learners. Distributed systems are normally organised into two levels: the system and subsystem levels. This thesis presents a formal solution for decomposition of Markov Decision Processes.
Cooperative Q-learning: the knowledge sharing issue
Advanced Robotics, 2001
A group of cooperative and homogeneous Q-learning agents can cooperate to learn faster and gain more knowledge. To do so, each learner agent must be able to evaluate the expertness and intelligence level of the other agents, and to assess the knowledge and information it receives from them. In addition, the learner needs a suitable method to properly combine its own knowledge with what it gains from the other agents according to their relative expertness. In this paper, some expertness-measuring criteria are introduced. Also, a new cooperative learning method called weighted strategy sharing (WSS) is introduced. In WSS, each agent assigns a weight to each teammate's knowledge based on that teammate's expertness and utilizes it accordingly. WSS and the expertness criteria are tested on two simulated hunter-prey and object-pushing systems.
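As a rough illustration of the weighted-strategy-sharing idea, here is a minimal Python sketch in which a learner blends teammates' Q-tables weighted by relative expertness. The weighting scheme and the mixing rate are simplifying assumptions for illustration, not the paper's exact formula.

```python
import numpy as np

def wss_update(q_tables, expertness, learner_idx, mix_rate=0.5):
    """Weighted strategy sharing sketch: the learner blends teammates'
    Q-tables into its own, each weighted by its share of total expertness.
    The mixing rule is a simplified assumption, not the paper's formula."""
    total = sum(expertness)
    if total == 0:
        return q_tables[learner_idx]
    weights = [e / total for e in expertness]            # relative expertness
    blended = sum(w * q for w, q in zip(weights, q_tables))
    # Mix the learner's own table with the expertness-weighted blend.
    return (1 - mix_rate) * q_tables[learner_idx] + mix_rate * blended

# Example: three agents on a 4-state, 2-action task (hypothetical sizes).
qs = [np.random.rand(4, 2) for _ in range(3)]
expertness = [10.0, 2.5, 7.0]            # e.g. accumulated reward per agent
qs[1] = wss_update(qs, expertness, learner_idx=1)
```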
Expertness based cooperative Q-learning
IEEE Transactions on Systems, Man and Cybernetics, Part B (Cybernetics), 2002
By using other agents' experiences and knowledge, a learning agent may learn faster, make fewer mistakes, and create rules for unseen situations. These benefits are gained if the learning agent can extract proper rules from the other agents' knowledge for its own requirements. One possible way to do this is to have the learner assign expertness values (intelligence-level values) to the other agents and use their knowledge accordingly. In this paper, some criteria to measure the expertness of reinforcement learning agents are introduced. Also, a new cooperative learning method, called weighted strategy sharing (WSS), is presented. In this method, each agent measures the expertness of its teammates, assigns a weight to their knowledge, and learns from them accordingly. The presented methods are tested on two hunter-prey systems. We consider the case in which all agents learn from each other and compare them with agents that cooperate only with the more expert ones. Also, the effect of communication noise, as a source of uncertainty, on the cooperative learning method is studied. Moreover, the Q-table of one of the cooperative agents is changed randomly and its effects on the presented methods are examined.
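To make the notion of expertness concrete, the sketch below computes a few simple expertness measures from an agent's reward history. The names and exact set of criteria are illustrative assumptions; the paper defines its own measures (based on, e.g., signed, absolute, and positive/negative reward sums).

```python
def expertness_measures(rewards):
    """Illustrative expertness criteria computed from a reward history.
    The specific set and names here are assumptions, not the paper's."""
    return {
        "normal":   sum(rewards),                      # net reward collected
        "absolute": sum(abs(r) for r in rewards),      # total experience, good or bad
        "positive": sum(r for r in rewards if r > 0),  # successes only
        "negative": -sum(r for r in rewards if r < 0), # magnitude of failures
    }

history = [1.0, -0.2, 0.5, -0.1, 1.0]
print(expertness_measures(history))
```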
Cooperative Multi-Agent Systems Using Distributed Reinforcement Learning Techniques
Procedia Computer Science, 2018
In this paper, the fully cooperative multi-agent system is studied, in which all of the agents share the same common goal. The main difficulty in such systems is the coordination problem: how to ensure that the individual decisions of the agents lead to jointly optimal decisions for the group? First, a multi-agent reinforcement learning algorithm combining traditional Q-learning with observation-based teammate modeling techniques, called TM Q-learning, is presented and evaluated. Several new cooperative action selection strategies are then suggested to improve multi-agent coordination and accelerate learning, especially in the case of unknown and temporarily dynamic environments. The effectiveness of combining TM Q-learning with the new proposals is demonstrated using the hunting game.
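A minimal sketch of the general idea of Q-learning with observation-based teammate modeling is given below: the agent keeps frequency counts of a teammate's observed actions per state and selects its own action against the expected teammate behaviour. The class structure, state/action representation, and update rule are assumptions for illustration; the paper's TM Q-learning may differ in its details.

```python
import random
from collections import defaultdict

class TMQLearner:
    """Sketch of Q-learning plus an empirical teammate model (assumed form)."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)                          # Q[(state, my_action, their_action)]
        self.counts = defaultdict(lambda: defaultdict(int))  # counts[state][their_action]

    def teammate_probs(self, state):
        c = self.counts[state]
        total = sum(c.values())
        if total == 0:  # no observations yet: assume a uniform teammate policy
            return {a: 1.0 / len(self.actions) for a in self.actions}
        return {a: c[a] / total for a in self.actions}

    def value(self, state, my_action):
        # Expected Q-value of my_action under the modelled teammate behaviour.
        probs = self.teammate_probs(state)
        return sum(p * self.q[(state, my_action, b)] for b, p in probs.items())

    def choose(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.value(state, a))

    def update(self, state, my_action, their_action, reward, next_state):
        self.counts[state][their_action] += 1
        best_next = max(self.value(next_state, a) for a in self.actions)
        key = (state, my_action, their_action)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])
```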
Hierarchical multi-agent reinforcement learning
Autonomous Agents and Multi-Agent Systems, 2006
In this paper we investigate the use of hierarchical reinforcement learning to speed up the acquisition of cooperative multi-agent tasks. We extend the MAXQ framework to the multi-agent case. Each agent uses the same MAXQ hierarchy to decompose a task into sub-tasks. Learning is decentralized, with each agent learning three interrelated skills: how to perform sub-tasks, which order to do them in, and how to coordinate with other agents. Coordination skills among agents are learned by using joint actions at the highest level(s) of the hierarchy. The Q nodes at the highest level(s) of the hierarchy are configured to represent the joint task-action space among multiple agents. In this approach, each agent only knows what other agents are doing at the level of sub-tasks, and is unaware of lower-level (primitive) actions. This hierarchical approach allows agents to learn coordination faster by sharing information at the level of sub-tasks, rather than attempting to learn coordination over primitive joint state-action values. We apply this hierarchical multi-agent reinforcement learning algorithm to a complex AGV scheduling task and compare its performance and speed with other learning approaches, including flat multi-agent learning, a single agent using MAXQ, and selfish multiple agents using MAXQ (where each agent acts independently without communicating with the other agents), as well as several well-known AGV heuristics such as "first come first serve", "highest queue first", and "nearest station first". We also compare the trade-offs in learning speed vs. performance of modeling joint action values at multiple levels of the MAXQ hierarchy.
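The sketch below illustrates only the coordination idea: at the highest level of the hierarchy each agent learns Q-values over joint sub-task choices, while lower levels remain single-agent. The flat Q-update used here is a stand-in assumption; the paper uses the MAXQ-specific value decomposition rather than this simplified rule.

```python
import itertools
import random
from collections import defaultdict

class TopLevelCoordinator:
    """Sketch: top-level Q-values over joint sub-task assignments (assumed form)."""

    def __init__(self, subtasks, n_agents, alpha=0.1, gamma=0.95):
        self.joints = list(itertools.product(subtasks, repeat=n_agents))
        self.alpha, self.gamma = alpha, gamma
        self.q = defaultdict(float)             # Q[(state, joint_subtasks)]

    def choose_joint(self, state, epsilon=0.1):
        # Epsilon-greedy over joint sub-task assignments,
        # e.g. which AGV serves which station in the scheduling task.
        if random.random() < epsilon:
            return random.choice(self.joints)
        return max(self.joints, key=lambda j: self.q[(state, j)])

    def update(self, state, joint, reward, next_state):
        best_next = max(self.q[(next_state, j)] for j in self.joints)
        key = (state, joint)
        self.q[key] += self.alpha * (reward + self.gamma * best_next - self.q[key])
```

Because coordination is represented only over sub-tasks, the joint space grows with the number of sub-tasks rather than the number of primitive actions, which is the source of the reported speed-up.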
A reinforcement learning algorithm for building collaboration in multi-agent systems
ArXiv, 2017
This paper presents a proof-of-concept study demonstrating the viability of building collaboration among multiple agents through a standard Q-learning algorithm embedded in particle swarm optimisation. Collaboration is formulated to be achieved among the agents via a form of competition, where the agents are expected to balance their actions in such a way that none of them drifts away from the team and none intervenes in any fellow neighbour's territory. Particles are equipped with the Q-learning algorithm for self-training, learning how to act as members of a swarm and how to produce collaborative/collective behaviours. The produced results support the algorithmic structure, suggesting that substantive collaboration can be built via the proposed learning algorithm.
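A minimal sketch of the idea follows: each particle keeps a small Q-table over coarse states (far from or near the swarm centre, crowded or not) and a few abstract velocity adjustments, rewarded for staying with the swarm without crowding a neighbour. The state and action discretisation, the reward, and the omission of the PSO velocity/position update are all assumptions made for illustration.

```python
import random
import numpy as np

class QParticle:
    """Sketch of a particle with an embedded Q-table (assumed discretisation).
    The PSO position/velocity update itself is omitted for brevity."""

    ACTIONS = ("toward_centre", "hold", "away_from_centre")

    def __init__(self, dim=2, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.pos = np.random.rand(dim)
        self.q = {}                          # Q[(state, action)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def state(self, centre, nearest_dist):
        # Coarse state: (far from swarm centre?, crowding a neighbour?)
        return (float(np.linalg.norm(self.pos - centre)) > 1.0, nearest_dist < 0.1)

    def choose(self, s):
        if random.random() < self.epsilon:
            return random.choice(self.ACTIONS)
        return max(self.ACTIONS, key=lambda a: self.q.get((s, a), 0.0))

    def update(self, s, a, reward, s_next):
        best_next = max(self.q.get((s_next, b), 0.0) for b in self.ACTIONS)
        old = self.q.get((s, a), 0.0)
        self.q[(s, a)] = old + self.alpha * (reward + self.gamma * best_next - old)
```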
Building Collaboration in Multi-agent Systems Using Reinforcement Learning
Lecture Notes in Computer Science, 2018
This paper presents a proof-of-concept study demonstrating the viability of building collaboration among multiple agents through a standard Q-learning algorithm embedded in particle swarm optimisation. Collaboration is formulated to be achieved among the agents via competition, where the agents are expected to balance their actions in such a way that none of them drifts away from the team and none intervenes in any fellow neighbour's territory, either. Particles are equipped with Q-learning for self-training, learning how to act as members of a swarm and how to produce collaborative/collective behaviours. The produced experimental results support the proposed idea, suggesting that substantive collaboration can be built via the proposed learning algorithm.
Distributed Reinforcement Learning in Multi-agent Decision Systems
Lecture Notes in Computer Science, 1998
Decision problems can usually be solved using systems that implement different paradigms. These systems may be integrated into a single distributed system, with the expectation of obtaining a group performance more satisfactory than the individual performances. Such a distributed system is what we call a Multi-Agent Decision System (MADES), a special kind of Multi-Agent System that integrates several heterogeneous autonomous decision systems (agents). A MADES must produce a single solution proposal for the problem instance it faces, despite the fact that its decision making is distributed and every agent produces solution proposals according to its local view and its idiosyncrasy. We present a distributed reinforcement learning algorithm for learning how to combine the decisions the agents make in a distributed way into a single group decision (solution proposal).
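The combination step could look roughly like the sketch below: given each agent's proposed solution, a combiner picks one via a softmax over learned per-agent scores and reinforces the score of the agent whose proposal was followed. This simple bandit-style rule is an assumption standing in for the paper's distributed reinforcement algorithm, shown only to make the "combine proposals into one group decision" idea concrete.

```python
import math
import random

class ProposalCombiner:
    """Sketch of combining heterogeneous agents' proposals (assumed rule)."""

    def __init__(self, agent_ids, alpha=0.1):
        self.scores = {a: 0.0 for a in agent_ids}
        self.alpha = alpha

    def decide(self, proposals):
        # proposals: {agent_id: solution_proposal}
        weights = {a: math.exp(self.scores[a]) for a in proposals}
        r = random.random() * sum(weights.values())
        for agent, w in weights.items():
            r -= w
            if r <= 0:
                break
        return agent, proposals[agent]

    def feedback(self, chosen_agent, reward):
        # Move the chosen agent's score toward the observed reward.
        self.scores[chosen_agent] += self.alpha * (reward - self.scores[chosen_agent])
```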
Q-Decomposition for Reinforcement Learning Agents
The paper explores a very simple agent design method called Q-decomposition, wherein a complex agent is built from simpler subagents. Each subagent has its own reward function and runs its own reinforcement learning process. It supplies to a central arbitrator the Q-values (according to its own reward function) for each possible action. The arbitrator selects an action maximizing the sum of Q-values from all the subagents. This approach has advantages over designs in which subagents recommend actions. It also has the property that if each subagent runs the Sarsa reinforcement learning algorithm to learn its local Q-function, then a globally optimal policy is achieved. (On the other hand, local Q-learning leads to globally suboptimal behavior.) In some cases, this form of agent decomposition allows the local Q-functions to be expressed by much-reduced state and action spaces. These results are illustrated in two domains that require effective coordination of behaviors.
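A minimal sketch of the arbitration step described above: each subagent learns its local Q-function with on-policy Sarsa updates on its own reward, and the arbitrator picks the action whose summed Q-values are largest. The class and function names are illustrative; hyperparameters are assumptions.

```python
from collections import defaultdict

class SarsaSubagent:
    """One subagent: learns Q for its own reward signal with Sarsa updates."""

    def __init__(self, alpha=0.1, gamma=0.95):
        self.q = defaultdict(float)          # Q[(state, action)]
        self.alpha, self.gamma = alpha, gamma

    def update(self, s, a, r, s_next, a_next):
        # On-policy target: evaluate the action the arbitrator actually took next.
        target = r + self.gamma * self.q[(s_next, a_next)]
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])


def arbitrate(subagents, state, actions):
    """Q-decomposition action selection: maximise the sum of local Q-values."""
    return max(actions, key=lambda a: sum(sub.q[(state, a)] for sub in subagents))
```

The on-policy update matters here: each subagent must evaluate the arbitrator's actual policy rather than its own locally greedy one, which is why Sarsa yields a globally optimal policy while local Q-learning does not.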
Learning Cooperative Behaviours in Multiagent Reinforcement Learning
We investigated the coordination among agents in a goal-finding task in a partially observable environment. In our problem formulation, the task was to locate a goal in a 2D space. However, no information related to the goal was given to the agents unless they had formed a swarm. Furthermore, the goal had to be located by a swarm of agents, not a single agent. In this study, cooperative behaviours among agents were learned using our proposed context-dependent multiagent SARSA algorithm (CDM-SARSA). In essence, instead of tracking the actions of all the agents in the Q-table, i.e., $Q(s,\mathbf{a})$, CDM-SARSA tracked only the action $a_i$ of agent $i$ and the context $c$ resulting from the actions of all the agents, i.e., $Q_i(s,a_i,c)$. This approach reduced the size of the state space considerably. Tracking all the agents' actions is impractical since the state space increases exponentially with every new agent added to the system. In our opinion, tracking the context abstracts away unnecessary details, and this approach is a logical solution for the multiagent reinforcement learning task. The proposed approach for learning cooperative behaviours was illustrated using different numbers of agents and different grid sizes. The empirical results confirmed that the proposed CDM-SARSA could learn cooperative behaviours successfully.
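The structural idea of keeping $Q_i(s,a_i,c)$ instead of $Q(s,\mathbf{a})$ can be sketched as below. How the context $c$ is computed from the other agents' actions is left to the caller here; the paper defines its own context abstraction, so this is only an assumed skeleton.

```python
import random
from collections import defaultdict

class CDMSarsaAgent:
    """Sketch of a context-dependent SARSA agent: Q is indexed by the agent's
    own action and a compact context c, not the full joint action (assumed form)."""

    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(float)           # Q[(state, own_action, context)]

    def choose(self, state, context):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a, context)])

    def update(self, s, a, c, reward, s_next, a_next, c_next):
        # On-policy SARSA target using the next chosen action and next context.
        target = reward + self.gamma * self.q[(s_next, a_next, c_next)]
        self.q[(s, a, c)] += self.alpha * (target - self.q[(s, a, c)])
```

The table grows with the number of context values rather than with the product of all agents' action sets, which is where the claimed reduction in state-space size comes from.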