Multiagent Reinforcement Learning with Adaptive State Focus

2005

Abstract

In realistic multiagent systems, learning on the basis of complete state information is not feasible. We introduce adaptive state focus Q-learning, a class of methods derived from Q-learning that start learning with only the state information that is strictly necessary for a single agent to perform the task, and that monitor the convergence of learning. If a lack of convergence is detected, the learner dynamically expands its state space to incorporate more state information (e.g., the states of other agents). Learning is faster and requires fewer resources than if the complete state were considered from the start, while still handling situations where agents interfere with each other in pursuing their goals. We illustrate our approach by instantiating a simple version of such a method and by showing that it outperforms learning with full state information, without being hindered by the deficiencies of learning on the basis of a single agent's state alone.
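To make the idea concrete, below is a minimal, illustrative sketch in Python of a tabular Q-learner that starts with only its own local state, monitors a convergence signal, and switches to the joint state when learning fails to converge. The class and method names, the windowed TD-error convergence test, and the expansion trigger are assumptions made for illustration; the paper's exact formulation may differ.

```python
from collections import defaultdict, deque
import random


class AdaptiveStateFocusQLearner:
    """Sketch of adaptive state focus Q-learning (illustrative, not the
    paper's exact algorithm): begin with the local state only and expand
    to the joint state if learning does not appear to converge."""

    def __init__(self, n_actions, alpha=0.1, gamma=0.95, epsilon=0.1,
                 window=500, td_threshold=0.05):
        self.n_actions = n_actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = defaultdict(lambda: [0.0] * n_actions)  # Q-table keyed by state
        self.use_joint_state = False            # start with the local state only
        self.td_errors = deque(maxlen=window)   # recent |TD errors| for the test
        self.td_threshold = td_threshold        # assumed convergence criterion

    def _key(self, local_state, others_state):
        # Focus on the local state unless expansion has been triggered.
        return (local_state, others_state) if self.use_joint_state else local_state

    def act(self, local_state, others_state):
        # Epsilon-greedy action selection over the current state representation.
        key = self._key(local_state, others_state)
        if random.random() < self.epsilon:
            return random.randrange(self.n_actions)
        values = self.q[key]
        return values.index(max(values))

    def update(self, s_local, s_others, action, reward,
               ns_local, ns_others, done):
        # Standard one-step Q-learning update, tracking the TD error.
        key = self._key(s_local, s_others)
        next_key = self._key(ns_local, ns_others)
        target = reward + (0.0 if done else self.gamma * max(self.q[next_key]))
        td_error = target - self.q[key][action]
        self.q[key][action] += self.alpha * td_error
        self.td_errors.append(abs(td_error))
        self._check_convergence()

    def _check_convergence(self):
        # If the average TD error stays large over a full window of updates,
        # assume the local state is insufficient (other agents interfere)
        # and expand the state space to the joint state. Q-values are
        # discarded here for simplicity; a real method might transfer them.
        if self.use_joint_state or len(self.td_errors) < self.td_errors.maxlen:
            return
        if sum(self.td_errors) / len(self.td_errors) > self.td_threshold:
            self.use_joint_state = True
            self.q.clear()
            self.td_errors.clear()
```

The key design choice in this sketch is that state expansion is driven by a learning-progress signal (here, a windowed average of TD errors) rather than being fixed in advance, so the agent pays the cost of the larger joint state space only when its local state proves insufficient.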
