Structure in Reinforcement Learning: A Survey and Open Problems
Related papers
Structural abstraction experiments in reinforcement learning
AI 2005: Advances in …, 2005
A challenge in applying reinforcement learning to large problems is how to manage the explosive increase in storage and time complexity. This is especially problematic in multi-agent systems, where the state space grows exponentially in the number of agents. Function approximation based on simple supervised learning is unlikely to scale to complex domains on its own, but structural abstraction that exploits system properties and problem representations shows more promise. In this paper, we investigate several classes of known abstractions: 1) symmetry, 2) decomposition into multiple agents, 3) hierarchical decomposition, and 4) sequential execution. We compare memory requirements, learning time, and solution quality empirically in two problem variations. Our results indicate that the most effective solutions come from combinations of structural abstractions, and encourage development of methods for automatic discovery in novel problem formulations.
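The symmetry abstraction listed above can be made concrete with a toy sketch: ground states that are mirror images of one another share a single table entry, cutting memory roughly in half. The 1-D corridor and the reflection map here are illustrative, not taken from the paper.

```python
# Symmetry abstraction on a 7-cell corridor, symmetric about the centre cell 3.
# Mapping each state to a canonical representative lets a learner store one
# value per equivalence class instead of one per ground state.
N = 7  # corridor cells 0..6

def canonical(state):
    # reflect the right half onto the left: state s and N-1-s are symmetric
    return min(state, N - 1 - state)

abstract_states = {canonical(s) for s in range(N)}
# 7 ground states collapse to 4 abstract states: {0, 1, 2, 3}
```

The same idea scales to multi-agent settings, where permuting interchangeable agents yields much larger symmetry groups and correspondingly larger savings.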
DEEP REINFORCEMENT LEARNING: A SURVEY
IAEME PUBLICATION, 2020
Reinforcement learning (RL) is poised to revolutionize the field of AI, and represents a step toward building autonomous systems with a higher-level understanding of the real world. Currently, deep learning (DL) is enabling RL to scale to problems that were previously intractable, such as learning to play video games directly from pixels. Deep reinforcement learning (DRL) algorithms are also being applied to robotics, allowing control policies to be learned directly from camera inputs in the real world. The success of RL stems from its strong mathematical roots in the principles of deep learning, Monte Carlo simulation, function approximation, and artificial intelligence. Topics treated in some detail in this survey are temporal differences, Q-learning, semi-MDPs, and stochastic games. Many recent advances in DRL, e.g. policy gradients and hierarchical RL, are covered, along with references and pointers to example applications. Since no presently available technique works in all situations, this paper proposes guidelines for using prior information about the characteristics of the control problem at hand to decide on a suitable experience replay strategy.
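The Q-learning topic this survey treats can be illustrated with the textbook tabular form of the temporal-difference update. The 4-cell corridor environment below (goal at cell 3) is a hypothetical toy example, not from the survey.

```python
import random

random.seed(0)

# Tabular Q-learning with the standard temporal-difference update.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
ACTIONS = [-1, +1]  # step left / step right

def step(state, action):
    next_state = min(max(state + action, 0), 3)  # clamp to corridor 0..3
    reward = 1.0 if next_state == 3 else 0.0     # reward only at the goal
    return next_state, reward, next_state == 3

Q = {(s, a): 0.0 for s in range(4) for a in ACTIONS}

for _ in range(500):  # episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < EPSILON \
            else max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        best_next = max(Q[(s2, x)] for x in ACTIONS)
        # TD update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2
```

After training, the greedy policy walks right toward the goal; deep RL replaces the table `Q` with a neural network over high-dimensional inputs such as pixels.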
Deep reinforcement learning with relational inductive biases
2019
We introduce an approach for augmenting model-free deep reinforcement learning agents with a mechanism for relational reasoning over structured representations, which improves performance, learning efficiency, generalization, and interpretability. Our architecture encodes an image as a set of vectors, and applies an iterative message-passing procedure to discover and reason about relevant entities and relations in a scene. In six of seven StarCraft II Learning Environment mini-games, our agent achieved state-of-the-art performance, and surpassed human grandmaster level on four. In a novel navigation and planning task, our agent’s performance and learning efficiency far exceeded non-relational baselines, and it was able to generalize to more complex scenes than it had experienced during training. Moreover, when we examined its learned internal representations, they reflected important structure about the problem and the agent’s intentions. The main contribution of this work is to introduc...
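One iteration of the message-passing procedure described above can be sketched as dot-product attention over a set of entity vectors, the mechanism family this line of work builds on. This is a minimal sketch, not the authors' implementation; the entity count, dimensions, and random weights are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, d = 5, 8          # e.g. entities extracted from image feature-map cells
E = rng.normal(size=(n_entities, d))

# learned projections for queries, keys, and values (random stand-ins here)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

def attention_step(E):
    Q, K, V = E @ Wq, E @ Wk, E @ Wv
    scores = Q @ K.T / np.sqrt(d)                   # pairwise relation scores
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over entities
    return weights @ V                              # each entity aggregates messages

E2 = attention_step(E)  # updated entity vectors after one round of relational reasoning
```

Iterating this step lets information propagate between entities, so downstream policy layers can condition on multi-hop relations in the scene.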
Deep Reinforcement Learning in Complex Structured Environments
2018
Creating general agents capable of learning useful policies in real-world environments is a difficult task. Reinforcement learning is the field that aims to solve this problem. It provides a general, rigorously defined framework within which algorithms can be designed to solve various problems. Complex real-world environments tend to have a structure that can be exploited. Humans are extremely proficient at this. Because the structure can vary dramatically between environments, creating agents capable of discovering and exploiting such structure without prior knowledge about the environment is a long-standing and unsolved problem in reinforcement learning. Hierarchical reinforcement learning is a sub-field focused specifically on finding and exploiting a hierarchical structure in the environment. In this work, we implement and study two hierarchical reinforcement learning methods, Strategic Attentive Writer and FeUdal Networks. We propose a modification of the FeUdal Networks model ...
DEEP REINFORCEMENT LEARNING: AN OVERVIEW
We give an overview of recent exciting achievements of deep reinforcement learning (RL). We start with background of deep learning and reinforcement learning, as well as introduction of testbeds. Next we discuss Deep Q-Network (DQN) and its extensions, asynchronous methods, policy optimization, reward, and planning. After that, we talk about attention and memory, unsupervised learning, and learning to learn. Then we discuss various applications of RL, including games, in particular, AlphaGo, robotics, spoken dialogue systems (a.k.a. chatbots), machine translation, text sequence prediction, neural architecture design, personalized web services, healthcare, finance, and music generation. We mention topics/papers not reviewed yet. After listing a collection of RL resources, we close with discussions. We discuss how/why we organize the overview from Section 3 to Section 21 in the current way: starting with RL fundamentals: value function/control, policy, reward, and planning (model in to-do list); next attention and memory, unsupervised learning, and learning to learn, which, together with transfer/semi-supervised/one-shot learning, etc., would be critical mechanisms for RL; then various applications.
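A core ingredient of DQN and its extensions discussed above is the experience replay buffer, which decorrelates training samples by mixing transitions from many time steps. The sketch below is a minimal, library-agnostic version; the class name, capacity, and batch size are illustrative, not any particular framework's API.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of (s, a, r, s', done) transitions for off-policy learning."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions drop off automatically

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        batch = random.sample(self.buffer, batch_size)
        # transpose the list of transitions into per-field tuples for batched updates
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```

Extensions such as prioritized replay replace the uniform `random.sample` with sampling proportional to TD error, which is where the survey's experience-replay discussion picks up.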
A Review of Current Perspective and Propensity in Reinforcement Learning (RL) in an Orderly Manner
International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 2023
Reinforcement learning is an area of machine learning. The three primary types of machine learning are supervised learning, unsupervised learning, and reinforcement learning (RL). Training a model on a labeled dataset is known as supervised learning; in unsupervised learning, on the other hand, the model is trained on unlabeled data. Instead of being driven by labels, RL is guided by evaluative feedback. By interacting with the environment and choosing the best course of action in each situation so as to maximize the reward, the agent learns how best to solve sequential decision-making problems. The RL agent decides on its own how to carry out tasks. Furthermore, since there are no training data, the agent learns by gaining experience. RL thus helps agents interact efficiently with their surroundings in order to make subsequent decisions. In this paper, we thoroughly review the state of the art in the RL literature. Applications of reinforcement learning (RL) may be found in a wide range of industries, including smart grids, robotics, computer vision, healthcare, gaming, transportation, finance, and engineering.
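The interaction loop described above — act, observe a reward, refine an estimate, act again — can be sketched in a few lines. The two-armed bandit "environment" and the epsilon-greedy agent below are hypothetical stand-ins chosen for brevity.

```python
import random

random.seed(1)

class BanditEnv:
    """Arm 1 pays 1.0 with probability 0.8; arm 0 with probability 0.2."""
    def step(self, action):
        p = 0.8 if action == 1 else 0.2
        return 1.0 if random.random() < p else 0.0

env = BanditEnv()
values, counts = [0.0, 0.0], [0, 0]  # per-arm reward estimates and pull counts

for t in range(2000):
    # epsilon-greedy: mostly exploit the current estimates, sometimes explore
    a = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
    r = env.step(a)
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]  # incremental mean of observed rewards
```

No labels are ever provided: the agent discovers which arm is better purely from the evaluative feedback it collects, which is the distinction from supervised learning drawn above.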
Deep Reinforcement Learning: A State-of-the-Art Walkthrough
Journal of Artificial Intelligence Research
Deep Reinforcement Learning is a topic that has gained a lot of attention recently, due to the unprecedented achievements and remarkable performance of such algorithms in various benchmark tests and environment setups. The power of such methods comes from the combination of an already established and strong field of Deep Learning with the unique nature of Reinforcement Learning methods. It is, however, deemed necessary to provide a compact, accurate and comparable view of these methods and their results as a means of gaining valuable technical and practical insights. In this work we gather the essential methods related to Deep Reinforcement Learning, extracting common property structures for three complementary core categories: a) Model-Free, b) Model-Based and c) Modular algorithms. For each category, we present, analyze and compare state-of-the-art Deep Reinforcement Learning algorithms that achieve high performance in various environments and tackle challenging problems in ...
Deep Reinforcement Learning with Adjustments
arXiv (Cornell University), 2021
Deep reinforcement learning (RL) algorithms can learn complex policies that optimize agent operation over time, and have shown promising results on complicated problems in recent years. However, their application to real-world physical systems remains limited: despite the advancements in RL algorithms, industry often prefers traditional control strategies, which are simple, computationally efficient, and easy to adjust. In this paper, we first propose a new Q-learning algorithm for continuous action spaces, which bridges control and RL algorithms and brings the best of both worlds. Our method can learn complex policies to achieve long-term goals, and at the same time it can be easily adjusted to address short-term requirements without retraining. Next, we present an approximation of our algorithm which can be applied to address short-term requirements of any pre-trained RL algorithm. The case studies demonstrate that both our proposed method and its practical approximation can achieve short-term and long-term goals without complex reward functions.
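The general idea of adjusting a pre-trained policy at execution time, without retraining, can be sketched as a post-hoc projection of the policy's action onto a runtime constraint. This is a hedged illustration of the concept only: the placeholder "policy", the bound, and the clipping rule are hypothetical stand-ins, not the paper's actual algorithm.

```python
import numpy as np

def pretrained_policy(state):
    # placeholder for any learned continuous-action policy (e.g. an actor network)
    return np.tanh(state) * 2.0

def adjusted_action(state, short_term_bound):
    a = pretrained_policy(state)
    # short-term requirement: keep the action magnitude within a runtime bound,
    # applied at execution time with no change to the learned policy itself
    return np.clip(a, -short_term_bound, short_term_bound)
```

Because the adjustment is applied outside the learned policy, the bound can be tightened or relaxed at any time — the "easy to adjust" property of traditional controllers that the paper aims to recover.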