Combining Off and On-Policy Training in Model-Based Reinforcement Learning

The combination of deep learning and Monte Carlo Tree Search (MCTS) has been shown to be effective in various domains, such as board and video games. AlphaGo [1] represented a significant step forward in our ability to learn complex board games, and it was rapidly followed by significant advances, such as AlphaGo Zero [2] and AlphaZero [3]. Recently, MuZero [4] demonstrated that it is possible to master both Atari games and board games by directly learning a model of the environment, which is then used with MCTS [5] to decide what move to play in each position. During tree search, the algorithm simulates games by exploring several possible moves and then picks the action that corresponds to the most promising trajectory. During training, limited use is made of these simulated games, since none of their trajectories are directly used as training examples. Even if we consider that not all trajectories from simulated games are useful, there are thousands of potentially...
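To make the root-level decision procedure concrete, the following is a minimal, hypothetical sketch of acting from search statistics: each candidate move is explored repeatedly, visit counts and value estimates are accumulated, and the move with the highest visit count is played. The random value in `select_action` is a placeholder for a learned model's value estimate, and the exploration rule is a simplified stand-in for the PUCT formula actually used by AlphaZero and MuZero; none of the names below come from those systems' code.

```python
import random


def select_action(legal_moves, num_simulations=100, seed=0):
    """Toy sketch of root-level MCTS action selection.

    Each simulation picks an under-explored or promising move, queries a
    (here: random placeholder) value estimate, and updates that move's
    statistics. The final action is the most-visited move, mirroring how
    MuZero acts greedily with respect to root visit counts.
    """
    rng = random.Random(seed)
    stats = {m: {"visits": 0, "total_value": 0.0} for m in legal_moves}

    for _ in range(num_simulations):
        # Simplified exploration rule: favour moves with a high mean value,
        # plus a bonus that shrinks as a move accumulates visits.
        def score(m):
            s = stats[m]
            if s["visits"] == 0:
                return float("inf")  # always try unvisited moves first
            mean = s["total_value"] / s["visits"]
            return mean + 1.0 / (1 + s["visits"])

        move = max(legal_moves, key=score)
        value = rng.random()  # placeholder for a learned value estimate
        stats[move]["visits"] += 1
        stats[move]["total_value"] += value

    # Act greedily with respect to visit counts.
    return max(legal_moves, key=lambda m: stats[m]["visits"]), stats
```

Note that in this sketch, as in MuZero, only the chosen action and the aggregated search statistics leave the search; the simulated trajectories themselves are discarded, which is exactly the unused training signal the paragraph above points to.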