Industry-Grade, Scalable Reinforcement Learning — Ray 3.0.0.dev0

Note

Ray 2.10.0 introduces the alpha stage of RLlib’s “new API stack”. The team is currently transitioning algorithms, example scripts, and documentation to the new code base throughout the subsequent minor releases leading up to Ray 3.0.

See here for more details on how to activate and use the new API stack.

../_images/rllib-logo.png

RLlib is an open source library for reinforcement learning (RL), offering support for production-level, highly scalable, and fault-tolerant RL workloads, while maintaining simple and unified APIs for a large variety of industry applications.

Whether training policies in a multi-agent setup, from historic offline data, or using externally connected simulators, RLlib offers simple solutions for each of these autonomous decision making needs and enables you to start running your experiments within hours.

RLlib is used in production by industry leaders in many different verticals, such as gaming, robotics, finance, climate and industrial control, manufacturing and logistics, automobile, and boat design.

RLlib in 60 seconds#

../_images/rllib-index-header.svg

It only takes a few steps to get your first RLlib workload up and running on your laptop. Install RLlib and PyTorch, as shown below:

pip install "ray[rllib]" torch

Note

To be able to run our Atari or MuJoCo examples, you also need to run: pip install "gymnasium[atari,accept-rom-license,mujoco]".

That's it! You can now start coding against RLlib. Here is an example of running the PPO algorithm on the Taxi domain. You first create a config for the algorithm, which defines the RL environment (Taxi) and any other needed settings and parameters.

Next, build the algorithm and train it for a total of 5 iterations. One training iteration includes parallel (distributed) sample collection by the EnvRunner actors, followed by loss calculation on the collected data, and a model update step.

At the end of your script, the trained Algorithm is evaluated:

from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.connectors.env_to_module import FlattenObservations

# 1. Configure the algorithm,
config = (
    PPOConfig()
    .environment("Taxi-v3")
    .env_runners(
        num_env_runners=2,
        # Observations are discrete (ints) -> We need to flatten (one-hot) them.
        env_to_module_connector=lambda env: FlattenObservations(),
    )
    .evaluation(evaluation_num_env_runners=1)
)

# 2. .. build the algorithm ..
algo = config.build()

# 3. .. train it ..
for _ in range(5):
    print(algo.train())

# 4. .. and evaluate it.
algo.evaluate()

You can use any Farama Foundation Gymnasium-registered environment with the env argument.

In config.env_runners(), you can specify - amongst many other things - the number of parallel EnvRunner actors used to collect samples from the environment.

You can also change the neural network architecture by tweaking RLlib's DefaultModelConfig, as well as set up a separate config for the evaluation EnvRunner actors through the config.evaluation() method.
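For example, the following sketch builds on the config object from above: it bumps the number of EnvRunner actors, shrinks the default model's hidden layers through DefaultModelConfig, and runs evaluation on a dedicated EnvRunner every iteration. The DefaultModelConfig import path and the model_config parameter name are assumptions based on the new API stack; check your installed RLlib version.

from ray.rllib.core.rl_module.default_model_config import DefaultModelConfig

config = (
    config
    # Collect samples with 4 parallel EnvRunner actors instead of 2.
    .env_runners(num_env_runners=4)
    # Tweak the default network architecture: two hidden layers of 64 units each.
    .rl_module(model_config=DefaultModelConfig(fcnet_hiddens=[64, 64]))
    # Evaluate on 1 dedicated EnvRunner after every training iteration.
    .evaluation(evaluation_num_env_runners=1, evaluation_interval=1)
)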

See here if you want to learn more about the RLlib training APIs. Also, see here for a simple example of how to write an action inference loop after training.
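If you only need a quick sanity check, a minimal inference loop can look like the following sketch. It assumes a hypothetical Algorithm trained on an environment with flat float observations and a discrete action space (for example CartPole-v1), so no connector preprocessing is needed; the "default_policy" module ID and the "action_dist_inputs" output key follow the new API stack's conventions.

import gymnasium as gym
import numpy as np
import torch

env = gym.make("CartPole-v1")
# Get the trained RLModule out of the Algorithm (new API stack).
module = algo.get_module("default_policy")

obs, _ = env.reset()
terminated = truncated = False
episode_return = 0.0
while not (terminated or truncated):
    # Batch the single observation and run a forward pass in inference mode.
    batch = {"obs": torch.from_numpy(np.array([obs], dtype=np.float32))}
    outputs = module.forward_inference(batch)
    # Greedy action: argmax over the (discrete) action logits.
    action = int(np.argmax(outputs["action_dist_inputs"][0].detach().numpy()))
    obs, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
print(f"Episode return: {episode_return}")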

If you want to get a quick preview of which algorithms and environments RLlib supports, click the dropdowns below:

Farama-Foundation Environments

gymnasium (single-agent)
pip install "gymnasium[atari,accept-rom-license,mujoco]"

config.environment("CartPole-v1")         # Classic Control
config.environment("ale_py:ALE/Pong-v5")  # Atari
config.environment("Hopper-v5")           # MuJoCo

PettingZoo (multi-agent)
pip install "pettingzoo[all]"

from ray.tune.registry import register_env
from ray.rllib.env.wrappers.pettingzoo_env import PettingZooEnv
from pettingzoo.sisl import waterworld_v4

register_env("env", lambda _: PettingZooEnv(waterworld_v4.env()))
config.environment("env")

RLlib Multi-Agent

RLlib's MultiAgentEnv API (multi-agent)

config.environment("CartPole-v1")         # Classic Control
config.environment("ale_py:ALE/Pong-v5")  # Atari

Why choose RLlib?#

RLlib workloads scale along various axes:

RLlib natively supports multi-agent reinforcement learning (MARL), allowing you to run arbitrarily complex multi-agent configurations.
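As a minimal sketch of the multi-agent API (the environment name "multi_cart" and the two agent IDs are hypothetical, for illustration only), a multi-agent config maps each agent to its own policy module:

from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    # "multi_cart" is a hypothetical, previously registered MultiAgentEnv
    # with agent IDs "agent_0" and "agent_1".
    .environment("multi_cart")
    .multi_agent(
        # Train one policy module per agent.
        policies={"policy_0", "policy_1"},
        # Map agent IDs to module IDs.
        policy_mapping_fn=lambda agent_id, episode, **kwargs: (
            "policy_0" if agent_id == "agent_0" else "policy_1"
        ),
    )
)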

Ray Data has been integrated into RLlib, enabling large-scale data ingestion for offline RL and behavior cloning (BC) workloads.

See here for a basic tuned example for the behavior cloning (BC) algorithm and here for how to pre-train a policy with BC, then fine-tune it with online PPO.
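As a rough sketch of what such an offline workload looks like (the data path is a placeholder, and the exact offline_data settings depend on how your episodes were recorded), a BC config points at pre-recorded data instead of collecting new samples:

from ray.rllib.algorithms.bc import BCConfig

config = (
    BCConfig()
    # The environment is only needed for its observation/action spaces.
    .environment("CartPole-v1")
    # Point Ray Data at pre-recorded offline episodes (placeholder path).
    .offline_data(input_="local:///tmp/cartpole_offline_data")
)
algo = config.build()
for _ in range(5):
    print(algo.train())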

Learn More#

Get started with environments supported by RLlib, such as Farama Foundation's Gymnasium, PettingZoo, and many custom formats for vectorized and multi-agent environments.

Learn more about the core concepts of RLlib, such as environments, algorithms and policies.

See the many available RL algorithms of RLlib for model-free and model-based RL, on-policy and off-policy training, multi-agent RL, and more.

Customizing RLlib#

RLlib provides powerful, yet easy-to-use APIs for customizing all aspects of your experimental and production training workflows. For example, you may code your own environments in Python using Farama Foundation's Gymnasium or DeepMind's OpenSpiel, provide custom PyTorch models, write your own optimizer setups and loss definitions, or define custom exploratory behavior.
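For instance, a custom environment is just a gymnasium.Env subclass that you register with Ray Tune and then reference by name. The following sketch (with the environment logic simplified to a toy guessing game) shows the pattern:

import gymnasium as gym
from ray.tune.registry import register_env
from ray.rllib.algorithms.ppo import PPOConfig


class GuessTheTarget(gym.Env):
    """Toy env: guess a fixed target number between 0 and 9."""

    def __init__(self, config=None):
        self.observation_space = gym.spaces.Discrete(10)
        self.action_space = gym.spaces.Discrete(10)
        self._target = 7

    def reset(self, *, seed=None, options=None):
        return 0, {}

    def step(self, action):
        reward = 1.0 if action == self._target else 0.0
        # Single-step episodes: terminate right away.
        return 0, reward, True, False, {}


# Register the env under a name, then refer to it in the config.
register_env("guess_the_target", lambda cfg: GuessTheTarget(cfg))
config = PPOConfig().environment("guess_the_target")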

../_images/rllib-new-api-stack-simple.svg

RLlib’s API stack: Built on top of Ray, RLlib offers off-the-shelf, distributed and fault-tolerant algorithms and loss functions, PyTorch default models, multi-GPU training, and multi-agent support. User customizations are realized by subclassing the existing abstractions and, by overriding certain methods in those subclasses, defining custom behavior.#
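In practice, such a customization often amounts to subclassing one class and overriding a single method. The following sketch uses RLlib's callbacks abstraction to print a custom message at the end of every episode; the DefaultCallbacks import path and the on_episode_end hook signature are assumptions that may differ between RLlib versions.

from ray.rllib.algorithms.callbacks import DefaultCallbacks


class EpisodeLengthLogger(DefaultCallbacks):
    # Override a single hook to add custom behavior.
    def on_episode_end(self, *, episode, **kwargs):
        print(f"Episode finished after {len(episode)} timesteps.")


# Plug the subclass into any AlgorithmConfig.
config = config.callbacks(EpisodeLengthLogger)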