At a high level, RLlib provides you with an Algorithm class, which holds a policy for environment interaction. Through the algorithm's interface, you can train the policy, compute actions, or store your algorithm's state (checkpointing). In multi-agent training, the algorithm manages the querying and optimization of multiple policies at once.
In this guide, we will first walk you through running your first experiments with the RLlib CLI, and then discuss our Python API in more detail.
The quickest way to run your first RLlib algorithm is to use the command line interface. You can train DQN with the following commands:
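For example (the flags below follow the newer RLlib CLI and are meant as an illustration):

rllib train --algo DQN --env CartPole-v1 --stop '{"training_iteration": 30}'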
The rllib train command (same as the train.py script in the repo) has a number of options you can show by running rllib train --help.
Note that you can choose any supported RLlib algorithm (--algo) and environment (--env). RLlib supports any Farama-Foundation Gymnasium environment, as well as a number of other environments (see rllib-environments-doc). It also offers a large number of algorithms (see rllib-algorithms-doc) to choose from.
Running the above will train for 30 iterations and return one of the checkpoints generated during training, as well as a command that you can use to evaluate the trained algorithm. You can evaluate it with the following command (assuming the checkpoint path is called checkpoint):
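For example (assuming the rllib evaluate sub-command of the CLI):

rllib evaluate checkpoint --algo DQN --env CartPole-v1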
Note
By default, the results will be logged to a subdirectory of ~/ray_results. This subdirectory will contain a file params.json which contains the hyperparameters, a file result.json which contains a training summary for each episode, and a TensorBoard file that can be used to visualize training progress with TensorBoard by running
tensorboard --logdir=~/ray_results
For more advanced evaluation functionality, refer to Customized Evaluation During Training.
Note
Each algorithm has specific hyperparameters that can be set with --config; see the algorithms documentation for more information. For instance, you can train the A2C algorithm on 8 workers by specifying num_workers: 8 in a JSON string passed to --config:
rllib train --env=PongDeterministic-v4 --run=A2C --config '{"num_workers": 8}'
Some good hyperparameters and settings are available in the RLlib repository (some of them are tuned to run on GPUs).
If you find better settings or tune an algorithm on a different domain, consider submitting a Pull Request!
You can run these with the rllib train file command as follows:
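For example (the YAML path below is illustrative; any file in the correct format works):

rllib train file tuned_examples/ppo/cartpole-ppo.yaml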
Note that this works with any local YAML file in the correct format, or with remote URLs pointing to such files. If you want to learn more about the RLlib CLI, please check out the RLlib CLI user guide <rllib-cli-doc>.
The Python API provides the needed flexibility for applying RLlib to new problems. For instance, you will need to use this API if you wish to use custom environments, preprocessors, or models with RLlib.
Here is an example of the basic usage. We first create a PPOConfig and add properties to it, like the environment we want to use, or the resources we want to leverage for training. After we build the algo from its configuration, we can train it for a number of training iterations (here 10) and save the resulting policy periodically (here every 5 iterations).
./doc_code/getting_started.py
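The referenced script is not reproduced here; a minimal sketch of that flow (names and values are illustrative) might look like this:

from ray.rllib.algorithms.ppo import PPOConfig

# Build a PPO config: choose the environment and the resources to use.
config = (
    PPOConfig()
    .environment("CartPole-v1")
    .rollouts(num_rollout_workers=2)
    .framework("torch")
)

# Construct the Algorithm object from the config.
algo = config.build()

# Train for 10 iterations, saving a checkpoint every 5 iterations.
for i in range(10):
    result = algo.train()
    print(f"iter {i}: episode_reward_mean={result.get('episode_reward_mean')}")
    if (i + 1) % 5 == 0:
        checkpoint_dir = algo.save()
        print(f"Checkpoint saved to {checkpoint_dir}")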
All RLlib algorithms are compatible with the Tune API <tune-api-ref>. This enables them to be easily used in experiments with Ray Tune <tune-main>. For example, the following code performs a simple hyper-parameter sweep of PPO.
./doc_code/getting_started.py
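The referenced script is not shown here; a rough equivalent of such a sweep (with an illustrative learning-rate grid) is:

from ray import tune

# Sweep PPO over three learning rates on CartPole-v1.
tuner = tune.Tuner(
    "PPO",
    param_space={
        "env": "CartPole-v1",
        "num_workers": 2,
        "lr": tune.grid_search([0.01, 0.001, 0.0001]),
    },
)
results = tuner.fit()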
Tune will schedule the trials to run in parallel on your Ray cluster:
== Status ==
Using FIFO scheduling algorithm.
Resources requested: 4/4 CPUs, 0/0 GPUs
Result logdir: ~/ray_results/my_experiment
PENDING trials:
- PPO_CartPole-v1_2_lr=0.0001: PENDING
RUNNING trials:
- PPO_CartPole-v1_0_lr=0.01: RUNNING [pid=21940], 16 s, 4013 ts, 22 rew
- PPO_CartPole-v1_1_lr=0.001: RUNNING [pid=21942], 27 s, 8111 ts, 54.7 rew
Tuner.fit() returns a ResultGrid object that allows further analysis of the training results and retrieving the checkpoint(s) of the trained agent.
./doc_code/getting_started.py
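For instance, retrieving the best result and its checkpoint could look like this (the metric name below is an assumption, matching the classic episode-reward metric):

# Assuming `results` is the ResultGrid returned by tuner.fit() above.
best_result = results.get_best_result(metric="episode_reward_mean", mode="max")
best_checkpoint = best_result.checkpoint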
You can find your checkpoint's version by looking into the rllib_checkpoint.json file inside your checkpoint directory.
Loading and restoring a trained algorithm from a checkpoint is simple. Let's assume you have a local checkpoint directory called checkpoint_path. To load newer RLlib checkpoints (version >= 1.0), use the following code:
from ray.rllib.algorithms.algorithm import Algorithm
algo = Algorithm.from_checkpoint(checkpoint_path)
For older RLlib checkpoint versions (version < 1.0), you can restore an algorithm via:
from ray.rllib.algorithms.ppo import PPO
algo = PPO(config=config, env=env_class)
algo.restore(checkpoint_path)
The simplest way to programmatically compute actions from a trained agent is to use Algorithm.compute_single_action(). This method preprocesses and filters the observation before passing it to the agent policy. Here is a simple example of testing a trained agent for one episode:
./doc_code/getting_started.py
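A condensed sketch of such a test loop (assuming algo is a trained Algorithm and a Gymnasium environment):

import gymnasium as gym

env = gym.make("CartPole-v1")
obs, info = env.reset()
terminated = truncated = False
total_reward = 0.0
while not terminated and not truncated:
    # Compute a single action for the current observation.
    action = algo.compute_single_action(obs)
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
print(f"Total episode reward: {total_reward}")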
For more advanced usage on computing actions and other functionality, you can consult the RLlib Algorithm API documentation <rllib-algorithm-api>.
It is common to need to access an algorithm's internal state, for instance to set or get model weights.
In RLlib, algorithm state is replicated across multiple rollout workers (Ray actors) in the cluster. However, you can easily get and update this state between calls to train() via Algorithm.workers.foreach_worker() or Algorithm.workers.foreach_worker_with_index(). These functions take a lambda that is applied with the worker as its argument, and they return the per-worker values as a list.
You can also access just the "master" copy of the algorithm state through Algorithm.get_policy() or Algorithm.workers.local_worker(), but note that updates here may not be immediately reflected in your rollout workers (if you have configured num_rollout_workers > 0). Here's a quick example of how to access the state of a model:
./doc_code/getting_started.py
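The referenced file shows the full example; a minimal sketch of reading the local policy's weights and pushing them to all remote workers (assuming algo is a built Algorithm) is:

# Get the weights of the local ("master") policy ...
weights = algo.get_policy().get_weights()

# ... and set them on every remote rollout worker.
algo.workers.foreach_worker(lambda worker: worker.get_policy().set_weights(weights))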
Similar to accessing policy state, you may want to get a reference to the underlying neural network model being trained. For example, you may want to pre-train it separately, or otherwise update its weights outside of RLlib. This can be done by accessing the model of the policy.
To run these examples, you need to install a few extra dependencies, namely pip install "gym[atari]" "gym[accept-rom-license]" atari_py.
Below you find three explicit examples showing how to access the model state of an algorithm.
Example: Preprocessing observations for feeding into a model
Then, for the code:
doc_code/training.py
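The referenced file contains the full example; a rough sketch of manually applying RLlib's built-in preprocessor to an Atari observation (the environment name is illustrative) is:

import gym
from ray.rllib.models.preprocessors import get_preprocessor

# Create an Atari env and look up the preprocessor RLlib would use for it.
env = gym.make("PongNoFrameskip-v4")
prep = get_preprocessor(env.observation_space)(env.observation_space)

# Transform a raw observation into the shape the model expects.
obs = env.observation_space.sample()
print(prep.transform(obs).shape)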
Example: Querying a policy's action distribution
doc_code/training.py
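The referenced file has the complete version; a condensed sketch (assuming the old API stack with a TF2 DQN policy on CartPole; values are illustrative) looks roughly like this:

import numpy as np
from ray.rllib.algorithms.dqn import DQNConfig

algo = (
    DQNConfig()
    .environment("CartPole-v1")
    .framework("tf2")
    .rollouts(num_rollout_workers=0)
    .build()
)
policy = algo.get_policy()

# Run a forward pass to get model output logits. Note that complex
# observations must be preprocessed first (see the previous example).
logits, _ = policy.model({"obs": np.array([[0.1, 0.2, 0.3, 0.4]])})

# Build the action distribution from the logits and query it.
dist = policy.dist_class(logits, policy.model)
print(dist.sample())   # sample an action
print(dist.logp([1]))  # log-likelihood of a specific action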
Example: Getting Q values from a DQN model
doc_code/training.py
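Again, the referenced file has the full code; a condensed sketch of pulling Q-values out of a DQN model (assuming the old API stack with the default TF2 DQN model) could look like this:

import numpy as np
from ray.rllib.algorithms.dqn import DQNConfig

algo = DQNConfig().environment("CartPole-v1").framework("tf2").build()
model = algo.get_policy().model

# Run a forward pass through the base model to get its output embedding.
model_out = model({"obs": np.array([[0.1, 0.2, 0.3, 0.4]])})

# Access the Q-value head (specific to DQN) on top of the base model output.
print(model.get_q_value_distributions(model_out[0])[0])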
This is especially useful when used with custom model classes.
You can configure RLlib algorithms in a modular fashion by working with so-called AlgorithmConfig objects. In essence, you first create a config = AlgorithmConfig() object and then call methods on it to set the desired configuration options. Each RLlib algorithm has its own config class that inherits from AlgorithmConfig. For instance, to create a PPO algorithm, you start with a PPOConfig object, to work with a DQN algorithm, you start with a DQNConfig object, etc.
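For example, a typical (illustrative) config build chains several category-specific methods together:

from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("CartPole-v1")          # environment options
    .framework("torch")                  # deep learning framework options
    .rollouts(num_rollout_workers=2)     # rollout worker options
    .evaluation(evaluation_interval=10)  # evaluation options
)
algo = config.build()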
Note
Each algorithm has its specific settings, but most configuration options are shared. We discuss the common options below, and refer to the RLlib algorithms guide <rllib-algorithms-doc> for algorithm-specific properties. Algorithms differ mostly in their training settings.
Below you find the basic signature of the AlgorithmConfig class, as well as some advanced usage examples:
ray.rllib.algorithms.algorithm_config.AlgorithmConfig
As RLlib algorithms are fairly complex, they come with many configuration options. To make things easier, the common properties of algorithms are naturally grouped into the following categories:
- training options <rllib-config-train>
- environment options <rllib-config-env>
- deep learning framework options <rllib-config-framework>
- rollout worker options <rllib-config-rollouts>
- evaluation options <rllib-config-evaluation>
- exploration options <rllib-config-exploration>
- options for training with offline data <rllib-config-offline_data>
- options for training multiple agents <rllib-config-multi_agent>
- reporting options <rllib-config-reporting>
- options for saving and restoring checkpoints <rllib-config-checkpointing>
- debugging options <rllib-config-debugging>
- options for adding callbacks to algorithms <rllib-config-callbacks>
- resource options <rllib-config-resources>
- options for experimental features <rllib-config-experimental>
Let's discuss each category one by one, starting with training options.
For instance, a DQNConfig takes a double_q training argument to specify whether to use a double-Q DQN, whereas in a PPOConfig this does not make sense.
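As a quick, illustrative example of such an algorithm-specific training setting:

from ray.rllib.algorithms.dqn import DQNConfig

# double_q is a DQN-specific training option; lr and gamma come from the base class.
config = DQNConfig().training(double_q=True, lr=0.0005, gamma=0.99)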
For individual algorithms, this is probably the most relevant configuration group, as this is where all the algorithm-specific options go. But the base configuration for training of an AlgorithmConfig is actually quite small:
ray.rllib.algorithms.algorithm_config.AlgorithmConfig.training
ray.rllib.algorithms.algorithm_config.AlgorithmConfig.environment
ray.rllib.algorithms.algorithm_config.AlgorithmConfig.framework
ray.rllib.algorithms.algorithm_config.AlgorithmConfig.rollouts
ray.rllib.algorithms.algorithm_config.AlgorithmConfig.evaluation
ray.rllib.algorithms.algorithm_config.AlgorithmConfig.exploration
ray.rllib.algorithms.algorithm_config.AlgorithmConfig.offline_data
ray.rllib.algorithms.algorithm_config.AlgorithmConfig.multi_agent
ray.rllib.algorithms.algorithm_config.AlgorithmConfig.reporting
ray.rllib.algorithms.algorithm_config.AlgorithmConfig.checkpointing
ray.rllib.algorithms.algorithm_config.AlgorithmConfig.debugging
ray.rllib.algorithms.algorithm_config.AlgorithmConfig.callbacks
You can control the degree of parallelism used by setting the num_workers hyperparameter for most algorithms. The Algorithm will construct that many "remote worker" instances (see the RolloutWorker class), which are created as ray.remote actors, plus exactly one "local worker", a RolloutWorker object that is not a Ray actor but lives directly inside the Algorithm. For most algorithms, learning updates are performed on the local worker, and sample collection from one or more environments is performed by the remote workers (in parallel). For example, setting num_workers=0 will only create the local worker, in which case both sample collection and training will be done by the local worker. On the other hand, setting num_workers=5 will create the local worker (responsible for training updates) and 5 remote workers (responsible for sample collection).
Since learning is usually done on the local worker, it may help to provide one or more GPUs to that worker via the num_gpus setting. Similarly, the resource allocation to remote workers can be controlled via num_cpus_per_worker, num_gpus_per_worker, and custom_resources_per_worker.
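For instance, a resource setup along these lines gives the learner one GPU and uses four CPU-only rollout workers (all values illustrative):

from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("CartPole-v1")
    .rollouts(num_rollout_workers=4)
    .resources(num_gpus=1, num_cpus_per_worker=1, num_gpus_per_worker=0)
)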
The number of GPUs can be fractional (e.g. 0.5) to allocate only a fraction of a GPU. For example, with DQN you can pack five algorithms onto one GPU by setting num_gpus: 0.2. Check out the fractional GPU example as well, which also demonstrates how environments (running on the remote workers) that require a GPU can benefit from the num_gpus_per_worker setting.
For synchronous algorithms like PPO and A2C, the driver and workers can make use of the same GPU. To do this for n GPUs:
gpu_count = n
num_gpus = 0.0001 # Driver GPU
num_gpus_per_worker = (gpu_count - num_gpus) / num_workers
If you specify num_gpus and your machine does not have the required number of GPUs available, a RuntimeError will be thrown by the respective worker. On the other hand, if you set num_gpus=0, your policies will be built solely on the CPU, even if GPUs are available on the machine.
ray.rllib.algorithms.algorithm_config.AlgorithmConfig.resources
ray.rllib.algorithms.algorithm_config.AlgorithmConfig.experimental
Here are some rules of thumb for scaling training with RLlib.
- If the environment is slow and cannot be replicated (e.g., since it requires interaction with physical systems), then you should use a sample-efficient off-policy algorithm such as DQN <dqn> or SAC <sac>. These algorithms default to num_workers: 0 for single-process operation. Make sure to set num_gpus: 1 if you want to use a GPU. Consider also batch RL training with the offline data API.
- If the environment is fast and the model is small (most models for RL are), use time-efficient algorithms such as PPO <ppo>, IMPALA <impala>, or APEX <apex>. These can be scaled by increasing num_workers to add rollout workers. It may also make sense to enable vectorization for inference. Make sure to set num_gpus: 1 if you want to use a GPU. If the learner becomes a bottleneck, multiple GPUs can be used for learning by setting num_gpus > 1.
- If the model is compute intensive (e.g., a large deep residual network) and inference is the bottleneck, consider allocating GPUs to workers by setting num_gpus_per_worker: 1. If you only have a single GPU, consider num_workers: 0 to use the learner GPU for inference. For efficient use of GPU time, use a small number of GPU workers and a large number of envs per worker.
- Finally, if both model and environment are compute intensive, then enable remote worker envs with async batching by setting remote_worker_envs: True and optionally remote_env_batch_wait_ms. This batches inference on GPUs in the rollout workers while letting envs run asynchronously in separate actors, similar to the SEED architecture. The number of workers and number of envs per worker should be tuned to maximize GPU utilization. If your env requires GPUs to function, or if multi-node SGD is needed, then also consider DD-PPO <ddppo>.
In case you are using lots of workers (num_workers >> 10) and you observe worker failures for whatever reasons, which normally interrupt your RLlib training runs, consider using the config settings ignore_worker_failures=True, recreate_failed_workers=True, or restart_failed_sub_environments=True:
- ignore_worker_failures: When set to True, your Algorithm will not crash due to a single worker error but continue for as long as there is at least one functional worker remaining.
- recreate_failed_workers: When set to True, your Algorithm will attempt to replace/recreate any failed worker(s) with newly created one(s). This way, your number of workers will never decrease, even if some of them fail from time to time.
- restart_failed_sub_environments: When set to True and there is a failure in one of the vectorized sub-environments in one of your workers, the worker will try to recreate only the failed sub-environment and re-integrate the newly created one into your vectorized env stack on that worker.
Note that only one of ignore_worker_failures or recreate_failed_workers may be set to True (they are mutually exclusive settings). However, you can combine each of these with the restart_failed_sub_environments=True setting. Using these options will make your training runs much more stable and more robust against occasional OOM or other similar "once in a while" errors on your workers themselves or inside your environments.
The "monitor": true
config can be used to save Gym episode videos to the result dir. For example:
rllib train --env=PongDeterministic-v4 \
--run=A2C --config '{"num_workers": 2, "monitor": true}'
# videos will be saved in the ~/ray_results/<experiment> dir, for example
openaigym.video.0.31401.video000000.meta.json
openaigym.video.0.31401.video000000.mp4
openaigym.video.0.31403.video000000.meta.json
openaigym.video.0.31403.video000000.mp4
Policies built with build_tf_policy (most of the reference algorithms are) can be run in eager mode by setting the "framework": "tf2" / "eager_tracing": true config options or using rllib train --config '{"framework": "tf2"}' [--trace]. This will tell RLlib to execute the model forward pass, action distribution, loss, and stats functions in eager mode.
Eager mode makes debugging much easier, since you can now use line-by-line debugging with breakpoints or Python print()
to inspect intermediate tensor values. However, eager can be slower than graph mode unless tracing is enabled.
Algorithms that have an implemented TorchPolicy allow you to run rllib train with the command line --framework=torch flag. Algorithms that do not have a torch version yet will complain with an error in this case.
You can use the data output API to save episode traces for debugging. For example, the following command will run PPO while saving episode traces to /tmp/debug.
rllib train --run=PPO --env=CartPole-v1 \
--config='{"output": "/tmp/debug", "output_compress_columns": []}'
# episode traces will be saved in /tmp/debug, for example
output-2019-02-23_12-02-03_worker-2_0.json
output-2019-02-23_12-02-04_worker-1_0.json
You can control the log level via the "log_level" flag. Valid values are "DEBUG", "INFO", "WARN" (default), and "ERROR". This can be used to increase or decrease the verbosity of internal logging. You can also use the -v and -vv flags. For example, the following two commands are about equivalent:
rllib train --env=PongDeterministic-v4 \
--run=A2C --config '{"num_workers": 2, "log_level": "DEBUG"}'
rllib train --env=PongDeterministic-v4 \
--run=A2C --config '{"num_workers": 2}' -vv
The default log level is WARN. We strongly recommend using at least INFO level logging for development.
You can use the ray stack command to dump the stack traces of all the Python workers on a single node. This can be useful for debugging unexpected hangs or performance issues.
- To check how your application is doing, you can use the Ray dashboard <observability-getting-started>.