
Examples

This page is an index of examples for the various use cases and features of RLlib.

If any example is broken, or if you'd like to add an example to this page, feel free to open an issue on our GitHub repository.

Tuned Examples

Blog Posts

Environments and Adapters

Custom- and Complex Models

Training Workflows

Evaluation

  • Custom evaluation function:

    Example of how to write a custom evaluation function that is called instead of the default behavior, which runs n episodes with the evaluation worker set. A minimal config sketch follows this list.

  • Parallel evaluation and training:

    Example showing how the evaluation workers and the "normal" rollout workers can run (to some extent) in parallel to speed up training; see the sketch below.
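
The following minimal sketch (not the linked example itself) shows how both evaluation features above hook into an AlgorithmConfig, assuming Ray 2.x and PPO as a stand-in algorithm; the metric name and rollout logic inside custom_eval_fn are illustrative only:

    from ray.rllib.algorithms.ppo import PPOConfig


    def custom_eval_fn(algorithm, eval_workers):
        # Called instead of RLlib's default evaluation loop. Sample one
        # round of episodes on each evaluation worker, then return any
        # metrics dict you like (keys here are illustrative).
        eval_workers.foreach_worker(lambda worker: worker.sample())
        return {"num_eval_rounds": 1}


    config = (
        PPOConfig()
        .environment("CartPole-v1")
        .evaluation(
            evaluation_interval=1,
            evaluation_num_workers=2,
            # Run evaluation concurrently with training iterations.
            evaluation_parallel_to_training=True,
            # Swap in the custom evaluation function defined above.
            custom_evaluation_function=custom_eval_fn,
        )
    )
    algo = config.build()
    print(algo.train())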

Serving and Offline

  • Offline RL with CQL:

    Example showing how to run an offline RL training job using a historic-data JSON file; a config sketch follows this list.

  • Serving RLlib models with Ray Serve: Example of using Ray Serve to serve RLlib models with an HTTP and JSON interface. This is the recommended way to expose RLlib for online serving use cases; see the Serve sketch after this list.

  • Another example for using RLlib with Ray Serve

    This script offers a simple workflow: 1) train a policy with RLlib, 2) create a new policy, 3) restore its weights from the trained one, and 4) serve the new policy via Ray Serve.

  • Unity3D client/server:

    Example of how to set up n distributed Unity3D (compiled) games in the cloud that function as data-collecting clients against a central RLlib policy server learning how to play the game. The n distributed clients could themselves be servers for external/human players, allowing control to be fully in the hands of the Unity entities instead of RLlib. Note: Uses Unity's MLAgents SDK (>=1.0) and supports all provided MLAgents example games and multi-agent setups.

  • CartPole client/server:

    Example of online serving of predictions for a simple CartPole policy; see the client/server sketch after this list.

  • Saving experiences:

    Example of how to externally generate experience batches in RLlib-compatible format; see the batch-writing sketch after this list.

  • Finding a checkpoint using custom criteria:

    Example of how to find a checkpoint after a Tuner.fit() run via custom-defined criteria; see the last sketch after this list.
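
For the CQL entry above, a minimal sketch (not the linked example) of pointing an offline RL algorithm at a historic-data JSON file, assuming Ray 2.x; the data path is a hypothetical placeholder:

    from ray.rllib.algorithms.cql import CQLConfig

    config = (
        CQLConfig()
        # The env is only used to derive observation/action spaces;
        # no live sampling happens during offline training.
        .environment("Pendulum-v1")
        # Read pre-recorded experiences from JSON (hypothetical path).
        .offline_data(input_="/tmp/pendulum-out")
    )
    algo = config.build()
    for _ in range(3):
        result = algo.train()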
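
For the two Ray Serve entries, a minimal sketch of wrapping a restored RLlib policy in a Serve deployment with an HTTP/JSON interface; the checkpoint path is a hypothetical placeholder and a discrete action space is assumed:

    from starlette.requests import Request

    from ray import serve
    from ray.rllib.algorithms.algorithm import Algorithm


    @serve.deployment
    class ServeRLlibPolicy:
        def __init__(self, checkpoint_path: str):
            # Restore a trained Algorithm (and its policy) from disk.
            self.algo = Algorithm.from_checkpoint(checkpoint_path)

        async def __call__(self, request: Request) -> dict:
            # Expect a JSON body like {"observation": [...]}.
            obs = (await request.json())["observation"]
            action = self.algo.compute_single_action(obs)
            # int() assumes a discrete action space (e.g., CartPole).
            return {"action": int(action)}


    # Hypothetical checkpoint path from an earlier training run.
    serve.run(ServeRLlibPolicy.bind("/tmp/rllib_checkpoint"))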
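
The Unity3D and CartPole client/server entries both build on RLlib's PolicyServerInput / PolicyClient pair. A minimal client-side sketch, assuming Ray 2.x with gymnasium and a policy server already listening on localhost:9900 (address and port are placeholders):

    import gymnasium as gym

    from ray.rllib.env.policy_client import PolicyClient

    # Connect to a running policy server (started via PolicyServerInput).
    client = PolicyClient("http://localhost:9900", inference_mode="local")

    env = gym.make("CartPole-v1")
    obs, info = env.reset()
    episode_id = client.start_episode()
    terminated = truncated = False
    while not (terminated or truncated):
        # Query the server-side policy for an action ...
        action = client.get_action(episode_id, obs)
        obs, reward, terminated, truncated, info = env.step(action)
        # ... and stream the resulting reward back for training.
        client.log_returns(episode_id, reward)
    client.end_episode(episode_id, obs)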
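
For the saving-experiences entry, a minimal batch-writing sketch in RLlib-compatible JSON format, assuming Ray 2.x with gymnasium; the output directory is a hypothetical placeholder:

    import gymnasium as gym

    from ray.rllib.evaluation.sample_batch_builder import SampleBatchBuilder
    from ray.rllib.offline.json_writer import JsonWriter

    batch_builder = SampleBatchBuilder()
    writer = JsonWriter("/tmp/cartpole-out")  # hypothetical output dir

    env = gym.make("CartPole-v1")
    obs, info = env.reset()
    terminated = truncated = False
    t = 0
    while not (terminated or truncated):
        action = env.action_space.sample()  # e.g., a random behavior policy
        new_obs, reward, terminated, truncated, info = env.step(action)
        batch_builder.add_values(
            t=t,
            eps_id=0,
            obs=obs,
            actions=action,
            rewards=reward,
            terminateds=terminated,
            truncateds=truncated,
            new_obs=new_obs,
        )
        obs = new_obs
        t += 1

    # Emit one episode as a JSON-encoded SampleBatch.
    writer.write(batch_builder.build_and_reset())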
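
And for the checkpoint entry, a minimal sketch, assuming Ray 2.x Tune/AIR APIs, of selecting a checkpoint from a finished Tuner.fit() run by a custom criterion (metric name and stopping condition are illustrative):

    from ray import air, tune
    from ray.rllib.algorithms.ppo import PPOConfig

    tuner = tune.Tuner(
        "PPO",
        param_space=PPOConfig().environment("CartPole-v1").to_dict(),
        run_config=air.RunConfig(
            stop={"training_iteration": 5},
            checkpoint_config=air.CheckpointConfig(checkpoint_at_end=True),
        ),
    )
    results = tuner.fit()

    # Custom criterion: the trial with the highest mean episode reward.
    best_result = results.get_best_result(
        metric="episode_reward_mean", mode="max"
    )
    print(best_result.checkpoint)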

Multi-Agent and Hierarchical

GPU Examples

Special Action- and Observation Spaces

Community Examples

  • Arena AI:

    A General Evaluation Platform and Building Toolkit for Single/Multi-Agent Intelligence with RLlib-generated baselines.

  • CARLA:

    Example of training autonomous vehicles with RLlib and the CARLA simulator.

  • The Emergence of Adversarial Communication in Multi-Agent Reinforcement Learning:

    Using graph neural networks and RLlib to train multiple cooperative and adversarial agents to solve the "cover the area" problem, thereby learning how best to communicate (or, in the adversarial case, how to disrupt communication) (code).

  • Flatland:

    A dense traffic simulating environment with RLlib-generated baselines.

  • GFootball:

    Example of setting up a multi-agent version of GFootball with RLlib.

  • Neural MMO:

    A multi-agent AI research environment inspired by Massively Multiplayer Online (MMO) role-playing games: self-contained worlds featuring thousands of agents per persistent macrocosm, diverse skilling systems, local and global economies, complex emergent social structures, and ad hoc, high-stakes single- and team-based conflict.

  • NeuroCuts:

    Example of building packet classification trees using RLlib / multi-agent in a bandit-like setting.

  • NeuroVectorizer:

    Example of learning optimal LLVM vectorization compiler pragmas for loops in C and C++ code using RLlib.

  • Roboschool / SageMaker:

    Example of training robotic control policies in SageMaker with RLlib.

  • Sequential Social Dilemma Games:

    Example of using the multi-agent API to model several social dilemma games.

  • Simple custom environment for single-agent RL with Ray 2.0, Tune, and AIR:

    Create a custom environment and train a single-agent RL policy using Ray 2.0 with Tune and AIR.

  • StarCraft2:

    Example of training in StarCraft2 maps with RLlib / multi-agent.

  • Traffic Flow:

    Example of optimizing mixed-autonomy traffic simulations with RLlib / multi-agent.