Updated on 2021.09.30 DI-engine-v0.2.0 (beta)
DI-engine is a generalized Decision Intelligence engine. It supports most basic deep reinforcement learning (DRL) algorithms, such as DQN, PPO, SAC, and domain-specific algorithms like QMIX in multi-agent RL, GAIL in inverse RL, and RND in exploration problems. Various training pipelines and customized decision AI applications are also supported. Have fun with exploration and exploitation.
- DI-engine-docs
- treevalue
- DI-treetensor (preview)
You can simply install DI-engine from PyPI with the following command:
pip install DI-engine
If you use Anaconda or Miniconda, you can install DI-engine from the opendilab conda channel with the following command:
conda install -c opendilab di-engine
For more information about installation, you can refer to installation.
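After installing, a quick sanity check can confirm that the package is importable and report its version. This is only a minimal sketch (not an official script); it assumes Python 3.8+ for `importlib.metadata` and that the distribution name matches the `pip install DI-engine` command above:

```python
# Minimal post-install check (a sketch, not part of DI-engine itself).
# Assumes Python >= 3.8 (for importlib.metadata) and that the pip
# distribution is named "DI-engine", matching the command above.
import importlib.metadata

import ding  # noqa: F401  # import package provided by DI-engine

print("DI-engine version:", importlib.metadata.version("DI-engine"))
```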
Our DockerHub repo can be found here. We provide a base image and environment images with common RL environments:
- base: opendilab/ding:nightly
- atari: opendilab/ding:nightly-atari
- mujoco: opendilab/ding:nightly-mujoco
- smac: opendilab/ding:nightly-smac
The detailed documentation is hosted on doc (Chinese version: 中文文档).
Bonus: Train an RL agent with one line of code:
ding -m serial -e cartpole -p dqn -s 0
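The same run can also be launched from Python instead of the CLI. The following is a minimal sketch, assuming the `ding.entry.serial_pipeline` entry point and the dizoo config module path used by DI-engine v0.2.x; adjust the import paths to your installed version:

```python
# A sketch of the one-line CLI command above, written as a Python script.
# `serial_pipeline` and the dizoo config path are assumptions based on
# DI-engine v0.2.x and may differ in other versions.
from ding.entry import serial_pipeline
from dizoo.classic_control.cartpole.config.cartpole_dqn_config import (
    cartpole_dqn_config,
    cartpole_dqn_create_config,
)

if __name__ == "__main__":
    # Equivalent to: ding -m serial -e cartpole -p dqn -s 0
    serial_pipeline((cartpole_dqn_config, cartpole_dqn_create_config), seed=0)
```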
No | Algorithm | Label | Implementation | Runnable Demo |
---|---|---|---|---|
1 | DQN | | policy/dqn | python3 -u cartpole_dqn_main.py / ding -m serial -c cartpole_dqn_config.py -s 0 |
2 | C51 | | policy/c51 | ding -m serial -c cartpole_c51_config.py -s 0 |
3 | QRDQN | | policy/qrdqn | ding -m serial -c cartpole_qrdqn_config.py -s 0 |
4 | IQN | | policy/iqn | ding -m serial -c cartpole_iqn_config.py -s 0 |
5 | Rainbow | | policy/rainbow | ding -m serial -c cartpole_rainbow_config.py -s 0 |
6 | SQL | | policy/sql | ding -m serial -c cartpole_sql_config.py -s 0 |
7 | R2D2 | | policy/r2d2 | ding -m serial -c cartpole_r2d2_config.py -s 0 |
8 | A2C | | policy/a2c | ding -m serial -c cartpole_a2c_config.py -s 0 |
9 | PPO | | policy/ppo | python3 -u cartpole_ppo_main.py / ding -m serial_onpolicy -c cartpole_ppo_config.py -s 0 |
10 | PPG | | policy/ppg | python3 -u cartpole_ppg_main.py |
11 | ACER | | policy/acer | ding -m serial -c cartpole_acer_config.py -s 0 |
12 | IMPALA | | policy/impala | ding -m serial -c cartpole_impala_config.py -s 0 |
13 | DDPG | | policy/ddpg | ding -m serial -c pendulum_ddpg_config.py -s 0 |
14 | TD3 | | policy/td3 | python3 -u pendulum_td3_main.py / ding -m serial -c pendulum_td3_config.py -s 0 |
15 | SAC | | policy/sac | ding -m serial -c pendulum_sac_config.py -s 0 |
16 | QMIX | | policy/qmix | ding -m serial -c smac_3s5z_qmix_config.py -s 0 |
17 | COMA | | policy/coma | ding -m serial -c smac_3s5z_coma_config.py -s 0 |
18 | QTran | | policy/qtran | ding -m serial -c smac_3s5z_qtran_config.py -s 0 |
19 | WQMIX | | policy/wqmix | ding -m serial -c smac_3s5z_wqmix_config.py -s 0 |
20 | CollaQ | | policy/collaq | ding -m serial -c smac_3s5z_collaq_config.py -s 0 |
21 | GAIL | | reward_model/gail | ding -m serial_reward_model -c cartpole_dqn_config.py -s 0 |
22 | SQIL | | entry/sqil | ding -m serial_sqil -c cartpole_sqil_config.py -s 0 |
23 | HER | | reward_model/her | python3 -u bitflip_her_dqn.py |
24 | RND | | reward_model/rnd | python3 -u cartpole_ppo_rnd_main.py |
25 | CQL | | policy/cql | python3 -u d4rl_cql_main.py |
26 | PER | | worker/replay_buffer | rainbow demo |
27 | GAE | | rl_utils/gae | ppo demo |
28 | D4PG | | policy/d4pg | python3 -u pendulum_d4pg_config.py |
The Label column marks:
- discrete action space (the only label used for the standard DRL algorithms, No. 1-15)
- continuous action space (the only label used for the standard DRL algorithms, No. 1-15)
- distributed training (collector-learner parallel) RL algorithms
- multi-agent RL algorithms
- RL algorithms related to exploration and sparse reward
- imitation learning, including behaviour cloning, inverse RL, and adversarial structured IL
- other sub-direction algorithms, usually used as a plug-in in the whole pipeline
P.S.: The .py files in the Runnable Demo column can be found in dizoo.
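These demo scripts generally follow the same pattern: import the paired `*_config` / `*_create_config` from dizoo, optionally adjust a few fields, and pass both to a serial entry. A hedged sketch, under the same DI-engine v0.2.x assumptions as above (field names such as `exp_name` and `policy.learn.learning_rate` come from the default cartpole DQN config and may differ in other demos):

```python
# A sketch of customizing a dizoo demo config before training.
# Module paths, entry function and config fields are assumptions
# based on DI-engine v0.2.x defaults.
from copy import deepcopy

from ding.entry import serial_pipeline
from dizoo.classic_control.cartpole.config.cartpole_dqn_config import (
    cartpole_dqn_config,
    cartpole_dqn_create_config,
)

main_config = deepcopy(cartpole_dqn_config)
main_config.exp_name = "cartpole_dqn_custom"    # output directory for logs/checkpoints
main_config.policy.learn.learning_rate = 1e-3   # example hyper-parameter tweak

serial_pipeline((main_config, deepcopy(cartpole_dqn_create_config)), seed=0)
```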
No | Environment | Label | Visualization | dizoo link |
---|---|---|---|---|
1 | atari | | | dizoo link |
2 | box2d/bipedalwalker | | | dizoo link |
3 | box2d/lunarlander | | | dizoo link |
4 | classic_control/cartpole | | | dizoo link |
5 | classic_control/pendulum | | | dizoo link |
6 | competitive_rl | | | dizoo link |
7 | gfootball | | | dizoo link |
8 | minigrid | | | dizoo link |
9 | mujoco | | | dizoo link |
10 | multiagent_particle | | | dizoo link |
11 | overcooked | | | dizoo link |
12 | procgen | | | dizoo link |
13 | pybullet | | | dizoo link |
14 | smac | | | dizoo link |
15 | d4rl | | | dizoo link |
16 | league_demo | | | dizoo link |
17 | pomdp atari | | | dizoo link |
18 | bsuite | | | dizoo link |
19 | ImageNet | | | dizoo link |
20 | slime_volleyball | | | dizoo link |
The Label column marks:
- multi-agent RL environments
- environments related to exploration and sparse reward
- imitation learning or supervised learning datasets
- environments that allow agent-vs-agent battle
P.S.: Some Atari environments, such as MontezumaRevenge, are also of the sparse-reward type.
We appreciate all contributions that improve DI-engine, both algorithms and system designs. Please refer to CONTRIBUTING.md for more guidance, and our roadmap can be accessed via this link.
Users can also join our Slack channel or our forum for more detailed discussion.
For future plans or milestones, please refer to our GitHub Projects.
@misc{ding,
title={{DI-engine: OpenDILab} Decision Intelligence Engine},
author={DI-engine Contributors},
publisher = {GitHub},
howpublished = {\url{https://github.com/opendilab/DI-engine}},
year={2021},
}
DI-engine is released under the Apache 2.0 license.