OfflineRL-Lib provides unofficial and benchmarked PyTorch implementations for selected OfflineRL algorithms, including:
- In-Sample Actor Critic (InAC)
- Extreme Q-Learning (XQL)
- Implicit Q-Learning (IQL)
- Decision Transformer (DT)
- Advantage-Weighted Actor Critic (AWAC)
- TD3-BC
- TD7
For Model-Based algorithms, please check OfflineRL-Kit!
- We benchmark and visualize the results via WandB. Click the following WandB links, and group the runs via the entry `task` (for offline experiments) or `env` (for online experiments).
- Available Runs
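The same grouping the WandB UI performs by the `task` (or `env`) entry can be sketched programmatically. The helper below is a hypothetical illustration, not part of OfflineRL-Lib; the run names and task identifiers are made up for the example:

```python
from collections import defaultdict

def group_by_entry(runs, key):
    """Group (run_name, config) pairs by a config entry such as 'task' or 'env'."""
    grouped = defaultdict(list)
    for name, config in runs:
        grouped[config.get(key, "unknown")].append(name)
    return dict(grouped)

# Hypothetical run metadata, mirroring how the WandB UI groups
# offline runs by their "task" entry:
runs = [
    ("iql-seed0", {"task": "halfcheetah-medium-v2"}),
    ("iql-seed1", {"task": "halfcheetah-medium-v2"}),
    ("xql-seed0", {"task": "hopper-medium-v2"}),
]
print(group_by_entry(runs, "task"))
# → {'halfcheetah-medium-v2': ['iql-seed0', 'iql-seed1'], 'hopper-medium-v2': ['xql-seed0']}
```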
If you use OfflineRL-Lib in your work, please cite it with the following bibtex:

```bibtex
@misc{offlinerllib,
  author = {Chenxiao Gao},
  title = {OfflineRL-Lib: Benchmarked Implementations of Offline RL Algorithms},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/typoverflow/OfflineRL-Lib}},
}
```
We thank CORL for providing the finetuned hyperparameters.