CORL for Unitree and Fetch tasks (Clean Offline Reinforcement Learning)

🧵 CORL is an Offline Reinforcement Learning library that provides high-quality and easy-to-follow single-file implementations of SOTA ORL algorithms. Each implementation is backed by a research-friendly codebase, allowing you to run or tune thousands of experiments. Heavily inspired by cleanrl for online RL; check them out too!

  • 📜 Single-file implementation
  • 📈 Benchmarked Implementation for N algorithms
  • 🖼 Weights and Biases integration

  • new⭐ Preloaded Datasets with > 700,000 transitions in D4RL format
  • new⭐ Model Saving
  • new⭐ Video Saving for Fetch Tasks

Datasets

Datasets are available on Google Drive.

Current datasets:

| Environment | Dataset | Pretrained model | GIF |
|---|---|---|---|
| FetchReach | | ReBRAC ✅, IQL ❌ | |
| FetchPush | | ReBRAC ✅, IQL ❌ | |
| FetchPickAndPlace | | ReBRAC ✅, IQL ❌ | |
| FetchSlide | | ReBRAC ✅, IQL ❌ | |
| Unitree A1 ground task (MetaGym) | | ReBRAC ✅, IQL ❌ | |

All Fetch task datasets were collected with DDPG+HER from the original repo.

The Unitree A1 dataset was collected with ETG RL.
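
The datasets follow the D4RL convention of a flat dictionary of aligned arrays, one row per transition. Below is a minimal loading sketch; the file path and the pickle container are assumptions, so adjust it to the actual files you download from Google Drive (they may use a different format such as .npz or .hdf5):

```python
import pickle
import numpy as np

# Path and pickle container are assumptions; point this at a downloaded dataset file.
DATASET_PATH = "data/datasets/FetchPush.pkl"

with open(DATASET_PATH, "rb") as f:
    dataset = pickle.load(f)

# D4RL format: a flat dict of aligned numpy arrays, one entry per transition.
for key in ("observations", "actions", "rewards", "next_observations", "terminals"):
    print(key, np.asarray(dataset[key]).shape)
```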

Be aware that the Fetch environments now work with Gymnasium, not Gym, while the Unitree A1 task still uses Gym.
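
The difference matters mainly for the reset/step signatures. A minimal sketch of the Gymnasium side, assuming gymnasium-robotics is installed; the FetchPush-v2 ID is an assumption, so check which IDs your installed version registers:

```python
import gymnasium
import gymnasium_robotics  # noqa: F401  (importing registers the Fetch* environments)

# Env ID is an assumption; verify it against your gymnasium-robotics version.
env = gymnasium.make("FetchPush-v2")

obs, info = env.reset(seed=0)                                 # Gymnasium: reset returns (obs, info)
action = env.action_space.sample()
obs, reward, terminated, truncated, info = env.step(action)   # Gymnasium: step returns five values
done = terminated or truncated

# The Unitree A1 MetaGym task keeps the legacy Gym API instead:
#   obs = env.reset()
#   obs, reward, done, info = env.step(action)
```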

Fancy Wandb report on ReBRAC training with the datasets above.

(maybe not so fancy yet, but it will be soon)

Model evaluation

To evaluate a saved model:

python3 test_eval.py --env_name FetchPush --config_path data/saved_models/FetchPush/config.json --model_path data/saved_models/FetchPush/actor_state999.pkl --num_episodes 20
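
test_eval.py is the actual entry point; the sketch below only illustrates the shape of such an evaluation loop under stated assumptions. The env ID, the contents of the pickled actor state, and the act() placeholder are assumptions, not the repo's API:

```python
import json
import pickle

import gymnasium
import gymnasium_robotics  # noqa: F401  (registers Fetch* environments)
import numpy as np

# Paths mirror the command above; how the config and actor are used is an assumption.
with open("data/saved_models/FetchPush/config.json") as f:
    config = json.load(f)  # in the repo, this would drive actor reconstruction
with open("data/saved_models/FetchPush/actor_state999.pkl", "rb") as f:
    actor_state = pickle.load(f)  # assumed: pickled actor parameters

env = gymnasium.make("FetchPush-v2")  # env ID assumed

def act(actor_state, obs):
    """Placeholder policy: replace with a forward pass through the ReBRAC actor."""
    return env.action_space.sample()

returns = []
for _ in range(20):  # --num_episodes
    obs, info = env.reset()
    done, episode_return = False, 0.0
    while not done:
        obs, reward, terminated, truncated, info = env.step(act(actor_state, obs))
        episode_return += float(reward)
        done = terminated or truncated
    returns.append(episode_return)

print(f"Mean return over {len(returns)} episodes: {np.mean(returns):.2f}")
```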

Getting started

Docker (doesn't work yet; use Anaconda instead)

git clone https://github.com/nikisim/ReBRAC_for_robotics_tasks.git && cd ReBRAC_for_robotics_tasks
pip install -r requirements/requirements_dev.txt

# alternatively, you could use docker
docker build -t <image_name> .
docker run --gpus=all -it --rm --name <container_name> <image_name>
