CuGRO: Continual Offline Reinforcement Learning via Diffusion-based Dual Generative Replay

Overview

[Figure: overview of the CuGRO framework]

Installation instructions

conda env create -f environment.yaml
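
After creating the environment, activate it before running any commands. The environment name below is a placeholder, not taken from the repository; use the name declared at the top of environment.yaml.

    # "cugro" is a placeholder -- use the env name defined in environment.yaml
    conda activate cugro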

Download dataset

CuGRO is tested on two classical benchmarks: MuJoCo and Meta-World. The collected datasets can be downloaded here.
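
As a hypothetical example of unpacking the download (the archive name and target directory below are assumptions, not taken from the repository; use the actual file name from the link and the path expected by the code):

    # Hypothetical names -- replace with the real archive and data directory
    unzip cugro_datasets.zip -d data/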

Running

Running experiments with our code is straightforward. You can run all benchmarks by executing the shell script:

sh run.sh

Alternatively, you can run CuGRO directly with the following commands, taking "cheetah_vel" as an example (a single-GPU variant is sketched after these steps):

  1. Train the state generator and the behavior generator:

    CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 main-gene.py --env "cheetah_vel" --data_mode "gene" --actor_type "large" --diffusion_steps 100

  2. Train the critic model and plot the results from the logs of all sequential tasks:

    python critic.py --env "cheetah_vel" --data_mode "gene" --actor_type "large" --diffusion_steps 100 --gpu 0
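
If only one GPU is available, the generator training can usually be launched with a single process. This is a sketch under that assumption (it presumes the script runs correctly with a world size of 1), not a command taken from the repository:

    # Single-GPU sketch: one process on device 0, same arguments as above
    CUDA_VISIBLE_DEVICES=0 python -m torch.distributed.launch --nproc_per_node=1 main-gene.py --env "cheetah_vel" --data_mode "gene" --actor_type "large" --diffusion_steps 100

Note that recent PyTorch versions deprecate torch.distributed.launch in favor of torchrun, which passes the local rank via the LOCAL_RANK environment variable rather than a --local_rank argument; keep the original launcher unless you have verified the script handles that.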
