CQL

A simple and modular implementation of the Conservative Q-Learning (CQL) and Soft Actor-Critic (SAC) algorithms in PyTorch.
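At its core, CQL adds a conservative regularizer to the standard SAC Bellman loss: it pushes Q-values down on actions sampled broadly (e.g. from the current policy and a uniform distribution) and up on actions from the dataset. The following is a minimal PyTorch sketch of that penalty, not the exact code in this repo; all names are illustrative, and the real implementation lives in SimpleSAC/conservative_sac.py.

import torch

def cql_penalty(q_network, observations, actions, sampled_actions, min_q_weight=5.0):
    # q_network(obs, act) -> Q-values with shape (batch,).
    # sampled_actions: candidate actions with shape (batch, n_samples, act_dim).
    # Names here are illustrative; see SimpleSAC/conservative_sac.py.
    batch, n_samples, act_dim = sampled_actions.shape
    obs_rep = observations.unsqueeze(1).expand(-1, n_samples, -1)
    obs_rep = obs_rep.reshape(batch * n_samples, -1)
    q_sampled = q_network(obs_rep, sampled_actions.reshape(batch * n_samples, act_dim))
    q_sampled = q_sampled.reshape(batch, n_samples)
    # Push Q down on sampled actions (logsumexp) and up on dataset actions.
    gap = torch.logsumexp(q_sampled, dim=1) - q_network(observations, actions)
    return min_q_weight * gap.mean()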

If you like JAX, check out my reimplementation of this codebase in JAX, which runs 4 times faster.

Installation

  1. Install and use the included Anaconda environment
$ conda env create -f environment.yml
$ source activate SimpleSAC

You'll need to get your own MuJoCo key if you want to use MuJoCo.

  2. Add this repo directory to your PYTHONPATH environment variable.
export PYTHONPATH="$PYTHONPATH:$(pwd)"

Run Experiments

You can run SAC experiments using the following command:

python -m SimpleSAC.sac_main \
    --env 'HalfCheetah-v2' \
    --logging.output_dir './experiment_output'

All available command options can be seen in SimpleSAC/sac_main.py and SimpleSAC/sac.py.

You can run CQL experiments using the following command:

python -m SimpleSAC.conservative_sac_main \
    --env 'halfcheetah-medium-v0' \
    --logging.output_dir './experiment_output'

If you want to run on CPU only, just add the --device='cpu' option. All available command options can be seen in SimpleSAC/conservative_sac_main.py and SimpleSAC/conservative_sac.py.

Visualize Experiments

You can visualize the experiment metrics with viskit:

python -m viskit './experiment_output'

and simply navigate to http://localhost:5000/

Weights and Biases Online Visualization Integration

This codebase can also log to the W&B online visualization platform. To log to W&B, you first need to set your W&B API key environment variable:

export WANDB_API_KEY='YOUR W&B API KEY HERE'

Then you can run experiments with W&B logging turned on:

python -m SimpleSAC.conservative_sac_main \
    --env 'halfcheetah-medium-v0' \
    --logging.output_dir './experiment_output' \
    --device='cuda' \
    --logging.online
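Under the hood, --logging.online enables standard W&B logging, which the repo's own logger configures for you. For orientation, here is a minimal standalone sketch of that pattern; the project and metric names are illustrative, not what the codebase uses.

import wandb

# Illustrative project/config/metric names; the repo's logger sets these up.
wandb.init(project="cql-experiments", config={"env": "halfcheetah-medium-v0"})
for step in range(1000):
    wandb.log({"average_return": 0.0}, step=step)  # placeholder metric
wandb.finish()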

Results of Running CQL on D4RL Environments

To save you time and compute, I've run a sweep of CQL on several D4RL environments with various min Q weight values. The results can be seen here. You can choose the environment to visualize by filtering on env. The result for each cql.cql_min_q_weight on each env is averaged across 3 random seeds.
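If you want to run a similar sweep yourself, a minimal launcher sketch follows. The --cql.cql_min_q_weight and --seed flags are assumptions based on the config names above; verify them in SimpleSAC/conservative_sac_main.py before use.

import itertools
import subprocess

# Hypothetical sweep launcher: one CQL run per (min Q weight, seed) pair.
# Flag names are assumptions; check SimpleSAC/conservative_sac_main.py.
for weight, seed in itertools.product([1.0, 5.0, 10.0], range(3)):
    subprocess.run([
        "python", "-m", "SimpleSAC.conservative_sac_main",
        "--env=halfcheetah-medium-v0",
        f"--cql.cql_min_q_weight={weight}",
        f"--seed={seed}",
        "--logging.output_dir=./experiment_output",
    ], check=True)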

Credits

The project organization is inspired by TD3. The SAC implementation is based on rlkit. The CQL implementation is based on CQL. The viskit visualization is taken from viskit, which is taken from rllab.
