This repository contains the implementations of the algorithms used in our paper, *Accelerating Reinforcement Learning with Value-Conditional State Entropy Exploration* ([arXiv:2305.19476](https://arxiv.org/abs/2305.19476)).
The repository is organized into several folders, each containing the implementation of a specific algorithm:
- `VCSE_A2C/`: Contains the implementation of the A2C algorithm.
- `VCSE_DrQv2/`: Contains the implementation of the DrQ-v2 algorithm.
- `VCSE_MWM/`: Contains the implementation of the MWM algorithm.
- `VCSE_SAC/`: Contains the implementation of the SAC algorithm.
If you would like to refer to this research, please cite:
```bibtex
@article{kim2023accelerating,
  title={Accelerating Reinforcement Learning with Value-Conditional State Entropy Exploration},
  author={Kim, Dongyoung and Shin, Jinwoo and Abbeel, Pieter and Seo, Younggyo},
  journal={arXiv preprint arXiv:2305.19476},
  year={2023}
}
```
Our code is built on top of the RE3 + A2C implementation from RE3. The training code can be found in the rl-starter-files directory, which is a fork of rl-starter-files; the A2C implementation is a fork of torch-ac.
Refer to the README in `VCSE_A2C` for installation details and instructions.
Our code is built on top of the drqv2 repository.
Refer to the README in `VCSE_DrQv2` for installation details and instructions.
Our code is built on top of the MWM repository.
Refer to the README in `VCSE_MWM` for installation details and instructions.
Our code is built on top of the pytorch_sac repository.
Refer to the README in `VCSE_SAC` for installation details and instructions.