
ITER_KER_GER

Description

This repo accompanies the paper Invariant Transform Experience Replay, which was submitted to ICRA 2020.

Deep reinforcement learning (DRL) is a promising approach for adaptive robot control, but its application to robotics is currently hindered by high sample requirements. We propose two novel data augmentation techniques for DRL based on invariant transformations of trajectories, in order to reuse observed interactions more efficiently. The first, called Kaleidoscope Experience Replay, exploits reflectional symmetries, while the second, called Goal-augmented Experience Replay, takes advantage of lax goal definitions. In the Fetch tasks from OpenAI Gym, our experimental results show a large increase in learning speed.
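As a rough intuition for KER (a toy sketch only, not the code in this repo; the plane location and coordinates below are made up), reflecting every position in a trajectory about a vertical plane through the workspace yields another valid trajectory for a symmetric task:

# Toy sketch of a KER-style reflection about a vertical plane (illustration only).
import numpy as np

def reflect_about_plane(point, plane_point, plane_normal):
    # Reflect a 3D point about the plane through plane_point with normal plane_normal.
    n = plane_normal / np.linalg.norm(plane_normal)
    return point - 2.0 * np.dot(point - plane_point, n) * n

# Example: mirror a goal position about the x-z plane passing through y = 0.75.
goal = np.array([1.30, 0.60, 0.45])
mirrored_goal = reflect_about_plane(goal, np.array([0.0, 0.75, 0.0]), np.array([0.0, 1.0, 0.0]))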

This repo is built on top of OpenAI Baselines and OpenAI Gym.

Installation

This implementation requires the OpenAI Baselines module to be installed. After installing it, create a new folder for this repo and enter it.

mkdir ITER_KER_GER && cd $_

Clone the code in this repo.

git clone git@github.com:birlrobotics/ITER_KER_GER.git

Finally, copy the files in the ITER_KER_GER/her folder into baselines/baselines/.

cp -rf her ~/baselines/baselines/

Usage

To reproduce the results in our paper, please run:

python -m baselines.run --alg=her --env=FetchPickAndPlace-v1 --num_timesteps=1e6 --n_cycles=100 --save_path=/home/user/policies/her/iter --log_path=/home/user/log_data/her/iter --before_PER_minibatch_size=256 --n_rsym=8 --n_PER=4

Options include (an example run that varies them is shown after this list):

  • --num_cpu: Number of workers (threads/CPUs). The results in our paper used only 1 worker in order to show the significant improvement in learning speed. This HER implementation is the one presented in the original HER paper. (Please note that, as the HER authors have said, running the code with different numbers of CPUs is NOT equivalent. For more information about this issue, please check here.)
  • --env: To specify the experimental environment for each run. Possible choices are FetchPickAndPlace-v1, FetchSlide-v1, and FetchPush-v1. (More environments on the Baxter robot will be added in the near future, so please keep an eye on our repo :). )
  • --before_PER_minibatch_size: To specify the original minibatch size.
  • --n_rsym: To specify the KER hyperparameter, i.e., how many reflectional planes are used to augment the samples. For more information, please check out our paper.
  • --n_PER: To specify the GER hyperparameter, i.e., how many transitions' goals are augmented. For more information, please check out our paper.
  • --log_path: To specify the path where log data is saved.
  • --save_path: To specify the path where policy parameters are saved.
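For example (the paths are placeholders and the hyperparameter values are only illustrative), a run with more reflectional planes and fewer goal augmentations could look like:

python -m baselines.run --alg=her --env=FetchPush-v1 --num_timesteps=1e6 --n_cycles=100 --save_path=/home/user/policies/her/iter_push --log_path=/home/user/log_data/her/iter_push --before_PER_minibatch_size=256 --n_rsym=16 --n_PER=2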

Loading and visualizing models

This page from OpenAI Baselines gives good guidance on loading and visualizing models.
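Since training is launched through baselines.run, a saved policy should be replayable with the standard Baselines flags, e.g. by pointing --load_path at the directory you passed to --save_path during training (the command below is only a sketch; adjust the path to your setup):

python -m baselines.run --alg=her --env=FetchPickAndPlace-v1 --num_timesteps=0 --load_path=/home/user/policies/her/iter --play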

More Information

For more information please check:

  1. Website
  2. Paper
  3. Youtube, Youku

Credits

ITER_KER_GER is maintained by the BIRL Intelligent Manipulation team. Contributors include:
