Iou-Jen Liu, Raymond A. Yeh, Alexander G. Schwing
University of Illinois at Urbana-Champaign
This repository contains a PyTorch implementation of High-Throughput Synchronous Deep RL (HTS-RL).
If you use this code in your experiments or find it helpful, please consider citing the following paper:
@inproceedings{LiuNEURIPS2020,
  author = {I.-J. Liu and R. Yeh and A.~G. Schwing},
  title = {{High-Throughput Synchronous Deep RL}},
  booktitle = {Proc. NeurIPS},
  year = {2020},
}
- Platform: Ubuntu 16.04
- GPU: GEFORCE GTX 1080
- Conda 4.8.3
- Dependencies:
cd scripts
sh run_ours_eg.sh
sh run_ours_egc.sh
sh run_ours_3vs1.sh
sh run_ours_psk.sh
sh run_ours_rpsk.sh
sh run_ours_rs.sh
sh run_ours_rsk.sh
sh run_ours_ce.sh
sh run_ours_ch.sh
sh run_ours_corner.sh
sh run_ours_lazy.sh
Training logs will be dumped to /tmp/hts-rl_results.
To test the determinism of HTS-RL, please run
cd test
python -m unittest test_deterministic.py
The test runs HTS-RL twice with the same random seed and compares the trajectories (actions, action logits, observations, predicted values) generated by the two runs. Before running the test, please install all dependencies in a Conda environment.
The PPO code is based on the PPO implementation from https://github.com/ikostrikov/pytorch-a2c-ppo-acktr-gail.
HTS-RL is licensed under the MIT License.