Nostrademous/Dota2_DPPO_bots


Pytorch-DPPO

PyTorch implementation of Distributed Proximal Policy Optimization (DPPO, https://arxiv.org/abs/1707.02286), using the PPO clipped surrogate loss from https://arxiv.org/pdf/1707.06347.pdf.
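For reference, the PPO clipped surrogate loss mentioned above can be sketched in PyTorch as follows. This is a minimal illustration of the objective from the PPO paper, not code taken from this repository; the function name and signature are hypothetical.

```python
import torch

def ppo_clip_loss(new_logp, old_logp, advantages, clip_eps=0.2):
    """Illustrative PPO clipped surrogate loss (hypothetical helper).

    new_logp / old_logp: log-probabilities of the taken actions under the
    current and the old policy; advantages: estimated advantages A_t.
    """
    # Probability ratio r_t = pi_new(a|s) / pi_old(a|s)
    ratio = torch.exp(new_logp - old_logp)
    # Unclipped and clipped surrogate objectives
    surr1 = ratio * advantages
    surr2 = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # PPO maximizes the elementwise minimum; negate for gradient descent
    return -torch.min(surr1, surr2).mean()
```

The clipping keeps the policy update within a trust region around the old policy: whenever the ratio moves outside [1 - eps, 1 + eps], the gradient through the clipped term vanishes.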

Work in progress.

Requirements

Python 3.6, PyTorch, CMake

Install

On Windows, build with cmake-gui and Visual Studio. On Linux, run ./build.sh.

Run

On Linux: ./run_with_log.sh
On Windows: python .\main.py cppSimulator

Acknowledgments

The structure of this code is based on https://github.com/alexis-jacq/Pytorch-DPPO.

Hyperparameters and loss computation have been taken from https://github.com/openai/baselines.
