This is a clean and robust PyTorch implementation of PPO on discrete action spaces. Here is the result:
All the experiments are trained with the same hyperparameters. Other RL algorithms implemented in PyTorch can be found here.
gym==0.18.3
numpy==1.21.2
torch==1.8.1
tensorboard==2.5.0
Run 'python main.py', where the default environment is CartPole-v1.
To render a trained model, run 'python main.py --write False --render True --Loadmodel True --ModelIdex 300000'.
If you want to train on a different environment, just run 'python main.py --EnvIdex 1'.
The --EnvIdex flag can be set to 0 or 1, where
'--EnvIdex 0' for 'CartPole-v1'
'--EnvIdex 1' for 'LunarLander-v2'
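The index-to-environment mapping above might look like the following sketch; the actual variable and function names in 'main.py' may differ, so treat these identifiers as hypothetical:

```python
# Hypothetical mapping from --EnvIdex to a gym environment name.
# The real names in main.py may differ; this only illustrates the scheme.
EnvNames = ['CartPole-v1', 'LunarLander-v2']

def env_name_from_index(env_idex):
    """Return the gym environment id for a given --EnvIdex value."""
    if not 0 <= env_idex < len(EnvNames):
        raise ValueError(f'--EnvIdex must be between 0 and {len(EnvNames) - 1}')
    return EnvNames[env_idex]

# The selected name would then be passed to gym.make(...), e.g.:
# env = gym.make(env_name_from_index(0))
```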
You can use tensorboard to visualize the training curves. Historical training curves are saved in '\runs'.
For more details of the hyperparameter settings, please check 'main.py'.
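The command-line flags used above are presumably parsed with argparse; a hedged sketch follows (flag names are taken from the commands above, but defaults and any extra hyperparameters in 'main.py' are assumptions):

```python
# A sketch of how main.py might parse the flags shown above with argparse.
# Defaults here are illustrative assumptions, not the repository's values.
import argparse

def str2bool(v):
    """Parse 'True'/'False' strings; bool('False') would wrongly be True."""
    return str(v).lower() in ('true', '1', 'yes')

parser = argparse.ArgumentParser()
parser.add_argument('--EnvIdex', type=int, default=0,
                    help='0: CartPole-v1, 1: LunarLander-v2')
parser.add_argument('--write', type=str2bool, default=True,
                    help='write training curves to tensorboard')
parser.add_argument('--render', type=str2bool, default=False,
                    help='render the environment')
parser.add_argument('--Loadmodel', type=str2bool, default=False,
                    help='load a saved checkpoint')
parser.add_argument('--ModelIdex', type=int, default=300000,
                    help='training step of the checkpoint to load')

opt = parser.parse_args([])  # main.py would parse sys.argv instead
```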
References:
Proximal Policy Optimization Algorithms
Emergence of Locomotion Behaviours in Rich Environments
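The core of the algorithm is the clipped surrogate objective from 'Proximal Policy Optimization Algorithms'. A minimal NumPy sketch of that loss (the repository itself implements it in PyTorch; function and variable names here are illustrative):

```python
# Minimal sketch of PPO's clipped surrogate loss:
#   L = -E[ min(r * A, clip(r, 1 - eps, 1 + eps) * A) ]
# where r is the probability ratio pi_new/pi_old and A the advantage.
import numpy as np

def ppo_clip_loss(ratio, advantage, clip_eps=0.2):
    """Negative clipped surrogate objective, averaged over samples."""
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    return -np.mean(np.minimum(unclipped, clipped))

# Example: a ratio above 1 + eps is clipped, limiting the policy update.
ratio = np.array([1.5, 0.9])
advantage = np.array([1.0, -1.0])
loss = ppo_clip_loss(ratio, advantage)
```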