Accelerated Policy Gradient: On the Convergence Rates of the Nesterov Momentum for Reinforcement Learning [arXiv](https://arxiv.org/abs/2310.11897)
Yen-Ju Chen, Nai-Chieh Huang, Ching-Pei Lee, Ping-Chun Hsieh
41st International Conference on Machine Learning (ICML 2024)
.
├── rl-baselines3-zoo/
├── spinningup/
├── true-gradient-mdp/
├── .gitignore
├── LICENSE
└── Readme.md
- To reproduce the Atari 2600 experiments, please refer to the rl-baselines3-zoo directory.
- To reproduce the Bipedal-Walker experiment, please refer to the spinningup directory.
- To reproduce the numerical validation on MDPs with the true gradient, please refer to the true-gradient-mdp directory; a minimal sketch of the accelerated update is given below.
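The method studied in the paper, Accelerated Policy Gradient (APG), applies Nesterov momentum to policy gradient ascent under the tabular softmax parameterization. The following is only a rough sketch of that kind of update on a toy problem, not the repository's code: the 2-state MDP, the step size eta, and the (t-1)/(t+2) momentum schedule are illustrative assumptions.

```python
import numpy as np

# Toy 2-state, 2-action MDP (illustrative assumption, not from the paper).
n_states, n_actions, gamma = 2, 2, 0.9
P = np.zeros((n_states, n_actions, n_states))          # P[s, a, s']
P[0, 0] = [0.9, 0.1]; P[0, 1] = [0.2, 0.8]
P[1, 0] = [0.7, 0.3]; P[1, 1] = [0.1, 0.9]
R = np.array([[1.0, 0.0],
              [0.5, 2.0]])                              # R[s, a]
rho = np.array([0.5, 0.5])                              # initial state distribution

def softmax_policy(theta):
    """Tabular softmax parameterization: pi(a|s) = softmax(theta[s, :])."""
    z = theta - theta.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def value_and_grad(theta):
    """Exact V(rho) and the true softmax policy gradient:
    dV(rho)/dtheta[s,a] = (1/(1-gamma)) * d_rho(s) * pi(a|s) * A(s,a)."""
    pi = softmax_policy(theta)
    P_pi = np.einsum('sa,sap->sp', pi, P)               # state transition matrix under pi
    r_pi = np.einsum('sa,sa->s', pi, R)                 # expected reward per state
    v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
    q = R + gamma * np.einsum('sap,p->sa', P, v)
    adv = q - v[:, None]                                # advantage A(s, a)
    d_rho = (1 - gamma) * rho @ np.linalg.inv(np.eye(n_states) - gamma * P_pi)
    grad = (1.0 / (1 - gamma)) * d_rho[:, None] * pi * adv
    return rho @ v, grad

# Policy gradient ascent with Nesterov momentum (step size and schedule assumed).
eta = 0.2
theta = np.zeros((n_states, n_actions))
theta_prev = theta.copy()
for t in range(1, 1001):
    omega = theta + ((t - 1) / (t + 2)) * (theta - theta_prev)  # momentum extrapolation
    _, g = value_and_grad(omega)
    theta_prev = theta
    theta = omega + eta * g                                      # gradient ascent at omega

v_final, _ = value_and_grad(theta)
print(f"V(rho) after accelerated updates: {v_final:.4f}")
```

For the actual experiment settings and hyperparameters, please see the true-gradient-mdp directory.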
If you find our repository helpful to your research, please cite our paper:
@article{chen2024accelerated,
  title={Accelerated Policy Gradient: On the Convergence Rates of the Nesterov Momentum for Reinforcement Learning},
  author={Chen, Yen-Ju and Huang, Nai-Chieh and Lee, Ching-Pei and Hsieh, Ping-Chun},
  journal={arXiv preprint arXiv:2310.11897},
  year={2024}
}