Iou-Jen Liu*, Raymond A. Yeh*, Alexander G. Schwing
University of Illinois at Urbana-Champaign
(* indicates equal contribution)
This repository contains a PyTorch implementation of MADDPG with a Permutation Invariant Critic (PIC).
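The core idea of a permutation invariant critic is that its value estimate should not change when the agents are reordered. The repository's actual critic is a graph convolutional network with max pooling; the sketch below is a simplified illustration of the same principle (class name, dimensions, and the plain MLP encoder are illustrative assumptions, not code from this repository): a shared per-agent encoder followed by a symmetric max-pooling operation over the agent dimension.

```python
import torch
import torch.nn as nn

class PermInvariantCritic(nn.Module):
    """Illustrative permutation invariant critic (not the repo's GCN critic):
    a weight-shared per-agent encoder followed by max pooling over agents,
    so reordering the agents leaves the output unchanged."""
    def __init__(self, obs_act_dim, hidden_dim=64):
        super().__init__()
        # Same encoder weights are applied to every agent's (obs, action) vector.
        self.encoder = nn.Sequential(
            nn.Linear(obs_act_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.value_head = nn.Linear(hidden_dim, 1)

    def forward(self, x):
        # x: (batch, n_agents, obs_act_dim)
        h = self.encoder(x)            # per-agent features, shared weights
        pooled, _ = h.max(dim=1)       # symmetric pooling -> permutation invariance
        return self.value_head(pooled)  # (batch, 1) joint value estimate
```

Because max pooling is symmetric in its inputs, permuting the agent dimension of `x` provably yields the same critic output.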
If you use this code in your experiments or find it helpful, please consider citing the following paper:
```
@inproceedings{LiuCORL2019,
  author = {I.-J. Liu$^\ast$ and R.~A. Yeh$^\ast$ and A.~G. Schwing},
  title = {PIC: Permutation Invariant Critic for Multi-Agent Deep Reinforcement Learning},
  booktitle = {Proc. CORL},
  year = {2019},
  note = {$^\ast$ equal contribution},
}
```
Requirements:
- Ubuntu 16.04
- Python 3.7
- PyTorch 1.1.0
- OpenAI gym 0.10.9 (https://github.com/openai/gym)
- matplotlib
- numba 0.43.1
- llvmlite 0.32.1
Install the multi-agent particle environments:

```
cd multiagent-particle-envs
pip install -e .
```
Please ensure that `multiagent-particle-envs` has been added to your `PYTHONPATH`.
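For example (the path below is a placeholder for wherever you cloned the repository; `multiagent` is the Python package the environments provide):

```shell
# Add the environments to PYTHONPATH (adjust the path to your checkout).
export PYTHONPATH=$PYTHONPATH:/path/to/multiagent-particle-envs

# Quick sanity check: this should exit without an ImportError.
python -c "import multiagent"
```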
To train MADDPG with a permutation invariant critic on the six-agent cooperative navigation task, run:

```
cd maddpg
python main_vec.py --exp_name coop_navigation_n6 --scenario simple_spread_n6 --critic_type gcn_max --cuda
```
The MADDPG code is based on the DDPG implementation at https://github.com/ikostrikov/pytorch-ddpg-naf.
The improved MPE code is based on the MPE implementation at https://github.com/openai/multiagent-particle-envs.
The GCN code is based on the implementation at https://github.com/tkipf/gcn.
PIC is licensed under the MIT License.