EMU: Efficient Episodic Memory Utilization of Cooperative Multi-agent Reinforcement Learning

Note

This codebase accompanies the paper "Efficient Episodic Memory Utilization of Cooperative Multi-agent Reinforcement Learning (EMU)" and is built on the open-sourced GRF, PyMARL, and SMAC codebases. The paper has been accepted at ICLR 2024 and is available on OpenReview and arXiv.

PyMARL is WhiRL's framework for deep multi-agent reinforcement learning. Our code builds on it and includes implementations of EMU combined with QPLEX and CDS.

Run an experiment

Note: Please use the updated configuration files for experiments; we have corrected some errors in the previously uploaded configurations. To train EMU(QPLEX) on SC2 (StarCraft II) tasks, run the following command:

python3 src/main.py --config=EMU_sc2 --env-config=sc2 with env_args.map_name=5m_vs_6m

For EMU(CDS), please change the config file to EMU_sc2_cds.
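
For example, keeping the other arguments the same as in the EMU(QPLEX) command above, the run would look like:

python3 src/main.py --config=EMU_sc2_cds --env-config=sc2 with env_args.map_name=5m_vs_6m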

To train EMU(QPLEX) on GRF (Google Research Football) tasks, run the following command:

python3 src/main.py --config=EMU_grf --env-config=academy_3_vs_1_with_keeper

For EMU(CDS), please change the config file to EMU_grf_cds.
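
For example, assuming the same GRF scenario as above:

python3 src/main.py --config=EMU_grf_cds --env-config=academy_3_vs_1_with_keeper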

Publication

If you find this repository useful, please cite our paper:

@inproceedings{na2024efficient,
  title={Efficient Episodic Memory Utilization of Cooperative Multi-Agent Reinforcement Learning},
  author={Na, Hyungho and Seo, Yunkyeong and Moon, Il-chul},
  booktitle={The Twelfth International Conference on Learning Representations},
  year={2024}
}
