This repository has been deprecated; please find the new MALib repository at https://github.com/sjtu-marl/malib.
This framework aims to provide an easy-to-use toolkit for Multi-Agent Reinforcement Learning research. Overall architecture:
Environment: The multi-agent Env class differs from a standard single-agent environment in two ways: 1. step(action_n) accepts n actions (one per agent) at each timestep; 2. the Env class needs a MAEnvSpec property that describes the action spaces and observation spaces of all agents.
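The two differences above can be sketched as follows. This is a minimal illustrative toy, not MALib's actual API: the field names on MAEnvSpec and the per-agent list return convention of step() are assumptions for illustration.

```python
class Discrete:
    """Toy stand-in for a gym-style discrete space (assumed, for illustration)."""
    def __init__(self, n):
        self.n = n


class MAEnvSpec:
    """Bundles per-agent observation and action spaces (assumed layout)."""
    def __init__(self, observation_spaces, action_spaces):
        self.observation_spaces = observation_spaces
        self.action_spaces = action_spaces
        self.num_agents = len(action_spaces)


class ToyMultiAgentEnv:
    """A one-shot matching game: all agents are rewarded when their actions agree."""
    def __init__(self, n_agents=2):
        # Difference 2: the env exposes a spec covering every agent's spaces.
        self.env_spec = MAEnvSpec(
            observation_spaces=[Discrete(1)] * n_agents,
            action_spaces=[Discrete(2)] * n_agents,
        )

    def reset(self):
        # One observation per agent.
        return [0] * self.env_spec.num_agents

    def step(self, action_n):
        # Difference 1: step() takes one action per agent at each timestep.
        assert len(action_n) == self.env_spec.num_agents
        reward = 1.0 if len(set(action_n)) == 1 else 0.0
        n = self.env_spec.num_agents
        return [0] * n, [reward] * n, [True] * n, {}
```

A rollout then passes a joint action list, e.g. `env.step([1, 1])`, and receives per-agent observation, reward, and done lists back.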
Agent: The Agent class is no different from a common single-agent RL agent; it uses the MAEnvSpec from the Env class to initialize its policy/value networks and replay buffer.
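A sketch of how an agent might size itself from the spec. The spec fields (`action_spaces`, `observation_spaces`) and the random placeholder policy are assumptions for illustration, not MALib's real Agent class.

```python
import random
from types import SimpleNamespace


def make_spec(n_agents=2, n_actions=2):
    """Stand-in for a MAEnvSpec; real field names are assumed."""
    return SimpleNamespace(
        observation_spaces=[SimpleNamespace(n=1)] * n_agents,
        action_spaces=[SimpleNamespace(n=n_actions)] * n_agents,
        num_agents=n_agents,
    )


class Agent:
    """An agent pulls only its own spaces out of the shared spec."""
    def __init__(self, env_spec, agent_id, buffer_size=10000):
        self.action_space = env_spec.action_spaces[agent_id]
        self.observation_space = env_spec.observation_spaces[agent_id]
        self.replay_buffer = []
        self.buffer_size = buffer_size

    def act(self, observation):
        # Placeholder uniform-random policy over the discrete action space;
        # a real agent would query its policy network here.
        return random.randrange(self.action_space.n)

    def store(self, transition):
        # Bounded FIFO replay buffer.
        if len(self.replay_buffer) >= self.buffer_size:
            self.replay_buffer.pop(0)
        self.replay_buffer.append(transition)
```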
MASampler: Because the agents have to roll out simultaneously, a MASampler class is designed to perform the sampling steps and add each agent's step tuple to that agent's replay buffer.
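The simultaneous-rollout loop can be sketched like this: collect one action from every agent, advance the environment once with the joint action, then route each agent's slice of the step tuple into its own buffer. The class shape and the toy env/agents below are illustrative assumptions, not MALib's actual sampler.

```python
import random
from types import SimpleNamespace


class MASampler:
    """Steps all agents together and splits the joint transition per agent."""
    def __init__(self, env, agents):
        self.env = env
        self.agents = agents

    def sample_step(self, obs_n):
        # All agents act simultaneously on their own observations.
        action_n = [ag.act(o) for ag, o in zip(self.agents, obs_n)]
        next_obs_n, reward_n, done_n, _ = self.env.step(action_n)
        # Each agent stores only its own (s, a, r, s', done) tuple.
        for i, ag in enumerate(self.agents):
            ag.replay_buffer.append(
                (obs_n[i], action_n[i], reward_n[i], next_obs_n[i], done_n[i]))
        return next_obs_n


def make_toy(n_agents=2):
    """Tiny matching-game env and random agents for demonstration."""
    def step(action_n):
        r = 1.0 if len(set(action_n)) == 1 else 0.0
        n = len(action_n)
        return [0] * n, [r] * n, [False] * n, {}
    env = SimpleNamespace(step=step)
    agents = [SimpleNamespace(act=lambda o: random.randrange(2),
                              replay_buffer=[])
              for _ in range(n_agents)]
    return env, agents
```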
MATrainer: In the single-agent setting, the trainer is included in the Agent class. Multi-agent training, however, must support independent, centralized, communication-based, and opponent-modelling schemes, so a separate MATrainer class abstracts these requirements away from the Agent class. This is the core of multi-agent training.
Required Python version: >= 3.6
- Using Local Python Environment:
cd malib
sudo pip3 install -r requirements.txt
sudo pip3 install -e .
- Using virtualenv Environment:
cd malib
python3 -m venv env
source env/bin/activate
pip3 install -r requirements.txt
pip3 install -e .
- Using Conda Environment:
cd malib
conda env create --file=environment.yml
conda activate malib
conda develop ./
or
cd malib
conda create -n malib python=3.7
conda activate malib
pip install -r requirements.txt
conda develop ./
Run the example trainer:
cd examples
python run_trainer.py
Run the test suite:
python -m pytest tests
Testing with a keyword filter:
python -m pytest tests -k "environments"
The implementation references and adapts code from the following projects: agents, maddpg, softlearning, garage, markov-game, multiagent-particle-env. Thanks a lot!