M^3RL: Mind-aware Multi-agent Management Reinforcement Learning

PyTorch implementation for our ICLR 2019 paper M^3RL: Mind-aware Multi-agent Management Reinforcement Learning.

Requirements

  • Python 3.6
  • Numpy >= 1.15.2
  • PyTorch 0.3.1
  • termcolor >= 1.1.0
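
A minimal environment setup might look like the following. This is a sketch assuming a conda/pip workflow; PyTorch 0.3.1 predates the current pip wheels, so you may need a platform-specific wheel from the PyTorch previous-versions archive.

```shell
# Create a Python 3.6 environment and install the pinned dependencies.
conda create -n m3rl python=3.6
conda activate m3rl
pip install "numpy>=1.15.2" "termcolor>=1.1.0"
# PyTorch 0.3.1 may require a wheel from the PyTorch archives for your platform.
pip install torch==0.3.1
```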

Task Settings

Resource Collection

The environments Collection_v1, Collection_v0, and Collection_v2 correspond to the S1, S2, and S3 settings in the paper, respectively. The multiple bonus levels settings can be run in Collection_v3 (each worker has 1 skill) and Collection_v4 (each worker has 3 skills).

Crafting

Crafting_v0 and Crafting_v1 correspond to the standard setting and the multiple bonus levels setting in the paper, respectively.
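
For quick reference, the environment-to-setting correspondence described above can be captured in a small lookup table. This is a hypothetical helper derived from the descriptions here, not code from the repository:

```python
# Hypothetical helper mapping environment names to the settings in the paper.
# Derived from the task descriptions above; not part of the repository.
PAPER_SETTINGS = {
    "Collection_v1": "S1 (standard)",
    "Collection_v0": "S2 (standard)",
    "Collection_v2": "S3 (standard)",
    "Collection_v3": "multiple bonus levels, 1 skill per worker",
    "Collection_v4": "multiple bonus levels, 3 skills per worker",
    "Crafting_v0": "standard",
    "Crafting_v1": "multiple bonus levels",
}

def paper_setting(env_name):
    """Return the paper setting corresponding to an environment name."""
    return PAPER_SETTINGS[env_name]
```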

How to Use

Please refer to run_commands_examples.md for examples of how to run training and testing in the different settings.
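
As a sketch only, training and testing are launched through train.py and test.py; the `--env` flag below is a guess, and the actual command-line arguments are listed in run_commands_examples.md.

```shell
# Hypothetical invocations; consult run_commands_examples.md for the real flags.
python train.py --env Collection_v1
python test.py --env Collection_v1
```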

License

See LICENSE for additional details.
