
PySC2 agents

This is a simple implementation of DeepMind's PySC2 RL agents. In this project, the agents are defined according to the original paper: they use all feature maps and structured information to predict both actions and action arguments via an A3C algorithm.
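As described in the SC2LE paper, the policy is factorized: one distribution over action ids and independent distributions over each argument (for example, a flattened screen coordinate). The helper names below are illustrative and not taken from this repo; a minimal pure-Python sketch of that factorized sampling:

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs, rng):
    """Sample an index from a categorical distribution."""
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

def select_action(action_logits, spatial_logits, rng):
    """Sample an action id and a flattened screen coordinate from
    independent softmax heads, as in the factorized policy of SC2LE.
    (Hypothetical sketch; the repo's network produces these logits.)"""
    action_id = sample(softmax(action_logits), rng)
    target = sample(softmax(spatial_logits), rng)
    return action_id, target
```

In the real agent the two logit vectors come from the shared convolutional trunk; sampling them independently is what lets a single network emit both an action and its arguments.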


  • PySC2 is a learning environment for StarCraft II provided by DeepMind. It provides an interface through which RL agents interact with StarCraft II, receiving observations and sending actions. You can follow the tutorial in the PySC2 repo to install it.
pip install s2clientprotocol==1.1
pip install pysc2==1.1
  • Two Python packages may be missing: tensorflow and absl-py. If pip is set up on your system, they can be installed by running
pip install absl-py
pip install tensorflow-gpu

Getting Started

Clone this repo:

git clone
cd pysc2-agents


  • Download the pretrained model from here and extract it to ./snapshot/.

  • Test the pretrained model:

python -m main --map=MoveToBeacon --training=False
  • You should get roughly the following results on the different mini-game maps:

|            | MoveToBeacon | CollectMineralShards | DefeatRoaches |
|------------|--------------|----------------------|---------------|
| Mean Score | ~25          | ~62                  | ~87           |
| Max Score  | 31           | 97                   | 371           |


Train a model by yourself:

python -m main --map=MoveToBeacon
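The repo parses these command-line flags with absl.flags/absl.app; as an illustration of the same two-flag interface, here is a hypothetical equivalent using the standard library's argparse (names mirror the flags above, but this code is not from the repo):

```python
import argparse

def parse_args(argv):
    """Parse the --map and --training flags used by main.
    Hypothetical argparse sketch; the repo itself uses absl.flags."""
    parser = argparse.ArgumentParser(description="Train or test a PySC2 agent.")
    parser.add_argument("--map", default="MoveToBeacon",
                        help="Name of the SC2 mini-game map to run.")
    parser.add_argument("--training", default="True",
                        choices=["True", "False"],
                        help="Train a new model (True) or run a snapshot (False).")
    return parser.parse_args(argv)
```

With no flags given, this defaults to training on MoveToBeacon, matching the command above.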


  • Unlike the original A3C algorithm, we replace the policy entropy penalty term with epsilon-greedy exploration.
  • When training a model yourself, it is best to run several trials and keep the best one. If you get better results than ours, we would be grateful if you shared them with us.
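The epsilon-greedy substitution mentioned above can be sketched in a few lines: with probability epsilon the agent takes a uniformly random action, otherwise it takes the greedy one. A minimal sketch (the function name is illustrative, not from this repo):

```python
import random

def epsilon_greedy(values, epsilon, rng=random):
    """Pick a uniformly random action with probability epsilon,
    otherwise the argmax of `values`. This stands in for the entropy
    penalty that encourages exploration in the original A3C."""
    if rng.random() < epsilon:
        return rng.randrange(len(values))
    return max(range(len(values)), key=lambda i: values[i])
```

In practice epsilon is usually annealed from a high value toward a small one over the course of training, so exploration fades as the policy improves.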

Licensed under The MIT License.