StarCraft II Reinforcement Learning Examples

These example programs are built on PySC2 (DeepMind's StarCraft II Learning Environment) and the OpenAI baselines library.

Current examples


  • CollectMineralShards with Deep Q Network


Quick Start Guide

1. Get PySC2


The easiest way to get PySC2 is to use pip:

$ pip install git+

You also need to install the OpenAI baselines library:

$ pip install git+
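Once both installs finish, a quick stdlib-only check (package names `pysc2` and `baselines` are assumed from the commands above) reports whether the two libraries are importable:

```python
import importlib.util

# Report whether each dependency can be imported in the current environment.
for pkg in ("pysc2", "baselines"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'OK' if found else 'NOT INSTALLED'}")
```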

2. Install StarCraft II

Mac / Win

Purchase and install StarCraft II; the free Starter Edition also works.

Linux Packages

Follow Blizzard's documentation to get the Linux version. By default, PySC2 expects the game to live in ~/StarCraftII/.

3. Download Maps

Download the ladder maps and the mini games, and extract them into your ~/StarCraftII/Maps/ directory.
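To double-check where PySC2 will look for maps, a small stdlib-only sketch (the default ~/StarCraftII/ location comes from the install step above; adjust the path if your install differs):

```python
from pathlib import Path

# Default PySC2 game root; maps are expected under <game root>/Maps.
maps_dir = Path.home() / "StarCraftII" / "Maps"
print(f"Maps directory: {maps_dir}")
print("exists" if maps_dir.is_dir() else "missing - extract the map packs here")
```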

4. Train it!

$ python --algorithm=a2c

5. Enjoy it!

$ python

4-1. Train it with DQN

$ python --algorithm=deepq --prioritized=True --dueling=True --timesteps=2000000 --exploration_fraction=0.2
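The --exploration_fraction flag controls how long epsilon-greedy exploration is annealed. A minimal sketch, assuming the linear schedule used by baselines' deepq (epsilon anneals from 1.0 down to a final value over exploration_fraction × timesteps, then holds; `final_eps` here is an illustrative assumption, not a flag from the command above):

```python
def epsilon(step, total_timesteps=2_000_000, exploration_fraction=0.2,
            final_eps=0.01):
    """Linearly anneal epsilon from 1.0 to final_eps over the first
    exploration_fraction * total_timesteps steps, then hold it."""
    anneal_steps = exploration_fraction * total_timesteps
    frac = min(step / anneal_steps, 1.0)
    return 1.0 + frac * (final_eps - 1.0)

print(epsilon(0))        # 1.0 - fully random at the start
print(epsilon(400_000))  # reaches final_eps once annealing finishes
```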

4-2. Train it with A2C (A3C)

$ python --algorithm=a2c --num_agents=2 --num_scripts=2 --timesteps=2000000
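A2C updates the policy every --nsteps environment steps using discounted n-step returns bootstrapped from the critic's value estimate. A minimal sketch of that computation (illustrative only, not the repo's actual code; the reward and value numbers are made up):

```python
def nstep_returns(rewards, bootstrap_value, gamma=0.99):
    """Discounted n-step returns for one rollout segment (--nsteps rewards),
    bootstrapped from the critic's value estimate of the final state."""
    returns = []
    R = bootstrap_value
    for r in reversed(rewards):
        R = r + gamma * R  # accumulate discounted return backwards in time
        returns.append(R)
    return returns[::-1]

# Advantage = n-step return minus the critic's value baseline.
rewards = [0.0, 1.0, 0.0, 0.0]          # e.g. one mineral shard collected
returns = nstep_returns(rewards, bootstrap_value=0.5)
values = [0.4, 0.6, 0.3, 0.2]           # hypothetical critic outputs
advantages = [R - v for R, v in zip(returns, values)]
```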
| Parameter | Description | Default | Type |
| --- | --- | --- | --- |
| map | Gym environment (mini-game map) | CollectMineralShards | string |
| log | Logging type: tensorboard, stdout | tensorboard | string |
| algorithm | Currently supports 2 algorithms: deepq, a2c | a2c | string |
| timesteps | Total training steps | 2000000 | int |
| exploration_fraction | Exploration fraction | 0.5 | float |
| prioritized | Whether to use prioritized replay for DQN | False | boolean |
| dueling | Whether to use a dueling network for DQN | False | boolean |
| lr | Learning rate (if 0, a random value in 1e-5 ~ 1e-3 is used) | 0.0005 | float |
| num_agents | Number of agents for A2C | 4 | int |
| num_scripts | Number of scripted agents for A2C | 4 | int |
| nsteps | Number of steps per policy update | 20 | int |
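The lr parameter's "random e-5 ~ e-3" behavior can be read as sampling a learning rate log-uniformly between 1e-5 and 1e-3 when lr is set to 0. A sketch under that assumption (the repo's actual sampling may differ):

```python
import random

def pick_lr(lr=0.0005):
    """Return lr unchanged, or, if lr is 0, sample one log-uniformly
    from the range [1e-5, 1e-3]."""
    if lr == 0:
        lr = 10 ** random.uniform(-5, -3)
    return lr

print(pick_lr())   # 0.0005 - the default from the table
print(pick_lr(0))  # a random value between 1e-5 and 1e-3
```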