Notes and scripts for SC2LE released by DeepMind and Blizzard, more details [here](https://github.com/deepmind/pysc2).
# pysc2-RLagents


There seems to be a bug where the agent's performance drops drastically after prolonged training, e.g. in MoveToBeacon from an average score of 25 down to 1. I'm still working on this when I can spare the time.

## Important Links

- Original SC2LE paper
- DeepMind blog post
- Blizzard blog post
- PySC2 repo
- Blizzard's SC2 API
- Blizzard's SC2 API Protocol
- Python library for SC2 API Protocol

## Work by others

- Chris' blog post and repo
- Siraj's YouTube tutorial and accompanying code
- Steven's Medium articles for a simple scripted agent and one based on Q-tables
- pekaalto's work on adapting OpenAI's gym environment to SC2LE and an implementation of the FullyConv algorithm, plus results on three minigames
- Arthur Juliani's posts and repo for RL agents (not SC2LE, but mentioned here because my agent script was built on Juliani's A3C implementation)

Let me know if anyone else is also working on this and I'll add a link here!

## Notes

Contains general notes on working with SC2LE.

### Total Action Space

The entire unfiltered action space for an SC2LE agent.

It contains 524 base actions / functions, with 101,938,719 possible actions in total given a minimap_resolution of (64, 64) and a screen_resolution of (84, 84).
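As an illustration of where a total like this comes from: the number of possible actions for one base function is the product of the sizes of its argument dimensions. The function below is hypothetical (it is not the pysc2 API), but the arithmetic is the same kind that produces the quoted total.

```python
# Resolutions from the note above.
screen = (84, 84)    # screen_resolution
minimap = (64, 64)   # minimap_resolution

# A hypothetical "move on screen" function taking a queued flag (2 values)
# and a screen coordinate (84 * 84 values):
queued = 2
move_screen_actions = queued * screen[0] * screen[1]
print(move_screen_actions)  # 14112

# Summing such products over all 524 base functions gives the quoted
# total of 101,938,719 possible actions.
```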

### List of Action Argument Types

The entire list of action argument types for use in the actions / functions.

It contains 13 argument types with descriptions.

### Running an Agent

Notes on running an agent in the `pysc2.env.sc2_env.SC2Env` environment. In particular, these show details and brief descriptions of the `TimeStep` object (observation) fed to an agent's step function, i.e. returned from calling the environment's step function.
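The shape of that object can be sketched as follows. The field names mirror pysc2's `TimeStep` namedtuple, but the observation contents here are simplified stand-ins, not the real feature layers:

```python
from collections import namedtuple

# Minimal sketch of the TimeStep structure (mirroring pysc2.env.environment).
TimeStep = namedtuple("TimeStep", ["step_type", "reward", "discount", "observation"])

# A hypothetical first step of an episode (step_type 0 = FIRST);
# the observation dict below is a simplified placeholder.
ts = TimeStep(step_type=0, reward=0.0, discount=1.0,
              observation={"screen": None, "minimap": None,
                           "available_actions": [0]})  # 0 = no_op

def step(timestep):
    """An agent's step function: inspect the observation, pick an action id."""
    return timestep.observation["available_actions"][0]

print(step(ts))  # 0
```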

## ResearchLog

Contains notes on developing RL agents for SC2LE.

## Agents

Contains scripts for training and running RL agents in SC2LE.

### PySC2_A3C_FullyConv.py

This script implements the A3C algorithm with the FullyConv architecture described in DeepMind's paper, for SC2LE. The code is based on Arthur Juliani's A3C implementation for the VizDoom environment (see above).

To run the script, use the following command:

```
python PySC2_A3C_FullyConv.py --map_name MoveToBeacon
```

If `--map_name` is not supplied, the script runs DefeatRoaches by default.
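A minimal sketch of how such a flag might be wired up, assuming standard `argparse` (the actual scripts may handle flags differently, e.g. via absl flags):

```python
import argparse

# Hypothetical flag parsing matching the behavior described above:
# --map_name selects the minigame, defaulting to DefeatRoaches.
parser = argparse.ArgumentParser()
parser.add_argument("--map_name", default="DefeatRoaches",
                    help="SC2LE minigame to train on")

args = parser.parse_args(["--map_name", "MoveToBeacon"])
print(args.map_name)                    # MoveToBeacon

# With no flag supplied, the default applies:
print(parser.parse_args([]).map_name)   # DefeatRoaches
```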

### PySC2_A3C_AtariNet.py

This script implements the A3C algorithm with the Atari-net architecture described in DeepMind's paper, for SC2LE. The code is based on Arthur Juliani's A3C implementation for the VizDoom environment (see above).

This is a generalized version of `PySC2_A3C_old.py` that works for all minigames and also contains some bug fixes.

To run the script, use the following command:

```
python PySC2_A3C_AtariNet.py --map_name MoveToBeacon
```

If `--map_name` is not supplied, the script runs DefeatRoaches by default.

### PySC2_A3C_old.py

This is an initial script that only works for the DefeatRoaches minigame. There is also a model file in this repo that will load if you just run `python PySC2_A3C_old.py`.

I initially focused on the DefeatRoaches minigame, so the state space takes in only 7 screen features and 3 nonspatial features, and the action space is limited to 17 base actions and their relevant arguments.

For the action space, I model the base actions and their arguments independently. In addition, the x and y coordinates of spatial arguments are also modeled independently, to further reduce the effective action space.
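The factorization above can be sketched with NumPy. The head sizes and logits below are hypothetical stand-ins for the policy network's outputs; the point is that sampling each head independently needs 84 + 84 coordinate outputs instead of 84 * 84 = 7,056:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(logits):
    """Sample an index from a categorical distribution given raw logits."""
    p = np.exp(logits - logits.max())   # stable softmax
    p /= p.sum()
    return rng.choice(len(p), p=p)

# Hypothetical policy-head outputs (stand-ins for real network activations):
base_logits = rng.normal(size=17)   # 17 base actions
x_logits = rng.normal(size=84)      # x coordinate head, modeled independently
y_logits = rng.normal(size=84)      # y coordinate head, modeled independently

action = sample(base_logits)
x, y = sample(x_logits), sample(y_logits)
```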

The agent currently samples actions from the distributions returned by the policy networks, instead of following an epsilon-greedy policy.

Also, the policy networks for the arguments are updated regardless of whether the argument was used (e.g. even if a no_op action is taken, the argument policies are still updated), which should probably be corrected.
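One way the correction could look is to mask each argument head's loss by whether the sampled base action actually uses that argument. This is a sketch, not the script's implementation; `ARG_MASK` and the head indices are hypothetical:

```python
import numpy as np

# Hypothetical mapping from base action id to the argument heads it uses
# (here: head 0 = x coordinate, head 1 = y coordinate).
ARG_MASK = {
    0: [],        # no_op uses no arguments
    1: [0, 1],    # a spatial action uses the x and y heads
}

def masked_arg_loss(per_head_losses, base_action):
    """Sum only the losses of argument heads the base action actually used."""
    used = ARG_MASK.get(base_action, [])
    return sum(per_head_losses[i] for i in used)

losses = np.array([0.5, 0.3, 0.2])
print(masked_arg_loss(losses, 0))  # 0 -- no_op updates no argument heads
print(masked_arg_loss(losses, 1))  # 0.8
```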

I will be updating this to work with all the minigames.

As of ~10 million steps on DefeatRoaches, the agent achieved max and average scores of 338 and 65, compared to DeepMind's Atari-net agent, which achieved max and average scores of 351 and 101 after 600 million steps.