MazeExplorer

MazeExplorer is a customisable 3D benchmark for assessing generalisation in Reinforcement Learning.

Simply put, MazeExplorer makes it easy to create separate training and test environments for your agents.
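Because the training and test environments are drawn from the same configurable family, generalisation can be summarised as the gap between training and test performance. A minimal, illustrative sketch (the helper name and the numbers are ours, not part of the library):

```python
def generalisation_gap(train_returns, test_returns):
    """Difference between mean training and mean test episode returns.

    A large positive gap suggests the agent has overfitted to the
    training mazes rather than learned a general navigation policy.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(train_returns) - mean(test_returns)


# Hypothetical per-episode returns (keys collected) on train vs test mazes.
print(generalisation_gap([6.0, 5.0, 7.0], [4.0, 3.0, 5.0]))  # 2.0
```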

It is based on the 3D first-person game Doom and the open-source environment VizDoom.

This repository contains the code for the MazeExplorer Gym environment along with the scripts to generate the baseline results; the paper reference is given in the Citation section below.

By Luke Harries*, Sebastian Lee*, Jaroslaw Rzepecki, Katja Hofmann, and Sam Devlin.
* Joint first author

(Screenshots: default textures and two examples of random textures.)

The Mission

The goal is to navigate a procedurally generated maze and collect a set number of keys.

The environment is highly customisable, allowing you to create different training and test environments.

The following features of the environment can be configured:

  • Unique or repeated maps
  • Number of maps
  • Map size (X, Y)
  • Maze complexity
  • Maze density
  • Random or fixed keys
  • Random or fixed textures
  • Random or fixed spawn
  • Number of keys
  • Environment seed
  • Episode timeout
  • Reward clipping
  • Frame stack
  • Resolution
  • Action frame repeat
  • Action space
  • Specific textures (wall, ceiling, floor)
  • Data augmentation
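A typical generalisation experiment holds most of these factors fixed and varies only the ones under test between the training and test environments. A sketch of that pattern using plain dictionaries (the keyword names mirror the Example Usage section below; `seed` is an assumption based on the "Environment seed" option and may differ from the actual API):

```python
# Shared base configuration: everything the train and test
# environments should have in common.
base_config = {
    "number_maps": 10,
    "size": (15, 15),
    "random_spawn": True,
    "keys": 6,
}

# Vary only the held-out factors: train on fixed textures,
# test on random textures with a different seed.
train_config = {**base_config, "random_textures": False, "seed": 0}
test_config = {**base_config, "random_textures": True, "seed": 1}

# Sanity check: only the deliberately varied factors differ.
varied = {k for k in train_config if train_config[k] != test_config[k]}
print(sorted(varied))  # ['random_textures', 'seed']
```

Each config dict can then be splatted into the constructor, e.g. `MazeExplorer(**train_config)`.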

Example Usage

from mazeexplorer import MazeExplorer

train_env = MazeExplorer(number_maps=1,
                         size=(15, 15),
                         random_spawn=True,
                         random_textures=False,
                         keys=6)

test_env = MazeExplorer(number_maps=1,
                        size=(15, 15),
                        random_spawn=True,
                        random_textures=False,
                        keys=6)

# training
obs = train_env.reset()
for _ in range(1000):
    obs, rewards, dones, info = train_env.step(train_env.action_space.sample())

# testing
obs = test_env.reset()
for _ in range(1000):
    obs, rewards, dones, info = test_env.step(test_env.action_space.sample())
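MazeExplorer exposes the standard Gym step/reset interface, so a full evaluation loop also resets the environment whenever an episode ends and accumulates per-episode returns. A minimal sketch using a stand-in environment with the same interface shape, so it runs without VizDoom installed (the stand-in and its reward scheme are illustrative only):

```python
class DummyEnv:
    """Stand-in with the same step/reset shape as a Gym environment."""

    def reset(self):
        self.t = 0
        return "obs"

    def step(self, action):
        self.t += 1
        done = self.t >= 5              # short fixed-length episodes
        reward = 1.0 if done else 0.0   # e.g. reward on collecting a key
        return "obs", reward, done, {}


def evaluate(env, steps=20):
    """Run `steps` environment steps, resetting on episode end,
    and return the list of completed-episode returns."""
    returns, episode_return = [], 0.0
    obs = env.reset()
    for _ in range(steps):
        obs, reward, done, info = env.step(None)  # placeholder action
        episode_return += reward
        if done:
            returns.append(episode_return)
            episode_return = 0.0
            obs = env.reset()
    return returns


print(evaluate(DummyEnv()))  # four completed 5-step episodes -> [1.0, 1.0, 1.0, 1.0]
```

With the real environment, `env.step(env.action_space.sample())` replaces the placeholder action.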

Installation

  1. Install the dependencies for VizDoom: Linux, MacOS or Windows.
  2. Install virtualenv and pytest: pip3 install virtualenv pytest
  3. Create a virtualenv and activate it:
    1. virtualenv mazeexplorer-env
    2. source mazeexplorer-env/bin/activate
  4. Clone this repo: git clone https://github.com/microsoft/MazeExplorer
  5. cd into the repo: cd MazeExplorer
  6. Pull the submodules: git submodule update --init --recursive
  7. Install the package and its dependencies: pip3 install -e .
  8. Run the tests: bash test.sh

Baseline experiments

The information to reproduce the baseline experiments is shown in baseline_experiments/experiments.md.

Citation

If you use this environment please cite the following:

@article{harrieslee2019,
  title={MazeExplorer: A Customisable 3D Benchmark for Assessing Generalisation in Reinforcement Learning},
  author={Harries*, Luke and Lee*, Sebastian and Rzepecki, Jaroslaw and Hofmann, Katja and Devlin, Sam},
  journal={In Proc. IEEE Conference on Games},
  year={2019}
}

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
