MineRL Competition for Sample Efficient Reinforcement Learning - Python Package
The MineRL Python Package


A Python package providing easy-to-use Gym environments and a simple data API for the MineRL-v0 dataset.

To get started, please read the docs here!

We develop minerl in our spare time; please consider supporting us on Patreon <3


Installation

With JDK 8 installed, run this command:

pip3 install --upgrade minerl
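minerl needs a working Java installation at run time to launch Minecraft. A quick sanity check (a sketch using only the standard library; not part of minerl itself) is:

```python
import shutil

# minerl launches Minecraft through Malmo, which needs a JDK-8 `java`
# binary on the PATH. shutil.which reports whether one is visible.
java_path = shutil.which("java")
print("java on PATH:", java_path is not None)
```

If this prints False, install JDK 8 and make sure `java` is on your PATH before running any environment.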

Basic Usage

Running an environment:

import minerl
import gym
env = gym.make('MineRLNavigateDense-v0')

obs = env.reset()

done = False
while not done:
    action = env.action_space.sample()
    # One can also take a no-op action with
    # action = env.action_space.noop()
    obs, reward, done, info = env.step(action)
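In the v0 environments, the observation is a dict whose 'pov' entry is a 64x64x3 uint8 image. A minimal preprocessing sketch, using a dummy frame in place of real env output so it runs without Minecraft:

```python
import numpy as np

# Dummy observation standing in for what env.reset()/env.step() return;
# in MineRL-v0 environments, 'pov' is a 64x64x3 uint8 RGB image.
obs = {"pov": np.zeros((64, 64, 3), dtype=np.uint8)}

# Scale to [0, 1] float32, the usual input range for a neural-net policy.
pov = obs["pov"].astype(np.float32) / 255.0
print(pov.shape, pov.dtype)
```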

Sampling the dataset:

import minerl

# YOU ONLY NEED TO DO THIS ONCE!
minerl.data.download('/your/local/path')

data = minerl.data.make(
    'MineRLObtainDiamondDense-v0',
    data_dir='/your/local/path')

# Iterate through a single epoch gathering sequences of at most 32 steps
for current_state, action, reward, next_state, done \
    in data.sarsd_iter(
        num_epochs=1, max_sequence_len=32):

        # Print the POV @ the first step of the sequence
        print(current_state['pov'][0])

        # Print the final reward of the sequence!
        print(reward[-1])

        # Check if final (next_state) is terminal.
        print(done[-1])

        # ... do something with the data.
        print("At the end of trajectories the length"
              "can be < max_sequence_len", len(reward))
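Because the last chunk of each trajectory can be shorter than max_sequence_len, batching code typically pads. A hypothetical helper (pad_sequence is not part of minerl) might look like this:

```python
import numpy as np

def pad_sequence(rewards, max_sequence_len=32):
    """Right-pad a reward sequence with zeros and return a validity mask."""
    rewards = np.asarray(rewards, dtype=np.float32)
    padded = np.zeros(max_sequence_len, dtype=np.float32)
    padded[:len(rewards)] = rewards
    mask = np.zeros(max_sequence_len, dtype=np.float32)
    mask[:len(rewards)] = 1.0
    return padded, mask

padded, mask = pad_sequence([0.0, 1.0, 5.0])
print(len(padded), int(mask.sum()))  # 32 3
```

The mask lets a loss function ignore the zero-padded tail of short sequences.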

Visualizing the dataset:


# Make sure your MINERL_DATA_ROOT is set!
export MINERL_DATA_ROOT='/your/local/path'

# Visualizes a random trajectory of MineRLObtainDiamondDense-v0
python3 -m minerl.viewer MineRLObtainDiamondDense-v0
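The same environment variable can also be set from inside Python before using the data API; a small sketch (the path below is the placeholder from the shell example, not a real location):

```python
import os

# minerl reads the dataset location from MINERL_DATA_ROOT; setting it
# in-process mirrors the shell `export` above.
os.environ.setdefault("MINERL_DATA_ROOT", "/your/local/path")
print(os.environ["MINERL_DATA_ROOT"])
```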

MineRL Competition

If you're here for the MineRL competition, please check the main competition website here.
