Installation

You need to have PyTorch pre-installed. Easy-to-use download scripts can be found on their website.

$ git clone
$ cd tetrisRL
$ python setup.py install


Or install from PyPI:

$ pip install tetrisrl


  • DQN reinforcement learning agent that trains on Tetris
  • The same convolutional model trained supervised on a dataset of user playthroughs
  • Play Tetris and accumulate gameplay as a training set
  • Evaluate a saved agent model on a visual game of Tetris, e.g.:

$ python checkpoint.pth.tar


Using the Environment

The interface is similar to an OpenAI Gym environment.

Initialize the Tetris RL environment

from engine import TetrisEngine

width, height = 10, 20
env = TetrisEngine(width, height)

Simulation loop

# Reset the environment
obs = env.clear()

while True:
    # Get an action from a theoretical AI agent
    action = agent(obs)

    # Sim step takes the action and returns results
    obs, reward, done = env.step(action)

    # Done when the game is lost; reset the board for a new episode
    if done:
        obs = env.clear()
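The loop above assumes a trained agent and the repo's TetrisEngine. As a self-contained illustration of the same clear()/step() contract, here is a minimal sketch with a random agent and a stub environment; StubEngine, the 50-step episode, and the 7-action count are assumptions for the example, not part of the repo:

```python
import random

class StubEngine:
    """Hypothetical stand-in exposing the same clear()/step()
    interface as TetrisEngine, so the loop shape can be exercised
    without the repo installed."""
    def __init__(self, width, height):
        self.width, self.height = width, height
        self.t = 0

    def clear(self):
        # Reset to an empty board: height rows of width zeros
        self.t = 0
        return [[0] * self.width for _ in range(self.height)]

    def step(self, action):
        self.t += 1
        obs = [[0] * self.width for _ in range(self.height)]
        reward = 0.0
        done = self.t >= 50  # pretend the game is lost after 50 steps
        return obs, reward, done

env = StubEngine(10, 20)
obs = env.clear()
steps = 0
while True:
    # A random agent over an assumed set of 7 discrete actions
    action = random.randrange(7)
    obs, reward, done = env.step(action)
    steps += 1
    if done:
        break
```

A real agent would replace `random.randrange(7)` with a forward pass over `obs`.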

Example Usages

Play Tetris for Training Data

Play games and accumulate a data set for a supervised learning algorithm to train on. Each frame of the game is stored as a (state, reward, done, action) tuple.

You may notice the rules are slightly different than normal Tetris. Specifically, each action you take results in a corresponding soft drop. This is how the AI will play, and therefore how the training data must be collected.

To play Tetris:

$ python

W: Hard drop (piece falls to the bottom)
A: Shift left
S: Soft drop (piece falls one tile)
D: Shift right
Q: Rotate left
E: Rotate right

At the end of each game, choose whether you want to store the information of that game in the data set. Data accumulates in a local file called 'training_data.npy'.
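To show what working with that file looks like, here is a sketch that builds a tiny synthetic data set in the per-frame (state, reward, done, action) format described above and reads it back with NumPy; the exact on-disk layout is an assumption based on that description:

```python
import numpy as np

# Two synthetic frames in the assumed (state, reward, done, action)
# format: a 10x20 board state, a float reward, a done flag, an action id.
width, height = 10, 20
frames = [
    (np.zeros((width, height)), 0.0, False, 3),
    (np.zeros((width, height)), 1.0, True, 0),
]
np.save('training_data.npy', np.array(frames, dtype=object))

# Reload; allow_pickle is needed because the array holds Python objects
data = np.load('training_data.npy', allow_pickle=True)
states = [frame[0] for frame in data]
actions = [frame[3] for frame in data]
```

A supervised learner would then fit a model mapping each state to its recorded action.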

Example supervised learning agent from data

Run the supervised agent file and specify the standard training data file generated in the previous step as a command line argument.

$ python training_data.npy

Example reinforcement learning agent

# Start from a new randomized DQN agent
$ python
# Resume from the last recorded DQN checkpoint
$ python resume
# Specify a custom checkpoint
$ python resume supervised_checkpoint.pth.tar

The DQN agent currently optimizes a metric of freedom of action. In essence, the agent should learn to maximize the entropy of the board: a player in Tetris has the most freedom of action when the area is clear of pieces.
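The idea can be sketched numerically. The scoring function below is hypothetical (the repo's actual reward shaping may differ): it rewards the fraction of clear cells, so an empty board, which leaves the most options open, scores highest.

```python
import numpy as np

def freedom_score(board):
    """Hypothetical 'freedom of action' score: fraction of clear cells.
    Nonzero cells are treated as occupied."""
    occupied = np.count_nonzero(board)
    return 1.0 - occupied / board.size  # 1.0 when the board is clear

empty = np.zeros((10, 20))       # fully clear board
half_full = np.zeros((10, 20))
half_full[:, 10:] = 1.0          # fill half the columns
```

Under this sketch an empty board scores 1.0 and a half-filled board 0.5, so clearing lines raises the score.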

Watch a checkpoint play a game

$ python checkpoint.pth.tar

