Basic python snake implementation with event listeners to hook into machine learning.

PySnake - a TensorFlow project

This is a project to accompany the course "Implementing Artificial Neural Networks with TensorFlow".

We implement a simple snake game and different playing strategies:

  • Human controlled: python [--store subjectID] runs the snake to be controlled with the arrow keys. Supply a subject ID to store the score in participant[subjectID].csv. If --store is supplied without an ID, the score is stored in participant_unspecified.csv so it is not lost.
  • Systematic: python [--store] runs the systematic snake. Supply --store to store the result into systematic.csv.
  • Q-Learning:
    • python test CHECKPOINT_PATH e.g. python test ckpts/q_snake-20170218-001217-100000
  • Evolve: Use python train to start a new training session with the parameters defined in the class, or use python play <file>.np to replay a given snake network with the correct number of weights.
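Several of the modes above hinge on whether --store was given with or without a subject ID. The following is a minimal sketch of how such a flag could be parsed with argparse; the flag handling shown is an assumption for illustration, not the project's actual code:

```python
import argparse

# Hypothetical sketch: --store takes an optional subject ID.
parser = argparse.ArgumentParser(description="PySnake runner (sketch)")
parser.add_argument(
    "--store",
    nargs="?",            # the subject ID may be omitted
    const="unspecified",  # value used for a bare --store
    default=None,         # --store not given at all
    metavar="subjectID",
)

print(parser.parse_args(["--store", "123"]).store)  # 123

args = parser.parse_args(["--store"])
print(args.store)  # unspecified
```

With this setup, a bare --store still yields a usable filename suffix, matching the participant_unspecified.csv fallback described above.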

For the accompanying project report, check the documentation folder.


You can add --video [name.mp4] to save a movie. If you do not supply a name, it is stored as pysnake.mp4. This is not perfect yet, but it works. Note that if you use it together with --store subject, you should fully specify all but the last argument:

python --video human.mp4 --store
python --store 123 --video
python --video human.mp4 --store 123
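The ordering constraint arises because both flags take an optional value: when such flags appear together, the parser cannot always tell whether the next token is a value or the next flag. A sketch of the behaviour, assuming the project's parser works like argparse with nargs="?" (an assumption, not the actual code):

```python
import argparse

# Two flags with optional values, as in the commands above (a sketch;
# the project's real parser may differ).
parser = argparse.ArgumentParser()
parser.add_argument("--video", nargs="?", const="pysnake.mp4")
parser.add_argument("--store", nargs="?", const="unspecified")

# Fully specifying every flag but the last keeps parsing unambiguous:
args = parser.parse_args(["--video", "human.mp4", "--store", "123"])
print(args.video, args.store)  # human.mp4 123
```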

Similarly, for the systematic snake, you should use --store before --video or supply a file name:

python --store --video
python --video systematic.mp4 --store


To aggregate data from n runs of the systematic snake, run:

make sys n=100
make sys n=1000
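Once the runs are recorded, the resulting CSV can be summarised with a few lines of Python. This is an illustrative sketch that assumes one score per row; the file name and actual layout here are assumptions:

```python
import csv

def summarise(path):
    """Read one score per row (assumed layout) and report count, mean, and max."""
    with open(path, newline="") as f:
        scores = [int(row[0]) for row in csv.reader(f) if row]
    return {"runs": len(scores), "mean": sum(scores) / len(scores), "max": max(scores)}

# Build a small hypothetical results file for demonstration:
with open("example_systematic.csv", "w", newline="") as f:
    csv.writer(f).writerows([[12], [30], [18]])

print(summarise("example_systematic.csv"))  # {'runs': 3, 'mean': 20.0, 'max': 30}
```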

Similarly, to collect q snake data, run:

make q n=100 p=ckpts/q_snake-20170218-001217-100000

To run a participant, just run:

make id=1

The script looks for the cheapest AWS region in which to launch a p2.xlarge spot instance and provides you with a docker-machine command to launch such an instance, prepared with an AMI optimized for TensorFlow GPU computing using nvidia-docker and Google's official TensorFlow container image.
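At its core, the region choice is a minimum over per-region spot prices. A toy sketch of that selection step (the prices and function name are invented for illustration; the real script queries the AWS spot price API):

```python
def cheapest_region(spot_prices):
    """Pick the region with the lowest p2.xlarge spot price (sketch)."""
    return min(spot_prices, key=spot_prices.get)

# Invented example prices in USD/hour:
prices = {"us-east-1": 0.90, "eu-west-1": 0.85, "us-west-2": 0.95}
print(cheapest_region(prices))  # eu-west-1
```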

After launching a container, copy your files over with scp (or clone via git after ssh'ing into the machine) and start training:

localhost$ eval $(docker-machine env machinename)
localhost$ docker-machine scp ./ machinename:/home/ubuntu/
localhost$ docker-machine ssh machinename
awsremote$ sudo nvidia-docker run -it -v /home/ubuntu:/workdir tensorflow/tensorflow:latest-gpu-py3 python3 /workdir/

Or, if you want to run without GPU support, use the following command after copying over the relevant source files:

sudo docker run -it -v /home/ubuntu:/workdir tensorflow/tensorflow:latest-py3 python3 /workdir/