Deep learning driven jazz generation using Keras & Theano!

Note: deepjazz is no longer being actively developed. It may be refactored at some point in the future. Goodbye and thank you for your interest 😢


Using Keras & Theano for deep learning driven jazz generation

I built deepjazz in 36 hours at a hackathon. It uses Keras & Theano, two deep learning libraries, to generate jazz music. Specifically, it builds a two-layer LSTM that learns from a given MIDI file. It uses deep learning, the AI technology that powers Google's AlphaGo and IBM's Watson, to make music -- something considered deeply human.
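The two-layer LSTM described above can be sketched in Keras roughly like this. The sequence length, layer sizes, and vocabulary size here are illustrative assumptions, not deepjazz's actual hyperparameters:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense, Activation

# Hypothetical dimensions chosen for illustration only.
max_len = 20    # notes/chords of context fed to the network
n_values = 78   # size of the note/chord vocabulary

# Two stacked LSTM layers, then a softmax over the next note/chord.
model = Sequential()
model.add(LSTM(128, return_sequences=True, input_shape=(max_len, n_values)))
model.add(LSTM(128))
model.add(Dense(n_values))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

# One dummy input sequence -> a probability distribution over n_values.
probs = model.predict(np.zeros((1, max_len, n_values)), verbose=0)
print(probs.shape)  # (1, 78)
```

Training would then slide fixed-length windows over the one-hot-encoded MIDI sequence and fit the model to predict each window's next note.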

Check out deepjazz's music on SoundCloud!



Run on CPU with the command:

```
python [# of epochs]
```

Run on GPU with the command:

```
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python [# of epochs]
```

Note: running Keras/Theano on GPU is officially supported only for NVIDIA cards (CUDA backend).

Note: the code must be modified to work with other MIDI files (the relevant "melody" MIDI part needs to be selected). Handling this natively is a planned feature.


Ji-Sung Kim
Princeton University, Department of Computer Science
hello (at)


This project adapts a lot of preprocessing code (with permission) from Evan Chow's jazzml. Thank you Evan! Public examples from the Keras documentation were also referenced.

Code License, Media Copyright

Code is licensed under the Apache License 2.0
Images and other media are copyrighted (Ji-Sung Kim)
