Recurrent Neural Networks with Swift

This is the code that accompanies my blog post Recurrent Neural Networks with Swift and Accelerate.

The blog post describes how to train a neural network with LSTM (Long Short-Term Memory) layers to be a drummer. We use the trained model in an iOS app to make it play new drum rhythms.
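As a refresher on what an LSTM layer computes, here is a single time step in plain numpy. The names, shapes, and gate ordering below are common conventions for illustration only; they are not necessarily the layout used by the scripts or the app:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, Wx, Wh, b):
    """One LSTM step. Wx: (input_size, 4*H), Wh: (H, 4*H), b: (4*H,)."""
    z = x @ Wx + h_prev @ Wh + b
    H = h_prev.shape[0]
    i = sigmoid(z[:H])          # input gate
    f = sigmoid(z[H:2*H])       # forget gate
    o = sigmoid(z[2*H:3*H])     # output gate
    g = np.tanh(z[3*H:])        # candidate cell values
    c = f * c_prev + i * g      # updated cell state
    h = o * np.tanh(c)          # new hidden state
    return h, c
```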

This project includes the following folders:

  • Drummer: A simple iOS app that uses the trained model to play the drums.
    • Resources: The weights for the trained model: Wx.bin and Wy.bin.
  • Examples: mp3 and MIDI files of output generated by the drummer.
  • Scripts: Python scripts to train the model with TensorFlow on your Mac.
    • Data: A few example drum patterns. For copyright reasons I cannot include the original dataset that was used for training.

The iOS app

Open the Drummer.xcodeproj file in Xcode 8 and run the app on a device. MIDI playback doesn't seem to work very well inside the simulator.

Each time you tap the "Play those funky drums!" button, the model will generate a completely new sequence of drum notes and play them.

Rock on!

Training the model

Note: It's not very useful to train this model yourself if you don't have access to a large library of drum patterns in MIDI format. I can't share my MIDI patterns with you because they are part of a commercial package. However, you can run the iOS app because it includes a pre-trained model.

To train the model yourself, do the following:

  1. Make sure these are installed: python3, numpy, tensorflow.

  2. Add a bunch of new drum patterns to Scripts/Data. It's really not exciting without enough data! (You can train on the two patterns in the Data folder but the model won't be very good.)

  3. Run the convert_midi.py script to convert the MIDI files with the drum patterns into a dataset. This outputs the files X.npy, ix_to_note.p, and ix_to_tick.p.

Note: If you use your own MIDI patterns, the conversion script may not work very well since it makes a few specific assumptions about the layout of those files. You may need to tweak convert_midi.py. (A rough sketch of the conversion idea follows these steps.)

  4. Run the lstm.py script as:
python3 lstm.py train

This trains the neural network and saves the model to a new checkpoints directory every 500 training steps (see the training-loop sketch below).

Training runs in an endless loop, so press Ctrl+C when you're happy with the training set accuracy.
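To illustrate step 3: the conversion boils down to reading the note and timing events from each MIDI file and building the index lookup tables. This is only a rough sketch using the mido package with an invented file name; convert_midi.py's actual parsing and token layout will differ (in particular, the real X.npy must also encode the timing somehow):

```python
import pickle
import numpy as np
import mido

notes, ticks = [], []
mid = mido.MidiFile("Data/pattern1.mid")  # hypothetical file name
for msg in mid.tracks[0]:
    if msg.type == "note_on" and msg.velocity > 0:
        notes.append(msg.note)   # which drum (MIDI note number)
        ticks.append(msg.time)   # delta time in ticks since the previous event

# Lookup tables from model output index back to MIDI values.
ix_to_note = dict(enumerate(sorted(set(notes))))
ix_to_tick = dict(enumerate(sorted(set(ticks))))
note_to_ix = {n: i for i, n in ix_to_note.items()}

# Simplified dataset: just the sequence of note indices.
X = np.array([note_to_ix[n] for n in notes])
np.save("X.npy", X)
pickle.dump(ix_to_note, open("ix_to_note.p", "wb"))
pickle.dump(ix_to_tick, open("ix_to_tick.p", "wb"))
```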
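The training loop in step 4 has roughly this shape. The model below is a trivial stand-in, not the script's actual LSTM graph; the point is the endless loop and the periodic saver.save, which is what produces the checkpoints/model-XXXX files referenced later (written in TensorFlow 1.x style, since the repo predates TF 2):

```python
import os
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Trivial stand-in for the real LSTM graph, just to show the loop shape.
w = tf.get_variable("w", shape=[], initializer=tf.zeros_initializer())
loss_op = tf.square(w - 3.0)
train_op = tf.train.AdamOptimizer(0.1).minimize(loss_op)
saver = tf.train.Saver()

os.makedirs("checkpoints", exist_ok=True)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    step = 0
    while True:                          # runs until you press Ctrl+C
        _, loss = sess.run([train_op, loss_op])
        step += 1
        if step % 500 == 0:
            # Writes checkpoints/model-500, checkpoints/model-1000, ...
            saver.save(sess, "checkpoints/model", global_step=step)
```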

Testing the model

Because you may not be able to train the model, I have included a file demo.mid that was generated using the version of the model that I trained.

Open GarageBand or Logic Pro and drag the demo.mid file into a drum track to hear what it sounds like. (Or just play the MP3 version.)

If you trained the model yourself, you can test it with:

python3 lstm.py sample checkpoints/model-XXXX

where XXXX is the iteration that gave the best results in training. In "sample" mode the script will create a file called generated.mid.
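Sampling produces a different pattern on every run because each next event is drawn from the network's output distribution rather than always taking the most likely one. Here is a hedged sketch of the idea: run_model below is a dummy stand-in for the real network, and I'm assuming the model predicts a drum note and a tick delta per step, matching the two lookup tables from the conversion script:

```python
import pickle
import numpy as np
import mido

ix_to_note = pickle.load(open("ix_to_note.p", "rb"))
ix_to_tick = pickle.load(open("ix_to_tick.p", "rb"))

def run_model(steps=64):
    # Stand-in for the trained LSTM: yields (note_probs, tick_probs)
    # pairs. These are uniform here; the real model's are learned.
    n, t = len(ix_to_note), len(ix_to_tick)
    for _ in range(steps):
        yield np.full(n, 1.0 / n), np.full(t, 1.0 / t)

def sample_index(probs):
    # Draw from the distribution instead of taking argmax, so runs differ.
    return np.random.choice(len(probs), p=probs)

track = mido.MidiTrack()
for note_probs, tick_probs in run_model():
    note = ix_to_note[sample_index(note_probs)]
    tick = ix_to_tick[sample_index(tick_probs)]
    # Channel 10 (index 9) is the General MIDI drum channel.
    track.append(mido.Message("note_on", note=note, velocity=100,
                              time=tick, channel=9))

mid = mido.MidiFile()
mid.tracks.append(track)
mid.save("generated.mid")
```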

Using the model with the iOS app

If you trained the model yourself, you first need to export the weights so they can be used in the app:

python3 lstm.py export checkpoints/model-XXXX

This outputs Wx.bin and Wy.bin. Copy these files to the Drummer/Resources directory to embed them in the iOS app.
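If you want to sanity-check the exported files, a reasonable guess is that they are raw arrays of 32-bit floats; check the export code in lstm.py to confirm the exact layout. Under that assumption you can inspect them like this:

```python
import numpy as np

# Assumption: the .bin files are raw little-endian float32 arrays.
Wx = np.fromfile("Wx.bin", dtype=np.float32)
Wy = np.fromfile("Wy.bin", dtype=np.float32)
print("Wx:", Wx.size, "floats;", "Wy:", Wy.size, "floats")
```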