
Neural Reverbs

Watch the full video on YouTube.

This video and audio were created with neural networks that have not seen any data. The video samples from a ProGAN, interpolating through the latent and parameter space of the generator. The audio samples from an LSTM to produce a MIDI file, slowly interpolating through the parameter space to achieve variation.
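The latent-space interpolation above amounts to drawing two random codes and walking a straight line between them, rendering one frame per step. A minimal sketch (the 512-dimensional latent size is an assumption, not taken from this repository):

```python
import torch

def lerp(z0, z1, steps):
    """Linear interpolation: one latent vector per output frame."""
    ts = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    return (1 - ts) * z0 + ts * z1

z0, z1 = torch.randn(512), torch.randn(512)
path = lerp(z0, z1, 20)  # shape (20, 512); feed one row to the generator per frame
```

Chaining several such segments between freshly drawn codes produces the continuous wandering seen in the video.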

Running the Code

This project was created with PyTorch and assumes access to a CUDA device, but it can be modified to run without CUDA.

Video Generation:

  • Create frames with ./
  • Create video with ffmpeg -framerate 20 -i frames/%06d.png -c:v libx264 -pix_fmt yuv420p -crf 23 vid.mp4

This portion uses the akanimax/pro_gan_pytorch implementation of ProGAN.
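Interpolating through the generator's *parameter* space (as opposed to its latent input) can be sketched by lerping every weight tensor toward a second random draw while holding the latent code fixed. The toy network below is a hypothetical stand-in for the pro_gan_pytorch Generator, whose real constructor and forward signature differ:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the untrained ProGAN generator.
toy_gen = nn.Sequential(
    nn.ConvTranspose2d(512, 64, 4),      # 1x1 latent -> 4x4 feature map
    nn.ReLU(),
    nn.ConvTranspose2d(64, 3, 4, 2, 1),  # 4x4 -> 8x8 RGB image
    nn.Tanh(),
)

z = torch.randn(1, 512, 1, 1)  # fixed latent code

# Start weights and a second random parameter set to interpolate toward.
start = [p.detach().clone() for p in toy_gen.parameters()]
target = [torch.randn_like(p) for p in toy_gen.parameters()]

frames = []
with torch.no_grad():
    for t in torch.linspace(0.0, 1.0, 5):
        for p, a, b in zip(toy_gen.parameters(), start, target):
            p.copy_((1 - t) * a + t * b)  # lerp every weight tensor
        frames.append(toy_gen(z))         # one image per step

print(frames[0].shape)  # torch.Size([1, 3, 8, 8])
```

Each frame would then be rescaled to [0, 255] and written to `frames/%06d.png` for the ffmpeg command above.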

Audio Generation:

  • Create MIDI with ./
  • Synthesize the MIDI with audio software like GarageBand or with a Python library like pretty_midi.
  • Add the audio to the video with ffmpeg -i vid.mp4 -i audio.mp3 -c:v libx264 -c:a copy out.mp4

This portion uses code from warmspringwinds/pytorch-rnn-sequence-generation-classification and the midi library vendored in this repository.


Thanks to Roger Iyengar for helpful suggestions.


Unless otherwise noted, the code in this repository is in the public domain. The code in pro_gan_pytorch and midi remains under its original licensing.