An implementation of MusicVAE made for the NES MDB in PyTorch.
Repository contents:

- `nesmdb24_exprsco`
- `nesmdb24_seprsco`
- `svgs`
- `.gitignore`
- `100_12_TR_song_dict`
- `12_12_song_dict`
- `24_12_song_dict`
- `36_12_song_dict`
- `40_12_TR_song_dict`
- `48_12_song_dict`
- `52_12_TR_song_dict`
- `52_24_TR_song_dict`
- `52_sample_10.wav`
- `52_sample_7.wav`
- `52_sample_8.wav`
- `52_sample_9.wav`
- `54_12_TR_song_dict`
- `64_12_TR_song_dict`
- `MIDI and Audio.ipynb`
- `MusicVAE.py`
- `MusicVAE_TF.py`
- `NES-MDB.ipynb`
- `VAE_Trainer.py`
- `checkpoint.py`
- `data_utils.py`
- `generate.py`
- `generate_36.py`
- `generate_tr.py`
- `input.md`
- `interpolation.py`
- `prepare_data.py`
- `preprocessing.py`
- `readme.md`
- `train.py`
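Among the scripts listed above, `interpolation.py` points at a hallmark of MusicVAE-style models: morphing between two songs by blending their latent codes and decoding each blend. The repo's actual implementation is not shown here; as a general illustration, the following is a minimal numpy sketch of linear interpolation in latent space, with all names and shapes hypothetical.

```python
import numpy as np

def interpolate_latents(z_a, z_b, steps):
    # Linearly blend two latent codes; decoding each blended code
    # with the trained decoder yields a gradual transition from
    # song A to song B.
    alphas = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - a) * z_a + a * z_b for a in alphas])

z_a = np.zeros(8)   # hypothetical latent code of song A (latent dim 8)
z_b = np.ones(8)    # hypothetical latent code of song B
path = interpolate_latents(z_a, z_b, steps=5)
print(path.shape)   # (5, 8): five latent codes from z_a to z_b
```

The endpoints of `path` are exactly `z_a` and `z_b`, so the first and last decoded outputs reproduce the source songs.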


# 8-bit VAE: A Latent Variable Model for NES Music

Xavier Garcia

This repository contains the code for 8-bit VAE, a latent variable model for Nintendo Entertainment System (NES) music. Before diving into the details, here are a couple of samples generated by the model:

Sample 1

Sample 2

You can read more about the model in the blog post. To use this code, first prepare the data by running `prepare_data.py`. Once the data is prepared, run `train.py` to train the model. To generate music, run the `generate_tr.py` script. If you want to train without the TR voice, modify the data accordingly and use `generate.py` to generate music.
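As background on the "latent variable" part of the model: a VAE's encoder produces a mean and log-variance per song, and latent codes are drawn via the reparameterization trick so that sampling stays differentiable, with a KL term regularizing the codes toward a standard normal. The sketch below illustrates just that machinery in plain numpy; it is not this repo's implementation, and the batch size and latent dimension are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # Sample z = mu + sigma * eps with eps ~ N(0, I); gradients can
    # flow through mu and log_var while randomness stays in eps.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # Closed-form KL(N(mu, sigma^2) || N(0, I)), the regularizer
    # added to the reconstruction loss in the VAE objective.
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1)

# Hypothetical encoder outputs for a batch of 4 songs, latent dim 8.
mu = rng.standard_normal((4, 8))
log_var = rng.standard_normal((4, 8))

z = reparameterize(mu, log_var)   # latent codes fed to the decoder
kl = kl_divergence(mu, log_var)   # one KL term per batch element
print(z.shape, kl.shape)          # (4, 8) (4,)
```

The KL term is nonnegative for any `mu` and `log_var`, and is zero exactly when the encoder's distribution matches the standard-normal prior.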
