# Vector Quantized Contrastive Predictive Coding for Template-based Music Generation

This is the companion GitHub repository for the paper *Vector Quantized Contrastive Predictive Coding for Template-based Music Generation*. Generated results are available on our accompanying website.
## Installation

Clone the repository, then run (we recommend using a virtualenv):

```
pip install -r requirements.txt
```
## How to use it

All the experiments reported in the paper can be reproduced with the configuration files located in `VQCPCB/configs`.
Encoders are trained independently from the decoder, in a self-supervised manner. To train a particular encoder, run

```
python main_encoder.py -t -c VQCPCB/configs/encoder_*.py
```

where `encoder_*` is the name of the configuration file.
Trained models are stored in `models/`. To observe the clusters learned by a trained encoder, run

```
python main_encoder.py -l -c models/encoder_*/config.py
```
To train a decoder on top of a particular encoder, run

```
python main_decoder.py -t -c VQCPCB/configs/decoder_*.py
```

after specifying the path to the trained encoder in the configuration file `VQCPCB/configs/decoder_*.py`:
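The exact key name may vary between configuration files; the fragment below is an illustrative sketch of what the decoder configuration might look like, not a verbatim excerpt from the repository.

```python
# Hypothetical excerpt from a decoder configuration file
# (VQCPCB/configs/decoder_*.py); key names and paths are illustrative.
config = {
    # Path to the directory of a previously trained encoder in models/
    'encoder_path': 'models/encoder_example/',
    # ... other decoder hyperparameters ...
}
```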
Variations of chorale excerpts, as well as complete re-harmonisations of all the chorales in our corpus, can be generated by running

```
python main_decoder.py -l -c models/decoder_*/config.py
```