# Vector Quantized Contrastive Predictive Coding for Template-based Music Generation

- Gaëtan Hadjeres, Sony CSL, Paris, France (gaetan.hadjeres@sony.com)
- Léopold Crestel, Sony CSL, Paris, France (leopold.crestel@sony.com)

This is the companion repository for the paper *Vector Quantized Contrastive Predictive Coding for Template-based Music Generation*. Results are available on our accompanying website.

## Installation

To install:

- Clone the repository.
- Run the following command (we recommend using a virtualenv):

      pip install -r requirements.txt
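
For reference, the whole installation can be scripted roughly as follows; the clone URL and directory name below are placeholders, not the actual repository address:

```bash
# Minimal installation sketch -- substitute the real clone URL and directory name
git clone https://github.com/<user>/<repo>.git
cd <repo>

# Optional but recommended: isolate the dependencies in a virtualenv
python3 -m venv .venv
source .venv/bin/activate

# Install the Python dependencies listed by the project
pip install -r requirements.txt
```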
    

## How to use it

All the experiments reported in the paper can be reproduced using the configuration files located in `VQCPCB/configs`.

Encoders are trained independently of the decoders, in a self-supervised manner. To train a particular encoder, run the following command

    python main_encoder.py -t -c VQCPCB/configs/encoder_*.py

where `encoder_*.py` is the name of the configuration file.
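
For example, assuming a configuration file named `encoder_example.py` exists in `VQCPCB/configs/` (a hypothetical name used here for illustration), the invocation would be:

```bash
# 'encoder_example.py' is a placeholder -- use one of the files actually present in VQCPCB/configs/
python main_encoder.py -t -c VQCPCB/configs/encoder_example.py
```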

Trained models are stored in `models/`. To observe the clusters learned by a trained encoder, you can run the command

    python main_encoder.py -l -c models/encoder_*/config.py

To train a decoder for a particular encoder, you can run

    python main_decoder.py -t -c VQCPCB/configs/decoder_*.py

after specifying the path to the trained encoder in the configuration file `VQCPCB/configs/decoder_*.py`:

    'config_encoder': 'models/encoder_*/config.py',

Variations of chorale excerpts, as well as complete re-harmonisations of all the chorales in our corpus, can be generated by running

    python main_decoder.py -l -c models/decoder_*/config.py
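
Putting the steps together, a complete train-and-generate workflow might look like the sketch below; the configuration and model directory names are hypothetical placeholders following the `encoder_*` / `decoder_*` pattern above:

```bash
# End-to-end sketch with placeholder configuration names

# 1. Train an encoder (self-supervised)
python main_encoder.py -t -c VQCPCB/configs/encoder_example.py

# 2. Inspect the clusters learned by the trained encoder
python main_encoder.py -l -c models/encoder_example/config.py

# 3. Edit VQCPCB/configs/decoder_example.py so that
#    'config_encoder' points to 'models/encoder_example/config.py',
#    then train the decoder
python main_decoder.py -t -c VQCPCB/configs/decoder_example.py

# 4. Generate variations and re-harmonisations with the trained decoder
python main_decoder.py -l -c models/decoder_example/config.py
```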
