Auto-mask Music Generative Model via EC2-VAE Disentanglement

This repository accompanies our paper at the IEEE 14th International Conference on Semantic Computing (ICSC 2020): link

We integrate EC2-VAE into a conditional generative model that lets users generate melodies while controlling rhythm patterns, chord progressions, and even extra chord function labels.
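For intuition, here is a minimal, self-contained sketch of the disentangled-conditioning idea. Everything in it is an illustrative assumption: the class name `ToyEC2VAE`, the feature dimensions, and the `encode`/`decode` signatures are placeholders, not the actual API of the model in `vae`.

```python
import torch
import torch.nn as nn

# Toy sketch of EC2-VAE-style disentanglement (illustrative only; the real
# model in vae/ is more involved). A melody is encoded into separate pitch
# and rhythm latents; the decoder rebuilds a melody from a chosen
# (pitch latent, rhythm latent, chord condition) triple.
class ToyEC2VAE(nn.Module):
    def __init__(self, melody_dim=130, chord_dim=12, z_dim=128):
        super().__init__()
        self.pitch_enc = nn.Linear(melody_dim + chord_dim, z_dim)
        self.rhythm_enc = nn.Linear(melody_dim, z_dim)
        self.dec = nn.Linear(2 * z_dim + chord_dim, melody_dim)

    def encode(self, melody, chords):
        z_pitch = self.pitch_enc(torch.cat([melody, chords], dim=-1))
        z_rhythm = self.rhythm_enc(melody)
        return z_pitch, z_rhythm

    def decode(self, z_pitch, z_rhythm, chords):
        return self.dec(torch.cat([z_pitch, z_rhythm, chords], dim=-1))

vae = ToyEC2VAE()
melody = torch.randn(1, 32, 130)   # 32 frames of (hypothetical) melody features
chords = torch.randn(1, 32, 12)    # chroma-style chord conditions

z_pitch, z_rhythm = vae.encode(melody, chords)
new_rhythm = torch.randn_like(z_rhythm)               # substitute a new rhythm code
generated = vae.decode(z_pitch, new_rhythm, chords)   # melody with controlled rhythm
print(generated.shape)  # torch.Size([1, 32, 130])
```

The point of the sketch is the control mechanism: because rhythm and pitch live in separate latent codes, replacing one code while keeping the others fixed steers the corresponding attribute of the generated melody.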

Repository structure:

  • processed_data: the Nottingham dataset processed into EC2-VAE latent vector sequences; due to GitHub's 100 MB file size limit, some files are missing here (see the loading sketch after this list).
  • vae: the EC2-VAE model.
  • AmMGM_model_decode.ipynb: shows how to use the trained model parameters to generate music from the train/valid/test datasets.
  • model_mask_cond: the conditional generative model.
  • train_AmMGM: the model training file.
  • result: the vae_nottingham_output, model_generation_out, and sample_for_presentation outputs.
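As a rough sketch of how the processed latent sequences could be loaded and inspected, assuming NumPy arrays on disk (the file name and array layout below are guesses, not the repository's actual format; check processed_data before relying on this):

```python
import numpy as np

# Hypothetical path and layout; the actual files in processed_data/ may differ.
latents = np.load("processed_data/nottingham_latents.npy")
print(latents.shape)  # e.g. (num_segments, z_dim): one EC2-VAE latent per segment
```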

We do not provide the trained parameters on GitHub. If you want the AmMGM and EC2-VAE parameters we trained for this model, check out the link here.
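Once downloaded, loading the checkpoints would presumably follow the standard PyTorch pattern. The file names below are placeholders, not the actual names of the released checkpoints:

```python
import torch

# Placeholder checkpoint names; substitute the actual files from the link above.
vae_state = torch.load("ec2vae_params.pt", map_location="cpu")
model_state = torch.load("ammgm_params.pt", map_location="cpu")

# `vae` and `model` would be instances of the classes defined in
# vae/ and model_mask_cond, respectively:
# vae.load_state_dict(vae_state)
# model.load_state_dict(model_state)
```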

Credit

Please cite this paper if you build on this work for improvements or further research.

@inproceedings{amg-ec2vae-icsc,
  author    = {Ke Chen and Gus Xia and Shlomo Dubnov},
  title     = {Continuous Melody Generation via Disentangled Short-Term Representations and Structural Conditions},
  booktitle = {{IEEE} 14th International Conference on Semantic Computing, {ICSC}},
  pages     = {128--135},
  publisher = {{IEEE}},
  year      = {2020},
  address   = {San Diego, CA, USA}
}
