Perceiver-Music-Generation

Generate pop music with a Perceiver-AR model, based on the implementation by lucidrains.

A copy mechanism is introduced to enhance the rhythm of the generated music.

1. Requirements

torch == 1.11.0

transformers == 4.19.4

pyarrow == 8.0.0
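
To install these pinned versions, something like the following should work (assuming a Python 3 environment with pip):

$ pip install torch==1.11.0 transformers==4.19.4 pyarrow==8.0.0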

2. Dataset

Download the Magenta MAESTRO v2.0.0 piano MIDI dataset and place the files under the directory:

./data

The dataset is pre-processed with the MIDI neural processor. The processing code is already integrated in this repository, so you only need to run:

$ python preprocess.py 

Other MIDI datasets are also supported; the sketch below shows how a custom folder of MIDI files could be encoded.
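
As a rough sketch, assuming the bundled processor exposes an encode_midi function as in the original midi-neural-processor project (the module name and paths below are assumptions, not this repository's guaranteed layout):

# Sketch: encode a folder of MIDI files into token sequences.
# Assumes the bundled MIDI processor exposes `encode_midi`, as in the
# original midi-neural-processor project; adjust to this repository's
# actual module layout.
import os
import pickle

from processor import encode_midi  # assumption: bundled processor module

def preprocess_folder(midi_dir, out_dir):
    os.makedirs(out_dir, exist_ok=True)
    for name in os.listdir(midi_dir):
        if not name.lower().endswith((".mid", ".midi")):
            continue
        tokens = encode_midi(os.path.join(midi_dir, name))  # list of event ids
        with open(os.path.join(out_dir, name + ".pickle"), "wb") as f:
            pickle.dump(tokens, f)

preprocess_folder("./data", "./data_processed")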

3. Training

$ python train.py --data_dir [data path] --ckpt_dir [ckpt path]
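
Under the hood, train.py builds on the Perceiver-AR implementation from lucidrains' perceiver-ar-pytorch package. A minimal sketch of one training step with that API (the hyperparameters and vocabulary size below are illustrative assumptions, not the repository's actual settings):

# One training step with lucidrains' perceiver-ar-pytorch.
# Hyperparameters are illustrative; the real values live in train.py.
import torch
import torch.nn.functional as F
from perceiver_ar_pytorch import PerceiverAR

model = PerceiverAR(
    num_tokens = 388,           # assumption: MIDI-like event vocabulary size
    dim = 512,
    depth = 8,
    heads = 8,
    dim_head = 64,
    max_seq_len = 4096,         # total context length
    cross_attn_seq_len = 3072,  # prefix handled by cross-attention
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

seq = torch.randint(0, 388, (1, 4097))   # dummy token batch
inputs, targets = seq[:, :-1], seq[:, 1:]

optimizer.zero_grad()
logits = model(inputs)                    # (1, 4096 - 3072, 388)
loss = F.cross_entropy(
    logits.transpose(1, 2),               # (batch, vocab, positions)
    targets[:, -logits.shape[1]:],        # labels for the predicted tail
)
loss.backward()
optimizer.step()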

4. Inference

We provide a trained checkpoint on Google Drive. Download it, place it in the ckpt_dir, and run:

$ python generate.py --data_dir [data path] --ckpt_dir [ckpt path] --output_dir [output path]
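
Generation is autoregressive sampling from the model. A sketch with top-k sampling, assuming a trained PerceiverAR model as in the training sketch above (the helper name and constants here are assumptions):

# Sketch: top-k autoregressive sampling from a trained model.
import torch

@torch.no_grad()
def sample(model, prime, steps, k=32):
    # prime: (1, n) token ids; n must exceed the model's cross_attn_seq_len
    seq = prime
    for _ in range(steps):
        logits = model(seq[:, -4096:])[:, -1]   # next-token logits
        topk = logits.topk(k)
        probs = torch.softmax(topk.values, dim=-1)
        next_token = topk.indices.gather(-1, torch.multinomial(probs, 1))
        seq = torch.cat([seq, next_token], dim=-1)
    return seq

The sampled token sequence can then be decoded back to a .mid file with the bundled MIDI processor.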

Some well-generated music samples can also be found on Google Drive.
