# VirtuosoNet

Our research project is developing a system that generates expressive piano performances, or simply an 'AI pianist'. The system reads a music score in MusicXML and generates a human-like performance as a MIDI file.

This repository contains the PyTorch code and pre-trained models for *Graph Neural Network for Music Score Data and Modeling Expressive Piano Performance* (ICML 2019).

This documentation is currently a work in progress. Contact: jdasam@kaist.ac.kr

## How to generate a performance MIDI from MusicXML

  1. Put your MusicXML file in a folder; we recommend a subfolder of ./test_pieces/. The filename should be 'musicxml_cleaned.musicxml', 'xml.xml', or 'musicxml.musicxml'. (A quick way to check this is sketched after the run command below.)

  2. Select the composer of the piece with -comp=. There are 16 composers in our data set: Bach, Balakirev, Beethoven, Brahms, Chopin, Debussy, Glinka, Haydn, Liszt, Mozart, Prokofiev, Rachmaninoff, Ravel, Schubert, Schumann, and Scriabin. The composer you enter does not have to be the actual composer of the input piece. We recommend one of Bach, Beethoven, Chopin, Haydn, Liszt, Mozart, Ravel, or Schubert, because they have more data than the others. (Example: -comp=Mozart; start with a capital letter.)

  3. Select the model with -code=. The model codes are: isgn (proposed method), vnet (VirtuosoNet), gvnet (Graph VirtuosoNet), and baseline (LSTM). (Example: -code=isgn)

  - (Optional) Select the initial tempo of the piece in quarter notes per minute with -tempo=. If you do not enter a tempo, the tempo in the MusicXML file is used.

  4. Run the Python script:

```
python3 model_run.py -mode=test -code=isgn -path=./test_pieces/bwv_858/ -comp=Bach -tempo=60
```
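For step 1, the snippet below checks that a piece folder actually contains a score under one of the accepted filenames. It is a minimal sketch, not part of this repository; find_score is a hypothetical helper, and only the three filenames themselves come from the documentation above.

```python
# Minimal sketch (not part of this repository): locate the score file that
# model_run.py expects inside a piece folder. The accepted filenames come
# from step 1 above; everything else is illustrative.
import os

ACCEPTED_NAMES = ['musicxml_cleaned.musicxml', 'xml.xml', 'musicxml.musicxml']

def find_score(folder):
    """Return the path to the first accepted MusicXML file, or None."""
    for name in ACCEPTED_NAMES:
        path = os.path.join(folder, name)
        if os.path.isfile(path):
            return path
    return None

print(find_score('./test_pieces/bwv_858/'))  # the folder used in the example above
```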

  5. You can use -mode=testAll to generate performances for the pre-defined test set, which is listed in model_constants.py. This mode encodes an emotion cue from the pre-recorded performances in the emotionNet folder and, for each piece in the list, generates a performance with the encoded z for each emotion. 'OR' represents the original, or natural, emotion of the piece.

```
python3 model_run.py -mode=testAll -code=isgn
```

You can also use this to generate performances for the pre-defined test set only.

  6. Check the output files. They are saved in the ./test_result/ folder. 'z0' in a filename means the latent vector z was sampled from a standard normal distribution.

  - Caution on pedal: we add the sustain pedal and soft pedal to the MIDI file as control change (CC) events 64 and 67. Depending on your MIDI player, the pedal may be applied in different ways. For example, Logic Pro X activates the pedal whenever the value is larger than zero, while an actual Disklavier's pedal threshold is about 64. In that case, our performance will sound too 'wet', with too much pedal. We suggest the option -bp=true (--boolPedal), which sets the value of pedal events below a certain threshold to zero.

If the MIDI player cannot handle pedal at all, the articulation of our notes will sound extremely short, since the performances we used for the training set paid little attention to the articulation of notes held by the pedal.
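If your player activates the pedal at any nonzero value (like Logic Pro X above), one workaround besides -bp=true is to post-process the generated MIDI yourself. The sketch below is ours, not part of the repository; it assumes the mido library, a hypothetical output filename, and a threshold of 64 following the Disklavier behavior described above.

```python
# Minimal post-processing sketch, assuming the mido library (pip install mido).
# Zeroes out sustain-pedal (CC 64) values below a Disklavier-like threshold,
# roughly mimicking what the -bp=true option does at generation time.
import mido

THRESHOLD = 64  # approximate Disklavier pedal threshold mentioned above

midi = mido.MidiFile('./test_result/output.mid')  # hypothetical output name
for track in midi.tracks:
    for msg in track:
        if msg.type == 'control_change' and msg.control == 64 and msg.value < THRESHOLD:
            msg.value = 0  # treat sub-threshold pedal as fully released
midi.save('./test_result/output_boolpedal.mid')
```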

## How to train the model

We have uploaded a toy training set (icml_haydn_set.dat, icml_haydn_set_stat.dat, icml_haydn_set_test.dat). You can train a model by selecting this data set with -data= and using the training mode. You can change the model hyperparameters in model_parameters.py.

```
python3 model_run.py -mode=train -code=isgn_test -data=icml_haydn_set
```
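The pre-trained weights ship as PyTorch checkpoints (e.g. prime_isgn_best.pth.tar). If you want a quick look inside one, a plain torch.load works; the contents of the checkpoint are an assumption on our part, so the keys you see may differ from the example comment.

```python
# Minimal sketch: inspect a bundled checkpoint with plain PyTorch.
# We assume nothing beyond the file being readable by torch.load; the
# actual keys are whatever the authors saved (e.g. a state_dict, epoch).
import torch

checkpoint = torch.load('prime_isgn_best.pth.tar', map_location='cpu')
print(type(checkpoint))
if isinstance(checkpoint, dict):
    print(list(checkpoint.keys()))
```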
