This repo contains the code implementation for the paper Pianist Identification Using Convolutional Neural Networks.
Training was monitored with W&B. Pre-trained models and artifacts can be downloaded through the given link to the project.
To re-train the models, please contact me for the data, then run the following command:
python main.py --cuda_devices YOUR_CUDA_DEVICES --mode train
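A minimal sketch of how an entry point like `main.py` might handle these two flags, assuming an argparse-based CLI (the flag names come from the command above; the defaults and choices are assumptions, not the repo's actual code):

```python
# Hypothetical sketch of the flag handling in main.py; only the flag names
# (--cuda_devices, --mode) are taken from the command above.
import argparse
import os

def parse_args(argv=None):
    parser = argparse.ArgumentParser(description="Pianist identification")
    parser.add_argument("--cuda_devices", type=str, default="0",
                        help="Comma-separated GPU ids, e.g. '0,1'")
    parser.add_argument("--mode", type=str, default="train",
                        choices=["train", "test"],
                        help="Train a new model or evaluate a checkpoint")
    return parser.parse_args(argv)

args = parse_args(["--cuda_devices", "0,1", "--mode", "train"])
# Restrict visible GPUs before any framework initialises CUDA.
os.environ["CUDA_VISIBLE_DEVICES"] = args.cuda_devices
print(f"mode={args.mode}, devices={args.cuda_devices}")
```

Setting `CUDA_VISIBLE_DEVICES` from the flag is a common pattern for selecting GPUs before the deep-learning framework initialises.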
Checkpoints trained with different input lengths and numbers of features are available here.
In this study, we used piano MIDI performances from the ATEPP dataset. We have also attempted this task with the following datasets:
The MAESTRO dataset does not provide performer information for each performance. We added performer names and nationalities to the metadata by crawling the website of the International E-Piano Competition and verifying the results manually. The results are provided here.
During the research, we found around a hundred audio recordings that were wrongly labelled in the discography given by MazurkaBL. Using a cover song detection algorithm followed by manual verification, we created a clean version of the discography, provided here.
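The cleaning step above pairs recordings of the same piece automatically before manual checking. The specific cover song detection algorithm used is not named here; the toy sketch below only illustrates the general idea with a stdlib sequence-similarity measure (the pitch sequences and the 0.8 threshold are made up for the example):

```python
# Illustrative only: NOT the algorithm used in the paper. Two performances are
# flagged as the same piece when their MIDI pitch sequences are similar enough.
from difflib import SequenceMatcher

def same_piece(pitches_a, pitches_b, threshold=0.8):
    """Return True if two pitch sequences likely encode the same piece."""
    ratio = SequenceMatcher(None, pitches_a, pitches_b).ratio()
    return ratio >= threshold

take_1 = [60, 62, 64, 65, 67, 69, 71, 72]
take_2 = [60, 62, 64, 65, 67, 69, 71, 72, 74]  # same piece, extra ornament note
other  = [40, 45, 43, 48, 41, 46, 44, 49]      # unrelated material

print(same_piece(take_1, take_2))  # True
print(same_piece(take_1, other))   # False
```

A real cover song detector would work on tempo- and key-invariant features rather than raw pitch lists, but the flag-then-verify workflow is the same.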
We applied the piano transcription algorithm by Kong et al. to both datasets (cleaned versions). The transcribed MIDI files are available here.
@ARTICLE{2023arXiv231000699T,
author = {{Tang}, Jingjing and {Wiggins}, Geraint and {Fazekas}, Gyorgy},
title = "{Pianist Identification Using Convolutional Neural Networks}",
journal = {arXiv e-prints},
keywords = {Computer Science - Sound, Electrical Engineering and Systems Science - Audio and Speech Processing},
year = 2023,
month = oct,
eid = {arXiv:2310.00699},
pages = {arXiv:2310.00699},
doi = {10.48550/arXiv.2310.00699},
archivePrefix = {arXiv},
eprint = {2310.00699},
primaryClass = {cs.SD},
adsurl = {https://ui.adsabs.harvard.edu/abs/2023arXiv231000699T},
adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}
Jingjing Tang: jingjing.tang@qmul.ac.uk