aguai/crepe (forked from marl/crepe)

CREPE: A Convolutional Representation for Pitch Estimation -- pre-trained model (ICASSP 2018)
CREPE Pitch Tracker

CREPE is a monophonic pitch tracker based on a deep convolutional neural network that operates directly on the time-domain waveform. As of 2018, CREPE is state-of-the-art, outperforming popular pitch trackers such as pYIN and SWIPE.

Further details are provided in the following paper:

CREPE: A Convolutional Representation for Pitch Estimation
Jong Wook Kim, Justin Salamon, Peter Li, Juan Pablo Bello.
Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2018.

We kindly request that academic publications making use of CREPE cite the aforementioned paper.

Using CREPE

CREPE requires a few Python dependencies as specified in requirements.txt. To install them, run the following command in your Python environment:

$ pip install -r requirements.txt

This repository includes a pre-trained version of the CREPE model for easy use. To estimate the pitch of audio_file.wav, run:

$ python crepe.py audio_file.wav

The resulting audio_file.f0.csv contains three columns: the first holds timestamps (a 10 ms hop size is used), the second the predicted fundamental frequency in Hz, and the third the voicing confidence, i.e. the model's confidence that a pitch is present:

time,frequency,confidence
0.00,185.616,0.907112
0.01,186.764,0.844488
0.02,188.356,0.798015
0.03,190.610,0.746729
0.04,192.952,0.771268
0.05,195.191,0.859440
0.06,196.541,0.864447
0.07,197.809,0.827441
0.08,199.678,0.775208
...
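The CSV output is straightforward to post-process. A minimal sketch (assuming the column layout shown above) that parses the file contents and keeps only the frames whose voicing confidence exceeds a threshold:

```python
import csv
import io

def load_f0(csv_text, confidence_threshold=0.8):
    """Parse CREPE's time,frequency,confidence CSV and keep only the
    frames whose voicing confidence meets the threshold."""
    reader = csv.DictReader(io.StringIO(csv_text))
    voiced = []
    for row in reader:
        conf = float(row["confidence"])
        if conf >= confidence_threshold:
            voiced.append((float(row["time"]), float(row["frequency"]), conf))
    return voiced

# Example using the first rows shown above:
sample = """time,frequency,confidence
0.00,185.616,0.907112
0.01,186.764,0.844488
0.02,188.356,0.798015
"""
voiced = load_f0(sample)  # the third frame falls below the 0.8 threshold
```

The threshold value here is illustrative; a suitable cutoff depends on the material and how aggressively unvoiced frames should be discarded.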

The script can optionally save the model's output activation matrix to an .npy file (--save-activation); with the 10 ms hop size, the matrix has dimensions (n_frames, 360), where the 360 pitch bins cover 20 cents each. The script can also save a plot of the activation matrix (--save-plot) to audio_file.activation.png, optionally overlaid with a visual representation of the model's voicing detection (--plot-voicing). Here's an example plot of the activation matrix (without the voicing overlay) for an excerpt of male singing voice:

[Figure: activation matrix ("salience") plot]
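Since each of the 360 bins is a 20-cent step on a logarithmic frequency axis, a rough frequency estimate can be read off a saved activation matrix by locating the strongest bin per frame. A minimal sketch, assuming a hypothetical reference frequency `F_REF` for bin 0 (the actual bin-to-frequency mapping is defined in crepe.py):

```python
import numpy as np

F_REF = 32.70          # assumed Hz value for bin 0; see crepe.py for the real mapping
CENTS_PER_BIN = 20.0   # 360 bins x 20 cents = 6 octaves

def bin_to_hz(bin_index, f_ref=F_REF):
    """Convert a pitch-bin index to Hz on a 20-cent logarithmic grid."""
    return f_ref * 2.0 ** (bin_index * CENTS_PER_BIN / 1200.0)

def decode_activation(activation):
    """Pick the strongest bin per frame from an (n_frames, 360) matrix."""
    bins = activation.argmax(axis=1)
    return np.array([bin_to_hz(b) for b in bins])

# Toy activation: two frames with energy concentrated at bins 60 and 120
act = np.zeros((2, 360))
act[0, 60] = 1.0   # 60 bins = 1200 cents = one octave above F_REF
act[1, 120] = 1.0  # two octaves above F_REF
freqs = decode_activation(act)
```

Note that crepe.py itself refines the argmax with a local weighted average over neighboring bins, so this hard-argmax decoding is only an approximation.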

For batch processing of files, you can provide a folder path instead of a file path:

$ python crepe.py audio_folder

The script will process all WAV files found inside the folder.
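Internally, batch mode amounts to enumerating the WAV files in the given folder and running the same per-file processing on each. A sketch of the file-discovery step only (the `run_crepe` call below is a hypothetical placeholder, not a function exported by this repository):

```python
from pathlib import Path

def find_wav_files(folder):
    """Return the .wav files in a folder (non-recursive), sorted for a stable order."""
    return sorted(p for p in Path(folder).iterdir()
                  if p.is_file() and p.suffix.lower() == ".wav")

# Example: build the worklist for a hypothetical audio_folder
# for wav in find_wav_files("audio_folder"):
#     run_crepe(wav)  # placeholder for the per-file pitch estimation
```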

For more information on the usage, please refer to the help message:

$ python crepe.py --help

Please Note

  • The current version only supports WAV files as input.
  • The model is trained on 16 kHz audio; if the input audio has a different sample rate, it will first be resampled to 16 kHz using resampy.
  • While in principle the code should run with any Keras backend, it has only been tested with the TensorFlow backend. The model was trained using Keras 2.1.5 and TensorFlow 1.6.0.
  • Prediction is significantly faster if Keras (and the corresponding backend) is configured to run on GPU.
  • The provided model is trained on the following datasets, which consist of vocal and instrumental audio, and is therefore expected to work best on those types of signals.
    • MIR-1K [1]
    • Bach10 [2]
    • RWC-Synth [3]
    • MedleyDB [4]
    • MDB-STEM-Synth [5]
    • NSynth [6]
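To illustrate the resampling step mentioned above: resampy performs band-limited sinc interpolation, but the idea can be sketched with a simplified linear-interpolation stand-in (this naive version would introduce aliasing on wideband signals and is for illustration only):

```python
import numpy as np

def naive_resample(x, sr_orig, sr_new):
    """Simplified resampling by linear interpolation. The actual code uses
    resampy's band-limited sinc interpolation, which properly suppresses
    aliasing when downsampling."""
    duration = len(x) / sr_orig
    n_new = int(round(duration * sr_new))
    t_old = np.arange(len(x)) / sr_orig
    t_new = np.arange(n_new) / sr_new
    return np.interp(t_new, t_old, x)

# 0.5 s of a 440 Hz sine at 44.1 kHz, brought down to the model's 16 kHz rate
sr_orig, sr_new = 44100, 16000
t = np.arange(int(0.5 * sr_orig)) / sr_orig
x = np.sin(2 * np.pi * 440 * t)
y = naive_resample(x, sr_orig, sr_new)
```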

References

[1] C.-L. Hsu et al. "On the Improvement of Singing Voice Separation for Monaural Recordings Using the MIR-1K Dataset", IEEE Transactions on Audio, Speech, and Language Processing. 2009.

[2] Z. Duan et al. "Multiple Fundamental Frequency Estimation by Modeling Spectral Peaks and Non-Peak Regions", IEEE Transactions on Audio, Speech, and Language Processing. 2010.

[3] M. Mauch et al. "pYIN: A Fundamental Frequency Estimator Using Probabilistic Threshold Distributions", Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). 2014.

[4] R. M. Bittner et al. "MedleyDB: A Multitrack Dataset for Annotation-Intensive MIR Research", Proceedings of the International Society for Music Information Retrieval (ISMIR) Conference. 2014.

[5] J. Salamon et al. "An Analysis/Synthesis Framework for Automatic F0 Annotation of Multitrack Datasets", Proceedings of the International Society for Music Information Retrieval (ISMIR) Conference. 2017.

[6] J. Engel et al. "Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders", arXiv preprint: 1704.01279. 2017.
