NeuroDrum - A neural network based percussion synthesiser

NeuroDrum

António Ramires, Pritish Chandna, Xavier Favory, Emilia Gomez, Xavier Serra

Music Technology Group, Universitat Pompeu Fabra, Barcelona

This repository contains the source code for NeuroDrum, a parametric percussion synthesiser based on the Wave-U-Net architecture. The synthesiser is controlled using only high-level timbral characteristics: the envelope and the sound's hardness, depth, brightness, roughness, boominess, warmth and sharpness.

Interactive demo available as a Google Colab Notebook.

Selected sound examples available at the website.

A preprint is available at https://arxiv.org/abs/1911.11853.

Installation

To install NeuroDrum and its dependencies, clone the repository and use:
pip install -r percussive_synth/requirements.txt 

Then, download the pre-trained model weights, which you will point to during the generation process.

Generation

Sounds can be generated within Python.

The following example shows how to generate and save a sound with NeuroDrum:

# Import the required modules and create a synthesiser instance;
# this part only needs to be run once
import soundfile as sf
import models
model = models.PercSynth()

# Load one of the pre-trained models
sess = model.load_sess(log_dir="/percussive_synth/log_free_full/")

# Generate the sound:
# envelope should have 16000 elements with values from 0 to 1
# parameters should be an array with values from 0 to 1, one per feature,
# in the following order:
# ['brightness', 'hardness', 'depth', 'roughness', 'boominess', 'warmth', 'sharpness']
output = model.get_output(envelope, parameters, sess)

# Save the result as a 16 kHz WAV file
sf.write('audio.wav', output, 16000)
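The `envelope` and `parameters` inputs are plain arrays of values in [0, 1]. A minimal sketch of constructing them with NumPy (the decay shape and the parameter values below are arbitrary illustrations, not values from the paper):

```python
import numpy as np

SAMPLE_RATE = 16000  # the model works on 1-second sounds at 16 kHz

# Loudness envelope: 16000 values in [0, 1]; here a simple
# exponential decay resembling a percussive hit
t = np.linspace(0.0, 1.0, SAMPLE_RATE)
envelope = np.exp(-6.0 * t)

# One value in [0, 1] per timbral feature, in this order:
feature_names = ['brightness', 'hardness', 'depth', 'roughness',
                 'boominess', 'warmth', 'sharpness']
parameters = np.array([0.8, 0.9, 0.3, 0.2, 0.4, 0.5, 0.7])
```

These arrays can then be passed to `model.get_output(envelope, parameters, sess)` as in the example above.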

Training

If you would like to train a model with a private dataset, take the following steps:

1. Resample the sounds to a 16 kHz sample rate and cut or pad them to exactly 1 second. Then analyse the dataset using the ac-audio-extractor.

2. Prepare the data for use: set wav_dir and ana_dir in config.py and run prep_data.py.
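The length normalisation above can be sketched as follows (a minimal NumPy example that assumes the audio is already mono and at 16 kHz; the function name is illustrative, not part of the repository):

```python
import numpy as np

SAMPLE_RATE = 16000  # target sample rate; each sound must be exactly 1 second

def fix_length(audio, length=SAMPLE_RATE):
    """Cut or zero-pad a mono signal to exactly `length` samples."""
    audio = np.asarray(audio, dtype=np.float32)
    if len(audio) >= length:
        return audio[:length]          # cut sounds longer than 1 second
    return np.pad(audio, (0, length - len(audio)))  # zero-pad short sounds

short = fix_length(np.ones(4000))   # padded with zeros to 16000 samples
long_ = fix_length(np.ones(20000))  # truncated to 16000 samples
```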

Once set up, you can run the following command to train the model:

python main.py -t

To generate examples from the validation set from the command line, use:

python main.py -e

Acknowledgments

This work is partially funded by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No765068, MIP-Frontiers.

This work is partially supported by the Towards Richer Online Music Public-domain Archives (TROMPA) (H2020 770376) European project.

The Titan X GPU used for this research was donated by the NVIDIA Corporation.
