Unconditional Audio Generation with GAN and Cycle Regularization

This repository contains the code and samples for our paper "Unconditional Audio Generation with GAN and Cycle Regularization", accepted to INTERSPEECH 2020. The goal is to unconditionally generate singing voices, speech, and instrument sounds with a GAN.

The model is implemented with PyTorch.

Paper

Unconditional Audio Generation with GAN and Cycle Regularization

Install dependencies

pip install -r requirements.txt

Download pretrained parameters

The pretrained parameters can be downloaded here: Pretrained parameters

Unzip it so that the models folder is in the current directory.

Alternatively, use the following script:

bash download_and_unzip_models.sh

Usage

Display the options

python generate.py -h

Generate singing voices

The following commands are equivalent.

python generate.py
python generate.py -data_type singing -arch_type hc --duration 10 --num_samples 5
python generate.py -d singing -a hc --duration 10 -ns 5

Generate speech

python generate.py -d speech

Generate piano sounds

python generate.py -d piano

Generate violin sounds

python generate.py -d violin

Vocoder

We use MelGAN as the vocoder. The trained vocoders are included in models.zip.

For singing, piano, and violin, we modified MelGAN to include a GRU in the vocoder architecture; we found that this change improves audio quality. For speech, we directly use the pretrained LJ Speech vocoder from MelGAN.
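The released code defines the exact architecture; as a rough illustration of the idea only (recurrence over mel frames in front of the convolutional upsampling stack), here is a minimal PyTorch sketch. All layer names and sizes are hypothetical, not the repository's actual implementation.

```python
import torch
import torch.nn as nn

class GRUMelEncoder(nn.Module):
    """Sketch: a GRU over mel frames feeding a MelGAN-style conv stack.

    Hypothetical sizes (80 mel bins, 256 hidden units); NOT the
    repository's actual architecture.
    """
    def __init__(self, n_mels=80, hidden=256):
        super().__init__()
        self.gru = nn.GRU(n_mels, hidden, batch_first=True)
        self.proj = nn.Conv1d(hidden, n_mels, kernel_size=1)

    def forward(self, mel):           # mel: (batch, n_mels, frames)
        x = mel.transpose(1, 2)       # (batch, frames, n_mels) for the GRU
        x, _ = self.gru(x)            # recurrence across time frames
        x = x.transpose(1, 2)         # back to (batch, hidden, frames)
        return self.proj(x)           # (batch, n_mels, frames)

mel = torch.randn(1, 80, 100)         # one clip, 80 mel bins, 100 frames
out = GRUMelEncoder()(mel)            # same time resolution as the input
```

The GRU lets each frame's features depend on earlier frames, which is one plausible way recurrence could help smooth the generated waveform over time.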

Train your own model

Use the following steps to train your own models.

  1. (Singing only) Separate the singing voices from the audio you collect. We used a separation model we developed in-house; open-source alternatives such as Open-Unmix or Spleeter also work.

  2. scripts/collect_audio_clips.py

  3. scripts/extract_mel.py

  4. scripts/make_dataset.py

  5. scripts/compute_mean_std.mel.py

  6. scripts/train.*.py
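Step 5 computes per-mel-bin statistics used to normalize the training data. The script's actual I/O and file layout may differ; this sketch only shows the computation, assuming each clip's mel spectrogram is an array of shape (n_mels, T):

```python
import numpy as np

def compute_mel_mean_std(mel_list):
    """Per-mel-bin mean and std across all frames of all clips.

    mel_list: iterable of arrays, each of shape (n_mels, T_i).
    Returns (mean, std), each of shape (n_mels,).
    """
    frames = np.concatenate(mel_list, axis=1)   # (n_mels, total frames)
    mean = frames.mean(axis=1)
    std = frames.std(axis=1) + 1e-8             # avoid division by zero
    return mean, std

def normalize(mel, mean, std):
    """Applied before training; invert it after generation."""
    return (mel - mean[:, None]) / std[:, None]

# illustrative data: two clips of different lengths
rng = np.random.default_rng(0)
mels = [rng.standard_normal((80, 50)), rng.standard_normal((80, 30))]
mean, std = compute_mel_mean_std(mels)
```

Normalizing per mel bin (rather than globally) keeps quiet and loud frequency bands on a comparable scale for the generator.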

Generation with custom models

To generate with your own trained model, set the param_fp variable in generate.py to either params.Generator.best_Convergence.torch or params.Generator.latest.torch in the trained model's folder. Files with the extensions .torch and .pt both contain saved parameters.
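Such files are ordinary torch.save checkpoints, so they round-trip with torch.load. A minimal sketch, using a stand-in module and filename (the real generator class lives in this repository):

```python
import torch
import torch.nn as nn

# Illustrative stand-in for the generator; not the repository's class.
net = nn.Linear(4, 4)

# .torch and .pt files are both produced by torch.save on a state_dict.
torch.save(net.state_dict(), "params.Generator.latest.torch")

# map_location="cpu" lets GPU-trained weights load on a CPU-only machine.
state = torch.load("params.Generator.latest.torch", map_location="cpu")
net.load_state_dict(state)
```

If loading fails with a key mismatch, print state.keys() to check whether the file stores a raw state_dict or a wrapper dictionary.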

Audio samples

Some generated audio samples can be found in:

samples/
