F0-AUTOVC: F0-Consistent Many-to-Many Non-Parallel Voice Conversion via Conditional Autoencoder

This repository provides a PyTorch implementation of the paper F0-AUTOVC.

Based on

Dependencies

  • Python 3.7
  • PyTorch 1.6.0
  • TensorFlow
  • NumPy
  • librosa
  • tqdm

Usage

  1. Prepare the dataset
    We used the VCTK corpus, as in the original paper, but you can use your own dataset.

  2. Prepare the speaker-to-gender file as shown in nikl_spk.txt and run make_spk2gen.py (a parsing sketch follows this list)

    • Format
      speaker1 gender1
      speaker2 gender2

    • Example:
      p225 W
      p226 M
      p301 W
      p302 W
      ...

  3. Preprocess the data using preprocess.py (a rough sketch of the F0 conditioning idea also follows this list)

  4. Run task_launcher.py
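
For step 2, here is a minimal sketch of what the speaker-to-gender step could look like, assuming the two-column format shown above and that the mapping is saved as a pickled Python dict. The `load_spk2gen` function name and the `spk2gen.pkl` output path are illustrative, not the repository's actual names.

```python
import pickle

def load_spk2gen(txt_path):
    """Parse a speaker-to-gender text file into a dict like {'p225': 'W', 'p226': 'M'}."""
    spk2gen = {}
    with open(txt_path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:          # skip blank or malformed lines
                speaker, gender = parts
                spk2gen[speaker] = gender
    return spk2gen

if __name__ == "__main__":
    spk2gen = load_spk2gen("nikl_spk.txt")   # the example file shipped with the repo
    with open("spk2gen.pkl", "wb") as out:   # output name is an assumption
        pickle.dump(spk2gen, out)
```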
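For step 3, the paper's core idea is to condition the decoder on speaker-normalized, quantized log-F0. The snippet below is only a rough illustration of that conditioning (z-normalizing voiced log-F0, then one-hot quantizing it with an extra unvoiced bin); it is not the repository's preprocess.py, and the bin count and scaling constants are assumptions.

```python
import numpy as np

def quantize_f0(f0, num_bins=256):
    """Map an F0 contour (Hz, 0 for unvoiced) to one-hot bins of shape (T, num_bins + 1)."""
    f0 = np.asarray(f0, dtype=np.float64)
    voiced = f0 > 0
    log_f0 = np.zeros_like(f0)
    log_f0[voiced] = np.log(f0[voiced])

    norm = np.zeros_like(f0)
    if voiced.any():
        mean = log_f0[voiced].mean()
        std = log_f0[voiced].std() + 1e-8
        # squeeze roughly +/-2 std around the mean into [0, 1] (assumed scaling)
        norm[voiced] = np.clip((log_f0[voiced] - mean) / (4.0 * std) + 0.5, 0.0, 1.0)

    one_hot = np.zeros((len(f0), num_bins + 1), dtype=np.float32)
    idx = np.rint(norm * (num_bins - 1)).astype(int)                    # 0..num_bins-1 for voiced frames
    one_hot[np.arange(len(f0)), np.where(voiced, idx, num_bins)] = 1.0  # last bin marks unvoiced
    return one_hot

# Example: a short contour with an unvoiced frame in the middle
print(quantize_f0([220.0, 0.0, 233.1]).argmax(axis=1))
```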
