Become Yukarin: Convert your voice to your favorite voice

Become Yukarin is a repository for voice conversion with a deep learning model. By training on a large amount of paired original and favorite voice data, the model learns to convert the original voice into the favorite voice.

Japanese README

Supported environment

  • Linux OS
  • Python 3.6

Preparation

# install required libraries
pip install -r requirements.txt
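
To keep the dependencies isolated, you can install them into a virtual environment first. A minimal sketch (the venv directory name is arbitrary):

# optional: create and activate a Python 3.6 virtual environment
python3.6 -m venv venv
source venv/bin/activate
pip install -r requirements.txt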

Training

To run a Python script for training, you need to set the environment variable PYTHONPATH so that the become_yukarin library can be found. For example, you can execute scripts/extract_acoustic_feature.py with the following command:

PYTHONPATH=`pwd` python scripts/extract_acoustic_feature.py ---
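
The --- above stands for the script's arguments, which are omitted here. Assuming the script exposes the usual --help flag, you can list the actual options like this:

# print the script's available options
PYTHONPATH=`pwd` python scripts/extract_acoustic_feature.py --help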

First Stage Model

  • Prepare voice data
    • Put input/target voice data in two directories (with the same file names)
  • Create acoustic feature
    • scripts/extract_acoustic_feature.py
  • Train
    • train.py
  • Test
    • scripts/voice_conversion_test.py (a command sketch of these steps follows this list)
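
A hypothetical command sketch of the first stage, end to end. The --- placeholders stand for each script's arguments, which are not shown here; check each script's --help for the real interface.

# 1. extract acoustic features from the input/target voice directories
PYTHONPATH=`pwd` python scripts/extract_acoustic_feature.py ---

# 2. train the first stage model
PYTHONPATH=`pwd` python train.py ---

# 3. test the trained model by converting a sample voice
PYTHONPATH=`pwd` python scripts/voice_conversion_test.py ---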

Second Stage Model

  • Prepare voice data
    • Put input/target voice data in two directories
  • Create acoustic feature
    • scripts/extract_spectrogram_pair.py
  • Train
    • train_sr.py
  • Test
    • scripts/super_resolution_test.py
  • Convert other voice data
    • Use SuperResolution class and AcousticConverter class
    • sample code (a hypothetical Python sketch follows this list)
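
To convert other voice data programmatically, the two models can be chained: the first stage converts the acoustic features, and the second stage refines the resulting spectrogram. The sketch below is hypothetical; the import path, constructor arguments, and call style are assumptions rather than the verified API, so refer to the sample code above for actual usage.

# hypothetical sketch: all names below are assumptions, not the verified API
from pathlib import Path
from become_yukarin import AcousticConverter, SuperResolution  # assumed exports

# load the trained first and second stage models (paths are placeholders)
acoustic_converter = AcousticConverter(Path('first_stage/config.json'),
                                       Path('first_stage/predictor.npz'))
super_resolution = SuperResolution(Path('second_stage/config.json'),
                                   Path('second_stage/predictor.npz'))

# first stage: convert the source voice's acoustic features toward the target
converted = acoustic_converter('input.wav')    # assumed call style
# second stage: refine the converted spectrogram into the final voice
output_wave = super_resolution(converted)      # assumed call style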


License

MIT License
