Chainer implementation of DeepMind's WaveNet

WaveNet: A Generative Model for Raw Audio

This is a Chainer implementation of WaveNet.

This is the code implemented in this article.

It is not finished yet, but it can already generate audio.

Todo:

  • Generating audio
  • Local conditioning
  • Global conditioning
  • Training on CSTR VCTK Corpus

Training the network

Requirements

  • Chainer 2
  • scipy.io.wavfile

Preprocessing

Downsample your .wav files to 16 kHz or 8 kHz to speed up convergence.

Create data directory

Add all .wav files to /train_audio/wav
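As a minimal sketch of this preprocessing step (assuming the original recordings are standard PCM .wav files sitting in a hypothetical raw_wav/ directory), the resampling can be done with scipy:

```python
import os

from scipy.io import wavfile
from scipy.signal import resample

SRC_DIR = "raw_wav"          # hypothetical folder with the original recordings
DST_DIR = "train_audio/wav"  # folder the training script reads from
TARGET_RATE = 16000          # 16 kHz; use 8000 for 8 kHz

os.makedirs(DST_DIR, exist_ok=True)
for name in os.listdir(SRC_DIR):
    if not name.lower().endswith(".wav"):
        continue
    rate, audio = wavfile.read(os.path.join(SRC_DIR, name))
    if audio.ndim > 1:                      # mix stereo down to mono
        audio = audio.mean(axis=1)
    n_out = int(round(len(audio) * TARGET_RATE / rate))
    audio = resample(audio, n_out).astype("int16")
    wavfile.write(os.path.join(DST_DIR, name), TARGET_RATE, audio)
```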

Hyperparameters

You can edit the hyperparameters of the network in model.py before running train.py, or edit /params/params.json after training starts.
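For example, a quick way to inspect or tweak that file from Python (the key name below is purely hypothetical; use whatever keys train.py actually writes):

```python
import json

with open("params/params.json") as f:
    params = json.load(f)

print(params)                      # inspect the current hyperparameters

# params["hypothetical_key"] = 42  # edit a value (key name is made up)

with open("params/params.json", "w") as f:
    json.dump(params, f, indent=2)
```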

Training

run train.py
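For example, assuming train.py needs no required command-line arguments:

```
python train.py
```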

Generating audio

run generate.py

Passing --use_faster_wavenet generates audio faster than the original WaveNet generation procedure.
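For example (assuming generate.py needs no other required arguments):

```
python generate.py --use_faster_wavenet
```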

Listen to a sample generated by WaveNet

🎶 music

Implementation

(Three figures illustrating the implementation.)