
Official implementation of "Avocodo: Generative Adversarial Network for Artifact-Free Vocoder" (AAAI2023)


🥑 Avocodo: Generative Adversarial Network for Artifact-Free Vocoder

Accepted for publication in the 37th AAAI Conference on Artificial Intelligence (AAAI 2023).

arXiv: https://arxiv.org/abs/2211.04610 · Sample page: Avocodo · NC SpeechAI: publications

In our paper, we proposed Avocodo. We provide our official implementation as open source in this repository.

Abstract: Neural vocoders based on the generative adversarial neural network (GAN) have been widely used due to their fast inference speed and lightweight networks while generating high-quality speech waveforms. Since the perceptually important speech components are primarily concentrated in the low-frequency bands, most GAN-based vocoders perform multi-scale analysis that evaluates downsampled speech waveforms. This multi-scale analysis helps the generator improve speech intelligibility. However, in preliminary experiments, we discovered that the multi-scale analysis which focuses on the low-frequency bands causes unintended artifacts, e.g., aliasing and imaging artifacts, which degrade the synthesized speech waveform quality. Therefore, in this paper, we investigate the relationship between these artifacts and GAN-based vocoders and propose a GAN-based vocoder, called Avocodo, that allows the synthesis of high-fidelity speech with reduced artifacts. We introduce two kinds of discriminators to evaluate speech waveforms from various perspectives: a collaborative multi-band discriminator and a sub-band discriminator. We also utilize a pseudo quadrature mirror filter bank to obtain downsampled multi-band speech waveforms while avoiding aliasing. According to experimental results, Avocodo outperforms baseline GAN-based vocoders, both objectively and subjectively, while reproducing speech with fewer artifacts.
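The abstract argues that naive downsampling for multi-scale analysis introduces aliasing, which is why Avocodo filters with a pseudo-QMF bank before decimating. The NumPy sketch below illustrates that core point only; it is not the paper's PQMF implementation, and the tone frequency, filter length, and cutoff are illustrative assumptions.

```python
import numpy as np

fs = 16000
t = np.arange(fs) / fs
# A 7 kHz tone: above the 4 kHz Nyquist of the factor-2 downsampled signal.
x = np.sin(2 * np.pi * 7000 * t)

# Naive decimation (keep every 2nd sample): the 7 kHz tone aliases to 1 kHz.
naive = x[::2]

# Filter-then-decimate: windowed-sinc low-pass with cutoff at the new Nyquist.
n = np.arange(-64, 65)
h = 0.5 * np.sinc(0.5 * n) * np.hamming(len(n))  # ~4 kHz cutoff at fs=16k
filtered = np.convolve(x, h, mode="same")[::2]

def tone_energy(sig, freq, rate):
    """Normalized DFT magnitude at the bin closest to `freq`."""
    spec = np.abs(np.fft.rfft(sig)) / len(sig)
    return spec[int(round(freq * len(sig) / rate))]

# The naive path shows a strong aliased component at 1 kHz;
# low-pass filtering before decimation suppresses it.
print(tone_energy(naive, 1000, 8000), tone_energy(filtered, 1000, 8000))
```

The same reasoning motivates the PQMF analysis in the paper: each sub-band is band-limited before decimation, so the discriminators see alias-free downsampled waveforms.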

Pre-requisites

  1. Install pyenv.
  2. Clone this repository.
  3. Set up a virtual environment and install the Python requirements. Please refer to pyproject.toml.

pyenv install 3.8.11
pyenv virtualenv 3.8.11 avocodo
pyenv local avocodo

pip install wheel
pip install poetry

poetry install

  4. Download and extract the LJ Speech dataset.
  • Move all wav files to LJSpeech-1.1/wavs.
  • Split the dataset into a training set and a validation set.

cat LJSpeech-1.1/metadata.csv | tail -n 13000 > training.txt
cat LJSpeech-1.1/metadata.csv | head -n 100 > validation.txt
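Since the LJ Speech metadata file has 13,100 lines, the head/tail split above yields disjoint sets. A quick Python sketch of the same logic (using a synthetic stand-in for metadata.csv, as the real file is not assumed present) shows why:

```python
# Synthetic stand-in for LJSpeech-1.1/metadata.csv: 13,100 entries,
# the size of the real metadata file.
lines = [f"LJ{i:05d}|text" for i in range(13100)]

validation = lines[:100]    # equivalent of `head -n 100`
training = lines[-13000:]   # equivalent of `tail -n 13000`

# First 100 lines vs. last 13,000 lines of 13,100 total: no overlap.
assert not set(validation) & set(training)
print(len(training), len(validation))  # 13000 100
```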

Training

python avocodo/train.py --config avocodo/configs/avocodo_v1.json

Inference

python avocodo/inference.py --version ${version} --checkpoint_file_id ${checkpoint_file_id}

Reference

We referred to the repositories below while building this project.

HiFi-GAN

Parallel-WaveGAN
