We provide our pretrained model and notebook for inference in this repository.
Abstract: The goal of voice conversion (VC) is to convert input voice to match the target speaker's voice while keeping text and prosody intact. VC is usually used in entertainment and speaking-aid systems, and is also applied to speech data generation and augmentation. The development of any-to-any VC systems, which are capable of generating voices unseen during model training, is of particular interest to both researchers and industry. Despite recent progress, any-to-any conversion quality is still inferior to natural speech. In this work, we propose a new any-to-any voice conversion pipeline. Our approach uses automatic speech recognition (ASR) features, pitch tracking, and a state-of-the-art waveform prediction model. According to multiple subjective and objective evaluations, our method outperforms modern baselines in terms of voice quality, similarity, and consistency.
Paper demo: samples
Pre-trained model: Google Drive. Put the model in the root of the repository.
- Using Docker (recommended):
  Build the Docker image:
  docker build . -t hifi_vc
  Run Docker with exactly one GPU:
  docker run --gpus '"device=0"' -it --net=host hifi_vc
- Using pip (requires torch>=1.13):
  pip install -r requirements.txt
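Since the requirements assume a recent PyTorch, a quick check in plain Python (nothing repository-specific) can confirm that the installed build satisfies the torch>=1.13 constraint:

```python
import torch

# The pip setup above assumes torch>=1.13; fail early if an older build is installed.
major, minor = (int(part) for part in torch.__version__.split(".")[:2])
assert (major, minor) >= (1, 13), f"torch>=1.13 required, found {torch.__version__}"
print(f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")
```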
- Download the model.
- Build the Docker image.
- Run inference with inference.ipynb; a rough sketch of the steps around the notebook is shown below.
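The exact model construction and conversion calls live in inference.ipynb; the snippet below is only a minimal sketch of the surrounding steps, assuming the downloaded checkpoint sits in the repository root (the checkpoint filename and the commented conversion call are assumptions, not the repository's documented API):

```python
# Minimal sketch around inference.ipynb; see the notebook for the actual model API.
import torch
import soundfile as sf

# Assumed checkpoint filename in the repository root; adjust to the downloaded file.
checkpoint = torch.load("model.pth", map_location="cpu")

source_wav, sr = sf.read("source.wav")  # utterance whose content and prosody are kept
target_wav, _ = sf.read("target.wav")   # utterance that defines the target voice

# The converted waveform is produced by the model built in the notebook, e.g.:
# converted = model.convert(source_wav, target_wav)  # hypothetical call
# sf.write("converted.wav", converted, sr)
```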
The f0_utils.py module is modified from PPG-VC.
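For context, this kind of pitch tracking is commonly implemented with pyworld; the snippet below is a generic F0 extraction sketch rather than a copy of f0_utils.py (the frame period and file name are illustrative assumptions):

```python
# Generic F0 (pitch) extraction sketch with pyworld; illustrative, not f0_utils.py itself.
import numpy as np
import pyworld as pw
import soundfile as sf

wav, sr = sf.read("source.wav")
wav = np.ascontiguousarray(wav, dtype=np.float64)  # pyworld expects float64 samples

# Coarse F0 estimate with DIO, refined frame-by-frame with StoneMask.
f0, timeaxis = pw.dio(wav, sr, frame_period=10.0)  # frame period in milliseconds
f0 = pw.stonemask(wav, f0, timeaxis, sr)

voiced = f0 > 0  # unvoiced frames come back as 0 Hz
print(f"{int(voiced.sum())} voiced frames, mean F0 = {f0[voiced].mean():.1f} Hz")
```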
Feel free to use our library in your commercial and private applications.
hifi_vc is covered by the Apache 2.0 license. Read more about this license here.