Implementation status and planned TODOs #1
Comments
Hi, I am one of your followers. I am glad and excited to see this great project growing, and I want to do something for it. I am familiar with Chinese and can help with a Chinese frontend if needed. About vocoder. About recipes. Thank you, I am at your service!
Hi, @Yablon, many thanks for your comments! Your help with Chinese frontend support is definitely welcome! Let me first make a Japanese version of the entire system, and then let's discuss how to extend it to other languages. About vocoder: About recipe: FYI, I don't want to add a Kaldi requirement to the repo. I guess it would cause installation issues for users...
@r9y9 I agree with you and hope to see the entire system.
Hi @r9y9, really excited to see where this project goes! For training acoustic models that have a WORLD vocoder target, perhaps it's a good idea to take a look at WGANSing? In addition to the actual model used, I think their preprocessing gives some insight into how to predict WORLD vocoder features efficiently.
Hi @apeguero1, thanks for sharing your thoughts! I will look into their paper and code to find something useful. Seems like they used https://smcnus.comp.nus.edu.sg/nus-48e-sung-and-spoken-lyrics-corpus/ for singing voice synthesis, but unfortunately there are no MusicXML or MIDI files available, which makes the task quite difficult. I guess the dataset was designed for speech-to-singing voice conversion.
I can help with the DSP part too. When you publish your data processing pipeline, I can help with building an LPCNet vocoder for the specific spectrogram, pitch, and other features.
That would be great! I am now working on refactoring the data processing code for the Kiritan database. After that, I will make a simple time-lag model and a duration model (those described in https://ieeexplore.ieee.org/document/8659797). Once those are complete, we can start experimenting with advanced ideas including neural vocoder integration, explicit vibrato modeling, end-to-end approaches, GANs, Transformers, etc. I will keep posting progress here. I hope to finish building the whole system in one or two weeks.
Great!
A new paper on Chinese singing voice synthesis has come up on arXiv! It was submitted to INTERSPEECH 2020. Looks very interesting: "ByteSing: A Chinese Singing Voice Synthesis System Using Duration Allocated Encoder-Decoder Acoustic Models and WaveRNN Vocoders"
Yes, it is. The Tacotron(2) structure can be used everywhere and performs well. Does your implementation perform as well? I think the Tacotron structure may need more data, while the DNN-based approach may need less data and perform more stably. What's your opinion?
In TTS, we typically need more than 10 hours of data to build attention-based seq2seq models. However, in contrast to TTS, SVS is highly constrained by the musical score (e.g. pitch, note duration, tempo, etc.), so I suppose we can build Tacotron-like models even on a small dataset. For example, see https://arxiv.org/abs/1910.09989. There are pros and cons to both traditional parametric approaches and end-to-end approaches. I want to try the traditional one first, since it is simple and enables us to perform fast iterations of experiments, which I think is important at the early prototyping stage. As for the Tacotron implementation, I implemented it before (https://github.com/r9y9/tacotron_pytorch) but it is now outdated. I would use https://github.com/espnet/espnet for Tacotron 2 or Transformer implementations. The toolkit is a little bit complicated, but it is well tested and its components are worth reusing.
I pushed the data preparation scripts for the Kiritan database: https://github.com/r9y9/kiritan_singing. I suppose I will finish making the entire system this weekend. Please wait a few days!
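For readers following along, here is a minimal sketch of inspecting the HTS-style full-context labels that such preparation scripts produce, using the nnmnkwii library. The label file path is a hypothetical example; only hts.load and the HTSLabelFile attributes reflect nnmnkwii's actual API.

```python
# A minimal sketch, assuming nnmnkwii is installed (pip install nnmnkwii).
# The label path below is hypothetical.
from nnmnkwii.io import hts

labels = hts.load("data/label_phone_score/kiritan_01.lab")

# HTS labels store (start_time, end_time, full-context string) per phoneme;
# times are in 100 ns units, following HTS conventions.
for start, end, context in zip(
    labels.start_times[:5], labels.end_times[:5], labels.contexts[:5]
):
    duration_sec = (end - start) * 1e-7
    print(f"{duration_sec:.3f}s  {context[:60]}...")
```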
That is exciting!
That's awesome, can't wait to test it! :D
I have implemented the time-lag model and the duration model as well as the acoustic model, so now we can generate a singing voice from a MusicXML file. A generated sample can be found at https://soundcloud.com/r9y9/kiritan-01-test-svs-7?in=r9y9/sets/dnn-based-singing-voice. The quality is not good, but not bad either. I pushed lots of code, including feature extraction, normalization, training, and inference. The inference script is too dirty at the moment and needs to be refactored; I plan to do that tomorrow. Also, I pushed a recipe so that anyone can (ideally) reproduce my experiments: https://github.com/r9y9/dnnsvs/tree/master/egs/kiritan_singing Note that this is still WIP and may be subject to change.
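As a rough illustration of the staged pipeline described above (time-lag, then duration, then acoustic model, then WORLD synthesis), here is a sketch. The three model objects and the two feature helpers are hypothetical placeholders; only the pyworld call reflects a real API.

```python
# A minimal sketch of staged inference: the model objects and the helpers
# expand_to_frames / split_streams are hypothetical stand-ins.
import numpy as np
import pyworld

def synthesize(score_features, timelag_model, duration_model, acoustic_model, fs=48000):
    # 1. Predict the time-lag (offset between note onset and phoneme onset).
    timelag = timelag_model.predict(score_features)
    # 2. Predict phoneme durations, constrained by note lengths in the score.
    durations = duration_model.predict(score_features, timelag)
    # 3. Expand context features to frame level and predict acoustic features.
    frame_features = expand_to_frames(score_features, durations)  # hypothetical helper
    acoustic = acoustic_model.predict(frame_features)
    # 4. Split predicted features into WORLD parameter streams (shapes assumed).
    f0, spectrogram, aperiodicity = split_streams(acoustic)  # hypothetical helper
    # 5. Synthesize the waveform with the WORLD vocoder (5 ms frame shift).
    wav = pyworld.synthesize(
        f0.astype(np.float64),
        spectrogram.astype(np.float64),
        aperiodicity.astype(np.float64),
        fs,
        frame_period=5.0,
    )
    return wav
```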
I think the recipe is helpful for researchers but not very friendly for those who are not familiar with the internals of singing voice synthesis systems. I plan to make a Jupyter notebook to demonstrate the usage and how it works.
I realized that SVS systems are more complicated than I initially thought. There are lots of things we need to do!
Hi, just noticed the project. It's awesome! There aren't any open-source toolkits for singing voices out there. I'm not sure, but it seems there are systems that train directly on the singing and the alignment (e.g. https://github.com/seaniezhao/torch_npss). A possible direction might be to pre-train on raw data (maybe with some alignment) and then refine on data with MusicXML (after all, strictly aligned data is much harder to obtain). BTW, do you have any intention of making the project a more general framework, not confined to synthesis only? ESPnet, for example, also has tasks including ASR, speech translation, and speech enhancement.
Hi @ftshijt. Thanks :) The paper "A Neural Parametric Singing Synthesizer" is very interesting. They propose a multi-stream autoregressive model for vocoder parameters; that's what I planned to do next! I was inspired by the paper "Autoregressive Neural F0 Model for Statistical Parametric Speech Synthesis" https://ieeexplore.ieee.org/abstract/document/8341752/. As for alignment, yes, it is sometimes hard to obtain. The Japanese Kiritan database provides annotated alignments, so I am using them (with small corrections). If there are no manual alignments, we can take a learning-based approach; for example, similar to what the authors of the above paper have done, we can use an HMM to obtain alignments in an unsupervised manner. As for the project's direction, I want to focus on singing voice synthesis. ESPnet is an excellent tool for many speech tasks (I am one of the authors of the ESPnet-TTS paper). However, it comes with complexity; some of my friends in the TTS community told me it was difficult to use. To keep the codebase simple, hackable, and extensible, I want to focus on SVS. That said, I want to make a generic tool that supports a broad range of models, from parametric to end-to-end.
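For illustration, a minimal PyTorch sketch of the autoregressive F0 idea from the paper above: each frame's F0 prediction is conditioned on the previous frame's F0, with teacher forcing at training time and feedback of the model's own predictions at generation time. All dimensions here are arbitrary assumptions, not the paper's configuration.

```python
# A sketch, not the paper's exact architecture: context_dim and hidden_dim
# are assumed values.
import torch
import torch.nn as nn

class AutoregressiveF0(nn.Module):
    def __init__(self, context_dim=300, hidden_dim=256):
        super().__init__()
        # Input: linguistic/score context features plus the previous F0 value.
        self.rnn = nn.LSTM(context_dim + 1, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, 1)

    def forward(self, contexts, f0_targets):
        # Teacher forcing: shift targets right so frame t sees F0 at t-1.
        prev_f0 = torch.cat(
            [torch.zeros_like(f0_targets[:, :1]), f0_targets[:, :-1]], dim=1
        )
        out, _ = self.rnn(torch.cat([contexts, prev_f0], dim=-1))
        return self.proj(out)

    @torch.no_grad()
    def generate(self, contexts):
        # Free-running generation: feed back the model's own predictions.
        B, T, _ = contexts.shape
        f0 = contexts.new_zeros(B, 1, 1)
        state, outputs = None, []
        for t in range(T):
            inp = torch.cat([contexts[:, t : t + 1], f0], dim=-1)
            out, state = self.rnn(inp, state)
            f0 = self.proj(out)
            outputs.append(f0)
        return torch.cat(outputs, dim=1)
```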
Not planned yet, but the speech-to-singing voice conversion task may fit in ESPnet's unified approach.
Whoa! Seems like OpenAI just released the GPT-2 of music! I wonder how hard it would be to reproduce this without a million songs or hundreds of GPUs. And if it works for songs with instrumentation, then maybe it would be easier to train on a purely vocal dataset? The paper doesn't mention much about finetuning, but perhaps there are some transfer learning opportunities here?
I was so surprised that OpenAI's model is able to generate singing voices and instrumentals simultaneously. It would be easier to train on a vocal dataset, and transfer learning is definitely worth trying.
As a minor issue, let me rename the repo from dnnsvs to nnsvs.
I have created a Jupyter notebook to demonstrate how we can use pre-trained models to generate singing voice samples: "Neural network-based singing voice synthesis demo using kiritan_singing database (Japanese)". Here it is, if any of you are interested. If you just want to see the demo, check the pre-rendered nbviewer page. If you want an interactive demo, look at the Google Colab one.
I pushed all the code for feature extraction, training, and inference as well. The models used in the above demo can be reproduced by running the following recipe: https://github.com/r9y9/nnsvs/tree/master/egs/kiritan_singing/00-svs-world
The notebook is great! The step-by-step approach makes it easier to follow (: The voice sounds good so far!
I made a new recipe for nit-song070, a singing voice dataset provided by the HTS working group. The dataset contains 31 songs recorded by a female Japanese singer. The data size is not huge, but it is good for testing.
I have added another recipe for the jsut-song dataset.
Good news: the author of NSF published a PyTorch implementation of NSF: https://github.com/nii-yamagishilab/project-NN-Pytorch-scripts. It should be easy to integrate it with our codebase.
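To give a flavor of what NSF-style integration involves (this is not the API of the repository above), here is a sketch of the core NSF source module idea: a sine-based excitation built by integrating instantaneous frequency from the F0 contour, which the neural filter stages then shape into a waveform. The amplitude and noise constants are assumptions for illustration.

```python
# A sketch of an NSF-style sine source; not code from the linked repository.
import numpy as np

def sine_source(f0_frames, fs=48000, frame_period_ms=5.0, noise_std=0.003):
    # Upsample frame-level F0 to sample level (nearest-neighbor for brevity).
    samples_per_frame = int(fs * frame_period_ms / 1000)
    f0 = np.repeat(f0_frames, samples_per_frame)
    # Integrate instantaneous frequency to get phase.
    phase = 2 * np.pi * np.cumsum(f0 / fs)
    # Voiced frames get a sine, unvoiced frames get nothing but noise.
    voiced = f0 > 0
    excitation = np.where(voiced, 0.1 * np.sin(phase), 0.0)
    excitation += noise_std * np.random.randn(len(f0))
    return excitation
```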
I think the Transformer is not a good choice for SVS. Songs' mel spectrograms are always much longer than text-to-speech audio, so it will run out of memory (such as the decoder mask and ...).
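A back-of-the-envelope calculation supports this point: decoder self-attention materializes a T x T matrix per head per layer, so memory grows quadratically with the number of mel frames. The hyperparameters below are assumptions for illustration.

```python
# Rough attention-matrix memory for assumed Transformer hyperparameters.
def attention_matrix_gib(duration_sec, frame_shift_ms=12.5, heads=8, layers=6, bytes_per_el=4):
    frames = int(duration_sec * 1000 / frame_shift_ms)
    return frames**2 * heads * layers * bytes_per_el / 2**30

print(f"10 s utterance: {attention_matrix_gib(10):.2f} GiB")   # typical TTS clip
print(f"4 min song:     {attention_matrix_gib(240):.2f} GiB")  # typical song length
```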
This paper looks interesting: https://speechresearch.github.io/hifisinger/
Sped up prepare_features by switching to ProcessPoolExecutor.
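A minimal sketch of that kind of change, parallelizing per-utterance feature extraction with ProcessPoolExecutor. The extract_features body and the file list are hypothetical stand-ins for the actual script's internals; only the concurrent.futures API is real.

```python
# Parallel feature extraction sketch using the standard library.
from concurrent.futures import ProcessPoolExecutor

def extract_features(wav_path):
    ...  # hypothetical: load audio, compute features, save to disk
    return wav_path

if __name__ == "__main__":
    wav_files = ["data/wav/kiritan_01.wav", "data/wav/kiritan_02.wav"]
    # Each file is processed in a separate worker process.
    with ProcessPoolExecutor(max_workers=4) as executor:
        for done in executor.map(extract_features, wav_files):
            print(f"processed {done}")
```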
https://soundcloud.com/r9y9/sets/nnsvs-and-neutrino-comparison I think the latest nnsvs is finally comparable with NEUTRINO (that was the goal of this project). Still, there's plenty of room for improvement in acoustic modeling, though.
Wow, this sounds great already!
I moved this repository to https://github.com/nnsvs/nnsvs since I plan to add some related repositories in the future. Nothing is changed in terms of functionality.
I guess I can finally close this issue once #167 is ready.
I think I have finally achieved NEUTRINO-level quality. Closing this issue, finally.
This is an umbrella issue to track progress and discuss priority items. Comments and requests are always welcome.
Milestones

Fundamental components
- Quantized F0 modeling
- HMM (or similar)-based unsupervised phone-level alignment. https://github.com/DYVAUX/SHIRO-Models-Japanese
- Demo

Dataset

Frontend
MusicXML -> context features
- Chinese language support: Recipe for opencpop database #105
- Pure python implementation for musicxml parsing
- We can use https://github.com/oatsu-gh/utaupy for converting UST to HTS labels
- Frontend implementation for MIDI files
- Frontend can be done by external tools

DSP

Acoustic model
Context features -> acoustic features

Timing model & duration model

Vocoder
Acoustic features -> raw waveform
- LPCNet

Command-line tools

Data loader
- Phrase-based mini-batch creation

Design TODOs
- Think and write software design

Software quality

Recipes

Misc

References