
I have a few questions about the paper. #2

Closed
SeongYeonPark opened this issue Nov 9, 2022 · 3 comments

Comments

SeongYeonPark commented Nov 9, 2022

Hi, thank you for such creative work on Voice Conversion.

I have 3 questions about your paper, FreeVC: Towards High-Quality Text-Free One-Shot Voice Conversion.
It would help a lot if you would answer them.

  1. Did you freeze the WavLM module during training? (Also, did you use pre-trained weights for this module?)
  2. Can I get some architectural information about the bottleneck extractor? Is it just two fully connected layers that map the x_ssl features into d and then 2d dimensions?
  3. During training, applying the SR augmentation horizontally would misalign the source mel and the target wav, so I assume that you only used the SR augmentation vertically during training. Is this true?

++ Edit
I have one further question.
Could you possibly share the config file for the HiFi-GAN that vocodes the augmented x_mel into the WavLM inputs y'?
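For concreteness, the "vertical" SR augmentation asked about above (resizing the mel-spectrogram along the frequency axis while leaving the time axis untouched) could be sketched as below. The interpolation and padding strategy here are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

def vertical_sr_augment(mel, ratio):
    # Hypothetical sketch: stretch/squeeze the frequency axis by `ratio`,
    # then crop or pad back to the original number of mel bins.
    # Interpolation and padding choices are assumptions, not FreeVC's code.
    n_mels, n_frames = mel.shape
    target = max(1, int(round(n_mels * ratio)))
    grid = np.linspace(0, n_mels - 1, target)  # resampled bin positions
    stretched = np.stack(
        [np.interp(grid, np.arange(n_mels), mel[:, t]) for t in range(n_frames)],
        axis=1,
    )
    if target >= n_mels:                       # ratio >= 1: crop extra top bins
        return stretched[:n_mels]
    pad = np.repeat(stretched[-1:], n_mels - target, axis=0)  # ratio < 1: pad top
    return np.concatenate([stretched, pad], axis=0)

mel = np.random.rand(80, 100)                  # (mel bins, frames)
out = vertical_sr_augment(mel, 1.2)
```

Note that the time axis (number of frames) is untouched, which is what keeps the source mel and target wav aligned.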

OlaWod (Owner) commented Nov 9, 2022

Thank you for your interest in our work.

  1. We freeze the WavLM module and use the pre-trained weights.
  2. The bottleneck extractor consists of a linear projection layer that projects the WavLM feature into a $d$-dim hidden representation, 16 layers of non-causal WaveNet residual blocks, and a linear projection layer that projects the $d$-dim hidden representation into a $2d$-dim hidden representation, which is later split into $d$-dim $\mu_{\theta}$ and $d$-dim $\sigma_{\theta}$.
  3. Yes.
  4. We use the same config as the official HiFi-GAN for producing $y'$.
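The bottleneck extractor described in answer 2 could be sketched roughly as follows. This is a simplified stand-in, not the actual FreeVC code: the dimensions (ssl_dim=1024, d=192), kernel size, and the plain residual convolutions used in place of full gated WaveNet blocks are all assumptions.

```python
import torch
import torch.nn as nn

class BottleneckExtractor(nn.Module):
    """Simplified sketch: linear projection to d, a stack of non-causal
    residual conv layers, then a linear projection to 2d split into
    mu and sigma. Dims and layer details are assumptions."""
    def __init__(self, ssl_dim=1024, d=192, n_layers=16, kernel_size=5):
        super().__init__()
        # 1x1 conv acts per-frame like a linear projection: ssl_dim -> d
        self.pre = nn.Conv1d(ssl_dim, d, 1)
        # stand-ins for the 16 non-causal WaveNet residual blocks
        self.blocks = nn.ModuleList([
            nn.Conv1d(d, d, kernel_size, padding=kernel_size // 2)
            for _ in range(n_layers)
        ])
        # linear projection: d -> 2d, later split into mu and sigma
        self.proj = nn.Conv1d(d, 2 * d, 1)

    def forward(self, x_ssl):              # x_ssl: (batch, ssl_dim, frames)
        h = self.pre(x_ssl)
        for block in self.blocks:
            h = h + torch.relu(block(h))   # residual connection
        stats = self.proj(h)
        mu, sigma = stats.chunk(2, dim=1)  # split 2d channels into two d-dim halves
        return mu, sigma

x_ssl = torch.randn(1, 1024, 50)           # dummy stand-in for WavLM features
mu, sigma = BottleneckExtractor()(x_ssl)   # each: (1, 192, 50)
```

The centered padding (`kernel_size // 2`) is what makes each layer non-causal: every output frame sees both past and future frames.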

SeongYeonPark (Author) commented

Thank you very much for your clear answer.

So it can be said that

  1. The mel-spectrograms that are input to the SR-augmentation part are obtained with 22050Hz waves, using hop size 256, window size 1024 (following the config v1 of the official HiFi-GAN repository)
  2. and the linear-spectrograms that are input to the posterior encoder are obtained with 16000Hz waves, using hop size 320, window size 1280 (as said in your paper, section 3.1)
  3. Perhaps the waveform reconstructed after SR-augmentation and HiFi-GAN config v1 model (which will have a sampling rate 22050 Hz) is resampled to 16000Hz before inputting into WavLM?

OlaWod (Owner) commented Nov 9, 2022

> Thank you very much for your clear answer.
>
> So it can be said that
>
>   1. The mel-spectrograms that are input to the SR-augmentation part are obtained with 22050Hz waves, using hop size 256, window size 1024 (following the config v1 of the official HiFi-GAN repository)
>   2. and the linear-spectrograms that are input to the posterior encoder are obtained with 16000Hz waves, using hop size 320, window size 1280 (as said in your paper, section 3.1)
>   3. Perhaps the waveform reconstructed after SR-augmentation and HiFi-GAN config v1 model (which will have a sampling rate 22050 Hz) is resampled to 16000Hz before inputting into WavLM?

Yes.
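The resampling step confirmed above (HiFi-GAN config v1 output at 22050 Hz, resampled to 16000 Hz before WavLM) could look like this; SciPy's polyphase resampler is used here as a stand-in for whatever resampler the authors actually used:

```python
import numpy as np
from scipy.signal import resample_poly

# Stand-in for one second of HiFi-GAN (config v1) output at 22050 Hz.
wav_22k = np.random.randn(22050).astype(np.float32)

# 22050 -> 16000 Hz: gcd(22050, 16000) = 50, so upsample by 320, downsample by 441.
wav_16k = resample_poly(wav_22k, up=320, down=441)

# One second of 22050 Hz audio becomes 16000 samples, ready to feed into WavLM.
```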
