
Relation / comparison to encodec #1

Closed
vadimkantorov opened this issue May 15, 2023 · 5 comments
Labels
question Further information is requested

Comments

@vadimkantorov

Hi! Thanks for the open-source release, along with the training code!

I noticed that the AudioDec paper does not cite High Fidelity Neural Audio Compression (https://github.com/facebookresearch/encodec). I wonder if any comparisons with pretrained encodec were conducted. Or please correct me if I missed something.

Thank you!

@bigpon
Contributor

bigpon commented May 15, 2023

Hi!
AudioDec (ICASSP 2023 deadline: 22/10/26) and Encodec (arXiv, published on 22/10/24) were developed by different teams at Meta at almost the same time, so we haven't compared AudioDec with Encodec. However, since the AudioDec project focuses on human sounds while Encodec focuses on general audio, there are several main differences.

  1. AudioDec adopts only a single-resolution mel loss, while Encodec adopts multi-resolution mel losses plus waveform-based losses; for speech, the simple mel loss alone already models the signal well.
  2. AudioDec has a two-stage architecture (encoder + vocoder), which makes it suitable for developing new encoders/decoders for different applications such as denoising or binaural rendering.
  3. The autoencoder architecture of AudioDec is almost the same as SoundStream's, while Encodec's architecture is a combination of SoundStream and Demucs.
  4. The currently provided AudioDec pre-trained models were trained on speech-only corpora (VCTK or LibriTTS), while the Encodec models were trained on many different kinds of audio. Since a training/testing data mismatch usually causes significant performance degradation for data-driven models, we recommend using the AudioDec pre-trained models only for speech and training new AudioDec models for other types of audio signals.
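The single-resolution mel loss in point 1 can be illustrated with a minimal sketch: compute a log-mel spectrogram of the reference and reconstructed waveforms and take the L1 distance between them. This is an illustrative NumPy-only approximation, not the AudioDec implementation; the STFT/mel parameters (`n_fft`, `hop`, `n_mels`) and the filterbank construction are assumptions for demonstration.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters with centers evenly spaced on the mel scale.
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, center):
            fb[i, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i, k] = (right - k) / max(right - center, 1)
    return fb

def log_mel(x, sr=24000, n_fft=1024, hop=256, n_mels=80):
    # Magnitude STFT via framed windowed FFT, then mel projection and log.
    win = np.hanning(n_fft)
    frames = [np.abs(np.fft.rfft(x[s:s + n_fft] * win))
              for s in range(0, len(x) - n_fft + 1, hop)]
    spec = np.stack(frames, axis=1)              # (n_fft//2 + 1, T)
    mel = mel_filterbank(sr, n_fft, n_mels) @ spec
    return np.log(mel + 1e-5)

def mel_loss(x, y, **kwargs):
    # Single-resolution L1 log-mel distance between two waveforms.
    return float(np.mean(np.abs(log_mel(x, **kwargs) - log_mel(y, **kwargs))))
```

A multi-resolution variant (as in Encodec) would simply average `mel_loss` over several `(n_fft, hop)` settings; the point above is that AudioDec sticks with one resolution.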

@vadimkantorov
Author

Interesting. Thank you!

Also, the reason I asked is that projects like NaturalSpeech2 use a neural codec as an important part of their TTS pipeline, so AudioDec might be well suited for replicating such a model!

@bigpon
Contributor

bigpon commented May 15, 2023

Yes, since the training script is provided and training is not very computationally expensive, it should be easy to train a new AudioDec model on a new dataset for a new downstream task.

@vadimkantorov
Author

Maybe the information in this issue would be a great addition to the README. I assume many people will have similar questions (about similarities/differences with Encodec/SoundStream/LyraV2).

@bigpon
Contributor

bigpon commented May 16, 2023

Thanks for the suggestion!
I will add this question to the README.

@bigpon bigpon closed this as completed May 16, 2023
@bigpon bigpon added the question Further information is requested label May 17, 2023