CTC-only training recipes for LibriSpeech (code from Samsung AI Cambridge) #2290
Conversation
@@ -152,9 +152,12 @@ def __init__(
            ),
            torch.nn.Dropout(dropout),
        )
        self.custom_tgt_module = ModuleList(
@asumagic I guess this is where the problem is?
Yes, this change does what I ultimately suggested in the discussion in my PR, with the same consequences.

Though, now, I'm thinking that we could just skip the headache and fix the problem by gating it behind `num_decoder_layers > 0` and break models: as far as I can tell, the only affected model in the SB repo seems to be the conformer-transducer model.
Then, just in case, we could provide a script that would remove the module from a checkpoint file.
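Such a cleanup script could be as simple as filtering the offending keys out of the saved state dict. The sketch below is hypothetical (the helper name is made up, and using `custom_tgt_module` as the prefix to drop is an assumption based on the diff above); a real version would read and re-write the checkpoint with `torch.load`/`torch.save`, but the filtering logic is shown on a plain dict so it is easy to follow:

```python
def strip_prefixed_keys(state_dict, prefix="custom_tgt_module"):
    """Return a copy of a state_dict-like mapping without entries whose
    key starts with `prefix`.

    Hypothetical helper: in practice `state_dict` would come from
    torch.load(ckpt_path) and the result written back with torch.save().
    """
    return {k: v for k, v in state_dict.items() if not k.startswith(prefix)}

# Plain dict standing in for a real checkpoint state_dict:
ckpt = {
    "encoder.layer0.weight": 1,
    "custom_tgt_module.0.weight": 2,
    "decoder.layer0.weight": 3,
}
cleaned = strip_prefixed_keys(ckpt)
print(sorted(cleaned))  # ['decoder.layer0.weight', 'encoder.layer0.weight']
```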
Thanks @shucongzhang! Could you start by fixing the tests? It can be done locally by installing pre-commit :-)
Definitely. I have fixed the tests :)
Thank you @shucongzhang! This is a great contribution. My main comment is that we need to update this PR to the new version of SpeechBrain that is currently available in the unstable branch (but will soon be merged into the development branch).
Hello @shucongzhang, thanks for your great work! We recently released in

Best,
Hi everyone, I have been modifying this PR so that it complies with SB 1.0. I also added support for beam search decoding (and later I will run n-gram decoding as well). I'm in the process of training the models. I found that one epoch is about ~40 minutes on an A100 80GB with fp16. However, in the yaml @shucongzhang specified

Ping @TParcollet as well, as I guess you were part of this PR.

Best,
Hello @shucongzhang, a small ping on my previous message about training time. Could you please confirm that 500 epochs is reasonable? Ty.
Hi @Adel-Moumen, Yes, the training time also bothers me. From my knowledge and my experiments, it seems that for the vanilla Conformer/Branchformer a large batch size and a large number of epochs are necessary. Based on my Conformer training log,

Thus, maybe it is okay to make some trade-off between the training time and WER? Thx!
I'm wondering about something. Why are we using a SentencePiece 128 BPE vocab for our CTC Branchformer/Conformer? If you are training with CTC, then why not use label_encoder with chars? Ping @TParcollet / @shucongzhang
@Adel-Moumen Hi Adel, I was in China for the Lunar New Year and am just back to work. Happy Lunar New Year to you :) I used the 128 BPE to follow some previous works. I didn't test what the results would be if using label_encoder with chars.
@shucongzhang Welcome back, and Happy Lunar New Year to you too 🎉! Thanks for the clarification. It makes sense from your point of view. I will try with label_encoder since I think it could be useful here (and from a CTC point of view, I don't really understand the need for using BPE instead of phonemes/chars). I'll let you know when I have some results to share. It might take a bit of time due to the next release of SB 1.0, but I'll let you know what happens :)
@Adel-Moumen we should avoid using label_encoder imho. SentencePiece can be reduced to char-only. I don't like our recipes that use label_encoder; we developed this class because we did not want to use SentencePiece at first, but here we are. Maybe we should deprecate it.
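For context on "reducing" a subword tokenizer to characters: a char-level vocabulary just assigns one id per distinct character (SentencePiece offers the equivalent via its `model_type="char"` training option). A minimal plain-Python sketch of what such a tokenizer does, with index 0 reserved for the CTC blank (an illustrative convention, not necessarily what the recipes use):

```python
def build_char_vocab(corpus):
    """Map each distinct character in the corpus to an integer id,
    reserving id 0 for the CTC blank symbol."""
    chars = sorted(set("".join(corpus)))
    return {c: i + 1 for i, c in enumerate(chars)}

def encode(text, vocab):
    """Turn a string into a list of character ids."""
    return [vocab[c] for c in text]

def decode(ids, vocab):
    """Invert encode(): turn ids back into a string."""
    inv = {i: c for c, i in vocab.items()}
    return "".join(inv[i] for i in ids)

vocab = build_char_vocab(["the cat", "sat"])
print(decode(encode("cat", vocab), vocab))  # cat
```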
Hello, So I had the opportunity to train a bit (~200 epochs) with each model (Branchformer and Conformer), and everything ran smoothly :) I will therefore merge this PR. Thanks a lot @shucongzhang for your very nice work. :)

Tests

```shell
python -c 'from tests.utils.recipe_tests import run_recipe_tests; print("TEST FAILED!") if not(run_recipe_tests(filters_fields=["Hparam_file"], filters=[["recipes/LibriSpeech/ASR/CTC/hparams/conformer_large.yaml", "recipes/LibriSpeech/ASR/CTC/hparams/branchformer_large.yaml"]], do_checks=False, run_opts="--device=cuda")) else print("TEST PASSED")'
```
LGTM. Thanks again @shucongzhang :)
Thank you so much @Adel-Moumen!
…AI Cambridge) (speechbrain#2290)" This reverts commit d086cde.
* Skip lazy imports when the caller is inspect.py
  This avoids having certain inspect functions import our lazy modules when we don't want them to. `getframeinfo` in particular appears to do it, and this gets called by PyTorch at some point. IPython might also be doing it, but autocomplete still seems to work. This does not appear to break anything. Added a test for hyperpyyaml to ensure we're not breaking that.
* SSL_Semantic_Token _ new PR (speechbrain#2509)
  * remove unnecessary files and move to dasb
  * remove extra recipe from test
  * update ljspeech quantization recipe
  * add discrete_ssl and remove extra files
  * fix precommit
  * update kmeans and add tokenizer for postprocessing
  * fix precommit
  * Update discrete_ssl.py
  * fix clone warning
  Co-authored-by: Mirco Ravanelli <mirco.ravanelli@gmail.com>
* _ensure_module Raises docstring
* Expose `ensure_module` so that docs get generated for it
  This is already an internal class anyway, and this is safe to call.
* Update actions/setup-python
* Use `uv` in test CI + merge some dep installs
  The consequence is faster dependency installation. Merging some of the dependency installs helps avoid some packages being reinstalled from one line to the next. Additionally, CPU versions are specified when relevant, to avoid downloading CUDA stuff the CI can't use anyway.
* Use `uv` in doc CI + merge some dep installs
  Similar rationale as for the test CI.
* Parallelize doc generation with Sphinx
  This does not affect the entire doc generation process but should allow some minor multithreading even with the 2-core CI workers.
* Enable `uv` caching on the test CI
* Enable `uv` caching on the docs CI
* CTC-only training recipes for LibriSpeech (code from Samsung AI Cambridge) (speechbrain#2290)
  CTC-only pre-training of conformer and branchformer.
---------
Co-authored-by: Shucong Zhang/Embedded AI /SRUK/Engineer/Samsung Electronics <s1.zhang@sruk-ccn4.eu.corp.samsungelectronics.net>
Co-authored-by: Adel Moumen <adelmoumen.pro@gmail.com>
Co-authored-by: Adel Moumen <88119391+Adel-Moumen@users.noreply.github.com>
Co-authored-by: Parcollet Titouan <titouan.parcollet@univ-avignon.fr>
* Update CommonVoice transformer recipes (code from Samsung AI Center Cambridge) (speechbrain#2465)
  Update CV transformer recipes to match latest results with conformer.
  Co-authored-by: Titouan Parcollet/Embedded AI /SRUK/Engineer/Samsung Electronics <t.parcollet@sruk-ccn4.eu.corp.samsungelectronics.net>
  Co-authored-by: Mirco Ravanelli <mirco.ravanelli@gmail.com>
  Co-authored-by: Adel Moumen <adelmoumen.pro@gmail.com>
* Whisper improvements: flash attention, KV caching, lang_id, translation, training... (speechbrain#2450)
  Whisper improvements: flash attention, KV caching, language identification, translation, finetuning improvements, and more.
* Update README.md
* precommit
* update zed download link (speechbrain#2514)
* `RelPosEncXL` refactor and precision fixes (speechbrain#2498)
  * Add `RelPosEncXL.make_pe`, rework precision handling
  * Rework RelPosEncXL output dtype selection
* Fix in-place input normalization when using `sentence`/`speaker` norm (speechbrain#2504)
* fix LOCAL_RANK to be RANK in if_main_process (speechbrain#2506)
* Fix Separation and Enhancement recipes behavior when NaN encountered (speechbrain#2524)
  * Formatting using precommit hooks
* Lock torch version in requirements.txt (speechbrain#2528)
* Fix compatibility for torchaudio versions without `.io` (speechbrain#2532)
  This avoids having the Python interpreter attempt to resolve the type annotation directly.
* fix docstrings
* consistency tests - classification
* consistency tests - classification
* consistency tests - interpret
* default to no wham
* fix after tests pass
* fix after tests pass
* tests after that
* fix consistency
---------
Co-authored-by: asu <sdelang@sdelang.fr>
Co-authored-by: Pooneh Mousavi <moosavi.pooneh@gmail.com>
Co-authored-by: Mirco Ravanelli <mirco.ravanelli@gmail.com>
Co-authored-by: shucongzhang <104781888+shucongzhang@users.noreply.github.com>
Co-authored-by: Shucong Zhang/Embedded AI /SRUK/Engineer/Samsung Electronics <s1.zhang@sruk-ccn4.eu.corp.samsungelectronics.net>
Co-authored-by: Adel Moumen <adelmoumen.pro@gmail.com>
Co-authored-by: Adel Moumen <88119391+Adel-Moumen@users.noreply.github.com>
Co-authored-by: Parcollet Titouan <titouan.parcollet@univ-avignon.fr>
Co-authored-by: Parcollet Titouan <parcollet.titouan@gmail.com>
Co-authored-by: Titouan Parcollet/Embedded AI /SRUK/Engineer/Samsung Electronics <t.parcollet@sruk-ccn4.eu.corp.samsungelectronics.net>
Co-authored-by: Yingzhi WANG <41187612+BenoitWang@users.noreply.github.com>
Co-authored-by: Peter Plantinga <plantinga.peter@protonmail.com>
Co-authored-by: Séverin <123748182+SevKod@users.noreply.github.com>
… AI Cambridge) (speechbrain#2290)" This reverts commit 6d408c5.
What does this PR do?
This PR provides recipes for training CTC models from scratch (no W2V2, no Whisper). It yields strong Branchformer-CTC and Conformer-CTC models.
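As a quick reminder of what CTC-only training implies at inference time: a greedy decode takes the argmax token per frame, then drops blanks and merges consecutive repeats. A minimal sketch of that standard collapse rule (the token ids and blank index below are illustrative, not tied to the recipes' vocabularies):

```python
def ctc_greedy_collapse(frame_ids, blank=0):
    """Collapse a per-frame CTC argmax sequence: drop blank frames and
    merge consecutive repeated ids (standard CTC decoding rule)."""
    out, prev = [], None
    for i in frame_ids:
        if i != blank and i != prev:
            out.append(i)
        prev = i
    return out

# Frames: blank, c, c, blank, a, a, t  ->  c, a, t
print(ctc_greedy_collapse([0, 3, 3, 0, 2, 2, 7]))  # [3, 2, 7]
```

Note that a blank between two identical ids keeps both (that is how CTC emits doubled letters such as "ll").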