Training twice as long with Torch > 1.11 #2531
Comments
Hello @Craya, thank you very much for reporting this issue to us. I am pinging @asumagic here since the error is related to the streaming inference, which uses features from a very recent version of torchaudio. May I ask you if you used …
Hi @Adel-Moumen, I used … Fabien.
Good. Also, do you mind sharing with me the exact commit hash of your SpeechBrain version? Are you using the latest SB version available in the dev branch? We fixed some slowness issues linked to the torchaudio resampler, and maybe this is why you were seeing some slowness. Regarding the results that you obtained, do you confirm that you are unable to fine-tune a wav2vec model on the CV CTC recipe template? If so, I will try to retrain one myself and investigate what is happening.
Correct, the issue is those type annotations again, even if you don't use that code... Will fix ASAP. In the meantime, you can work around the issue by removing this type annotation. Only the inference interfaces are affected; the new torchaudio features are only necessary for the ffmpeg streaming functionality.
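In case it helps anyone hitting the same ImportError, here is a minimal sketch of the usual guard pattern for such annotations. The function name and its use of StreamReader are illustrative, not the actual SpeechBrain code:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated only by static type checkers, never at runtime, so an
    # older torchaudio without the streaming API still imports fine.
    from torchaudio.io import StreamReader


def open_stream(uri: str) -> "StreamReader":
    # The lazy import defers the torchaudio requirement to the moment
    # streaming is actually used.
    from torchaudio.io import StreamReader

    return StreamReader(uri)
```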
As for the training speed issue, this might be relevant: https://pytorch.org/blog/pytorch-1.12-released/#changes-to-float32-matrix-multiplication-precision-on-ampere-and-later-cuda-hardware. Using fp16/bf16 autocast as described there should resolve the issue. For fp32 training, re-enabling TF32 matmuls (the pre-1.12 default on Ampere) should restore the previous speed.
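For concreteness, a small sketch of both options on an Ampere GPU; this is plain PyTorch, not a SpeechBrain API, and the model/tensor shapes are placeholders:

```python
import torch

# Since torch 1.12, TF32 is disabled by default for float32 matmuls on
# Ampere and later GPUs; re-enabling it recovers the pre-1.12 throughput.
torch.backends.cuda.matmul.allow_tf32 = True

# Equivalent knob on torch >= 1.12 ("high" allows TF32 for matmuls):
torch.set_float32_matmul_precision("high")

# Alternatively, run the forward pass under autocast (bf16 on an A100):
model = torch.nn.Linear(512, 512).cuda()
x = torch.randn(8, 512, device="cuda")
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    y = model(x)
```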
Interesting. I guess we should re-introduce enabling TF32 by default, then.
I don't think there was an explicit decision not to do it in SpeechBrain, more that it was never brought up.
Test with …
Test with …
Thanks a lot @Adel-Moumen & @asumagic, you solved my problem faster than 2 epochs!
Np. Keep me posted about the final results. And could you please let me know if you are still experiencing bad results on SB 1.0 on CV CTC? If so, I can take a deeper look.
@Adel-Moumen As described in the issue, we are performing a custom fine-tuning with our own dataset, not the default CV fine-tuning. Now that the full training has ended, I can confirm that: …
Thanks a lot for your help.
Describe the bug
Context
We are performing a custom ASR training based on the CommonVoice/CTC recipe.
Our dataset is composed of 95,000 records for train, 16,000 for validation, and 17,000 for test (batch_size 12).
With SpeechBrain 0.5.15:
100%|██████████| 7879/7879 [34:18<00:00, 3.83it/s, train_loss=2.38]
With SpeechBrain 1.0.0:
100%|██████████| 7879/7879 [1:03:01<00:00, 2.08it/s, train_loss=2.88]
Note that the number of it/s drops a lot with torch > 1.11.
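(Sanity check on the throughput: 7879 iterations at 3.83 it/s ≈ 2057 s ≈ 34 min, versus 7879 iterations at 2.08 it/s ≈ 3788 s ≈ 63 min, i.e. roughly twice the wall-clock time, consistent with the timings in the logs above.)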
Conclusion
We would like to switch to SpeechBrain 1.0.0, but our trainings are twice as long as those performed with SpeechBrain 0.5.15 with the same dataset, recipe, and hyperparameters, seemingly because of the required torch version upgrade.
Expected behaviour
We expect our trainings with SpeechBrain 1.0.0 to take the same amount of time or less with newer versions of torch.
To Reproduce
I haven't had time to try to reproduce the problem with the original recipe.
Environment Details
Ubuntu 20.04.5 LTS
Nvidia A100 GPU
Driver Version: 530.30.02
CUDA Version: 12.1
Python 3.8.10
transformers 4.37.2 & 4.40.1
torch/torchaudio/torchvision (installed following these guidelines)
Relevant Log Output
No response
Additional Context
Note that, with torch > 1.11 and SB 0.5.15, we also faced the problem of non-convergence during fine-tuning of wav2vec 2.0 models reported in several issues. This problem never occurred with torch==1.11.