
Training twice as long with Torch > 1.11 #2531

Closed
Craya opened this issue Apr 29, 2024 · 10 comments
Labels: bug, important

Comments

@Craya

Craya commented Apr 29, 2024

Describe the bug

Context
We are performing a custom ASR training based on the CommonVoice/CTC recipe.
Our dataset is composed of 95,000 records for training, 16,000 for validation, and 17,000 for testing (batch_size 12).

With SpeechBrain 0.5.15

  • with torch==1.11.0+cu113, each epoch lasts around 35 min:
    100%|██████████| 7879/7879 [34:18<00:00, 3.83it/s, train_loss=2.38]
  • with torch==1.12.0+cu113 and above (up to torch 2.2.2), each epoch lasts around 1 h 03 min:
    100%|██████████| 7879/7879 [1:03:01<00:00, 2.08it/s, train_loss=2.88]

Note that the iterations per second (it/s) drop significantly with torch versions > 1.11.

With SpeechBrain 1.0.0

  • Although requirements.txt specifies torch>=1.9.0 and torchaudio>=1.9.0, if we install torch <1.12.0, the training fails with the following error:
Traceback (most recent call last):
  File "./training.py", line 11, in <module>
    import model_eval
  File "/home/jovyan/spell-dev/model_eval.py", line 14, in <module>
    import inference
  File "/home/jovyan/spell-dev/inference.py", line 30, in <module>
    from speechbrain.inference.ASR import EncoderASR
  File "/usr/local/lib/python3.8/dist-packages/speechbrain/inference/__init__.py", line 5, in <module>
    from .ASR import *  # noqa
  File "/usr/local/lib/python3.8/dist-packages/speechbrain/inference/ASR.py", line 519, in <module>
    class StreamingASR(Pretrained):
  File "/usr/local/lib/python3.8/dist-packages/speechbrain/inference/ASR.py", line 546, in StreamingASR
    self, streamer: torchaudio.io.StreamReader, frames_per_chunk: int
AttributeError: module 'torchaudio' has no attribute 'io'
  • When we install a version of torch >1.11, we face the same slow-training problem as with SpeechBrain 0.5.15.

Conclusion
We would like to switch to SpeechBrain 1.0.0, but our trainings take twice as long as those performed with SpeechBrain 0.5.15 on the same dataset, same recipe, and same hyperparameters, apparently because of the required torch version upgrade.

Expected behaviour

We expect our trainings with SpeechBrain 1.0.0 to take the same amount of time or less with newer versions of torch.

To Reproduce

I didn't have time to try to reproduce the problem with the original recipe.

Environment Details

Ubuntu 20.04.5 LTS
Nvidia A100 GPU
Driver Version: 530.30.02
CUDA Version: 12.1
Python 3.8.10
transformers 4.37.2 & 4.40.1
torch/torchaudio/torchvision (installed following these guidelines)

  • torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0
  • torch==1.12.0+cu113 torchvision==0.13.0+cu113 torchaudio==0.12.0
  • torch==2.2.2 torchvision==0.17.2 torchaudio==2.2.2 (cu121)

Relevant Log Output

No response

Additional Context

Note that, with torch>1.11 and SB 0.5.15, we also faced the problem of non-convergence during fine-tuning of wav2vec 2.0 models reported in several issues. This problem never occurred with torch==1.11.

[...]
epoch: 35, lr_model: 1.80e-02, lr_wav2vec: 1.50e-05 - train loss: 1.22e-01 - valid loss: 1.46, valid CER: 9.28, valid WER: 15.07
epoch: 36, lr_model: 1.44e-02, lr_wav2vec: 1.35e-05 - train loss: 1.27e-01 - valid loss: 1.47, valid CER: 9.25, valid WER: 15.09
epoch: 37, lr_model: 1.15e-02, lr_wav2vec: 1.22e-05 - train loss: 1.24e-01 - valid loss: 1.43, valid CER: 9.24, valid WER: 15.03
epoch: 38, lr_model: 1.15e-02, lr_wav2vec: 1.22e-05 - train loss: 1.21e-01 - valid loss: 0.00e+00, valid CER: 1.00e+02, valid WER: 1.00e+02
epoch: 39, lr_model: 1.15e-02, lr_wav2vec: 1.22e-05 - train loss: 1.21e-01 - valid loss: 0.00e+00, valid CER: 1.00e+02, valid WER: 1.00e+02
epoch: 40, lr_model: 9.22e-03, lr_wav2vec: 1.09e-05 - train loss: 1.17e-01 - valid loss: 0.00e+00, valid CER: 1.00e+02, valid WER: 1.00e+02
[...]
@Craya added the bug label on Apr 29, 2024
@Adel-Moumen
Collaborator

Hello @Craya, thank you very much for reporting this issue to us. I am pinging @asumagic here since the error is related to the streaming inference, which uses features from a very recent version of torchaudio.

May I ask if you used --precision with the values fp16/bf16 with SB 1.0? You should see a very nice speedup.
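
For reference, a minimal sketch of what fp16/bf16 autocast does in plain PyTorch (the toy model, batch, and loss below are purely illustrative, not the recipe's actual code):

import torch

# Hypothetical toy model and batch, only to illustrate the autocast region.
model = torch.nn.Linear(80, 40).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
features = torch.randn(12, 80, device="cuda")

# Autocast-eligible ops (matmuls, convolutions, ...) run in bf16 inside this
# block, while parameters stay in fp32. This is where the speedup on Ampere
# GPUs such as the A100 comes from.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = model(features).pow(2).mean()

loss.backward()  # fp16 would normally also want a torch.cuda.amp.GradScaler; bf16 does not
optimizer.step()
optimizer.zero_grad()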

@Craya
Author

Craya commented Apr 29, 2024

Hi @Adel-Moumen ,

I used precision: fp32 as defined in the original recipe, I'll try with fp16/bf16 immediately, thanks for the tip.

Fabien.

@Adel-Moumen
Collaborator

> Hi @Adel-Moumen,
>
> I used precision: fp32 as defined in the original recipe, I'll try with fp16/bf16 immediately, thanks for the tip.
>
> Fabien.

Good. Also, would you mind sharing the exact commit hash of your SpeechBrain version? Are you using the latest SB version available in the dev branch? We fixed some slowness issues linked to the torchaudio resampler, and maybe that is why you were seeing some slowness.

Regarding the results that you obtained, do you confirm that you are unable to FT a wav2vec on the CV CTC recipe template? If so, I will try to retrain one myself and investigate what is happening.

@asumagic
Collaborator

asumagic commented Apr 29, 2024

> Hello @Craya, thank you very much for reporting this issue to us. I am pinging @asumagic here since the error is related to the streaming inference, which uses features from a very recent version of torchaudio.
>
> May I ask if you used --precision with the values fp16/bf16 with SB 1.0? You should see a very nice speedup.

Correct, the issue is those type annotations again, even if you don't use that code... Will fix ASAP.

In the meantime, you can work around the issue by removing this type annotation. Only the inference interfaces are affected; the new torchaudio features are only necessary for the ffmpeg streaming functionality.
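
A rough sketch of that workaround (hypothetical method name, not the exact code in speechbrain/inference/ASR.py): defer the annotation so torchaudio.io is never evaluated at import time on torchaudio releases that lack the io submodule:

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Only seen by static type checkers, never executed at runtime, so the
    # module can be imported even on older torchaudio without torchaudio.io.
    from torchaudio.io import StreamReader

class StreamingASR:
    # Stand-in for the annotated method; the string annotation is not
    # evaluated at class-definition time.
    def _consume_chunk(self, streamer: "StreamReader", frames_per_chunk: int):
        ...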

@asumagic
Collaborator

As for the training speed issue, this might be relevant: https://pytorch.org/blog/pytorch-1.12-released/#changes-to-float32-matrix-multiplication-precision-on-ampere-and-later-cuda-hardware

Using fp16/bf16 autocast as described should resolve the issue. For fp32 training, torch.backends.cuda.matmul.allow_tf32 = True would restore the 1.11 behavior.
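
Concretely, for fp32 training that is a one-time global toggle near the top of the training script (a sketch using the standard PyTorch flags):

import torch

# Re-enable TF32 matmuls on Ampere+ GPUs, which was the default before torch 1.12.
torch.backends.cuda.matmul.allow_tf32 = True
# cuDNN convolutions already allow TF32 by default; shown here for completeness.
torch.backends.cudnn.allow_tf32 = True
# On torch >= 1.12 the equivalent newer API is:
# torch.set_float32_matmul_precision("high")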

@Adel-Moumen
Collaborator

> Using fp16/bf16 autocast as described should resolve the issue. For fp32 training, torch.backends.cuda.matmul.allow_tf32 = True would restore the 1.11 behavior.

Interesting. I guess we should re-introduce torch.backends.cuda.matmul.allow_tf32 = True? I never really understood why we weren't using it.

@asumagic
Collaborator

> > Using fp16/bf16 autocast as described should resolve the issue. For fp32 training, torch.backends.cuda.matmul.allow_tf32 = True would restore the 1.11 behavior.
>
> Interesting. I guess we should re-introduce torch.backends.cuda.matmul.allow_tf32 = True? I never really understood why we weren't using it.

I don't think there was an explicit decision not to do it in SpeechBrain, more that it was never brought up.
It lowers the precision of the matmul in a hardware-dependent way, which seems to be PyTorch's rationale for making this change.
I think it makes more sense to recommend defaulting to fp16/bf16, but I don't know if we might have any models that are not tolerant to fp16 autocast. And if so, I don't know if they would work with tf32 matmul (presumably, they would work).

@Craya
Author

Craya commented Apr 29, 2024

Test with precision: fp16:
100%|██████████| 7879/7879 [33:26<00:00, 3.93it/s, train_loss=3.14]

Test with precision: bf16:
100%|██████████| 7879/7879 [31:58<00:00, 4.11it/s, train_loss=2.51]

Thanks a lot @Adel-Moumen & @asumagic, you solved my problem faster than 2 epochs!

@Craya closed this as completed on Apr 29, 2024
@Adel-Moumen
Collaborator

> Test with precision: fp16: 100%|██████████| 7879/7879 [33:26<00:00, 3.93it/s, train_loss=3.14]
>
> Test with precision: bf16: 100%|██████████| 7879/7879 [31:58<00:00, 4.11it/s, train_loss=2.51]
>
> Thanks a lot @Adel-Moumen & @asumagic, you solved my problem faster than 2 epochs!

Np. Keep me posted about the final results. And could you please let me know if you are still experiencing bad results on SB 1.0 on CV CTC? If so, I can take a deeper look.

@Craya
Author

Craya commented May 2, 2024

@Adel-Moumen As described in the issue, we are performing a custom FT with our own dataset, not the default CV FT training.

Now that the full training has ended, I can confirm that:

  • performance is better on SB 1.0 than on SB 0.5.15 (probably due to the improved data augmentation and the KenLM model)
  • there is no performance difference between precision: fp32 and precision: bf16

Thanks a lot for your help.
