Fastconformer-CTC crashing with Watchdog caught collective operation timeout #9563
I have the same issue. Have you solved it? How can we avoid it?
First, to triage whether the model or the data is the problem, run with a subset of the data, maybe 50 hours or so. What is the max duration of your data? Reduce it to at most 40 seconds, preferably 30 seconds; we have some tools to segment data automatically.

Next, NCCL timeouts are hard to debug because NeMo code mostly uses PyTorch; we don't do much at the NCCL level, so it can be caused by many different things. See if model fine-tuning works on a single GPU with a small batch size first, then try two GPUs.

The learning rate and optimizer state are preserved in the checkpoint files saved by Lightning during training. If you use exp_manager, resuming a job is quite easy; see the docs for exp_manager and the tutorials showcasing training with it (just run the same script again with the same output dir if you have set the two resume flags in exp_manager).

We don't have much information about hardware effects on specific operations in our team; we rely on PyTorch and PyTorch Lightning to provide a stable training engine.
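For reference, a minimal sketch (not from this thread) of what the exp_manager resume setup could look like; the directory and experiment names are placeholders, and `resume_if_exists` / `resume_ignore_no_checkpoint` are the two resume flags referred to above:

```python
# Sketch: configure NeMo's exp_manager so a crashed run can be resumed by
# re-running the same script with the same output dir. Paths/names are placeholders.
import pytorch_lightning as pl
from omegaconf import OmegaConf
from nemo.utils.exp_manager import exp_manager

# exp_manager sets up its own logger and checkpoint callback, so the Trainer
# is created without them (as in the standard NeMo example configs).
trainer = pl.Trainer(
    devices=2,
    accelerator="gpu",
    strategy="ddp",
    max_epochs=5,
    precision="bf16-mixed",
    logger=False,
    enable_checkpointing=False,
)

exp_cfg = OmegaConf.create({
    "exp_dir": "/results/fastconformer_ctc_finetune",   # keep identical across restarts
    "name": "stt_en_fastconformer_ctc_large_finetune",
    "resume_if_exists": True,              # pick up the latest checkpoint in exp_dir
    "resume_ignore_no_checkpoint": True,   # don't fail on the very first run
    "create_checkpoint_callback": True,
    "checkpoint_callback_params": {"save_top_k": 3, "always_save_nemo": True},
})
exp_manager(trainer, exp_cfg)

# The Lightning checkpoints written under exp_dir contain model weights plus
# optimizer and LR-scheduler state, so resuming continues the schedule where it left off.
```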
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.
It might be a memory leak; it has happened sometimes when the system RAM is full.
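In case it helps with diagnosing that, here is a rough sketch (my own, not from NeMo) of a Lightning callback that periodically prints host RAM usage, so a slow leak can be spotted before the NCCL watchdog fires; `psutil` and the logging interval are assumptions:

```python
# Rough sketch: periodically print host RAM usage during training to check whether
# system memory is creeping up (e.g. from dataloader workers) before a crash.
import psutil
import pytorch_lightning as pl


class SystemRAMMonitor(pl.Callback):
    def __init__(self, every_n_steps: int = 500):
        self.every_n_steps = every_n_steps

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        if trainer.is_global_zero and batch_idx % self.every_n_steps == 0:
            mem = psutil.virtual_memory()
            print(
                f"[step {trainer.global_step}] host RAM used: "
                f"{mem.used / 2**30:.1f} GiB ({mem.percent:.0f}%)"
            )

# Usage: pass it to the Trainer, e.g. pl.Trainer(callbacks=[SystemRAMMonitor()], ...)
```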
This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.
This issue was closed because it has been inactive for 7 days since being marked as stale.
Hi,
We're trying to fine-tune the stt_en_fastconformer_ctc_large model on around 20k hours of data on 2 H100s. We're using a batch size of 128 with 8 num_workers, and we trained the tokenizer with a 1024 vocab_size. Training is taking very long, more than 30 hours per epoch, and after around 70-80% of the first epoch it crashes with the following error:
How do we avoid this issue? Should we consider reducing the finetuning data size? If we save intermediate checkpoints, is there a way to also save the lr scheduler state to effectively resume the training if it crashes? Any guidance regarding this would be of great help.
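As a concrete illustration of the "reduce max duration" suggestion in the reply above, here is a small sketch that filters a NeMo-style JSON-lines manifest, assuming each entry has a `duration` field; the file names and the 30-second cap are placeholders. The ASR dataset configs also expose a `max_duration` field (e.g. `model.train_ds.max_duration`) that drops longer utterances at load time.

```python
# Small sketch: cap utterance length in a NeMo-style JSON-lines manifest at 30 s.
# Input/output paths are placeholders.
import json

MAX_DURATION_S = 30.0
kept, dropped = [], 0

with open("train_manifest.json") as src, open("train_manifest_max30s.json", "w") as dst:
    for line in src:
        entry = json.loads(line)
        if entry["duration"] <= MAX_DURATION_S:
            dst.write(json.dumps(entry) + "\n")
            kept.append(entry["duration"])
        else:
            dropped += 1

print(f"kept {len(kept)} utterances (max {max(kept):.1f}s), "
      f"dropped {dropped} longer than {MAX_DURATION_S}s")
```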
Also, unrelated to the issue: we noticed we didn't get much of a speedup from using H100s instead of A100s, and sometimes bf16-mixed was slower on H100 than fp16 on H100. On the other hand, bf16-mixed is almost always faster than fp16 on A100. Is this expected?
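For context, the two modes being compared are just Trainer precision settings; a minimal sketch (Lightning 2.x precision strings, step count and device count arbitrary) of toggling them for a quick throughput check:

```python
# Minimal sketch: run a short, fixed number of steps under each mixed-precision mode
# and compare wall-clock time on the target GPU.
import pytorch_lightning as pl

def make_trainer(precision: str) -> pl.Trainer:
    return pl.Trainer(
        devices=2,
        accelerator="gpu",
        strategy="ddp",
        precision=precision,      # "bf16-mixed" or "16-mixed"
        max_steps=200,
        logger=False,
        enable_checkpointing=False,
    )

# Timing trainer.fit(...) for make_trainer("bf16-mixed") vs make_trainer("16-mixed")
# gives a rough per-hardware comparison; results vary with GPU architecture,
# kernels, and NeMo/PyTorch versions.
```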
Thank you!