Problem with max-duration during training #119

I'm using a V100 with 16 GB of memory and training with world-size = 4.
If I use max-duration = 50, an out-of-memory (OOM) error occurs after some batches of training.
If I use max-duration = 30, training completes, but GPU utilization is usually below 60%, which may mean longer training time.
What is the main contributor to GPU memory usage? Any advice?

Comments
You might want to filter out cuts that are too long; you likely have outliers with a large duration.
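A minimal sketch of that filtering step, assuming Lhotse's CutSet API (the manifest path and the 20-second threshold here are hypothetical, illustrative choices):

```python
from lhotse import CutSet

# Load the training cuts (path is hypothetical).
cuts = CutSet.from_file("data/cuts_train.jsonl.gz")

# Drop outlier cuts longer than 20 seconds: a single very long utterance
# forces every other item in its batch to be padded up to its length.
cuts = cuts.filter(lambda cut: cut.duration <= 20.0)
```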
Yeah, I got it. I think if I set max-duration, the total duration in a batch will not exceed this value, right?
Yeah, something like that; by "too long" I'd say too-long audio/features. BTW, the total duration of supervised chunks won't exceed max-duration, which means the actual batch duration can be larger due to padding. But with BucketingSampler the amount of padding is negligible.
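For reference, a sketch of how such a sampler is typically constructed in Lhotse (the exact import path and argument names may vary between Lhotse versions; max_duration is the per-batch cap in seconds):

```python
from lhotse.dataset import BucketingSampler

# Buckets group cuts of similar duration, so items in a batch need
# little padding; max_duration caps the supervised seconds per batch.
sampler = BucketingSampler(
    cuts,
    max_duration=30.0,
    shuffle=True,
    num_buckets=30,
)
```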
So for the same data, icefall will need more GPU memory compared to Kaldi, right? If I remember correctly, in Kaldi's training there is a parameter, egs.chunk-width (such as 150,110,100), to control the frames per batch. I usually use minibatch=64, so the max duration per batch may be larger than 64 s, and I never see OOM; GPU usage is usually > 90%.
In general, any alignment-free training will require more memory due to padding. I think Kaldi was able to optimize memory usage because alignments give you frame-level supervision, so you can take fixed-size chunks of utterances. On the other hand, for CTC or an attention decoder you have to use the whole utterance.
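To make the padding overhead concrete, here is a small self-contained sketch with made-up utterance lengths; without bucketing, every item in a batch is padded up to the longest one:

```python
# Hypothetical utterance lengths (seconds) drawn into one batch.
durations = [3.2, 7.8, 12.5, 19.6]

padded_total = len(durations) * max(durations)  # all padded to the longest
supervised_total = sum(durations)
padding_fraction = 1 - supervised_total / padded_total
print(f"padding overhead: {padding_fraction:.0%}")  # ~45% for these lengths
```

Bucketing by duration makes the lengths within a batch similar, which drives this fraction close to zero.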
Thanks, I'll close this issue. By the way, for Docker users, --shm-size also needs to be set for parallel training.