Problem with max-duration during training #119

Closed
tz301 opened this issue Nov 14, 2021 · 6 comments

tz301 commented Nov 14, 2021

I'm using V100 GPUs with 16 GB of memory and training with world-size = 4.

If I use max-duration = 50, an out-of-memory (OOM) error occurs after some batches of training.
If I use max-duration = 30, training finishes, but GPU utilization is usually below 60%, which probably means a longer training time.

What is the main contributor to GPU memory usage? Any advice?

pzelasko (Collaborator) commented

You might want to filter out some cuts that are too long. You likely have outliers with large duration.
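
(A minimal sketch of how that filtering could look with Lhotse's CutSet API; the file path and the 20-second threshold are only illustrative values, not recommendations.)

```python
from lhotse import CutSet

# Load the training cuts (the path here is hypothetical).
cuts = CutSet.from_file("data/fbank/cuts_train.jsonl.gz")

# Keep only cuts of at most 20 seconds; very long outlier cuts are what
# typically blow up the memory of a batch assembled with max-duration.
max_cut_duration = 20.0
cuts = cuts.filter(lambda c: c.duration <= max_cut_duration)
```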

tz301 (Author) commented Nov 14, 2021

> You might want to filter out some cuts that are too long. You likely have outliers with large duration.

Yeah, I got it.

I think that if I set max-duration, the total duration in a batch will not exceed this value, right?
So maybe some of the transcripts in this batch are too long?

pzelasko (Collaborator) commented

Yeah, something like that; I'd say too-long audio/features.

BTW, the total duration of the supervised chunks won't exceed max-duration, which means the actual batch can be larger due to padding. But with BucketingSampler the amount of padding is negligible.
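
(For illustration, a sketch of passing max-duration to a Lhotse BucketingSampler; the cut path, bucket count, and duration value are assumptions, and icefall's data module may wire this up slightly differently.)

```python
from lhotse import CutSet
from lhotse.dataset import BucketingSampler

cuts = CutSet.from_file("data/fbank/cuts_train.jsonl.gz")  # hypothetical path

# max_duration caps the summed duration (in seconds) of the supervisions in a
# batch; padding frames come on top of that. Bucketing groups cuts of similar
# length, which keeps that padding overhead small.
sampler = BucketingSampler(
    cuts,
    max_duration=30.0,
    shuffle=True,
    num_buckets=30,
)
```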

tz301 (Author) commented Nov 14, 2021

> Yeah, something like that; I'd say too-long audio/features.
>
> BTW, the total duration of the supervised chunks won't exceed max-duration, which means the actual batch can be larger due to padding. But with BucketingSampler the amount of padding is negligible.

So for the same data, icefall will need more GPU memory than Kaldi, right?

If I remember correctly, in Kaldi's training there is a parameter egs.chunk-width (such as 150,110,100) to control the number of frames per chunk. I usually use minibatch = 64, so the maximum duration per batch may be larger than 64 s, and I never see OOM; GPU utilization is usually above 90%.

pzelasko (Collaborator) commented

In general, any alignment-free training will require more memory due to padding. I think Kaldi was able to optimize memory usage because the alignment gives you frame-level supervision, so you can take fixed-size chunks of utterances. For CTC or an attention decoder, on the other hand, you have to use the whole utterance.
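
(To make the padding cost concrete, a small back-of-the-envelope calculation with made-up frame counts, comparing whole-utterance padded batching against fixed-width chunking.)

```python
# Illustrative only: the frame counts are invented.
utterance_frames = [230, 410, 980, 1520]  # feature frames per utterance
useful_frames = sum(utterance_frames)

# Whole-utterance (alignment-free) batching: pad everything to the longest.
padded_frames = len(utterance_frames) * max(utterance_frames)
print(f"padded batch:  {padded_frames} frames "
      f"({padded_frames / useful_frames:.2f}x the useful frames)")

# Fixed-width chunking (possible when frame-level alignments exist): each
# utterance is split into equal chunks, so only the last chunk of each
# utterance carries padding.
chunk_width = 150
num_chunks = sum(-(-n // chunk_width) for n in utterance_frames)  # ceil division
chunked_frames = num_chunks * chunk_width
print(f"chunked batch: {chunked_frames} frames "
      f"({chunked_frames / useful_frames:.2f}x the useful frames)")
```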

tz301 (Author) commented Nov 20, 2021

Thanks, I'll close this issue.

By the way, for Docker users, --shm-size also needs to be set for parallel training.
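
(For example, something along these lines; the size and image name are placeholders. PyTorch DataLoader workers exchange tensors through /dev/shm, and Docker's default of 64 MB is usually too small for multi-worker loading.)

```bash
# Give the container a larger /dev/shm so DataLoader workers don't run out
# of shared memory; 8g is only an example value.
docker run --gpus all --shm-size=8g -it <your-icefall-image> bash
```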

tz301 closed this as completed Nov 20, 2021