PyTorch CUDA allocator optimization for dynamic batch shape dataloading in ASR #9061

Merged
5 commits merged into main from expandable-segments on May 2, 2024

Conversation

@pzelasko (Collaborator) commented Apr 29, 2024

What does this PR do?

I was profiling a particularly unlucky run that had dynamic batch shapes and operated close to the maximum GPU memory. The profile revealed that memory was being re-allocated for every mini-batch, adding about 30% overhead to training. This can be resolved gracefully by turning on the expandable_segments option in the PyTorch CUDA allocator, which, instead of reallocating, extends existing allocations as needed, removing this significant overhead.

In this PR I'm proposing to set this option automatically during dataloader instantiation. It can be disabled via configuration.
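
For context, expandable_segments is an option of PyTorch's caching CUDA allocator that is normally set through the PYTORCH_CUDA_ALLOC_CONF environment variable before the first CUDA allocation. A minimal sketch of the idea (not the exact NeMo code; the warning behaviour is inferred from the review discussion below):

```python
# Minimal sketch: enable expandable_segments unless the user already
# configured the allocator themselves (in which case we only warn).
import logging
import os

import torch


def enable_expandable_segments() -> None:
    if "PYTORCH_CUDA_ALLOC_CONF" in os.environ:
        logging.warning(
            "PYTORCH_CUDA_ALLOC_CONF is already set; not overriding it with expandable_segments:True."
        )
        return
    # Private PyTorch API (discussed later in this thread); applies allocator settings at runtime.
    torch.cuda.memory._set_allocator_settings("expandable_segments:True")
```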

For documentation purposes, the profile before the change (red blocks in the CUDA API timeline indicate malloc/free):
[profile screenshot: before the change]

and the profile after the fix:
[profile screenshot: after the change]

The blue bars at the top of the profiles (the CUDA HW kernel utilization timelines) are more condensed in the new profile, indicating improved GPU utilization.

Collection: ASR

Changelog

  • PyTorch CUDA allocator optimization for dynamic batch shape dataloading in ASR

Usage

  • The optimization is applied automatically during dataloader instantiation and can be disabled via configuration; a hedged sketch of how to opt out is shown below.
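
A minimal sketch, assuming the behaviour described in the review below (NeMo only warns and leaves the allocator alone when PYTORCH_CUDA_ALLOC_CONF is already set by the user):

```python
# If you want to control the allocator yourself, set PYTORCH_CUDA_ALLOC_CONF
# before training starts; the new code then only warns and does not override it.
import os

os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:False"
```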

Jenkins CI

The Jenkins CI system has been replaced by GitHub Actions self-hosted runners.

There's no need to comment jenkins on the PR to trigger Jenkins CI.
The GitHub Actions CI will run automatically when the PR is opened.
To run CI on an untrusted fork, a NeMo user with write access must click "Approve and run".

Before your PR is "Ready for review"

Pre checks:

  • Make sure you have read and followed the Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (e.g. Numba, Pynini, Apex, etc.)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items, you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines list specific people who can review PRs to various areas.

Additional Information

  • Related to # (issue)

Signed-off-by: Piotr Żelasko <petezor@gmail.com>
@galv (Collaborator) left a comment


Beautiful. I appreciate that you print a warning when someone has already set the environment variable since this is a global configuration.

However, that is not the only way someone may have set these options in the first place (that is, someone could have called _set_allocator_settings() on their own). Of course, this is not really expected here. It would be appreciated if you could take a look into whether there is a more robust way to check if this option is already set or not than by checking the environment variable.

Also, it's not clear to me how this setting might interact with a non-default allocator (for example, the RMM torch allocator provided by nvidia). Presumably setting this config has no effect in this case.

Approving anyway, since I trust you to adequately investigate whether any of my concerns above are real concerns.

@galv added Run CICD and removed Run CICD labels on May 1, 2024
@pzelasko (Collaborator, Author) commented May 2, 2024

> It would be appreciated if you could take a look into whether there is a more robust way to check if this option is already set or not than by checking the environment variable.

I grepped through the PyTorch 2.3 code. Unfortunately, it looks like only _set_allocator_settings is exposed in the Python API, and there is no _get equivalent.

> Also, it's not clear to me how this setting might interact with a non-default allocator (for example, the RMM torch allocator provided by nvidia). Presumably setting this config has no effect in this case.

Good point, I didn't know about RMM. It turns out it's available in our containers, so I just tested it out on a 1-GPU training run. It seems to "just work" (although I had to decrease the batch size to avoid CUDA OOM after ~100 steps), so I assume these options are discarded for custom allocators.
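
For readers unfamiliar with it, RMM can be plugged into PyTorch as a custom allocator roughly like this (a sketch only; the import path varies across RMM versions, and once a pluggable allocator is active the caching-allocator settings above presumably no longer apply):

```python
# Sketch: swap PyTorch's caching allocator for the RMM pool allocator.
# Must run before the first CUDA allocation; with a pluggable allocator
# in place, expandable_segments is not used.
import rmm
import torch
from rmm.allocators.torch import rmm_torch_allocator  # location may differ in older RMM releases

rmm.reinitialize(pool_allocator=True)  # back allocations with RMM's pooling memory resource
torch.cuda.memory.change_current_allocator(rmm_torch_allocator)
```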

@pzelasko added Run CICD and removed Run CICD labels on May 2, 2024
@pzelasko merged commit 9100cfd into main on May 2, 2024
133 checks passed
@pzelasko deleted the expandable-segments branch on May 2, 2024 at 18:13