Fix Distributed Fused Adam Issues #8880

Merged
merged 5 commits into NVIDIA:main on Apr 12, 2024

Conversation

@alpha0422 (Contributor) commented Apr 11, 2024

What does this PR do ?

This PR fixes the following distributed optimizer issues:

  1. Support the NHWC layout, as required by Diffusion models;
  2. Fix zero_grad() not being captured by CUDA graphs when it launches a kernel (see the sketch after this list);
  3. Add an option to distribute optimizer states within a node.

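A minimal, generic PyTorch sketch of the zero_grad()/CUDA graph point above (not code from this PR): when zero_grad(set_to_none=False) zeroes gradient buffers it launches a kernel, and that kernel has to be issued inside the capture region so that every replay starts from zeroed gradients.

```python
import torch

# Generic sketch, not code from this PR: capture a full training step,
# including a zero_grad() that launches a kernel, inside a CUDA graph.
model = torch.nn.Linear(16, 16).cuda()
# capturable=True is required to run Adam's step() inside a CUDA graph capture.
opt = torch.optim.Adam(model.parameters(), lr=1e-3, capturable=True)
static_input = torch.randn(8, 16, device="cuda")

# Warm up on a side stream before capture, as the CUDA graphs docs recommend.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        opt.zero_grad(set_to_none=False)  # zeroing .grad buffers launches a kernel
        model(static_input).sum().backward()
        opt.step()
torch.cuda.current_stream().wait_stream(s)

# Because set_to_none=False zeroes buffers with a kernel, that kernel must be
# part of the captured work; otherwise replays would accumulate into stale grads.
graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(graph):
    opt.zero_grad(set_to_none=False)
    model(static_input).sum().backward()
    opt.step()

graph.replay()  # replays zero_grad + forward + backward + step
```
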
Collection: [Note which collection this PR will affect]

Changelog

  • Add specific line by line info of high level changes in this PR.

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this 
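
Since the template placeholder above was left unfilled, here is a purely hypothetical configuration sketch (not the author's snippet). The optimizer name "distributed_fused_adam" is NeMo's existing Apex-backed distributed optimizer; the option name `distribute_within_nodes` is an assumption made here for illustration and may differ from the merged code.

```python
# Hypothetical sketch, not from this PR: select NeMo's Apex-backed distributed
# fused Adam optimizer and enable intra-node sharding of optimizer states.
optim_config = {
    "name": "distributed_fused_adam",
    "lr": 1e-4,
    "weight_decay": 0.01,
    "betas": [0.9, 0.95],
    # Assumed option name: shard Adam states across the ranks of one node and
    # replicate across nodes, reducing cross-node communication during step().
    "distribute_within_nodes": True,
}
```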

Jenkins CI

To run Jenkins, a NeMo User with write access must comment jenkins on the PR.

Before your PR is "Ready for review"

Pre checks:

  • Make sure you have read and followed the Contributor guidelines
  • Did you write any new necessary tests?
  • Did you add or update any necessary documentation?
  • Does the PR affect components that are optional to install? (Ex: Numba, Pynini, Apex etc)
    • Reviewer: Does the PR have correct import guards for all optional libraries?

PR Type:

  • New Feature
  • Bugfix
  • Documentation

If you haven't finished some of the above items you can still open a "Draft" PR.

Who can review?

Anyone in the community is free to review the PR once the checks have passed.
The Contributor guidelines contain specific people who can review PRs in various areas.

Additional Information

  • Related to # (issue)

@github-actions github-actions bot added the core Changes to NeMo Core label Apr 11, 2024
@alpha0422 alpha0422 changed the title from "Fix Distributed Fused Adam Issue with NHWC Layout" to "Fix Distributed Fused Adam Issues" on Apr 11, 2024
@alpha0422 alpha0422 marked this pull request as draft April 11, 2024 14:42
@alpha0422 alpha0422 marked this pull request as ready for review April 11, 2024 16:01
@timmoon10 (Collaborator) left a comment

Overall looks good, although we are still hashing out the design in NVIDIA/apex#1794. As discussed in NVIDIA/apex#1794 (comment), I think we should set MegatronDistributedFusedAdam._step_support_amp_scaling=False to signal that the NeMo grad scaler can accommodate the distributed optimizer (unlike the plain PyTorch grad scaler). As a bonus, this approach fixes the grad scaling issue even without needing to update Apex.
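
For context on this flag: PyTorch's GradScaler.step() checks the private optimizer attribute `_step_supports_amp_scaling` (canonical spelling); when it is falsy the scaler unscales gradients and performs the inf/NaN check itself before calling the optimizer's step(), and when it is truthy it defers that work to the optimizer. A minimal generic sketch of that behavior, not the NeMo/Apex code, assuming a CUDA device:

```python
import torch

# Minimal sketch of the flag's effect; not the NeMo/Apex implementation.
class SketchSGD(torch.optim.SGD):
    # Falsy (or absent): GradScaler.step() unscales grads and checks for
    # infs/NaNs itself before calling step(). Truthy: the scaler defers that
    # scaling-related work to the optimizer's own step().
    _step_supports_amp_scaling = False

model = torch.nn.Linear(8, 8).cuda()
opt = SketchSGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()

with torch.autocast("cuda"):
    loss = model(torch.randn(4, 8, device="cuda")).sum()
scaler.scale(loss).backward()
scaler.step(opt)   # takes the unscale-then-step path because the flag is falsy
scaler.update()
```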

@alpha0422 (Contributor, Author) replied, quoting the review:

> Overall looks good, although we are still hashing out the design in NVIDIA/apex#1794. As discussed in NVIDIA/apex#1794 (comment), I think we should set MegatronDistributedFusedAdam._step_support_amp_scaling=False to signal that the NeMo grad scaler can accommodate the distributed optimizer (unlike the plain PyTorch grad scaler). As a bonus, this approach fixes the grad scaling issue even without needing to update Apex.

You are probably right; to make sure it won't hurt performance, I need to confirm with our use cases. In any case, I think that relates to the changes in Apex, while the changes here have no relation to gradient clipping.

@ericharper (Collaborator) left a comment

LGTM. Thanks!

_step_support_amp_scaling=False will be considered in a future PR once perf is verified.

@ericharper ericharper dismissed timmoon10’s stale review April 12, 2024 15:13

Agreed offline it could be dismissed.

@ericharper (Collaborator) commented: jenkins

@ericharper (Collaborator) commented: jenkins

@ericharper ericharper merged commit 08ea4cb into NVIDIA:main Apr 12, 2024
12 of 124 checks passed
alxzhang-amazon pushed a commit to alxzhang-amazon/NeMo that referenced this pull request Apr 26, 2024
* Fix distributed fused adam issue with NHWC layout.

* Fix the CUDA graph issue if there's kernel in zero_grad.

* Add option to distribute adam states within node.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
suiyoubi pushed a commit that referenced this pull request May 2, 2024
* Fix distributed fused adam issue with NHWC layout.

* Fix the CUDA graph issue if there's kernel in zero_grad.

* Add option to distribute adam states within node.

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Eric Harper <complex451@gmail.com>
Signed-off-by: Ao Tang <aot@nvidia.com>