fix: skip non-tensor optimizer state entries in distrib_optimizer save/load #3537

Merged

janEbert merged 1 commit into NVIDIA:main from ahmadki:ahmadki/dist_optim_non_tensor_fix
Feb 24, 2026
Conversation

@ahmadki (Member) commented Feb 23, 2026

What does this PR do?

This fixes a TE compatibility issue with the precision-aware optimizer.

_get_main_param_and_optimizer_states (the save path) iterates over all keys in the optimizer state, including non-tensor entries such as found_inf: bool. The save succeeds because get_unscaled_state returns the bool as-is, but on load, _set_main_param_and_optimizer_states passes that same bool to set_scaled_state, which assumes it is a tensor.
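For illustration, here is a minimal sketch of the save-path guard, assuming a simplified view of the state iteration (the function and state names come from the description above, but the surrounding structure is hypothetical and not the actual Megatron-LM implementation):

```python
import torch

def collect_savable_state(param_state: dict) -> dict:
    """Hypothetical sketch: gather per-parameter optimizer state for
    saving, skipping non-tensor bookkeeping entries such as
    ``found_inf: bool``."""
    savable = {}
    for key, value in param_state.items():
        # get_unscaled_state returns a bool like ``found_inf`` as-is, but
        # set_scaled_state on the load path assumes a tensor, so non-tensor
        # entries must never enter the saved state dict.
        if not isinstance(value, torch.Tensor):
            continue
        savable[key] = value
    return savable
```

A matching guard on the load path (skipping non-tensor keys before calling set_scaled_state) would close the round-trip; the sketch only shows the save side.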

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure what's the best way to do so, contact the @mcore-oncall.

Contribution process

```mermaid
flowchart LR
    A[Pre-checks] --> B[PR Tests]
    subgraph Code Review/Approval
        C1[Expert Review] --> C2[Final Review]
    end
    B --> C1
    C2 --> D[Merge]
```

Pre-checks

  • I want this PR in a versioned release and have added the appropriate Milestone (e.g., Core 0.8)
  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

The following process is enforced via the CODEOWNERS file for changes into megatron/core. For changes outside of megatron/core, it is up to the PR author whether or not to tag the Final Reviewer team.

For MRs into `main` branch

Feel free to message @mcore-oncall or tag them in a comment to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

(Step 1): Add PR label Expert Review

(Step 2): Collect the expert reviewers reviews

  1. Attach the Expert Review label when your PR is ready for review.
  2. GitHub auto-assigns expert reviewers based on your changes. They will get notified and pick up your PR soon.

⚠️ Only proceed to the next step once all reviewers have approved, merge conflicts are resolved, and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

(Step 3): Final Review

  1. Add Final Review label
  2. GitHub auto-assigns final reviewers based on your changes. They will get notified and pick up your PR soon.

(Optional Step 4): Cherry-pick into release branch

If this PR also needs to be merged into core_r* release branches, after this PR has been merged, select Cherry-pick to open a new PR into the release branch.

For MRs into `dev` branch

The proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

Merging your PR

Any member of core-adlr and core-nemo will be able to merge your PR.

@ahmadki ahmadki requested review from a team as code owners February 23, 2026 08:12
@copy-pr-bot (bot) commented Feb 23, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team February 23, 2026 08:13
@janEbert (Contributor) commented:
This semantically includes some of the changes in #3521, but handles more error cases and adds tests. I'd thus merge this instead of #3521 for the distrib_optimizer changes, but keep the backward-compatibility tokenizer addition from #3521.

@ericharper ericharper added the Final Review label Feb 23, 2026
@ahmadki ahmadki force-pushed the ahmadki/dist_optim_non_tensor_fix branch from e0dd642 to 92aa58b on February 23, 2026 17:06
@ahmadki ahmadki force-pushed the ahmadki/dist_optim_non_tensor_fix branch from 92aa58b to 964204e on February 23, 2026 17:15
@janEbert janEbert enabled auto-merge February 23, 2026 17:21
@ahmadki ahmadki force-pushed the ahmadki/dist_optim_non_tensor_fix branch 2 times, most recently from 4792c5d to 6e1d6fa on February 23, 2026 17:30
@ahmadki (Member, Author) commented Feb 23, 2026

/ok to test 6e1d6fa

@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Feb 23, 2026
@ahmadki ahmadki force-pushed the ahmadki/dist_optim_non_tensor_fix branch from 6e1d6fa to 9a0a354 on February 23, 2026 17:49
@ahmadki (Member, Author) commented Feb 23, 2026

/ok to test 9a0a354

Commit: fix: skip non-tensor optimizer state entries in distrib_optimizer save/load

Signed-off-by: Ahmad Kiswani <kiswani.ahmad@gmail.com>
@ahmadki ahmadki force-pushed the ahmadki/dist_optim_non_tensor_fix branch from 9a0a354 to f2d903f on February 23, 2026 22:40
@ahmadki (Member, Author) commented Feb 23, 2026

/ok to test f2d903f

@janEbert janEbert added this pull request to the merge queue Feb 23, 2026
@svcnvidia-nemo-ci commented:
🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/22329594444

Merged via the queue into NVIDIA:main with commit 23dd639 Feb 24, 2026
47 of 48 checks passed

Labels

Final Review: Apply this label to indicate that your PR is ready for final review.

5 participants