Remove invalid timeout argument for dist.barrier#4512

Merged
maanug-nv merged 3 commits into NVIDIA:main from zhaoyinglia:fix_barrier
May 5, 2026

Conversation

@zhaoyinglia
Contributor

What does this PR do?

Remove invalid parameters for torch.distributed.barrier()
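The thread does not show the exact call sites being fixed, so as a minimal sketch of the failure mode, here is a local stand-in that mirrors `torch.distributed.barrier()`'s documented signature (`group`, `async_op`, `device_ids`) so the example runs without torch installed:

```python
# Stand-in mirroring torch.distributed.barrier()'s documented signature
# (group, async_op, device_ids) -- note there is no `timeout` parameter.
def barrier(group=None, async_op=False, device_ids=None):
    """Local stub; the real collective logic is omitted."""
    return None

# The kind of call this PR removes: an unexpected keyword raises TypeError.
try:
    barrier(timeout=60)
except TypeError as exc:
    print(f"rejected: {exc}")

barrier()  # the corrected call succeeds
```

Because Python rejects unknown keyword arguments at call time, any `barrier(timeout=...)` call fails immediately with a `TypeError` rather than applying a timeout.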

⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure what's the best way to do so, contact the @mcore-oncall.

Issue tracking

For PRs from open-source community contributors:

  • Small updates (bug fixes, minor improvements): a linked issue is recommended and will accelerate the PR review process.

Linked issue: Related to #4480

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code Typing guidelines
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message or comment the @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergeable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@zhaoyinglia zhaoyinglia requested review from a team as code owners April 29, 2026 02:53
@copy-pr-bot

copy-pr-bot Bot commented Apr 29, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft April 29, 2026 02:53
@github-actions
Contributor

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.

Contributor

@maanug-nv maanug-nv left a comment


lgtm, thanks for the fix

Contributor

@ko3n1g ko3n1g left a comment


We added this to ensure we don’t hang on failing tests for too long. Is this still guaranteed if we remove the timeout?

@maanug-nv
Contributor

We added this to ensure we don’t hang on failing tests for too long. Is this still guaranteed if we remove the timeout?

'timeout' is not an argument for torch.distributed.barrier(): https://docs.pytorch.org/docs/2.11/distributed.html#torch.distributed.barrier

Perhaps the barriers in unit tests should be switched to torch.distributed.monitored_barrier()?
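For reference, `torch.distributed.monitored_barrier()` does accept a `timeout` keyword, as a `datetime.timedelta`. A hedged sketch of the suggested alternative, using a local stub with that documented signature (`group`, `timeout`, `wait_all_ranks`) so it runs without torch; the 30-minute fallback is illustrative only:

```python
from datetime import timedelta

# Stand-in mirroring torch.distributed.monitored_barrier()'s documented
# signature (group, timeout, wait_all_ranks); real rank monitoring omitted.
def monitored_barrier(group=None, timeout=None, wait_all_ranks=False):
    """Local stub; returns the effective timeout for illustration."""
    return timeout if timeout is not None else timedelta(minutes=30)

# Unlike barrier(), a timeout keyword is valid here.
effective = monitored_barrier(timeout=timedelta(minutes=5))
print(effective)
```

Switching unit-test barriers to `monitored_barrier()` would preserve the intent of the removed argument, since a rank that fails to reach the barrier within the timeout raises an error instead of hanging.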

@svcnvidia-nemo-ci svcnvidia-nemo-ci added the waiting-on-customer Waiting on the original author to respond label May 1, 2026
@deepakn94 deepakn94 changed the title fix invalid parameters for dist.barrier Remove time.delta argument for dist.barrier May 1, 2026
@deepakn94 deepakn94 changed the title Remove time.delta argument for dist.barrier Remove invalid timeout argument for dist.barrier May 1, 2026
@ko3n1g
Contributor

ko3n1g commented May 1, 2026

/ok to test

@copy-pr-bot

copy-pr-bot Bot commented May 1, 2026

/ok to test

@ko3n1g, there was an error processing your request: E1

See the following link for more information: https://docs.gha-runners.nvidia.com/cpr/e/1/

@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Approved All necessary approvals have been made label May 1, 2026
@maanug-nv
Contributor

/ok to test bac00ff

@asolergi-nv
Contributor

/ok to test e8d6619

@svcnvidia-nemo-ci svcnvidia-nemo-ci added the waiting-on-maintainers Waiting on maintainers to respond label May 3, 2026
@svcnvidia-nemo-ci svcnvidia-nemo-ci removed the waiting-on-customer Waiting on the original author to respond label May 3, 2026
@maanug-nv
Contributor

/ok to test 552bad8

@maanug-nv maanug-nv added this pull request to the merge queue May 5, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/25376362717

Merged via the queue into NVIDIA:main with commit 0b2b572 May 5, 2026
62 checks passed
@svcnvidia-nemo-ci svcnvidia-nemo-ci removed the waiting-on-maintainers Waiting on maintainers to respond label May 6, 2026

Labels

  • Approved (All necessary approvals have been made)
  • community-request
  • complexity: low

Projects

None yet

Development

Successfully merging this pull request may close these issues.

there is not keyword argument 'timeout' in torch.distributed.barrier()

6 participants