
FA4 Inference #4186

Merged
wdykas merged 23 commits into NVIDIA:main from wdykas:fa4-inference
Apr 18, 2026

Conversation

@wdykas
Contributor

@wdykas wdykas commented Apr 7, 2026

What does this PR do ?

Adding FA4 for inference
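The description gives no implementation details, so as a purely illustrative sketch (none of these names are taken from this PR or from the flash-attn package), a backend-selection guard for a new attention kernel generation might look like this, preferring the newest kernel that probed as available at startup:

```python
# Hypothetical sketch of attention-backend selection; the backend names
# and the `select_attention_backend` helper are illustrative only and
# do not reflect the actual code added in this PR.

def select_attention_backend(available: dict) -> str:
    """Pick the newest available flash-attention backend.

    `available` maps backend names ("fa4", "fa3", "fa2") to booleans,
    e.g. probed via importlib checks at startup.
    """
    # Prefer newer kernel generations; fall back to an unfused path.
    for backend in ("fa4", "fa3", "fa2"):
        if available.get(backend):
            return backend
    return "unfused"
```

Under this (assumed) scheme, adding FA4 support amounts to probing one more backend and placing it first in the preference order, so existing FA2/FA3 users are unaffected.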

⚠️ For major changes (either in lines of code or in impact), please first share a design doc with the team. If you're unsure of the best way to do so, contact @mcore-oncall.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see the Typing guidelines)
  • I have added relevant documentation
  • I have run the autoformatter.sh on my PR

Code review

Feel free to message @mcore-oncall or comment on your PR to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark your PR as ready once merge conflicts are resolved and CI is passing.
The final review may be declined if these requirements are not met.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch, the proposed review process is under active discussion.

MRs are mergeable after one approval from either eharper@nvidia.com or zijiey@nvidia.com.

@copy-pr-bot

copy-pr-bot Bot commented Apr 7, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@wdykas
Contributor Author

wdykas commented Apr 7, 2026

/ok to test db6fe05

@svcnvidia-nemo-ci svcnvidia-nemo-ci added this to the Core 0.16 milestone Apr 7, 2026
@wdykas
Contributor Author

wdykas commented Apr 7, 2026

/ok to test ba8f399

@wdykas
Contributor Author

wdykas commented Apr 7, 2026

/ok to test 481be87

@wdykas
Contributor Author

wdykas commented Apr 7, 2026

/ok to test 481be87

@wdykas wdykas removed the Run tests label Apr 7, 2026
@wdykas
Contributor Author

wdykas commented Apr 7, 2026

/ok to test 439e670

@wdykas wdykas marked this pull request as ready for review April 8, 2026 14:58
@wdykas wdykas requested review from a team as code owners April 8, 2026 14:58
@wdykas
Contributor Author

wdykas commented Apr 8, 2026

/ok to test bab7ba2

@svcnvidia-nemo-ci svcnvidia-nemo-ci requested a review from a team April 8, 2026 14:59
@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Final Review PR is in the "final review" stage label Apr 8, 2026
Comment thread on megatron/core/transformer/attention.py
@wdykas
Contributor Author

wdykas commented Apr 17, 2026

/ok to test 66cb837

@wdykas
Contributor Author

wdykas commented Apr 17, 2026

/ok to test c68d142

@wdykas wdykas added this pull request to the merge queue Apr 17, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/24578808058

@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/24581972526

@github-merge-queue github-merge-queue Bot removed this pull request from the merge queue due to failed status checks Apr 17, 2026
@wdykas wdykas added this pull request to the merge queue Apr 17, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/24585060368

@github-merge-queue github-merge-queue Bot removed this pull request from the merge queue due to failed status checks Apr 17, 2026
@wdykas wdykas enabled auto-merge April 17, 2026 22:18
@wdykas
Contributor Author

wdykas commented Apr 17, 2026

/ok to test 2bfbabe

@bbuschkaemper bbuschkaemper mentioned this pull request Apr 18, 2026
5 tasks
@wdykas
Contributor Author

wdykas commented Apr 18, 2026

/ok to test ff8a941

@wdykas
Contributor Author

wdykas commented Apr 18, 2026

/ok to test ff8a941

@wdykas wdykas added this pull request to the merge queue Apr 18, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/24613340876

Merged via the queue into NVIDIA:main with commit 76ac7c2 Apr 18, 2026
183 of 184 checks passed
@wdykas wdykas deleted the fa4-inference branch April 18, 2026 21:08
Victarry added a commit to yanring/Megatron-LM that referenced this pull request Apr 20, 2026
* origin/main: (286 commits)
  Rename MambaModel/MambaStack to HybridModel/HybridStack (NVIDIA#4099)
  Fix Megatron initialization with extra_args_provider (NVIDIA#4327)
  Fix RL to once again work with --skip-train (NVIDIA#4249)
  Add activation logging and tokens per expert logging (NVIDIA#3842)
  Make param_index_map always use unpacked (full numel) offsets (NVIDIA#4328)
  FA4 Inference (NVIDIA#4186)
  Fix RL reward due to stop token (NVIDIA#4096)
  cp: Fix UT timeout (NVIDIA#4310) (NVIDIA#4373)
  feat(ckpt): add --async-ckpt-use-cpu-shm argument (NVIDIA#4355)
  Update copy-pr-bot.yaml [skip ci]
  Docs: improve docstrings and comments in example training loop (NVIDIA#4041)
  Add QK layernorm support for dot-product attention in MambaModel (NVIDIA#4067)
  Fix bug with non-partial rollouts (NVIDIA#3964)
  [docs] ci: use parent-relative json_url for version picker (NVIDIA#4367)
  Add tables and histogram for RL staleness (NVIDIA#4097)
  Port DeepSeek Sparse Attention to `MambaModel` (NVIDIA#3553)
  docs: bump versions1.json to 0.17.0 (latest) (NVIDIA#4360)
  Fix potential coredump issue that occurs when saving a checkpoint (NVIDIA#1871)
  ci(gb200): add 1-node mr-github functional test variants (NVIDIA#4334)
  fix: wait for async P2P send before deallocating output tensor (NVIDIA#4047)
  ...

# Conflicts:
#	megatron/core/transformer/cuda_graphs.py

Labels

Approved (All necessary approvals have been made), complexity: medium, Run functional tests

Projects

None yet

Development

Successfully merging this pull request may close these issues.

6 participants