
Mamba inference opt#4414

Merged
wdykas merged 5 commits into NVIDIA:main from wdykas:mamba-inference-opt
Apr 22, 2026

Conversation

@wdykas
Contributor

@wdykas wdykas commented Apr 21, 2026

What does this PR do ?

Inference optimizations for nano

Cached `-exp(A_log.float())` in the Mamba decode path, so the three elementwise kernels (float cast, exp, negate) that previously ran per-layer per-token are now computed once per inference session. Also moved the `batch_indices` int64 upcast out of the per-layer forward and into the `MambaMetadata` buffer allocation, eliminating a fourth small kernel from the hot path.
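The idea can be sketched as follows (a minimal NumPy illustration with hypothetical names, not the actual Megatron-LM code, which uses PyTorch tensors in the Mamba decode path):

```python
import numpy as np

# Baseline: these elementwise ops (cast to float32, exp, negate) ran
# per-layer per-token in the decode hot path.
def decode_step_baseline(A_log, dt):
    minus_A = -np.exp(A_log.astype(np.float32))  # recomputed every token
    return np.exp(dt * minus_A)

# Optimized: compute -exp(A_log) once per inference session and reuse the
# cached array on every decode step. (Analogously, the batch_indices int64
# upcast moves into the metadata buffer allocation, off the hot path.)
class MambaDecodeCache:
    def __init__(self, A_log):
        self.minus_A = -np.exp(A_log.astype(np.float32))

def decode_step_cached(cache, dt):
    return np.exp(dt * cache.minus_A)
```

Since `A_log` is a fixed parameter during inference, both paths produce identical results; the cached version just avoids relaunching the small kernels every token.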

Edit: we have added more results here:

| Config | Run 1 | Run 2 | Run 3 | Mean | vs baseline | vs previous |
| --- | --- | --- | --- | --- | --- | --- |
| Baseline | 115.04 | 114.76 | 115.64 | 115.15 | | |
| Opt1 (cached A + int64 buf) | 116.64 | 116.94 | 116.56 | 116.71 | +1.35% | +1.35% |
| Opt2 (+ pre-expanded A) | 117.63 | 118.47 | 118.47 | 118.19 | +2.64% | +1.27% |

MoE optimization not part of this graph
⚠️ For major changes (either in lines of code or in its impact), please make sure to first share a design doc with the team. If you're unsure what's the best way to do so, contact the @mcore-oncall.

Contribution process

Pre-checks

  • I have added relevant unit tests
  • I have added relevant functional tests
  • I have added proper typing to my code (see Typing guidelines)
  • I have added relevant documentation
  • I have run `autoformatter.sh` on my PR

Code review

Feel free to message or comment the @mcore-oncall to help accelerate your merge into main. The less complex your PR is, the faster it will be approved and merged!

All PRs start as draft. If you open a non-draft PR, it will be automatically converted to draft.

Step 1: Mark PR as "Ready for Review"

  1. When your PR is ready, click Ready for Review.
  2. An oncall reviewer is auto-assigned and expert reviewers are notified based on your changes.
    • Some PRs may jump straight to step 2. This is determined by .github/CODEOWNERS.

⚠️ Only mark as ready once merge-conflicts are resolved and the CI is passing.
Final Review might get declined if these requirements are not fulfilled.

Step 2: Final Review

For PRs that change megatron/core, once all expert reviewers have approved, the Final Review label is applied automatically and final reviewers are assigned.

For PRs outside megatron/core, this step is skipped.

Step 3: Approved

Once all required reviewers have approved, the Approved label is applied automatically.

Merge

Any member of mcore-engineers will be able to merge your PR.

For MRs into the `dev` branch: the proposed review process for the `dev` branch is under active discussion.

MRs are mergable after one approval by either eharper@nvidia.com or zijiey@nvidia.com.

@wdykas wdykas requested review from a team as code owners April 21, 2026 21:02
@copy-pr-bot

copy-pr-bot Bot commented Apr 21, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@svcnvidia-nemo-ci svcnvidia-nemo-ci marked this pull request as draft April 21, 2026 21:03
@github-actions
Contributor

This PR has been automatically converted to draft because all PRs must start as drafts.

When you are ready for review, click Ready for Review to begin the review process. This will:

  1. Add the oncall reviewer (optional reviewer)
  2. Add required review teams based on your changes

See the contribution guide for more details.

@wdykas wdykas marked this pull request as ready for review April 21, 2026 21:03
@wdykas
Contributor Author

wdykas commented Apr 21, 2026

/ok to test a0dda2c

Contributor

@santhnm2 santhnm2 left a comment


LGTM

@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Final Review PR is in the "final review" stage label Apr 21, 2026
@wdykas wdykas requested review from a team as code owners April 21, 2026 23:43
@svcnvidia-nemo-ci svcnvidia-nemo-ci removed the Final Review PR is in the "final review" stage label Apr 21, 2026
Comment thread megatron/core/transformer/moe/moe_utils.py Outdated
@wdykas
Contributor Author

wdykas commented Apr 22, 2026

/ok to test b67fd23

@svcnvidia-nemo-ci svcnvidia-nemo-ci added the Final Review PR is in the "final review" stage label Apr 22, 2026
@wdykas
Contributor Author

wdykas commented Apr 22, 2026

/ok to test 8d27f1d

@svcnvidia-nemo-ci svcnvidia-nemo-ci added Approved All necessary approvals have been made and removed Final Review PR is in the "final review" stage labels Apr 22, 2026
@wdykas wdykas enabled auto-merge April 22, 2026 17:51
@wdykas wdykas added this pull request to the merge queue Apr 22, 2026
@svcnvidia-nemo-ci

🔄 Merge queue validation started!

You can track the progress here: https://github.com/NVIDIA/Megatron-LM/actions/runs/24798603343

Merged via the queue into NVIDIA:main with commit 40627d0 Apr 22, 2026
115 of 118 checks passed
@wdykas wdykas deleted the mamba-inference-opt branch April 22, 2026 22:27

Labels

Approved (All necessary approvals have been made), complexity: low, Run tests


7 participants