[https://nvbugs/5732958][bug] Fix TestLlama4MinLatency::test_llama_allclose_to_hf failure #10191

Merged
nvpohanh merged 1 commit into NVIDIA:main from nvpohanh:dev-pohanh-llama4-min-latency-test-failure
Mar 9, 2026

Conversation

@nvpohanh
Collaborator

@nvpohanh nvpohanh commented Dec 22, 2025

Description

This test was fixed in #7478 but was broken again by #7993, because the latter PR moved the next_layer_layernorm setup logic from load_weights() to post_load_weights() without updating the test definition. The failure remained hidden because we had not yet upgraded the transformers version.

To fix this, call post_load_weights() after load_weights() in the test definition (post_load_weights() is invoked automatically when using pyexecutor). Also, handle the case where post_load_weights() is never called.
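
A minimal sketch of the corrected test flow, with illustrative names rather than the exact test code:

```python
# Build the TRT-LLM model and load weights converted from the HF checkpoint.
llama = LlamaForCausalLM(model_config)
llama.load_weights(hf_llama.state_dict())
# Required when not going through pyexecutor: wires up next_layer_layernorm
# on each decoder layer so the fused-layernorm forward path works.
llama.post_load_weights()
```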

By Claude Code:

Make Llama/Llama4 forward pass work correctly both with and without
post_load_weights() being called, by making the layernorm fusion
gracefully degrade:

- In post_load_weights(), when moving layernorms between layers, set
  the source layernorm to None to indicate it has been absorbed.
- In DecoderLayer.forward(), if next_layer_layernorm is None (i.e.
  post_load_weights was not called), fall back to simple residual add
  instead of raising an error.
- In DecoderLayer.forward(), if input_layernorm is still present (not
  absorbed by previous layer), apply it normally.
- In Model.forward(), guard self.norm call since it may be None after
  being moved to the last decoder layer.
- Remove the transformers>=4.57.1 skip in the test, since the root
  cause (missing post_load_weights) is now fixed.
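
A minimal sketch of this graceful degradation, with illustrative module and attribute names rather than the exact TRT-LLM implementation:

```python
import torch
from torch import nn


class DecoderLayer(nn.Module):
    """Decoder layer whose input layernorm may be absorbed by the previous layer."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.input_layernorm = nn.LayerNorm(hidden_size)
        # Set by post_load_weights(); stays None if it is never called.
        self.next_layer_layernorm = None

    def forward(self, hidden_states: torch.Tensor,
                residual: torch.Tensor) -> torch.Tensor:
        if self.input_layernorm is not None:
            # Not absorbed by the previous layer: apply it normally.
            hidden_states = self.input_layernorm(hidden_states)
        # ... attention / MLP would run here ...
        if self.next_layer_layernorm is not None:
            # Fused path: apply the next layer's input layernorm after the add.
            return self.next_layer_layernorm(hidden_states + residual)
        # post_load_weights() was never called: plain residual add.
        return hidden_states + residual


def post_load_weights(model):
    # Move each layer's input layernorm into the previous layer, marking the
    # source as absorbed (None); the final model norm moves into the last layer.
    for prev, cur in zip(model.layers[:-1], model.layers[1:]):
        prev.next_layer_layernorm = cur.input_layernorm
        cur.input_layernorm = None
    model.layers[-1].next_layer_layernorm = model.norm
    model.norm = None
```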

Summary by CodeRabbit

  • Bug Fixes

    • Added explicit error messages when post_load_weights() is not called after load_weights(), improving failure clarity.
    • Removed the outdated transformers version skip in tests.
  • Tests

    • Updated test workflows to include required post-load weights initialization after weight loading.


Test Coverage

TestLlama4MinLatency::test_llama_allclose_to_hf

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update the tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provides a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. This ensures that all builds and tests run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages that don't match the specified backends. Only supports [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with the tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purposes. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.
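
For example, a typical invocation that restricts the run to a single stage while disabling fail-fast (stage name illustrative):

/bot run --disable-fail-fast --stage-list "A10-PyTorch-1"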

kill

kill

Kill all running builds associated with the pull request.

skip

skip --comment COMMENT

Skip testing for the latest commit on the pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can break the top of tree.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate the current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can break the top of tree.

@nvpohanh nvpohanh requested review from a team as code owners December 22, 2025 06:20
@coderabbitai
Contributor

coderabbitai bot commented Dec 22, 2025

📝 Walkthrough

Walkthrough

Adds explicit RuntimeError handling to the Llama model when layernorm chaining is missing during forward passes, and introduces required post_load_weights() calls in test files to ensure proper weight initialization after model loading. Removes a transformers version-based skip condition.

Changes

  • Model error handling (tensorrt_llm/_torch/models/modeling_llama.py): Adds explicit RuntimeError exceptions in two code paths when next_layer_layernorm is unexpectedly None, directing users to call model.post_load_weights() after model.load_weights().
  • Test weight initialization (tests/unittest/_torch/modeling/test_modeling_llama.py, tests/unittest/_torch/modeling/test_modeling_llama_min_latency.py): Adds a post_load_weights() invocation after loading HF weights in test paths; removes the transformers version-based skip condition (4.55.0–4.56.1).

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10–15 minutes

  • Verify RuntimeError conditions and messages correctly identify missing layernorm chaining scenarios
  • Confirm post_load_weights() calls are placed at the correct initialization points in all test paths
  • Validate that the removed transformers version skip (4.55.0–4.56.1) is no longer needed
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 50.00%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve coverage.

✅ Passed checks (2 passed)

  • Title check: ✅ Passed. The PR title clearly and specifically describes the bug being fixed, the test failure in TestLlama4MinLatency::test_llama_allclose_to_hf, which is the main change.
  • Description check: ✅ Passed. The PR description clearly explains the root cause (PR #7993 moved layernorm logic without updating tests), the solution (add post_load_weights() calls and error handling), provides test coverage, and includes a Claude Code summary with implementation details.


@nvpohanh
Collaborator Author

@coderabbitai generate docstrings

@coderabbitai
Contributor

coderabbitai bot commented Dec 22, 2025

✅ Actions performed

Initiated docstring generation; will generate only if new commits exist.

@coderabbitai
Contributor

coderabbitai bot commented Dec 22, 2025

Caution

Docstrings generation - FAILED

An unexpected error occurred while opening a pull request: Reference update failed - https://docs.github.com/rest/git/refs#create-a-reference

@nvpohanh nvpohanh force-pushed the dev-pohanh-llama4-min-latency-test-failure branch 2 times, most recently from e2627b7 to 67c34be on December 23, 2025 08:40
@nvpohanh
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #29574 [ run ] triggered by Bot. Commit: 67c34be

@tensorrt-cicd
Collaborator

PR_Github #29574 [ run ] completed with state SUCCESS. Commit: 67c34be
/LLM/main/L0_MergeRequest_PR pipeline #22742 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Collaborator

@yechank-nvidia yechank-nvidia left a comment

LGTM. Thx for the fix!

@nvpohanh
Collaborator Author

My current PR does not work for the test_llama_sanity tests because those tests do not even load weights! I will need to modify my PR so that Llama/Llama4 works even when load_weights()/post_load_weights() are never called, instead of raising an error telling users to call post_load_weights().
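
Concretely, the model-level part of that change amounts to a guard like the following sketch (a hypothetical standalone helper; the real change is inline in Model.forward()):

```python
import torch
from torch import nn


def apply_final_norm(model: nn.Module, hidden_states: torch.Tensor) -> torch.Tensor:
    # model.norm is None once post_load_weights() has moved it into the last
    # decoder layer; if post_load_weights() never ran, it is still present
    # and must be applied here.
    if model.norm is not None:
        return model.norm(hidden_states)
    return hidden_states
```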

@nvpohanh nvpohanh force-pushed the dev-pohanh-llama4-min-latency-test-failure branch 3 times, most recently from 8b9634a to cc6c98d on March 2, 2026 08:12
@nvpohanh
Collaborator Author

nvpohanh commented Mar 2, 2026

/bot run

@tensorrt-cicd
Collaborator

PR_Github #37305 [ run ] triggered by Bot. Commit: cc6c98d Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #37305 [ run ] completed with state ABORTED. Commit: cc6c98d
/LLM/main/L0_MergeRequest_PR pipeline #28871 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@nvpohanh
Collaborator Author

nvpohanh commented Mar 2, 2026

/bot run

@tensorrt-cicd
Collaborator

PR_Github #37332 [ run ] triggered by Bot. Commit: cc6c98d Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #37332 [ run ] completed with state SUCCESS. Commit: cc6c98d
/LLM/main/L0_MergeRequest_PR pipeline #28893 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@nvpohanh nvpohanh force-pushed the dev-pohanh-llama4-min-latency-test-failure branch from cc6c98d to 3160459 on March 3, 2026 06:38
@nvpohanh
Collaborator Author

nvpohanh commented Mar 3, 2026

/bot run

1 similar comment
@nvpohanh
Collaborator Author

nvpohanh commented Mar 3, 2026

/bot run

@tensorrt-cicd
Collaborator

PR_Github #37479 [ run ] triggered by Bot. Commit: 3160459 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #37479 [ run ] completed with state FAILURE. Commit: 3160459

Link to invocation

@nvpohanh
Collaborator Author

nvpohanh commented Mar 3, 2026

/bot run

@tensorrt-cicd
Collaborator

PR_Github #37510 [ run ] completed with state SUCCESS. Commit: 3160459
/LLM/main/L0_MergeRequest_PR pipeline #29021 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@nvpohanh
Collaborator Author

nvpohanh commented Mar 4, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #37599 [ run ] triggered by Bot. Commit: 3160459 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #37599 [ run ] completed with state SUCCESS. Commit: 3160459
/LLM/main/L0_MergeRequest_PR pipeline #29095 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@nvpohanh
Collaborator Author

nvpohanh commented Mar 4, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #37618 [ run ] triggered by Bot. Commit: 3160459 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #37618 [ run ] completed with state SUCCESS. Commit: 3160459
/LLM/main/L0_MergeRequest_PR pipeline #29109 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

…lclose_to_hf failure

Make Llama/Llama4 forward pass work correctly both with and without
post_load_weights() being called, by making the layernorm fusion
gracefully degrade:

- In post_load_weights(), when moving layernorms between layers, set
  the source layernorm to None to indicate it has been absorbed.
- In DecoderLayer.forward(), if next_layer_layernorm is None (i.e.
  post_load_weights was not called), fall back to simple residual add
  instead of raising an error.
- In DecoderLayer.forward(), if input_layernorm is still present (not
  absorbed by previous layer), apply it normally.
- In Model.forward(), guard self.norm call since it may be None after
  being moved to the last decoder layer.
- Remove the transformers>=4.57.1 skip in the test, since the root
  cause (missing post_load_weights) is now fixed.

Signed-off-by: Po-Han Huang <pohanh@nvidia.com>
@nvpohanh nvpohanh force-pushed the dev-pohanh-llama4-min-latency-test-failure branch from 3160459 to 2a3c372 on March 5, 2026 15:18
@nvpohanh
Collaborator Author

nvpohanh commented Mar 5, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #37880 [ run ] triggered by Bot. Commit: 2a3c372 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #37880 [ run ] completed with state SUCCESS. Commit: 2a3c372
/LLM/main/L0_MergeRequest_PR pipeline #29330 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@nvpohanh
Collaborator Author

nvpohanh commented Mar 6, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #37939 [ run ] triggered by Bot. Commit: 2a3c372 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #37939 [ run ] completed with state SUCCESS. Commit: 2a3c372
/LLM/main/L0_MergeRequest_PR pipeline #29383 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@nvpohanh
Collaborator Author

nvpohanh commented Mar 6, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #38036 [ run ] triggered by Bot. Commit: 2a3c372 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #38036 [ run ] completed with state SUCCESS. Commit: 2a3c372
/LLM/main/L0_MergeRequest_PR pipeline #29464 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@nvpohanh
Collaborator Author

nvpohanh commented Mar 7, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #38104 [ run ] triggered by Bot. Commit: 2a3c372 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #38104 [ run ] completed with state DISABLED
CI server is currently disabled for scheduled maintenance. Estimated completion time: 8 PM PST on 3/7.

Link to invocation

@nvpohanh
Collaborator Author

nvpohanh commented Mar 7, 2026

Test comment from Claude Code - please ignore.

@nvpohanh
Collaborator Author

nvpohanh commented Mar 8, 2026

/bot run --disable-fail-fast

2 similar comments
@nvpohanh
Collaborator Author

nvpohanh commented Mar 8, 2026

/bot run --disable-fail-fast

@nvpohanh
Collaborator Author

nvpohanh commented Mar 8, 2026

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #38121 [ run ] triggered by Bot. Commit: 2a3c372 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #38122 [ run ] triggered by Bot. Commit: 2a3c372 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #38121 [ run ] completed with state ABORTED. Commit: 2a3c372

Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #38122 [ run ] completed with state SUCCESS. Commit: 2a3c372
/LLM/main/L0_MergeRequest_PR pipeline #29532 completed with status: 'SUCCESS'

Link to invocation

@nvpohanh nvpohanh merged commit 4c15db0 into NVIDIA:main Mar 9, 2026
5 checks passed