Conversation

Collaborator

@yechank-nvidia yechank-nvidia commented Sep 29, 2025

Summary by CodeRabbit

  • New Features

    • Enhanced multimodal context handling for models using MROPE, improving readiness for multimodal inputs in supported configurations.
  • Bug Fixes

    • Increased stability when initializing requests with MROPE enabled, reducing errors during context construction and improving reliability for affected workloads.

@yechank-nvidia yechank-nvidia requested a review from a team as a code owner September 29, 2025 01:44
@yechank-nvidia
Collaborator Author

/bot run

@yechank-nvidia yechank-nvidia self-assigned this Sep 29, 2025
Contributor

coderabbitai bot commented Sep 29, 2025

📝 Walkthrough

Adds a conditional branch in dummy context request creation to attach a multimodal payload when MROPE is enabled. The payload sets mrope_config with zeroed int32 position IDs (3×1×input_seq_len) and position deltas (1×1). This occurs before the request is appended.

Changes

Cohort / File(s): MROPE payload in dummy requests (tensorrt_llm/_torch/pyexecutor/_util.py)
Summary: In _create_dummy_context_requests, when model_engine.use_mrope is true, a multimodal payload is constructed and assigned to request.py_multimodal_data with mrope_config (mrope_position_ids: 3×1×input_seq_len int32 zeros; mrope_position_deltas: 1×1 int32 zeros) before the request is appended.
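The payload described above can be sketched as follows. This is an illustrative reconstruction from the summary, not the code from _util.py: the helper name attach_mrope_dummy_payload and the SimpleNamespace stand-in for the request object are hypothetical, and NumPy arrays are used here to keep the sketch self-contained where the real implementation presumably uses torch tensors of the same shapes and dtype.

```python
import types

import numpy as np


def attach_mrope_dummy_payload(request, input_seq_len):
    """Attach a zeroed MROPE payload to a dummy context request (sketch)."""
    request.py_multimodal_data = {
        "mrope_config": {
            # Position IDs: shape 3 x 1 x input_seq_len, int32, all zeros.
            "mrope_position_ids": np.zeros((3, 1, input_seq_len),
                                           dtype=np.int32),
            # Position deltas: shape 1 x 1, int32, all zeros.
            "mrope_position_deltas": np.zeros((1, 1), dtype=np.int32),
        }
    }


# Stand-in for a dummy context request object.
request = types.SimpleNamespace()
attach_mrope_dummy_payload(request, input_seq_len=8)
cfg = request.py_multimodal_data["mrope_config"]
print(cfg["mrope_position_ids"].shape)     # (3, 1, 8)
print(cfg["mrope_position_deltas"].shape)  # (1, 1)
```

Zero-filled tensors suffice here because dummy context requests are used only for warmup/capture (e.g., CUDA graph shape priming), not for producing real multimodal outputs.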

Sequence Diagram(s)

sequenceDiagram
    actor Caller
    participant Util as _create_dummy_context_requests
    participant Engine as model_engine
    participant Req as request

    Caller->>Util: create dummy context requests(input_seq_len)
    Util->>Engine: check use_mrope
    alt MROPE enabled
        Util->>Util: build multimodal payload\nmrope_config:\n- position_ids: 3×1×L int32 zeros\n- position_deltas: 1×1 int32 zeros
        Util->>Req: set py_multimodal_data(payload)
        Util->>Util: append Req to list
    else
        Util->>Util: append Req to list (no multimodal payload)
    end
    Util-->>Caller: return list of requests

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Pre-merge checks and finishing touches

❌ Failed checks (3 warnings)

  • Title Check ⚠️ Warning: The title "[None][fix] Fix CUDA graph for Qwen2.5-VL" suggests the pull request addresses CUDA graph issues for the Qwen2.5-VL model, but the actual changes modify the dummy context request utility to attach a multimodal payload for MROPE configuration, so the title does not accurately summarize the main change. Resolution: Update the pull request title to reflect the actual change, for example by describing the addition of MROPE multimodal payload handling in the _create_dummy_context_requests function.
  • Description Check ⚠️ Warning: No description was provided, leaving all required sections from the repository template (summary of the issue, details of the solution, test coverage, and the PR checklist) empty, so reviewers cannot understand the purpose of the changes or verify that they are adequately tested. Resolution: Complete the pull request description by following the provided template, adding a concise summary, a detailed explanation of the change, test coverage information, and confirmation of checklist items.
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: Run @coderabbitai generate docstrings to improve docstring coverage.




Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 28b9a81 and eeba612.

📒 Files selected for processing (1)
  • tensorrt_llm/_torch/pyexecutor/_util.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
**/*.{h,hpp,hh,hxx,cpp,cxx,cc,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Use only spaces, no tabs; indent with 4 spaces.

Files:

  • tensorrt_llm/_torch/pyexecutor/_util.py
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Python code must target Python 3.8+.
Indent Python code with 4 spaces; do not use tabs.
Maintain module namespace when importing; prefer 'from package.subpackage import foo' then 'foo.SomeClass()' instead of importing the class directly.
Python filenames should be snake_case (e.g., some_file.py).
Python classes use PascalCase names.
Functions and methods use snake_case names.
Local variables use snake_case; prefix 'k' for variables that start with a number (e.g., k_99th_percentile).
Global variables use upper SNAKE_CASE prefixed with 'G' (e.g., G_MY_GLOBAL).
Constants use upper SNAKE_CASE (e.g., MY_CONSTANT).
Avoid shadowing variables from an outer scope.
Initialize all externally visible members of a class in the constructor.
Prefer docstrings for interfaces that may be used outside a file; comments for in-function or file-local interfaces.
Use Google-style docstrings for classes and functions (Sphinx-parsable).
Document attributes and variables inline so they render under the class/function docstring.
Avoid reflection when a simpler, explicit approach suffices (e.g., avoid dict(**locals()) patterns).
In try/except, catch the most specific exceptions possible.
For duck-typing try/except, keep the try body minimal and use else for the main logic.

Files:

  • tensorrt_llm/_torch/pyexecutor/_util.py
**/*.{cpp,cxx,cc,h,hpp,hh,hxx,cu,cuh,py}

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

Prepend the NVIDIA Apache-2.0 copyright header with current year to the top of all source files (e.g., .cpp, .h, .cu, .py).

Files:

  • tensorrt_llm/_torch/pyexecutor/_util.py
🧠 Learnings (1)
📚 Learning: 2025-08-19T12:45:11.997Z
Learnt from: amitz-nv
PR: NVIDIA/TensorRT-LLM#7033
File: tensorrt_llm/_torch/pyexecutor/model_engine.py:0-0
Timestamp: 2025-08-19T12:45:11.997Z
Learning: In tensorrt_llm/_torch/pyexecutor/model_engine.py, DoRA (Delta Orthogonal Rank Adaptation) functionality was removed from the PyTorch flow to eliminate issues with inverted DoRA detection logic. The original is_dora condition was checking if scaling_vec_pointer == 0, which was potentially incorrect.

Applied to files:

  • tensorrt_llm/_torch/pyexecutor/_util.py
🧬 Code graph analysis (1)
tensorrt_llm/_torch/pyexecutor/_util.py (1)
tensorrt_llm/_torch/pyexecutor/model_engine.py (1)
  • use_mrope (500-508)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Pre-commit Check

@tensorrt-cicd
Collaborator

PR_Github #20194 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20194 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15230 completed with status: 'FAILURE'

Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
@yechank-nvidia
Collaborator Author

/bot run

@tensorrt-cicd
Collaborator

PR_Github #20239 [ run ] triggered by Bot

@tensorrt-cicd
Collaborator

PR_Github #20239 [ run ] completed with state SUCCESS
/LLM/main/L0_MergeRequest_PR pipeline #15261 completed with status: 'SUCCESS'

@byshiue byshiue merged commit 948b8b9 into NVIDIA:main Sep 30, 2025
5 checks passed
faradawn pushed a commit to faradawn/TensorRT-LLM that referenced this pull request Oct 2, 2025
Signed-off-by: yechank <161688079+yechank-nvidia@users.noreply.github.com>
Signed-off-by: Faradawn Yang <faradawny@gmail.com>
