
[None][feat] llmc: standalone package improvements and enforce import discipline #13466

Merged
lucaslie merged 7 commits into NVIDIA:main from nv-auto-deploy:ll/llmc_rename on May 1, 2026

Conversation

@lucaslie
Member

@lucaslie lucaslie commented Apr 25, 2026

Summary

  • Renames the standalone AutoDeploy distribution to llmc (PyPI: nvidia-llmc); the AutoDeploy source tree itself stays put. The renaming happens at standalone-package generation time only.
  • Adds a pre-commit hook (auto-deploy-import-discipline) that enforces relative-only imports inside tensorrt_llm/_torch/auto_deploy/, so the source tree can be copied verbatim into the standalone repo without rewriting in-package imports.
  • Splits the example READMEs/CONTRIBUTING for in-tree (TRT-LLM) vs standalone (llmc) usage; relocates create_standalone_package.py to examples/auto_deploy/llmc/.
  • Makes TRT-LLM the source of truth for the standalone repo's OSS compliance metadata: CODE_OF_CONDUCT.md, SECURITY.md, ATTRIBUTIONS-Python.md, .editorconfig, and a .github/ tree (issue/PR templates) are now copied into the standalone package on every regen.

Notable changes

  • Lint hook + script: scripts/check_auto_deploy_imports.py (AST-based). Two rules: (A) imports resolving inside tensorrt_llm._torch.auto_deploy must be relative; (B) relative imports must not escape the package (use absolute tensorrt_llm.X for that).
  • Source fixes for the hook to pass — 7 absolute self-imports in models/custom/modeling_*_ir.py and transform/library/moe_routing.py converted to relative; 9 escaping relative imports in llm.py, llm_args.py, shim/{ad_executor,demollm,interface}.py, models/eagle.py, models/custom/modeling_eagle.py, several custom-ops files, and utils/quantization_utils.py flipped to absolute from tensorrt_llm.X. llm_args.py resolves the bundled config dir via files(_ad_config_pkg) instead of a hardcoded "tensorrt_llm._torch.auto_deploy.config" string so it works under both flavors.
  • Standalone generator — moved to examples/auto_deploy/llmc/create_standalone_package.py. Output package is now llmc/ with distribution name nvidia-llmc (Python import: import llmc). Source-side import rewriting is dropped (no longer needed); test files keep absolute imports and are still rewritten on copy.
  • OSS compliance copy-over — the generator additionally pulls CODE_OF_CONDUCT.md, SECURITY.md, ATTRIBUTIONS-Python.md, and .editorconfig from the TRT-LLM repo root, plus a .github/ tree (issue/PR templates) from examples/auto_deploy/llmc/.github_for_llmc/ (stored under that non-.github name to avoid colliding with TRT-LLM's own .github/). The issue-template config.yml disables blank issues and redirects bug reports / feature requests / discussions / security reports back to NVIDIA/TensorRT-LLM (or PSIRT). The PR template likewise points contributors at NVIDIA/TensorRT-LLM. All copied paths are added to _MANAGED_PATHS so the regen stays idempotent.
  • READMEs: examples/auto_deploy/README.md reverted to TRT-LLM-only and links to llmc/. New examples/auto_deploy/llmc/README.md adds install instructions and a comprehensive ModelFactory + InferenceOptimizer + CachedSequenceInterface example showing how to build a custom inference pipeline.
  • CONTRIBUTING — new examples/auto_deploy/llmc/CONTRIBUTING.md documents that the standalone repo is read-only and PRs must land on TensorRT-LLM (you can fork llmc to experiment, but the upstream source of truth is here).
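The two lint rules above can be sketched as a small AST walker. This is an illustrative reconstruction under assumed names, not the actual scripts/check_auto_deploy_imports.py; it also treats a SyntaxError as a check failure rather than silently passing:

```python
import ast
import pathlib

# Illustrative sketch of the two import-discipline rules; details of the real
# script may differ. Paths are assumed to be relative to the repo root.
PKG = "tensorrt_llm._torch.auto_deploy"

def check_file(path: pathlib.Path) -> list[tuple[int, str]]:
    """Return (lineno, message) violations for one file under the package."""
    try:
        tree = ast.parse(path.read_text(), filename=str(path))
    except SyntaxError as e:  # a malformed file should fail the hook, not pass
        return [(e.lineno or 0, f"syntax error: {e.msg}")]
    violations: list[tuple[int, str]] = []
    pkg_depth = len(path.parts) - 1  # directory components above the module
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                if alias.name == PKG or alias.name.startswith(PKG + "."):
                    # Rule A: imports resolving inside the package must be relative
                    violations.append((node.lineno, "in-package import must be relative"))
        elif isinstance(node, ast.ImportFrom):
            if node.level == 0:
                mod = node.module or ""
                if mod == PKG or mod.startswith(PKG + "."):
                    violations.append((node.lineno, "in-package import must be relative"))
            elif node.level > pkg_depth - 2:
                # Rule B: relative import climbs above auto_deploy/ (escapes the package)
                violations.append((node.lineno, "relative import escapes the package"))
    return violations
```

Running this over every .py file under tensorrt_llm/_torch/auto_deploy/ and exiting non-zero when any violations accumulate gives the pre-commit-hook behavior described above.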

The torch op namespace stays as torch.ops.auto_deploy (no rename). No public-API changes.
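The llm_args.py change mentioned above (resolving the bundled config dir via files(_ad_config_pkg) instead of a hardcoded dotted string) relies on a standard importlib.resources idiom. A minimal sketch, using the stdlib email.mime package as a stand-in for the real config subpackage:

```python
from importlib.resources import files

# Resolve a package's bundled data directory from a package name computed at
# runtime rather than a hardcoded "tensorrt_llm._torch.auto_deploy.config"
# string, so the same source works whether the code is imported as
# tensorrt_llm._torch.auto_deploy or as the standalone llmc. The stdlib
# `email.mime` package stands in for the actual config subpackage here.
config_pkg = "email" + ".mime"  # in-tree this would be derived from __package__
config_dir = files(config_pkg)
print(config_dir.joinpath("text.py").is_file())  # True
```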

Test plan

  • scripts/check_auto_deploy_imports.py passes on the entire tensorrt_llm/_torch/auto_deploy/ tree.
  • pre-commit run --all-files passes for the new auto-deploy-import-discipline hook.
  • pre-commit run on every changed file passes (ruff, ruff-format, mdformat, codespell, the new hook, etc.).
  • Smoke import: python -c "import tensorrt_llm._torch.auto_deploy; from tensorrt_llm._torch.auto_deploy.llm_args import LlmArgs; from tensorrt_llm._torch.auto_deploy.shim.demollm import DemoEngine; from tensorrt_llm._torch.auto_deploy.shim.ad_executor import ADEngine".
  • pytest tests/unittest/auto_deploy/standalone/ — 12/12 passed (~4m30s, re-run after OSS compliance changes). This includes the nested run of the standalone package's own unit-test suite (test_run_unit_tests installs nvidia-llmc into an isolated venv and runs the copied tests).
  • Regen smoke: python examples/auto_deploy/llmc/create_standalone_package.py --output-dir /tmp/<...> writes the new files (CODE_OF_CONDUCT.md, SECURITY.md, ATTRIBUTIONS-Python.md, .editorconfig, .github/ISSUE_TEMPLATE/config.yml, .github/PULL_REQUEST_TEMPLATE.md) byte-identical to their TRT-LLM sources.
  • CI must include the AutoDeploy stages: please run with /bot run --extra-stage "DGX_B200-4_GPUs-AutoDeploy-1, DGX_H100-4_GPUs-AutoDeploy-1".
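The regen/idempotency behavior exercised by the last smoke test can be sketched as follows; _MANAGED_PATHS and regen are illustrative names under an assumed delete-then-copy scheme, not the actual generator code:

```python
import pathlib
import shutil

# Illustrative subset: every path the generator owns is listed, any stale copy
# is removed before regeneration, then the file is copied fresh from the
# TRT-LLM repo root, so re-running the generator stays idempotent and the
# output is byte-identical to its sources.
_MANAGED_PATHS = ["CODE_OF_CONDUCT.md", "SECURITY.md", ".editorconfig"]

def regen(src_root: pathlib.Path, out_root: pathlib.Path) -> None:
    for rel in _MANAGED_PATHS:
        dst = out_root / rel
        if dst.exists():
            dst.unlink()  # drop any stale copy from a previous regen
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src_root / rel, dst)  # byte-identical copy
```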

Summary by CodeRabbit

  • Documentation

    • Added comprehensive guides for the standalone llmc package, including installation, usage examples, and contributing guidelines.
  • Developer Tools

    • Introduced import validation enforcement via pre-commit hook for improved code consistency.
  • Updates

    • Standalone package rebranded as llmc with updated generation and packaging configuration.

@coderabbitai
Contributor

coderabbitai Bot commented Apr 25, 2026

📝 Walkthrough

Walkthrough

Restructures the AutoDeploy source to enforce import discipline within the auto_deploy package while preparing a standalone llmc package distribution. Adds a pre-commit hook and validation script to enforce relative imports within auto_deploy and absolute imports elsewhere. Updates documentation and test suite accordingly.

Changes

  • Pre-commit Configuration & Import Discipline (.pre-commit-config.yaml, scripts/check_auto_deploy_imports.py): Adds a pre-commit hook to enforce import discipline. The new script statically analyzes Python files in tensorrt_llm/_torch/auto_deploy/ to verify relative imports within the package and absolute imports for external modules, exiting with code 1 if violations are found.
  • Documentation & Standalone Package Generation (examples/auto_deploy/README.md, examples/auto_deploy/llmc/CONTRIBUTING.md, examples/auto_deploy/llmc/README.md, examples/auto_deploy/llmc/create_standalone_package.py): Rescopes AutoDeploy documentation to examples-focused guidance, adds a contributing guide and README for the llmc standalone package, and updates the generation script to produce the nvidia-llmc distribution with llmc as the top-level package directory instead of auto_deploy.
  • Import Refactoring - Custom Ops (tensorrt_llm/_torch/auto_deploy/custom_ops/attention/flashinfer_attention.py, tensorrt_llm/_torch/auto_deploy/custom_ops/normalization/*): Updates imports of get_env_enable_pdl from relative paths to absolute tensorrt_llm._torch.flashinfer_utils references.
  • Import Refactoring - Core Modules (tensorrt_llm/_torch/auto_deploy/llm.py, tensorrt_llm/_torch/auto_deploy/llm_args.py, tensorrt_llm/_torch/auto_deploy/models/eagle.py): Converts relative imports to absolute tensorrt_llm.* paths for public type dependencies and YAML config resource loading via importlib.resources.
  • Import Refactoring - Custom Models (tensorrt_llm/_torch/auto_deploy/models/custom/modeling_*.py): Switches between absolute and relative imports: some files convert to relative imports within auto_deploy (e.g., custom_ops, AutoModelForCausalLMFactory), while others adopt absolute imports to external modules.
  • Import Refactoring - Shim & Utilities (tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py, tensorrt_llm/_torch/auto_deploy/shim/demollm.py, tensorrt_llm/_torch/auto_deploy/shim/interface.py, tensorrt_llm/_torch/auto_deploy/transform/library/moe_routing.py, tensorrt_llm/_torch/auto_deploy/utils/quantization_utils.py): Updates imports for executor components, cache managers, utilities, and custom op registration to use consistent absolute tensorrt_llm.* paths.
  • Test Updates (tests/unittest/auto_deploy/standalone/test_standalone_package.py): Refocuses standalone package tests to target the llmc distribution instead of auto_deploy. Renames the test method and updates import assertions and package generation paths accordingly.
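The on-copy import rewrite applied to test files (absolute tensorrt_llm._torch.auto_deploy imports becoming llmc imports in the generated package) can be sketched as a plain textual substitution; the function name and the word-boundary guard here are illustrative, not the generator's actual implementation:

```python
import re

# Illustrative sketch: rewrite absolute in-tree imports to the standalone
# package name when test files are copied into the generated repo.
_SRC_PKG = "tensorrt_llm._torch.auto_deploy"
_DST_PKG = "llmc"

def rewrite_test_source(source: str) -> str:
    # \b guards keep names like "tensorrt_llm._torch.auto_deploy_extra" untouched
    return re.sub(rf"\b{re.escape(_SRC_PKG)}\b", _DST_PKG, source)
```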

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 60.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
  • Linked Issues check (✅ Passed): Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check (✅ Passed): Check skipped because no linked issues were found for this pull request.
  • Description check (✅ Passed): The PR description is comprehensive and follows the template structure with clear sections covering Summary, Notable Changes, and Test Plan.
  • Title check (✅ Passed): The title accurately describes the main changes: introducing the llmc standalone package and enforcing import discipline in auto_deploy, which aligns with the core objectives of the PR.



Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
scripts/check_auto_deploy_imports.py (1)

25-25: Replace legacy typing generics with built-in types (list/tuple)

Per the coding guidelines, use built-in generics instead of typing.List and typing.Tuple. The file currently imports and uses these legacy types at multiple locations; switch to the modern syntax.

♻️ Suggested fix
-from typing import List, Tuple
+from typing import Iterable

-def _file_package_parts(path: pathlib.Path) -> List[str]:
+def _file_package_parts(path: pathlib.Path) -> list[str]:

-def _check_file(path: pathlib.Path) -> List[Tuple[int, str]]:
+def _check_file(path: pathlib.Path) -> list[tuple[int, str]]:

-    violations: List[Tuple[int, str]] = []
+    violations: list[tuple[int, str]] = []

-def main(argv: List[str]) -> int:
+def main(argv: list[str]) -> int:

-    failures: List[Tuple[pathlib.Path, int, str]] = []
+    failures: list[tuple[pathlib.Path, int, str]] = []

Also applies to: 32–33, 39, 50, 104, 111

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/check_auto_deploy_imports.py` at line 25, Replace the legacy typing
generics import and annotations: remove "from typing import List, Tuple" and
update all type annotations that use List[...] and Tuple[...] to use built-in
generics list[...] and tuple[...]; update function signatures and variable
annotations (wherever List/ Tuple are referenced, e.g., the imports line and any
functions or variables around the previous uses on lines noted) and ensure no
other typing-only imports are required; delete the now-unused List/Tuple import.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@examples/auto_deploy/README.md`:
- Line 5: Remove the stale standalone import snippet in
examples/auto_deploy/README.md (the example that imports torch_export_to_gm,
TransformRegistry, and ModelFactoryRegistry around lines ~133-140); replace that
block with a short pointer to ./llmc/README.md (or simply delete the outdated
code example) so the in-tree README no longer contradicts the redirect on line 5
and standalone usage is documented only in llmc/README.md.

In `@scripts/check_auto_deploy_imports.py`:
- Around line 44-47: The current code silently swallows a SyntaxError from
ast.parse and returns an empty list; change the except SyntaxError block to
capture the exception (except SyntaxError as e) and treat it as a check failure
by returning a non-empty violation (e.g., a message or violation object that
includes str(path) and the exception text) or by re-raising a clear exception so
the hook fails; update the handler around ast.parse(source, filename=str(path))
to include the error details in the returned result (or raised error) so
malformed files do not pass the check.

In `@tensorrt_llm/_torch/auto_deploy/llm.py`:
- Around line 6-10: This file is missing the required NVIDIA copyright/license
header; add the standard NVIDIA copyright header (with the year of latest
meaningful modification) as the very first lines of
tensorrt_llm._torch.auto_deploy.llm before any imports, preserving the existing
imports (e.g., CompletionOutput, DefaultInputProcessor, _TorchLLM,
TokenizerBase/TransformersTokenizer/tokenizer_factory, SamplingParams) and file
contents unchanged otherwise.

In `@tensorrt_llm/_torch/auto_deploy/shim/interface.py`:
- Around line 13-15: This file (module
tensorrt_llm._torch.auto_deploy.shim.interface) is missing the required NVIDIA
copyright/license header at the top; add the standard NVIDIA header block to the
very top of the file and ensure the copyright year is updated for a modified
file, leaving the existing imports (MambaHybridCacheManager, KVCacheManager,
torch_dtype_to_binding) intact so signatures like MambaHybridCacheManager and
KVCacheManager remain unchanged.

---

Nitpick comments:
In `@scripts/check_auto_deploy_imports.py`:
- Line 25: Replace the legacy typing generics import and annotations: remove
"from typing import List, Tuple" and update all type annotations that use
List[...] and Tuple[...] to use built-in generics list[...] and tuple[...];
update function signatures and variable annotations (wherever List/ Tuple are
referenced, e.g., the imports line and any functions or variables around the
previous uses on lines noted) and ensure no other typing-only imports are
required; delete the now-unused List/Tuple import.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: c42cc167-48df-4aef-b6a6-1eb9d2f6dba2

📥 Commits

Reviewing files that changed from the base of the PR and between c10954b and 53c9f0a.

📒 Files selected for processing (25)
  • .pre-commit-config.yaml
  • examples/auto_deploy/README.md
  • examples/auto_deploy/llmc/CONTRIBUTING.md
  • examples/auto_deploy/llmc/README.md
  • examples/auto_deploy/llmc/create_standalone_package.py
  • scripts/check_auto_deploy_imports.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/attention/flashinfer_attention.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/normalization/flashinfer_fused_add_rms_norm.py
  • tensorrt_llm/_torch/auto_deploy/custom_ops/normalization/rms_norm.py
  • tensorrt_llm/_torch/auto_deploy/llm.py
  • tensorrt_llm/_torch/auto_deploy/llm_args.py
  • tensorrt_llm/_torch/auto_deploy/models/custom/modeling_deepseek_ir.py
  • tensorrt_llm/_torch/auto_deploy/models/custom/modeling_eagle.py
  • tensorrt_llm/_torch/auto_deploy/models/custom/modeling_llama3_ir.py
  • tensorrt_llm/_torch/auto_deploy/models/custom/modeling_minimax_m2.py
  • tensorrt_llm/_torch/auto_deploy/models/custom/modeling_nemotron_h_ir.py
  • tensorrt_llm/_torch/auto_deploy/models/custom/modeling_qwen3_5_moe_ir.py
  • tensorrt_llm/_torch/auto_deploy/models/custom/modeling_qwen3_ir.py
  • tensorrt_llm/_torch/auto_deploy/models/eagle.py
  • tensorrt_llm/_torch/auto_deploy/shim/ad_executor.py
  • tensorrt_llm/_torch/auto_deploy/shim/demollm.py
  • tensorrt_llm/_torch/auto_deploy/shim/interface.py
  • tensorrt_llm/_torch/auto_deploy/transform/library/moe_routing.py
  • tensorrt_llm/_torch/auto_deploy/utils/quantization_utils.py
  • tests/unittest/auto_deploy/standalone/test_standalone_package.py

Comment thread examples/auto_deploy/README.md Outdated
Comment thread scripts/check_auto_deploy_imports.py Outdated
Comment thread tensorrt_llm/_torch/auto_deploy/llm.py
Comment thread tensorrt_llm/_torch/auto_deploy/shim/interface.py Outdated
Member Author

@lucaslie lucaslie left a comment


a few comments...

Comment thread examples/auto_deploy/llmc/CONTRIBUTING.md Outdated
Comment thread examples/auto_deploy/llmc/CONTRIBUTING.md Outdated
Comment thread examples/auto_deploy/llmc/README.md Outdated
Comment thread examples/auto_deploy/llmc/README.md Outdated
Comment thread examples/auto_deploy/llmc/README.md Outdated
Comment thread examples/auto_deploy/llmc/README.md Outdated
@lucaslie lucaslie moved this from Backlog to In review in AutoDeploy Board Apr 25, 2026
Comment thread examples/auto_deploy/README.md Outdated
Comment thread examples/auto_deploy/llmc/README.md Outdated
Comment thread examples/auto_deploy/llmc/README.md Outdated
@lucaslie
Member Author

/bot run --stage-list "A10-Build_Docs, A10-PackageSanityCheck-PY310-UB2204, A100X-PackageSanityCheck-PY312-UB2404, A30-AutoDeploy-1, H100_PCIe-AutoDeploy-1, DGX_B200-AutoDeploy-1, A100X-PyTorch-1, DGX_H100-4_GPUs-AutoDeploy-1, DGX_B200-4_GPUs-AutoDeploy-1"

@tensorrt-cicd
Collaborator

PR_Github #45518 [ run ] triggered by Bot. Commit: 19fcb2f Link to invocation

@lucaslie
Member Author

/bot help

@github-actions

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental) --high-priority]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Supports wildcard * for pattern matching (e.g., "*PerfSanity*" matches all stages containing PerfSanity). Examples: "A10-PyTorch-1, xxx", "PerfSanity". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Supports wildcard * for pattern matching. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx", --extra-stage "Post-Merge".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

--high-priority (OPTIONAL) : Run the pipeline with high priority. This option is restricted to authorized users only and will route the job to a high-priority queue.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@lucaslie
Member Author

/bot kill

@lucaslie lucaslie closed this Apr 26, 2026
@github-project-automation github-project-automation Bot moved this from In review to Done in AutoDeploy Board Apr 26, 2026
@lucaslie lucaslie reopened this Apr 26, 2026
@github-project-automation github-project-automation Bot moved this from Done to Ready in AutoDeploy Board Apr 26, 2026
@lucaslie lucaslie moved this from Ready to In review in AutoDeploy Board Apr 26, 2026
@tensorrt-cicd
Collaborator

PR_Github #45522 [ kill ] triggered by Bot. Commit: 74ba91b Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #45522 [ kill ] completed with state SUCCESS. Commit: 74ba91b
Successfully killed previous jobs for commit 74ba91b

Link to invocation

@lucaslie
Member Author

/bot run --stage-list "A10-Build_Docs, A10-PackageSanityCheck-PY310-UB2204, A100X-PackageSanityCheck-PY312-UB2404, A30-AutoDeploy-1, H100_PCIe-AutoDeploy-1, DGX_B200-AutoDeploy-1, A100X-PyTorch-1, DGX_H100-4_GPUs-AutoDeploy-1, DGX_B200-4_GPUs-AutoDeploy-1"

@tensorrt-cicd
Collaborator

PR_Github #45527 [ run ] triggered by Bot. Commit: ab00ced Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #45527 [ run ] completed with state SUCCESS. Commit: ab00ced
/LLM/main/L0_MergeRequest_PR pipeline #35749 (Partly Tested) completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #46064 [ run ] completed with state FAILURE. Commit: f51b105
/LLM/main/L0_MergeRequest_PR pipeline #36209 (Partly Tested) completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

CI Agent Failure Analysis

Link to invocation

@lucaslie
Member Author

/bot run --stage-list "A10-Build_Docs, A10-PackageSanityCheck-PY310-UB2204, A100X-PackageSanityCheck-PY312-UB2404, A30-AutoDeploy-1, H100_PCIe-AutoDeploy-1, DGX_B200-AutoDeploy-1, A100X-PyTorch-1, DGX_H100-4_GPUs-AutoDeploy-1, DGX_B200-4_GPUs-AutoDeploy-1"

@tensorrt-cicd
Collaborator

PR_Github #46210 [ run ] triggered by Bot. Commit: edae8ab Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #46210 [ run ] completed with state SUCCESS. Commit: edae8ab
/LLM/main/L0_MergeRequest_PR pipeline #36323 (Partly Tested) completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

CI Agent Failure Analysis

Link to invocation

@lucaslie
Member Author

/bot run --stage-list "A10-Build_Docs, A10-PackageSanityCheck-PY310-UB2204, A100X-PackageSanityCheck-PY312-UB2404, A30-AutoDeploy-1, H100_PCIe-AutoDeploy-1, DGX_B200-AutoDeploy-1, A100X-PyTorch-1, DGX_H100-4_GPUs-AutoDeploy-1, DGX_B200-4_GPUs-AutoDeploy-1" --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #46242 [ run ] triggered by Bot. Commit: edae8ab Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #46242 [ run ] completed with state SUCCESS. Commit: edae8ab
/LLM/main/L0_MergeRequest_PR pipeline #36351 (Partly Tested) completed with status: 'SUCCESS'

CI Report

Link to invocation

…ipline

Renames the standalone AutoDeploy distribution to `llmc` (PyPI:
`nvidia-llmc`), splits the example READMEs/CONTRIBUTING for in-tree
vs standalone usage, and adds a pre-commit hook that enforces import
discipline inside `tensorrt_llm/_torch/auto_deploy/` so the source
tree can be copied verbatim into the standalone repo.

Highlights:
- New pre-commit hook `auto-deploy-import-discipline` (AST-based) plus
  `scripts/check_auto_deploy_imports.py`. Rules: in-package imports
  must be relative; relative imports must not escape the package
  (use absolute `tensorrt_llm.X` for that).
- Source fixes to make the hook pass: 7 absolute self-imports
  converted to relative (modeling_*_ir.py, moe_routing.py); 9
  escaping relative imports flipped to absolute `from tensorrt_llm.X`
  (llm.py, llm_args.py, shim/{ad_executor,demollm,interface}.py,
  models/eagle.py, models/custom/modeling_eagle.py, several custom
  ops, utils/quantization_utils.py). `llm_args.py` now resolves the
  config dir via `files(_ad_config_pkg)` instead of a hardcoded
  string so it works in both `tensorrt_llm._torch.auto_deploy` and
  `llmc` flavors.
- `create_standalone_package.py` moved to
  `examples/auto_deploy/llmc/`. Output package is now `llmc/` with
  distribution name `nvidia-llmc`. Source-side import rewriting is
  dropped (the lint hook guarantees no rewriting is needed); test
  files keep their absolute imports and are still rewritten on copy.
- `examples/auto_deploy/README.md` reverted to TRT-LLM-only and
  links to `llmc/`. New `examples/auto_deploy/llmc/README.md` adds
  install instructions and a comprehensive ModelFactory +
  InferenceOptimizer + CachedSequenceInterface example. New
  `examples/auto_deploy/llmc/CONTRIBUTING.md` documents that the
  standalone repo is read-only and PRs must land on TensorRT-LLM.
- Standalone test suite (`tests/unittest/auto_deploy/standalone/`)
  updated for the new package name; full suite passes locally
  (12/12, ~4m30s, including the nested run of the standalone
  package's own unit tests).

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
- README/CONTRIBUTING: use long names ("LLM Compiler" / "llm-compiler")
  in titles; drop PyPI/wheels references; simplify install to clone +
  `uv pip install -e ".[dev]"` plus a `pip install git+…` one-liner;
  drop the internal "Regenerating the standalone repo" section.
- examples/auto_deploy/README.md: remove a stale standalone code
  snippet that still referenced `auto_deploy.X` imports — the in-tree
  README is now TRT-LLM-only and points to llmc/README.md.
- scripts/check_auto_deploy_imports.py: modernize typing
  (`List`/`Tuple` → `list`/`tuple`); treat `SyntaxError` from
  ast.parse as a violation instead of silently passing.
- tensorrt_llm/_torch/auto_deploy/{llm,shim/interface}.py: add the
  missing NVIDIA copyright/license header.

Standalone test suite re-verified locally (12/12 pass, ~4m35s).

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Plug in the real standalone-repo URL where the previous round had
placeholders, per review feedback:

- examples/auto_deploy/README.md: top-level redirect now points to
  github.com/NVIDIA/llm-compiler instead of the local llmc/README.md.
- examples/auto_deploy/llmc/README.md: drop the "we don't publish
  wheels yet" wording; install instructions now show both the https
  and ssh variants of `pip install git+…` and `git clone`.

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Make TRT-LLM the source of truth for OSS compliance metadata in the
standalone llmc package. The standalone-package generator now copies
the following from this repo on every regen:

- CODE_OF_CONDUCT.md, SECURITY.md, ATTRIBUTIONS-Python.md (from repo root)
- .editorconfig (from repo root)
- .github/ tree (issue/PR templates) sourced from
  examples/auto_deploy/llmc/.github_for_llmc/ — stored under that
  non-".github" name so it does not interfere with TRT-LLM's own .github/

The issue template config disables blank issues and redirects bug
reports, feature requests, discussions, and security reports to
NVIDIA/TensorRT-LLM (or PSIRT) since the standalone repo is regenerated
and read-only. The PR template likewise points contributors back to
NVIDIA/TensorRT-LLM.

All copied paths are added to _MANAGED_PATHS so the regen stays
idempotent.

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Test files that needed KvCacheConfig or ActivationType were importing
them from their canonical TRT-LLM paths and the standalone packaging
script translated those imports to llmc._compat on copy. Source the
symbols from tensorrt_llm._torch.auto_deploy._compat directly so the
generic tensorrt_llm._torch.auto_deploy -> llmc rewrite handles them,
and drop the now-dead special cases from create_standalone_package.py.

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Reset SPDX years on shim/interface.py and llm.py from 2022-2026 to
2025-2026 to reflect the actual content authoring date per reviewer
feedback.

Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
Signed-off-by: Lucas Liebenwein <11156568+lucaslie@users.noreply.github.com>
@lucaslie
Member Author

/bot run --stage-list "A10-Build_Docs, A10-PackageSanityCheck-PY310-UB2204, A100X-PackageSanityCheck-PY312-UB2404, A30-AutoDeploy-1, H100_PCIe-AutoDeploy-1, DGX_B200-AutoDeploy-1, A100X-PyTorch-1, DGX_H100-4_GPUs-AutoDeploy-1, DGX_B200-4_GPUs-AutoDeploy-1" --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #46434 [ run ] triggered by Bot. Commit: 8fcc5b9 Link to invocation

@lucaslie
Member Author

lucaslie commented May 1, 2026

/bot skip --comment "AD tests are passing in CI and locally

@lucaslie lucaslie enabled auto-merge (squash) May 1, 2026 03:21
@tensorrt-cicd
Collaborator

PR_Github #46464 Bot args parsing error: Traceback (most recent call last):
File "bot/bin/parse_args.py", line 20, in main
args = note_handler.parse_args(shlex.split(cmd))
File "/usr/local/lib/python3.8/shlex.py", line 311, in split
return list(lex)
File "/usr/local/lib/python3.8/shlex.py", line 300, in next
token = self.get_token()
File "/usr/local/lib/python3.8/shlex.py", line 109, in get_token
raw = self.read_token()
File "/usr/local/lib/python3.8/shlex.py", line 191, in read_token
raise ValueError("No closing quotation")
ValueError: No closing quotation

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "bot/bin/parse_args.py", line 29, in
main()
File "bot/bin/parse_args.py", line 22, in main
e.with_traceback()
TypeError: with_traceback() takes exactly one argument (0 given)

Link to invocation

@lucaslie
Member Author

lucaslie commented May 1, 2026

/bot skip --comment "AD tests are passing in CI and locally"

@tensorrt-cicd
Collaborator

PR_Github #46476 [ skip ] triggered by Bot. Commit: 8fcc5b9 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #46476 [ skip ] completed with state SUCCESS. Commit: 8fcc5b9
Skipping testing for commit 8fcc5b9

Link to invocation

@lucaslie lucaslie merged commit 483ef68 into NVIDIA:main May 1, 2026
7 checks passed
@github-project-automation github-project-automation Bot moved this from In review to Done in AutoDeploy Board May 1, 2026


Projects

Status: Done


5 participants