
[None][feat] KVConnector shorthand paths for "lmcache" and "kvbm" with examples #12626

Merged
richardhuo-nv merged 8 commits into NVIDIA:main from sammshen:example/lmcache-trtllm-connector
Apr 13, 2026

Conversation

@sammshen
Contributor

@sammshen sammshen commented Mar 31, 2026

LMCache side co-PR: LMCache/LMCache#2920 (merge LMCache side first to not have faulty code in TRT-LLM)

Keep changes minimal and as non-intrusive as possible. This PR avoids touching any core TRT files and only engages with configurations and examples.

The high-level goal is to let users select a KV connector by a short preset name. What exactly each change is doing:

  • tensorrt_llm/connectors/registry.py — adds a connector preset registry that maps short names like "lmcache" to their module/class import paths, similar to vLLM's connector factory. New connectors can be supported by adding a single registry entry.
  • Adds a --kv-connector CLI option to trtllm-serve (e.g. --kv-connector lmcache) that automatically sets enable_block_reuse=True.
  • Adds an LMCache example script and YAML config.
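The registry described above can be sketched in a few lines. This is an illustrative sketch only: the module paths and class names below are hypothetical stand-ins, not the actual TRT-LLM or LMCache layout.

```python
from typing import NamedTuple


class ConnectorPreset(NamedTuple):
    connector_module: str   # dotted import path of the connector module
    scheduler_class: str    # scheduler-side class name
    worker_class: str       # worker-side class name


# Short names map to import paths, similar to vLLM's connector factory;
# a new connector is supported by adding one entry here.
# (Paths and class names are illustrative, not the real ones.)
CONNECTOR_REGISTRY = {
    "lmcache": ConnectorPreset(
        connector_module="lmcache.integration.trtllm",
        scheduler_class="LMCacheConnectorScheduler",
        worker_class="LMCacheConnectorWorker",
    ),
    "kvbm": ConnectorPreset(
        connector_module="dynamo.kvbm.trtllm",
        scheduler_class="KvbmConnectorScheduler",
        worker_class="KvbmConnectorWorker",
    ),
}


def resolve_preset(name: str) -> ConnectorPreset:
    """Resolve a short preset name, with a helpful error for unknown names."""
    try:
        return CONNECTOR_REGISTRY[name]
    except KeyError:
        known = ", ".join(sorted(CONNECTOR_REGISTRY))
        raise ValueError(
            f"Unknown KV connector preset {name!r}; known presets: {known}"
        )
```

The key design point is that callers only ever hold a short name; the registry owns the mapping to concrete import paths.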

Summary by CodeRabbit

Release Notes

  • New Features
    • Added LMCache support as a KV cache connector backend with block reuse capabilities.
    • Introduced --kv-connector CLI option to dynamically select cache connectors in the serve command.
    • Implemented preset-based connector configuration for easier setup and management.
    • Added example script demonstrating LMCache KV cache reuse across consecutive generation calls.

@sammshen sammshen requested review from a team as code owners March 31, 2026 10:23
Comment thread examples/llm-api/configs/trtllm_lmcache_connector_extra.yaml
Comment thread examples/llm-api/llm_lmcache_connector.py
Comment thread tensorrt_llm/commands/serve.py
Comment thread tensorrt_llm/_torch/pyexecutor/connectors/__init__.py
Comment thread tensorrt_llm/_torch/pyexecutor/connectors/registry.py
Comment thread tensorrt_llm/llmapi/llm_args.py
@coderabbitai
Contributor

coderabbitai bot commented Mar 31, 2026

📝 Walkthrough

Walkthrough

This pull request adds LMCache as a configurable KV cache connector backend for TensorRT-LLM. It introduces a connector registry system, updates the KvCacheConnectorConfig to support preset-based configuration, adds a CLI option to the serve command, and provides example configuration and demonstration scripts.

Changes

  • Example Files — examples/llm-api/configs/trtllm_lmcache_connector_extra.yaml, examples/llm-api/llm_lmcache_connector.py: New YAML configuration file demonstrating extra LLM API options for LMCache KV connector setup, and a new example script showing how to use LMCache as a KV cache backend via TensorRT-LLM's KV Cache Connector interface with block reuse enabled.
  • Connector Registry — tensorrt_llm/connectors/__init__.py, tensorrt_llm/connectors/registry.py: New connector package with a centralized registry mapping preset names (e.g., "lmcache") to their corresponding module paths and class names for scheduler and worker components.
  • Configuration & CLI — tensorrt_llm/llmapi/llm_args.py, tensorrt_llm/commands/serve.py: Updated KvCacheConnectorConfig to support optional preset-based configuration via a new connector field, with a Pydantic validator that auto-resolves preset values from the registry. Added --kv-connector CLI option to the serve command with conditional logic to construct and assign the connector config with block reuse enabled.
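The preset auto-resolution described for KvCacheConnectorConfig can be illustrated without Pydantic: a dataclass `__post_init__` shows the same idea. Field names, preset names, and module paths below are illustrative assumptions, not the actual API.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative preset table: (module path, scheduler class, worker class).
_PRESETS = {
    "lmcache": ("lmcache.integration.trtllm", "LMCacheScheduler", "LMCacheWorker"),
}


@dataclass
class KvCacheConnectorConfig:
    connector: Optional[str] = None          # short preset name, e.g. "lmcache"
    connector_module: Optional[str] = None   # explicit paths remain allowed
    scheduler_class: Optional[str] = None
    worker_class: Optional[str] = None

    def __post_init__(self):
        # If a preset name is given, populate any unset import paths from the
        # registry (the real code does this in a Pydantic validator).
        if self.connector is not None:
            if self.connector not in _PRESETS:
                raise ValueError(f"unknown connector preset: {self.connector!r}")
            module, sched, worker = _PRESETS[self.connector]
            self.connector_module = self.connector_module or module
            self.scheduler_class = self.scheduler_class or sched
            self.worker_class = self.worker_class or worker
```

With this shape, `KvCacheConnectorConfig(connector="lmcache")` resolves to fully populated module/class fields, while explicit per-field configuration still works.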

Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant CLI as serve CLI
    participant Config as KvCacheConnectorConfig
    participant Registry as CONNECTOR_REGISTRY
    participant LLM as TensorRT-LLM
    participant Connector as LMCache Connector

    User->>CLI: invoke serve --kv-connector lmcache
    CLI->>Config: create KvCacheConnectorConfig(connector="lmcache")
    Config->>Registry: resolve preset "lmcache"
    Registry-->>Config: return connector_module, scheduler_class, worker_class
    Config->>Config: validate and populate fields
    Config-->>CLI: return resolved config
    CLI->>LLM: initialize with kv_connector_config + block_reuse=True
    LLM->>Connector: initialize LMCache connector
    Connector-->>LLM: ready for KV cache operations
    LLM-->>User: serve ready
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 warnings

❌ Failed checks (2 warnings)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 25.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Description check ⚠️ Warning — The PR description clearly explains the high-level goal, rationale for minimal changes, and details what each change does. However, the description template sections (Test Coverage and PR Checklist) are completely missing. Resolution: add the 'Test Coverage' and 'PR Checklist' sections from the template to document test strategy and confirm adherence to coding guidelines and design standards.

✅ Passed checks (1 passed)

  • Title check ✅ Passed — The title clearly describes the main feature: introducing shorthand paths for KV connectors ('lmcache' and 'kvbm') with supporting examples, which aligns with the core changes of adding a connector registry and CLI option.



Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@examples/llm-api/llm_lmcache_connector.py`:
- Around line 78-102: The teardown call destroy_engine() can be skipped if
LLM.generate or the assertion fails; wrap the generation, prints and assertion
(the block using LLM, generate, output0/output1, text0/text1 and the assert) in
a try/finally so destroy_engine() is always executed in the finally block;
re-raise any caught exception after teardown to preserve failing behavior.
Ensure you reference the existing LLM instance, the generate calls, and the
assert when moving them into the try block and place destroy_engine() only in
finally.
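The try/finally fix requested above can be sketched as follows. The `llm`, `generate`, and `destroy_engine` names stand in for the example script's actual objects; this is a shape sketch, not the script itself.

```python
def run_example(llm, prompts, destroy_engine):
    """Generate twice and compare outputs, guaranteeing engine teardown."""
    try:
        output0 = llm.generate(prompts[0])
        output1 = llm.generate(prompts[1])
        print(output0, output1)
        # The assertion may fail, but teardown must still run.
        assert output0 == output1, "expected KV cache reuse to give identical output"
        return output0
    finally:
        # Always executed, even when generate() or the assert raises;
        # any exception propagates after teardown completes, preserving
        # the failing behavior.
        destroy_engine()
```

Placing `destroy_engine()` only in the `finally` block means a failed generation or assertion can no longer leak the engine.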

In `@tensorrt_llm/commands/serve.py`:
- Around line 902-906: The current injection of KvCacheConnectorConfig
unconditionally sets llm_args['kv_connector_config'] when kv_connector is not
None, which can break non-PyTorch backends; before creating
KvCacheConnectorConfig (the block referencing kv_connector,
KvCacheConnectorConfig, and llm_args), add a guard that checks the configured
backend (e.g., the variable or llm_args['backend'] / backend_name used in this
module) and only inject the connector when the backend is a supported one (e.g.,
"pytorch"); if the backend is unsupported, raise a clear CLI error or exit with
a helpful message instead of setting llm_args['kv_connector_config'].
- Around line 907-915: The current code uses
kv_cc.setdefault('enable_block_reuse', True) which will not override an explicit
False and can leave enable_block_reuse disabled; change this to explicitly set
kv_cc['enable_block_reuse'] = True after converting/ensuring kv_cc is a dict so
that enable_block_reuse is always enforced when preparing
llm_args['kv_cache_config']; update the block handling around llm_args, kv_cc
and the KvCacheConfig conversion accordingly.
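The setdefault-vs-assignment distinction flagged in the last comment is easy to see in isolation (`kv_cc` here mirrors the dict handled in serve.py):

```python
# setdefault only fills in a *missing* key; it does not override an explicit
# False, so enable_block_reuse could silently stay disabled.
kv_cc = {"enable_block_reuse": False}   # user explicitly disabled reuse

kv_cc.setdefault("enable_block_reuse", True)
print(kv_cc["enable_block_reuse"])       # still False: setdefault did nothing

# Explicit assignment enforces the flag unconditionally, as the review suggests.
kv_cc["enable_block_reuse"] = True
print(kv_cc["enable_block_reuse"])       # now True
```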

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: c96e9196-75c3-4b02-b7f6-6f5480637a95

📥 Commits

Reviewing files that changed from the base of the PR and between 481e946 and 57c2965.

📒 Files selected for processing (6)
  • examples/llm-api/configs/trtllm_lmcache_connector_extra.yaml
  • examples/llm-api/llm_lmcache_connector.py
  • tensorrt_llm/commands/serve.py
  • tensorrt_llm/connectors/__init__.py
  • tensorrt_llm/connectors/registry.py
  • tensorrt_llm/llmapi/llm_args.py

Comment thread examples/llm-api/llm_lmcache_connector.py
Comment thread tensorrt_llm/commands/serve.py Outdated
Comment thread tensorrt_llm/commands/serve.py Outdated
@sammshen
Contributor Author

addressed all coderabbit comments too

@richardhuo-nv
Collaborator

/bot run

@tensorrt-cicd
Collaborator

PR_Github #40982 [ run ] triggered by Bot. Commit: 57c2965 Link to invocation

@sammshen changed the title from "[External KV Offloading]: LMCache + Future Integration Paths" to "[None][feat]: KVConnector LMCache + Future Integration Paths" Mar 31, 2026
@sammshen force-pushed the example/lmcache-trtllm-connector branch from 57c2965 to f425bc5 March 31, 2026 17:03
@richardhuo-nv changed the title from "[None][feat]: KVConnector LMCache + Future Integration Paths" to "[None][feat] KVConnector LMCache + Future Integration Paths" Mar 31, 2026
@richardhuo-nv
Collaborator

/bot help

@github-actions

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental) --high-priority]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

--high-priority (OPTIONAL) : Run the pipeline with high priority. This option is restricted to authorized users only and will route the job to a high-priority queue.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

@richardhuo-nv
Collaborator

/bot reuse-pipeline

@tensorrt-cicd
Collaborator

PR_Github #40987 [ reuse-pipeline ] triggered by Bot. Commit: f425bc5 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #40982 [ run ] completed with state ABORTED. Commit: 57c2965

Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #40987 [ reuse-pipeline ] completed with state SUCCESS. Commit: f425bc5
Can't reuse PR_Github #40982 with status: ABORTED

Link to invocation

@richardhuo-nv
Collaborator

/bot run

@tensorrt-cicd
Collaborator

PR_Github #40989 [ run ] triggered by Bot. Commit: f425bc5 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #40989 [ run ] completed with state FAILURE. Commit: f425bc5
/LLM/main/L0_MergeRequest_PR pipeline #31971 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #42157 [ run ] completed with state SUCCESS. Commit: 5416317
/LLM/main/L0_MergeRequest_PR pipeline #32986 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

Comment thread examples/llm-api/configs/trtllm_kvbm_connector_extra.yaml
Comment thread tensorrt_llm/_torch/pyexecutor/connectors/registry.py
@sammshen sammshen force-pushed the example/lmcache-trtllm-connector branch from ed7895e to b300924 Compare April 7, 2026 21:33
@sammshen
Contributor Author

sammshen commented Apr 8, 2026

sorry for the ping again, could a maintainer trigger the CI please, thank you!

Ubuntu and others added 7 commits April 8, 2026 20:06
Signed-off-by: Ubuntu <ubuntu@g294.voltagepark.net>
…nector

Signed-off-by: samuel <slshen@uchicago.edu>
… kvbm

- Remove --kv-connector CLI option from trtllm-serve (YAML-only config)
- Move connectors/ to tensorrt_llm/_torch/pyexecutor/connectors/
- Add kvbm preset to connector registry (dynamo KVBM)
- Add trtllm_kvbm_connector_extra.yaml example

Signed-off-by: samuel <slshen@uchicago.edu>
Move kv_cache_connector.py into tensorrt_llm/_torch/pyexecutor/connectors/
alongside registry.py, as requested in review. Update all import paths.
Fix pre-existing line-length lint violations in the moved file.

Signed-off-by: samuel <slshen@uchicago.edu>
After moving kv_cache_connector.py into the connectors/ subdirectory,
the relative imports for llm_request, scheduler, and resource_manager
need to reference the parent package (..) instead of the current one (.).

Signed-off-by: samuel <slshen@uchicago.edu>
Register "lmcache-mp" preset in the connector registry pointing to
LMCache's multi-process adapter (tensorrt_mp_adapter). This enables
process-isolated KV caching via a standalone LMCache ZMQ server.

Usage:
  kv_connector_config:
    connector: lmcache-mp
Signed-off-by: samuel <slshen@uchicago.edu>
@Shixiaowei02 Shixiaowei02 force-pushed the example/lmcache-trtllm-connector branch from ee23d85 to ef93cf5 Compare April 8, 2026 12:06
@Shixiaowei02
Collaborator

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #42332 [ run ] triggered by Bot. Commit: ef93cf5 Link to invocation

Optional field for connectors that run in multi-process mode
(e.g. lmcache-mp). Allows specifying the cache server URL
directly in the YAML config instead of environment variables.

Usage:
  kv_connector_config:
    connector: lmcache-mp
    server_url: tcp://localhost:5555
Signed-off-by: samuel <slshen@uchicago.edu>
@tensorrt-cicd
Collaborator

PR_Github #42332 [ run ] completed with state SUCCESS. Commit: ef93cf5
/LLM/main/L0_MergeRequest_PR pipeline #33120 completed with status: 'FAILURE'

CI Report

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

Link to invocation

@pcastonguay
Collaborator

/bot run --disable-fail-fast

@tensorrt-cicd
Collaborator

PR_Github #42548 [ run ] triggered by Bot. Commit: d8f2a04 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #42548 [ run ] completed with state ABORTED. Commit: d8f2a04

Link to invocation

@sammshen
Contributor Author

surely something in this PR is offensive to the CI? 😅

@sammshen
Contributor Author

/bot run --disable-fail-fast

1 similar comment
@pcastonguay
Collaborator

/bot run --disable-fail-fast

@pcastonguay
Collaborator

surely something in this PR is offensive to the CI? 😅

Our CI has been flaky recently, sorry for the inconvenience. Hopefully we can get this merged soon. Thx for your contribution.

@tensorrt-cicd
Collaborator

PR_Github #43037 [ run ] triggered by Bot. Commit: d8f2a04 Link to invocation

@tensorrt-cicd
Collaborator

PR_Github #43037 [ run ] completed with state SUCCESS. Commit: d8f2a04
/LLM/main/L0_MergeRequest_PR pipeline #33685 completed with status: 'SUCCESS'

CI Report

Link to invocation

@richardhuo-nv richardhuo-nv merged commit 968f397 into NVIDIA:main Apr 13, 2026
5 checks passed
chienchunhung pushed a commit to chienchunhung/TensorRT-LLM that referenced this pull request Apr 16, 2026
…h examples (NVIDIA#12626)

Signed-off-by: Ubuntu <ubuntu@g294.voltagepark.net>
Signed-off-by: samuel <slshen@uchicago.edu>
Co-authored-by: Ubuntu <ubuntu@g294.voltagepark.net>

Labels

Community, want to contribute (PRs initiated from Community)

9 participants