
[ckpt] feat: add kimi ckpt engine backend #4954

Open
kip-cxj wants to merge 11 commits into verl-project:main from kip-cxj:kimi_ckpt_engine

Conversation

@kip-cxj commented Jan 16, 2026

What does this PR do?

Building on the existing checkpoint-engine abstraction, this PR adds a kimi_ckpt_engine backend that supports both GPU and Huawei Ascend NPU.

Since communication domains must be established across trainer and rollout workers, this PR also depends on the newly added communication domain support in kimi_ckpt_engine.

TODO:

  • Add detailed performance testing results in checkpoint engine README.

Checklist Before Starting

  • Search for similar PRs. Paste at least one query link here: add Hccl ckpt engine backend
  • Format the PR title as [{modules}] {type}: {description} (This will be checked by the CI)
    • {modules} include fsdp, megatron, veomni, sglang, vllm, rollout, trainer, ci, training_utils, recipe, hardware, deployment, ray, worker, single_controller, misc, perf, model, algo, env, tool, ckpt, doc, data, cfg, reward
    • If this PR involves multiple modules, separate them with , like [megatron, fsdp, doc]
    • {type} is in feat, fix, refactor, chore, test
    • If this PR breaks any API (CLI arguments, config, function signature, etc.), add [BREAKING] to the beginning of the title.
    • Example: [BREAKING][fsdp, megatron] feat: dynamic batching

Test

We have verified the functionality on both GPU and NPU. Performance benchmarks in a 32-NPU environment show promising results; however, due to a lack of available GPU resources, GPU performance data is still pending.

Checklist Before Submitting

Important

Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

Contributor

@gemini-code-assist bot left a comment


Code Review

This pull request introduces a new checkpoint engine backend, kimi_ckpt_engine, designed to support both GPU and Huawei Ascend NPU environments. The implementation is comprehensive, including the core engine logic, integration with the existing checkpointing framework, and a new test suite. My review focuses on ensuring correctness and maintainability. I've identified a critical thread-safety issue in the weight sending logic that could lead to data corruption. Additionally, I've suggested a minor but important rename in the test suite to improve clarity.

with concurrent.futures.ThreadPoolExecutor(max_workers=32) as executor:
    futures = [
        executor.submit(
            offload_cpu,
Collaborator


Why do we need to offload the model to CPU first? Does the kimi ckpt engine support D2D?

Author


The kimi_checkpoint_engine only supports reading weights from the CPU. It first registers all weights and then all-gathers the metadata across all ranks.
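
For illustration, a minimal sketch of what this register-then-gather step could look like on the trainer side; the function and field names below are hypothetical placeholders, not the actual kimi_ckpt_engine API:

```python
import torch
import torch.distributed as dist

# Hypothetical sketch (not the kimi_ckpt_engine API): register weights on
# CPU, then all-gather lightweight metadata so every rank learns where each
# shard lives before any payload is transferred.
def register_and_gather_metadata(state_dict: dict[str, torch.Tensor]) -> list:
    # The engine reads weights from CPU only, so offload first.
    cpu_weights = {name: t.detach().cpu() for name, t in state_dict.items()}

    # Metadata is small: just names, shapes, and dtypes per rank.
    local_meta = {
        "rank": dist.get_rank(),
        "tensors": {name: (tuple(t.shape), str(t.dtype)) for name, t in cpu_weights.items()},
    }

    # All-gather the metadata across all ranks; the tensors themselves stay put.
    all_meta = [None] * dist.get_world_size()
    dist.all_gather_object(all_meta, local_meta)
    return all_meta
```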

@wuxibin89 mentioned this pull request Jan 21, 2026
kip-cxj and others added 2 commits February 3, 2026 20:05
@wuxibin89
Collaborator

Please format code first.

|nixl|NIXL|all_gather+ring p2p|Various transport backends (D2D, H2H, H2D, etc.)<br>- UCX<br>- UCCL<br>- Mooncake|Medium/High|High: dynamically adjust ring topology|Off-policy training<br>- Trainer/rollout disaggregated<br>- Elastic rollout<br>- Rollout fault tolerance<br>- Heterogeneous hardware rollout|
|kimi_ckpt_engine|MOONCAKE+NCCL/HCCL|p2p+broadcast|NVIDIA/Ascend|High|Low: rebuild communication group|Off-policy training<br>- Trainer/rollout disaggregated<br>- Save checkpoint each time|

PS: kimi_ckpt_engine first offloads all weights to the CPU. Then, using the Mooncake transfer engine, the weights are transmitted via P2P to a designated rollout worker, which then broadcasts them to all other rollout workers. This mode requires the P2P feature of checkpoint_engine; please ensure you have installed it via pip install 'checkpoint-engine[p2p]' and that your version is 0.4.0 or higher.
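
To make that flow concrete, here is a rough sketch of the three stages (offload → P2P to one rollout worker → broadcast); all names are illustrative placeholders, and the Mooncake P2P call is stubbed out rather than the real checkpoint-engine API:

```python
import torch
import torch.distributed as dist

# Illustrative placeholder, not the checkpoint-engine API.
def push_weights_to_rollout(state_dict: dict[str, torch.Tensor], src_rank: int = 0) -> None:
    # Stage 1: offload all trainer weights to CPU (the engine reads from CPU only).
    cpu_weights = {name: t.detach().cpu() for name, t in state_dict.items()}

    # Stage 2 (stubbed): in the real backend, the Mooncake transfer engine moves
    # cpu_weights via P2P from the trainer to the rollout worker at src_rank.

    # Stage 3: that rollout worker broadcasts each tensor to all other rollout
    # workers over NCCL (GPU) or HCCL (Ascend NPU).
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    for tensor in cpu_weights.values():
        buf = tensor.to(device)  # collective ops expect device tensors under NCCL/HCCL
        dist.broadcast(buf, src=src_rank)
```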
Collaborator

Could you provide more details about how kimi_ckpt_engine works? For example, add a diagram showing the communication topology between trainer and rollout workers.
