[ckpt] feat: add kimi ckpt engine backend #4954
kip-cxj wants to merge 11 commits into verl-project:main from
Conversation
Code Review
This pull request introduces a new checkpoint engine backend, kimi_ckpt_engine, designed to support both GPU and Huawei Ascend NPU environments. The implementation is comprehensive, including the core engine logic, integration with the existing checkpointing framework, and a new test suite. My review focuses on ensuring correctness and maintainability. I've identified a critical thread-safety issue in the weight sending logic that could lead to data corruption. Additionally, I've suggested a minor but important rename in the test suite to improve clarity.
```python
with concurrent.futures.ThreadPoolExecutor(max_workers=32) as executor:
    futures = [
        executor.submit(
            offload_cpu,
```
Why do we need to offload the model to the CPU first? Does the kimi ckpt engine support D2D?
The kimi_checkpoint_engine only supports reading weights from the CPU. It first registers all weights and then all-gathers their metadata across all ranks.
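To make that flow concrete, here is a hedged sketch of the register-then-all-gather step described above (the metadata fields and function name are assumptions):

```python
import torch
import torch.distributed as dist

def register_and_gather_meta(cpu_weights: dict[str, torch.Tensor]) -> list[dict]:
    # Each rank publishes metadata for the CPU-resident weights it owns...
    local_meta = {
        name: {"shape": tuple(t.shape), "dtype": str(t.dtype)}
        for name, t in cpu_weights.items()
    }
    # ...then all-gathers it so every rank knows the global weight layout.
    gathered: list[dict] = [None] * dist.get_world_size()
    dist.all_gather_object(gathered, local_meta)
    return gathered
```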
Please format the code first.
| nixl | NIXL | all_gather + ring p2p | Various transport backends (D2D, H2H, H2D, etc.)<br>- UCX<br>- UCCL<br>- Mooncake | Medium/High | High: dynamically adjusts ring topology | Off-policy training<br>- Trainer/rollout disaggregated<br>- Elastic rollout<br>- Rollout fault tolerance<br>- Heterogeneous hardware rollout |
| kimi_ckpt_engine | MOONCAKE+NCCL/HCCL | p2p + broadcast | NVIDIA/Ascend | High | Low: rebuilds communication group | Off-policy training<br>- Trainer/rollout disaggregated<br>- Save checkpoint each time |
PS: kimi_ckpt_engine first offloads all weights to the CPU. Then, using the Mooncake transfer engine, these weights are transmitted via P2P to a specific rollout worker, followed by a broadcast to all other rollout workers. This mode requires the P2P feature of checkpoint_engine. Please ensure you have installed it via `pip install 'checkpoint-engine[p2p]'` and that your version is 0.4.0 or higher.
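A rough sketch of that two-stage fan-out, assuming an initialized process group among the rollout workers; `p2p_receive_from_trainer` is a placeholder for the Mooncake P2P transfer, not checkpoint-engine's real API:

```python
import torch
import torch.distributed as dist

def p2p_receive_from_trainer(name: str) -> torch.Tensor:
    # Placeholder: the real engine pulls the named weight from the
    # trainer's CPU buffer over Mooncake P2P.
    raise NotImplementedError

def distribute_weight(name: str, shape, dtype, src_rank: int = 0) -> torch.Tensor:
    if dist.get_rank() == src_rank:
        # One designated rollout worker receives the weight via P2P.
        tensor = p2p_receive_from_trainer(name)
    else:
        tensor = torch.empty(shape, dtype=dtype, device="cuda")  # "npu" on Ascend
    # Fan the weight out to every other rollout worker over NCCL/HCCL.
    dist.broadcast(tensor, src=src_rank)
    return tensor
```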
Could you provide more details about how kimi_ckpt_engine works? For example, add a diagram showing the communication topology between trainer and rollout workers.
What does this PR do?
Based on the existing checkpoint-engine abstraction, this PR adds a kimi_ckpt_engine backend to support both GPU and Huawei Ascend NPU.
Since establishing communication domains across trainer and rollout workers is required, this PR also depends on the newly added communication domain support in kimi_ckpt_engine.
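For illustration, a minimal sketch of such a communication domain, assuming a dedicated process group spanning trainer and rollout ranks (the helper name and group composition are assumptions, not the PR's API):

```python
import torch.distributed as dist

def build_update_group(trainer_ranks: list[int], rollout_ranks: list[int]):
    # A dedicated group lets weight updates flow between the two roles
    # without interfering with the training collectives.
    return dist.new_group(ranks=trainer_ranks + rollout_ranks)
```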
TODO:
Checklist Before Starting
- Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `veomni`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`, `cfg`, `reward`, like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API, prepend `[BREAKING]` to the beginning of the title, e.g. `[BREAKING][fsdp, megatron] feat: dynamic batching`

Test
We have verified the functionality on both GPU and NPU. Performance benchmarks in a 32-NPU environment show promising results; however, due to a lack of available GPU resources, performance data for the GPU is still pending.
Checklist Before Submitting
Important
Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.
- Apply the pre-commit checks: `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- Once your PR is ready for CI, send a message in the `ci-request` channel in the `verl` Slack workspace. (If not accessible, please try the Feishu group (飞书群).)
- If your PR changes the `recipe` submodule, please also update the reference to the submodule commit via `git submodule update --remote` or `cd recipe && git pull origin main`.