
Conversation

@ddchenhao66
Collaborator

@ddchenhao66 ddchenhao66 commented Nov 18, 2025

Motivation

Add XPU support for PD (prefill-decode) disaggregation.

Modifications

  • Add the init_signal_layerwise operator and revise the interface of the open_shm_and_get_meta_signal operator.
  • Add the get_peer_mem_addr operator to support XPU RDMA address-space registration.
  • Update the v0-scheduler insertion logic in xpu_model_runner to handle the PD disaggregation case; add the XPU create_kv_signal_sender/destroy_kv_signal_sender operators to write shared variables asynchronously.
  • Force-reset XPU_VISIBLE_DEVICES when prefix_cache_manager launches the cache process, fixing startup failures when the service runs on a card other than device 0.
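The XPU_VISIBLE_DEVICES reset described in the last bullet could be sketched as follows. This is a minimal illustration, not the PR's actual code: `launch_cache_process` and `_cache_worker` are hypothetical names, and the real implementation lives in prefix_cache_manager.

```python
import os
import multiprocessing as mp

def _cache_worker(device_id: int) -> None:
    # Inside the cache process, the runtime only sees the card exposed
    # via XPU_VISIBLE_DEVICES, so local device index 0 maps onto it.
    print("cache process sees XPU_VISIBLE_DEVICES =",
          os.environ.get("XPU_VISIBLE_DEVICES"))

def launch_cache_process(device_id: int) -> mp.Process:
    # Force-reset XPU_VISIBLE_DEVICES so the child binds to the intended
    # card even when the service itself was started on a non-zero card.
    env_backup = os.environ.get("XPU_VISIBLE_DEVICES")
    os.environ["XPU_VISIBLE_DEVICES"] = str(device_id)
    try:
        proc = mp.Process(target=_cache_worker, args=(device_id,))
        proc.start()  # child inherits the overridden environment
    finally:
        # Restore the parent's original view of the environment.
        if env_backup is None:
            os.environ.pop("XPU_VISIBLE_DEVICES", None)
        else:
            os.environ["XPU_VISIBLE_DEVICES"] = env_backup
    return proc
```

The key point is that the override happens before the child process starts and is undone in the parent afterwards, so only the cache process is affected.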

Usage or Command

Accuracy Tests

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [FDConfig], [APIServer], [Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code and run pre-commit before committing.
  • Add unit tests. If no unit tests are added, please state the reason in this PR.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@paddle-bot

paddle-bot bot commented Nov 18, 2025

Thanks for your contribution!

@CLAassistant

CLAassistant commented Nov 18, 2025

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


ddchenhao66 does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

@ddchenhao66 ddchenhao66 changed the title from "XPU support pd PD disaggregation with V0 scheduler" to "XPU support pd PD disaggregation" Nov 19, 2025
@juncaipeng juncaipeng requested a review from Copilot November 19, 2025 12:21
Copilot finished reviewing on behalf of juncaipeng November 19, 2025 12:23
Contributor

Copilot AI left a comment


Pull Request Overview

This PR adds XPU hardware support for the PD (prefill-decode) disaggregation feature, which splits the prefill and decode phases across different compute nodes. The implementation mirrors the existing CUDA/GPU support by adding XPU-specific code paths throughout the attention, cache management, and worker execution layers.

  • Extends attention backends to support XPU with PD disaggregation modes ("per_chunk" and "per_query")
  • Adds XPU signal handling and inter-process communication for KV cache coordination
  • Refactors platform-specific operation imports to support both CUDA and XPU through a unified interface
  • Updates cache manager to handle XPU-specific memory addressing and device visibility
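The "unified interface" mentioned above could be dispatched along the lines of the following sketch. The dispatch table and the stand-in kernels are illustrative assumptions, not FastDeploy's actual module layout; only the operator name init_signal_layerwise comes from the PR.

```python
# Sketch of routing a signal op to a platform-specific kernel.
# The two kernel functions below are stand-ins for the real custom ops.

def _cuda_init_signal_layerwise(meta, layer_name):
    # Placeholder for the CUDA custom op.
    return ("cuda", layer_name)

def _xpu_init_signal_layerwise(meta, layer_name):
    # Placeholder for the XPU custom op added by this PR.
    return ("xpu", layer_name)

_INIT_SIGNAL_LAYERWISE = {
    "cuda": _cuda_init_signal_layerwise,
    "xpu": _xpu_init_signal_layerwise,
}

def init_signal_layerwise(meta, layer_name, platform="cuda"):
    """Route the call to the kernel registered for the runtime platform."""
    return _INIT_SIGNAL_LAYERWISE[platform](meta, layer_name)
```

With this shape, callers such as the attention backends stay platform-agnostic and only the dispatch table needs a new entry per hardware target.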

Reviewed Changes

Copilot reviewed 16 out of 16 changed files in this pull request and generated 12 comments.

Summary per file:
  • fastdeploy/worker/xpu_model_runner.py: Adds PD disaggregation mode support in the XPU worker, including decode-node handling and KV signal sender lifecycle management
  • fastdeploy/model_executor/layers/attention/xpu_attn_backend.py: Implements the XPU attention backend with PD disaggregation initialization and signal handling
  • fastdeploy/model_executor/layers/attention/utils.py: Adds support for the XPU device visibility environment variable (XPU_VISIBLE_DEVICES)
  • fastdeploy/model_executor/forward_meta.py: Adds a kv_signal_sender field to XPUForwardMeta for PD disaggregation
  • fastdeploy/model_executor/layers/attention/ops/*.py: Adds XPU platform branches to the signal operation wrappers
  • fastdeploy/cache_manager/*.py: Refactors ops imports to a platform-agnostic interface and adds XPU memory address handling
  • custom_ops/xpu_ops/src/ops/*.cc: Implements XPU-specific C++ operations for signal handling and shared memory
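The KV signal sender lifecycle management noted for xpu_model_runner.py pairs the create_kv_signal_sender and destroy_kv_signal_sender ops. A hedged sketch of such pairing, using a context manager and a fake op object so the example runs anywhere (the real ops are XPU custom kernels, and kv_signal_sender here is a hypothetical wrapper name):

```python
from contextlib import contextmanager

class _FakeSenderOps:
    """Stand-in for the XPU custom ops, used only to make this sketch runnable."""
    def __init__(self):
        self.live = set()

    def create_kv_signal_sender(self):
        handle = object()
        self.live.add(handle)
        return handle

    def destroy_kv_signal_sender(self, handle):
        self.live.discard(handle)

ops = _FakeSenderOps()

@contextmanager
def kv_signal_sender():
    # Create the sender once, and guarantee destruction even if the
    # decode loop raises, so no signal-sender handle leaks.
    handle = ops.create_kv_signal_sender()
    try:
        yield handle
    finally:
        ops.destroy_kv_signal_sender(handle)
```

Tying destruction to a `finally` (or an equivalent shutdown hook in the model runner) is what "lifecycle management" amounts to here: the handle must not outlive the worker that created it.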

@ddchenhao66 ddchenhao66 changed the title from "XPU support pd PD disaggregation" to "XPU support PD disaggregation" Nov 20, 2025
@ddchenhao66 ddchenhao66 changed the title from "XPU support PD disaggregation" to "[PD Disaggregation][XPU] Add XPU support for PD disaggregation" Nov 20, 2025
DDDivano
DDDivano previously approved these changes Nov 20, 2025
hong19860320
hong19860320 previously approved these changes Nov 20, 2025
Collaborator

@hong19860320 hong19860320 left a comment


LGTM

yuanlehome
yuanlehome previously approved these changes Nov 20, 2025
@Jiang-Jia-Jun Jiang-Jia-Jun merged commit e70e227 into PaddlePaddle:develop Nov 21, 2025
11 of 17 checks passed

6 participants