
Conversation

@ddchenhao66 (Collaborator)

  1. Encapsulate the XPU/GPU hardware-specific code behind a wrapper so that the common code stays decoupled from hardware details (see the sketch after this list).
  2. Unify the definition of device_id used for XPU.
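
To illustrate the idea, here is a minimal sketch, not the code added by this PR: the shared prefix-cache logic calls a single hardware-agnostic entry point, and the XPU/GPU-specific kernels stay behind it. All names (`swap_cache`, `get_current_device`, the backend functions) and the device-string parsing used to derive one device_id are hypothetical placeholders, not interfaces from this repository.

```python
# Minimal sketch of a hardware-agnostic operator wrapper (hypothetical names,
# not the actual FastDeploy/Paddle interfaces touched by this PR).

def get_current_device() -> str:
    """Stand-in for a runtime query such as paddle.device.get_device()."""
    return "xpu:0"  # assumed value for illustration

def swap_cache_gpu(blocks: list, device_id: int) -> None:
    # GPU-specific kernel path (placeholder).
    print(f"GPU kernel path: device_id={device_id}, blocks={blocks}")

def swap_cache_xpu(blocks: list, device_id: int) -> None:
    # XPU-specific kernel path (placeholder).
    print(f"XPU kernel path: device_id={device_id}, blocks={blocks}")

def swap_cache(blocks: list) -> None:
    """Single entry point called by the common prefix-cache code."""
    device = get_current_device()          # e.g. "gpu:0" or "xpu:0"
    device_id = int(device.split(":")[1])  # one unified device_id definition
    if device.startswith("xpu"):
        swap_cache_xpu(blocks, device_id)
    else:
        swap_cache_gpu(blocks, device_id)

if __name__ == "__main__":
    swap_cache(blocks=[0, 1, 2])  # dispatches to the XPU path in this sketch
```

With a wrapper like this, the common code never branches on hardware itself, and device_id is computed in exactly one place instead of being redefined per backend.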

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.


ddchenhao66 does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
Have you already signed the CLA but the status is still pending? Let us recheck it.

@paddle-bot (bot) commented Oct 16, 2025

Thanks for your contribution!

yuanlehome previously approved these changes Oct 16, 2025
hong19860320 previously approved these changes Oct 16, 2025
@hong19860320 (Collaborator) left a comment:

LGTM

@ddchenhao66 changed the title from "[XPU] add interface op files and specify xpu device id definition" to "[XPU] abstract a hardware-agnostic operator wrapper for prefix cache and specify xpu device id definition" on Oct 16, 2025
@ddchenhao66 dismissed stale reviews from hong19860320 and yuanlehome via 0197387 on October 17, 2025 03:12
@hong19860320 (Collaborator) left a comment:
LGTM


@EmmonsCurse merged commit 14785eb into PaddlePaddle:develop on Oct 17, 2025
13 of 17 checks passed
5 participants