
[Model Runner] Refactor execute_model for GPU async scheduling #6176

Merged
zhoutianzi666 merged 10 commits into PaddlePaddle:develop from Sunny-bot1:not_need_stop on Jan 28, 2026

Conversation

Sunny-bot1 (Collaborator) commented Jan 22, 2026

Motivation

To enable GPU async scheduling in the execute_model stage, the original synchronous execution flow is split into three phases: preprocessing plus model execution, postprocessing, and token_id return. In the postprocessing phase, not_need_stop and sampled_token_ids are transferred via asynchronous GPU-to-CPU copies, and the save_output phase performs the necessary synchronization with a CUDA event. This reduces blocking at the scheduling layer and improves CPU/GPU parallelism.

Modifications

Split execute_model into three stages (logic unchanged; see the sketch below):

  1. _preprocess_and_execute_model
  2. _postprocess
  3. _save_model_output (runs only for non-MTP, non-pooling models)
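
A structural sketch of the resulting control flow, not a verbatim excerpt from this PR; the is_mtp and is_pooling attributes are hypothetical stand-ins for the actual non-MTP / non-pooling checks:

    class GPUModelRunner:
        def execute_model(self, model_forward_batch=None):
            # Phase 1: prepare inputs and launch the forward pass.
            model_output = self._preprocess_and_execute_model(model_forward_batch)
            # Phase 2: sampling and other postprocessing on the GPU.
            sampler_output = self._postprocess(model_output)
            # Phase 3: return token_ids; runs only for non-MTP,
            # non-pooling models.
            if not (self.is_mtp or self.is_pooling):
                self._save_model_output(sampler_output)
            return sampler_output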

Add an execute_model_overlap interface (not active in this PR; preparation only; see the sketch below):

  1. _preprocess_and_execute_model (current batch)
  2. _save_model_output (last batch; runs only for non-MTP, non-pooling models)
  3. _postprocess (current batch)
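
Sketched in the same style, the overlap variant reorders the stages so the host-side save of the previous batch overlaps with the current batch's GPU work; the _last_sampler_output attribute is a hypothetical name for whatever state the runner keeps between calls:

        def execute_model_overlap(self, model_forward_batch=None):
            # 1. Current batch: preprocess and launch the forward pass.
            model_output = self._preprocess_and_execute_model(model_forward_batch)
            # 2. Last batch: save its output on the host while the GPU
            #    is still busy (non-MTP, non-pooling only).
            if self._last_sampler_output is not None and not (self.is_mtp or self.is_pooling):
                self._save_model_output(self._last_sampler_output)
            # 3. Current batch: postprocess; its save happens on the next call.
            self._last_sampler_output = self._postprocess(model_output)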

Additions to share_inputs (see the sketch below):

  1. share_inputs["not_need_stop"].pin_memory()
  2. share_inputs["sampled_token_ids"].pin_memory()
  3. share_inputs["not_need_stop_device"]
  4. Move the device-to-host (DtoH) copies of not_need_stop and sampled_token_ids to the Python side: they are issued at the end of _postprocess and synchronized via a CUDA event in _save_model_output
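
A minimal sketch of the pattern behind these additions. The buffer names mirror share_inputs; paddle.device.cuda.Event is Paddle's public event API, while Tensor._copy_to is one (internal) way to issue a non-blocking DtoH copy and may differ from the exact call used in this PR:

    import paddle

    # Host-side flags live in pinned memory so DtoH copies can run
    # asynchronously; not_need_stop_device is the GPU-resident copy.
    not_need_stop = paddle.zeros([1], dtype="bool").pin_memory()
    not_need_stop_device = paddle.zeros([1], dtype="bool")

    copy_event = paddle.device.cuda.Event()

    def postprocess_tail(sampled_token_ids_device):
        # End of _postprocess: enqueue the GPU -> CPU copies on the
        # current stream without blocking the Python thread.
        sampled_token_ids = sampled_token_ids_device._copy_to(
            paddle.CUDAPinnedPlace(), False  # blocking=False
        )
        # not_need_stop_device is copied back to its pinned twin the same way.
        copy_event.record()
        return sampled_token_ids

    def save_model_output(sampled_token_ids):
        # In _save_model_output: wait only when the host actually needs
        # the values, right before they are read and returned.
        copy_event.synchronize()
        return sampled_token_ids.numpy()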

Add get_stop and set_stop custom ops (see the sketch below):

  • Because a pin_memory tensor turns into a GPU tensor once it is accessed from the Python side (why?), two custom ops, get_stop and set_stop, are added to access not_need_stop through its raw pointer
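
A short sketch of the Python-side usage; the import path follows FastDeploy's usual custom-op layout but is an assumption here:

    # Hypothetical import path; the ops are built from custom_ops/gpu_ops/set_stop.cu.
    from fastdeploy.model_executor.ops.gpu import get_stop, set_stop

    # Writing through the op touches only the underlying buffer, so the
    # tensor keeps its pinned place; `share_inputs["not_need_stop"][0] = True`
    # would silently turn it into a GPU tensor.
    set_stop(share_inputs["not_need_stop"], True)

    # Reads likewise go through the raw pointer instead of Python indexing.
    not_need_stop = get_stop(share_inputs["not_need_stop"])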

Usage or Command

Accuracy Tests

Checklist

  • Add at least a tag in the PR title.
    • Tag list: [FDConfig], [APIServer], [Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code; run pre-commit before committing.
  • Add unit tests. Please write the reason in this PR if no unit tests.
  • Provide accuracy results.
  • If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

paddle-bot bot commented Jan 22, 2026

Thanks for your contribution!

Sunny-bot1 changed the title from "[Model Runner] Support not_need_gpu async memcopy" to "[Model Runner] Support not_need_stop async memcopy" on Jan 22, 2026
Jiang-Jia-Jun requested a review from Copilot on January 23, 2026
Copilot AI (Contributor) left a comment

Pull request overview

This PR optimizes the memory copy of the not_need_stop flag by introducing asynchronous memory copies to improve performance. The changes maintain a separate copy of the not_need_stop flag on the GPU and use pinned memory plus custom CUDA ops to manage data transfers between the CPU and GPU.

Changes:

  • Added new CUDA ops get_stop and set_stop to manage the stop flag on the GPU
  • Added a not_need_stop_gpu field to ModelOutputData to support the GPU-side stop flag
  • Modified update_inputs_v1.cu to remove the synchronous memory copy logic
  • Updated the memory allocation strategy to use pinned memory for asynchronous operations

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 8 comments.

File | Description
custom_ops/gpu_ops/set_stop.cu | New CUDA op file implementing get_stop and set_stop for reading and setting the GPU-side stop flag
custom_ops/gpu_ops/update_inputs_v1.cu | Removed the synchronous GPU-CPU memory copy; not_need_stop is now used directly on the GPU
custom_ops/setup_ops.py | Added the new set_stop.cu file to the build list
fastdeploy/worker/output.py | Added the not_need_stop_gpu field to the ModelOutputData class
fastdeploy/worker/gpu_model_runner.py | Imports and uses the new get_stop and set_stop ops; initializes the pinned-memory and GPU tensors
fastdeploy/model_executor/pre_and_post_process.py | Updated postprocessing to use the GPU-side stop flag and added the asynchronous memory copy

codecov-commenter commented Jan 27, 2026

Codecov Report

❌ Patch coverage is 65.45455% with 19 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@3837841).

Files with missing lines | Patch % | Lines
fastdeploy/worker/gpu_model_runner.py | 69.04% | 12 Missing and 1 partial ⚠️
fastdeploy/model_executor/pre_and_post_process.py | 45.45% | 4 Missing and 2 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #6176   +/-   ##
==========================================
  Coverage           ?   66.96%           
==========================================
  Files              ?      384           
  Lines              ?    50769           
  Branches           ?     7921           
==========================================
  Hits               ?    33996           
  Misses             ?    14290           
  Partials           ?     2483           
Flag | Coverage Δ
GPU | 66.96% <65.45%> (?)



Sunny-bot1 changed the title from "[Model Runner] Support not_need_stop async memcopy" to "[Model Runner] Refactor execute_model for GPU async scheduling" on Jan 27, 2026
On custom_ops/gpu_ops/set_stop.cu:

#include "helper.h"

paddle::Tensor GetStop(paddle::Tensor& not_need_stop) {
  bool* not_need_stop_data = const_cast<bool*>(not_need_stop.data<bool>());
A collaborator commented on this snippet:

We will follow up with the framework team to fix this later.

ming1753 (Collaborator) left a comment:

LGTM

# Transmit the model's output and stop generation signal via message queue.
# In the future, we will abandon this approach.
if envs.FD_USE_GET_SAVE_OUTPUT_V1:
    if save_each_rank or model_output.mp_rank == 0:
A collaborator commented on this snippet:

If we go through V1, will there be a synchronization issue here?

Sunny-bot1 (Author) replied:

Not tested yet; if the operation stays on the CPU, in theory there should be no synchronization issue.

model_output.seq_lens_decoder,
model_output.step_idx,
)
share_inputs["preempted_idx"][:] = 0
A collaborator commented on this snippet:

Why was this line added here?

Sunny-bot1 (Author) replied:

It only moved: this reset used to run at the end of post_process. Now that save_output has been factored out, the reset follows it; otherwise it would interfere with the scheduler's preemption logic.

self._process_mm_features(req_dicts)
if has_prefill_task or has_decode_task:
-    self.share_inputs["not_need_stop"][0] = True
+    set_stop(self.share_inputs["not_need_stop"], True)
A collaborator commented on this snippet:

Is this change because modifying the tensor directly by index would change its cpu/gpu place?

Sunny-bot1 (Author) replied:

Yes, both direct writes and direct reads change it; we will consult the framework team about this later.
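
A minimal repro of the pitfall discussed in this thread; the place change is the behavior reported above, not independently verified here:

    import paddle

    flag = paddle.zeros([1], dtype="bool").pin_memory()
    print(flag.place)  # CUDAPinnedPlace

    # Per the discussion, indexing the tensor from Python rewrites it as
    # a GPU tensor, silently defeating the pinned-memory setup.
    flag[0] = True
    print(flag.place)  # now a CUDA place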

EmmonsCurse (Collaborator) left a comment:

LGTM for skipping coverage~

zhoutianzi666 merged commit 27f8799 into PaddlePaddle:develop on Jan 28, 2026
39 of 46 checks passed