
[Intel HPU] enable chunked prefill#5903

Merged
EmmonsCurse merged 2 commits into PaddlePaddle:develop from fmiao2372:develop_chunked_prefill
Jan 6, 2026
Conversation

@fmiao2372 (Contributor) commented on Jan 6, 2026

Motivation

Enable chunked prefill on Intel HPU.

💡 If this PR is a Cherry Pick, the PR title needs to follow the format by adding the [Cherry-Pick] label at the very beginning and appending the original PR ID at the end. For example, [Cherry-Pick][CI] Add check trigger and logic(#5191)


Depends on PaddlePaddle/PaddleCustomDevice#2324.

Modifications

  • HPU attention backend
  • HPU forward metadata
  • HPU model runner

Usage or Command

Use these parameters to enable chunked prefill:

  --enable-chunked-prefill
  --max-num-batched-tokens 4096
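
For illustration, a minimal offline-inference sketch with the same settings (an assumption: FastDeploy's Python LLM entrypoint accepts engine arguments mirroring these CLI flags; the model path and prompt are placeholders):

    # Hedged sketch: assumes fastdeploy.LLM accepts engine arguments that mirror
    # the CLI flags above; names and defaults may differ in the actual release.
    from fastdeploy import LLM, SamplingParams

    llm = LLM(
        model="ERNIE-4.5-21B-A3B-Paddle",   # placeholder model path
        enable_chunked_prefill=True,        # mirrors --enable-chunked-prefill
        max_num_batched_tokens=4096,        # mirrors --max-num-batched-tokens 4096
    )
    sampling = SamplingParams(max_tokens=64)
    for output in llm.generate(["Summarize chunked prefill in one sentence."], sampling):
        print(output)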

Accuracy Tests

ERNIE-4.5-21B-A3B-Paddle
Accuracy: 0.920
Invalid: 0.001
Latency: 370.744 s

Checklist

  • [Done] Add at least a tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • [Done] Format your code, run pre-commit before commit.
  • [Done] Add unit tests. Please write the reason in this PR if no unit tests.
    • Verified via local tests.
  • [Done] Provide accuracy results.
  • If the current PR is being submitted to the release branch, make sure it has also been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

Copilot AI review requested due to automatic review settings January 6, 2026 05:18
paddle-bot bot commented on Jan 6, 2026

Thanks for your contribution!

paddle-bot bot added the "contributor (External developers)" label on Jan 6, 2026
Copilot AI (Contributor) left a comment

Pull request overview

This PR enables chunked prefill support for the Intel HPU platform, allowing prefill operations to be split into smaller chunks when processing long sequences alongside decode operations.

Key changes:

  • Enhanced HPU attention backend to support mixed encoder/decoder execution modes
  • Modified forward metadata structure to separate encoder and decoder state management
  • Added chunked prefill warmup and resource allocation logic
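
To make the overview concrete (an illustrative sketch, not code from this PR): chunked prefill splits one long prompt's prefill into budget-sized, block-aligned chunks that can be scheduled alongside decode requests.

    # Illustrative sketch only (not the PR's implementation): split a prompt's
    # prefill into chunks bounded by the per-step token budget, each chunk
    # aligned to the KV-cache block size.
    def split_prefill(prompt_len: int, token_budget: int, block_size: int):
        chunk = max(block_size, (token_budget // block_size) * block_size)
        start = 0
        while start < prompt_len:
            end = min(start + chunk, prompt_len)
            yield start, end
            start = end

    # A 10_000-token prompt with a 4096-token budget and 128-token blocks is
    # prefilled over three steps, each of which can share a batch with decodes.
    print(list(split_prefill(10_000, 4096, 128)))  # [(0, 4096), (4096, 8192), (8192, 10000)]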

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 5 comments.

  • fastdeploy/worker/hpu_model_runner.py: Implements chunked prefill logic, adds mixed batch warmup, separates encoder/decoder metadata handling
  • fastdeploy/model_executor/layers/backends/intel_hpu/attention/hpu_attn_backend.py: Adds forward_mixed method to handle concurrent encoder/decoder batches, updates forward_extend and forward_decode to use separated metadata
  • fastdeploy/model_executor/forward_meta.py: Restructures HPUForwardMeta to maintain separate encoder/decoder state with dedicated fields for rotary embeddings, block metadata, and batch information
  • fastdeploy/engine/sched/resource_manager_v1.py: Adds HPU-specific token budget alignment logic to ensure chunk sizes are multiples of block_size (a sketch of this alignment follows below)
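
The alignment mentioned for resource_manager_v1.py can be pictured as rounding the per-step token budget to a whole number of KV-cache blocks so that every prefill chunk fills complete blocks on HPU. A minimal sketch (the helper name and the rounding direction are assumptions, not the PR's code):

    # Hypothetical helper illustrating the block_size alignment described above.
    def align_token_budget(token_budget: int, block_size: int) -> int:
        aligned = (token_budget // block_size) * block_size
        return max(aligned, block_size)  # never shrink below a single block

    assert align_token_budget(4096, 128) == 4096   # already a block multiple
    assert align_token_budget(4100, 128) == 4096   # rounded down to a block multiple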

self.share_inputs["prompt_lens"][idx : idx + 1] = len(input_ids)
self.share_inputs["is_block_step"][idx : idx + 1] = False
- # self.share_inputs["is_chunk_step"][idx : idx + 1] = prefill_end_index < len(input_ids)
+ self.share_inputs["is_chunk_step"][idx : idx + 1] = prefill_end_index < len(input_ids)
Copilot AI commented on Jan 6, 2026:

This line was previously commented out. Please ensure all corresponding references to is_chunk_step are properly updated and tested throughout the codebase, particularly in functions that consume this metadata.

fmiao2372 (Contributor, Author) replied:

not an issue

Comment on lines 236 to 241
Copilot AI commented on Jan 6, 2026:

The variable name len shadows the built-in Python function. Consider renaming to seq_chunk_size or encoder_chunk_len for clarity and to avoid shadowing the built-in.

Suggested change

Original:
    len = int((tmp_out.shape[0] - total_batch_decoder) / total_batch_encoder)
    position = 0
    for i in range(batch_ids_encoder.shape[0]):
        encoder_id = batch_ids_encoder[i].item()
        seq_len = seq_lens_encoder[encoder_id].item()
        output_data[position] = tmp_out[i * len + seq_len - 1]

Suggested:
    encoder_chunk_len = int((tmp_out.shape[0] - total_batch_decoder) / total_batch_encoder)
    position = 0
    for i in range(batch_ids_encoder.shape[0]):
        encoder_id = batch_ids_encoder[i].item()
        seq_len = seq_lens_encoder[encoder_id].item()
        output_data[position] = tmp_out[i * encoder_chunk_len + seq_len - 1]

fmiao2372 (Contributor, Author) replied:

fixed

Comment on lines 236 to 245
Copilot AI commented on Jan 6, 2026:

The index calculation i - total_batch_decoder appears incorrect. When i iterates from 0 to batch_ids_decoder.shape[0] - 1, this will produce negative indices for early iterations. The correct index should likely be total_batch_encoder * len + i.

Suggested change

Original:
    len = int((tmp_out.shape[0] - total_batch_decoder) / total_batch_encoder)
    position = 0
    for i in range(batch_ids_encoder.shape[0]):
        encoder_id = batch_ids_encoder[i].item()
        seq_len = seq_lens_encoder[encoder_id].item()
        output_data[position] = tmp_out[i * len + seq_len - 1]
        position += 1
    for i in range(batch_ids_decoder.shape[0]):
        output_data[position] = tmp_out[i - total_batch_decoder]

Suggested:
    block_len = int((tmp_out.shape[0] - total_batch_decoder) / total_batch_encoder)
    position = 0
    for i in range(batch_ids_encoder.shape[0]):
        encoder_id = batch_ids_encoder[i].item()
        seq_len = seq_lens_encoder[encoder_id].item()
        output_data[position] = tmp_out[i * block_len + seq_len - 1]
        position += 1
    decoder_start = total_batch_encoder * block_len
    for i in range(batch_ids_decoder.shape[0]):
        output_data[position] = tmp_out[decoder_start + i]

fmiao2372 (Contributor, Author) replied:

not an issue

Copilot AI commented on Jan 6, 2026:

In the forward_mixed method's measurement mode branch for decoder, forward_meta.rotary_embs is used instead of forward_meta.rotary_embs_decoder. This inconsistency with the non-measurement branch (line 683) will cause incorrect rotary embeddings to be applied during measurement mode.

Suggested change

Original:
    forward_meta.rotary_embs,
Suggested:
    forward_meta.rotary_embs_decoder,

fmiao2372 (Contributor, Author) replied:

fixed

Copilot AI commented on Jan 6, 2026:

Line 376 sets forward_mode to ForwardMode.MIXED, then lines 377-378 conditionally set it again to the same value. The initial assignment on line 376 is redundant.

Suggested change

Remove the redundant initial assignment:
    forward_mode = ForwardMode.MIXED

fmiao2372 (Contributor, Author) replied:

fixed

@codecov-commenter

Codecov Report

❌ Patch coverage is 63.33333% with 11 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@ab553b3). Learn more about missing BASE report.

Files with missing lines | Patch % | Lines
fastdeploy/engine/sched/resource_manager_v1.py | 0.00% | 6 Missing ⚠️
fastdeploy/model_executor/forward_meta.py | 79.16% | 5 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #5903   +/-   ##
==========================================
  Coverage           ?   66.70%           
==========================================
  Files              ?      347           
  Lines              ?    44426           
  Branches           ?     6823           
==========================================
  Hits               ?    29636           
  Misses             ?    12609           
  Partials           ?     2181           
Flag | Coverage Δ
GPU | 66.70% <63.33%> (?)



@zoooo0820 (Collaborator) left a comment:

LGTM

@EmmonsCurse (Collaborator) left a comment:

LGTM for skipping coverage.

@EmmonsCurse merged commit 1ee285c into PaddlePaddle:develop on Jan 6, 2026
15 of 20 checks passed

Labels

contributor External developers
