
@juncaipeng (Collaborator) commented Dec 1, 2025

Motivation

Add timestamps so that per-stage time consumption across the system can be analyzed.

Modifications

Record timestamps at every stage of the request lifecycle.

    arrival_time: Optional[float] = None  # api server receives request
    preprocess_start_time: Optional[float] = None  # preprocess start time in api server
    preprocess_end_time: Optional[float] = None  # preprocess end time in api server

    scheduler_recv_req_time: Optional[float] = None  # scheduler receives request and add to scheduler
    engine_get_req_time: Optional[float] = None  # engine gets request from scheduler
    ask_decode_resource_start_time: Optional[float] = None  # engine asks decode resource (only valid for prefill)
    ask_decode_resource_finish_time: Optional[float] = None  # engine has got decode resource (only valid for prefill)
    add_req_to_resource_manager_time: Optional[float] = None  # engine adds request to resource manager
    inference_start_time: Optional[float] = None  # requests are added into the engine work queue
    engine_recv_latest_token_time: Optional[float] = None  # receive the latest token from worker
    engine_recv_first_token_time: Optional[float] = None  # receive first token from worker
    wait_for_sending_cache_time: Optional[float] = None  # wait for sending cache (only valid for prefill)
    send_request_output_to_decode_time: Optional[float] = None  # send request_output to the decode instance (only valid for prefill)
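The fields above can be grouped into a dataclass and turned into per-stage durations. The sketch below uses a subset of the fields from this PR; the `derive_durations` helper and the stage names are hypothetical illustrations, not FastDeploy's actual API.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class RequestMetrics:
    """Subset of the timing fields listed above (the full PR has 20+ fields)."""
    arrival_time: Optional[float] = None
    preprocess_start_time: Optional[float] = None
    preprocess_end_time: Optional[float] = None
    scheduler_recv_req_time: Optional[float] = None
    engine_recv_first_token_time: Optional[float] = None

    def derive_durations(self):
        """Hypothetical helper: stage durations in seconds,
        None when a timestamp was not recorded."""
        def span(start, end):
            return end - start if start is not None and end is not None else None

        return {
            "preprocess": span(self.preprocess_start_time, self.preprocess_end_time),
            "schedule_wait": span(self.preprocess_end_time, self.scheduler_recv_req_time),
            "ttft": span(self.arrival_time, self.engine_recv_first_token_time),
        }


metrics = RequestMetrics(arrival_time=0.0, preprocess_start_time=0.01,
                         preprocess_end_time=0.05, scheduler_recv_req_time=0.06,
                         engine_recv_first_token_time=0.50)
print(metrics.derive_durations())
```

Making every field `Optional` with a `None` default keeps the dataclass usable on both prefill and decode nodes, where only a subset of the timestamps is ever recorded.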

Usage or Command

Add the collect_metrics=True field to the request.
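A request with the flag might look like the following. Only the collect_metrics field comes from this PR; the endpoint shape and the other fields are assumed to be a standard OpenAI-compatible chat request, and the model name is a placeholder.

```python
# Hypothetical payload for an OpenAI-compatible /v1/chat/completions
# endpoint served by FastDeploy.
payload = {
    "model": "default",  # placeholder model name
    "messages": [{"role": "user", "content": "hello"}],
    "stream": True,
    # Ask the server to attach RequestMetrics timestamps to stream chunks.
    "collect_metrics": True,
}
print(payload["collect_metrics"])
```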

Accuracy Tests

Covered by unit tests.

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code and run pre-commit before committing.
  • Add unit tests. If no unit tests are added, please state the reason in this PR.
  • Provide accuracy results.
  • If the current PR targets a release branch, make sure it has first been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

Copilot AI review requested due to automatic review settings on December 1, 2025 12:44
Copilot AI (Contributor) left a comment


Pull request overview

This PR refactors the request timing metrics infrastructure to support comprehensive performance analysis of splitwise (prefill/decode disaggregation) deployments. The changes centralize timing attributes from the Request class into a dedicated RequestMetrics dataclass, adding numerous timestamp fields to track the full lifecycle of requests across prefill and decode nodes.

Key Changes

  • Introduced a comprehensive RequestMetrics dataclass with 20+ timestamp fields tracking request flow through the system
  • Migrated timing attributes from Request class to the new metrics object
  • Added timestamp recording at critical points: scheduler receipt, resource allocation, prefill/decode handoff, and token generation
  • Changed default value of FD_ENABLE_CACHE_TASK from enabled to disabled

Reviewed changes

Copilot reviewed 10 out of 10 changed files in this pull request and generated 5 comments.

Summary per file:

    fastdeploy/engine/request.py: Added RequestMetrics dataclass with comprehensive timestamp fields and helper methods; removed timing attributes from the Request class
    fastdeploy/output/token_processor.py: Refactored to use the metrics object instead of direct timing attributes; updated token processing to record timing via metrics methods
    fastdeploy/engine/engine.py: Updated to record preprocess timing in the metrics object
    fastdeploy/engine/async_llm.py: Updated request preprocessing to use metrics.preprocess_start_time
    fastdeploy/engine/common_engine.py: Added timestamp recording at key lifecycle points (engine_get_req_time, ask_decode_resource times, inference_start_time); updated a comment to correctly indicate v0 scheduler usage
    fastdeploy/engine/sched/resource_manager_v1.py: Removed direct inference_start_time assignments; added metrics propagation for decode scenarios
    fastdeploy/engine/resource_manager.py: Removed direct inference_start_time assignment during resource allocation
    fastdeploy/entrypoints/openai/protocol.py: Added metrics field to ChatCompletionStreamResponse
    fastdeploy/entrypoints/openai/serving_chat.py: Updated to include metrics in streaming response chunks
    fastdeploy/envs.py: Changed FD_ENABLE_CACHE_TASK default from "1" to "0"
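The envs.py row above notes that the FD_ENABLE_CACHE_TASK default changed from "1" to "0". A minimal sketch of how such an environment switch is typically read is shown below; the helper name is hypothetical and not FastDeploy's actual envs.py code.

```python
import os


def fd_enable_cache_task() -> bool:
    # After this PR the feature defaults to off ("0"); users must set
    # FD_ENABLE_CACHE_TASK=1 explicitly to restore the previous behavior.
    return os.getenv("FD_ENABLE_CACHE_TASK", "0") == "1"


print(fd_enable_cache_task())
```

Because the default flips from enabled to disabled, deployments that relied on the old default would need to export the variable themselves.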

@paddle-bot bot commented Dec 1, 2025

Thanks for your contribution!

@codecov-commenter commented Dec 2, 2025

Codecov Report

❌ Patch coverage is 57.53425% with 62 lines in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@c83dc58). Learn more about missing BASE report.

Files with missing lines (patch coverage, missing lines):

    fastdeploy/engine/common_engine.py: 6.06%, 31 missing ⚠️
    fastdeploy/output/token_processor.py: 58.06%, 11 missing and 2 partials ⚠️
    fastdeploy/engine/request.py: 82.45%, 7 missing and 3 partials ⚠️
    fastdeploy/engine/sched/resource_manager_v1.py: 0.00%, 4 missing ⚠️
    fastdeploy/entrypoints/openai/serving_chat.py: 60.00%, 1 missing and 1 partial ⚠️
    fastdeploy/scheduler/splitwise_scheduler.py: 0.00%, 2 missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #5317   +/-   ##
==========================================
  Coverage           ?   59.08%           
==========================================
  Files              ?      327           
  Lines              ?    40651           
  Branches           ?     6168           
==========================================
  Hits               ?    24020           
  Misses             ?    14775           
  Partials           ?     1856           
Flag Coverage Δ
GPU 59.08% <57.53%> (?)

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

Copilot AI (Contributor) left a comment


Pull request overview

Copilot reviewed 21 out of 21 changed files in this pull request and generated 12 comments.

@EmmonsCurse (Collaborator) commented:

@juncaipeng The run_tests_with_coverage job has consistently failing unit tests; please rebase onto the develop branch and resubmit to resolve it.

@Jiang-Jia-Jun Jiang-Jia-Jun merged commit 80efe98 into PaddlePaddle:develop Dec 8, 2025
14 of 17 checks passed