
Conversation

@simondanielsson (Contributor) commented Sep 17, 2025

Purpose

Closes #25071.

Test Plan

  1. When using Whisper:
vllm serve openai/whisper-large-v3

logs should no longer mention "Chunked prefill is enabled with ...":

(APIServer pid=3140911) INFO 09-17 12:37:08 [scheduler.py:222] Chunked prefill is enabled with max_num_batched_tokens=8192.
(APIServer pid=3140911) INFO 09-17 12:37:10 [__init__.py:2790] Encoder-decoder models do not support chunked prefill nor prefix caching; disabling both.

Instead, expect simply:

(APIServer pid=3140911) INFO 09-17 12:37:10 [__init__.py:2790] Encoder-decoder models do not support chunked prefill nor prefix caching; disabling both.
  2. Should result in no behavioral changes to SchedulerConfig or VllmConfig. Verify with the new tests; a sketch of such a test follows.
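For illustration, a hedged sketch of what such a test might look like. The constructor call assumes SchedulerConfig can be built with defaults and accepts the is_encoder_decoder flag this PR introduces; check against the merged code before relying on it.

# Hedged sketch; the is_encoder_decoder keyword and default-constructibility
# of SchedulerConfig are assumptions about this PR's final API.
from vllm.config import SchedulerConfig

def test_encoder_decoder_disables_chunked_prefill():
    config = SchedulerConfig(is_encoder_decoder=True)
    assert not config.enable_chunked_prefill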

Test Result

  1. Command:
  • Tested on GPU: L4.
  • Output from "test" command:
(vllm) danielssonsimon@XXXXXX:~/code/vllm$ vllm serve openai/whisper-large-v3
INFO 09-17 18:43:30 [__init__.py:216] Automatically detected platform cuda.
(APIServer pid=49917) INFO 09-17 18:43:33 [api_server.py:1813] vLLM API server version 0.10.2rc3.dev169+ge3db5ebb6.d20250917
(APIServer pid=49917) INFO 09-17 18:43:33 [utils.py:328] non-default args: {'model_tag': 'openai/whisper-large-v3', 'model': 'openai/whisper-large-v3'}
(APIServer pid=49917) INFO 09-17 18:43:42 [__init__.py:707] Resolved architecture: WhisperForConditionalGeneration
(APIServer pid=49917) `torch_dtype` is deprecated! Use `dtype` instead!
(APIServer pid=49917) INFO 09-17 18:43:42 [__init__.py:1762] Using max model len 448
(APIServer pid=49917) INFO 09-17 18:43:43 [scheduler.py:197] Encoder-decoder models do not support chunked prefill nor prefix caching; disabling both.
Fetching 1 files: 100%|█████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 11915.64it/s]
  2. New tests pass locally.

Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing the test command.
  • The test results, such as a before/after comparison or e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@simondanielsson simondanielsson changed the title [Bug]: Clean up chunked prefill logging when using whisper [Bugfix]: Clean up chunked prefill logging when using whisper Sep 17, 2025

mergify bot commented Sep 17, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @simondanielsson.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

Comment on lines 96 to 102
is_encoder_decoder: bool = False
"""True if the model is an encoder-decoder model."""

Member

If this already exists in ModelConfig, why duplicate it here?

Contributor Author

True, we likely don't want to store it here as well.

Would an InitVar be sufficient here?
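A minimal sketch of the InitVar idea, using plain dataclasses as stand-ins for vLLM's actual config classes (field names are illustrative, not the PR's exact code):

from dataclasses import InitVar, dataclass, fields

@dataclass
class SchedulerConfig:
    enable_chunked_prefill: bool = True
    # Pseudo-field: passed to __post_init__ but never stored on the
    # instance, so the flag is not duplicated outside ModelConfig.
    is_encoder_decoder: InitVar[bool] = False

    def __post_init__(self, is_encoder_decoder: bool) -> None:
        if is_encoder_decoder:
            # Disable quietly at construction time, before any
            # "Chunked prefill is enabled ..." message would be logged.
            self.enable_chunked_prefill = False

config = SchedulerConfig(is_encoder_decoder=True)
assert not config.enable_chunked_prefill
assert "is_encoder_decoder" not in {f.name for f in fields(config)}

Because fields() excludes InitVar pseudo-fields, the flag is consumed once at construction and leaves no duplicate state behind.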

@hmellor (Member) commented Sep 18, 2025

The InitVar solution works.

However, in other cases like this (where two sibling configs interact) I've tended to perform those interactions in the parent's __post_init__, i.e. VllmConfig here. Would that work in this case?

Member

That's where I had it before this change, but we ended up with a confusing log message about the features being enabled, emitted from SchedulerConfig's __post_init__ before VllmConfig's __post_init__ disabled them again.
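To make the ordering concrete, a minimal sketch of the problem described above, again with plain dataclasses standing in for vLLM's real classes and a print standing in for the scheduler's log line:

from dataclasses import dataclass, field

@dataclass
class ModelConfig:
    is_encoder_decoder: bool = False

@dataclass
class SchedulerConfig:
    enable_chunked_prefill: bool = True

    def __post_init__(self) -> None:
        # Runs when the child is constructed, before the parent can react,
        # which is why the misleading "enabled" message appeared first.
        if self.enable_chunked_prefill:
            print("Chunked prefill is enabled with max_num_batched_tokens=8192.")

@dataclass
class VllmConfig:
    model_config: ModelConfig = field(default_factory=ModelConfig)
    scheduler_config: SchedulerConfig = field(default_factory=SchedulerConfig)

    def __post_init__(self) -> None:
        # Sibling interaction handled in the parent, after the fact:
        # by now SchedulerConfig.__post_init__ has already logged.
        if self.model_config.is_encoder_decoder:
            self.scheduler_config.enable_chunked_prefill = False

VllmConfig(model_config=ModelConfig(is_encoder_decoder=True))
# Prints the "enabled" line even though the feature ends up disabled.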

Contributor Author

Another option would be to emit the "Chunked prefill is enabled..." log from VllmConfig instead, but I'm not sure it makes sense to put it there.

Member

Ah I see, thank you for explaining. Let's stick with the InitVar.

@simondanielsson simondanielsson force-pushed the feature/clean-up-prefill-logging branch from fefc7ab to 4a48dc5 Compare September 18, 2025 13:03
@simondanielsson simondanielsson force-pushed the feature/clean-up-prefill-logging branch from 59b2a17 to 5e0a186 Compare September 26, 2025 20:40
@mergify mergify bot removed the needs-rebase label Sep 26, 2025
@russellb russellb enabled auto-merge (squash) September 26, 2025 20:42
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Sep 26, 2025
@russellb russellb disabled auto-merge September 26, 2025 20:43
@russellb russellb enabled auto-merge (squash) September 26, 2025 20:57
auto-merge was automatically disabled September 27, 2025 07:31

Head branch was pushed to by a user without write access

@simondanielsson simondanielsson force-pushed the feature/clean-up-prefill-logging branch 3 times, most recently from 2abc703 to b721f6c Compare September 29, 2025 18:32
@simondanielsson simondanielsson force-pushed the feature/clean-up-prefill-logging branch from b721f6c to 46594df Compare September 30, 2025 06:34
@simondanielsson (Contributor Author) commented:

@russellb conflicts are fixed now - should be good to go after CI. Thanks!

@hmellor hmellor enabled auto-merge (squash) September 30, 2025 07:30
@hmellor hmellor merged commit e23cacd into vllm-project:main Sep 30, 2025
45 checks passed
@simondanielsson simondanielsson deleted the feature/clean-up-prefill-logging branch September 30, 2025 08:36
pdasigi pushed a commit to pdasigi/vllm that referenced this pull request Oct 2, 2025
…roject#25075)

Signed-off-by: simondanielsson <simon.danielsson99@hotmail.com>
yewentao256 pushed a commit that referenced this pull request Oct 3, 2025
Signed-off-by: simondanielsson <simon.danielsson99@hotmail.com>
Signed-off-by: yewentao256 <zhyanwentao@126.com>
Labels
ready (ONLY add when PR is ready to merge/full CI is needed), v1
Development

Successfully merging this pull request may close these issues.

[Bug]: Clean up chunked prefill logging when using whisper
3 participants