support --no-enable-chunked-prefill for V1 #19975

Open · wants to merge 1 commit into main

Conversation

@liuyumoye commented Jun 23, 2025

support --no-enable-chunked-prefill for V1

When launching the vLLM service, chunked prefill can now be turned off by passing the --no-enable-chunked-prefill flag.
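
For example, a minimal server launch with the new flag might look like the following (the model path is illustrative):

    vllm serve /data/Qwen3-8B --no-enable-chunked-prefill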

@gemini-code-assist bot (Contributor) left a comment

Summary of Changes

Hello @liuyumoye, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the configurability of the vLLM service by providing a mechanism to disable the chunked_prefill feature for V1 via a command-line flag. It modifies the argument parsing logic to respect user-specified preferences over the default settings.

Highlights

  • Configuration Control: This pull request introduces the ability to explicitly disable the chunked_prefill feature in vLLM's V1 service by allowing the --no-enable-chunked-prefill command-line argument to override the default behavior.
  • Argument Parsing Logic: The _set_default_args_v1 method in vllm/engine/arg_utils.py was modified. Previously, self.enable_chunked_prefill was unconditionally set to True for non-pooling tasks. Now, it is only set to True if its value is None, ensuring that a user-provided argument can prevent this default assignment.
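
To illustrate the change described above, here is a minimal, self-contained sketch of the new defaulting behavior. The Args dataclass and method name are stand-ins for vLLM's EngineArgs and _set_default_args_v1, not the actual implementation:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Args:
        # None means "not set by the user"; --no-enable-chunked-prefill maps to False.
        enable_chunked_prefill: Optional[bool] = None

        def set_default_args_v1(self) -> None:
            # Previously this was an unconditional `self.enable_chunked_prefill = True`.
            # Now the default is applied only when the user did not pass a value.
            if self.enable_chunked_prefill is None:
                self.enable_chunked_prefill = True

    args = Args(enable_chunked_prefill=False)  # as set by --no-enable-chunked-prefill
    args.set_default_args_v1()
    assert args.enable_chunked_prefill is False  # the user's choice is preserved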

@gemini-code-assist bot (Contributor) left a comment

Code Review

The pull request adds support for disabling chunked prefill via the --no-enable-chunked-prefill parameter for V1 of the vLLM service. The code change ensures that the enable_chunked_prefill attribute is only set to True if it hasn't been explicitly set by the user. I've suggested adding a comment to clarify the purpose of the conditional check.

Comment on lines +1548 to +1549 of vllm/engine/arg_utils.py:

    if self.enable_chunked_prefill is None:
        self.enable_chunked_prefill = True

Severity: medium

This change introduces a check for self.enable_chunked_prefill being None before setting it to True. This is good, but it would be clearer to have a comment explaining why this check is necessary, especially in the context of the --no-enable-chunked-prefill parameter. Consider adding a comment to clarify the purpose of this conditional check.

Suggested change

    Original:
        if self.enable_chunked_prefill is None:
            self.enable_chunked_prefill = True

    Suggested:
        # Only enable chunked prefill if it hasn't been explicitly disabled
        # via --no-enable-chunked-prefill
        if self.enable_chunked_prefill is None:
            self.enable_chunked_prefill = True


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

Signed-off-by: liuyumoye <adeline_ly2023@outlook.com>
@yep96 commented Jun 30, 2025

v0.9.1 cannot turn the feature off correctly:

INFO 06-29 19:40:05 [api_server.py:1287] vLLM API server version 0.9.1
INFO 06-29 19:40:06 [cli_args.py:309] non-default args: {'model': '/data/Qwen3-8B/', 'max_model_len': 32768, 'served_model_name': ['qwen8b'], 'tensor_parallel_size': 2, 'enable_prefix_caching': False, 'enable_chunked_prefill': False, 'enable_prompt_tokens_details': True}
INFO 06-29 19:40:17 [config.py:823] This model supports multiple tasks: {'generate', 'classify', 'embed', 'score', 'reward'}. Defaulting to 'generate'.
INFO 06-29 19:40:17 [config.py:1946] Defaulting to use mp for distributed inference
INFO 06-29 19:40:17 [config.py:2195] Chunked prefill is enabled with max_num_batched_tokens=2048.
WARNING 06-29 19:40:20 [env_override.py:17] NCCL_CUMEM_ENABLE is set to 0, skipping override. This may increase memory overhead with cudagraph+allreduce: https://github.com/NVIDIA/nccl/issues/1234
INFO 06-29 19:40:23 [__init__.py:244] Automatically detected platform cuda.
INFO 06-29 19:40:26 [core.py:455] Waiting for init message from front-end.
INFO 06-29 19:40:26 [core.py:70] Initializing a V1 LLM engine (v0.9.1) with config: model='/data/Qwen3-8B/', speculative_config=None, tokenizer='/data/Qwen3-8B/', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=2, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto,  device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=qwen8b, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={"level":3,"debug_dump_path":"","cache_dir":"","backend":"","custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"inductor_compile_config":{"enable_auto_functionalized_v2":false},"inductor_passes":{},"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"cudagraph_copy_inputs":false,"full_cuda_graph":false,"max_capture_size":512,"local_cache_dir":null}

@liuyumoye (Author) commented:
(quoting @yep96 above) v0.9.1 cannot turn the feature off correctly [quoted log output omitted; see the comment above]

You would need to apply this patch to v0.9.1; this patch is based on the main branch.

@heheda12345 (Collaborator) commented:
To my understanding, the V1 engine does not support disabling chunked prefill. Will this PR actually turn off chunked prefill?

@liuyumoye (Author) commented:
(quoting @heheda12345) To my understanding, the V1 engine does not support disabling chunked prefill. Will this PR actually turn off chunked prefill?

In PR #16188 (https://github.com/vllm-project/vllm/pull/16188):

[code screenshot from PR #16188]

If chunked prefill is disabled and the current request's num_new_tokens is larger than token_budget, the request will not be scheduled in this round; it is put into the skipped_waiting_requests queue so that it is scheduled first in the next round.

[code screenshot]
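
For reference, a simplified, self-contained sketch of the scheduling behavior described above. This is not vLLM's actual scheduler; the deque-based queues and the schedule_round helper are illustrative, and only the names token_budget, num_new_tokens, and skipped_waiting_requests come from the comment:

    from collections import deque

    def schedule_round(waiting, token_budget, chunked_prefill_enabled):
        # `waiting` is a deque of (request_id, num_new_tokens) pairs.
        scheduled = []
        skipped_waiting_requests = deque()
        while waiting and token_budget > 0:
            request_id, num_new_tokens = waiting.popleft()
            if not chunked_prefill_enabled and num_new_tokens > token_budget:
                # Without chunked prefill, a prefill that exceeds the remaining
                # budget is skipped this round and re-queued for the next one.
                skipped_waiting_requests.append((request_id, num_new_tokens))
                continue
            # With chunked prefill, only the portion that fits the budget runs now.
            tokens = min(num_new_tokens, token_budget)
            scheduled.append((request_id, tokens))
            token_budget -= tokens
        # Skipped requests go to the front of the waiting queue so they are
        # scheduled first in the next round.
        waiting.extendleft(reversed(skipped_waiting_requests))
        return scheduled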

@heheda12345 (Collaborator) commented:
Sorry for missing that PR. Bringing the discussion here, as the current scheduler logic is different from that of V0:
https://vllm-dev.slack.com/archives/C087RA55P0D/p1751701047039889
