[Misc] Configurable timeout for execute_model RPC calls via env var #19544


Merged: 2 commits into vllm-project:main, Jun 23, 2025

Conversation

@jinqinn (Contributor) commented Jun 12, 2025

This PR introduces a configurable timeout for execute_model RPC calls in the multiprocessing executor by adding the VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS environment variable.
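
For context, the entry in vllm/envs.py presumably follows the same lambda-based pattern as the neighboring variables (e.g. VLLM_MQ_MAX_CHUNK_BYTES_MB, visible in the diff further down). A minimal sketch of that pattern, assuming the 300-second default mentioned in the review below; this is an illustration, not the verbatim diff:

# Sketch of the assumed vllm/envs.py entry (illustrative, not verbatim).
import os

environment_variables = {
    # Timeout in seconds for execute_model RPC calls in the multiprocessing
    # executor; per review feedback below, it only applies when TP > 1.
    "VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS":
    lambda: int(os.getenv("VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS", "300")),
}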


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of CI tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@gemini-code-assist bot left a comment

Summary of Changes

Hello @jinqinn, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the flexibility of the multiprocessing executor by making the timeout for its core execute_model RPC calls configurable via an environment variable. This change moves away from a fixed constant, allowing users to adjust the timeout based on their specific needs and environment.

Highlights

  • Configuration: Introduced a new environment variable VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS to control the timeout for execute_model RPC calls.
  • Refactoring: Replaced the hardcoded EXECUTE_MODEL_TIMEOUT_S constant in the multiprocessing executor with the value read from the new environment variable.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

Feature              | Command             | Description
Code Review          | /gemini review      | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary     | Provides a summary of the current pull request in its current state.
Comment              | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments.
Help                 | /gemini help        | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@mergify bot added the v1 label Jun 12, 2025
@gemini-code-assist bot left a comment

Code Review

This pull request introduces a configurable timeout for execute_model RPC calls within the MultiprocExecutor by adding the VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS environment variable. This is a valuable enhancement for flexibility.

The changes are clear and well-implemented:

  • A new environment variable VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS is defined in vllm/envs.py with a default value of 300 seconds.
  • The MultiprocExecutor now uses this environment variable, replacing a previously hardcoded timeout.
  • The removal of the hardcoded EXECUTE_MODEL_TIMEOUT_S and the apparently unused POLLING_TIMEOUT_MS and POLLING_TIMEOUT_S constants in multiproc_executor.py is a good cleanup.

The implementation follows existing patterns in the codebase and improves configurability.
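
For illustration, a hedged sketch of what the call-site change might look like; the method body and the collective_rpc signature here are assumptions based on the description above, not verbatim vLLM code:

# Hypothetical call site in multiproc_executor.py (names assumed).
import vllm.envs as envs

def execute_model(self, scheduler_output):
    # Read the timeout from the environment-backed config instead of the
    # removed module-level constant EXECUTE_MODEL_TIMEOUT_S, so users can
    # raise it for slow models without patching the source.
    return self.collective_rpc(
        "execute_model",
        args=(scheduler_output,),
        timeout=envs.VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS,
    )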

To further enhance this change, consider adding tests to verify:

  • The default timeout (300s) is correctly applied when the environment variable is not set.
  • A custom timeout value is respected when VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS is set.
  • The system correctly times out an execute_model call if it exceeds the configured duration (though this might be more suitable for an integration test).

Overall, this is a solid improvement.
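
A minimal sketch of the first two suggested tests, assuming vllm.envs resolves each variable lazily on attribute access (as its lambda-based table suggests); the test names and file placement are hypothetical:

# Hypothetical pytest tests; require a vLLM installation.

def test_default_timeout(monkeypatch):
    # With the variable unset, the documented default of 300 s should apply.
    monkeypatch.delenv("VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS", raising=False)
    import vllm.envs as envs
    assert envs.VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS == 300

def test_custom_timeout(monkeypatch):
    # A user-provided value should override the default.
    monkeypatch.setenv("VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS", "600")
    import vllm.envs as envs
    assert envs.VLLM_EXECUTE_MODEL_TIMEOUT_SECONDS == 600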

@mergify bot added the documentation label Jun 13, 2025
@jinqinn (Contributor, Author) commented Jun 13, 2025

@WoosukKwon @ywang96 Could you please review my PR when you have a chance?

vllm/envs.py Outdated
@@ -870,6 +871,10 @@ def get_vllm_port() -> Optional[int]:
# processes via zmq.
"VLLM_MQ_MAX_CHUNK_BYTES_MB":
lambda: int(os.getenv("VLLM_MQ_MAX_CHUNK_BYTES_MB", "16")),

# Timeout in seconds for execute_model RPC calls in multiprocessing executor
Reviewer (Member):

Should be explicit in the comment that it only applies for TP > 1

@jinqinn (Author):

done

@@ -70,7 +70,7 @@ Try one yourself by passing one of the following models to the `--model` argumen

vLLM supports models that are quantized using GGUF.

- Try one yourself by downloading a GUFF quantised model and using the following arguments:
+ Try one yourself by downloading a quantized GGUF model and using the following arguments:
Reviewer (Member):

potato potato

Comment on lines -40 to -41
- POLLING_TIMEOUT_MS = 5000
- POLLING_TIMEOUT_S = POLLING_TIMEOUT_MS // 1000
Reviewer (Member):

nice, thanks

…variable

Signed-off-by: jinqinn <goodqinjin@163.com>
@jinqinn (Contributor, Author) commented Jun 16, 2025

@njhill Could you please review the PR when you have a chance? I’d really appreciate your feedback. Thank you!

@njhill added the ready label Jun 16, 2025
@njhill changed the title from "Add configurable timeout for execute_model RPC calls via environment variable" to "[Misc] Configurable timeout for execute_model RPC calls via env var" Jun 17, 2025
@njhill (Member) commented Jun 17, 2025

@jinqinn could you merge in latest main to resolve conflicts?

@mergify bot commented Jun 17, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @jinqinn.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify bot added the needs-rebase label Jun 17, 2025
@mergify bot removed the needs-rebase label Jun 18, 2025
@jinqinn (Contributor, Author) commented Jun 18, 2025

> @jinqinn could you merge in latest main to resolve conflicts?

@njhill I have already resolved the conflicts with the latest main branch. Please let me know if you need anything else.

@jinqinn requested a review from njhill June 19, 2025 01:19
@jinqinn (Contributor, Author) commented Jun 23, 2025

@njhill I would greatly appreciate it if you could take some time to check the PR for me.

@njhill merged commit f39ab2d into vllm-project:main Jun 23, 2025
66 checks passed
juncheoll pushed a commit to juncheoll/vllm that referenced this pull request Jun 23, 2025
fhl2000 pushed a commit to fhl2000/vllm that referenced this pull request Jun 25, 2025
gmarinho2 pushed a commit to gmarinho2/vllm that referenced this pull request Jun 26, 2025
xjpang pushed a commit to xjpang/vllm that referenced this pull request Jun 30, 2025
wseaton pushed a commit to wseaton/vllm that referenced this pull request Jun 30, 2025
wseaton pushed a commit to wseaton/vllm that referenced this pull request Jun 30, 2025
wwl2755-google pushed a commit to wwl2755-google/vllm that referenced this pull request Jul 1, 2025
Labels: documentation, ready, v1