[DP] Support external DP Load Balancer mode #19790
Summary of Changes
Hello @njhill, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces comprehensive support for an external data parallel load balancer mode, allowing vLLM deployments to integrate more seamlessly with external orchestration systems. It refines the internal communication patterns, configures instances for direct local engine interaction, and adjusts stats reporting to align with an externally managed load balancing strategy.
Highlights
- External Data Parallel Load Balancer Mode: Introduced a new `data_parallel_external_lb` configuration option and a `--data-parallel-rank` CLI argument to enable an external load balancer mode for data parallelism. When enabled, vLLM instances are expected to be managed by an external system for request distribution (a launch sketch follows this list).
- Decoupled Coordinator and Engine Communication: In external LB mode, the data parallel coordinator (responsible for internal load balancing and stats) now only runs on the rank 0 instance. Non-rank-0 instances communicate directly with their local engine, streamlining the setup for external orchestration.
- Refined Engine Handshake Process: The engine startup handshake mechanism has been updated to support the new external LB mode. This includes a two-stage handshake for non-rank-0 instances to retrieve both global coordinator information (from rank 0) and local client I/O addresses.
- Adjusted Stats Reporting: Engines operating in external LB mode no longer publish detailed request queue statistics to the coordinator. Instead, they only report high-level wave completion and running-state changes, reducing overhead when an external system handles load metrics.
- Client-Side Adaptations for DP Modes: The `MPClient` hierarchy has been refactored to differentiate between internal (load-balancing) and external (direct-to-local-engine) data parallel modes. A new `DPLBAsyncMPClient` class was introduced to specifically handle the internal load-balancing logic.
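To make the external LB mode concrete, here is a minimal sketch of how an external orchestrator might launch one `vllm serve` process per DP rank. The flag names `--data-parallel-size` and `--data-parallel-rank` come from this PR; the model name, ports, and GPU pinning are illustrative assumptions, not prescribed by the PR.

```python
# Hypothetical orchestrator: one `vllm serve` process per DP rank, each
# pinned to its own GPU and serving on its own port. An external load
# balancer would then spread requests across ports 8000..8001.
import os
import subprocess

DP_SIZE = 2
procs = []
for rank in range(DP_SIZE):
    env = {**os.environ, "CUDA_VISIBLE_DEVICES": str(rank)}  # illustrative pinning
    procs.append(subprocess.Popen(
        [
            "vllm", "serve", "meta-llama/Llama-3.1-8B-Instruct",  # example model
            "--data-parallel-size", str(DP_SIZE),
            "--data-parallel-rank", str(rank),
            "--port", str(8000 + rank),
        ],
        env=env,
    ))

for p in procs:
    p.wait()
```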
Code Review
This pull request adds support for an external Data Parallel (DP) Load Balancer mode. The changes are comprehensive, touching configuration, argument parsing, server entrypoints, and core engine components. The introduction of a dual handshake mechanism for non-rank-0 external LB instances and the refactoring of client types (`DPAsyncMPClient` vs. `DPLBAsyncMPClient`) are notable improvements for handling different DP scenarios. The overall logic appears sound. One minor issue with a docstring in `vllm/config.py` needs addressing, and a validation check for `data_parallel_rank` in `EngineArgs` is suggested.
This pull request has merge conflicts that must be resolved before it can be merged.
vllm/v1/engine/utils.py (outdated)
Anything new in here, or is it all moved over from `vllm/utils.py`?
Most is moved but the `launch_core_engines` function is new. Sorry, I know moving stuff is crap for reviews. Let me double-check diffs of the parts other than that function.
```python
# For dp ranks > 0 in external DP LB mode, we must delay the
# start of the API servers until the local engine is started
# (after the launcher context manager exits),
# since we get the front-end stats update address from the coordinator
# via the handshake with the local engine.
if dp_rank == 0 or not external_dp_lb:
    # Start API servers using the manager.
    api_server_manager = APIServerProcessManager(
        **api_server_manager_kwargs)
```
Just wondering, why can't we also delay for the case that `dp_rank == 0`? Do all the stats get published through the 0th API server?
We can delay for the rank == 0 case too, I guess that might be cleaner. We don't need to on rank 0 because that's where the coordinator is started, so we know in advance the zmq address that the API servers use to connect to the coordinator.
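For intuition, here is a toy version of the two-stage handshake described in the summary above: a non-rank-0 front-end first asks its local engine for the coordinator info (which the engine learned from rank 0), then for the local client I/O addresses. The transport, message schema, and addresses are invented for illustration and do not match vLLM's actual protocol.

```python
# Toy two-stage handshake (illustrative only, not vLLM's real protocol).
import threading
import zmq

ADDR = "inproc://handshake-demo"
ctx = zmq.Context.instance()
ready = threading.Event()

def engine_side():
    rep = ctx.socket(zmq.REP)
    rep.bind(ADDR)
    ready.set()
    # Stage 1 reply: coordinator stats address (as if obtained from rank 0).
    # Stage 2 reply: this engine's local input/output addresses.
    for payload in ({"coord_stats_addr": "tcp://10.0.0.1:5555"},
                    {"input_addr": "ipc:///tmp/in", "output_addr": "ipc:///tmp/out"}):
        rep.recv_json()
        rep.send_json(payload)
    rep.close()

threading.Thread(target=engine_side).start()
ready.wait()

req = ctx.socket(zmq.REQ)
req.connect(ADDR)
req.send_json({"stage": 1})
print(req.recv_json())  # global coordinator info
req.send_json({"stage": 2})
print(req.recv_json())  # local client I/O addresses
req.close()
```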
```python
# When in external LB mode, load stats aren't published, only changes
# to request wave / running state, so we don't need to rate-limit the
# updates to the front-end proc(s).
min_stats_update_interval_ms = 0 if external_lb else 100
```
Are we going to be able to remove the every-200ms metrics logging spam in llm-d with this change?
Unfortunately no, these stats are the ones for the internal load-balancing, independent of Prometheus.
There may actually be a lot more such logs in aggregate, since llm-d will need to hit the metrics endpoint on each of the ranks/pods instead of just one ... !
```diff
@@ -148,7 +144,7 @@ def signal_handler(signum, frame):
     start_index=args.data_parallel_start_rank,
```
Does `data_parallel_start_rank` need to be set at all with this approach?
No it doesn't; this line, though, is in the headless path (in the `run_headless` function), used for remote nodes in the internal LB mode.
Maybe there should be a warning if it's set, just to let the user know it doesn't do anything.
Added some validation to fail if invalid CLI arg combos are used.
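For illustration, the validation might look something like the following sketch; the attribute names and the exact rule are assumptions, not the PR's actual code.

```python
# Hypothetical sketch of CLI arg-combo validation (names assumed):
# --data-parallel-start-rank only applies to the headless/internal-LB
# path, so reject it when an explicit per-instance rank is given.
def validate_dp_args(args) -> None:
    if args.data_parallel_rank is not None and args.data_parallel_start_rank:
        raise ValueError(
            "--data-parallel-start-rank is only used for headless nodes in "
            "internal LB mode and cannot be combined with --data-parallel-rank")
```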
LGTM, nice work!
```diff
@@ -799,10 +867,18 @@ def _init_data_parallel(self, vllm_config: VllmConfig):
     from vllm.platforms import current_platform
     device_control_env_var = current_platform.device_control_env_var
     world_size = vllm_config.parallel_config.world_size
     os.environ[device_control_env_var] = ",".join(
```
`local_dp_rank` is always 0; this may break the nixl connector:

```python
# modify the engine_id and append the local_dp_rank to it to ensure
# that the kv_transfer_config is unique for each DP rank.
vllm_config.kv_transfer_config.engine_id = (
    f"{vllm_config.kv_transfer_config.engine_id}_dp{local_dp_rank}"
)
```
Thanks @hidva. Actually in this case I don't think it's a problem, since a separate `vllm serve` command / top-level process is run for each DP rank, and thus different engine ids will be generated for each.
See https://docs.google.com/document/d/1mSYsWQEbp4Oq50ghFWUun7OgagbuUJivAC8Ds-pu-xU/edit?tab=t.0#heading=h.ggq72ssewhm3
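A minimal sketch of why that holds, assuming engine ids default to a fresh UUID per process (an assumption for illustration, not verified from this diff):

```python
# Assumption: when engine_id is unset it defaults to a per-process UUID,
# so two `vllm serve` processes cannot collide even without a _dp suffix.
import uuid

engine_id_rank0 = str(uuid.uuid4())  # generated in rank 0's process
engine_id_rank1 = str(uuid.uuid4())  # generated in rank 1's process
assert engine_id_rank0 != engine_id_rank1
```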
This also includes some de-duplication of the core engine and DP coordinator process orchestration logic, consolidated into a `launch_core_engines` context manager util method. This is called:
- From `run_multi_api_server` in `serve.py` when there are multiple API server processes, in which case RPC connection details are passed into the engine core client running within each API server.
- From the `MPClient` EngineCoreClient superclass otherwise (encapsulated behind the engine core client).

Some utility classes/functions related to engine process management moved from `v1/utils.py` to `v1/engine/utils.py` to avoid circular imports. A rough sketch of the launch-and-wait pattern follows.
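As an illustration of the consolidated pattern, here is a minimal context-manager sketch; the shape echoes `launch_core_engines`, but the name, body, and signature are invented and far simpler than the real utility.

```python
# Minimal launch-and-wait sketch (not vLLM's actual implementation):
# engines are started before the body runs and joined when it exits.
from contextlib import contextmanager
import multiprocessing as mp

def _engine_main(rank: int) -> None:
    print(f"engine core {rank} running")  # placeholder for the engine loop

@contextmanager
def launch_engines(num_engines: int):
    procs = [mp.Process(target=_engine_main, args=(r,)) for r in range(num_engines)]
    for p in procs:
        p.start()
    try:
        yield procs  # e.g. API servers / clients are wired up here
    finally:
        for p in procs:
            p.join()

if __name__ == "__main__":
    with launch_engines(2) as engines:
        print(f"launched {len(engines)} engine procs")
```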