
[DP] Support external DP Load Balancer mode #19790


Merged

vllm-bot merged 14 commits into vllm-project:main from njhill:ext-lb-dp on Jul 2, 2025

Conversation

njhill
Member

@njhill njhill commented Jun 18, 2025

See https://docs.google.com/document/d/1mSYsWQEbp4Oq50ghFWUun7OgagbuUJivAC8Ds-pu-xU/edit?tab=t.0#heading=h.ggq72ssewhm3

This also includes some de-duplication of the core engine and DP coordinator process orchestration logic, consolidated into a launch_core_engines context manager utility.

This is called:

  • from run_multi_api_server in serve.py when there are multiple API server processes, in which case RPC connection details are passed into the engine core client running within each API server.
  • from within the MPClient EngineCoreClient superclass otherwise (i.e. encapsulated behind the engine core client).

Some utility classes/functions related to engine process management were moved from v1/utils.py to v1/engine/utils.py to avoid circular imports.
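For illustration, here is a minimal self-contained sketch of that context-manager shape. The real launch_core_engines (in v1/engine/utils.py) also performs the coordinator and handshake setup; all names and bodies below are simplified stand-ins, not this PR's actual signatures:

# Sketch only: one context manager owns engine process startup/teardown,
# so both call sites (run_multi_api_server and MPClient) can share the
# same orchestration code.
import multiprocessing
from contextlib import contextmanager

def _engine_main(rank, ready):
    # Stand-in for the real engine core process entrypoint.
    ready.set()

@contextmanager
def launch_core_engines(local_ranks):
    ctx = multiprocessing.get_context("spawn")
    procs, ready_events = [], []
    for rank in local_ranks:
        ev = ctx.Event()
        proc = ctx.Process(target=_engine_main, args=(rank, ev))
        proc.start()
        procs.append(proc)
        ready_events.append(ev)
    try:
        # The caller (e.g. API server setup) runs here while the
        # engines start in the background.
        yield procs
        # On exit, block until every engine has signalled readiness.
        for ev in ready_events:
            ev.wait()
    finally:
        for proc in procs:
            proc.terminate()
            proc.join()

if __name__ == "__main__":
    with launch_core_engines([0, 1]):
        pass  # e.g. start API server processes here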


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @njhill, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces comprehensive support for an external data parallel load balancer mode, allowing vLLM deployments to integrate more seamlessly with external orchestration systems. It refines the internal communication patterns, configures instances for direct local engine interaction, and adjusts stats reporting to align with an externally managed load balancing strategy.

Highlights

  • External Data Parallel Load Balancer Mode: Introduced a new data_parallel_external_lb configuration option and a --data-parallel-rank CLI argument to enable an external load balancer mode for data parallelism. When enabled, vLLM instances are expected to be managed by an external system for request distribution.
  • Decoupled Coordinator and Engine Communication: In external LB mode, the data parallel coordinator (responsible for internal load balancing and stats) now only runs on the rank 0 instance. Non-rank-0 instances will communicate directly with their local engine, streamlining the setup for external orchestration.
  • Refined Engine Handshake Process: The engine startup handshake mechanism has been updated to support the new external LB mode. This includes a two-stage handshake for non-rank-0 instances to retrieve both global coordinator information (from rank 0) and local client I/O addresses.
  • Adjusted Stats Reporting: Engines operating in external LB mode will no longer publish detailed request queue statistics to the coordinator. Instead, they will only report high-level wave completion and running state changes, reducing overhead when an external system handles load metrics.
  • Client-Side Adaptations for DP Modes: The MPClient hierarchy has been refactored to differentiate between internal (load-balancing) and external (direct-to-local-engine) data parallel modes. A new DPLBAsyncMPClient class was introduced to specifically handle the internal load-balancing logic (see the sketch after this list).
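To make that last point concrete, here is a sketch of the mode-based client selection. The class names mirror this PR, but the stub definitions and selection function are illustrative only, not the actual vLLM code:

from dataclasses import dataclass

@dataclass
class ParallelConfig:                      # simplified stand-in
    data_parallel_size: int = 1
    data_parallel_external_lb: bool = False

class AsyncMPClient:                       # single-engine async client
    pass

class DPAsyncMPClient(AsyncMPClient):      # DP-aware, talks to local engine
    pass

class DPLBAsyncMPClient(DPAsyncMPClient):  # adds internal load balancing
    pass

def make_async_client(cfg: ParallelConfig) -> AsyncMPClient:
    if cfg.data_parallel_size == 1:
        return AsyncMPClient()
    if cfg.data_parallel_external_lb:
        # External LB mode: each instance only serves its own local
        # engine; an external system distributes requests across ranks.
        return DPAsyncMPClient()
    # Internal LB mode: this client balances requests across all engines.
    return DPLBAsyncMPClient()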

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for an external Data Parallel (DP) Load Balancer mode. The changes are comprehensive, touching configuration, argument parsing, server entrypoints, and core engine components. The introduction of a dual handshake mechanism for non-rank-0 external LB instances and the refactoring of client types (DPAsyncMPClient vs. DPLBAsyncMPClient) are notable improvements for handling different DP scenarios. The overall logic appears sound. One minor issue with a docstring in vllm/config.py needs addressing, as does adding a check for data_parallel_rank in EngineArgs.


mergify bot commented Jun 24, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @njhill.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jun 24, 2025
@mergify mergify bot removed the needs-rebase label Jun 24, 2025
@njhill njhill marked this pull request as ready for review June 25, 2025 04:41
@njhill njhill added the ready ONLY add when PR is ready to merge/full CI is needed label Jun 25, 2025
njhill added 5 commits June 26, 2025 13:40
wip

Signed-off-by: Nick Hill <nhill@redhat.com>

@mergify mergify bot added the ci/build label Jun 26, 2025

njhill added 6 commits June 26, 2025 21:13

Signed-off-by: Nick Hill <nhill@redhat.com>
Collaborator


Anything new in here, or is it all moved over from vllm/utils.py?

Member Author


Most is moved but the launch_core_engines function is new. Sorry, I know moving stuff is crap for reviews. Let me double-check diffs of the parts other than that function.

Comment on lines +217 to +225
# For dp ranks > 0 in external DP LB mode, we must delay the
# start of the API servers until the local engine is started
# (after the launcher context manager exits),
# since we get the front-end stats update address from the coordinator
# via the handshake with the local engine.
if dp_rank == 0 or not external_dp_lb:
    # Start API servers using the manager.
    api_server_manager = APIServerProcessManager(
        **api_server_manager_kwargs)
Collaborator


Just wondering, why can't we also delay for the case that dp_rank == 0? Do all the stats get published through the 0th API server?

Member Author


We can delay for the rank == 0 case too; I guess that might be cleaner. We don't need to on rank 0 because that's where the coordinator is started, so we know in advance the zmq address that the API servers use to connect to the coordinator.

Comment on lines +74 to +77
# When in external LB mode, load stats aren't published, only changes
# to request wave / running state, so we don't need to rate-limit the
# updates to the front-end proc(s).
min_stats_update_interval_ms = 0 if external_lb else 100
Collaborator


Are we going to be able to remove the every 200ms metrics logging spam in llm-d with this change?

Member Author


Unfortunately no; these stats are the ones used for the internal load-balancing, independent of Prometheus.

There may actually be a lot more such logs in aggregate, since llm-d will need to hit the metrics endpoint on each of the ranks/pods instead of just one ... !
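For context, a rough sketch of what the quoted min_stats_update_interval_ms value controls, illustrative only (the coordinator's actual implementation differs):

import time

class StatsUpdatePublisher:
    """Sketch: coalesce stats updates to at most one per interval."""

    def __init__(self, external_lb: bool):
        # Matches the quoted logic: no rate limit in external LB mode,
        # since only infrequent wave/running state changes are sent.
        self.min_interval_s = 0.0 if external_lb else 0.100
        self._last_sent = float("-inf")

    def maybe_publish(self, send, stats) -> bool:
        now = time.monotonic()
        if now - self._last_sent < self.min_interval_s:
            return False  # coalesced; resend on the next state change
        send(stats)       # e.g. a zmq publish in the real coordinator
        self._last_sent = now
        return True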

@@ -148,7 +144,7 @@ def signal_handler(signum, frame):
start_index=args.data_parallel_start_rank,
Collaborator


Does data_parallel_start_rank need to be set at all with this approach?

Member Author


No, it doesn't. This line, though, is in the headless path (the run_headless function), used for remote nodes in the internal LB mode.

Collaborator


Maybe there should be a warning if it's set, just to let the user know it doesn't do anything.

Member Author


Added some validation to fail if invalid CLI arg combos are used.
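A hypothetical sketch of that kind of check (the flag names come from this thread, but the actual validation added to EngineArgs may differ in both condition and message):

# Hypothetical sketch of CLI-arg combo validation; illustrative only.
def validate_dp_args(data_parallel_rank, data_parallel_start_rank):
    if data_parallel_rank is not None and data_parallel_start_rank:
        raise ValueError(
            "--data-parallel-start-rank is only used for headless "
            "(internal LB) deployments and cannot be combined with "
            "--data-parallel-rank (external LB mode).")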

Signed-off-by: Nick Hill <nhill@redhat.com>
Collaborator

@tlrmchlsmth tlrmchlsmth left a comment


LGTM, nice work!

Signed-off-by: Nick Hill <nhill@redhat.com>
@vllm-bot vllm-bot merged commit 657f2f3 into vllm-project:main Jul 2, 2025
101 of 103 checks passed
@njhill njhill deleted the ext-lb-dp branch July 2, 2025 18:12
njhill added a commit to njhill/vllm that referenced this pull request Jul 2, 2025
Last-minute bug introduced in vllm-project#19790

Signed-off-by: Nick Hill <nhill@redhat.com>
huydhn pushed a commit to huydhn/vllm that referenced this pull request Jul 8, 2025
Signed-off-by: Nick Hill <nhill@redhat.com>
@@ -799,10 +867,18 @@ def _init_data_parallel(self, vllm_config: VllmConfig):
from vllm.platforms import current_platform
device_control_env_var = current_platform.device_control_env_var
world_size = vllm_config.parallel_config.world_size
os.environ[device_control_env_var] = ",".join(
Contributor

@hidva hidva Jul 16, 2025


local_dp_rank is always 0 here, which may break the nixl connector.

# modify the engine_id and append the local_dp_rank to it to ensure
# that the kv_transfer_config is unique for each DP rank.
vllm_config.kv_transfer_config.engine_id = (
    f"{vllm_config.kv_transfer_config.engine_id}_dp{local_dp_rank}"
)

@njhill

Member Author


Thanks @hidva. Actually, in this case I don't think it's a problem, since a separate vllm serve command / top-level process is run for each DP rank, and thus a different engine id will be generated for each.
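To illustrate the point, assuming the default engine_id is freshly generated in each process as described above (the class below is a hypothetical stand-in, not vLLM's KVTransferConfig):

import uuid
from dataclasses import dataclass, field

@dataclass
class KVTransferConfigSketch:  # hypothetical stand-in for the real config
    # Each `vllm serve` process constructs its own config, so each DP
    # rank gets a distinct generated id even though local_dp_rank is 0.
    engine_id: str = field(default_factory=lambda: str(uuid.uuid4()))

rank0 = KVTransferConfigSketch()  # process for DP rank 0
rank1 = KVTransferConfigSketch()  # separate process for DP rank 1
assert rank0.engine_id != rank1.engine_id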

Labels
ci/build frontend ready ONLY add when PR is ready to merge/full CI is needed v1