Conversation

@abmfy (Member) commented Sep 10, 2025

Purpose

PR #18343 introduced the Expert Parallelism Load Balancer (EPLB). By replicating a single logical expert into multiple physical experts, we can achieve better load balancing across experts.

However, this replication introduces some inference-time overhead: after the MoE routing module, we must select among multiple replicas of the same logical expert and also record expert load metrics for the rearrangement algorithm.

Previously, torch.rand was used to select expert replicas. Unfortunately, this method is slow and not torch.compile-friendly.

In this PR, we aim to reduce EPLB overhead by:

  1. Switching from torch.rand to a modulo-based pseudo-random selection.
    • The pseudo-random method is intentionally simple, based only on the token index and routing rank k.
  2. Removing unnecessary masking of local experts, since we now collect global physical expert load metrics.
  3. Extracting all EPLB-related logic from select_experts into a torch.compile-friendly function.
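As a rough illustration, the modulo-based selection described above can be sketched in pure Python. The real implementation (`eplb_map_to_physical_and_record`) is vectorized over tensors; the function and parameter names below are illustrative only:

```python
def select_replica(token_idx: int, k: int, num_replicas: int) -> int:
    """Deterministic pseudo-random replica pick based only on the token
    index and the routing rank k. Unlike torch.rand, this is cheap and
    torch.compile-friendly when expressed as vectorized tensor ops."""
    return (token_idx + k) % num_replicas

def map_to_physical(topk_ids, logical_to_physical):
    """Map each routed logical expert to one of its physical replicas.

    topk_ids[t][k] is the logical expert picked for token t at rank k;
    logical_to_physical[e] lists the physical replica ids of expert e.
    """
    return [
        [
            logical_to_physical[e][
                select_replica(t, k, len(logical_to_physical[e]))
            ]
            for k, e in enumerate(row)
        ]
        for t, row in enumerate(topk_ids)
    ]
```

Note that when every expert has exactly one replica, the modulo always selects index 0, so with num_redundant_experts=0 the mapping is effectively the identity and only the load-recording bookkeeping remains.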

Test Plan

To isolate EPLB inference overhead, we test with EPLB enabled but with num_redundant_experts=0, and without rearranging experts. This ensures that any observed differences are solely due to replica selection and load recording overhead.

Test Result

We benchmarked 1000 random prompts with 1000 input tokens and 100 output tokens on DeepSeek-V3-0324, on a DP16 setting. Prefix caching was disabled to measure the raw computational cost.

vllm bench serve \
    --model $MODEL \
    --dataset-name random \
    --ignore-eos \
    --port ${PORT:-8080} \
    --random-input-len 1000 \
    --random-output-len 100

w/o EPLB:

============ Serving Benchmark Result ============
Successful requests:                     1000      
Benchmark duration (s):                  34.61     
Total input tokens:                      996701    
Total generated tokens:                  100000    
Request throughput (req/s):              28.89     
Output token throughput (tok/s):         2889.16   
Total Token throughput (tok/s):          31685.45  
---------------Time to First Token----------------
Mean TTFT (ms):                          16177.52  
Median TTFT (ms):                        17730.53  
P99 TTFT (ms):                           28279.18  
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          184.22    
Median TPOT (ms):                        168.75    
P99 TPOT (ms):                           310.01    
---------------Inter-token Latency----------------
Mean ITL (ms):                           184.22    
Median ITL (ms):                         64.62     
P99 ITL (ms):                            3563.61   
==================================================

w/ EPLB, main:

============ Serving Benchmark Result ============
Successful requests:                     1000      
Benchmark duration (s):                  36.04     
Total input tokens:                      996701    
Total generated tokens:                  100000    
Request throughput (req/s):              27.74     
Output token throughput (tok/s):         2774.48   
Total Token throughput (tok/s):          30427.73  
---------------Time to First Token----------------
Mean TTFT (ms):                          16787.26  
Median TTFT (ms):                        15501.58  
P99 TTFT (ms):                           29449.69  
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          192.38    
Median TPOT (ms):                        205.27    
P99 TPOT (ms):                           316.54    
---------------Inter-token Latency----------------
Mean ITL (ms):                           192.38    
Median ITL (ms):                         67.10     
P99 ITL (ms):                            3727.04   
==================================================

w/ EPLB, this PR:

============ Serving Benchmark Result ============
Successful requests:                     1000      
Benchmark duration (s):                  35.19     
Total input tokens:                      996701    
Total generated tokens:                  100000    
Request throughput (req/s):              28.41     
Output token throughput (tok/s):         2841.33   
Total Token throughput (tok/s):          31160.91  
---------------Time to First Token----------------
Mean TTFT (ms):                          16370.60  
Median TTFT (ms):                        15091.95  
P99 TTFT (ms):                           28831.30  
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          188.17    
Median TPOT (ms):                        201.00    
P99 TPOT (ms):                           308.62    
---------------Inter-token Latency----------------
Mean ITL (ms):                           188.17    
Median ITL (ms):                         65.13     
P99 ITL (ms):                            3621.91   
==================================================

Summary:
Setting aside the benefits of improved expert load balancing: on the main branch, EPLB introduces a ~3.97% throughput drop relative to running without EPLB. With this PR, throughput improves by ~2.41% over main, narrowing the gap to ~1.66% compared to running without EPLB.
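For reference, the percentages above follow from the output-token throughputs in the three benchmark runs, assuming the drop and remaining gap are measured relative to the no-EPLB baseline and the recovery is measured relative to main:

```python
# Output token throughput (tok/s) from the three runs above.
no_eplb, main_eplb, pr_eplb = 2889.16, 2774.48, 2841.33

drop = (no_eplb - main_eplb) / no_eplb * 100        # main vs. no EPLB
recovered = (pr_eplb - main_eplb) / main_eplb * 100  # this PR vs. main
gap = (no_eplb - pr_eplb) / no_eplb * 100            # remaining gap

print(f"drop={drop:.2f}% recovered={recovered:.2f}% gap={gap:.2f}%")
# → drop=3.97% recovered=2.41% gap=1.66%
```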



@gemini-code-assist (bot, Contributor) left a comment

Code Review

This pull request significantly improves the performance and maintainability of the Expert Parallelism Load Balancer (EPLB) by replacing the slow and non-compilable torch.rand with a deterministic modulo-based replica selection. The refactoring of EPLB logic into a separate, torch.compile-friendly function eplb_map_to_physical_and_record is a great change that enhances code clarity. I've found one critical issue that could lead to a runtime error, which I've detailed in a specific comment.

Although it's safe to pass in `dtype=None`, this change keeps Gemini happy.

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Signed-off-by: Bowen Wang <abmfy@icloud.com>
@mgoin (Member) commented Sep 10, 2025

LGTM - please fix the pre-commit

@ProExpertProg ProExpertProg mentioned this pull request Sep 15, 2025
4 tasks
@tlrmchlsmth tlrmchlsmth added the ready ONLY add when PR is ready to merge/full CI is needed label Sep 16, 2025
@tlrmchlsmth tlrmchlsmth requested a review from mgoin as a code owner September 22, 2025 14:49
@tlrmchlsmth tlrmchlsmth added this to the v0.10.3 milestone Sep 22, 2025
@tlrmchlsmth tlrmchlsmth enabled auto-merge (squash) September 22, 2025 14:50
@tlrmchlsmth tlrmchlsmth merged commit 06a4133 into vllm-project:main Sep 22, 2025
45 checks passed
FeiDaLI pushed a commit to FeiDaLI/vllm that referenced this pull request Sep 25, 2025
charlifu pushed a commit to ROCm/vllm that referenced this pull request Sep 25, 2025
yewentao256 pushed a commit that referenced this pull request Oct 3, 2025