
Conversation

@dw2761 (Contributor) commented Nov 13, 2025

Purpose

Fixes a bug in benchmarks/benchmark_prefix_caching.py where the sample_tokens function failed to filter special tokens.

The original code was incorrectly comparing token strings (k) with a set of special token IDs (all_special_ids). This PR fixes the logic to correctly compare the token ID (v) and simplifies the list comprehension by using vocab.values() instead of vocab.items(), as the token string key is not used.
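The fix described above can be sketched as follows. This is a minimal, self-contained reconstruction based on the PR description, not the exact vLLM source; the function signature and variable names (`vocab`, `all_special_ids`) are assumptions for illustration.

```python
import random


def sample_tokens(vocab: dict[str, int],
                  all_special_ids: set[int],
                  length: int) -> list[int]:
    """Sample `length` token IDs uniformly, excluding special tokens."""
    # Buggy version compared the token *string* key against a set of
    # integer IDs, so the filter never removed anything:
    #   [v for k, v in vocab.items() if k not in all_special_ids]
    # Fixed version compares the token ID itself, and since the string
    # key is unused, iterates over vocab.values() directly:
    candidate_ids = [v for v in vocab.values() if v not in all_special_ids]
    return random.choices(candidate_ids, k=length)


# Example with a toy vocabulary: IDs 0 and 1 are special and must
# never appear in the sampled output.
vocab = {"<s>": 0, "</s>": 1, "hello": 2, "world": 3}
tokens = sample_tokens(vocab, {0, 1}, 10)
```

Because `vocab` keys are strings and `all_special_ids` contains integers, the original `k not in all_special_ids` test was always true, which is why every special token leaked into the sampling pool.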

Test Plan

This is a self-contained logic fix within a benchmark script. The test plan is to ensure the script still runs successfully after the change.

I ran the benchmark script locally to confirm that it executes without any new errors.

Test Result

The script runs successfully. The sample_tokens function now correctly filters all special token IDs from the sampling pool, as the original code intended.

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, covering a small, essential subset of CI tests to catch errors quickly.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the `ready` label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

@mergify mergify bot added the `performance` (Performance-related issues) label Nov 13, 2025
@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request addresses a bug in the sample_tokens function within benchmarks/benchmark_prefix_caching.py. The original implementation failed to filter special tokens due to an incorrect comparison between token strings and a set of special token IDs. The fix correctly compares the token ID with the set of special IDs, resolving the bug. Furthermore, the code is made more efficient by using vocab.values() instead of vocab.items(), as the dictionary keys were unused. The change is correct, concise, and I have no further suggestions for improvement.

@dw2761 dw2761 force-pushed the fix-prefix-caching-sampler branch from 7af6575 to 93db02b Compare November 13, 2025 06:09
@DarkLight1337 (Member) left a comment

Nice catch!

@DarkLight1337 DarkLight1337 enabled auto-merge (squash) November 13, 2025 06:15
@github-actions github-actions bot added the `ready` (ONLY add when PR is ready to merge/full CI is needed) label Nov 13, 2025
@DarkLight1337 DarkLight1337 merged commit e63fd44 into vllm-project:main Nov 13, 2025
19 checks passed
@dw2761 dw2761 deleted the fix-prefix-caching-sampler branch November 14, 2025 02:51
geodavic pushed a commit to geodavic/vllm that referenced this pull request Nov 16, 2025
…m-project#28615)

Signed-off-by: Di Wu <dw2761@nyu.edu>
Signed-off-by: George D. Torres <gdavtor@gmail.com>
bwasti pushed a commit to bwasti/vllm that referenced this pull request Nov 17, 2025
…m-project#28615)

Signed-off-by: Di Wu <dw2761@nyu.edu>
Signed-off-by: Bram Wasti <bwasti@meta.com>

Labels

- performance: Performance-related issues
- ready: ONLY add when PR is ready to merge/full CI is needed
