
Conversation

@wangxiyuan (Collaborator) commented Dec 10, 2025

VLLM_ASCEND_ENABLE_TOPK_TOPP_OPTIMIZATION has been enabled by default for a long time. Let's remove it now.

Signed-off-by: wangxiyuan <wangxiyuan1007@gmail.com>
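For context, a minimal sketch of the pattern this PR deletes, assuming the flag was a plain environment switch (the import path for AscendSampler, the exact lookup, and the default value are assumptions, not verified against the repo):

    import os

    from vllm.v1.sample.sampler import Sampler
    from vllm_ascend.sample.sampler import AscendSampler  # assumed location

    def select_sampler():
        # Hypothetical toggle removed by this PR; AscendSampler was the
        # long-standing default, so the else branch was rarely exercised.
        if os.environ.get("VLLM_ASCEND_ENABLE_TOPK_TOPP_OPTIMIZATION", "1") == "1":
            return AscendSampler()
        return Sampler()  # stock vLLM sampler, unreachable after this PR

After this PR, the AscendSampler branch is unconditional, as the diff below shows.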
@gemini-code-assist (bot) left a comment


Code Review

This pull request removes the VLLM_ASCEND_ENABLE_TOPK_TOPP_OPTIMIZATION environment variable, making AscendSampler the default sampler. While this simplifies the configuration, my review found a critical issue. The AscendSampler relies on AscendTopKTopPSampler, which contains a buggy fallback implementation for top-p (nucleus) sampling. By removing the option to use the standard vLLM sampler, this PR makes the bug unavoidable in certain scenarios, potentially leading to incorrect model generation. I have detailed this critical issue in a comment.

  from vllm.v1.sample.sampler import Sampler

- self.sampler = Sampler()
+ self.sampler = AscendSampler()
@gemini-code-assist (bot) commented on this diff:

critical

This change makes AscendSampler the default sampler, which is a good simplification. However, this surfaces a critical bug in AscendTopKTopPSampler's fallback logic for top-p sampling.

The implementation in vllm_ascend/sample/sampler.py incorrectly sorts probabilities in ascending order for top-p filtering, which will produce incorrect sampling results.

# In vllm_ascend/sample/sampler.py:
# ... inside AscendTopKTopPSampler._apply_top_k_top_p
probs_sort, _ = probs.sort(dim=-1, descending=False) # This should be descending=True for top-p
cumprob = torch.cumsum(probs_sort, dim=-1) # This is incorrect with ascending sort

This fallback is used on 310P devices, or when only top-p sampling is active. By removing the option to use the base vllm.Sampler, this bug becomes unavoidable in those cases.

Recommendation:
Before removing this feature flag, the bug in AscendTopKTopPSampler should be fixed. A simple fix would be to make the fallback call the superclass's implementation: return super()._apply_top_k_top_p(logits, k, p).
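For illustration, a minimal self-contained sketch of top-p filtering done correctly, i.e. sorting in descending order before the cumulative sum (a generic reference implementation, not the vLLM or vLLM-Ascend code; the function name and masking convention are assumptions):

    import torch

    def apply_top_p(logits: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
        """Mask logits outside the top-p nucleus; p has shape [batch]."""
        # Sort descending so the cumulative sum walks from the most probable
        # token downward -- the step the fallback gets wrong.
        probs_sort, idx = logits.softmax(dim=-1).sort(dim=-1, descending=True)
        cumprob = torch.cumsum(probs_sort, dim=-1)
        # Keep a token while the cumulative mass *before* it is still under p,
        # so the nucleus always contains at least the top-1 token.
        keep = (cumprob - probs_sort) < p.unsqueeze(-1)
        mask_sorted = torch.where(keep, 0.0, float("-inf"))
        # Scatter the sorted-order mask back into vocabulary order.
        return logits + torch.empty_like(logits).scatter_(-1, idx, mask_sorted)

    # Example: filter a batch of two rows with different nucleus sizes.
    filtered = apply_top_p(torch.randn(2, 50), torch.tensor([0.9, 0.95]))

With an ascending sort, as in the snippet above, the cumulative sum accumulates the tail of the distribution first, so the mask retains the least likely tokens instead of the nucleus.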

@github-actions commented

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message by filling out the PR description to help reviewers and future developers understand.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@Yikun (Member) left a comment


Yes, it was a TopK/TopP accuracy issue; it can be removed now.

@wangxiyuan wangxiyuan merged commit 08441ba into vllm-project:main Dec 10, 2025
22 checks passed
