[Kernel] mark TorchSDPABackend swap_blocks NotImplementedError #19749
Conversation
Signed-off-by: Andy Xie <andy.xning@gmail.com>
Summary of Changes
Hello @andyxning, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request clarifies the behavior of the `TorchSDPABackend` by explicitly raising a `NotImplementedError` from the `swap_blocks` method. This is a necessary change because the `TorchSDPABackend` is currently designed for CPU usage, and CPU environments do not support the cache swap operations that this method implies. The change prevents unexpected behavior and clearly communicates the current limitations of this specific backend.
Highlights
- **Error Handling:** Explicitly marked the `swap_blocks` method within the `TorchSDPABackend` as raising `NotImplementedError`.
- **Backend Limitation:** This change reflects that the `TorchSDPABackend` is currently CPU-only, and CPU devices do not support the cache swap operations that this method would typically handle.
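For reference, here is a minimal sketch of what the method body looks like after this change, reconstructed from the diff shown later on this page (the rest of the backend class is elided):

```python
import torch


class TorchSDPABackend:
    # Sketch: only the method touched by this PR is shown; the real class
    # defines many more attention-backend hooks.

    @staticmethod
    def swap_blocks(
        src_kv_cache: torch.Tensor,
        dst_kv_cache: torch.Tensor,
        src_to_dst: torch.Tensor,
    ) -> None:
        # This backend is CPU-only, and CPU devices have no swap space for
        # KV-cache blocks, so fail fast instead of delegating to
        # PagedAttention.swap_blocks as before.
        raise NotImplementedError(
            "Swap is not supported in TorchSDPABackend.")
```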
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes

[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited subset of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀
Code Review
This pull request correctly addresses an issue with the `TorchSDPABackend` by explicitly marking its `swap_blocks` method with a `NotImplementedError`. This change aligns with the documented limitations of CPU workers, which do not support swap operations. The code modification itself is clear and appropriate.
The main feedback focuses on improving the PR process by suggesting the inclusion of a test plan and a corresponding unit test to verify that the `NotImplementedError` is raised as expected. Additionally, a reminder to complete the PR description checklist was given. These process improvements aim to enhance the robustness and maintainability of the codebase.
```diff
@@ -65,7 +65,7 @@ def swap_blocks(
         dst_kv_cache: torch.Tensor,
         src_to_dst: torch.Tensor,
     ) -> None:
-        PagedAttention.swap_blocks(src_kv_cache, dst_kv_cache, src_to_dst)
+        raise NotImplementedError("Swap is not supported in TorchSDPABackend.")
```
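To make the effect concrete, a caller that reaches this code path now fails fast rather than silently delegating to `PagedAttention.swap_blocks`. An illustrative snippet (the import path follows the test sketch later in this review; the tensor arguments are placeholders):

```python
import torch
from vllm.attention.backends.torch_sdpa import TorchSDPABackend

# Placeholder tensors: swap_blocks raises before ever inspecting them.
src_kv_cache = torch.empty(0)
dst_kv_cache = torch.empty(0)
src_to_dst = torch.empty(0)

try:
    TorchSDPABackend.swap_blocks(src_kv_cache, dst_kv_cache, src_to_dst)
except NotImplementedError as exc:
    print(exc)  # -> Swap is not supported in TorchSDPABackend.
```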
This change to raise `NotImplementedError` is a good improvement, accurately reflecting that swap operations are not supported for `TorchSDPABackend` on CPU. This aligns with the rationale provided in the PR description and the linked comments in `vllm/worker/cpu_worker.py`.
To further enhance the robustness of this change and adherence to PR best practices:
- **Test Plan:** The PR description's 'Test Plan' section is currently empty. It would be beneficial to include a test plan. For this specific change, a unit test could be added to verify that calling `TorchSDPABackend.swap_blocks()` indeed raises `NotImplementedError` with the expected message. This ensures the intended behavior is explicitly tested and maintained through future refactorings. Example test sketch:

  ```python
  import pytest
  import torch

  from vllm.attention.backends.torch_sdpa import TorchSDPABackend


  def test_torch_sdpa_swap_blocks_not_implemented():
      # It's a static method, so no instance is strictly needed for calling,
      # but let's assume it might be called via an instance context in some
      # cases. If it's always called statically,
      # TorchSDPABackend.swap_blocks(...) is fine.
      src_kv_cache = torch.empty(0)
      dst_kv_cache = torch.empty(0)
      src_to_dst = torch.empty(0)
      with pytest.raises(NotImplementedError,
                         match="Swap is not supported in TorchSDPABackend."):
          TorchSDPABackend.swap_blocks(src_kv_cache, dst_kv_cache, src_to_dst)
  ```
- **PR Description Checklist:** Please also ensure the checklist items at the top of the PR description template are reviewed and ticked as appropriate.
Adding a unit test would provide greater confidence in this change and help complete the PR description more thoroughly.
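As a usage note, the test sketch above is self-contained: saved as, say, `tests/kernels/test_torch_sdpa_swap_blocks.py` (a hypothetical path) in an environment with vLLM and pytest installed, it can be run with `pytest tests/kernels/test_torch_sdpa_swap_blocks.py`.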
Commits:

- Signed-off-by: nie3e <adrcwiek@gmail.com> Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
- added notebooks to playground
- updates
- remoted verbatim HF secrets from all files
- updates
- [custom_op][vllm-plugin] update custom_op class to use op_registry (vllm-project#19164) Signed-off-by: Chendi.Xue <chendi.xue@intel.com>
- Export NaNs in logits to scheduler_stats if output is corrupted (vllm-project#18777) Signed-off-by: Vlad Mihailescu <vtmihailescu@gmail.com>
- [CPU][CI] Fallback sliding window to v0 and fix CPU pooling model tests (vllm-project#19901) Signed-off-by: jiang1.li <jiang1.li@intel.com>
- [Kernel] mark TorchSDPABackend swap_blocks NotImplementedError (vllm-project#19749)
- …project#19749) Signed-off-by: juncheoll <th6re8e@naver.com>
- …project#19749) Signed-off-by: fhl <2410591650@qq.com>
Essential Elements of an Effective PR Description Checklist

- Update `supported_models.md` and `examples` for a new model.

Purpose
`TorchSDPABackend` is only used with the CPU device for now, and CPU does not support cache swap operations, as stated in `vllm/worker/cpu_worker.py`, lines 91 to 95 at 5a1c2e1.
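For context, the referenced guard in the CPU worker looks roughly like the following. This is an illustrative sketch, not a verbatim quote of `cpu_worker.py`; the class and method names here are assumptions:

```python
import torch


class CPUCacheEngine:
    # Sketch of the CPU-side limitation referenced above: CPU devices have
    # no separate swap space for KV-cache blocks, so both directions of the
    # swap are rejected outright.

    def swap_in(self, src_to_dst: torch.Tensor) -> None:
        raise NotImplementedError("Swap is not supported in CPUCacheEngine.")

    def swap_out(self, src_to_dst: torch.Tensor) -> None:
        raise NotImplementedError("Swap is not supported in CPUCacheEngine.")
```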
So, `TorchSDPABackend` can raise `NotImplementedError`.

Test Plan
Test Result
(Optional) Documentation Update