[CI/Build] Fix test_prefix_prefill for AMD #28905
Conversation
Signed-off-by: Ryan Rock <ryan.rock@amd.com>
Code Review
This pull request addresses an issue with torch.cumsum on AMD platforms by moving the type cast to int32 after the cumsum operation, preventing an unwanted promotion to int64. The change is correct and effectively resolves the issue for the line it modifies. However, the fix is incomplete as several other instances of the same code pattern exist within the same test file. I've added a comment to highlight these other locations that require the same fix to ensure test stability on AMD platforms.
```diff
  b_seq_len = torch.tensor(seq_lens, dtype=torch.int32)
  b_ctx_len = torch.tensor(ctx_lens, dtype=torch.int32)
- b_start_loc = torch.cumsum(torch.tensor([0] + query_lens, dtype=torch.int32), dim=0)
+ b_start_loc = torch.cumsum(torch.tensor([0] + query_lens), dim=0).to(torch.int32)
```
While this change correctly fixes the unwanted type promotion for b_start_loc, the same pattern that causes this issue appears to be present in other parts of this file. To ensure all tests pass reliably on AMD platforms, please consider applying a similar fix to:
- `b_seq_start_loc` in this function (line 180)
- `b_start_loc` in `test_contexted_kv_attention_alibi` (line 420)
- `b_seq_start_loc` in `test_contexted_kv_attention_alibi` (line 423)
👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI, which starts running only a small and essential subset of CI tests to quickly catch errors. You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either:

- Add the ready label to the PR
- Enable auto-merge

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai. 🚀
yewentao256 left a comment:
Thanks for the work! Please also take a look at what Gemini suggests, and then we can land this PR.
Signed-off-by: Ryan Rock <ryan.rock@amd.com>
yewentao256 left a comment:
LGTM, thanks for the work!
Signed-off-by: Ryan Rock <ryan.rock@amd.com>
Signed-off-by: Ryan Rock <ryan.rock@amd.com> Signed-off-by: LuminolT <lumischen01@gmail.com>
Signed-off-by: Ryan Rock <ryan.rock@amd.com> Signed-off-by: jiang1.li <jiang1.li@intel.com>
Signed-off-by: Ryan Rock <ryan.rock@amd.com>
Resolves issue #28490.
Purpose
This PR moves the int32 typecast to after torch.cumsum, preventing an unwanted promotion to int64.
Test Plan
```
pytest -s -v 'tests/kernels/attention/test_prefix_prefill.py'
```
Test Result
Essential Elements of an Effective PR Description Checklist
- (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.