
Conversation

Contributor

Copilot AI commented Oct 29, 2025

The test_copy_cache_modifiers test was comparing GPU tensors with CPU tensors, causing torch.allclose() to fail with a device mismatch error across all 30 parameterized test cases.

Changes

  • Added device=results.device parameter to torch.full() call in the assertion to ensure comparison tensors are created on the same device as the results tensor
# Before
assert torch.allclose(results[i], torch.full((BLOCK_SIZE,), expected_value, dtype=torch.float32))

# After  
assert torch.allclose(
    results[i], torch.full((BLOCK_SIZE,), expected_value, dtype=torch.float32, device=results.device)
)
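
For context, a minimal standalone sketch of the failure mode and the fix; BLOCK_SIZE, expected_value, and the single-row results tensor are illustrative placeholders rather than the test's actual values, and a CUDA device is assumed to be available:

# Hypothetical repro sketch, not part of the PR
import torch

BLOCK_SIZE = 128
expected_value = 1.0
results = torch.full((1, BLOCK_SIZE), expected_value, dtype=torch.float32, device="cuda")

# Comparing a CUDA tensor against a CPU tensor raises:
#   RuntimeError: Expected all tensors to be on the same device ...
# torch.allclose(results[0], torch.full((BLOCK_SIZE,), expected_value, dtype=torch.float32))

# Building the expected tensor on results.device keeps both operands on the same device.
expected = torch.full((BLOCK_SIZE,), expected_value, dtype=torch.float32, device=results.device)
assert torch.allclose(results[0], expected)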
Original prompt

This section describes the original issue to be resolved

<issue_title>Fix cache modifiers unittests bug</issue_title>
<issue_description> FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[None-None] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[None-] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[None-.wb] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[None-.cg] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[None-.cs] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[None-.wt] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[-None] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[-] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[-.wb] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[-.cg] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[-.cs] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[-.wt] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[.ca-None] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[.ca-] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[.ca-.wb] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[.ca-.cg] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[.ca-.cs] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[.ca-.wt] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
FAILED tests/unittests/test_copy_cache_modifiers.py::test_copy_cache_modifiers[.cg-None] - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
assert torch.allclose(results[i], torch.full((BLOCK_SIZE,), expected_value, dtype=torch.float32)), (
E RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu!
______________________ test_copy_cache_modifiers[.cg-.cs] ______________________
tests/unittests/test_copy_cache_modifiers.py:102: in test_copy_cache_modifiers
assert torch.allclose(results[i], torch.full((BLOCK_SIZE,), expected_value, dtype=torch.float32)), (
E RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu!
______________________ test_copy_cache_modifiers[.cg-.wt] ______________________
tests/unittests/test_copy_cache_modifiers.py:102: in test_copy_cache_modifiers
assert torch.allclose(results[i], torch.full((BLOCK_SIZE,), expected_value, dtype=torch.float32)), (
E RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cpu!
_____________________ test_copy_cache_modifiers[...



Co-authored-by: mawad-amd <112003944+mawad-amd@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Fix cache modifiers unittests bug" to "Fix device mismatch in test_copy_cache_modifiers assertions" Oct 29, 2025
Copilot AI requested a review from mawad-amd October 29, 2025 22:57
Copilot finished work on behalf of mawad-amd October 29, 2025 22:57
@mawad-amd
Collaborator

@copilot Does any of the other test cache modifier files have this bug?

Contributor Author

Copilot AI commented Oct 29, 2025

@copilot Does any of the other test cache modifier files have this bug?

No, the other cache modifier test files don't have this bug. I checked all 4 files:

  • test_store_cache_modifiers.py
  • test_get_cache_modifiers.py
  • test_load_cache_modifiers.py
  • test_put_cache_modifiers.py

They all use torch.testing.assert_close and specify device="cuda" when creating expected tensors (e.g., torch.ones(BLOCK_SIZE, dtype=torch.float32, device="cuda")), so device placement is already handled correctly.
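
As a point of comparison, a minimal sketch of that pattern (illustrative block size, assuming a CUDA device is available):

# Hypothetical example of the pattern the other test files use
import torch

BLOCK_SIZE = 128  # illustrative
actual = torch.ones(BLOCK_SIZE, dtype=torch.float32, device="cuda")
expected = torch.ones(BLOCK_SIZE, dtype=torch.float32, device="cuda")

# Both tensors are created on the same device, so no device-mismatch error can occur,
# and assert_close reports shape/dtype/value mismatches with a clearer message than a bare assert.
torch.testing.assert_close(actual, expected)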

Copilot finished work on behalf of mawad-amd October 29, 2025 23:02
mawad-amd marked this pull request as ready for review October 29, 2025 23:02
mawad-amd merged commit e74aacd into muhaawad/cache-modifiers Oct 29, 2025
4 checks passed
mawad-amd deleted the copilot/fix-cache-modifiers-unittests branch October 29, 2025 23:03