Closed
Labels: high priority, module: cuda (related to torch.cuda, and CUDA support in general), triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Description
🐛 Describe the bug
Previous related issues: #39227 #66495
```python
import torch

# device = 'cpu'  # Works
device = 'cuda'  # Errors:
# RuntimeError: linearIndex.numel()*sliceSize*nElemBefore == expandedValue.numel()
# INTERNAL ASSERT FAILED at "../aten/src/ATen/native/cuda/Indexing.cu":265,
# please report a bug to PyTorch.
# number of flattened indices did not match number of elements in the value tensor: 10 vs 5
t = torch.zeros(5, 5, device=device)
idx = torch.tensor([0, 1], device=device)
v = torch.ones((5,), device=device)
print(torch.index_put(t, (idx,), v, accumulate=True))
```
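For reference, a possible workaround sketch: the assertion compares the number of flattened indices (2 rows × 5 columns = 10) against `v.numel()` (5), which suggests the CUDA accumulate path does not broadcast the value tensor before the check. Explicitly expanding `v` to the indexed shape sidesteps the mismatch (this is an assumption about the failure mode, not a confirmed fix for the underlying bug):

```python
import torch

# Fall back to CPU when CUDA is unavailable so the sketch stays runnable.
device = 'cuda' if torch.cuda.is_available() else 'cpu'

t = torch.zeros(5, 5, device=device)
idx = torch.tensor([0, 1], device=device)
v = torch.ones((5,), device=device)

# t[idx] has shape (2, 5); expand v to match it explicitly so the
# accumulate path sees matching element counts (10 vs 10).
v_expanded = v.expand(idx.numel(), -1)
out = torch.index_put(t, (idx,), v_expanded, accumulate=True)
print(out)
```

With the explicit expand, rows 0 and 1 accumulate to ones and the remaining rows stay zero on both devices.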
Versions
master