
Add memory overlap check to meta_copy_ #108989

Status: Closed. Wants to merge 6 commits; changes shown from 3 commits.
test/test_torch.py (1 change: 0 additions & 1 deletion)

@@ -8274,7 +8274,6 @@ def test_copy_broadcast(self):
     # FIXME: Port to a more appropriate test suite
     # Fails with inductor (and aot_eager) because functionalization replaces copy_ with copy,
     # which doesn't properly error on bad inputs.
-    @skipIfTorchInductor("FIXME")
     def test_copy_many_to_one(self):
         # Testing that an in-place copy which attempts to write from many
         # memory locations to a single storage location raises a RuntimeError
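For context, the failure mode this test exercises can be reproduced in eager mode with a stride-0 view. A minimal sketch (the names `dst`/`src` are illustrative, and the exact error text may vary across PyTorch versions):

```python
import torch

# expand() creates a view with stride 0, so all four indices of dst
# alias the single underlying element.
dst = torch.empty(1).expand(4)
src = torch.rand(4)

# Eager copy_ detects the internal overlap and raises a RuntimeError:
# "more than one element of the written-to tensor refers to a single
# memory location"
dst.copy_(src)
```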
torch/_meta_registrations.py (10 changes: 10 additions & 0 deletions)

@@ -289,6 +289,16 @@ def meta_copy_(self, src, non_blocking=False):
     # which runs most of the meta checks that we care about.
     # In theory, we should make this more robust by carefully
     # auditing our C++ copy_() kernel and copying the checks here.
+
+    if self.numel() == 0:
+        return self
+
+    for dim in range(self.ndim):
+        if self.stride(dim) == 0 and self.size(dim) > 1:
+            raise RuntimeError(
+                "more than one element of the written-to tensor refers to a single memory location"
+            )
+
     intermediate = src.to(self, non_blocking)
     if self.size() != intermediate.size():
         aten.expand_copy.default(intermediate, self.size())
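The new check keys off the fact that a dimension with stride 0 but size greater than 1 maps several indices onto the same memory location, which is exactly the many-to-one write the test above guards against. A standalone sketch of that detection logic, assuming only the public stride()/size() tensor API (the helper name `has_internal_overlap` is hypothetical here, echoing ATen's internal-overlap helper):

```python
import torch

def has_internal_overlap(t: torch.Tensor) -> bool:
    # A dimension with stride 0 and more than one element means
    # several indices alias one memory location.
    return any(t.stride(d) == 0 and t.size(d) > 1 for d in range(t.ndim))

print(has_internal_overlap(torch.empty(1).expand(4)))        # True  (many-to-one)
print(has_internal_overlap(torch.empty(4)))                  # False (contiguous)
print(has_internal_overlap(torch.empty(2, 1).expand(2, 3)))  # True  (stride 0 in dim 1)
```

The `numel() == 0` early return reflects that copying into an empty tensor writes no elements, so no overlap can be observed there.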