
Conversation

kulinseth
Collaborator

Fixes #82543, #83230

The current Placeholder code relies on finding a gather graph in order to make the data contiguous; otherwise we fall back to calling tensor.contiguous() directly, which does nothing for slice elements that are already contiguous views.

E.g. consider the following basic case where we index a 2-element tensor:

```
tensor_list = torch.tensor([1.2, 1.0], device="mps")

for scalar in tensor_list:
  r_mps = torch.ceil(scalar)
  r_cpu = torch.ceil(scalar.to("cpu"))
  self.assertEqual(r_mps.cpu(), r_cpu)
```

The second element 1.0 is a contiguous view tensor (similar to slicing), but no gather graph was created behind it. In the Placeholder we therefore cannot find the graph and rely on the fallback case where we call _tensor = src.contiguous();. For an already contiguous tensor this is a no-op, so we end up creating the NDArray with all the values of the tensor (1.2 and 1.0 instead of just 1.0). Calling clone instead of contiguous actually performs a blit behind the scenes and takes the view's storage_offset into account when performing the copy.
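
To make the distinction concrete, here is a minimal CPU-side sketch (plain PyTorch, deliberately not the MPS Placeholder code) of how contiguous() and clone() treat a contiguous view differently:

```
import torch

x = torch.tensor([1.2, 1.0])
v = x[1]                  # 0-dim contiguous view into x's storage, storage_offset() == 1

c = v.contiguous()        # no-op for an already-contiguous view: same storage, offset kept
k = v.clone()             # real copy: fresh storage holding only the viewed element (1.0)

print(v.is_contiguous())                                  # True
print(c.storage_offset(), c.data_ptr() == v.data_ptr())   # 1 True  -> nothing was copied
print(k.storage_offset(), k.data_ptr() == v.data_ptr())   # 0 False -> the copy honored the offset
```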

Similarly, the following basic case is also failing because of this issue:

```
x = torch.tensor([1.0, 0.49], device="mps")
print(x) # prints 1.0 and 0.0
```
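
For reference, both cases fold into one small repro; this is a hedged sketch assuming an MPS-capable build, and it is expected to pass once the Placeholder clones the view instead of calling contiguous():

```
import torch

if torch.backends.mps.is_available():
    # Case 1: iterating a tensor yields 0-dim views; ceil must see only the viewed element.
    tensor_list = torch.tensor([1.2, 1.0], device="mps")
    for scalar in tensor_list:
        torch.testing.assert_close(torch.ceil(scalar).cpu(), torch.ceil(scalar.cpu()))

    # Case 2: the stored values should survive the round trip back to CPU
    # (the original report showed 0.49 displayed as 0.0).
    x = torch.tensor([1.0, 0.49], device="mps")
    torch.testing.assert_close(x.cpu(), torch.tensor([1.0, 0.49]))
```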

@facebook-github-bot
Contributor

facebook-github-bot commented Aug 19, 2022


✅ 1 Base Failure

As of commit 8792ef3 (more details on the Dr. CI page):


None of the CI failures appear to be your fault 💚



🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch.

If your commit is older than viable/strict, run these commands:

```
git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD
```

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


kulinseth added the ciflow/mps (Run MPS tests, subset of trunk) label on Aug 19, 2022
mikaylagawarecki added the triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label on Aug 19, 2022
razarmehr (Collaborator) left a comment


Looks good.

@kulinseth
Collaborator Author

@pytorchbot merge

@pytorchmergebot
Collaborator

@pytorchbot successfully started a merge job. Check the current status here.
The merge job was triggered without a flag. This means that your change will be merged once all checks on your PR have passed (ETA: 0-4 Hours). If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

@pytorchmergebot
Collaborator

Merge failed
Reason: Refusing to merge as mandatory check(s) pull failed for rule MPS
Raised by https://github.com/pytorch/pytorch/actions/runs/2905699175

@kulinseth
Collaborator Author

@pytorchbot merge -f "All the checks are passing."

@pytorchmergebot
Collaborator

@pytorchbot successfully started a merge job. Check the current status here.
The merge job was triggered with the force (-f) flag. This means your change will be merged immediately, bypassing any CI checks (ETA: 1-5 minutes). If this is not the intended behavior, feel free to use some of the other merge options in the wiki.
Please reach out to the PyTorch DevX Team with feedback or questions!

@github-actions
Contributor

Hey @kulinseth.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

facebook-github-bot pushed a commit that referenced this pull request Aug 24, 2022
Summary: Fixes #82543, #83230 (the commit summary repeats the PR description above).

Pull Request resolved: #83744
Approved by: https://github.com/razarmehr

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/a6b75bb0990f2c949bf6de4e6ae58f019d61c0ac

Reviewed By: weiwangmeta

Differential Revision: D38946708

Pulled By: weiwangmeta

fbshipit-source-id: ae93ee9d8f581ae3cf9e8ed5fba159d5dc7f6cdc
Birch-san added a commit to Birch-san/pytorch that referenced this pull request Oct 23, 2022
Labels
ciflow/mps (Run MPS tests, subset of trunk), cla signed, Merged, open source, triaged (this issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Development

Successfully merging this pull request may close these issues.

[MPS] Incorrect rounded results of add and sub
7 participants