make meta tensor data access error message more expressive in assert_close #68802
Conversation
[ghstack-poisoned]
⚛️ CI Flow Status — Ruleset - Version:

You can add a comment to the PR and tag @pytorchbot with the following commands:

```
# ciflow rerun, "ciflow/default" will always be added automatically
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is
# equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow
```

For more information, please take a look at the CI Flow Wiki.
💊 CI failures summary and remediations: as of commit ddd15f2 (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚 This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.
Until #68592 is resolved, we explicitly exclude meta tensors so that the comparison fails with an expressive error message.
Without this patch, the error message of comparing meta tensors looks like this after #68722 was merged:

```python
>>> t = torch.empty((), device="meta")
>>> assert_close(t, t)
NotImplementedError: Could not run 'aten::abs.out' with arguments from the 'Meta' backend.
[...]

The above exception was the direct cause of the following exception:

[...]
RuntimeError: Comparing

TensorLikePair(
    id=(),
    actual=tensor(..., device='meta', size=()),
    expected=tensor(..., device='meta', size=()),
    rtol=1.3e-06,
    atol=1e-05,
    equal_nan=False,
    check_device=True,
    check_dtype=True,
    check_layout=True,
    check_stride=False,
    check_is_coalesced=True,
)

resulted in the unexpected exception above. If you are a user and see this message during
normal operation please file an issue at https://github.com/pytorch/pytorch/issues. If you
are a developer and working on the comparison functions, please except the previous error
and raise an expressive `ErrorMeta` instead.
```

Thus, we follow our own advice and turn it into an expected exception until #68592 is resolved:

```python
>>> t = torch.empty((), device="meta")
>>> assert_close(t, t)
ValueError: Comparing meta tensors is currently not supported
```
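The "direct cause of the following exception" line in the traceback above comes from Python's `raise ... from` exception chaining, which is what lets the internal `NotImplementedError` be replaced with an expected, user-facing error while keeping the original attached for debugging. A minimal standalone sketch (function names here are illustrative, not the PR's code):

```python
def low_level_op():
    # Mimics an aten kernel that has no Meta-backend implementation.
    raise NotImplementedError(
        "Could not run 'aten::abs.out' with arguments from the 'Meta' backend."
    )

def compare():
    try:
        low_level_op()
    except NotImplementedError as error:
        # Chain the internal error into an expected, expressive one;
        # the original stays reachable via __cause__.
        raise ValueError("Comparing meta tensors is currently not supported") from error

try:
    compare()
except ValueError as exc:
    print(exc)  # Comparing meta tensors is currently not supported
    assert isinstance(exc.__cause__, NotImplementedError)
```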
```python
try:
    yield
except NotImplementedError as error:
    if "meta" not in str(error).lower():
```
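The reviewed snippet is part of a context manager; fleshed out into a runnable sketch (the helper name and surrounding glue are assumptions, not the PR's exact code), the pattern looks like this:

```python
from contextlib import contextmanager

@contextmanager
def raise_expressive_meta_error():
    # Hypothetical helper mirroring the diff above: non-meta
    # NotImplementedErrors propagate unchanged, while meta-backend
    # failures become an expected, expressive error.
    try:
        yield
    except NotImplementedError as error:
        if "meta" not in str(error).lower():
            raise
        raise ValueError("Comparing meta tensors is currently not supported") from error

def fake_meta_kernel():
    # Stand-in for an aten op with no Meta-backend implementation.
    raise NotImplementedError(
        "Could not run 'aten::abs.out' with arguments from the 'Meta' backend."
    )

try:
    with raise_expressive_meta_error():
        fake_meta_kernel()
except ValueError as exc:
    print(exc)  # Comparing meta tensors is currently not supported
```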
I'm not in love with this mechanism for detecting if the not impl error is based on the tensor being a meta tensor or not, but it seems OK for now
Me neither. But I don't think there is a better option right now. We should remove this while resolving #68592.
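One alternative the thread hints at is to skip string matching entirely and inspect the tensors up front; real tensors expose `Tensor.is_meta`. This sketch uses a tiny stub in place of `torch.Tensor` so it runs standalone; the function name is an assumption, not the PR's code:

```python
class StubTensor:
    """Minimal stand-in for torch.Tensor; real tensors have `.is_meta`."""
    def __init__(self, device_type):
        self.is_meta = device_type == "meta"

def ensure_comparable(actual, expected):
    # Hypothetical up-front check: reject meta tensors before any
    # kernel is dispatched, so no NotImplementedError parsing is needed.
    if actual.is_meta or expected.is_meta:
        raise ValueError("Comparing meta tensors is currently not supported")

ensure_comparable(StubTensor("cpu"), StubTensor("cpu"))  # passes silently
try:
    ensure_comparable(StubTensor("meta"), StubTensor("cpu"))
except ValueError as exc:
    print(exc)  # Comparing meta tensors is currently not supported
```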
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Stack from ghstack:

- `torch.testing.assert_equal` in `TestCase.assertEqual` (#67796)

Without this patch, the error message of comparing meta tensors looks like this after #68722 was merged (see the traceback above). Thus, we follow our own advice and turn it into an expected exception until #68592 is resolved.
Differential Revision: D33542999