[pt2] add metas for max_unpool2d and max_unpool3d
#103821
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/103821
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit da9b41f.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Keeping the skips in.
      TORCH_CHECK(
          output_size.size() == 2,
    -     "There should be exactly two elements (width, height) in output_size, but got ", output_size.size(), " elements.");
    +     "There should be exactly two elements (height, width) in output_size, but got ", output_size.size(), " elements.");
Typo fix: the error message listed the elements in the wrong order; output_size is (height, width).
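For reference, a minimal sketch of the corrected ordering in action (standard torch.nn modules; the shapes are arbitrary):

    import torch
    import torch.nn as nn

    pool = nn.MaxPool2d(2, return_indices=True)
    unpool = nn.MaxUnpool2d(2)
    x = torch.randn(1, 1, 6, 8)                 # (N, C, H=6, W=8)
    out, idx = pool(x)
    y = unpool(out, idx, output_size=(6, 8))    # output_size is (height, width)
    assert y.shape == x.shape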
          .view_symint(size)
          .gather(-1, indices_view)
    -     .view(indices.sizes());
    +     .view_symint(indices.sym_sizes());
This SymInt path is exercised by, for example, test_aot_autograd_symbolic_exhaustive_nn_functional_max_unpool1d_cpu_float32.
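For context, a rough Python sketch of the gather pattern in this hunk (names are illustrative, not the actual ATen code): the pooled values are read back out via gather, and the result is viewed through the indices' own shape, which stays valid when the sizes are symbolic:

    import torch

    def gather_by_indices(output: torch.Tensor, indices: torch.Tensor) -> torch.Tensor:
        n, c = output.shape[0], output.shape[1]
        flat = output.reshape(n, c, -1)   # (N, C, H*W)
        idx = indices.reshape(n, c, -1)   # (N, C, h*w)
        vals = flat.gather(-1, idx)       # pooled values at the max locations
        # Viewing with indices' own shape (rather than cached concrete ints)
        # is what keeps the op traceable when the sizes are SymInts.
        return vals.view(indices.shape)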
    from torch._ops import OpOverload
    from torch._prims import _elementwise_meta, ELEMENTWISE_PRIM_TYPE_PROMOTION_KIND
    from torch._prims_common import (
        corresponding_complex_dtype,
These alerts are needed for, e.g.:
PYTORCH_TEST_WITH_INDUCTOR=1 python -bb test/test_torch.py -v --use-pytest --import-slow-tests --import-disabled-tests -k test_nondeterministic_alert_MaxUnpool1d_cuda_float16 -v --capture=no
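A hedged sketch of what such an alert looks like in a Python meta kernel (the helper name and message wording are illustrative; torch.are_deterministic_algorithms_enabled and torch.is_deterministic_algorithms_warn_only_enabled are real public APIs):

    import warnings
    import torch

    def alert_not_deterministic(caller: str) -> None:
        # Raise under torch.use_deterministic_algorithms(True), or only warn
        # when warn_only=True was requested.
        if torch.are_deterministic_algorithms_enabled():
            msg = (
                f"{caller} does not have a deterministic implementation, but you "
                f"set 'torch.use_deterministic_algorithms(True)'."
            )
            if torch.is_deterministic_algorithms_warn_only_enabled():
                warnings.warn(msg)
            else:
                raise RuntimeError(msg)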
        ),
    )

    self = self_.contiguous()
In core, this uses suggest_memory_format, so a bare contiguous() is slightly wrong. But is it even necessary? Will investigate.
Update: this only affects the 2d variant; I'm going to ignore it for now, as all tests pass.
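To illustrate the suggest_memory_format point (a sketch; the helper lives in torch._prims_common): a bare contiguous() forces torch.contiguous_format, while core would preserve a channels-last input:

    import torch
    from torch._prims_common import suggest_memory_format

    x = torch.randn(2, 3, 4, 5).to(memory_format=torch.channels_last)
    y = x.contiguous()                                        # contiguous_format
    z = x.contiguous(memory_format=suggest_memory_format(x))  # keeps channels_last
    assert not y.is_contiguous(memory_format=torch.channels_last)
    assert z.is_contiguous(memory_format=torch.channels_last)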
    else:
        nbatch = self.size(0)
        nchannels = self.size(1)
        result = self.new_empty((nbatch, nchannels, oheight, owidth))
In core, this takes a memory_format argument.
Update: again, only the 2d variant; ignoring it for now, as all tests pass.
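A hypothetical sketch of honoring that argument from Python (Tensor.new_empty does not accept memory_format, so this goes through torch.empty; the function name is made up):

    import torch
    from torch._prims_common import suggest_memory_format

    def new_result_like(self: torch.Tensor, oheight: int, owidth: int) -> torch.Tensor:
        nbatch, nchannels = self.shape[0], self.shape[1]
        # torch.empty accepts memory_format, unlike Tensor.new_empty.
        return torch.empty(
            (nbatch, nchannels, oheight, owidth),
            dtype=self.dtype,
            device=self.device,
            memory_format=suggest_memory_format(self),
        )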
| f"determinism just for this operation, or you can use the " | ||
| f"'warn_only=True' option, if that's acceptable for your application. " | ||
| f"You can also file an issue at https://github.com/pytorch/pytorch/issues " | ||
| f"to help us prioritize adding deterministic support for this operation.")) |
This corresponds to Context::alertNotDeterministic in core.
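For completeness, how the alert surfaces to users (illustrative only: the CPU eager kernel may not raise; the CUDA and meta paths are what the tests above exercise):

    import torch
    import torch.nn as nn

    torch.use_deterministic_algorithms(True, warn_only=True)
    pool = nn.MaxPool2d(2, return_indices=True)
    unpool = nn.MaxUnpool2d(2)
    x = torch.randn(1, 1, 4, 4, device="cuda")
    out, idx = pool(x)
    y = unpool(out, idx)  # warns about nondeterminism instead of raising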
LGTM
Stack from ghstack:
- SymInt support for max_pool ops #103951
- [pt2] add metas for max_unpool2d and max_unpool3d #103821 (this PR)