[MPS] Fix `torch.std`/`torch.var` default/correction handling #91203
Conversation
If `torch.std` or `torch.var` is invoked without any arguments, it should be assumed that `unbiased` is `True`. Also, if the `correction` parameter is specified, it should be used in the correction computation. Tested by adding `std` and `var` to the consistency tests.
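As a hedged illustration of the intended semantics (a plain-Python model, not the MPS kernel itself): the variance divisor is `n - correction`, where `correction` defaults to 1 when `unbiased` is `True` and to 0 when it is `False`. The function and variable names below are illustrative, not from the PR.

```python
def var(values, unbiased=True, correction=None):
    """Reference model of the divisor semantics (illustrative only).

    If `correction` is not given, it is derived from `unbiased`:
    unbiased=True -> correction=1 (Bessel's correction),
    unbiased=False -> correction=0.
    """
    if correction is None:
        correction = 1 if unbiased else 0
    n = len(values)
    mean = sum(values) / n
    return sum((v - mean) ** 2 for v in values) / (n - correction)

data = [1.0, 2.0, 3.0, 4.0]
print(var(data))                  # default: unbiased -> divides by n - 1
print(var(data, unbiased=False))  # divides by n
print(var(data, correction=2))    # explicit correction overrides the default
```

This models the bug being fixed: calling with no arguments must behave as `unbiased=True` (divide by `n - 1`), and an explicit `correction` must take precedence.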
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/91203
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures as of commit e1c07f5. This comment was automatically generated by Dr. CI and updates every 15 minutes.
@@ -771,7 +770,7 @@ Tensor std_var_common_impl_mps(
                                name:nil];
    MPSGraphTensor *outputTensor;

-   if (use_correction && correction_value)
+   if (correction_value)
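A hedged sketch of why the simpler guard works (names are illustrative, modeled in Python rather than the Objective-C++ of the kernel): once `correction_value` is resolved to an integer up front, checking the value alone is sufficient, because `correction_value == 0` already encodes "no correction".

```python
def needs_correction_term(unbiased=True, correction=None):
    # Hypothetical model of the guard: correction_value is derived once,
    # so `if (correction_value)` replaces `if (use_correction && correction_value)`.
    correction_value = correction if correction is not None else int(unbiased)
    return correction_value != 0

print(needs_correction_term())                # True: default unbiased -> correction 1
print(needs_correction_term(unbiased=False))  # False: correction 0
print(needs_correction_term(correction=0))    # False: explicit zero correction
```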
The default for `correction_value` is 1; does this condition work as intended?
Yes, that's the crux of the bugfix: the default value for `unbiased` is `True` rather than `False`.
It still bothers me a bit that `correction_value` is now an integer, which isn't clear from this condition or the one before. Can it only be 0 or 1?
Never mind, it was an integer even before; the `false` in `const auto correction_value = use_correction ? correction.value() : false;` was misleading.
Exactly; one of the opinfo tests even passes `torch.var(x, correction=2)` and compares against the CPU output.
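The kind of check described above can be mimicked in plain Python (a sketch, not the actual opinfo test): a `correction=1` result should match a trusted reference, here the standard library's sample variance, while `correction=2` yields a distinct value. The helper name is hypothetical.

```python
import statistics

def var_with_correction(values, correction):
    # Divisor is n - correction; correction=1 is Bessel's correction
    # and matches statistics.variance (the sample variance).
    n = len(values)
    mean = sum(values) / n
    return sum((v - mean) ** 2 for v in values) / (n - correction)

x = [0.5, 1.5, 2.5, 3.5, 4.5]
print(var_with_correction(x, 1))  # matches statistics.variance(x)
print(var_with_correction(x, 2))  # larger correction -> larger result
```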
@pytorchbot merge -f "MPS tests are green"
Merge started. Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Use Prims to implement `group_norm`, `group_norm_backward`, and `mean_var`. Use `torch._ops.ops` instead of `torch.ops` in numerous subpackages so that they can be imported from `torch/backend/mps/__init__.py`, as this alias, defined in https://github.com/pytorch/pytorch/blob/15af4b1ceea3b689354ad664675b930486804c8e/torch/__init__.py#L1095, is executed last during the init process. Depends on #91203. Fixes #88331. Pull Request resolved: #91190. Approved by: https://github.com/albanD
Fixes #91198