Fix fmod type promotion #48278
Conversation
💊 CI failures summary (Dr. CI): as of commit 2ee21e3, 2 failures not recognized by patterns (ci.pytorch.org: 1 failed).
Will fix part of #47779. Potentially fixes #48130. Removes tests due to no type promotion from #47323.

Differential Revision: [D25105245](https://our.internmc.facebook.com/intern/diff/D25105245)
Hey @mruberry, I am working on the type promotion for `pytorch/test/onnx/test_pytorch_onnx_onnxruntime.py`, lines 3840 to 3847 (at c81f2d9).
The error is: ONNX does not appear to have type promotion for integer fmod with a floating-point scalar. (The same issue applies to remainder as well.)
Is there any way to work around it? cc: @houseroad
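(For illustration, a minimal sketch of the promotion case being discussed, assuming the tensor-with-float-scalar behavior this PR introduces; ONNX's Mod operator, as far as I know, expects both operands to share a single type, so the exporter cannot rely on this implicit promotion.)

```python
import torch

x = torch.arange(1, 6, dtype=torch.int32)

# With this PR, the int32 tensor promotes against the Python float scalar,
# so the eager result is floating point:
print(torch.fmod(x, 1.5))        # tensor([1.0000, 0.5000, 0.0000, 1.0000, 0.5000])
print(torch.fmod(x, 1.5).dtype)  # torch.float32
```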
Cool!
That bites.
When I faced a similar issue, the ONNX team implemented a "best effort" type promotion for the operator; see #43787. That was very generous of them. I'd suggest giving them a chance to comment on what they'd like to see happen. Possible ideas could be:
Thanks for the advice!
Talked to Mike today; I will follow the instructions in the BC-breaking doc. I also need to update the documentation, which currently says the divisor can only be a Tensor (of the same shape as the dividend) or a float.
Interesting question. Maybe a good question for @rgommers about what NumPy intends to support. As far as I can tell, this is defined by the types the ufunc defines, e.g.
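(For reference, a hedged illustration of inspecting the loops a ufunc defines via its `types` attribute; the listed signatures are indicative and may vary by NumPy version.)

```python
import numpy as np

# Each entry is an input->output signature. Note there is no bool ('?') loop,
# so bool inputs get promoted before dispatch.
print(np.fmod.types)
# e.g. ['bb->b', 'BB->B', 'hh->h', ..., 'ff->f', 'dd->d', 'gg->g', 'OO->O']
```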
Ah ya,
I understand.

```python
# Special cases
# tensor & number
# boolean tensor & boolean tensor
...
# General cases
if hasattr(x, 'dtype') and hasattr(y, 'dtype'):
    if np.promote_types(x.dtype, y.dtype) != torch.promote_types(n2t[x.dtype], n2t[y.dtype]):
        return t2n[torch.promote_types(n2t[x.dtype], n2t[y.dtype])]
```

This is going to work without explicitly running the combination of case 2) in the comment.
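(For context, a minimal sketch of the NumPy-vs-PyTorch promotion discrepancy the check above guards against; the `n2t`/`t2n` maps below are hypothetical stand-ins for whatever mapping the test file actually uses.)

```python
import numpy as np
import torch

# Hypothetical numpy<->torch dtype maps (only a few entries, for illustration).
n2t = {np.dtype(np.int32): torch.int32,
       np.dtype(np.int64): torch.int64,
       np.dtype(np.float32): torch.float32,
       np.dtype(np.float64): torch.float64}
t2n = {v: k for k, v in n2t.items()}

x = np.zeros(3, dtype=np.int32)
y = np.zeros(3, dtype=np.float32)

# NumPy promotes int32 + float32 to float64, while PyTorch promotes to float32;
# this is exactly the case where the snippet above returns the torch-side dtype.
assert np.promote_types(x.dtype, y.dtype) == np.dtype(np.float64)
assert torch.promote_types(n2t[x.dtype], n2t[y.dtype]) == torch.float32
```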
What NumPy does right now seems right to me, and I can't find any discussion about it.
```diff
 // Issue #47779: https://github.com/pytorch/pytorch/issues/47779
-if (isIntegralType(iter.dtype(), /*includeBool=*/ false)) {
-  AT_DISPATCH_INTEGRAL_TYPES(iter.dtype(), "fmod_cpu", [&]() {
+if (isIntegralType(iter.common_dtype(), /*includeBool=*/ true)) {
```
Since bool dispatch is (slightly) controversial, let's just remove it for now. It's true that NumPy promotes to int8 in this case, but for analogous cases, like True + True, NumPy will not promote.
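(A small NumPy snippet illustrating the behavior described above, assuming current NumPy promotion rules.)

```python
import numpy as np

a = np.array(True)
b = np.array(True)

# fmod has no bool loop, so NumPy promotes the bool inputs to int8:
print(np.fmod(a, b).dtype)  # int8

# add does have a bool loop, so True + True stays bool (no promotion):
print((a + b).dtype)        # bool
```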
```diff
 // Issue #47779: https://github.com/pytorch/pytorch/issues/47779
-if (isIntegralType(iter.dtype(), /*includeBool*/ false)) {
-  AT_DISPATCH_INTEGRAL_TYPES(iter.dtype(), "fmod_cuda", [&]() {
+if (isIntegralType(iter.common_dtype(), /*includeBool*/ true)) {
```
Remove bool dispatch here, too. We can revisit it in a later PR.
```diff
@@ -349,6 +349,80 @@ def wrapped_fn(x):
 
     return wrapped_fn
 
+def np_binary_ufunc_type_promotion_wrapper(fn):
```
I appreciate the attention to detail with this function, and we may bring it back later, but for this PR a simpler approach is probably to remove this function and the following two and, instead, just use torch.result_type and pre-cast all the NumPy inputs to the corresponding NumPy dtype. For example:

```python
dtype = torch.result_type(input0, input1)
np_input0 = input0.to(dtype).cpu().numpy()
np_input1 = input1.to(dtype).cpu().numpy()
```

Or, even simpler, just ignore the dtype in the comparison:

```python
self.assertEqual(expected, actual, exact_dtype=False)
```
I think either of these solutions will work for this PR, and for the remainder PR, too. Let me know if you have questions about this; we can review it together.
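(A runnable sketch of the first approach outside the test harness; the input names and the `fmod` comparison here are illustrative, not lifted from the test file.)

```python
import numpy as np
import torch

input0 = torch.arange(1, 6, dtype=torch.int32)
input1 = torch.tensor(1.2)

# Pre-cast both NumPy inputs to the promoted dtype before comparing.
dtype = torch.result_type(input0, input1)
np_input0 = input0.to(dtype).cpu().numpy()
np_input1 = input1.to(dtype).cpu().numpy()

expected = np.fmod(np_input0, np_input1)
actual = torch.fmod(input0, input1)
np.testing.assert_allclose(actual.cpu().numpy(), expected, rtol=1e-6)
```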
I will choose the second solution, because the first would still require some hard-coding for the corner cases where the arguments are a tensor and a Python number.
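(For context, a minimal sketch of the tensor-and-Python-number corner case; the NumPy dtype below assumes its usual value-based promotion for Python scalars, so treat it as an assumption rather than something stated in this PR.)

```python
import numpy as np
import torch

x_t = torch.arange(1, 6, dtype=torch.int32)
x_n = x_t.numpy()

# With this PR, PyTorch promotes the int32 tensor against the Python float
# to the default float dtype:
print(torch.fmod(x_t, 1.2).dtype)  # torch.float32

# NumPy promotes the same operation to float64, so exact_dtype=False (or a
# hard-coded expected dtype) is needed when comparing the two.
print(np.fmod(x_n, 1.2).dtype)     # float64
```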
I have two remaining suggestions:
- let's error on bool for now
- let's simplify the type promotion discrepancy (ping me to discuss this further)
Other than that things look good.
Remove various lines from tests due to no type promotion introduced from #47323

## BC-breaking Note:

In order to make the `fmod` operator have type promotion, we have to introduce a BC break.

### 1.7.1:

- For two tensor arguments, the output dtype is determined by the first input (Tensor), so it won't support floating-point to int casting and raises a RuntimeError.

```python
>>> x = torch.arange(start=1, end=6, dtype=torch.int32)  # tensor([1, 2, 3, 4, 5])
>>> y = torch.arange(start=1.1, end=2.1, step=0.2, dtype=torch.float32)  # tensor([1.1, 1.3, 1.5, 1.7, 1.9])
>>> torch.fmod(x, y)
RuntimeError: result type Float can't be cast to the desired output type Int
>>> z = torch.arange(start=0.2, end=1.1, step=0.2, dtype=torch.float64)  # tensor([0.2, 0.4, 0.6, 0.8, 1.], dtype=torch.float64)
>>> torch.fmod(y, z).dtype
torch.float32
>>> torch.fmod(z, y).dtype
torch.float64
```

- For tensor and scalar arguments, the result is cast to the dtype of the first argument.

```python
>>> torch.fmod(x, 1.2)
tensor([0, 0, 0, 0, 0], dtype=torch.int32)
```

- Bool Tensor is **not supported**

```python
>>> a = b = torch.tensor(True)
>>> torch.fmod(a, b)
RuntimeError: "fmod_cpu" not implemented for 'Bool'
>>> torch.fmod(a, 1)
RuntimeError: "fmod_scalar_cpu" not implemented for 'Bool'
```

### [Prior PR](#47323):

- For two tensor arguments,

```python
>>> x = torch.arange(start=1, end=6, dtype=torch.int32)  # tensor([1, 2, 3, 4, 5])
>>> y = torch.arange(start=1.1, end=2.1, step=0.2, dtype=torch.float32)  # tensor([1.1, 1.3, 1.5, 1.7, 1.9])
>>> torch.fmod(x, y)
tensor([1.0000, 0.7000, 0.0000, 0.6000, 1.2000])
>>> z = torch.arange(start=0.2, end=1.1, step=0.2, dtype=torch.float64)  # tensor([0.2, 0.4, 0.6, 0.8, 1.], dtype=torch.float64)
>>> torch.fmod(y, z).dtype
torch.float64
>>> torch.fmod(z, y).dtype
torch.float64
```

- For tensor and scalar arguments, the result is cast to the dtype of the first argument.

```python
>>> torch.fmod(x, 1.2)
tensor([0, 0, 0, 0, 0], dtype=torch.int32)
```

- Bool Tensor is **not supported**

```python
>>> a = b = torch.tensor(True)
>>> torch.fmod(a, b)
RuntimeError: "fmod_cpu" not implemented for 'Bool'
>>> torch.fmod(a, 1)
RuntimeError: "fmod_cpu" not implemented for 'Bool'
```

### Current:

- The result type is determined by both inputs.

```python
>>> x = torch.arange(start=1, end=6, dtype=torch.int32)  # tensor([1, 2, 3, 4, 5])
>>> y = torch.arange(start=1.1, end=2.1, step=0.2, dtype=torch.float32)  # tensor([1.1, 1.3, 1.5, 1.7, 1.9])
>>> torch.fmod(x, y)
tensor([1.0000, 0.7000, 0.0000, 0.6000, 1.2000])
>>> z = torch.arange(start=0.2, end=1.1, step=0.2, dtype=torch.float64)  # tensor([0.2, 0.4, 0.6, 0.8, 1.], dtype=torch.float64)
>>> torch.fmod(y, z).dtype
torch.float64
>>> torch.fmod(z, y).dtype
torch.float64
```

- For tensor and scalar arguments, the dtype of the result is promoted.

```python
>>> torch.fmod(x, 1.2)
tensor([1.0000, 0.8000, 0.6000, 0.4000, 0.2000])
```

- Bool Tensors are **supported** as inputs.

```python
>>> a = b = torch.tensor(True)
>>> torch.fmod(a, b)
tensor(False)
>>> torch.fmod(a, 0.7)
tensor(0.3000)
```

Differential Revision: [D25105245](https://our.internmc.facebook.com/intern/diff/D25105245)
Remove various lines from tests due to no type promotion introduced from #47323

## BC-breaking Note:

In order to make the `fmod` operator have type promotion, we have to introduce a BC break.

### 1.7.1:

In the case where the second argument is a Python number, the result is cast to the dtype of the first argument.

```python
>>> torch.fmod(x, 1.2)
tensor([0, 0, 0, 0, 0], dtype=torch.int32)
```

### Prior PR:

Check the BC-breaking note of #47323

### This PR:

In the case where the second argument is a Python number, the dtype of the result is determined by type promotion of both inputs.

```python
>>> torch.fmod(x, 1.2)
tensor([1.0000, 0.8000, 0.6000, 0.4000, 0.2000])
```

Differential Revision: [D25105245](https://our.internmc.facebook.com/intern/diff/D25105245)
This looks good, nice work, @ejguan. I've made a few small suggestions for the documentation and a simple change to the kernel functions that should be easy to implement. Once this is imported we should run an additional BC-breaking test on the PR, just in case. Ping me and I can talk you through our process for doing that.
```diff
-if (isIntegralType(iter.dtype(), /*includeBool*/ false)) {
-  AT_DISPATCH_INTEGRAL_TYPES(iter.dtype(), "fmod_cuda", [&]() {
+if (isIntegralType(iter.common_dtype(), /*includeBool*/ false)) {
+  AT_DISPATCH_INTEGRAL_TYPES(iter.common_dtype(), "fmod_cuda", [&]() {
     gpu_kernel_with_scalars(iter, []GPU_LAMBDA(scalar_t a, scalar_t b) -> scalar_t {
       return a % b;
     });
   });
 } else {
```
Take this dispatch out of the else branch, like this:

```
if ... {
}
# dispatch
```
I think we still need to keep it within the else block. Otherwise, we will dispatch two times for this operation.
Oh I see what you're saying, yes that's fine. Sorry. Please ignore my suggestions to take them out, we can keep them in the branch.
```diff
     cpu_kernel(iter, [=](scalar_t x, scalar_t d) -> scalar_t {
       TORCH_CHECK(d != 0, "ZeroDivisionError");
       return x % d;
     });
   });
 } else {
-  AT_DISPATCH_FLOATING_TYPES_AND(kHalf, iter.dtype(), "fmod_cpu", [&]() {
+  AT_DISPATCH_FLOATING_TYPES_AND(kHalf, iter.common_dtype(), "fmod_cpu", [&]() {
```
Take this dispatch out of the else branch
Stack from ghstack:

- #48278 Fix `fmod` type promotion (this PR)

Remove various lines from tests due to no type promotion introduced from #47323

## BC-breaking Note:

In order to make the `fmod` operator have type promotion, we have to introduce a BC break.

### 1.7.1:

In the case where the second argument is a Python number, the result is cast to the dtype of the first argument.

### Prior PR:

Check the BC-breaking note of #47323

### This PR:

In the case where the second argument is a Python number, the dtype of the result is determined by type promotion of both inputs.

### BC-breaking tests:

- Nightly
- This PR

This PR doesn't introduce any new failure or timeout model.

Differential Revision: D25869137
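(For quick reference, the behavior change summarized above, reusing the example from the earlier PR description.)

```python
import torch

x = torch.arange(start=1, end=6, dtype=torch.int32)

# 1.7.1: the result was cast back to x's dtype:
#   torch.fmod(x, 1.2) -> tensor([0, 0, 0, 0, 0], dtype=torch.int32)

# This PR: the result dtype follows type promotion of both inputs:
print(torch.fmod(x, 1.2))
# tensor([1.0000, 0.8000, 0.6000, 0.4000, 0.2000])
```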