
Fix fmod type promotion #48278

Closed · wants to merge 32 commits

Conversation

ejguan (Contributor) commented Nov 19, 2020

Stack from ghstack:

Removes various test lines that assumed the no-type-promotion behavior introduced in #47323.

BC-breaking Note:

Making the `fmod` operator perform type promotion requires a BC-breaking change.

1.7.1:

When the second argument is a Python number, the result is cast to the dtype of the first argument.

>>> x = torch.arange(start=1, end=6, dtype=torch.int32)  # tensor([1, 2, 3, 4, 5], dtype=torch.int32)
>>> torch.fmod(x, 1.2)
tensor([0, 0, 0, 0, 0], dtype=torch.int32)

Prior PR:

See the BC-breaking note of #47323.

This PR:

When the second argument is a Python number, the dtype of the result is determined by type promotion of both inputs.

>>> torch.fmod(x, 1.2)
tensor([1.0000, 0.8000, 0.6000, 0.4000, 0.2000])

BC-breaking tests (Nightly vs. this PR): this PR doesn't introduce any new failing or timed-out models.

Differential Revision: D25869137

[ghstack-poisoned]
ejguan added a commit that referenced this pull request Nov 19, 2020
ghstack-source-id: 692b680ff7106c5e5489ff05eb62870b9e07ca41
Pull Request resolved: #48278
@ejguan ejguan changed the title Fix fmod type promotion [WIP] Fix fmod type promotion Nov 19, 2020
dr-ci bot commented Nov 19, 2020

💊 CI failures summary and remediations

As of commit 2ee21e3 (more details on the Dr. CI page):

  • 4/4 failures possibly* introduced in this PR
    • 2/4 non-CircleCI failure(s)

2 failures not recognized by patterns:

  Job                                                   Step          Action
  CircleCI pytorch_windows_vs2019_py36_cuda10.1_test2   Test          🔁 rerun
  CircleCI pytorch_ios_12_0_0_x86_64_build              Run Fastlane  🔁 rerun

ci.pytorch.org: 1 failed

Will fix part of #47779
Potentially fixes #48130
Removes tests that assumed no type promotion, from #47323

Differential Revision: [D25105245](https://our.internmc.facebook.com/intern/diff/D25105245)

[ghstack-poisoned]
ejguan added a commit that referenced this pull request Nov 20, 2020
ghstack-source-id: 29d4470dabd99e79220b59005307d6438430bd79
Pull Request resolved: #48278
ejguan added a commit that referenced this pull request Nov 30, 2020
ghstack-source-id: 86937df88bfcd84fc678a77897f3003b2e84a908
Pull Request resolved: #48278
ejguan (Contributor, Author) commented Dec 1, 2020

Hey @mruberry

I am working on the type promotion for fmod (remainder will be in another PR).
All of the tests pass except one ONNX test, where the new type promotion conflicts with ONNX:

    @skipIfUnsupportedMinOpsetVersion(10)
    def test_fmod_scalar(self):
        class FModModel(torch.nn.Module):
            def forward(self, input):
                return torch.fmod(input, 2.55)

        x = torch.randint(10, (2, 3))
        self.run_test(FModModel(), x)

The error is: (error screenshot omitted)
ONNX doesn't seem to have type promotion for fmod of an integer tensor by a floating-point scalar. (It will be the same issue for remainder as well.)

Is there any way to work around it?
I am thinking about changing the ONNX test from an integral input to a floating-point input, as sketched below.
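For example, a rough sketch of that idea (hypothetical, not the final test change; `FModModel` and `run_test` as above):

```python
# Use a floating-point input so the ONNX export needs no int -> float promotion.
x = torch.rand(2, 3) * 10
self.run_test(FModModel(), x)
```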

cc: @houseroad

@mruberry mruberry added the module: onnx Related to torch.onnx label Dec 2, 2020
mruberry (Collaborator) commented Dec 2, 2020

> I am working on the type promotion for fmod (remainder will be in another PR)

Cool!

> All of the tests pass except one ONNX test, where the new type promotion conflicts with ONNX

That bites.

> Is there any way to work around it?
> I am thinking about changing the ONNX test from an integral input to a floating-point input.

When I faced a similar issue the ONNX team implemented a "best effort" type promotion for the operator, see #43787. That was very generous of them.

I'd suggest giving them a chance to comment on what they'd like to see happen. Possible ideas:

  • temporarily disabling the int test and filing an issue for follow-up
  • consulting with them to identify an easy solution (like calling a function in the ONNX exporter that implements the appropriate type promotion)

cc @bzinodev @spandantiwari @lara-hdr @BowenBao @neginraoof

ejguan (Contributor, Author) commented Dec 2, 2020

> When I faced a similar issue the ONNX team implemented a "best effort" type promotion for the operator, see #43787. That was very generous of them.
>
> I'd suggest giving them a chance to comment on what they'd like to see happen. Possible ideas:
>
>   • temporarily disabling the int test and filing an issue for follow-up
>   • consulting with them to identify an easy solution (like calling a function in the ONNX exporter that implements the appropriate type promotion)

Thanks for the advice!

@ejguan ejguan changed the title [WIP] Fix fmod type promotion Fix fmod type promotion Dec 3, 2020
ejguan (Contributor, Author) commented Dec 23, 2020

Bool tensors are not supported:

    >>> a = b = torch.tensor(True)
    >>> torch.fmod(a, b)
    RuntimeError: "fmod_cpu" not implemented for 'Bool'
    >>> torch.fmod(a, 1)
    RuntimeError: "fmod_scalar_cpu" not implemented for 'Bool'

> This is part of the BC-breaking note, but this isn't actually a BC-breaking change, is it?

Talked to Mike today; I will follow the instructions in the BC-breaking doc. I also need to update the documentation, which currently says the divisor can only be a Tensor (same shape as the dividend) or a float.
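With this PR the divisor can be a Tensor or any Python number; a quick illustration (the float-divisor output is from the examples above, the int-divisor line is my own check):

```python
>>> x = torch.arange(start=1, end=6, dtype=torch.int32)
>>> torch.fmod(x, 2)    # integer divisor keeps the integer dtype
tensor([1, 0, 1, 0, 1], dtype=torch.int32)
>>> torch.fmod(x, 1.2)  # float divisor now promotes the result
tensor([1.0000, 0.8000, 0.6000, 0.4000, 0.2000])
```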

gchanan (Contributor) commented Dec 23, 2020

> I am not sure if I should set the result dtype to int8 as NumPy does or keep it as bool, as both dtypes make sense to me. #48278 (comment)

Interesting question. Maybe a good question for @rgommers about what NumPy intends to support. As far as I can tell, this is defined by the types the ufunc defines, e.g. np.fmod.types is different from np.multiply.types -- if all your "wrap ufunc" code actually uses ufuncs, you could write the function in terms of the types instead of memorizing the combinations.

gchanan (Contributor) commented Dec 23, 2020

Ah yeah, ? represents np.bool_ (see https://numpy.org/devdocs/reference/arrays.scalars.html#numpy.bool_), and np.multiply.types has ??->? but np.fmod.types doesn't. You can also see this by passing dtype=np.bool_ to the operator.
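A quick way to verify this from Python (illustrative snippet, not from the PR):

```python
import numpy as np

# '?' is the ufunc type code for np.bool_.
print('??->?' in np.multiply.types)  # True: multiply defines a bool loop
print('??->?' in np.fmod.types)      # False: fmod has no bool loop
```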

ejguan (Contributor, Author) commented Dec 23, 2020

> I am not sure if I should set the result dtype to int8 as NumPy does or keep it as bool, as both dtypes make sense to me. #48278 (comment)

> Interesting question. Maybe a good question for @rgommers about what NumPy intends to support. As far as I can tell, this is defined by the types the ufunc defines, e.g. np.fmod.types is different from np.multiply.types -- if all your "wrap ufunc" code actually uses ufuncs, you could write the function in terms of the types instead of memorizing the combinations.

I understand that `types` gives me the rules of the NumPy operators, but it won't tell me where the type promotion rules differ between NumPy and torch. What I can do to make the wrapper_fn better is:

    # Special cases:
    #   tensor & number
    #   boolean tensor & boolean tensor
    ...
    # General case: n2t maps NumPy dtypes to torch dtypes, t2n the reverse.
    # When the two promotion rules disagree, return the NumPy dtype that
    # corresponds to torch's promoted dtype.
    if hasattr(x, 'dtype') and hasattr(y, 'dtype'):
        promoted = torch.promote_types(n2t[x.dtype], n2t[y.dtype])
        if np.promote_types(x.dtype, y.dtype) != t2n[promoted]:
            return t2n[promoted]

This will work without explicitly enumerating the combinations of case 2) in the comment.

rgommers (Collaborator) commented

> I am not sure if I should set the result dtype to int8 as NumPy does or keep it as bool, as both dtypes make sense to me. #48278 (comment)

> Interesting question. Maybe a good question for @rgommers about what NumPy intends to support. As far as I can tell, this is defined by the types the ufunc defines, e.g. np.fmod.types is different from np.multiply.types -- if all your "wrap ufunc" code actually uses ufuncs, you could write the function in terms of the types instead of memorizing the combinations.

What NumPy does right now seems right to me, and I can't find any discussion about fmod of boolean arrays. Casting to integer for numerical operations sounds reasonable in general, and for fmod in particular the result is meant to be signed, so using bool output would be odd, I think.
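A minimal check of the NumPy behavior under discussion (illustrative only):

```python
import numpy as np

a = np.array([True, False])
b = np.array([True, True])
out = np.fmod(a, b)    # bool arrays are cast to int8 before the fmod loop runs
print(out, out.dtype)  # [0 0] int8
```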

CPU kernel, diff excerpt:

      // Issue #47779: https://github.com/pytorch/pytorch/issues/47779
    - if (isIntegralType(iter.dtype(), /*includeBool=*/ false)) {
    -   AT_DISPATCH_INTEGRAL_TYPES(iter.dtype(), "fmod_cpu", [&]() {
    + if (isIntegralType(iter.common_dtype(), /*includeBool=*/ true)) {

Review comment (Collaborator):

Since bool dispatch is (slightly) controversial let's just remove it for now. It's true that NumPy promotes to int8 in this case, but for analogous cases, like True + True, NumPy will not promote.
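For comparison, a one-liner showing the non-promoting case (illustrative, not from the PR):

```python
import numpy as np

# np.add defines a bool loop, so True + True stays bool (no promotion to int8):
print(np.add(True, True), np.add(True, True).dtype)  # True bool
```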

CUDA kernel, diff excerpt:

      // Issue #47779: https://github.com/pytorch/pytorch/issues/47779
    - if (isIntegralType(iter.dtype(), /*includeBool*/ false)) {
    -   AT_DISPATCH_INTEGRAL_TYPES(iter.dtype(), "fmod_cuda", [&]() {
    + if (isIntegralType(iter.common_dtype(), /*includeBool*/ true)) {

Review comment (Collaborator):

Remove bool dispatch here, too. We can revisit it in a later PR.

Diff excerpt:

    @@ -349,6 +349,80 @@ def wrapped_fn(x):

         return wrapped_fn

    + def np_binary_ufunc_type_promotion_wrapper(fn):

mruberry (Collaborator) commented Dec 28, 2020

I appreciate the attention to detail with this function, and we may bring it back later, but for this PR a simpler approach is probably to remove this function and the following two and, instead, just use torch.result_type and pre-cast all the NumPy inputs to the corresponding NumPy dtype. For example:

    dtype = torch.result_type(input0, input1)
    np_input0 = input0.to(dtype).cpu().numpy()
    np_input1 = input1.to(dtype).cpu().numpy()

Or, even simpler, just ignore the dtype in the comparison:

    self.assertEqual(expected, actual, exact_dtype=False)

I think either of these solutions will work for this PR, and for the remainder PR, too. Let me know if you have questions about this; we can review it together.
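A rough sketch of how option 1 could look inside a test (hypothetical helper and names, not the PR's code; `input0`/`input1` as above):

```python
import numpy as np
import torch

def compare_with_numpy_fmod(self, input0, input1):
    # Cast both inputs to torch's promoted dtype up front so NumPy
    # computes in the same dtype torch will use.
    dtype = torch.result_type(input0, input1)
    np_input0 = input0.to(dtype).cpu().numpy()
    np_input1 = input1.to(dtype).cpu().numpy()
    expected = torch.from_numpy(np.fmod(np_input0, np_input1))
    actual = torch.fmod(input0, input1)
    self.assertEqual(expected, actual)
```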

ejguan (Contributor, Author) replied:

I will choose the second solution, because the first would still require some hard-coding for the corner cases where the arguments are a tensor and a Python number.

mruberry (Collaborator) left a comment

I have two remaining suggestions:

  • let's error on bool for now
  • let's simplify the type promotion discrepancy (ping me to discuss this further)

Other than that things look good.

Removes various test lines that assumed the no-type-promotion behavior introduced in #47323.

## BC-breaking Note:
Making the `fmod` operator perform type promotion requires a BC-breaking change.
### 1.7.1:
- For two tensor arguments, the output dtype is determined by the first (Tensor) argument, so a floating-point result cannot be cast to an integer output and a RuntimeError is raised.
```python
>>> x = torch.arange(start=1, end=6, dtype=torch.int32)  # tensor([1, 2, 3, 4, 5], dtype=torch.int32)
>>> y = torch.arange(start=1.1, end=2.1, step=0.2, dtype=torch.float32)  # tensor([1.1000, 1.3000, 1.5000, 1.7000, 1.9000])
>>> torch.fmod(x, y)
RuntimeError: result type Float can't be cast to the desired output type Int
>>> z = torch.arange(start=0.2, end=1.1, step=0.2, dtype=torch.float64)  # tensor([0.2000, 0.4000, 0.6000, 0.8000, 1.0000], dtype=torch.float64)
>>> torch.fmod(y, z).dtype
torch.float32
>>> torch.fmod(z, y).dtype
torch.float64
```
- For tensor and scalar arguments, the result is cast to the dtype of the first argument.
```python
>>> torch.fmod(x, 1.2)
tensor([0, 0, 0, 0, 0], dtype=torch.int32)
```
- Bool tensors are **not supported**.
```python
>>> a = b = torch.tensor(True)
>>> torch.fmod(a, b)
RuntimeError: "fmod_cpu" not implemented for 'Bool'
>>> torch.fmod(a, 1)
RuntimeError: "fmod_scalar_cpu" not implemented for 'Bool'
```
### [Prior PR](#47323):
- For two tensor arguments (using x, y, z from above), the result dtype is promoted:
```python
>>> torch.fmod(x, y)
tensor([1.0000, 0.7000, 0.0000, 0.6000, 1.2000])
>>> torch.fmod(y, z).dtype
torch.float64
>>> torch.fmod(z, y).dtype
torch.float64
```
- For tensor and scalar arguments, the result is still cast to the dtype of the first argument.
```python
>>> torch.fmod(x, 1.2)
tensor([0, 0, 0, 0, 0], dtype=torch.int32)
```
- Bool tensors are **not supported**.
```python
>>> torch.fmod(a, b)
RuntimeError: "fmod_cpu" not implemented for 'Bool'
>>> torch.fmod(a, 1)
RuntimeError: "fmod_cpu" not implemented for 'Bool'
```
### Current:
- The result dtype is determined by type promotion of both inputs.
```python
>>> torch.fmod(x, y)
tensor([1.0000, 0.7000, 0.0000, 0.6000, 1.2000])
>>> torch.fmod(y, z).dtype
torch.float64
>>> torch.fmod(z, y).dtype
torch.float64
```
- For tensor and scalar arguments, the result dtype is promoted as well.
```python
>>> torch.fmod(x, 1.2)
tensor([1.0000, 0.8000, 0.6000, 0.4000, 0.2000])
```
- Bool tensors are **supported** as inputs.
```python
>>> a = b = torch.tensor(True)
>>> torch.fmod(a, b)
tensor(False)
>>> torch.fmod(a, 0.7)
tensor(0.3000)
```
Differential Revision: [D25105245](https://our.internmc.facebook.com/intern/diff/D25105245)

[ghstack-poisoned]
@ejguan ejguan requested a review from mruberry January 11, 2021 14:34
torch/_torch_docs.py: 3 outdated review comments (resolved)
mruberry (Collaborator) left a comment

This looks good; nice work, @ejguan. I've made a few small suggestions for the documentation and a simple change to the kernel functions that should be easy to implement. Once this is imported we should run an additional BC-breaking test on the PR, just in case. Ping me and I can talk you through our process for doing that.

CUDA kernel, diff excerpt:

    - if (isIntegralType(iter.dtype(), /*includeBool*/ false)) {
    -   AT_DISPATCH_INTEGRAL_TYPES(iter.dtype(), "fmod_cuda", [&]() {
    + if (isIntegralType(iter.common_dtype(), /*includeBool*/ false)) {
    +   AT_DISPATCH_INTEGRAL_TYPES(iter.common_dtype(), "fmod_cuda", [&]() {
          gpu_kernel_with_scalars(iter, []GPU_LAMBDA(scalar_t a, scalar_t b) -> scalar_t {
            return a % b;
          });
        });
      } else {

Review comment (Collaborator):

Take this dispatch out of the else branch, like this:

    if (...) {
    }

    // dispatch
ejguan (Contributor, Author) replied:

I think we still need to keep it within the else block; otherwise, we would dispatch twice for this operation.

mruberry (Collaborator) commented Jan 12, 2021

Oh, I see what you're saying; yes, that's fine. Sorry. Please ignore my suggestions to take them out; we can keep them in the branch.

CPU kernel, diff excerpt:

        cpu_kernel(iter, [=](scalar_t x, scalar_t d) -> scalar_t {
          TORCH_CHECK(d != 0, "ZeroDivisionError");
          return x % d;
        });
      });
    } else {
    -   AT_DISPATCH_FLOATING_TYPES_AND(kHalf, iter.dtype(), "fmod_cpu", [&]() {
    +   AT_DISPATCH_FLOATING_TYPES_AND(kHalf, iter.common_dtype(), "fmod_cpu", [&]() {

Review comment (Collaborator):

Take this dispatch out of the else branch.

facebook-github-bot (Contributor):

@ejguan merged this pull request in a0f7b18.

Labels: cla signed · Merged · module: bc-breaking (Related to a BC-breaking change) · module: onnx (Related to torch.onnx)

7 participants