
Disables complex min and max #36377

Closed · wants to merge 2 commits
Conversation

mruberry
Collaborator

Partially addresses #36374 by disabling min and max for complex inputs. test_complex_unsupported in test_torch.py is extended to validate this behavior.
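The disabling described here amounts to a dtype guard at the top of the op. As a rough, hypothetical sketch of that pattern (plain Python, not the actual PyTorch source; `checked_max` is an invented name):

```python
# Hypothetical sketch (not PyTorch code): guard an op with a dtype check so
# complex inputs raise an informative error instead of producing a result.
# Complex numbers have no natural total ordering, so min/max are ill-defined.
def checked_max(values):
    if any(isinstance(v, complex) for v in values):
        raise RuntimeError("max is not supported for complex dtypes")
    return max(values)

print(checked_max([1.0, 3.0, 2.0]))  # 3.0
```

In PyTorch itself the analogous guard would be a C++ `TORCH_CHECK` on the tensor's dtype, exercised by `test_complex_unsupported`.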

@mruberry mruberry added module: complex Related to complex number support in PyTorch module: numpy Related to numpy support, and also numpy compatibility of our operators labels Apr 10, 2020

dr-ci bot commented Apr 10, 2020

💊 CircleCI build failures summary and remediations

As of commit 1ed6766 (more details on the Dr. CI page):


None of the build failures appear to be your fault 💚



❄️ 1 tentatively flaky failure

1 failure was tentatively classified as flaky but has not triggered reruns to confirm:

See CircleCI build caffe2_onnx_main_py3_6_clang7_ubuntu16_04_build (1/1)

Step: "Build"

Apr 10 08:49:22 ++ export BUILD_ENVIRONMENT=caffe2-onnx-main-py3.6-clang7-ubuntu16.04-build 
Apr 10 08:49:22 ++ BUILD_ENVIRONMENT=caffe2-onnx-main-py3.6-clang7-ubuntu16.04-build 
Apr 10 08:49:22 ++ git submodule sync 
Apr 10 08:49:22 ++ git submodule update -q --init --recursive 
Apr 10 08:49:46 error: RPC failed; curl 56 GnuTLS recv error (-54): Error in the pull function. 
Apr 10 08:49:46 fatal: The remote end hung up unexpectedly 
Apr 10 08:49:46 fatal: early EOF 
Apr 10 08:49:46 fatal: index-pack failed 
Apr 10 08:49:46 fatal: clone of 'https://github.com/intel/mkl-dnn.git' into submodule path 'mkl-dnn' failed 
Apr 10 08:49:46 Failed to recurse into submodule path 'third_party/ideep' 


@facebook-github-bot (Contributor) left a comment:
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot

@mruberry merged this pull request in 7aa6a8f.

@mruberry mruberry deleted the disables_complex_min_max branch April 11, 2020 20:24
ashishfarmer pushed a commit to ashishfarmer/pytorch that referenced this pull request Apr 13, 2020
Summary:
Partially addresses pytorch#36374 by disabling min and max for complex inputs. test_complex_unsupported in test_torch.py is extended to validate this behavior.
Pull Request resolved: pytorch#36377

Differential Revision: D20964661

Pulled By: mruberry

fbshipit-source-id: 79606c2e88c17c702543f4af75847d2460586c2d
facebook-github-bot pushed a commit that referenced this pull request Jan 12, 2021
Summary:
Fixes #50064

**PROBLEM:**
In #36377, min/max functions were disabled for complex inputs via dtype checks.
However, min/max kernels were still being compiled and dispatched for complex types.

**FIX:**
The aforementioned dispatch has been disabled; we now rely on the errors produced
by the dispatch macro to keep these ops from running on complex inputs, instead of performing redundant dtype checks.
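The idea of leaning on the dispatch machinery can be sketched in miniature (hypothetical Python; `KERNELS`, `register`, and `dispatch_max` are invented names, not PyTorch APIs): a kernel is registered per dtype, and any dtype without a registered kernel fails inside the dispatcher itself, so the call site needs no separate dtype check.

```python
# Hypothetical sketch of the dispatch-macro idea: kernels are registered per
# dtype, and an unregistered dtype errors inside the dispatcher, making a
# redundant dtype check at the call site unnecessary.
KERNELS = {}

def register(dtype):
    def wrap(fn):
        KERNELS[dtype] = fn
        return fn
    return wrap

@register(float)
def max_float(values):
    return max(values)

def dispatch_max(dtype, values):
    try:
        kernel = KERNELS[dtype]
    except KeyError:
        # No complex kernel was ever registered, so complex fails here,
        # analogous to a dispatch macro with no case for complex types.
        raise RuntimeError(f"max not implemented for {dtype.__name__}") from None
    return kernel(values)

print(dispatch_max(float, [1.0, 2.0]))  # 2.0
```

In PyTorch the registration is done at compile time by the `AT_DISPATCH_*` macros, which simply have no case for complex types in these kernels.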

Pull Request resolved: #50347

Reviewed By: zhangguanheng66

Differential Revision: D25870385

Pulled By: anjali411

fbshipit-source-id: 921541d421c509b7a945ac75f53718cd44e77df1
facebook-github-bot pushed a commit that referenced this pull request Jan 18, 2021
…ispatch for CPU min/max pointwise ops (#50465)

Summary:
Fixes #50064

**PROBLEM DESCRIPTION:**
1. Dtype checks for complex types had not been removed in the previous PR (#50347) for this issue.
These type-checks were added in #36377, but are no longer necessary,
as we now rely upon dispatch macros to produce error messages.
2. dtype checks in `clamp_max()` and `clamp_min()` for complex inputs had not been removed either.
3. For min/max pointwise ops in TensorCompareKernel.cpp, complex dispatch had not been removed for min/max functions.

### **FIX DESCRIPTION:**
**FIX SUMMARY:**
1. Removed dtype checks added in #36377, and added 3 more in TensorCompare.cpp.
2. Removed dtype checks for complex inputs in `clamp_max()` and `clamp_min()`.
3.  Disabled complex dispatch for min/max pointwise ops in TensorCompareKernel.cpp.
4. Error messages in the exceptions raised when min/max ops are not implemented are now checked to contain the text _not support_ (which is also present in _not supported_) or _not implemented_, so that the messages remain informative.
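The substring check in point 4 can be illustrated with a small, hypothetical helper (`is_informative` is an invented name, not the actual test code):

```python
import re

# Hypothetical helper (not the actual PyTorch test code): an error message
# counts as informative if it mentions "not support" or "not implemented".
# Note that "not support" also matches inside "not supported".
def is_informative(message):
    return bool(re.search(r"not support|not implemented", message))

print(is_informative("max is not supported for complex dtypes"))  # True
print(is_informative("something went wrong"))                     # False
```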

**REASON FOR NOT CHANGING DISPATCH FOR CUDA AND CLAMP OPS**:

As for the CUDA min/max operations, their kernels do not seem to be compiled & dispatched for complex types anyway, so no further changes seem to be required. Basically, the dispatch macros currently being used don't have cases for complex types.

For example,

1. the reduce CUDA ops in [ReduceMinMaxKernel.cu](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cuda/ReduceMinMaxKernel.cu) use [`AT_DISPATCH_ALL_TYPES_AND2`](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/Dispatch.h#L548-L575), and that macro doesn't allow complex types.

2. In [MaxMinElementwiseKernel.cu](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cuda/MaxMinElementwiseKernel.cu), the CUDA pointwise ops use [`AT_DISPATCH_FLOATING_TYPES_AND2`](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/Dispatch.h#L240-L263) for non-integral & non-boolean types, and this macro doesn't have a case for complex types either.

3. [clamp CUDA ops](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cuda/UnaryOpsKernel.cu#L170-L211) use `AT_DISPATCH_ALL_TYPES_AND2`, which doesn't have a case for complex types.

Similarly, [CPU clamp min/max ops](https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cpu/UnaryOpsKernel.cpp#L428-L458) use the `AT_DISPATCH_ALL_TYPES_AND` dispatch macro, which doesn't have a case for complex types.

**REASON FOR ADDING 3 dtype CHECKS:**
There are a few cases in which the methods corresponding to `min_stub()` or `max_stub()` are not called, so dispatch macros don't get invoked, resulting in no exceptions being raised. Hence, `dtype` checks are necessary at 3 places to raise exceptions:

1. https://github.com/pytorch/pytorch/blob/52dcc7299925de055d330781d2fe0dad71182829/aten/src/ATen/native/TensorCompare.cpp#L342
2. https://github.com/pytorch/pytorch/blob/52dcc7299925de055d330781d2fe0dad71182829/aten/src/ATen/native/TensorCompare.cpp#L422
3. https://github.com/pytorch/pytorch/blob/52dcc7299925de055d330781d2fe0dad71182829/aten/src/ATen/native/TensorCompare.cpp#L389

The first dtype check requirement can be verified from the following example Python code based on `test_complex_unsupported()`:
```python
import unittest
import torch

class MyTestCase(unittest.TestCase):

    def test_1(self):
        t = torch.tensor((1 + 1j), device='cpu', dtype=torch.complex128)
        with self.assertRaises(Exception):
            torch.max(t, dim=0)

if __name__ == '__main__':
    unittest.main()
```

Pull Request resolved: #50465

Reviewed By: mruberry

Differential Revision: D25938106

Pulled By: ngimel

fbshipit-source-id: 95e2df02ba8583fa3ce87d4a2fdcd60b912dda46