Remove unnecessary dtype checks for complex types & disable complex dispatch for CPU min/max pointwise ops #50465

Closed · wants to merge 10 commits

Conversation

@imaginary-person (Contributor) commented Jan 13, 2021

Fixes #50064

PROBLEM DESCRIPTION:

  1. The previous PR for this issue (Disable complex dispatch on min/max functions #50347) had not removed the dtype checks for complex types.
    These type checks were added in Disables complex min and max #36377, but they are no longer necessary,
    as we now rely upon dispatch macros to produce error messages (see the sketch after this list).
  2. The dtype checks for complex inputs in clamp_max() and clamp_min() had not been removed either.
  3. Complex dispatch had not been disabled for the min/max pointwise ops in TensorCompareKernel.cpp.
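
A minimal sketch (not part of the PR) of what relying on dispatch macros means in practice: calling a min/max op on a complex tensor reaches the macro's default branch, which raises a RuntimeError with a dtype-specific message.

import torch

t = torch.tensor([1 + 1j, 2 + 2j], dtype=torch.complex128)
try:
    torch.max(t)
except RuntimeError as e:
    print(e)  # e.g. a "... not implemented for 'ComplexDouble'" message; exact wording varies by build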

FIX DESCRIPTION:

FIX SUMMARY:

  1. Removed dtype checks added in Disables complex min and max #36377, and added 3 more in TensorCompare.cpp.
  2. Removed dtype checks for complex inputs in clamp_max() and clamp_min().
  3. Disabled complex dispatch for min/max pointwise ops in TensorCompareKernel.cpp.
  4. The tests now check that the error messages in the exceptions raised for unimplemented min/max ops contain either the text not support (which is also a substring of not supported) or not implemented, so the messages are guaranteed to be informative. A sketch of this assertion pattern follows this list.
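
A minimal sketch (hypothetical; modeled on test_complex_unsupported(), which this PR extends) of the message-substring assertion described in item 4:

import unittest
import torch

class ComplexUnsupportedMessageTest(unittest.TestCase):

    def test_min_message_is_informative(self):
        t = torch.tensor([1 + 1j, 2 + 2j], dtype=torch.complex128)
        # "not support" also matches "not supported"
        with self.assertRaisesRegex(RuntimeError, "not support|not implemented"):
            torch.min(t)

if __name__ == '__main__':
    unittest.main()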

REASON FOR NOT CHANGING DISPATCH FOR CUDA AND CLAMP OPS:

As for the CUDA min/max operations, their kernels do not appear to be compiled & dispatched for complex types anyway, so no further changes seem to be required: the dispatch macros currently being used have no cases for complex types.

For example,

  1. the reduce CUDA ops use AT_DISPATCH_ALL_TYPES_AND2 in ReduceMinMaxKernel.cu, and that macro doesn't allow complex types.

  2. In MinMaxElementwiseKernel.cu, the CUDA pointwise ops use AT_DISPATCH_FLOATING_TYPES_AND2 for non-integral & non-boolean types, and this macro doesn't have a case for complex types either.

  3. clamp CUDA ops use AT_DISPATCH_ALL_TYPES_AND2, which doesn't have a case for complex types.

Similarly, CPU clamp min/max ops use the AT_DISPATCH_ALL_TYPES_AND dispatch macro, which doesn't have a case for complex types.
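
A brief illustration (not from the PR) of the resulting user-visible behavior: with no complex case in any of the clamp dispatch macros, clamping a complex tensor falls through to the macro's default branch, which raises.

import torch

t = torch.tensor([1 + 1j, 2 - 1j], dtype=torch.complex128)
try:
    torch.clamp(t, min=0)
except RuntimeError as e:
    print(e)  # wording comes from the dispatch macro's default error case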

REASON FOR ADDING 3 dtype CHECKS:
There are a few cases in which the methods registered for min_stub() or max_stub() are not called, so the dispatch macros never get invoked and no exception is raised. Hence, dtype checks are necessary at 3 places to raise exceptions:

  1. if (_dimreduce_return_trivial_no_ident(max, self, dim, keepdim, "max")) {
  2. if (_dimreduce_return_trivial_no_ident(min, self, dim, keepdim, "min")) {
  3. if (_dimreduce_return_trivial_no_ident(min, self, dim, keepdim, "min") &&

The first dtype check requirement can be verified from the following example Python code based on test_complex_unsupported():

import unittest
import torch

class MyTestCase(unittest.TestCase):

    def test_1(self):
        t = torch.tensor((1 + 1j), device='cpu', dtype=torch.complex128)
        with self.assertRaises(Exception):
            torch.max(t, dim=0)

if __name__ == '__main__':
    unittest.main()

@facebook-github-bot (Contributor) commented Jan 13, 2021

💊 CI failures summary and remediations

As of commit 52dcc72 (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)

ci.pytorch.org: 1 failed


This comment was automatically generated by Dr. CI.

@anjali411 anjali411 requested a review from ngimel January 13, 2021 15:01
@imaginary-person imaginary-person marked this pull request as draft January 13, 2021 17:16
@imaginary-person imaginary-person changed the title from "Remove dtype checks for complex types" to "Remove dtype checks for complex types & disable complex dispatch for CPU min/max pointwise ops" Jan 13, 2021
@imaginary-person (Contributor Author) commented Jan 13, 2021

All the errors are related to test_complex_unsupported() when the dim parameter is used.
AT_ERROR() also uses TORCH_CHECK() to raise a RuntimeError,
so I'll check why it's not working as intended and fix the bug after testing test_complex_unsupported() locally, as soon as my local debug build finishes.

@@ -340,6 +340,7 @@ static std::tuple<Tensor &,Tensor &> max_out_impl(Tensor& max, Tensor& max_indic
max_indices.device(), " for indices output");
dim = maybe_wrap_dim(dim, self.dim());
if (_dimreduce_return_trivial_no_ident(max, self, dim, keepdim, "max")) {
TORCH_CHECK(!self.is_complex(), "max does not support complex inputs.");
@imaginary-person (Contributor Author) commented on the diff above:

It's worth noting that dispatch handlers are invoked in the methods registered for min_stub & max_stub.
However, the tests fail in some conditional cases such as this one, in which these stubs are not being called!

We can't rely upon dispatch macros to raise exceptions if they're never invoked in the first place!
So, in these corner cases, we need dtype checks.

A Collaborator replied:

Great, thanks for investigating!

@codecov bot commented Jan 14, 2021

Codecov Report

Merging #50465 (d849f83) into master (ca5d961) will decrease coverage by 0.00%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master   #50465      +/-   ##
==========================================
- Coverage   80.73%   80.73%   -0.01%     
==========================================
  Files        1910     1910              
  Lines      207182   207158      -24     
==========================================
- Hits       167270   167244      -26     
- Misses      39912    39914       +2     

@imaginary-person imaginary-person marked this pull request as ready for review January 14, 2021 06:00
@imaginary-person imaginary-person changed the title from "Remove dtype checks for complex types & disable complex dispatch for CPU min/max pointwise ops" to "Remove unnecessary dtype checks for complex types & disable complex dispatch for CPU min/max pointwise ops" Jan 14, 2021
@imaginary-person (Contributor Author) commented Jan 14, 2021

> 💊 CI failures summary and remediations
> As of commit 52dcc72: 1/1 failures possibly introduced in this PR; 1/1 non-CircleCI failure(s); ci.pytorch.org: 1 failed.

This failure is not due to the code in this PR, as Jenkins builds for other PRs are still failing due to the same cause.
For example, https://ci.pytorch.org/jenkins/job/pytorch-builds/job/pytorch-linux-bionic-rocm3.10-py3.6-trigger/3500/

The error messages returned for complex inputs to min/max ops should be informative. Added checks for clamp as well.
@ngimel (Collaborator) commented Jan 15, 2021

This is looking really good! Can you please also remove dtype checks for clamp in UnaryOps.cpp, and rely on dispatch throwing the errors? Dispatch is not enabled for complex currently, so as far as I can see no dispatch changes are necessary.

@imaginary-person (Contributor Author) replied:

> This is looking really good! Can you please also remove dtype checks for clamp in UnaryOps.cpp, and rely on dispatch throwing the errors? Dispatch is not enabled for complex currently, so as far as I can see no dispatch changes are necessary.

Thanks for your advice! I'll do that right away.

Removed dtype checks for complex types, as the dispatch macro used (AT_DISPATCH_ALL_TYPES_AND) doesn't allow complex types and falls through to its error case anyway.
So, the dispatch macro can be relied upon to raise an exception for unsupported complex types.
@ngimel (Collaborator) left a comment:

Let's wait for CI, otherwise looks good.

@imaginary-person (Contributor Author) commented:

Hello @ngimel, the Codecov/patch failure seems insignificant: it flags the newly added line that performs a dtype check for complex types on a conditional path where no dispatch macro is invoked.

The check failed because no existing test exercises that condition, so that line of code is never visited at runtime.

@imaginary-person (Contributor Author) commented:

> 💊 CI failures summary and remediations
> As of commit 52dcc72: 1/1 failures possibly introduced in this PR; 1/1 non-CircleCI failure(s); ci.pytorch.org: 1 failed.

This unrelated issue was fixed in #50518

@facebook-github-bot (Contributor) left a comment:

@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@imaginary-person (Contributor Author) replied:

> Let's wait for CI, otherwise looks good.

Hello @ngimel, please let me know if I should add a test pertaining to torch._aminmax() for complex inputs in order to pass the Codecov/patch check. Thanks!

Tests to ensure that users receive a runtime exception for _aminmax(), which is currently not supported for complex types.
A dispatch macro handles all cases except the one in which the tensor has a single element
and the dimension has been provided as an argument. In that particular case, a dtype check
generates an exception. That dtype check was also added to TensorCompare.cpp in this PR.
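
A hypothetical sketch (the actual test added in the PR may differ) of the _aminmax() test described in the commit message above, covering both the dispatch-macro path and the single-element-with-dim path:

import unittest
import torch

class AminmaxComplexTest(unittest.TestCase):

    def test_aminmax_complex_raises(self):
        # Multi-element tensor: the dispatch macro's default branch raises.
        t = torch.tensor([1 + 1j, 2 + 2j], dtype=torch.complex128)
        with self.assertRaisesRegex(RuntimeError, "not support|not implemented"):
            torch._aminmax(t)
        # Single-element tensor with dim: the trivial-reduction path bypasses
        # the stubs, so the dtype check added in TensorCompare.cpp raises.
        s = torch.tensor(1 + 1j, dtype=torch.complex128)
        with self.assertRaisesRegex(RuntimeError, "not support|not implemented"):
            torch._aminmax(s, 0)

if __name__ == '__main__':
    unittest.main()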
Delete trailing spaces in comment
Remove whitespace from empty line
Remove extra whitespace
@facebook-github-bot (Contributor) left a comment:

@ngimel has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@ngimel (Collaborator) commented Jan 17, 2021

Thanks for adding the test!

@imaginary-person (Contributor Author) replied:

> Thanks for adding the test!

Thanks for the opportunity to contribute, @ngimel!
Codecov/patch is passing now, but some other internal FB tests still seem to be failing.
I'm unable to see the failures, but if required, I'll make any necessary changes. Thanks!

@ngimel (Collaborator) commented Jan 18, 2021

Internal failures are unrelated, I'm trying to land now. Thanks for the contribution!

@facebook-github-bot (Contributor) commented:

@ngimel merged this pull request in 3f052ba.


Successfully merging this pull request may close these issues.

Complex dispatch should be disabled on min/max functions