
[complex] Complex support for masked_scatter and autograd support for masked_scatter and masked_select #51281

Conversation

kshitij12345 (Collaborator) commented Jan 28, 2021:

Reference: #33152

Changes

  • Enable complex support for masked_scatter
  • Enable half support for masked_scatter CPU
  • Enable complex autograd support for masked_scatter (CPU only) and masked_select (both CPU and CUDA)

Note:
Complex support for masked_scatter on CUDA remains disabled because it depends on masked_fill, which has not yet been ported to ATen.
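For readers unfamiliar with these ops, the semantics this PR extends to complex dtypes can be sketched as a plain-Python reference model (a simplification over flat lists, ignoring broadcasting and shapes; this is not PyTorch's actual implementation):

```python
def masked_select(values, mask):
    # Mirrors torch.masked_select: return the elements of `values` at
    # the True positions of `mask`, as a flat sequence.
    return [v for v, m in zip(values, mask) if m]

def masked_scatter(values, mask, source):
    # Mirrors torch.Tensor.masked_scatter: copy elements of `source`,
    # in order, into the True positions of `mask`; other positions
    # keep their original values.
    out = list(values)
    src = iter(source)
    for i, m in enumerate(mask):
        if m:
            out[i] = next(src)
    return out

vals = [1 + 2j, 3 + 0j, 5 - 1j, 7 + 7j]   # complex inputs, as enabled by this PR
mask = [True, False, True, False]
print(masked_select(vals, mask))              # [(1+2j), (5-1j)]
print(masked_scatter(vals, mask, [9j, 4j]))   # [9j, (3+0j), 4j, (7+7j)]
```

The real ops behave the same way elementwise; the PR's work is in enabling the complex (and CPU half) dtype dispatch and the corresponding autograd formulas.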

facebook-github-bot (Contributor) commented Jan 28, 2021:

💊 CI failures summary and remediations

As of commit 8ac029d (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)

ci.pytorch.org: 1 failed



@kshitij12345 kshitij12345 marked this pull request as ready for review January 28, 2021 15:08
codecov bot commented Jan 28, 2021:

Codecov Report

Merging #51281 (8ac029d) into master (392abde) will decrease coverage by 0.01%.
The diff coverage is 100.00%.

@@            Coverage Diff             @@
##           master   #51281      +/-   ##
==========================================
- Coverage   80.84%   80.84%   -0.01%     
==========================================
  Files        1931     1931              
  Lines      210884   210890       +6     
==========================================
- Hits       170496   170493       -3     
- Misses      40388    40397       +9     

sample_inputs_func=sample_inputs_masked_scatter,
skips=(
# _th_masked_fill_bool_ not supported for Complex Types.
SkipInfo('TestGradients', 'test_fn_grad',
Contributor:
you could instead just set test_complex_grad=False

Collaborator (author):
Actually, masked_fill_ is ported for CPU and works there.

So I thought it would be better to disable only the CUDA variant instead of both.

Contributor:
ohh I see

@@ -89,6 +89,7 @@
'reflection_pad1d_backward', 'reflection_pad2d_backward',
'replication_pad1d', 'replication_pad2d', 'replication_pad3d',
'replication_pad1d_backward', 'replication_pad2d_backward', 'replication_pad3d_backward',
'masked_scatter', 'masked_select'
Contributor:
we shouldn't enable masked_scatter here since it depends on _th_masked_fill_bool_, which is not supported for complex

@@ -4499,12 +4499,6 @@ def test_masked_scatter(self, device, dtype):
# in a separate test
return

# TODO: update test when masked scatter is supported for complex
# and cpu supports half
if (dt == torch.half and self.device_type == 'cpu') or dt.is_complex:
Contributor:
you probably still wanna retain this for if (dt == torch.half and self.device_type == 'cpu') right?

Contributor:
oh nvm you also enable masked_scatter for torch.half on CPU

Collaborator (author):
Oh, I forgot to mention it in the description, but I have also added half support on CPU.

Collaborator (author):
Updated the description.

anjali411 (Contributor) left a comment:
looks good to me after masked_scatter backward is disabled for complex

Edit: masked_fill_ is ported on CPU, so the backward works only on CPU, so it's fine to enable complex autograd
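The coupling the reviewers are discussing runs in both directions: the backward of masked_select scatters the incoming gradient back into the masked positions of a zero tensor, which is essentially a masked_scatter (hence the dependence on masked_fill/masked_scatter support on the same device and dtype). A hedged plain-Python sketch of that vector-Jacobian product, using flat lists rather than tensors:

```python
def masked_select_backward(grad_out, mask):
    # VJP of masked_select: route each element of `grad_out`, in order,
    # back to the True positions of `mask`; everything else gets zero.
    # This is masked_scatter of grad_out into a zero-filled input.
    grad_in = [0j] * len(mask)
    src = iter(grad_out)
    for i, m in enumerate(mask):
        if m:
            grad_in[i] = next(src)
    return grad_in

mask = [True, False, True]
print(masked_select_backward([1 + 1j, 2 - 3j], mask))  # [(1+1j), 0j, (2-3j)]
```

So once complex masked_scatter/masked_fill work on a device, the complex backward of masked_select on that device comes along almost for free, which matches the reviewer's conclusion above.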

facebook-github-bot (Contributor) left a comment:
@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

facebook-github-bot (Contributor):
@anjali411 merged this pull request in a88e1d3.


4 participants