[complex] Complex support for masked_scatter and autograd support for masked_scatter and masked_select #51281
Conversation
💊 CI failures summary (Dr. CI, as of commit 8ac029d): ci.pytorch.org: 1 failed.
Codecov Report
@@ Coverage Diff @@
## master #51281 +/- ##
==========================================
- Coverage 80.84% 80.84% -0.01%
==========================================
Files 1931 1931
Lines 210884 210890 +6
==========================================
- Hits 170496 170493 -3
- Misses 40388 40397 +9
sample_inputs_func=sample_inputs_masked_scatter,
skips=(
    # _th_masked_fill_bool_ not supported for Complex Types.
    SkipInfo('TestGradients', 'test_fn_grad',
you could instead just set test_complex_grad=False
Actually, masked_fill_ is ported for CPU and it works.
So I thought it would be better to disable only the CUDA variant instead of disabling both of them.
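One way to scope the skip to the CUDA variant only is to constrain the SkipInfo entry shown in the diff above. This is a sketch, not the merged code; the `device_type` and `dtypes` keyword names are assumptions based on PyTorch's OpInfo skip mechanism:

```python
# Hypothetical sketch: restrict the gradient-test skip to CUDA only,
# so the (working) CPU complex gradients stay tested.
skips=(
    # _th_masked_fill_bool_ not supported for Complex Types on CUDA.
    SkipInfo('TestGradients', 'test_fn_grad',
             device_type='cuda',
             dtypes=[torch.cfloat, torch.cdouble]),
),
```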
ohh I see
@@ -89,6 +89,7 @@
'reflection_pad1d_backward', 'reflection_pad2d_backward',
'replication_pad1d', 'replication_pad2d', 'replication_pad3d',
'replication_pad1d_backward', 'replication_pad2d_backward', 'replication_pad3d_backward',
'masked_scatter', 'masked_select'
we shouldn't enable masked_scatter
since it's dependent on _th_masked_fill_bool_
which is not supported for complex
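For readers following along: masked_scatter copies elements of a source tensor into the destination at positions where the mask is True, consuming the source in order. A minimal pure-Python sketch of the 1-D semantics (illustrative only; the real implementation is the ATen/TH kernel discussed here):

```python
def masked_scatter(dest, mask, source):
    """Copy consecutive elements of `source` into `dest` wherever
    `mask` is True; other positions keep their original value."""
    it = iter(source)
    return [next(it) if m else d for d, m in zip(dest, mask)]

# positions 0 and 2 are masked, so they receive 7 and 8 in order
print(masked_scatter([0, 0, 0, 0], [True, False, True, False], [7, 8]))  # → [7, 0, 8, 0]
```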
@@ -4499,12 +4499,6 @@ def test_masked_scatter(self, device, dtype):
    # in a separate test
    return

# TODO: update test when masked scatter is supported for complex
# and cpu supports half
if (dt == torch.half and self.device_type == 'cpu') or dt.is_complex:
you probably still wanna retain this for if (dt == torch.half and self.device_type == 'cpu')
right?
oh nvm you also enable masked_scatter
for torch.half
on CPU
Oh, I forgot to add it in the description, but I have also added support for half on CPU.
Updated the description.
looks good to me
edit: masked_scatter backward is disabled for complex. masked_fill_ is ported on CPU, so the backward works on CPU only, so it's fine to enable complex autograd.
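To unpack why masked_fill_ matters for autograd here: the backward of masked_scatter sends zero gradient to the masked positions of self (a masked_fill of the incoming gradient with 0) and routes the gradient at the masked positions to source (a masked_select). A hypothetical 1-D sketch of that gradient routing:

```python
def masked_scatter_backward(grad_out, mask, n_source):
    """Sketch of the two gradients produced by masked_scatter's backward."""
    # grad wrt self: masked positions were overwritten, so no gradient flows there
    grad_self = [0 if m else g for g, m in zip(grad_out, mask)]
    # grad wrt source: the gradient at masked positions, in consumption order
    grad_source = [g for g, m in zip(grad_out, mask) if m][:n_source]
    return grad_self, grad_source
```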
@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@anjali411 merged this pull request in a88e1d3.
Reference: #33152
Changes:
- Complex support for masked_scatter and autograd support for masked_scatter and masked_select
- Half support for masked_scatter on CPU
Note: Complex support for masked_scatter on CUDA is disabled as it depends on masked_fill, which is yet to be ported to ATen.