
Conversation

@drisspg (Contributor) commented Apr 15, 2022

Fixes one of the tests related to issue #75680.
Removes the skip for the composite compliance test_backward and updates the implementation of binary_cross_entropy_with_logits so that the correct in-place or out-of-place operators are used depending on whether tensor subclasses are present.
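For illustration, here is a rough Python rendering of that pattern; the real change lives in the C++ implementation, and apply_weight plus the type(...) check are illustrative stand-ins rather than the actual helper used in ATen:

```python
import torch

def apply_weight(loss: torch.Tensor, weight: torch.Tensor) -> torch.Tensor:
    # Only mutate in place when no tensor subclass is involved; otherwise
    # fall back to the out-of-place op so the subclass can take part in
    # dispatch. The type(...) check stands in for ATen's subclass check.
    if type(weight) is not torch.Tensor:
        return loss.mul(weight)
    return loss.mul_(weight)
```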

@facebook-github-bot (Contributor) commented Apr 15, 2022

💊 CI failures summary and remediations

As of commit a50229c (more details on the Dr. CI page):

  • 1/1 failures introduced in this PR

🕵️‍♀️ 1 failure not recognized by patterns:

The following CI failures may be due to changes from the PR
Job: GitHub Actions pull / linux-bionic-rocm5.0-py3.7 / test (default, 2, 2, linux.rocm.gpu); Step: Unknown

This comment was automatically generated by Dr. CI.

@albanD (Collaborator) left a comment

It is a bit worrying that the test doesn't catch the fact that weights are ignored here :(

@drisspg drisspg requested review from mruberry and ngimel as code owners April 18, 2022 21:36
@drisspg drisspg force-pushed the Update_binary_cross_entropy_with_logits branch from 770a730 to 79d7bf2 Compare April 18, 2022 22:27
@ngimel (Collaborator) commented Apr 19, 2022

binary_cross_entropy_with_logits is not a composite implicit autograd operation, so why is it failing the test? Also, it doesn't look like it's calling resize_; all it does is call some in-place ops on an intermediate tensor. Is that not allowed for composite ops?

@albanD (Collaborator) commented Apr 19, 2022

binary_cross_entropy_with_logits is not a composite implicit autograd operation, so why is it failing the test?

That is a very good question! cc @zou3519 ?

all it does is calling some inplace ops on an intermediate tensor

It depends on which ones. You can only do in-place ops with Tensors that are guaranteed to be of the same "type".
For example, (input + 1).mul_(target) would fail if input is a plain Tensor (and thus input + 1 will also be a plain Tensor) while target is a subclass.
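
A minimal runnable sketch of that failure mode, using a trivial stand-in subclass (the compliance tests use their own wrapper subclass, which errors here instead of silently dropping the subclass):

```python
import torch

class MyTensor(torch.Tensor):  # stand-in subclass for illustration
    pass

input = torch.rand(3)                          # plain Tensor
target = torch.rand(3).as_subclass(MyTensor)   # subclass

# Out of place: the result can carry the subclass type.
print(type(input.mul(target)))   # MyTensor

# In place: (input + 1) is a plain Tensor, so the subclass information
# has nowhere to go; here it is silently dropped, and a wrapper subclass
# would raise instead.
tmp = input + 1
tmp.mul_(target)
print(type(tmp))                 # still torch.Tensor
```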

@zou3519 (Contributor) commented Apr 19, 2022

binary_cross_entropy_with_logits is not a composit implicit autograd operation, so why is it failing the test?

That is a very good question! cc @zou3519 ?

Composite compliance applies to CompositeImplicitAutograd PyTorch operations OR a function (in Python or C++) that calls PyTorch operations (a "composite").

binary_cross_entropy_with_logits's backward formula is not a PyTorch operator -- it's a C++ function that calls PyTorch operations. The backward formula is the thing that is not composite compliant.
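
Loosely, the check amounts to re-running the composite with its tensor inputs swapped for a subclass and making sure nothing inside it breaks or strips the subclass away. A toy sketch of that idea (not the real TestCompositeCompliance machinery; Wrapper is a stand-in for its wrapper tensor):

```python
import torch
import torch.nn.functional as F

class Wrapper(torch.Tensor):  # stand-in for the test suite's wrapper tensor
    pass

logits = torch.rand(3).as_subclass(Wrapper)
target = torch.rand(3).as_subclass(Wrapper)

# If everything binary_cross_entropy_with_logits (and its backward formula)
# does is subclass-friendly, the result should still be a Wrapper.
loss = F.binary_cross_entropy_with_logits(logits, target)
print(type(loss))
```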

@drisspg (Contributor, Author) commented Apr 19, 2022

binary_cross_entropy_with_logits is not a composite implicit autograd operation, so why is it failing the test?

That is a very good question! cc @zou3519 ?

Composite compliance applies to CompositeImplicitAutograd PyTorch operations OR a function (in Python or C++) that calls PyTorch operations (a "composite").

binary_cross_entropy_with_logits's backward formula is not a PyTorch operator -- it's a C++ function that calls PyTorch operations. The backward formula is the thing that is not composite compliant.

Should I change the forward formula back to what it was (mostly in-place operations that aren't switched based on subclass tensors) and keep the backward update?

@ngimel (Collaborator) commented Apr 19, 2022

@zou3519 So the fixes need to exist only in binary_cross_entropy_with_logits_backward, not binary_cross_entropy_with_logits, at least for the purposes of the test? I understand Alban's concern that, even for an explicit autograd op, subclassing won't be propagated with in-place ops.
Should a note about in-place ops and subclassing be added somewhere? Currently the README only warns about resizing in CompositeImplicitAutograd ops.

@drisspg (Contributor, Author) commented Apr 20, 2022

It is a bit worrying that the test doesn't catch the fact that weights are ignored here :(

This got me thinking, and I found that I had written a poor sampling function. I think I have corrected it, which allowed me to remove a few more skips.

@drisspg drisspg requested a review from albanD April 20, 2022 15:18
@drisspg drisspg force-pushed the Update_binary_cross_entropy_with_logits branch from f4dcd26 to 2204344 Compare April 21, 2022 18:19
@zou3519 (Contributor) commented Apr 22, 2022

@zou3519 So the fixes need to exist only in binary_cross_entropy_with_logits_backward, not binary_cross_entropy_with_logits, at least for the purposes of the test? I understand Alban's concern that, even for an explicit autograd op, subclassing won't be propagated with in-place ops.

Yes, sorry for the delayed reply.

Should a note about in-place ops and subclassing be added somewhere? Currently the README only warns about resizing in CompositeImplicitAutograd ops.

Yes... I thought it was there, but apparently it's not.

@drisspg drisspg force-pushed the Update_binary_cross_entropy_with_logits branch from 2204344 to 92b0386 Compare April 22, 2022 22:28
@zou3519 zou3519 self-requested a review April 25, 2022 18:55
@zou3519 (Contributor) left a comment

The fix for composite compliance LGTM. Since we're modifying the OpInfo, I have some questions about the semantics of the operator.

@drisspg drisspg force-pushed the Update_binary_cross_entropy_with_logits branch from e450519 to febb26f Compare April 25, 2022 21:12
@drisspg (Contributor, Author) commented Apr 25, 2022

Just rebased onto master, and this skip:

DecorateInfo(
    unittest.expectedFailure,
    'TestCompositeCompliance',
    'test_forward_ad'),

is now failing for performing in-place ops on a tensor subclass. I am going to add this skip back in and not update the forward pass; see the conversation above.

@zou3519 (Contributor) left a comment

Thank you!

@drisspg (Contributor, Author) commented Apr 26, 2022

@pytorchbot merge this please

@github-actions (Contributor)

Hey @drisspg.
You've committed this PR, but it does not have both a 'release notes: ...' and 'topics: ...' label. Please add one of each to the PR. The 'release notes: ...' label should represent the part of PyTorch that this PR changes (fx, autograd, distributed, etc) and the 'topics: ...' label should represent the kind of PR it is (not user facing, new feature, bug fix, perf improvement, etc). The list of valid labels can be found here for the 'release notes: ...' and here for the 'topics: ...'.
For changes that are 'topic: not user facing' there is no need for a release notes label.

@drisspg drisspg added the 'release notes: autograd' and 'topic: not user facing' labels Apr 27, 2022
facebook-github-bot pushed a commit that referenced this pull request Apr 27, 2022
Summary:
Fixes one of the tests related to issue #75680.
Removes the skip for the composite compliance test_backward and updates the implementation of binary_cross_entropy_with_logits so that the correct in-place or out-of-place operators are used depending on whether tensor subclasses are present.

Pull Request resolved: #75929
Approved by: https://github.com/zou3519

Test Plan: contbuild & OSS CI, see https://hud.pytorch.org/commit/pytorch/pytorch/368430036eefd7a4a440678286b7fbc3762122b9

Reviewed By: osalpekar

Differential Revision: D35971230

Pulled By: drisspg

fbshipit-source-id: 9d6e0164ffe1f4254da0a708e0b9999d8dc16388