[composite compliance] prod #81969
Conversation
✅ No failures (0 pending) as of commit ce42074 (more details on the Dr. CI page).
💚 Looks good so far! There are no failures yet. 💚
This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.
Ping @zou3519
🔗 Helpful links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/81969
Note: Links to docs will display an error until the docs builds have been completed.
❌ 1 failure as of commit f94152e. The following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
We should not regress support for input shapes like (3, 0). To do that, I think just calling `grad * (result / input).conj()` in the zero-numel case works. Other than that, this LGTM, with some minor comments.
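For illustration, a minimal Python sketch (hypothetical; the PR's actual change is in the C++ derivative formula) of why `grad * (result / input).conj()` handles zero-numel inputs like (3, 0) without a special case:

```python
import torch

# Sketch: the generic prod backward expression also covers tensors with
# zero elements, because result / input broadcasts to input's (empty) shape.
input = torch.randn(3, 0, dtype=torch.cfloat)
result = input.prod()               # empty product -> tensor(1.+0.j)
grad = torch.ones_like(result)      # incoming gradient for the scalar output
grad_input = grad * (result / input).conj()
print(grad_input.shape)             # torch.Size([3, 0]) -- correctly shaped, no special case
```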
Thanks!
@pytorchbot merge
@pytorchbot successfully started a merge job. Check the current status here and land check progress here. |
Merge failed
Reason: The following mandatory check(s) failed (Rule …):
Dig deeper by viewing the failures on hud. If you believe this is an error, you can use the old behavior with …
Please reach out to the PyTorch DevX Team with feedback or questions!
Details for Dev Infra team: raised by workflow job …
@pytorchbot merge -f "CI failure was flaky"
@pytorchbot successfully started a merge job. Check the current status here. |
Hey @kshitij12345. |
@dagitses has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator. |
@pytorchbot revert --message='a PR this is based on got reverted, rebase and reland' --classification=ghfirst |
@pytorchbot successfully started a revert job. Check the current status here. |
Reverting PR 81969 failed
Reason: Command …
Details for Dev Infra team: raised by workflow job …
@pytorchbot revert --message='a PR this is based on got reverted, rebase and reland' --classification=ghfirst |
Might as well also include #85400 in the stack when you reland this. |
@pytorchbot successfully started a revert job. Check the current status here. |
Reverting PR 81969 failed
Reason: Command …
Details for Dev Infra team: raised by workflow job …
@dagitses is there anything to be done from my end? |
Ref: #69991
Also fixes #82644 (fix similar to #81617)
For CompositeCompliance, we can't use `item` to choose a special fast path when the Tensor is a subclass. Instead we always dispatch to the slower but safer implementation.
Pull Request resolved: #81969
Approved by: https://github.com/zou3519
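As a rough illustration of that pattern (a Python sketch with hypothetical names `_is_plain_tensor` and `clamp_to`; the actual change is in PyTorch's C++ kernels):

```python
import torch

def _is_plain_tensor(t):
    # Hypothetical helper: .item() materializes a Python scalar, which is
    # only safe for plain torch.Tensor. Subclasses (e.g. functorch/vmap
    # wrappers) may not have concrete values, so they must avoid .item().
    return type(t) is torch.Tensor

def clamp_to(x, min_t):
    # Hypothetical op showing the composite-compliance pattern described above.
    if _is_plain_tensor(min_t):
        # Fast path: safe to read the scalar out of a plain tensor.
        return x.clamp(min=min_t.item())
    # Slow but safe path: stay in tensor operations, so tensor subclasses
    # (and their __torch_dispatch__ handlers) see every op.
    return torch.maximum(x, min_t.to(x.dtype))
```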
This reverts commit a4dca98.
Ref: #69991
Also fixes #82644
For CompositeCompliance, we can't use `item` to choose a special fast path when the Tensor is a subclass. Instead we always dispatch to the slower but safer implementation.
@diff-train-skip-merge