[opinfo] nn.functional.unfold #62705
Conversation
Dr. CI (automated): As of commit ea61279, CI looks good so far; there are no failures yet.
This looks great, thanks @kshitij12345! I added a comment on how we can add a few more cases for completeness.
It's interesting, though, that we already have an OpInfo for unfold (I wasn't aware of this). I'm not sure we need to keep it (or whether it's any different), since it doesn't cover the dilation, padding, and stride args (a sketch of what such cases could look like follows the quoted snippet below):
pytorch/torch/testing/_internal/common_methods_invocations.py
Lines 7687 to 7694 in 773a8ee
```python
OpInfo('unfold',
       op=lambda x, *args: x.unfold(*args),
       dtypes=all_types_and_complex_and(torch.bool, torch.float16, torch.bfloat16),
       supports_out=False,
       supports_forward_ad=True,
       check_batched_gradgrad=False,
       skips=(
           # torch.unfold does not exist so we get a RuntimeError.
```
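For context, here is a minimal sketch of sample inputs that would cover the dilation, padding, and stride args. The function name, shapes, and settings are hypothetical illustrations (not taken from this PR); make_tensor and SampleInput are the existing test helpers, which are already in scope inside common_methods_invocations.py.

```python
import torch
from torch.testing._internal.common_utils import make_tensor
from torch.testing._internal.common_methods_invocations import SampleInput

# Hypothetical sketch: sample inputs for nn.functional.unfold exercising
# kernel_size together with dilation, padding, and stride.
def sample_inputs_nn_unfold(op_info, device, dtype, requires_grad, **kwargs):
    shapes = ((1, 1, 5, 5), (2, 3, 5, 5))
    settings = (
        # (kernel_size, dilation, padding, stride)
        ((2, 2), 1, 0, 1),  # plain sliding blocks
        ((2, 2), 2, 1, 2),  # dilated, padded, and strided
        (3, 2, 1, 2),       # int (square) kernel_size
    )
    samples = []
    for shape in shapes:
        for kernel_size, dilation, padding, stride in settings:
            t = make_tensor(shape, device, dtype, requires_grad=requires_grad)
            samples.append(SampleInput(t, args=(kernel_size, dilation, padding, stride)))
    return samples
```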
Thanks!
torch.unfold is actually different from nn.functional.unfold:
pytorch/torch/nn/functional.py
Line 4481 in 773a8ee
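To illustrate the difference (a quick sketch for this thread, not code from the PR): Tensor.unfold returns sliding windows over a single dimension and takes no dilation/padding/stride, while nn.functional.unfold extracts 2-D sliding local blocks from a batched (N, C, H, W) tensor (im2col) and supports all three.

```python
import torch
import torch.nn.functional as F

x = torch.arange(16.).reshape(1, 1, 4, 4)

# Tensor.unfold(dimension, size, step): sliding windows along ONE dimension.
# Two windows of size 2 with step 2 along the last dim -> shape (1, 1, 4, 2, 2).
w = x.unfold(-1, 2, 2)

# nn.functional.unfold(input, kernel_size, ...): im2col on (N, C, H, W) input,
# supports dilation/padding/stride -> shape (N, C * 2 * 2, L) = (1, 4, 4).
b = F.unfold(x, kernel_size=2, stride=2)
```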
LGTM, thanks @kshitij12345!
One really minor comment about the input shapes. But I'm also happy to merge this as-is and add the shapes in a follow-up; please let me know how you'd like to proceed.
@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
I wonder if we should partition OpInfos across multiple files to avoid merge conflicts. Right now I can only really merge one of these PRs a day (sometimes two if I'm lucky), because I need to rebase internally, ship it, and then wait for the change to reach viable/strict.
This is being pursued in #59871 (though targeting a particular kind of OpInfo, like nn.functional, will still lead to conflicts).
Reference: pytorch/functorch#78