[opinfo] nn.functional.unfold #62705

Closed

Conversation

@kshitij12345 (Collaborator) commented Aug 4, 2021

@facebook-github-bot (Contributor) commented Aug 4, 2021

💊 CI failures summary and remediations

As of commit ea61279 (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



@kshitij12345 kshitij12345 requested a review from krshrimali August 4, 2021 10:23
@kshitij12345 kshitij12345 marked this pull request as ready for review August 4, 2021 10:23
@krshrimali (Contributor) left a comment

This looks great, thanks @kshitij12345! I added a comment on how we can add a few more cases for completeness.

It's interesting, though, that we already have an OpInfo for unfold (I wasn't aware of this). I'm not sure we need to keep it (or whether it's any different), since it doesn't cover the dilation, padding, and stride args:

OpInfo('unfold',
       op=lambda x, *args: x.unfold(*args),
       dtypes=all_types_and_complex_and(torch.bool, torch.float16, torch.bfloat16),
       supports_out=False,
       supports_forward_ad=True,
       check_batched_gradgrad=False,
       skips=(
           # torch.unfold does not exist so we get a RuntimeError.
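(A hedged sketch, not from the PR: sample inputs for nn.functional.unfold that also exercise kernel_size, dilation, padding, and stride might look roughly like the following. The helper name and the specific cases are hypothetical, not PyTorch's actual OpInfo machinery.)

import torch
import torch.nn.functional as F

# Hypothetical sample-input generator covering the args that the
# Tensor.unfold OpInfo above does not exercise.
def sample_inputs_nn_functional_unfold():
    x = torch.randn(1, 3, 8, 8)  # (N, C, H, W)
    cases = (
        dict(kernel_size=3),                       # plain patch extraction
        dict(kernel_size=(2, 3), dilation=2),      # non-square kernel + dilation
        dict(kernel_size=3, padding=1, stride=2),  # padding + stride
    )
    return [(x, kwargs) for kwargs in cases]

for x, kwargs in sample_inputs_nn_functional_unfold():
    print(kwargs, F.unfold(x, **kwargs).shape)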

Thanks!

@kshitij12345 (Collaborator, Author) commented

torch.unfold is actually different from nn.functional.unfold (in implementation as well as API). That is why this adds a new entry.

- func: unfold(Tensor(a) self, int dimension, int size, int step) -> Tensor(a)

return torch._C._nn.im2col(input, _pair(kernel_size), _pair(dilation), _pair(padding), _pair(stride))
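(For illustration, not part of the PR: the two APIs behave quite differently on the same input.)

import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)

# Tensor.unfold(dimension, size, step): slides a 1-D window along one
# dim and appends a new trailing dim of length `size`.
print(x.unfold(2, 3, 1).shape)           # torch.Size([1, 3, 6, 8, 3])

# nn.functional.unfold (im2col): extracts sliding 2-D patches over the
# spatial dims and flattens each patch into a column; it also takes
# dilation, padding, and stride.
print(F.unfold(x, kernel_size=3).shape)  # torch.Size([1, 27, 36])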

@krshrimali (Contributor) left a comment

LGTM, thanks @kshitij12345!

@kshitij12345 kshitij12345 requested a review from zou3519 August 5, 2021 07:52
@zou3519 (Contributor) left a comment

One really minor comment about the inputs' shape. But I'm also happy to merge this as-is and add the shape in a follow-up; please let me know how you'd like to proceed.

@facebook-github-bot (Contributor) commented

@zou3519 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@zou3519 (Contributor) commented Aug 5, 2021

I wonder if we should partition OpInfos across multiple files to avoid merge conflicts. Right now I can only really merge one of these PRs a day (sometimes two if I'm lucky), because I need to rebase internally, ship it, and then wait for the change to reach viable/strict.

@kshitij12345 (Collaborator, Author) commented

This is being pursued in #59871.

(Though targeting a particular kind of OpInfo, like nn.functional, will still lead to conflicts.)

@facebook-github-bot (Contributor) commented

@zou3519 merged this pull request in 64c54f9.
