ENH Enables No-batch for *Pad1d Modules #61060
Conversation
💊 CI failures summary and remediations. As of commit a4e0fea (more details on the Dr. CI page and at hud.pytorch.org/pr/61060):
1 failure not recognized by patterns:
❄️ 1 failure tentatively classified as flaky, but reruns have not yet been triggered to confirm: pytorch_linux_bionic_cuda10_2_cudnn7_py3_9_gcc7_test1 (1/1). Step: "Set Up CI Environment After attach_workspace" (full log | diagnosis details | 🔁 rerun) ❄️
Looking good! Couple minor comments below
@@ -44,8 +44,7 @@ inline Tensor pad(const Tensor& input,
       "Padding mode \"",
       torch::enumtype::get_enum_name(mode),
       "\" doesn't take in value argument");
-  if (input.dim() == 3) {
-    TORCH_CHECK(pad.size() == 2, "3D tensors expect 2 values for padding");
+  if (pad.size() == 2 and (input.dim() == 2 or input.dim() == 3)) {
There's a TORCH_CHECK call below in the final else branch whose error message needs a slight tweak now that 2D inputs are accepted.
@thomasjpfan Whoops, I missed this before, but it looks like some Python syntax snuck in here:
- and -> &&
- or -> ||
LGTM!
@jbschlosser has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@jbschlosser merged this pull request in 48af9de.
Toward #60585.
This PR adds a `single_batch_reference_fn` that uses the single-batch implementation to check no-batch support.
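To illustrate the reference strategy described above, here is a toy, dependency-free sketch. The padding function and the exact signature of `single_batch_reference_fn` are assumptions for illustration (the real function lives in PyTorch's test infrastructure and operates on modules): the idea is to wrap an unbatched input in a batch of one, run the batched implementation, and unwrap the result.

```python
def pad1d(x, pad):
    """Toy constant (zero) padding along the last dimension, standing in
    for a *Pad1d module. Handles batched (N, C, L) and unbatched (C, L)
    inputs, mirroring the no-batch support this PR series adds."""
    left, right = pad
    if isinstance(x[0][0], list):  # batched input: recurse per sample
        return [pad1d(sample, pad) for sample in x]
    return [[0.0] * left + row + [0.0] * right for row in x]

def single_batch_reference_fn(fn, x, pad):
    """Hypothetical sketch of the reference strategy: wrap the unbatched
    input in a batch of one, apply fn, and unwrap the single result."""
    return fn([x], pad)[0]

# The unbatched path should agree with the single-batch reference.
x = [[1.0, 2.0], [3.0, 4.0]]  # unbatched (C=2, L=2)
assert pad1d(x, (1, 1)) == single_batch_reference_fn(pad1d, x, (1, 1))
```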