Move ConstantPadNd into ATen #10885
Conversation
I didn't check the math.
```cpp
std::vector<int64_t> new_shape;

for (int i = 0; i < l_diff; i++) {
  new_shape.push_back(input_sizes[i]);
}
```
```cpp
for (int i = 0; i < l_pad; i++) {
  auto pad_idx = pad.size() - ((i + 1) * 2);
  auto new_dim = input_sizes[l_diff + i] + pad[pad_idx] + pad[pad_idx + 1];
  AT_CHECK(new_dim > 0, "input is too small");
  new_shape.push_back(new_dim);
}
```
```cpp
auto output = at::empty(new_shape, input.options());
output.fill_(value);

auto c_input = input;
```
```cpp
for (int i = l_diff; i < l_inp; i++) {
  auto pad_idx = pad.size() - (i - l_diff + 1) * 2;
  if (pad[pad_idx] < 0) {
    c_input = c_input.narrow(i, -pad[pad_idx], c_input.sizes()[i] + pad[pad_idx]);
  }
  if (pad[pad_idx + 1] < 0) {
    c_input = c_input.narrow(i, 0, c_input.sizes()[i] + pad[pad_idx + 1]);
  }
}
```
```cpp
for (int i = l_diff; i < l_inp; i++) {
  auto pad_idx = pad.size() - (i - l_diff + 1) * 2;
  if (pad[pad_idx] > 0) {
    c_output = c_output.narrow(i, pad[pad_idx], c_output.sizes()[i] - pad[pad_idx]);
  }
  if (pad[pad_idx + 1] > 0) {
    c_output = c_output.narrow(i, 0, c_output.sizes()[i] - pad[pad_idx + 1]);
  }
}
c_output.copy_(c_input);
```
Finished the implementation of the backward function, so I took off the [WIP] on the PR title. Should be ready to review.
Force-pushed from a19ce9f to 46c1a32.
Getting this error in the failing builds:
I don't know how my changes could've impacted the JIT.
How can I make this as clear as possible to read, without all the comments and code?
@pytorchbot retest this please
Thanks for the patch! Sorry for the late review. The port looks great. I think we can improve on the original code a bit and have commented inline. Let me know if you have any questions!
```cpp
auto l_pad = pad.size() / 2;
auto l_diff = l_inp - l_pad;

auto grad_input = Variable(at::zeros(self.sizes(), grad.options()));
```
```cpp
auto l_pad = pad.size() / 2;
auto l_diff = l_inp - l_pad;
AT_CHECK(l_inp >= l_pad, "Padding length too large");
```
```cpp
for (int i = 0; i < l_pad; i++) {
  auto pad_idx = pad.size() - ((i + 1) * 2);
  auto new_dim = input_sizes[l_diff + i] + pad[pad_idx] + pad[pad_idx + 1];
  AT_CHECK(new_dim > 0, "input is too small");
}
```
```cpp
    cg_output = cg_output.narrow(i, 0, cg_output.size(i) - pad[pad_idx + 1]);
  }
}
cg_input.copy_(cg_output);
```
SsnL has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@wdhorton Thanks for the great work! Please address the comments when you catch a chance!
@weiyangfb @ssnl could you take another look at this?
```cpp
auto cg_input = grad_input;
for (int i = l_diff; i < l_inp; i++) {
  auto pad_idx = pad.size() - (i - l_diff + 1) * 2;
```
```cpp
auto cg_output = grad;
for (int i = l_diff; i < l_inp; i++) {
  auto pad_idx = pad.size() - (i - l_diff + 1) * 2;
```
Force-pushed from b4d82f2 to f2109c4.
SsnL is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: Addresses #9499. Completed work on the forward function, tests should be passing for that. Working on backward function now. Pull Request resolved: pytorch/pytorch#10885 Differential Revision: D9643786 Pulled By: SsnL fbshipit-source-id: 2930d6f3d2975c45b2ba7042c55773cbdc8fa3ac