
Move ConstantPadNd into ATen #10885

Closed
wdhorton wants to merge 19 commits

Conversation

@wdhorton (Contributor) commented Aug 26, 2018

Addresses #9499. Completed work on the forward function; tests should be passing for that. Working on the backward function now.

@vishwakftw (Contributor) left a comment:

I didn't check the math.

std::vector<int64_t> new_shape;
for (int i = 0; i < l_diff; i++) {
  new_shape.push_back(input_sizes[i]);
}
for (int i = 0; i < l_pad; i++) {
  auto pad_idx = pad.size() - ((i + 1) * 2);
  auto new_dim = input_sizes[l_diff + i] + pad[pad_idx] + pad[pad_idx + 1];
  AT_CHECK(new_dim > 0, "input is too small");
  new_shape.push_back(new_dim);
}

auto output = at::empty(new_shape, input.options());
output.fill_(value);
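For readers skimming the diff, the pad indexing is easiest to see on a concrete case (illustrative only, not part of the PR): with a 4-D NCHW input and pad = {left, right, top, bottom}, l_pad = 4 / 2 = 2 and l_diff = 4 - 2 = 2, so dims 0 and 1 pass through unchanged, and

  // i = 0 (height, dim 2): pad_idx = 4 - 2 = 2, new_dim = H + top + bottom
  // i = 1 (width,  dim 3): pad_idx = 4 - 4 = 0, new_dim = W + left + right

i.e. the pad vector is read two entries per dimension starting from the last dimension, matching the ordering torch.nn.functional.pad uses.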

auto c_input = input;

for (int i = l_diff; i < l_inp; i++) {
  auto pad_idx = pad.size() - (i - l_diff + 1) * 2;
  if (pad[pad_idx] < 0) {
    c_input = c_input.narrow(i, -pad[pad_idx], c_input.sizes()[i] + pad[pad_idx]);
  }
  if (pad[pad_idx + 1] < 0) {
    c_input = c_input.narrow(i, 0, c_input.sizes()[i] + pad[pad_idx + 1]);
  }
}

auto c_output = output;
for (int i = l_diff; i < l_inp; i++) {
  auto pad_idx = pad.size() - (i - l_diff + 1) * 2;
  if (pad[pad_idx] > 0) {
    c_output = c_output.narrow(i, pad[pad_idx], c_output.sizes()[i] - pad[pad_idx]);
  }
  if (pad[pad_idx + 1] > 0) {
    c_output = c_output.narrow(i, 0, c_output.sizes()[i] - pad[pad_idx + 1]);
  }
}
c_output.copy_(c_input);
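Taken together, positive pads grow the output and negative pads crop the input, all through narrow() views, with a single copy_ at the end. A minimal usage sketch (my own, assuming the op is exposed as at::constant_pad_nd, the ATen name this port targets):

  #include <ATen/ATen.h>
  #include <iostream>

  int main() {
    auto x = at::ones({2, 3});
    // Pad one column on the left and two on the right; pad one row on
    // top and crop one row at the bottom (negative entries narrow).
    auto y = at::constant_pad_nd(x, {1, 2, 1, -1}, 0.0);
    std::cout << y.sizes() << std::endl;  // prints [2, 6]
    return 0;
  }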


@wdhorton changed the title from [WIP] Move ConstantPadNd into ATen to Move ConstantPadNd into ATen on Aug 27, 2018

@wdhorton (Contributor, Author) commented:

Finished the implementation of the backward function, so I took the [WIP] off the PR title. Should be ready to review.

@wdhorton (Contributor, Author) commented:

Getting this error in the failing builds:

00:17:15 ======================================================================
00:17:15 ERROR: test_literal (__main__.TestScript)
00:17:15 ----------------------------------------------------------------------
00:17:15 Traceback (most recent call last):
00:17:15   File "test_jit.py", line 2341, in test_literal
00:17:15     self.checkScript(func3, (a.item(), b.item()), optimize=True)
00:17:15   File "test_jit.py", line 2019, in checkScript
00:17:15     self.checkScript(source, inputs, optimize, outputs, script.__name__, capture_output, frames_up=2)
00:17:15   File "test_jit.py", line 2029, in checkScript
00:17:15     outputs_ge = ge(*inputs)
00:17:15 RuntimeError: Unsupported value kind: Tuple

I don't know how my changes could've impacted the JIT.

@Justin4411 commented Aug 28, 2018 via email:

How can I make this as clear as possible to read, without all the comments and code?

@vishwakftw (Contributor) commented:

@pytorchbot retest this please

@vishwakftw (Contributor) commented:

@wdhorton I think #10923 has a patch for this bug, which is why I restarted the tests.

@ssnl (Collaborator) left a comment:

Thanks for the patch! Sorry for the late review. The port looks great. I think we can improve on the original code a bit; I've commented inline. Let me know if you have any questions!

auto l_pad = pad.size() / 2;
auto l_diff = l_inp - l_pad;
AT_CHECK(l_inp >= l_pad, "Padding length too large");

auto grad_input = Variable(at::zeros(self.sizes(), grad.options()));

for (int i = 0; i < l_pad; i++) {
  auto pad_idx = pad.size() - ((i + 1) * 2);
  auto new_dim = input_sizes[l_diff + i] + pad[pad_idx] + pad[pad_idx + 1];
  AT_CHECK(new_dim > 0, "input is too small");
  new_shape.push_back(new_dim);
}

    cg_output = cg_output.narrow(i, 0, cg_output.size(i) - pad[pad_idx + 1]);
  }
}
cg_input.copy_(cg_output);
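The narrow-then-copy_ pattern above is the entire backward pass. A tiny standalone illustration (my own, not from the PR) of what it computes for a 1-D input of size 4 padded with {1, 2}:

  #include <ATen/ATen.h>

  int main() {
    // Upstream gradient has the padded length 1 + 4 + 2 = 7.
    auto grad = at::arange(7, at::kFloat);
    // The input gradient is the middle slice; the gradient of the two
    // pad regions is simply dropped, which is what narrowing cg_output
    // accomplishes in the code above.
    auto grad_input = grad.narrow(0, /*start=*/1, /*length=*/4);
    // grad_input is now [1, 2, 3, 4].
    return 0;
  }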


@facebook-github-bot (Contributor) left a comment:

SsnL has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@weiyangfb (Contributor) commented:

@wdhorton Thanks for the great work! Please address the comments when you get a chance!

@wdhorton (Contributor, Author) commented:

@weiyangfb @ssnl could you take another look at this?


auto cg_input = grad_input;
for (int i = l_diff; i < l_inp; i++) {
  auto pad_idx = pad.size() - (i - l_diff + 1) * 2;

auto cg_output = grad;
for (int i = l_diff; i < l_inp; i++) {
  auto pad_idx = pad.size() - (i - l_diff + 1) * 2;
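Assembled from the hunks above, the backward mirrors the forward narrowing with the inequalities flipped. A minimal sketch of the whole function (my reconstruction; the wrapper name, exact signature, and IntArrayRef spelling are assumptions, not necessarily the landed code):

  #include <ATen/ATen.h>

  at::Tensor constant_pad_nd_backward_sketch(
      const at::Tensor& grad, const at::Tensor& self, at::IntArrayRef pad) {
    int64_t l_inp = self.dim();
    int64_t l_pad = pad.size() / 2;
    int64_t l_diff = l_inp - l_pad;

    auto grad_input = at::zeros(self.sizes(), grad.options());
    auto cg_input = grad_input;  // narrowed views below write into grad_input
    auto cg_output = grad;
    for (int64_t i = l_diff; i < l_inp; i++) {
      int64_t pad_idx = static_cast<int64_t>(pad.size()) - (i - l_diff + 1) * 2;
      // Negative pads cropped the input in forward, so only the interior
      // of grad_input receives gradient: narrow cg_input.
      if (pad[pad_idx] < 0) {
        cg_input = cg_input.narrow(i, -pad[pad_idx],
                                   cg_input.size(i) + pad[pad_idx]);
      }
      if (pad[pad_idx + 1] < 0) {
        cg_input = cg_input.narrow(i, 0, cg_input.size(i) + pad[pad_idx + 1]);
      }
      // Positive pads added constant regions whose gradient is discarded:
      // narrow cg_output.
      if (pad[pad_idx] > 0) {
        cg_output = cg_output.narrow(i, pad[pad_idx],
                                     cg_output.size(i) - pad[pad_idx]);
      }
      if (pad[pad_idx + 1] > 0) {
        cg_output = cg_output.narrow(i, 0, cg_output.size(i) - pad[pad_idx + 1]);
      }
    }
    cg_input.copy_(cg_output);
    return grad_input;
  }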

@facebook-github-bot (Contributor) left a comment:

SsnL has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@ssnl ssnl closed this Oct 26, 2018
@ssnl ssnl reopened this Oct 26, 2018
@facebook-github-bot (Contributor) left a comment:

SsnL is landing this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

zdevito pushed a commit to zdevito/ATen that referenced this pull request Oct 26, 2018
Summary:
Addresses #9499. Completed work on the forward function; tests should be passing for that. Working on the backward function now.
Pull Request resolved: pytorch/pytorch#10885

Differential Revision: D9643786

Pulled By: SsnL

fbshipit-source-id: 2930d6f3d2975c45b2ba7042c55773cbdc8fa3ac