
Atrous (Dilated) Convolution Implementation #1390

Merged
merged 6 commits into from May 18, 2018

Conversation

4 participants
@ShikharJ
Member

ShikharJ commented May 11, 2018

No description provided.

@ShikharJ ShikharJ changed the title from [WIP] Atrous (Diated) Convolution Implementation to [WIP] Atrous (Dilated) Convolution Implementation May 11, 2018

@ShikharJ ShikharJ changed the title from [WIP] Atrous (Dilated) Convolution Implementation to Atrous (Dilated) Convolution Implementation May 12, 2018

@rcurtin

I took a quick look and have some high-level comments, but I didn't check the code closely for correctness.

/**
* Implementation of the Atrous Convolution class. The Atrous Convolution
* class represents a single layer of a neural network.


@rcurtin

rcurtin May 14, 2018

Member

Would it be possible to add a little bit about what atrous convolution is and why someone would want it here?

outputTemp.reshape(outputTemp.n_elem, 1, 1);
output = std::move(outputTemp.slice(0));
std::cout << output << std::endl;


@rcurtin

rcurtin May 14, 2018

Member

Looks like some extra debugging output here that we should probably remove.

outputHeight = outputTemp.n_cols;
outputTemp.reshape(outputTemp.n_elem, 1, 1);
output = std::move(outputTemp.slice(0));


@rcurtin

rcurtin May 14, 2018

Member

If the idea of setting outputTemp to point to the same memory as output isn't feasible, I think this is a fine way to do it.

@ShikharJ


Member

ShikharJ commented May 14, 2018

This is ready for a review @zoq @rcurtin.

@ShikharJ


Member

ShikharJ commented May 16, 2018

@rcurtin @zoq Sorry for being impatient, but the changes here are required for debugging the GAN PR. Could we get this merged ASAP?

@zoq

I'll have to check the test case (will do that later today), but this looks good; I just made some minor comments about the style.

size_t outputCols = (input.n_cols - 1) * dH + 2 * (filter.n_cols - 1)
* dilationH + 1;
for (size_t i = 0; i < dW; i++){


@zoq

zoq May 16, 2018

Member

Pedantic style nitpick: can you put the opening { on a new line?

for (size_t j = 0; j < output.n_slices; j++)
{
reducedMat = output.slice(j);
if (dilationH != 1){


@zoq

zoq May 16, 2018

Member

See the comment above; also, maybe > 1 is more descriptive here.

@zoq


Member

zoq commented May 16, 2018

I guess an easy option would be to copy the files into your local repo without committing them; once this is merged, you could rebase the branch against master and use that.

@zoq

zoq approved these changes May 16, 2018

This looks good to me, no more comments from my side; the test is stable as well.

@rcurtin

Sorry for the slightly slow response; looks good to me. 👍

@zoq zoq merged commit 24d564d into mlpack:master May 18, 2018

5 checks passed

Memory Checks
Static Code Analysis Checks: Build finished.
Style Checks: Build finished.
continuous-integration/appveyor/pr: AppVeyor build succeeded
continuous-integration/travis-ci/pr: The Travis CI build passed
@zoq


Member

zoq commented May 18, 2018

Thanks for another really great contribution.

@ShikharJ ShikharJ deleted the ShikharJ:AtrousConv branch May 18, 2018
