
Why is there a pooling layer with pool_size=(1,1) before the upsampling layer? #5

Closed
ssiaw12345 opened this issue Oct 2, 2016 · 3 comments

Comments

@ssiaw12345

ssiaw12345 commented Oct 2, 2016

As the title says, there is a pooling layer with pool_size=(1,1) before the upsampling layer, which I don't think makes sense. See, for example, face.model.build_model, line 65.

@somewacko
Owner

Hmm, I think that when I first implemented it I had it in my head that deconv was basically the reverse of conv/pooling, which is why that's there. It's mostly harmless, although I wonder if it's unnecessarily eating up GPU memory, since it would just make a copy of its input.
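To illustrate the point being discussed: a max-pooling layer with pool_size=(1,1) and matching stride is an identity operation, since every pooling window contains exactly one element, yet a framework will still allocate a fresh output tensor for it. Below is a minimal NumPy sketch (not code from this repo; `max_pool2d` is a hypothetical helper written for illustration):

```python
import numpy as np

def max_pool2d(x, pool_size):
    """Naive 2D max pooling with stride equal to pool_size, no padding."""
    ph, pw = pool_size
    h, w = x.shape
    out = np.empty((h // ph, w // pw), dtype=x.dtype)
    for i in range(h // ph):
        for j in range(w // pw):
            out[i, j] = x[i * ph:(i + 1) * ph, j * pw:(j + 1) * pw].max()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)

# pool_size=(1,1): each window holds a single element, so the output is
# an exact copy of the input -- an identity op that still allocates a
# new array (the extra memory cost mentioned above).
assert np.array_equal(max_pool2d(x, (1, 1)), x)

# pool_size=(2,2) actually downsamples.
assert max_pool2d(x, (2, 2)).shape == (2, 2)
```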

@ssiaw12345
Author

Thank you for your prompt reply~

@somewacko
Owner

Yeah, thanks for pointing this out! I just removed it and am able to train with larger batch sizes now, so I think it was just eating memory. My bad~
