ksize in max_pool #4746
Comments
FYI, this was also asked about in #1957. We welcome contributions!
#1957 seems completely unrelated. Am I missing something?
I'm sorry, I meant #1967. Indeed, #1957 is completely unrelated.
I want to work on this issue.
I found something useful for the development.
It may be too late for @guoguo12's assistance, but if anyone else wants to jump on this I'm happy to answer questions. The solution is to make a https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/ops/nn_ops.cc#L2013
(You mean @guotong1988.)
Whoops, sorry about that.
Created a PR #9514 to try to address this issue. Would appreciate any review or comments.
@ayushchd I guess it's a bit late, but there is a workaround if you need to do your max_pool over a single dimension. In that case you don't need a kernel size or strides, so you can use reduce_max directly:
pool = tf.reduce_max(activation, axis=1, keep_dims=True) |
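To illustrate why this sidesteps the ksize restriction, here is a minimal NumPy analogue (NumPy stands in for TensorFlow here, and the array names and shapes are hypothetical; note that TF's `keep_dims` argument was later renamed `keepdims`): taking the max over the length axis with `keepdims` behaves like a global max pool whose window always matches the input length, however long that is.

```python
import numpy as np

def global_max_pool(activation, axis=1):
    """Max over one axis, keeping it as size 1 -- analogous to
    tf.reduce_max(activation, axis=1, keep_dims=True)."""
    return activation.max(axis=axis, keepdims=True)

# Two inputs with different sequence lengths, each a batch of 1:
short = np.array([[[1.0, 5.0], [3.0, 2.0]]])                # shape (1, 2, 2)
longer = np.array([[[1.0, 0.0], [4.0, 2.0], [0.0, 9.0]]])   # shape (1, 3, 2)

# Both produce shape (1, 1, 2) -- no window size had to be specified.
print(global_max_pool(short))   # [[[3. 5.]]]
print(global_max_pool(longer))  # [[[4. 9.]]]
```

The same call works for both lengths because no static window size is involved, which is exactly what `tf.nn.max_pool` cannot offer when `ksize` must track the input.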
I am trying to build a convolution (followed by max_pool) for variable-length input. Since all inputs within a batch must have the same length, I set the batch size to 1. However, ksize in tf.nn.max_pool is a static list attribute that must be fixed when the graph is built, which prevents any neural network with variable input size from applying max pooling over the full input. Is there a workaround for this? Should ksize be an input tensor instead?
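To make the constraint concrete, here is a minimal NumPy sketch of 1-D max pooling with an explicit window size (the function name and shapes are hypothetical; the window parameter plays the role of max_pool's static ksize): pooling over the *whole* input requires the window to equal the input length, which differs per example when lengths vary, so a value baked in at graph-construction time cannot express it.

```python
import numpy as np

def max_pool_1d(x, ksize, stride):
    """Fixed-window 1-D max pooling; ksize is a stand-in for
    tf.nn.max_pool's static ksize attribute."""
    out = []
    for start in range(0, len(x) - ksize + 1, stride):
        out.append(max(x[start:start + ksize]))
    return np.array(out)

x = np.array([1.0, 4.0, 2.0, 7.0, 3.0])
print(max_pool_1d(x, ksize=2, stride=1))        # [4. 4. 7. 7.]

# Global pooling needs ksize == len(x), which changes per input:
print(max_pool_1d(x, ksize=len(x), stride=1))   # [7.]
```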