[BUG] Default MaxPoolingOp/AvgPoolingOp only supports NHWC #15364

Closed
LucasMahieu opened this issue Dec 14, 2017 · 17 comments

LucasMahieu commented Dec 14, 2017

System information

  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux CentOS 7
  • TensorFlow installed from (source or binary): pip install tensorflow with virtualEnv
  • TensorFlow version (use command below): 1.4.0
  • Python version: 2.7.5
  • Exact command to reproduce:
import numpy as np
import tensorflow as tf

a = tf.nn.max_pool(np.random.rand(1, 1, 10, 10), [1, 1, 2, 2], [1, 1, 1, 1], 'VALID', data_format='NCHW')
sess = tf.InteractiveSession()
sess.run(a)

Describe the problem

When I try to run a max pool or avg pool node with data_format='NCHW', I get an error.
This seems to be a bug, because the TF docs state:

data_format: A string. 'NHWC', 'NCHW' and 'NCHW_VECT_C' are supported.

Error logs

With max:

2017-12-14 12:40:23.250331: E tensorflow/core/common_runtime/executor.cc:643] Executor failed to create kernel. Invalid argument: Default MaxPoolingOp only supports NHWC.
[[Node: MaxPool = MaxPool[T=DT_DOUBLE, data_format="NCHW", ksize=[1, 1, 2, 2], padding="VALID", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]

With Avg:

tensorflow.python.framework.errors_impl.InvalidArgumentError: Default AvgPoolingOp only supports NHWC.
[[Node: AvgPool_1 = AvgPool[T=DT_FLOAT, data_format="NCHW", ksize=[1, 1, 2, 2], padding="VALID", strides=[1, 1, 1, 1], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]]
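
A minimal sketch of a CPU-only workaround, assuming the input really is laid out NCHW: transpose to NHWC, pool with the default data format, then transpose the result back (the tensor names below are illustrative):

import numpy as np
import tensorflow as tf

# The CPU kernels only implement NHWC, so convert NCHW -> NHWC, pool, convert back.
x_nchw = tf.constant(np.random.rand(1, 1, 10, 10), dtype=tf.float32)
x_nhwc = tf.transpose(x_nchw, [0, 2, 3, 1])        # NCHW -> NHWC
pooled = tf.nn.max_pool(x_nhwc, ksize=[1, 2, 2, 1],
                        strides=[1, 1, 1, 1], padding='VALID')
y_nchw = tf.transpose(pooled, [0, 3, 1, 2])        # NHWC -> NCHW

sess = tf.InteractiveSession()
print(sess.run(y_nchw).shape)                      # expected: (1, 1, 9, 9)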

@LucasMahieu LucasMahieu changed the title [BUG] Default MaxPoolingOp only supports NHWC [BUG] Default MaxPoolingOp/AvgPoolingOp only supports NHWC Dec 14, 2017
@LucasMahieu
Author

#2660

@andydavis1
Contributor

@zheng-xq Can you comment on this? Maybe we should update the docs to say that not all data formats are supported? Thanks...

@andydavis1 andydavis1 added the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Dec 14, 2017
@zheng-xq
Contributor

For now, many kernels only support NCHW on GPU. In the future, we are introducing a layout optimizer so models can use NHWC and still get the best performance, as if they were written for NCHW.
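
For instance, a minimal sketch of that GPU path, assuming a CUDA-enabled build and a visible GPU (the device string and shapes are illustrative):

import tensorflow as tf

# Sketch: place the pooling op on the GPU, where an NCHW kernel exists
# (the default CPU kernel only implements NHWC).
with tf.device('/gpu:0'):
    x = tf.random_uniform((1, 3, 10, 10))   # NCHW: batch, channels, H, W
    y = tf.nn.max_pool(x, ksize=[1, 1, 2, 2],
                       strides=[1, 1, 1, 1],
                       padding='VALID',
                       data_format='NCHW')

with tf.Session() as sess:
    print(sess.run(y).shape)                # expected: (1, 3, 9, 9)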

@andydavis1
Contributor

@zheng-xq Thanks.
@dr4b Can we update the documentation in nn_ops.py for the "max_pool" function?

@dr4b

dr4b commented Dec 15, 2017

@andydavis1 we could do it, or if @zheng-xq wants to submit a PR fixing the docs that would also work? You just need to update the doc string in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/nn_ops.py

@netheril96

@zheng-xq cudnn supports both NCHW and NHWC, doesn't it? Why the need for a layout optimizer for better performance of NHWC on GPU?

@zheng-xq
Contributor

Both formats are supported, with very different performance characteristics. That's why it is important to use the faster format.

@tensorflowbutler tensorflowbutler removed the stat:awaiting tensorflower Status - Awaiting response from tensorflower label Jan 23, 2018
@sorhus

sorhus commented Mar 7, 2018

Just to clear up some potential confusion around this: if you compile TensorFlow with MKL support, you can use these ops with NCHW on the CPU.

@dr4b dr4b assigned wolffg and unassigned dr4b Mar 7, 2018
@wolffg
Contributor

wolffg commented May 4, 2018

@zheng-xq has there been progress on a layout optimizer? Is NCHW now supported?

@bezero

bezero commented Jun 5, 2018

I think you are getting this error because of the code you provided: you are trying to max pool a NumPy array directly. When I ran your code I got the same error; when modified as below, it works properly.

import numpy as np
import tensorflow as tf

# Build the input as a TF tensor rather than passing a NumPy array directly.
a = tf.random_uniform((1, 3, 10, 10))
b = tf.nn.max_pool(a, ksize=[1, 1, 2, 2], strides=[1, 1, 1, 1], padding='VALID', data_format='NCHW')
sess = tf.InteractiveSession()
sess.run(tf.global_variables_initializer())
res = sess.run(b)
print(res.shape)

@KleinYuan

I think this may be tied to this TODO.

@facaiy
Member

facaiy commented Jul 10, 2018

ping @yongtang, please see the comment of @KleinYuan

@ymodak
Contributor

ymodak commented Aug 6, 2019

Closing due to staleness. Please check with the latest version of TensorFlow. Feel free to reopen if the issue still persists. Thanks!

@Al-Badri179

I tried all the aforementioned methods, but none of them overcame this bug. I still have the same issue:
InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU
[[{{node max_pooling2d_2/MaxPool}}]]

It happens when I try to execute this code in a Jupyter notebook:

# Training
hist = model.fit(X_train, y_train, batch_size=16, epochs=num_epoch, verbose=1, validation_data=(X_test, y_test))

Please, I need your support.

@Light--

Light-- commented Aug 21, 2020

Same problem here.

tf.__version__
'2.3.0'
import keras
keras.__version__
'2.4.3'

@shivani6320

I tried all the methods, but still can't solve this error. Please help me with this code:

from keras import optimizers

ada = optimizers.Adadelta(learning_rate=1.0, rho=0.95)
model.compile(optimizer=ada,
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
history = model.fit_generator(train_generator,
                              validation_data=validation_generator,
                              steps_per_epoch=100,
                              validation_steps=100,
                              epochs=10)

error:

InvalidArgumentError: Default MaxPoolingOp only supports NHWC on device type CPU
[[node functional_3/max_pooling2d_9/MaxPool (defined at :10) ]] [Op:__inference_train_function_7377]

Function call stack:
train_function
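
One likely cause of this error on a CPU-only machine is that Keras is configured with image_data_format set to 'channels_first', so pooling layers are built as NCHW, which the CPU MaxPool kernel does not implement. A minimal sketch of the usual check and fix, assuming the model and data really are channels-last (NHWC); it has to run before the model is built:

from tensorflow.keras import backend as K

# If this prints 'channels_first', pooling layers will be created as NCHW,
# which fails on CPU with "Default MaxPoolingOp only supports NHWC".
print(K.image_data_format())

# Switch to channels_last (NHWC) before constructing the model.
K.set_image_data_format('channels_last')

Equivalently, set "image_data_format": "channels_last" in ~/.keras/keras.json.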

@Shuntw6096

pip install intel-tensorflow==2.3.0
pip install keras==2.4.3

works for me, but runs slowly.
