
run detect_*.py errors: Invalid argument: NodeDef mentions attr 'T' not in Op<name=Where #14

Closed
gangyahaidao opened this issue Mar 26, 2018 · 10 comments

Comments

@gangyahaidao

When I run the command `python3 detect_single_threaded.py --source ~/Documents/test.mp4`,
I get these errors:

> ====== loading HAND frozen graph into memory
2018-03-26 22:47:54.129357: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-03-26 22:47:54.278568: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-03-26 22:47:54.278914: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: 
name: GeForce MX150 major: 6 minor: 1 memoryClockRate(GHz): 1.5315
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 1.57GiB
2018-03-26 22:47:54.278933: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce MX150, pci bus id: 0000:01:00.0, compute capability: 6.1)
>  ====== Hand Inference graph loaded.
2018-03-26 22:47:54.651759: E tensorflow/core/common_runtime/executor.cc:643] Executor failed to create kernel. Invalid argument: NodeDef mentions attr 'T' not in Op<name=Where; signature=input:bool -> index:int64>; NodeDef: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where = Where[T=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where/Cast). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
	 [[Node: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where = Where[T=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where/Cast)]]
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1323, in _do_call
    return fn(*args)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1302, in _run_fn
    status, run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/errors_impl.py", line 473, in __exit__
    c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'T' not in Op<name=Where; signature=input:bool -> index:int64>; NodeDef: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where = Where[T=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where/Cast). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
	 [[Node: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where = Where[T=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where/Cast)]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "detect_single_threaded.py", line 52, in <module>
    image_np, detection_graph, sess)
  File "/home/rosrobot/git/handtracking/utils/detector_utils.py", line 90, in detect_objects
    feed_dict={image_tensor: image_np_expanded})
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 889, in run
    run_metadata_ptr)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1120, in _run
    feed_dict_tensor, options, run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1317, in _do_run
    options, run_metadata)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1336, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'T' not in Op<name=Where; signature=input:bool -> index:int64>; NodeDef: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where = Where[T=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where/Cast). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
	 [[Node: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where = Where[T=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where/Cast)]]

Caused by op 'Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where', defined at:
  File "detect_single_threaded.py", line 8, in <module>
    detection_graph, sess = detector_utils.load_inference_graph()
  File "/home/rosrobot/git/handtracking/utils/detector_utils.py", line 45, in load_inference_graph
    tf.import_graph_def(od_graph_def, name='')
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/importer.py", line 313, in import_graph_def
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 2956, in create_op
    op_def=op_def)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1470, in __init__
    self._traceback = self._graph._extract_stack()  # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): NodeDef mentions attr 'T' not in Op<name=Where; signature=input:bool -> index:int64>; NodeDef: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where = Where[T=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where/Cast). (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).
	 [[Node: Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where = Where[T=DT_BOOL, _device="/job:localhost/replica:0/task:0/device:GPU:0"](Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/FilterGreaterThan/Where/Cast)]]

It is hard to get this project running. Can anyone help me solve this problem?

@gangyahaidao gangyahaidao changed the title run detect_*.py errors run detect_*.py errors: Invalid argument: NodeDef mentions attr 'T' not in Op<name=Where Mar 26, 2018
@victordibia
Owner

This error looks the same as the one in issue #1.
It appears to be caused by a mismatch between TensorFlow versions.
The solution is to generate your own frozen graph from the model checkpoint I provide.

See issue #1 for more details.

-V.

@gangyahaidao
Author

gangyahaidao commented Mar 27, 2018

Thank you for your reply. I ran this command:
python3 -c 'import tensorflow as tf; print(tf.__version__)'
output: 1.4.0
Maybe it's a version mismatch; I will try to fix it and report the result.

@gangyahaidao
Author

After I reinstalled TensorFlow v1.4.0-rc0, it works fine! Thanks @victordibia.
But I get the following error at startup: CUDA_ERROR_OUT_OF_MEMORY

rosrobot@rosrobot:~/git/handtracking$ python3 detect_multi_threaded.py
{'num_hands_detect': 2, 'score_thresh': 0.2, 'im_height': 180.0, 'im_width': 320.0} Namespace(display=1, fps=1, height=200, num_hands=2, num_workers=4, queue_size=5, video_source=0, width=300)
>> loading frozen model for worker
> ====== loading HAND frozen graph into memory
>> loading frozen model for worker
> ====== loading HAND frozen graph into memory
>> loading frozen model for worker
> ====== loading HAND frozen graph into memory
>> loading frozen model for worker
> ====== loading HAND frozen graph into memory
2018-03-28 16:09:03.123319: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-03-28 16:09:03.123576: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-03-28 16:09:03.123690: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-03-28 16:09:03.124384: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: 
name: GeForce MX150 major: 6 minor: 1 memoryClockRate(GHz): 1.5315
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 1.55GiB
2018-03-28 16:09:03.124405: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce MX150, pci bus id: 0000:01:00.0, compute capability: 6.1)
2018-03-28 16:09:03.124661: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: 
name: GeForce MX150 major: 6 minor: 1 memoryClockRate(GHz): 1.5315
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 1.55GiB
2018-03-28 16:09:03.124677: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce MX150, pci bus id: 0000:01:00.0, compute capability: 6.1)
2018-03-28 16:09:03.124862: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: 
name: GeForce MX150 major: 6 minor: 1 memoryClockRate(GHz): 1.5315
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 1.55GiB
2018-03-28 16:09:03.124896: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce MX150, pci bus id: 0000:01:00.0, compute capability: 6.1)
2018-03-28 16:09:03.126570: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 1.33G (1427963904 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-03-28 16:09:03.126752: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 1.33G (1427963904 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY

My question is: how much GPU memory is required? It's obvious that my computer is too weak.
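For reference: with four workers sharing one ~2 GiB card, each TF 1.x session tries to grab almost all free GPU memory by default, so every worker after the first fails with CUDA_ERROR_OUT_OF_MEMORY. A common workaround is to cap each session's share via `per_process_gpu_memory_fraction`. The helper below is a rough sketch (the function name and the 0.9 headroom factor are illustrative, not from this repo):

```python
# Sketch: compute a per-worker cap so that num_workers TF 1.x sessions
# together stay within the free GPU memory, with some headroom.

def per_worker_fraction(free_gib, total_gib, num_workers, headroom=0.9):
    """Fraction of *total* GPU memory each worker may claim so that
    num_workers sessions together fit inside the free memory."""
    return (free_gib * headroom) / (total_gib * num_workers)

# Numbers from the log above: 1.55 GiB free of 1.95 GiB total, 4 workers.
fraction = per_worker_fraction(free_gib=1.55, total_gib=1.95, num_workers=4)
print(round(fraction, 3))  # prints 0.179

# In each worker, the cap would then be applied with the TF 1.x API:
#   config = tf.ConfigProto()
#   config.gpu_options.per_process_gpu_memory_fraction = fraction
#   sess = tf.Session(graph=detection_graph, config=config)
```

Setting `config.gpu_options.allow_growth = True` instead is another common option: it makes each session allocate memory on demand rather than up front.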

@victordibia
Owner

victordibia commented Mar 28, 2018 via email

@gangyahaidao
Author

If I only use the CPU, isn't it too slow? What FPS do you get?

@victordibia
Owner

victordibia commented Mar 29, 2018 via email

@TyrionZK

> After I reinstalled TensorFlow v1.4.0-rc0, it works fine! Thanks @victordibia.
> But I get the following error at startup: CUDA_ERROR_OUT_OF_MEMORY

`rosrobot@rosrobot:~/git/handtracking$ python3 detect_multi_threaded.py
{'num_hands_detect': 2, 'score_thresh': 0.2, 'im_height': 180.0, 'im_width': 320.0} Namespace(display=1, fps=1, height=200, num_hands=2, num_workers=4, queue_size=5, video_source=0, width=300)
>> loading frozen model for worker
> ====== loading HAND frozen graph into memory
>> loading frozen model for worker
> ====== loading HAND frozen graph into memory
>> loading frozen model for worker
> ====== loading HAND frozen graph into memory
>> loading frozen model for worker
> ====== loading HAND frozen graph into memory
2018-03-28 16:09:03.123319: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-03-28 16:09:03.123576: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-03-28 16:09:03.123690: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:892] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-03-28 16:09:03.124384: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: 
name: GeForce MX150 major: 6 minor: 1 memoryClockRate(GHz): 1.5315
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 1.55GiB
2018-03-28 16:09:03.124405: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce MX150, pci bus id: 0000:01:00.0, compute capability: 6.1)
2018-03-28 16:09:03.124661: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: 
name: GeForce MX150 major: 6 minor: 1 memoryClockRate(GHz): 1.5315
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 1.55GiB
2018-03-28 16:09:03.124677: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce MX150, pci bus id: 0000:01:00.0, compute capability: 6.1)
2018-03-28 16:09:03.124862: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Found device 0 with properties: 
name: GeForce MX150 major: 6 minor: 1 memoryClockRate(GHz): 1.5315
pciBusID: 0000:01:00.0
totalMemory: 1.95GiB freeMemory: 1.55GiB
2018-03-28 16:09:03.124896: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce MX150, pci bus id: 0000:01:00.0, compute capability: 6.1)
2018-03-28 16:09:03.126570: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 1.33G (1427963904 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2018-03-28 16:09:03.126752: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 1.33G (1427963904 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY

My question is how much GPU memory is required??It's obvious that my computer is too weak.

How do I install TensorFlow v1.4.0-rc0? I cannot find this version with conda. Thanks.
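For what it's worth, TensorFlow release candidates were published to PyPI rather than conda, so pip should be able to install this one (assuming the rc build is still hosted for your platform):

```shell
# Install a specific TensorFlow release candidate with pip (not conda).
# The GPU build matches the logs above; use plain "tensorflow" for CPU-only.
pip3 install tensorflow-gpu==1.4.0rc0

# Verify the installed version
python3 -c 'import tensorflow as tf; print(tf.__version__)'
```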

@victordibia
Owner

victordibia commented Jan 14, 2019

Hi @TyrionZK

One way to solve this error is to generate a frozen graph (from the model checkpoint I provide ) using your current version of tensorflow.

The tensorflow object detection repo has a python file for this.

You can copy it to the current directory and use it as follows

python3 export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path model-checkpoint/ssd_mobilenet_v1_pets.config \
    --trained_checkpoint_prefix model-checkpoint/model.ckpt-200002 \
    --output_directory hand_inference_graph
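Note that `export_inference_graph.py` imports the `object_detection` package, so the TensorFlow Object Detection API has to be installed first. A rough setup sketch, following the API's standard install steps (paths are illustrative):

```shell
# Clone the TensorFlow models repo, which contains the Object Detection API
git clone https://github.com/tensorflow/models.git
cd models/research

# Compile the protobuf message definitions used by object_detection
protoc object_detection/protos/*.proto --python_out=.

# Make object_detection and slim importable
export PYTHONPATH=$PYTHONPATH:$(pwd):$(pwd)/slim

# Sanity check: this should print nothing if the API is importable
python3 -c "from object_detection import exporter"
```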

@TyrionZK

TyrionZK commented Jan 17, 2019

Firstly, thanks for your reply. @victordibia

Firstly, I tried it as you said, but the error below happens. I then tried to resolve it by following guides on installing the TensorFlow Object Detection API; after a day, it still doesn't work. Can you give me some detailed instructions? I would really appreciate it.

Traceback (most recent call last):
  File "export_inference_graph.py", line 96, in <module>
    from object_detection import exporter
  File "D:\ProgramData\Anaconda3\envs\detect\lib\site-packages\object_detection-0.1-py3.6.egg\object_detection\exporter.py", line 20, in <module>
    from tensorflow.contrib.quantize.python import graph_matcher
ImportError: cannot import name 'graph_matcher'

My environment:
win7
anaconda3
python 3.6.6
tensorflow 1.4.0
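The ImportError is consistent with version skew: the `exporter.py` being imported uses `tf.contrib.quantize`, which (as far as I know) does not exist in TF 1.4, so the `object_detection` checkout is newer than the installed TensorFlow. A small version-gate sketch; the 1.8.0 minimum and both function names are assumptions, not from this repo:

```python
# Sketch: guard against running a newer object_detection exporter on an
# older TensorFlow. The minimum below is an assumption based on roughly
# when tf.contrib.quantize appeared; adjust it to match your checkout.
MIN_TF_FOR_EXPORTER = "1.8.0"  # hypothetical minimum

def version_tuple(v):
    """Turn '1.4.0rc0' or '1.8.0' into a comparable (major, minor) tuple."""
    parts = []
    for piece in v.split(".")[:2]:
        digits = ""
        for ch in piece:
            if ch.isdigit():
                digits += ch
            else:
                break  # stop at suffixes like 'rc0'
        parts.append(int(digits or 0))
    return tuple(parts)

def exporter_supported(tf_version, minimum=MIN_TF_FOR_EXPORTER):
    return version_tuple(tf_version) >= version_tuple(minimum)

print(exporter_supported("1.4.0"))   # the reporter's version -> False
print(exporter_supported("1.12.0"))  # a newer version -> True
```

In practice the fix is to make the two sides agree: either check out an `object_detection` revision contemporary with TF 1.4, or upgrade TensorFlow.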

@namheegordonkim

You can copy it to the current directory and use it as follows

No you can't! Like @TyrionZK mentions, you need to install the TensorFlow Object Detection API first. Do you have instructions for doing so?
