Unexpected exception when running the model optimizer on tiny Yolov3 #151

Closed
martin-91x opened this issue May 10, 2019 · 15 comments

@martin-91x

Hi,
I just tried using the model optimizer following the tutorial https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html.

Because I had some problems with the input shape, I ran the following command:
python mo_tf.py --input_model C:\Users\mle\Documents\OpenVino\tensorflow-yolo-v3\yolov3-tiny.pb --tensorflow_use_custom_operations_config extensions\front\tf\yolov3-tiny.json --input_shape [1,416,416,3]

[ ERROR ] -------------------------------------------------
[ ERROR ] ----------------- INTERNAL ERROR ----------------
[ ERROR ] Unexpected exception happened.
[ ERROR ] Please contact Model Optimizer developers and forward the following information:
[ ERROR ]
[ ERROR ] Traceback (most recent call last):
File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\main.py", line 312, in main
return driver(argv)
File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\main.py", line 263, in driver
is_binary=not argv.input_model_is_text)
File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\pipeline\tf.py", line 141, in tf2nx
graph_clean_up_tf(graph)
File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\middle\passes\eliminate.py", line 186, in graph_clean_up_tf
graph_clean_up(graph, ['TFCustomSubgraphCall', 'Shape'])
File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\middle\passes\eliminate.py", line 181, in graph_clean_up
add_constant_operations(graph)
File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\middle\passes\eliminate.py", line 145, in add_constant_operations
Const(graph, dict(value=node.value, shape=np.array(node.value.shape))).create_node_with_data(data_nodes=node)
File "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo\ops\op.py", line 207, in create_node_with_data
[np.array_equal(old_data_value[id], data_node.value) for id, data_node in enumerate(data_nodes)])
AssertionError

[ ERROR ] ---------------- END OF BUG REPORT --------------
[ ERROR ] -------------------------------------------------

I just noticed that there is another implementation of YOLO for OpenVINO. I will try that one out as well.

Best,
Martin

@shubha-ramani

Dear @martin-91x,
Please make sure you're using at least TensorFlow 1.11, but not TensorFlow 1.13, and that you're using the latest version of OpenVINO. The Model Optimizer currently doesn't work with TensorFlow 1.13.

Thanks,

Shubha

@martin-91x
Author

martin-91x commented May 17, 2019

Hi,

thank you for your response. I'm now using tensorflow 1.12 and the error is gone.
However, I'm stuck testing the converted model using the sample application python_samples/object_detection_demo_yolov3_async.

I tried running the model on the CPU and the GPU - and I'm getting an error in either case:

CPU

Running python3 object_detection_demo_yolov3_async -m <path_to_yolo_xml>/yolov3-tiny.xml -i ~/test.mp4 -d CPU

Resulting error:

[ INFO ] Loading network files:
	/home/apollolake/tensorflow-yolo-v3/yolov3-tiny_json/yolov3-tiny.xml
	/home/apollolake/tensorflow-yolo-v3/yolov3-tiny_json/yolov3-tiny.bin
[ ERROR ] Following layers are not supported by the plugin for specified device CPU:
 detector/yolo-v3-tiny/Conv_9/BiasAdd/YoloRegion, detector/yolo-v3-tiny/ResizeNearestNeighbor, detector/yolo-v3-tiny/Conv_12/BiasAdd/YoloRegion
[ ERROR ] Please try to specify cpu extensions library path in sample's command line parameters using -l or --cpu_extension command line argument

GPU

Running python3 object_detection_demo_yolov3_async -m <path_to_yolo_xml>/yolov3-tiny.xml -i ~/test.mp4 -d GPU

Resulting error:

[ INFO ] Loading network files:
	/home/apollolake/tensorflow-yolo-v3/yolov3-tiny_json/yolov3-tiny.xml
	/home/apollolake/tensorflow-yolo-v3/yolov3-tiny_json/yolov3-tiny.bin
Traceback (most recent call last):
  File "object_detection_demo_yolov3_async.py", line 349, in <module>
    sys.exit(main() or 0)
  File "object_detection_demo_yolov3_async.py", line 189, in main
    assert len(net.outputs) == 3, "Sample supports only YOLO V3 based triple output topologies"
AssertionError: Sample supports only YOLO V3 based triple output topologies

Setup

I'm running the entire pipeline on an Atom E3930 processor (without AVX support), and I have built the samples with -DENABLE_AVX2=OFF and -DENABLE_AVX512=OFF. A sketch of the build invocation is below.
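
For completeness, the build configuration looked roughly like this (a sketch; the source path and build directory are placeholders, the two ENABLE_* switches are the ones I actually changed):

cmake -DCMAKE_BUILD_TYPE=Release -DENABLE_AVX2=OFF -DENABLE_AVX512=OFF /path/to/dldt/inference-engine && make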

@shubha-ramani

Dearest @martin-91x,

For the CPU error, please build the Inference Engine and the samples. When you do so, you will find a cpu_extension.dll (or *.so) under dldt\inference-engine\bin\intel64\Release. Please pass that file with its full path to the -l argument when you run object_detection_demo_yolov3_async.py.
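
For example (an illustrative invocation only; adjust the library name and path to whatever your build actually produced):

python3 object_detection_demo_yolov3_async.py -m yolov3-tiny.xml -i test.mp4 -d CPU -l /path/to/dldt/inference-engine/bin/intel64/Release/lib/libcpu_extension.so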

For the GPU error, that looks like it may be a bug; it doesn't look right. According to https://docs.openvinotoolkit.org/latest/_docs_IE_DG_supported_plugins_Supported_Devices.html FP32 should work on GPU, though FP16 is preferred. I also noticed that you did not use --data_type FP16 in your mo_tf.py command above. Can you kindly rerun MO with --data_type FP16 and try again on the Intel GPU? Make sure you rename the model via the --model_name and --output_dir switches passed to mo_tf.py so that you don't clobber your FP32 IR (FP32 is the default if you don't provide a --data_type value).
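
For example (an illustrative command; the file names and output directory are placeholders):

python mo_tf.py --input_model yolov3-tiny.pb --tensorflow_use_custom_operations_config yolov3-tiny.json --input_shape [1,416,416,3] --data_type FP16 --model_name yolov3-tiny-fp16 --output_dir FP16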

Let me know how these steps work for you and please report back here.

Thanks for using OpenVINO!

Shubha

@martin-91x
Author

Hi @shubha-ramani

I've added the -l switch and now I get the same error as with using -d GPU. So I think there must be a problem with my converted IR.

So, what I've done is:

  • Clone the tensorflow-yolo-v3.git repo and download weights and coco.names.
  • Run the converter using python convert_weights_pb.py --class_names coco.names --data_format NHWC --weights_file yolov3-tiny.weights --tiny --output_graph C:\Users\mle\Documents\OpenVino\yolov3-tiny\yolov3-tiny.pb
  • Modify the *.json file to
[
  {
    "id": "TFYOLOV3",
    "match_kind": "general",
    "custom_attributes": {
      "coords": 4,
      "classes": 80,
      "jitter": 0.3,
      "ignore_thresh": 0.7,
      "truth_thresh": 1,
      "random": 1,
      "num": 6,
      "mask": [0,1,2],
      "entry_points": ["detector/yolo-v3-tiny/Reshape","detector/yolo-v3-tiny/Reshape_4"]
    }
  }
]
  • Run the model optimizer using python "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\model_optimizer\mo_tf.py" --input_model yolov3-tiny.pb --tensorflow_use_custom_operations_config yolov3-tiny.json --model_name yolov3-tiny --input_shape [1,416,416,3]
  • Run the sample on my windows machine using python "C:\Program Files (x86)\IntelSWTools\openvino\deployment_tools\inference_engine\samples\python_samples\object_detection_demo_yolov3_async\object_detection_demo_yolov3_async.py" -m yolov3-tiny.xml -i dog.jpg -d GPU
  • Run the sample on my linux device using python3 /opt/intel/openvino_2019.1.133/deployment_tools/inference_engine/samples/python_samples/object_detection_demo_yolov3_async/object_detection_demo_yolov3_async.py -m yolov3-tiny.xml -i ../dog.jpg -d CPU -l /opt/intel/openvino/inference_engine/lib/intel64/libcpu_extension_sse4.so

I've also tried using an FP16 version and running the sample with .jpg images and .mp4 videos.
When I print the expression the assertion evaluates, len(net.outputs), I get 2, which is why the assertion fails.
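
For reference, this is roughly how I check the outputs (a quick sketch using the 2019 R1 Python API; the IR paths are placeholders):

from openvino.inference_engine import IENetwork

# Load the generated IR (placeholder paths)
net = IENetwork(model="yolov3-tiny.xml", weights="yolov3-tiny.bin")

# The sample asserts len(net.outputs) == 3, but here this prints 2
print(len(net.outputs), list(net.outputs.keys()))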

I'll try to investigate this further.

Best
Martin

@shubha-ramani

shubha-ramani commented May 20, 2019

Dearest @martin-91x
When converting yolov3-tiny, did you follow these instructions exactly? I ask because I don't see any mention of convert_weights_pb.py there:
https://docs.openvinotoolkit.org/latest/_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_YOLO_From_Tensorflow.html

Also, use TensorFlow 1.11 or 1.12. TensorFlow 1.13 is not yet supported by the Model Optimizer.

Let me know how it works for you,

Thanks,

Shubha

@martin-91x
Author

I'm using TensorFlow 1.12.0 for converting the model (as you suggested after my initial post).
Regarding convert_weights_pb.py: look at the second point in the bullet list above - I converted the weights before running the MO ;).

Best,
Martin

@shubha-ramani

shubha-ramani commented May 20, 2019

Dear @martin-91x
Yes, sorry for doubting you. I will try it myself just now (tiny yolov3) on the latest 2019 R1 release. Perhaps there is an issue. Will report back here.

Thanks,

Shubha

@martin-91x
Author

Ok, thank you a lot.

@shubha-ramani

Dearest @martin-91x
You are not imagining things. I reproduced your issues on 2019 R1.1 on both CPU and GPU for FP32. I will file a bug straightaway. Sorry for the trouble! I checked, and even the object_detection_demo_yolov3_async.exe (C++ sample) doesn't work; it gives a similar error.

Thank you for being patient with us !

Sincerely,

Shubha

@martin-91x
Author

Hi @shubha-ramani,
Thank you for your investigation.
I haven't tried the C++ sample because I assume the bug is in the Model Optimizer: I think len(net.outputs) should actually be 3, since nothing is done with the net between loading it and the failing assertion.

@shubha-ramani

Dear @martin-91x
Indeed, you could be right! I have filed a bug and I'm sure the devs will fix it quickly (in time for the next release). A broken yolo_v3 sample is not tolerable! Thanks so much for your support and patience.

Shubha

@suman-19

suman-19 commented Jul 4, 2019

I'm facing similar issues.

@dexception

@shubha-ramani
Is this resolved yet?

@lazarevevgeny
Contributor

@martin-91x is this issue still relevant? Can we close it?

@andrei-kochin
Contributor

Seems to be resolved already. Closing.
