[Bug] empty model blob after model_compile #3863
Comments
@dnns92 It looks like compile_tool only supports the Myriad and FPGA plugins, which is a possible reason why. ~Luis
@dnns92, compile_tool doesn't support CPU, but CPU inference works without offline compilation: you can simply load the xml+bin files in your application as usual in OpenVINO at run time, and the model will be compiled on the fly. What is your reason for trying compile_tool for CPU specifically? Do you have an issue with the regular flow of loading the xml+bin IR in the target application?
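The regular run-time flow mentioned above can be sketched roughly as follows, assuming the OpenVINO 2020.x Python API (`openvino.inference_engine.IECore`); the file paths are placeholders:

```python
# Hedged sketch of the regular OpenVINO run-time flow: load an IR (xml+bin)
# and let the CPU plugin compile it on the fly. Paths are hypothetical.
try:
    from openvino.inference_engine import IECore
except ImportError:
    IECore = None  # OpenVINO not installed; the calls below are illustrative


def load_ir_on_cpu(xml_path, bin_path):
    """Read an IR and compile it for the CPU device at run time."""
    ie = IECore()
    net = ie.read_network(model=xml_path, weights=bin_path)
    # Compilation for CPU happens here, on the fly -- no compile_tool needed.
    return ie.load_network(network=net, device_name="CPU")


# Usage (hypothetical paths):
# exec_net = load_ir_on_cpu("model.xml", "model.bin")
# result = exec_net.infer(inputs={"data": input_tensor})
```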
The xml file reveals everything about the model, which I do not find pleasing. Pre-compiling it makes the model more obscure (at least that's what my thinking was; correct me if I'm wrong).
Offline compilation doesn't give any guarantee in that regard. Its only goal is to save time at run time by performing the time-consuming compilation offline, for devices that require it. As a side effect you lose the ability to easily extract the graph topology, as you can with the xml, but that is just an unintentional side effect: digging deeper, one can still access the topology (optimized for the specific hardware) and the binary weights. So even if offline compilation were available for CPU, the resulting blob would still represent the topology somehow, and you could find all the convolutions with their weights in it. If you want to hide the topology, you may want to encrypt the IR instead.
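Encrypting the IR at rest could look something like the sketch below, assuming the third-party `cryptography` package; the file names and helper are hypothetical, and the application would decrypt the files in memory before loading them:

```python
# Hedged sketch: encrypt the IR files at rest so the plain xml/bin never
# ship with the application. Assumes the third-party `cryptography` package;
# paths and the helper name are hypothetical.
try:
    from cryptography.fernet import Fernet
except ImportError:
    Fernet = None  # package not installed; the calls below are illustrative


def encrypt_ir(xml_path, bin_path, key):
    """Write encrypted copies of model.xml/model.bin as .enc files."""
    f = Fernet(key)
    for path in (xml_path, bin_path):
        with open(path, "rb") as src:
            token = f.encrypt(src.read())
        with open(path + ".enc", "wb") as dst:
            dst.write(token)


# Usage (hypothetical):
# key = Fernet.generate_key()     # store the key securely, not next to the model
# encrypt_ir("model.xml", "model.bin", key)
```

The key management (where the decryption key lives, how it reaches the application) is the hard part; the snippet only illustrates the mechanics.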
Closing this issue; I hope the previous responses were sufficient to help you proceed. Feel free to reopen and ask additional questions related to this topic.
System information (version)
Also tested using:
Detailed description
Problem Description:
When using compile_tool to generate a model blob as described here: https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_compile_tool_README.html, I get an empty blob object and the compile tool returns "[NOT IMPLEMENTED]".
Question:
Do I have to implement something to make the CPU compilation work? Is "CPU" not a valid identifier? What's the problem here?
System:
I tried this on my Windows machine using OpenVINO 2020.03 LTS and on my Ubuntu 18.04 LTS machine with the newest version installed via apt.
On the Ubuntu machine, the -d MYRIAD option works, but -d CPU just returns [NOT IMPLEMENTED]. However, I need the model to be compiled for CPUs.
Steps to reproduce for me (fresh installation):
Download the ONNX AlexNet model from https://github.com/onnx/models/blob/master/vision/classification/alexnet/model/bvlcalexnet-8.onnx
Use Model Optimizer to generate the xml and bin files like this: python mo_onnx.py --input_model path_to_onnx\bvlcalexnet-8.onnx --output_dir path_to_out
Run compile_tool -m /path_to_xml/file.xml -d CPU -o /path_to_out/out.blob
Scrollbacks:
After mo_onnx:
After compile_tool