[Bug] empty model blob after model_compile #3863

Closed
dnns92 opened this issue Jan 14, 2021 · 5 comments
dnns92 commented Jan 14, 2021

System information (version)
  • OpenVINO => 2020.3 LTS
  • Operating System / Platform => Windows 10 64 Bit
  • Compiler => Visual Studio 2017
  • Problem classification: Model Compilation Blob
  • Framework: ONNX
  • Model name: any as far as I can tell, tested densenet121 and alexnet

Also tested using:

  • OpenVINO => latest apt installer 2021.2
  • Operating System / Platform => Ubuntu 18.04
  • Problem classification: Model Compilation Blob
  • Framework: ONNX
  • Model name: any as far as I can tell, tested densenet121 and alexnet
Detailed description

Problem Description:

When using compile_tool to generate a model blob as described here: https://docs.openvinotoolkit.org/latest/openvino_inference_engine_tools_compile_tool_README.html, I get an empty blob file and compile_tool returns "[NOT_IMPLEMENTED]".

Question:

Do I have to implement something to make CPU compilation work? Is "CPU" not a valid device identifier? What's the problem here?

System:

I tried this on my Windows machine with OpenVINO 2020.3 LTS and on my Ubuntu 18.04 LTS machine with the newest apt package.

On the Ubuntu machine, the -d MYRIAD option works, but -d CPU just returns [NOT_IMPLEMENTED]. However, I need the model to be compiled for CPU.

Steps to reproduce (fresh installation):

Scrollbacks:

After mo_onnx:

Model Optimizer arguments:
Common parameters:
- Path to the Input Model: ...\bvlcalexnet-8.onnx
- Path for generated IR: .../some_path/...
- IR output name: bvlcalexnet-8
- Log level: ERROR
- Batch: Not specified, inherited from the model
- Input layers: Not specified, inherited from the model
- Output layers: Not specified, inherited from the model
- Input shapes: Not specified, inherited from the model
- Mean values: Not specified
- Scale values: Not specified
- Scale factor: Not specified
- Precision of IR: FP32
- Enable fusing: True
- Enable grouped convolutions fusing: True
- Move mean values to preprocess section: False
- Reverse input channels: False
ONNX specific parameters:
Model Optimizer version:

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: ...\bvlcalexnet-8.xml
[ SUCCESS ] BIN file: ...\bvlcalexnet-8.bin
[ SUCCESS ] Total execution time: 4.00 seconds.

After compile_tool

C:\Program Files (x86)\IntelSWTools\openvino_2020.3.194\deployment_tools\inference_engine\bin\intel64\Release>compile_tool.exe -m D:\path\bvlcalexnet-8.xml -d CPU -o D:\alexnet.blob
Inference Engine:
API version ............ 2.1
Build .................. 2020.3.0-3467-15f2c61a-releases/2020/3
Description ....... API
[NOT_IMPLEMENTED]
dnns92 added the bug and support_request labels on Jan 14, 2021
avitial commented Jan 14, 2021

@dnns92 it looks like compile_tool only supports the MYRIAD and FPGA plugins, which is likely why [NOT_IMPLEMENTED] is thrown. The documentation doesn't make this very clear, but let me verify that this is in fact the case.
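
For reference, a quick way to check this from the Python API; this is only a sketch, assuming the IMPORT_EXPORT_SUPPORT metric that the blob-export path is expected to report, which not every plugin implements:

```python
# Sketch: list installed devices and whether they report the
# IMPORT_EXPORT_SUPPORT metric that blob export relies on.
from openvino.inference_engine import IECore

ie = IECore()
for device in ie.available_devices:
    try:
        supported = ie.get_metric(device, "IMPORT_EXPORT_SUPPORT")
    except RuntimeError:
        supported = False  # plugin doesn't expose the metric at all
    print(device, "import/export supported:", supported)
```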

~Luis

avitial self-assigned this on Jan 14, 2021

slyalin commented Jan 15, 2021

> However, I need the model to be compiled for CPU.

@dnns92, compile_tool doesn't support CPU, but CPU inference works without offline compilation: you can just load the xml+bin files in your application at run time, as usual in OpenVINO, and the model will be compiled on the fly. What is your reason for trying compile_tool for CPU specifically? Do you have an issue with the regular flow of loading the xml+bin IR in the target application?
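
For illustration, a minimal sketch of that regular flow, assuming the 2021-era Inference Engine Python API (file names are placeholders taken from this thread):

```python
# Minimal sketch: load an IR (xml + bin) and let the CPU plugin
# compile it on the fly at load time, then run one dummy inference.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="bvlcalexnet-8.xml", weights="bvlcalexnet-8.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))
shape = net.input_info[input_name].input_data.shape
result = exec_net.infer({input_name: np.zeros(shape, dtype=np.float32)})
```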


dnns92 commented Jan 15, 2021

> Do you have an issue with the regular flow of loading the xml+bin IR in the target application?

The xml file reveals everything about the model, which I don't find pleasing. Pre-compiling it makes the model more obscure (at least that was my thinking; correct me if I'm wrong).


slyalin commented Jan 15, 2021

Offline compilation doesn't give any guarantees in that regard. Its only goal is to save time at run time by doing the time-consuming compilation offline for devices that require it. As a side effect you lose the ability to easily extract the graph topology, as you can with the xml, but that is just an unintentional side effect: by digging deeper one can still access the topology (optimized for the specific HW) and the binary weights. If you want to hide the topology, you may want to encrypt the IR instead. So even if offline compilation were available for CPU, the resulting blob would still represent the topology somehow, and you could find all the convolutions with their weights in it.
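
As an example, here is a hypothetical sketch of that encrypt-the-IR approach, assuming the 2021-era Python API's init_from_buffer option; the third-party cryptography package (Fernet) is an illustrative choice, not something OpenVINO provides:

```python
# Hypothetical sketch: ship encrypted xml/bin files, decrypt them in
# memory at run time, and hand the buffers to the Inference Engine so
# the plain IR never lands on disk on the target machine.
from cryptography.fernet import Fernet
from openvino.inference_engine import IECore

key = Fernet.generate_key()  # in practice, stored/derived securely
fernet = Fernet(key)

# One-time packaging step: encrypt the plain IR files.
for name in ("bvlcalexnet-8.xml", "bvlcalexnet-8.bin"):
    with open(name, "rb") as f, open(name + ".enc", "wb") as out:
        out.write(fernet.encrypt(f.read()))

# At run time: decrypt into memory only.
with open("bvlcalexnet-8.xml.enc", "rb") as f:
    xml_bytes = fernet.decrypt(f.read())
with open("bvlcalexnet-8.bin.enc", "rb") as f:
    bin_bytes = fernet.decrypt(f.read())

ie = IECore()
# read_network accepts in-memory buffers when init_from_buffer=True.
net = ie.read_network(model=xml_bytes, weights=bin_bytes, init_from_buffer=True)
exec_net = ie.load_network(network=net, device_name="CPU")
```

Key management is the hard part of such a scheme; the decrypted IR then only ever exists in process memory.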

avitial removed the bug label on Jan 20, 2021

avitial commented Jan 27, 2021

Closing this issue; I hope the previous responses were sufficient to help you proceed. Feel free to reopen and ask additional questions related to this topic.

avitial closed this as completed on Jan 27, 2021