
opencv dnn unsupported data type float16 #18735

Closed
dodler opened this issue Nov 5, 2020 · 3 comments

Comments

@dodler

dodler commented Nov 5, 2020

opencv-python-4.4.0.46
onnx 1.6.0
onnxmltools 1.7.0
onnxruntime 1.4.0

Hi, I'm using OpenCV DNN to load an fp16 ONNX model, but it fails with the following error:

cv2.error: OpenCV(4.4.0) /private/var/folders/nz/vv4_9tw56nv9k3tkvyszvwg80000gn/T/pip-req-build-ucld1hvm/opencv/modules/dnn/src/onnx/onnx_graph_simplifier.cpp:527: error: (-210:Unsupported format or combination of formats) Unsupported data type: FLOAT16 in function 'getMatFromTensor'

My code:

import cv2
import sys

net = cv2.dnn.readNetFromONNX(sys.argv[1])
img = cv2.cvtColor(cv2.imread("img.jpg"), cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (224, 224))
blob = cv2.dnn.blobFromImage(img, size=(224, 224))
net.setInput(blob)
print(net.forward())

It works well with the original fp32 model. I get the fp16 model with onnxmltools, using
from onnxmltools.utils.float16_converter import convert_float_to_float16

The same problem occurs with an int8 model.

@2er0 2er0 mentioned this issue Nov 12, 2020
@krush11
Contributor

krush11 commented Nov 15, 2020

@alalek May I work on this issue? Although I would need some help.

@alalek
Member

alalek commented Nov 15, 2020

Try converting FLOAT16 ONNX tensors to FP32 and check whether that works (adding a simple test along the way):

  • prepare a .onnx model for the reproducer (you may want to convert some model from this list, or generate synthetic models with this script)
  • add tensor conversion to FP32
  • prepare test with FP16 model
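The tensor-conversion step above could look roughly like this at the NumPy level (a hypothetical helper to illustrate the idea, not the actual C++ code in getMatFromTensor):

```python
import numpy as np

def fp16_tensor_to_fp32(raw_bytes, shape):
    # Reinterpret the raw FLOAT16 tensor buffer, then widen each value to float32.
    arr = np.frombuffer(raw_bytes, dtype=np.float16).reshape(shape)
    return arr.astype(np.float32)

# Example: a 3-element FP16 buffer round-trips losslessly to FP32.
buf = np.array([1.0, 0.5, -2.0], dtype=np.float16).tobytes()
print(fp16_tensor_to_fp32(buf, (3,)))
```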

An alternative is creating the Mat as CV_FP16 (master branch only), but then we need to check/fix its usage throughout the whole DNN library implementation (there are many places where only FP32 is supported/expected). That is the long way.

@zihaomu
Member

zihaomu commented Oct 10, 2022

We have added support for ONNX float16 models in #22337.
