Hi, I'm using OpenCV DNN to load an fp16 ONNX model, but it fails with the following error:

cv2.error: OpenCV(4.4.0) /private/var/folders/nz/vv4_9tw56nv9k3tkvyszvwg80000gn/T/pip-req-build-ucld1hvm/opencv/modules/dnn/src/onnx/onnx_graph_simplifier.cpp:527: error: (-210:Unsupported format or combination of formats) Unsupported data type: FLOAT16 in function 'getMatFromTensor'

It works well with the original fp32 model. I generated the fp16 model with onnxmltools, using:

from onnxmltools.utils.float16_converter import convert_float_to_float16

The same problem occurs with an int8 model.
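For context, casting fp32 weights to fp16 narrows both precision and range, which is why the converted model stores a different tensor data type. A small numpy sketch (a generic illustration, not the onnxmltools converter internals) makes this visible:

```python
import numpy as np

w32 = np.array([0.1, 1e-8, 70000.0], dtype=np.float32)
w16 = w32.astype(np.float16)

# 0.1 is rounded to the nearest representable half-precision value,
# 1e-8 underflows to 0, and 70000 overflows to inf
# (the largest finite float16 value is 65504)
print(w16)
```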
Try converting FLOAT16 ONNX tensors to FP32 and check whether that works (with a simple test added):

prepare an .onnx model for the reproducer (you may want to convert some model from this list, or try to generate synthetic models with this script)
add tensor conversion to FP32
prepare a test with an FP16 model

An alternative is creating the Mat as CV_FP16 (master branch only), but then we would need to check/fix its usage throughout the whole DNN library implementation (there are many places where only FP32 is supported/expected). That is the long way.
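The suggested conversion (widening the FLOAT16 tensor payload to FP32 before building the Mat) can be sketched with numpy; this is a generic illustration of the idea, not the actual getMatFromTensor code:

```python
import numpy as np

# Stand-in for the raw_data bytes of an ONNX FLOAT16 tensor
# (assumption: little-endian IEEE-754 half precision, as ONNX stores it)
fp16_weights = np.array([1.0, 0.5, -2.0], dtype=np.float16)
raw = fp16_weights.tobytes()

# Reinterpret the buffer as float16, then widen to float32,
# the type the rest of the DNN pipeline expects
as_fp32 = np.frombuffer(raw, dtype=np.float16).astype(np.float32)
print(as_fp32)  # [ 1.   0.5 -2. ]
```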
opencv-python-4.4.0.46
onnx 1.6.0
onnxmltools 1.7.0
onnxruntime 1.4.0
My code:

import cv2
import sys

net = cv2.dnn.readNetFromONNX(sys.argv[1])
img = cv2.cvtColor(cv2.imread("img.jpg"), cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (224, 224))
blob = cv2.dnn.blobFromImage(img, size=(224, 224))
net.setInput(blob)
print(net.forward())