Detailed description
After I upgraded from version 4.5.1 to 4.5.4 and later, TensorFlow models that use NHWC-layout layers stopped working. I ran the dnn tests on the current version and everything passed for TF. When I call forward(), I get this error:
what(): OpenCV(4.5.5-pre) /home/user/libs/opencv_19_12_master/src/opencv-4.x/modules/dnn/src/layers/[convolution_layer.cpp:405](https://github.com/opencv/opencv/blame/80492d663e3fcdaf84f1edf234e563d2a5c81951/modules/dnn/src/layers/convolution_layer.cpp#L402-L405): error: (-2:Unspecified error) Number of input channels should be multiple of 3 but got 224 in function 'getMemoryShapes'.
I also tried making my model's input NCHW and then permuting the tensor's dimensions so that the result would be NHWC again, so I would not have to convert the architectures themselves to NCHW style.
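The permutation mentioned above is just a reordering of a dense NCHW buffer with the axis order {0, 2, 3, 1}. As a minimal sketch (nchwToNhwc is a hypothetical helper written for illustration, not an OpenCV API), the index arithmetic looks like this:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper (not part of OpenCV): copies a dense NCHW buffer
// into NHWC order, i.e. applies the axis permutation {0, 2, 3, 1}.
std::vector<float> nchwToNhwc(const std::vector<float>& src,
                              std::size_t n, std::size_t c,
                              std::size_t h, std::size_t w) {
    std::vector<float> dst(src.size());
    for (std::size_t in = 0; in < n; ++in)
        for (std::size_t ic = 0; ic < c; ++ic)
            for (std::size_t ih = 0; ih < h; ++ih)
                for (std::size_t iw = 0; iw < w; ++iw)
                    // NHWC offset on the left, NCHW offset on the right.
                    dst[((in * h + ih) * w + iw) * c + ic] =
                        src[((in * c + ic) * h + ih) * w + iw];
    return dst;
}
```

Doing this on the blob after cv::dnn::blobFromImage (which always produces NCHW) would only help if the importer actually accepted NHWC input, which is exactly what broke here.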
So I see two problems:
1. My project uses many TF NHWC models, and rewriting each architecture to NCHW format and retraining them would be a huge waste of time. I suspect many other people also use NHWC-style models with OpenCV for inference, so wouldn't breaking NHWC support be too harsh?
2. If support for NHWC layers has been dropped, shouldn't an exception be thrown when a model containing NHWC-style layers is loaded?
Steps to reproduce
Just load an NHWC-style TF model:
// Load the NHWC-style TensorFlow model and run inference.
const std::string imgPath{"test_img.png"};
const std::string pbModelPath{"opencv_graph.pb"};
auto net = cv::dnn::readNetFromTensorflow(pbModelPath);
cv::Mat img = cv::imread(imgPath, cv::IMREAD_COLOR);
auto blob = cv::dnn::blobFromImage(img, 1.0, cv::Size(224, 224));
net.setInput(blob);
auto outputs = net.forward();  // throws in getMemoryShapes()
@Ichini24, I found there is a reshape layer after the placeholder. Your problem is caused by an inconsistency between the reshape layer and the placeholder.
In the short term you can work around it by removing the reshape layer after the input and using blobFromImage to resize the input image instead.
Hope this helps.