Prelu from Tensorflow.keras (assert m.total() == outCn) #17263

Open
anas-899 opened this issue May 11, 2020 · 7 comments
anas-899 commented May 11, 2020

I built the latest OpenCV C++ from source on Windows x64 (this includes PR #16983).
I am aware that OpenCV recently added support for PReLU from TensorFlow, which is why I built from source.

I am trying a very simple model (written in Python using TensorFlow 1.15):

import tensorflow as tf
from tensorflow import keras

def build_model(net_input_dim=112):
    inputs = keras.layers.Input((net_input_dim, net_input_dim, 3), name="input")
    x = keras.layers.Conv2D(filters=1, kernel_size=3, strides=1, name="conv2d")(inputs)
    x = keras.layers.PReLU(name="prelu")(x)
    return keras.models.Model(inputs, x, name='small_net')

I froze this model (attached: opencv_issue.zip).

To call this frozen model from C++:

#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>

int main() {
    const int network_input_size = 112;
    cv::dnn::Net net = cv::dnn::readNetFromTensorflow("model.pb");
    cv::Mat input_image = cv::imread("any_colored_image.jpg", cv::IMREAD_COLOR);
    cv::Mat inputBlob = cv::dnn::blobFromImage(input_image, 1.0 / 255.0, cv::Size(network_input_size, network_input_size));
    net.setInput(inputBlob);
    cv::Mat out = net.forward();
    return 0;
}

When net.forward is run, I get the following error:
(m.isContinuous() && m.type() == CV_32F && (int)m.total() == outCn) in cv::dnn::ConvolutionLayerImpl::forward, file "opencv-master\modules\dnn\src\layers\convolution_layer.cpp, line 1425

the failure is in the following line:
CV_Assert(m.isContinuous() && m.type() == CV_32F && (int)m.total() == outCn);
m.total() is width x height = 1 x 12100 = 12100, while outCn is 1, which is why the assertion fails.
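As a sketch of the shape arithmetic behind the failing assertion (assuming the model above: 112x112 input, 3x3 'valid' convolution, 1 filter):

```python
# Sketch of why m.total() != outCn for this model (assumptions noted above).
net_input_dim, kernel_size, stride, filters = 112, 3, 1, 1

# 'valid' convolution output size: (input - kernel) // stride + 1
conv_out = (net_input_dim - kernel_size) // stride + 1  # 110

# Keras PReLU without shared_axes learns one alpha per activation,
# so alpha has the full conv-output shape 110 x 110 x 1.
alpha_total = conv_out * conv_out * filters

# OpenCV's ConvolutionLayerImpl expects one slope per output channel.
outCn = filters

print(alpha_total, outCn)  # 12100 1
```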

I have already checked the similar issue #13384.

System information (version)
  • OpenCV => 4.3 (master), built from source
  • Operating System / Platform => Windows 64 Bit
  • Compiler => Visual Studio 2017
  • Python => 3.7
  • TensorFlow => 1.15

To freeze the model:

import tensorflow as tf
from tensorflow.keras import backend as K

def freeze_session(session, keep_var_names=None, output_names=None, clear_devices=True):
    graph = session.graph
    with graph.as_default():
        freeze_var_names = list(set(v.op.name for v in tf.global_variables()).difference(keep_var_names or []))
        output_names = output_names or []
        output_names += [v.op.name for v in tf.global_variables()]
        input_graph_def = graph.as_graph_def()
        if clear_devices:
            for node in input_graph_def.node:
                node.device = ""
        frozen_graph = tf.graph_util.convert_variables_to_constants(
            session, input_graph_def, output_names, freeze_var_names)
        return frozen_graph

model = build_model()

frozen_graph = freeze_session(K.get_session(),
                              output_names=[out.op.name for out in model.outputs])

tf.compat.v1.train.write_graph(frozen_graph, "D:/", "model.pb", as_text=False)
dkurt (Member) commented May 11, 2020

Hi!
It seems to me that it may be a problem with model usage. Please show OpenCV code.

anas-899 (Author):

Hi @dkurt,
I have updated the question to add the C++ caller part.
Many thanks.

YashasSamaga (Contributor) commented May 11, 2020

I think the current PReLU implementation scales across channels only (the functor name is ChannelsPReLU, so it was probably intended for channel-wise PReLU). The CUDA and OpenCL backends seem to require that the number of parameters equal the number of channels.

A similar assertion is triggered in the CUDA backend too.

[ RUN      ] Test_TensorFlow_layers.tf2_prelu/0, where GetParam() = CUDA/CUDA
unknown file: Failure
C++ exception with description "OpenCV(4.3.0-dev) /fakepath/opencv/modules/dnn/src/layers/../cuda4dnn/primitives/activation.hpp:107: error: (-215:Assertion failed) input.get_axis_size(1) == slopeTensor.size() in function 'forward'
" thrown in the test body.
[  FAILED  ] Test_TensorFlow_layers.tf2_prelu/0, where GetParam() = CUDA/CUDA (10 ms)

The test applies PReLU (with 6 parameters) to an input tensor of shape 1 3 1 2. I think the fix is to extend PReLU to work with a broader class of input shapes.
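A broadcast-based PReLU along these lines could look like the following NumPy sketch (hypothetical, not OpenCV's implementation): alpha may be a scalar, a per-channel vector, or a full per-element array, and broadcasting handles all three.

```python
import numpy as np

def prelu(x, alpha):
    # Broadcast alpha to the input shape, then apply
    # f(x) = x for x >= 0, alpha * x for x < 0.
    alpha = np.broadcast_to(alpha, np.shape(x))
    return np.where(x >= 0, x, alpha * x)

x = np.array([[-1.0, 2.0], [-3.0, 4.0]])

# Per-element alpha, as Keras PReLU without shared_axes produces:
full_alpha = np.full(x.shape, 0.25)
print(prelu(x, full_alpha))   # negative entries scaled by 0.25

# Per-row (channel-like) alpha also works via broadcasting:
row_alpha = np.array([[0.1], [0.5]])
print(prelu(x, row_alpha))    # one slope per row
```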

anas-899 (Author):

Many thanks @YashasSamaga.
Do you have any suggestion for how to fix this, or should I wait for it to be fixed in a new PR?

YashasSamaga (Contributor) commented May 11, 2020

This is what is happening in your case:

input: 1 1 110 110 
parameters: 12100

Clearly, the number of channels is not equal to the number of parameters.

The fix seems to be non-trivial. I think you'll have to wait for a PR. If you plan on using the CUDA backend, I can make a temporary workaround for you.


I just realized that this problem exists in Inference Engine too.

anas-899 (Author) commented May 11, 2020

@YashasSamaga I am using the CPU backend.

dkurt (Member) commented May 11, 2020

https://keras.io/api/layers/activation_layers/prelu/ says

Parametric Rectified Linear Unit.

It follows:

f(x) = alpha * x for x < 0
f(x) = x for x >= 0
where alpha is a learned array with the same shape as x.

But

shared_axes: The axes along which to share learnable parameters for the activation function. For example, if the incoming feature maps are from a 2D convolution with output shape (batch, height, width, channels), and you wish to share parameters across space so that each filter only has one set of parameters, set shared_axes=[1, 2].

However, we somehow enabled the test case from #16983 with channel-wise PReLU. Need to check.
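As a hedged sketch of the parameter counts (assuming channels_last layout, as in the model above): shared_axes=[1, 2] shares alpha across height and width, leaving one parameter per channel, which matches what OpenCV's ChannelsPReLU expects, while the default learns one alpha per activation.

```python
# Conv output shape (minus batch) from the model above, channels_last.
out_h, out_w, out_c = 110, 110, 1

# Default PReLU: alpha has the same shape as its input.
default_params = out_h * out_w * out_c   # 12100

# PReLU(shared_axes=[1, 2]): alpha shared over height and width,
# i.e. one slope per channel (channel-wise PReLU).
shared_params = out_c                    # 1

print(default_params, shared_params)
```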
