
cant run converted keras model in dnn model, get error 'allocateLayers' or 'cv::dnn::dnn4_v20210301::`anonymous-namespace'::addConstNodes' #20153

Open
Ohad-Multisense opened this issue May 25, 2021 · 10 comments


Ohad-Multisense commented May 25, 2021

System information (version)
  • OpenCV => 4.5.2 / 4.1.1
  • Operating System / Platform => Windows 10 64 Bit / Ubuntu 16.04 64 Bit
  • python=> 3.7.9
Detailed description

Hello, I am trying to run my trained model with OpenCV but I get errors:

First I train my model in Keras (the summary is shown below).
After training I convert the Keras model to TensorFlow and get a .pb file.
When I try to run the model I get this error:

error: OpenCV(4.5.2) C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-dn5w5exm\opencv\modules\dnn\src\dnn.cpp:3127: error: (-215:Assertion failed) inp.total() in function 'cv::dnn::dnn4_v20210301::Net::Impl::allocateLayers'

If I try to load the .pb model with the .pbtxt configuration, I get this error:

[ERROR:0] global C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-dn5w5exm\opencv\modules\dnn\src\tensorflow\tf_importer.cpp (748) cv::dnn::dnn4_v20210301::`anonymous-namespace'::addConstNodes DNN/TF: Can't handle node='strided_slice/stack_2'. Exception: OpenCV(4.5.2) C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-dn5w5exm\opencv\modules\dnn\src\tensorflow\tf_importer.cpp:742: error: (-215:Assertion failed) const_layers.insert(std::make_pair(name, li)).second in function 'cv::dnn::dnn4_v20210301::`anonymous-namespace'::addConstNodes'

I tried first with OpenCV version 4.1.1, then with version 4.5.2, and on different operating systems (Windows 10 and Ubuntu 16.04).

Maybe someone can help me load my trained model with OpenCV.
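As a first debugging step (not part of the original report), the text-format .pbtxt can be scanned for the op types it contains, to spot nodes such as `strided_slice` or duplicated `Const` entries that trip up OpenCV's TF importer. A minimal stdlib sketch, assuming a standard text-format GraphDef where each node carries an `op: "..."` field:

```python
import re
from collections import Counter

def count_ops(pbtxt_text):
    # Tally each `op: "..."` entry in a text-format GraphDef.
    return Counter(re.findall(r'op:\s*"([^"]+)"', pbtxt_text))

# Toy GraphDef fragment for illustration:
sample = 'node { name: "x" op: "Placeholder" } node { name: "c" op: "Const" }'
print(count_ops(sample))  # Counter({'Placeholder': 1, 'Const': 1})
```

Running this over the real .pbtxt would show whether ops unsupported by the importer are present before attempting `readNetFromTensorflow`.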

Steps to reproduce
```python
import numpy as np
import cv2
from keras.models import load_model


keras_model = load_model('pb_people_20/sim_mdl_func_v2_pow2_china_people_20.h5')
keras_model.summary()

matcher_path_opencv='pb_people_20/sim_mdl_func_v2_pow2_china_people_20.pb'
matcher_path_opencv_pbtxt='pb_people_20/sim_mdl_func_v2_pow2_china_people_20_constant_graph.pbtxt'

matcher_opencv = cv2.dnn.readNetFromTensorflow(matcher_path_opencv,matcher_path_opencv_pbtxt)

def get_match_opencv(embs_0,embs_1):
    blobs = cv2.dnn.blobFromImages([embs_0,embs_1])
    # Set blob as input to faceNet
    matcher_opencv.setInput(blobs)#,'input_1')
    #opencv_matcher_model.setInput(blobs)
    #opencv_matcher_model.forward() 
    # Runs a forward pass to compute the net output
    return matcher_opencv.forward() 

def match_opencv(embs_0,embs_1):
    res = 0
    for first_emb in embs_0:
        for second_emb in embs_1:
            match_res = np.squeeze(get_match_opencv(first_emb,second_emb))
            res = max(res,match_res)
    return res


emb_comp = np.array([[-0.01565263, -0.03656595, -0.02106497, -0.01172114, -0.03183658,
         0.04423258,  0.07379042, -0.03448544,  0.00917874,  0.10619327,
         0.067057  , -0.04213215, -0.07464422, -0.06878724, -0.09982515,
         0.01137134,  0.04709429,  0.02330536, -0.00764309, -0.14052634,
        -0.17314363, -0.04568923,  0.01383415, -0.00578358, -0.08864444,
         0.07767007, -0.06337555, -0.09800968, -0.04953913,  0.01720826,
        -0.00948498, -0.09592741, -0.10465072, -0.06163599,  0.22899778,
         0.10588204, -0.03106888,  0.06826289,  0.19125976,  0.09685656,
         0.10258153, -0.04890387,  0.07771394,  0.10267048,  0.12820566,
        -0.17984524,  0.10338812,  0.11158907, -0.06663922,  0.05078162,
        -0.0144849 ,  0.03813416, -0.12748823,  0.06191524, -0.09051557,
         0.01777557,  0.14020114, -0.00481044, -0.04369346, -0.06189118,
         0.02411589, -0.2553893 ,  0.04399932, -0.11846916,  0.08772267,
         0.14251669, -0.11032436,  0.09020332, -0.04854635, -0.1081277 ,
        -0.11582536, -0.03723017,  0.03233388,  0.00288666,  0.13660526,
        -0.03048528, -0.10125471,  0.0071717 ,  0.00672208, -0.02564991,
         0.02921752,  0.03457039,  0.01814651, -0.04918699, -0.08088212,
        -0.00733991, -0.0040632 ,  0.12189984, -0.02091988,  0.27729505,
        -0.09878255,  0.01982491,  0.10892831, -0.05085582, -0.08475107,
         0.13422482, -0.04512339,  0.01896015, -0.10464227,  0.04971102,
        -0.07694119, -0.06038413, -0.0517869 ,  0.00917315,  0.00206291,
        -0.02974024, -0.01651836,  0.16789761,  0.07368388, -0.10541685,
         0.14497165,  0.06329738,  0.1521214 , -0.01266835, -0.00671655,
        -0.083036  ,  0.07795148, -0.02421858,  0.17278863, -0.05978476,
        -0.06421243, -0.14550938,  0.00752902,  0.01600735, -0.07240639,
         0.02841103, -0.05694156, -0.11571351]], dtype=np.float32)

img_embs = np.array([[ 0.11008514,  0.01455243,  0.01908959, -0.05834481, -0.14181934,
         0.1882611 ,  0.08232455, -0.11491345,  0.05500843,  0.04011501,
        -0.04828305, -0.03326504, -0.05615507, -0.02345234, -0.0491137 ,
         0.12314314,  0.03639589, -0.02172398,  0.12210201, -0.09503838,
        -0.08331381,  0.01604286,  0.00464164, -0.01787202, -0.0006151 ,
         0.13468699,  0.01733225, -0.10102501, -0.02729505, -0.10542642,
        -0.07366027,  0.02062424, -0.08315776, -0.01222375,  0.18682827,
         0.11723035, -0.16790824,  0.0639649 ,  0.17808685,  0.05183998,
         0.07382489, -0.0351243 ,  0.00582562,  0.01956308,  0.04352709,
        -0.10843045,  0.02012463, -0.05169405, -0.05626178,  0.07084419,
        -0.03077255, -0.02904841, -0.0701267 , -0.04788103,  0.08546986,
         0.05289406,  0.05466829,  0.05425635, -0.0813577 , -0.03403144,
         0.00544576, -0.2616964 ,  0.06646861, -0.05636386,  0.0138996 ,
         0.01612337, -0.00722741, -0.06695902,  0.05770075, -0.05611902,
        -0.00302837, -0.05027693,  0.03430333,  0.02994055,  0.16164383,
        -0.02481532, -0.20324335,  0.0160754 ,  0.06737015, -0.01307072,
         0.14364758,  0.0039455 ,  0.09170939, -0.03038747,  0.02139785,
        -0.0776043 ,  0.07581864,  0.00768354, -0.03795378,  0.11794008,
        -0.10925949, -0.07035936,  0.03584298, -0.05980027, -0.05402324,
         0.11599824, -0.2232277 ,  0.05435819, -0.06722381,  0.04105445,
        -0.12166104,  0.02219271, -0.10503792,  0.04090016,  0.01218935,
        -0.1845343 , -0.00819734,  0.2274523 , -0.0579717 , -0.19095403,
         0.14852127,  0.10555372,  0.11048518,  0.05386249, -0.07497197,
        -0.01151269,  0.15070955,  0.0480066 ,  0.06213563, -0.01823237,
         0.09147692, -0.14006777,  0.072235  ,  0.03428208, -0.08408232,
         0.03943622, -0.07408921, -0.08130765]], dtype=np.float32)


res2 = match_opencv([emb_comp],[img_embs])




```
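For context, the `match_opencv` loop above is just a max-reduction over all embedding pairs. A pure-Python sketch of that reduction, with a stubbed scoring function standing in for the network forward pass (the stub is an illustration, not the real model):

```python
def best_match(embs_0, embs_1, score):
    # Return the highest score over all pairs (one from each set).
    best = 0
    for a in embs_0:
        for b in embs_1:
            best = max(best, score(a, b))
    return best

# Stub score: closeness of two scalars, mapped into (0, 1].
score = lambda a, b: 1.0 / (1.0 + abs(a - b))
print(best_match([1.0, 3.0], [2.0, 10.0], score))  # 0.5
```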

Test_load_with_open_CV.zip

The summary of the model:
```
Model: "Similarity_Model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to

=================================================================================
ImageA_Input (InputLayer)       (None, 128)          0                                            
__________________________________________________________________________________________________
ImageB_Input (InputLayer)       (None, 128)          0                                            
__________________________________________________________________________________________________
lambda_1 (Lambda)               (None, 128)          0           ImageA_Input[0][0]               
                                                                 ImageB_Input[0][0]               
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 32)           4128        lambda_1[0][0]                   
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 32)           0           dense_1[0][0]                    
__________________________________________________________________________________________________
dense_2 (Dense)                 (None, 16)           528         dropout_1[0][0]                  
__________________________________________________________________________________________________
dropout_2 (Dropout)             (None, 16)           0           dense_2[0][0]                    
__________________________________________________________________________________________________
dense_3 (Dense)                 (None, 1)            17          dropout_2[0][0]                  

============================================================================    
Total params: 4,673
Trainable params: 4,673
Non-trainable params: 0
```
@Ohad-Multisense (Author)

Thanks @alalek

rogday (Member) commented Jun 21, 2021

I think you obtained pb and pbtxt incorrectly. This should work:

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
from tensorflow.keras.models import load_model

def write_pb(model, filename):
    func = tf.function(lambda x: model(x))
    func = func.get_concrete_function([tf.TensorSpec(model_input.shape, model_input.dtype) for model_input in model.inputs])

    func = convert_variables_to_constants_v2(func)
    graph = func.graph.as_graph_def()

    tf.io.write_graph(graph_or_graph_def=graph, logdir=".", name=filename, as_text=False)

    return graph

def write_pbtxt(model, graph, filename):
    for i in reversed(range(len(graph.node))):
        if graph.node[i].op == 'Const':
            del graph.node[i]

    graph.library.Clear()

    tf.compat.v1.train.write_graph(graph_or_graph_def=graph, logdir=".", name=filename, as_text=True)

def saveModel(model, name):
    graph = write_pb(model, name + '.pb')
    write_pbtxt(model, graph, name + '.pbtxt')

model = load_model('./sim_mdl_func_v2_pow2_china_people_20.h5')
saveModel(model, 'model')
```

@Ohad-Multisense (Author)

> I think you obtained pb and pbtxt incorrectly. This should work: (conversion script above)

Thanks, I will try it.
Which version of TensorFlow are you using?

rogday (Member) commented Jul 6, 2021

> > I think you obtained pb and pbtxt incorrectly. This should work: (conversion script above)
>
> Thanks, I will try it. Which version of TensorFlow are you using?

I was using 2.5.0 IIRC

@Ohad-Multisense (Author)

> I was using 2.5.0 IIRC

Thanks, I will try it with TensorFlow 2.5.0, because with 1.14.0 it is not working.

@Ohad-Multisense (Author)

> I was using 2.5.0 IIRC

I tested it with TensorFlow 2.5.0,
but I get the error: ValueError: bad marshal data (unknown type code)

```
Python 3.8.10 (default, Jun  4 2021, 15:09:15)
Type "copyright", "credits" or "license" for more information.

IPython 7.22.0 -- An enhanced Interactive Python.

runfile('/media/maxim/Windows/Maxim/FaceVerification/Matcher/k2tf_convert__form_forum.py', wdir='/media/maxim/Windows/Maxim/FaceVerification/Matcher')
2021-07-07 15:31:46.369922: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/tensorrt/lib:/usr/local/cuda-9.0/lib64:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:
2021-07-07 15:31:46.369944: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
Using TensorFlow backend.
WARNING:tensorflow:From /home/maxim/anaconda2/envs/flask/lib/python3.8/site-packages/tensorflow/python/compat/v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
Instructions for updating:
non-resource variables are not supported in the long term
Traceback (most recent call last):
  File "/home/maxim/anaconda2/envs/flask/lib/python3.8/site-packages/keras/utils/generic_utils.py", line 231, in func_load
    code = marshal.loads(raw_code)
ValueError: bad marshal data (unknown type code)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/media/maxim/Windows/Maxim/FaceVerification/Matcher/k2tf_convert__form_forum.py", line 50, in <module>
    keras_model = load_model(mdl_path)
  File "/home/maxim/anaconda2/envs/flask/lib/python3.8/site-packages/keras/engine/saving.py", line 419, in load_model
    model = _deserialize_model(f, custom_objects, compile)
  File "/home/maxim/anaconda2/envs/flask/lib/python3.8/site-packages/keras/engine/saving.py", line 225, in _deserialize_model
    model = model_from_config(model_config, custom_objects=custom_objects)
  File "/home/maxim/anaconda2/envs/flask/lib/python3.8/site-packages/keras/engine/saving.py", line 458, in model_from_config
    return deserialize(config, custom_objects=custom_objects)
  File "/home/maxim/anaconda2/envs/flask/lib/python3.8/site-packages/keras/layers/__init__.py", line 52, in deserialize
    return deserialize_keras_object(config,
  File "/home/maxim/anaconda2/envs/flask/lib/python3.8/site-packages/keras/utils/generic_utils.py", line 142, in deserialize_keras_object
    return cls.from_config(
  File "/home/maxim/anaconda2/envs/flask/lib/python3.8/site-packages/keras/engine/network.py", line 1022, in from_config
    process_layer(layer_data)
  File "/home/maxim/anaconda2/envs/flask/lib/python3.8/site-packages/keras/engine/network.py", line 1007, in process_layer
    layer = deserialize_layer(layer_data,
  File "/home/maxim/anaconda2/envs/flask/lib/python3.8/site-packages/keras/layers/__init__.py", line 52, in deserialize
    return deserialize_keras_object(config,
  File "/home/maxim/anaconda2/envs/flask/lib/python3.8/site-packages/keras/utils/generic_utils.py", line 142, in deserialize_keras_object
    return cls.from_config(
  File "/home/maxim/anaconda2/envs/flask/lib/python3.8/site-packages/keras/layers/core.py", line 735, in from_config
    function = func_load(config['function'], globs=globs)
  File "/home/maxim/anaconda2/envs/flask/lib/python3.8/site-packages/keras/utils/generic_utils.py", line 236, in func_load
    code = marshal.loads(raw_code)
ValueError: bad marshal data (unknown type code)
```

Do you know how I can solve it? Thanks a lot.

rogday (Member) commented Jul 7, 2021

I tested my code for saving with python 3.7.10, tensorflow 1.15 and opencv 4.5.2. Everything works fine if I insert tf.compat.v1.enable_eager_execution() at the start.


Ohad-Multisense commented Jul 11, 2021

> I tested my code for saving with python 3.7.10, tensorflow 1.15 and opencv 4.5.2. Everything works fine if I insert tf.compat.v1.enable_eager_execution() at the start.

Thank you very much, I tried this and got both files: .pb and .pbtxt

Model converted according to your instructions:
sim_mdl_func_v2_pow2_china_people_20.zip

But when I try to test it with OpenCV I get an error:

emb_comp = [np.array([[-2.07526069e-02, -3.76138873e-02, -1.81949530e-02,
        -1.27303926e-03, -2.80939788e-02,  3.59291025e-02,
         7.89560154e-02, -3.59906815e-02, -2.18113977e-03,
         1.24354869e-01,  7.09442645e-02, -3.73696014e-02,
        -7.89967924e-02, -7.08339959e-02, -1.08526520e-01,
        -5.16016176e-03,  4.82012965e-02,  1.79995038e-02,
        -8.50073341e-03, -1.43060327e-01, -1.79133609e-01,
        -5.06920442e-02,  1.30563295e-02, -1.70601644e-02,
        -8.52133632e-02,  7.94231817e-02, -4.59608659e-02,
        -9.71303508e-02, -5.04867733e-02,  2.24852767e-02,
        -1.70053616e-02, -8.70493948e-02, -1.04366392e-01,
        -8.27868059e-02,  2.25306377e-01,  9.93802696e-02,
        -2.76333746e-02,  6.41410127e-02,  1.88372731e-01,
         8.63860697e-02,  9.89582837e-02, -4.50619422e-02,
         7.15015009e-02,  1.19220816e-01,  1.34420380e-01,
        -1.83178157e-01,  9.51981917e-02,  1.14982471e-01,
        -4.97261807e-02,  5.97340763e-02, -1.05344653e-02,
         4.21733148e-02, -1.17164105e-01,  4.82109375e-02,
        -9.32606086e-02,  1.34737287e-02,  1.39889300e-01,
        -3.43040272e-04, -3.76531482e-02, -6.32638708e-02,
         1.92606598e-02, -2.68090129e-01,  4.57393005e-02,
        -1.21325538e-01,  9.15667936e-02,  1.50711983e-01,
        -1.18540570e-01,  9.71808583e-02, -4.71785925e-02,
        -9.60603356e-02, -1.25995740e-01, -3.34104635e-02,
         2.29550730e-02,  8.03949032e-03,  1.38658345e-01,
        -3.91187556e-02, -8.95357132e-02,  1.62388720e-02,
         7.89797457e-04, -1.24329710e-02,  1.69271659e-02,
         4.36490513e-02,  1.87081564e-02, -5.01879491e-02,
        -8.33383203e-02,  1.45936632e-04, -1.64315850e-02,
         1.07943699e-01, -3.75260226e-02,  2.72473335e-01,
        -1.00223340e-01,  2.67054718e-02,  1.03623085e-01,
        -4.34564687e-02, -8.17834139e-02,  1.23940527e-01,
        -4.41038907e-02,  2.31757257e-02, -1.09333508e-01,
         3.59940566e-02, -8.47578347e-02, -6.52976036e-02,
        -3.89938951e-02,  3.28362011e-03, -1.21749649e-02,
        -2.25499980e-02, -1.25503661e-02,  1.66699931e-01,
         7.29326755e-02, -1.01906836e-01,  1.43347621e-01,
         6.38188198e-02,  1.49770677e-01,  2.11979705e-03,
         2.86606583e-03, -6.54566661e-02,  8.51759166e-02,
        -3.50150280e-02,  1.66765362e-01, -4.12612893e-02,
        -5.51314838e-02, -1.58738792e-01,  6.95668627e-04,
         1.28341401e-02, -8.43253508e-02,  2.57490240e-02,
        -5.32488748e-02, -1.07368752e-01]], dtype=np.float32)]

img_embs = [np.array([[ 0.08299512,  0.04560803,  0.03116382, -0.03930001, -0.08403633,
         0.14950442,  0.0729406 , -0.10549465,  0.10704392,  0.06648093,
        -0.06004801, -0.02714206, -0.07316988, -0.08157247, -0.04738788,
         0.14541239, -0.02736695,  0.0721011 ,  0.10346765, -0.05763485,
        -0.06799629,  0.01061096,  0.05501751,  0.07780259, -0.01651143,
         0.14266111, -0.03180892, -0.13978282, -0.07177536, -0.09723921,
        -0.04592864, -0.04917077, -0.09854629,  0.0072313 ,  0.1446461 ,
         0.10760631, -0.10153292,  0.05217709,  0.22002015,  0.05179396,
         0.11984765, -0.04360949, -0.00775165, -0.0418541 ,  0.06799418,
         0.01069122,  0.05947721, -0.06431236, -0.10907257,  0.07055985,
        -0.08934348, -0.04483014, -0.05907475,  0.01866955,  0.06799407,
        -0.00205917,  0.09784131, -0.02283142, -0.05978012,  0.02966089,
         0.13560592, -0.21287084,  0.06778804, -0.08625988,  0.06007105,
         0.02196306,  0.03125143, -0.03363482,  0.09186722, -0.04559114,
        -0.0246855 ,  0.00345274, -0.01052804,  0.06904986,  0.17674777,
        -0.04841331, -0.22535895,  0.022614  ,  0.13221958, -0.01430765,
         0.1128735 , -0.0665758 ,  0.10571466, -0.05914513,  0.00428186,
         0.01118833,  0.06930657,  0.0920514 , -0.02113676,  0.16240422,
        -0.08114551, -0.02897336,  0.09336429, -0.01132694, -0.03403351,
         0.06171956, -0.16830747,  0.09194379, -0.0766651 ,  0.06004078,
        -0.09434112,  0.03440706, -0.13617109,  0.00559863,  0.03040126,
        -0.17688258, -0.12178319,  0.24337214, -0.12474367, -0.13440283,
         0.10498703,  0.11160351,  0.10737723, -0.03220632, -0.12789217,
        -0.00120715,  0.02936077,  0.03229442,  0.09272455,  0.00215561,
         0.00268578, -0.1165002 ,  0.02916852, -0.0132524 , -0.07830146,
        -0.01242953, -0.05116267, -0.10529439]], dtype=np.float32)]



# load model
matcher_path_opencv = './sim_mdl_func_v2_pow2_china_people_20.pb'
matcher_opencv = cv2.dnn.readNetFromTensorflow(matcher_path_opencv,matcher_path_opencv+'txt')

def get_match_opencv(matcher_opencv,embs_0,embs_1):
    blobs_feat = np.squeeze(np.stack([embs_0,embs_1]).astype(np.float32))
    matcher_opencv.setInput(blobs_feat)#,'input_1')
    # Runs a forward pass to compute the net output
    return matcher_opencv.forward()     

res_opencv =  get_match_opencv(matcher_opencv,emb_comp,img_embs) 

And the error is:

error: OpenCV(4.5.2) ../modules/dnn/src/dnn.cpp:3127: error: (-215:Assertion failed) inp.total() in function 'allocateLayers'

Did you get this error?

@Ohad-Multisense (Author)

> error: OpenCV(4.5.2) ../modules/dnn/src/dnn.cpp:3127: error: (-215:Assertion failed) inp.total() in function 'allocateLayers'

I resolved this error: I was not setting the inputs to the network correctly.
The correct way to set the inputs is:

```python
def get_match_opencv(matcher_opencv, embs_0, embs_1):
    matcher_opencv.setInput(embs_0, 'x')
    matcher_opencv.setInput(embs_1, 'x_1')
    # Runs a forward pass to compute the net output
    return matcher_opencv.forward()
```

But when I try to load my old network, which differs in the second layer, I get this error:

cv2.error: OpenCV(4.5.2) ../modules/dnn/src/dnn.cpp:621: error: (-2:Unspecified error) Can't create layer "Similarity_Model/lambda_1/pow" of type "Pow" in function 'getLayerInstance'

The difference is that the old network uses a pow function instead of the multiplication used in the new network.

Maybe you know how I can solve it?
Thanks a lot.

rogday (Member) commented Jul 13, 2021

> The difference is that the old network uses a pow function instead of the multiplication used in the new network.

Do you mean A^B element-wise? I do not believe we support this layer. The ONNX importer supports A^c, where A and B are tensors and c is a constant scalar. If the original problem was resolved, feel free to close this issue and open a new one concerning Pow.
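If the Lambda's Pow is only squaring a value, one possible workaround (an assumption about the model, since the actual Lambda is not shown) is to express the square as an element-wise multiplication, which emits a Mul node instead of a Pow node; the two computations are numerically equivalent. A minimal sketch of that equivalence in plain Python:

```python
def squared_diff_pow(a, b):
    # Squared difference via the power operator (would map to a Pow node in a graph).
    return [(x - y) ** 2 for x, y in zip(a, b)]

def squared_diff_mul(a, b):
    # Same computation via multiplication (would map to a Mul node instead).
    return [(x - y) * (x - y) for x, y in zip(a, b)]

a, b = [0.1, 0.4, -0.2], [0.3, 0.0, 0.1]
print(squared_diff_mul(a, b))
```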

3 participants