Sequential model runtime error #66

Closed
karloballa opened this issue Oct 29, 2020 · 8 comments

@karloballa

I'm new to the Keras/TensorFlow framework, so my question may be silly.

I have the following problem. If I create a functional model:

import tensorflow as tf

input = tf.keras.Input(shape=(5,))
output = tf.keras.layers.Dense(5, activation=tf.nn.relu)(input)
output = tf.keras.layers.Dense(1, activation=tf.nn.sigmoid)(output)
model = tf.keras.Model(inputs=input, outputs=output)
model.compile()
model.save('model', save_format='tf')

then everything works fine.

However, when I create the same model with the Sequential API, I get a runtime error:

from tensorflow.keras import Sequential, Input
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Input(shape=(5,)))
model.add(Dense(5))
model.add(Dense(1))
model.compile()
model.save('model', save_format='tf')
...
auto input = cppflow::fill({10, 5}, 1.0f);
cppflow::model model("model");
auto output = model(input); // throws std::runtime_error("No operation named \"" + op_name + "\" exists")

Any help?

Thanks in advance

@serizba
Owner

serizba commented Oct 29, 2020

Hi @karloballa,

I have tried both of your ways of creating the model, and both work fine for me. Maybe your models are being created with different operation names.

Can you please show me the output of the command saved_model_cli show --dir /path/to/model --all for both ways?

Also, you can use the function model::get_operations() to list the available operations. Can you also show the output of running the following code for both ways of creating the model?

for (auto s: model.get_operations())
    std::cout << s << std::endl;
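
For reference, here is a minimal self-contained version of that check (a sketch, assuming the SavedModel directory is called model, as in your snippet):

#include <iostream>
#include "cppflow/cppflow.h"

int main() {
    // Load the SavedModel and print every operation it exposes
    cppflow::model model("model");
    for (auto s : model.get_operations())
        std::cout << s << std::endl;
}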

@karloballa
Author

karloballa commented Oct 29, 2020

Thank you for the quick answer. I can see some minor differences, but I don't know what they mean.
Here are the logs.

model.get_operations(), Functional:

dense/kernel
dense/kernel/Read/ReadVariableOp
dense/bias
dense/bias/Read/ReadVariableOp
dense_1/kernel
dense_1/kernel/Read/ReadVariableOp
dense_1/bias
dense_1/bias/Read/ReadVariableOp
NoOp
Const
serving_default_input_1
StatefulPartitionedCall
saver_filename
StatefulPartitionedCall_1
StatefulPartitionedCall_2

model.get_operations(), Sequential:

dense_2/kernel
dense_2/kernel/Read/ReadVariableOp
dense_2/bias
dense_2/bias/Read/ReadVariableOp
dense_3/kernel
dense_3/kernel/Read/ReadVariableOp
dense_3/bias
dense_3/bias/Read/ReadVariableOp
NoOp
Const
serving_default_input_2
StatefulPartitionedCall
saver_filename
StatefulPartitionedCall_1
StatefulPartitionedCall_2

@karloballa
Author

saved_model_cli, Functional:

Defined Functions:
  Function Name: '__call__'
    Option #1
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 5), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          input_1: TensorSpec(shape=(None, 5), dtype=tf.float32, name='input_1')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #3
      Callable with:
        Argument #1
          input_1: TensorSpec(shape=(None, 5), dtype=tf.float32, name='input_1')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #4
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 5), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None

  Function Name: '_default_save_signature'
    Option #1
      Callable with:
        Argument #1
          input_1: TensorSpec(shape=(None, 5), dtype=tf.float32, name='input_1')

  Function Name: 'call_and_return_all_conditional_losses'
    Option #1
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 5), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          input_1: TensorSpec(shape=(None, 5), dtype=tf.float32, name='input_1')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #3
      Callable with:
        Argument #1
          input_1: TensorSpec(shape=(None, 5), dtype=tf.float32, name='input_1')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #4
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 5), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None

@karloballa
Author

saved_model_cli, Sequential:

Defined Functions:
  Function Name: '__call__'
    Option #1
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 5), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          input_2: TensorSpec(shape=(None, 5), dtype=tf.float32, name='input_2')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #3
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 5), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #4
      Callable with:
        Argument #1
          input_2: TensorSpec(shape=(None, 5), dtype=tf.float32, name='input_2')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None

  Function Name: '_default_save_signature'
    Option #1
      Callable with:
        Argument #1
          input_2: TensorSpec(shape=(None, 5), dtype=tf.float32, name='input_2')

  Function Name: 'call_and_return_all_conditional_losses'
    Option #1
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 5), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #2
      Callable with:
        Argument #1
          input_2: TensorSpec(shape=(None, 5), dtype=tf.float32, name='input_2')
        Argument #2
          DType: bool
          Value: True
        Argument #3
          DType: NoneType
          Value: None
    Option #3
      Callable with:
        Argument #1
          input_2: TensorSpec(shape=(None, 5), dtype=tf.float32, name='input_2')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None
    Option #4
      Callable with:
        Argument #1
          inputs: TensorSpec(shape=(None, 5), dtype=tf.float32, name='inputs')
        Argument #2
          DType: bool
          Value: False
        Argument #3
          DType: NoneType
          Value: None

@serizba
Owner

serizba commented Oct 29, 2020

Hi @karloballa ,

I think the problem is that you are creating the two models one after the other in the same Python script. That way, when you create the second model, the name serving_default_input_1 is already taken, so Keras assigns the name serving_default_input_2 instead.

If that's the problem, just create each model from its own script; a fresh process (or a call to tf.keras.backend.clear_session() in between, if I remember the API correctly) resets Keras's automatic naming.
Hope it helps!

@karloballa
Author

@serizba

You were right. I made a new script and now it works for both APIs.
But that is not the end of the story.

The thing is, I'm not using Python for training but Keras.NET, and the problem I had originally came from the .NET side.

I don't know about Python, but in .NET there are two ways to define an input layer. For example:

// 1) Add an explicit Input layer first
smodel.Add(new Input(new Shape(inHeight, inWidth, inChannels)));
smodel.Add(new Conv2D(8, new Tuple<int, int>(3, 3), padding: "same", activation: "relu"));

// 2) Pass input_shape to the first layer instead
smodel.Add(new Conv2D(8, new Tuple<int, int>(3, 3), padding: "same", activation: "relu", input_shape: new Shape(inHeight, inWidth, inChannels)));

Now, cppflow works fine with a model created as in 1), but gives me a runtime error with a model created as in 2).

I ran get_operations() for both models and found one difference. I don't know if it matters, but as I said, the first model works and the second doesn't.

conv2d/kernel
conv2d/kernel/Read/ReadVariableOp
conv2d/bias
conv2d/bias/Read/ReadVariableOp
conv2d_1/kernel
conv2d_1/kernel/Read/ReadVariableOp
conv2d_1/bias
conv2d_1/bias/Read/ReadVariableOp
conv2d_2/kernel
conv2d_2/kernel/Read/ReadVariableOp
conv2d_2/bias
conv2d_2/bias/Read/ReadVariableOp
dense/kernel
dense/kernel/Read/ReadVariableOp
dense/bias
dense/bias/Read/ReadVariableOp
NoOp
Const
serving_default_input_1 <<<
StatefulPartitionedCall
saver_filename
StatefulPartitionedCall_1
StatefulPartitionedCall_2

conv2d/kernel
conv2d/kernel/Read/ReadVariableOp
conv2d/bias
conv2d/bias/Read/ReadVariableOp
conv2d_1/kernel
conv2d_1/kernel/Read/ReadVariableOp
conv2d_1/bias
conv2d_1/bias/Read/ReadVariableOp
conv2d_2/kernel
conv2d_2/kernel/Read/ReadVariableOp
conv2d_2/bias
conv2d_2/bias/Read/ReadVariableOp
dense/kernel
dense/kernel/Read/ReadVariableOp
dense/bias
dense/bias/Read/ReadVariableOp
NoOp
Const
serving_default_conv2d_input <<<
StatefulPartitionedCall
saver_filename
StatefulPartitionedCall_1
StatefulPartitionedCall_2

@serizba
Owner

serizba commented Oct 29, 2020

@karloballa,

As you correctly pointed out, that difference in the name is what makes the call fail. When you just use model(input), the model looks for an input called serving_default_input_1, which does not exist in your second case.
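
For context, the single-tensor call model(input) is roughly shorthand for the following (a sketch based on the cppflow source; exact names may differ between versions):

// What model(input) does under the hood, approximately:
auto output = model({{"serving_default_input_1:0", input}},
                    {"StatefulPartitionedCall:0"})[0];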

To fix it, just specify the names of the input and the output that you want to use, as in the multi-input/output example:

auto output = model({{"serving_default_conv2d_input:0", input}},{"StatefulPartitionedCall:0"});
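
Putting it together, a minimal sketch for your case (the input shape {1, 28, 28, 1} is only an assumed example; it must match the Shape you passed to the input layer):

#include <iostream>
#include "cppflow/cppflow.h"

int main() {
    // Shape {batch, height, width, channels}; 28x28x1 is an assumption
    auto input = cppflow::fill({1, 28, 28, 1}, 1.0f);

    cppflow::model model("model");

    // Address the serving signature by the exact operation names
    // reported by model.get_operations()
    auto output = model({{"serving_default_conv2d_input:0", input}},
                        {"StatefulPartitionedCall:0"});

    // With explicit names, the call returns a vector of output tensors
    std::cout << output[0] << std::endl;
}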

@karloballa
Author

@serizba

It works now. Thank you!
I think we could consider this "issue" closed.
