Issues in Keras model loading in Tensorflow Serving #2310
I will think about it. What you can do to fix it right now is:

K._LEARNING_PHASE = tf.constant(0)  # test mode
(define your model here)
(export your model here)

In the future we might expose a built-in interface for exporting. Let me know how it goes. On 13 April 2016 at 17:07, Viksit Gaur notifications@github.com wrote:
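A minimal sketch of this suggestion, assuming the Keras 1.x TensorFlow backend. Note that `_LEARNING_PHASE` is a private attribute, so this is version-dependent and may break across releases:

```python
import tensorflow as tf
from keras import backend as K

# Pin the backend learning phase to a constant 0 (test mode) BEFORE any
# layers are built, so that Dropout/BatchNorm graphs are constructed for
# inference rather than gated on a learning-phase placeholder.
# NOTE: private attribute; version-dependent, shown here as a sketch only.
K._LEARNING_PHASE = tf.constant(0)

# (define your model here)
# (export your model here)
```

The key point is ordering: layers read the learning phase when they are built, so setting the constant after model construction has no effect.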
@fchollet ah - so, I've tried doing that and no luck. Steps:
Does an exported model contain the learning phase value within it? I'm still seeing the same error.
The method I described does work. Just make sure that all ops in your model are created after the learning phase is set. On 13 April 2016 at 19:10, Viksit Gaur notifications@github.com wrote:
Alright, let me retry with a fresh environment. One thing to note - I train my model and export the weights to json and .h5 files on disk, then load and compile them again in the export script, which sits in a different location. I'm hoping that won't cause any issues.
As long as you set the learning phase to a constant before you build the fresh model (not just before compiling it - before everything), it will be fine. The learning phase does not affect weight loading.
Gotcha. Is there a way to check within a model that the learning phase constant is being set correctly?
No luck. Retried the entire model creation in one go with K._LEARNING_PHASE set and exporting via the Exporter. Then loaded it via the C++ interface - and the same error message.
Somewhere, this constant is not being picked up. I'm not sure where, in my case.
This issue should be reproducible even locally: if I create a Keras model, then create a K.function() from its input and output and execute it directly, we bypass the internal _make_predict_function() (which adds K.learning_phase()), so we should see the same issue. Here's the code to reproduce. What am I missing?
Weird stuff:
K._LEARNING_PHASE = "bob"  # try with 1
... Still working:
With the same error:
When I try to compile:
It's working if I modify it directly:
pred = K.function([model.input], [model.output])
pred([X_test])
[array([[ 2.50758745e-07, 3.05174382e-07, 4.29301508e-05, ...,
9.99435365e-01, 1.76608160e-06, 1.35888593e-04],
[ 2.79599544e-06, 4.31858498e-05, 9.99716461e-01, ...,
1.24848683e-07, 2.41637827e-05, 3.72424114e-09],
[ 2.41596881e-05, 9.93469715e-01, 1.15284347e-03, ...,
3.07259732e-03, 3.31207732e-04, 1.67881037e-04],
...,
[ 3.63393269e-07, 2.36019332e-07, 3.89473325e-06, ...,
2.09421967e-04, 1.90011575e-04, 4.02843840e-02],
[ 3.75664458e-05, 1.40146096e-06, 6.29712886e-08, ...,
1.97701837e-07, 1.85844809e-04, 4.84997031e-07],
[ 1.27407093e-05, 1.05844613e-07, 2.26282689e-04, ...,
7.78336506e-09, 1.69879925e-07, 6.13535818e-08]], dtype=float32)]
Import ...
from keras.layers.core import K
...
This is Python import shenanigans, I get it now. This is easily solved. On 14 April 2016 at 06:59, Thomas Boquet notifications@github.com wrote:
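The import pitfall here generalizes beyond Keras: rebinding a local name never changes an attribute inside the imported module, so a patch must target the same module object the layers actually read from. A minimal pure-Python illustration (no TensorFlow needed; `fake_backend` is a made-up stand-in for the Keras backend module):

```python
import types

# Stand-in for the Keras backend module (hypothetical, for illustration).
backend = types.ModuleType("fake_backend")
backend._LEARNING_PHASE = "placeholder"

# Rebinding a local alias does NOT touch the module attribute:
phase = backend._LEARNING_PHASE
phase = 0
print(backend._LEARNING_PHASE)  # still "placeholder"

# Assigning through the module object itself is what actually patches it:
backend._LEARNING_PHASE = 0
print(backend._LEARNING_PHASE)  # 0
```

This is why importing K from one path and patching it under another can silently do nothing: only an assignment on the shared module object is visible to code that reads the attribute later.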
I added ...
Thanks @fchollet - will update and test it out. Hopefully I can make this a repeatable process as well. This Python import problem was such a headache :)
Running on Keras 2.0.5, TensorFlow 1.2.1, and tensorflow-serving-api 1.0.0.
It is not working for me at all.
This worked for me.
This worked wonders for my export script for h5 -> pb. THANK YOU!
Reinstalling tensorflow==1.11.0, keras==2.1.2, and tensorflow-serving-api==1.0.0 works for me.
Here's how I'm approaching this problem:
My problem is in (2).
Here's how I export the model.
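The export code itself is not preserved in this thread. A hypothetical sketch, assuming the legacy `tensorflow.contrib.session_bundle` Exporter API that TF Serving used at the time (exact names and signatures varied by version, and `model` is assumed to be an already-built Keras model):

```python
import tensorflow as tf
from tensorflow.contrib.session_bundle import exporter  # legacy TF Serving API
from keras import backend as K

# Grab the live TensorFlow session that the Keras backend is using,
# so the exported graph includes the trained variable values.
sess = K.get_session()

saver = tf.train.Saver(sharded=True)

# The classification signature expects a SINGLE input tensor -
# this is exactly where the K.learning_phase() placeholder clashes.
signature = exporter.classification_signature(
    input_tensor=model.input,
    scores_tensor=model.output)

model_exporter = exporter.Exporter(saver)
model_exporter.init(sess.graph.as_graph_def(),
                    default_graph_signature=signature)
model_exporter.export("/tmp/export", tf.constant(1), sess)  # version 1
```

This is a sketch of the session-exporter path the comment describes, not a verified script; the `/tmp/export` path and version constant are placeholders.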
So far so good. We've managed to export the Keras model session via the session exporter.
However, the issue is the way TF Serving's exporter module expects a classification signature.
In a non-Keras model, this works fine. But in a Keras model using the TF backend, unless your input is of the form
with the second item being the K.learning_phase() placeholder, you can't invoke the prediction function.
You get this error:
Since the classification API only supports one input tensor and not a list, I can't see a way to export this model in a way that can be read by the Serving infrastructure.
Is there a workaround?
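For later Keras versions, the public-API equivalent of the private `K._LEARNING_PHASE` trick discussed earlier in the thread is `K.set_learning_phase(0)`, called before any layer is constructed. A sketch, assuming the Keras TensorFlow backend:

```python
from keras import backend as K

# Fix the learning phase to 0 (test mode) via the public backend API
# BEFORE building the model; K.learning_phase() then resolves to a
# constant instead of a placeholder, so the exported graph needs no
# extra feed at serving time and a single-input signature suffices.
K.set_learning_phase(0)

# (build the model here, then export as usual)
```

Because the learning phase does not affect weight loading, it is safe to rebuild the architecture this way and then load previously trained weights before exporting.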