Multiple output model Prediction #5331
Hello,
Hi fouadb66, have you solved your question? I am not sure how you trained your models. Since these 3 models share the same layers, I think training the models separately will not lead to a global optimum.
This issue has been handled in #2397. The problem is not in output prediction, but occurs when doing inference in a different thread than the one where you load the model. Hope it helps.
For reference, since there are two components to the solution.

```python
from keras.models import Model

# in the main thread
my_model = Model(inputs=[inputs], outputs=[outputs])
my_model._make_predict_function()
# my_model.compile(...) if you're doing training as well

# in another thread
my_model.predict(inputs)  # should work now
```
So you'll need to manually carry around a copy of the TensorFlow graph that Keras creates, and use it as context when you do inference in a thread:

```python
import tensorflow as tf
from keras.models import Model

# in the main thread
my_model = Model(inputs=[inputs], outputs=[outputs])
my_model._make_predict_function()
# my_model.compile(...) if you're doing training as well
graph = tf.get_default_graph()

# in another thread
with graph.as_default():
    my_model.predict(inputs)  # should work now
```

It's a pain, and really should be handled by Keras, but at least it works. So you can do something like serve your Keras model using an asynchronous webserver to handle requests, and a threadpool to do the actual inference.
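The threadpool-serving pattern mentioned above can be sketched as follows. This is a minimal, runnable illustration: `run_inference` is a hypothetical stand-in for `my_model.predict` (so the sketch runs without TensorFlow installed); in real code its body would be the `with graph.as_default():` call shown in the snippet above.

```python
from concurrent.futures import ThreadPoolExecutor

def run_inference(batch):
    # Stand-in for the real call, which would be:
    #   with graph.as_default():
    #       return my_model.predict(batch)
    return [x * 2 for x in batch]  # placeholder computation

# One shared threadpool handles all inference requests.
executor = ThreadPoolExecutor(max_workers=4)

# Each incoming request submits its batch and waits on a future,
# so the webserver's event loop is never blocked by inference.
futures = [executor.submit(run_inference, [i, i + 1]) for i in range(3)]
results = [f.result() for f in futures]
print(results)  # [[0, 2], [2, 4], [4, 6]]
```

The key point is that every worker thread enters the same saved graph context before calling `predict`, which is exactly what avoids the "not an element of this graph" error.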
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.
I designed a multi-output model with 3 losses. Here is the output of my model:

```python
output1 = Dense(outoutClaas, activation='softmax')(BT1)
output2 = Dense(outclassL12, activation='softmax')(BT1)
output3 = Dense(outclassL3, activation='softmax')(BT1)
model = Model(input=inputs, output=[output1, output2, output3])
```
The model trains well, but I am getting an error during prediction:

```python
model.predict(input)
```

```
ValueError: Tensor Tensor("Softmax:0", shape=(?, 7201), dtype=float32) is not an element of this graph.
```
7201 is the number of classes of the first output.
My question is: how can I get the predictions from output1, output2, and output3 separately?
Thank you for your help...
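On the separate-outputs part of the question: for a multi-output Keras model, `model.predict` returns a list with one array per output head, so the three predictions can simply be unpacked. A minimal sketch, using plain NumPy arrays as a stand-in for the value `model.predict(x)` would return (the shapes here are illustrative, not taken from the issue):

```python
import numpy as np

# Stand-in for: predictions = model.predict(x)
# A 3-output model returns a list of 3 arrays, one per head.
predictions = [
    np.zeros((5, 7201)),  # output1: 5 samples x 7201 classes
    np.zeros((5, 12)),    # output2 (illustrative class count)
    np.zeros((5, 3)),     # output3 (illustrative class count)
]

# Unpack the list to inspect each head separately.
pred1, pred2, pred3 = predictions
print(pred1.shape)  # (5, 7201)

# Per-sample predicted class for the first head:
classes1 = pred1.argmax(axis=-1)
print(classes1.shape)  # (5,)
```

The graph error itself is orthogonal to this unpacking; it comes from threading, as discussed in the comments above.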