Output tensors to a Model must be Keras tensors. Found: Tensor (#6263)
Comments
If you directly compute a tensor, it won't work the way you want. Any tensor you feed into a layer should come from another layer. Make sure absolutely everything you do is wrapped in a Lambda and that particular error should go away. Also, are you on Keras 1 or 2? It helps to know the version. Cheers
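A minimal sketch of that advice, written with the modern tf.keras API (the thread predates it, but the idea is unchanged): wrap any directly computed op in a Lambda layer so its output is a Keras tensor that `Model()` will accept. `tf.abs` here is just a placeholder for whatever raw op you need:

```python
import tensorflow as tf
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense, Lambda

inp = Input(shape=(4,))
h = Dense(8, activation='relu')(inp)

# In older Keras, computing tf.abs(h) directly yields a plain backend
# tensor, and Model() rejects it. Wrapping the op in a Lambda layer
# makes the result a tracked Keras tensor.
out = Lambda(lambda t: tf.abs(t))(h)

model = Model(inputs=inp, outputs=out)
```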
@bstriner Thanks a lot for the reply. What you said about flip_gradient is right; the problem was fixed using `f_ = Lambda(lambda x: flip_gradient(x, 1))(merg_x)`. However, regarding the statement that "any tensor you are feeding into a layer should be coming from another layer": suppose y1 is a 1-dim numpy array of ground truth, and I want to use …
@amaall The Keras standard is for y1 to be a 2-dim array of shape (n, 1) so all of the dimension checking will work correctly. Anything you are passing into another layer needs to be a Keras tensor so it has a known shape. Keras tensors are Theano/TF tensors with additional information included; you get them from Input layers and from the outputs of other layers. If you're just using the tensor in a loss calculation or something similar, you don't have to wrap it in Lambdas: use a different loss function for whatever output needs it. That is the most straightforward way to do things. The alternative you sometimes use in some situations is to have y1 be an (n, 1) input. The first way, you pass y1 as a target when you train; the second way, you pass y1 as an input when you train. If you're using y1 for something else as well, the second way is sometimes easier. Cheers
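The two ways can be sketched as follows (a minimal illustration with the modern tf.keras API; the toy shapes and layer choices are mine, not from the thread):

```python
import numpy as np
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Concatenate, Dense

n, n_feat = 16, 4
x = np.random.rand(n, n_feat).astype('float32')
y1 = np.random.rand(n, 1).astype('float32')   # shape (n, 1), not (n,)

# Way 1: y1 is a training target.
inp = Input(shape=(n_feat,))
out = Dense(1)(inp)
m1 = Model(inp, out)
m1.compile('rmsprop', loss='mse')
m1.fit(x, y1, epochs=1, verbose=0)            # y1 passed as a target

# Way 2: y1 is a second (n, 1) input, available inside the graph.
y_in = Input(shape=(1,))
h = Concatenate()([Dense(1)(inp), y_in])
m2 = Model([inp, y_in], Dense(1)(h))
pred = m2.predict([x, y1], verbose=0)         # y1 passed as an input
```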
@bstriner Many thanks for the explanation. I followed the solution you provided, as below:

```python
def my_loss(yb, pred_out):
```

And I got the following error: `TypeError: Output tensors to a Model must be Keras tensors. Found: <function myModel2..my_loss at 0x7f41e2291bf8>`. Could there be a way around it? I also tried `mse = Lambda(lambda yb, pred_out: K.mean(yb - pred_out))` and got the same error.
Dude, fix your formatting. The loss is a function or a lambda, not one wrapped in the other. You give Keras a list of outputs, targets, and losses, and it adds the losses together; you don't need to total things yourself. You pass your custom losses into compile, not the Model constructor. Simple code for a custom loss is below. Just add more losses if you want and Keras will add them together. If you have multiple outputs, you can use a dictionary or an array for the losses. In order for the loss dictionary to work, the output layers need matching names. Subtracting two values is not the MSE (mean_squared_error); please call it something else or someone is going to get confused. The code below uses a custom loss for pred_out and binary_crossentropy for dom_out. The two losses are added together to make the total loss automatically by Keras. If you want to adjust how they are added, use loss_weights.

```python
# pred_out and dom_out are the outputs of two layers
# input1 and input2 are keras.layers.Input objects
model = Model(input=[input1, input2], output=[pred_out, dom_out])

# Something like this
def my_loss(ytrue, ypred):
    # ytrue is your target Y during training. ypred is the output, in this case pred_out
    return ytrue - ypred

model.compile('RMSprop', loss={'pred_out': my_loss, 'dom_out': 'binary_crossentropy'})

# or something like this
model.compile('RMSprop', loss=[lambda ytrue, ypred: ytrue - ypred, 'binary_crossentropy'])
```

Cheers
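The truncated remark about adjusting how the losses are added presumably refers to compile's `loss_weights` argument, which scales each loss before Keras sums them. A self-contained sketch with tf.keras (the layer sizes are made up for illustration):

```python
from tensorflow.keras import Input, Model
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Dense

input1 = Input(shape=(4,))
# The dict keys in compile() must match these layer names.
pred_out = Dense(1, name='pred_out')(input1)
dom_out = Dense(2, activation='softmax', name='dom_out')(input1)
model = Model(input1, [pred_out, dom_out])

def my_loss(ytrue, ypred):
    return K.mean(K.square(ytrue - ypred))

# Total loss = 1.0 * my_loss + 0.5 * binary_crossentropy.
model.compile('rmsprop',
              loss={'pred_out': my_loss, 'dom_out': 'binary_crossentropy'},
              loss_weights={'pred_out': 1.0, 'dom_out': 0.5})
```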
@bstriner You are a great tutor! I learn a lot from you. Thanks |
I have used the same data setup in a feed function in TensorFlow, and my code works there. I wonder why the code below gives me the error:
I am trying to convolve two tensors, and the output of the convolution will be the output of the model, but it gives the error that the output tensor should be a Keras tensor.

Stage 1
I have the same problem; it appears when I use concat.

```python
sentence_input = Input(shape=(1000,), dtype='int32')
total_vec = np.load('total_vec.npy')
sentEncoder = Model(sentence_input, document_vec)
```

What should I do? Thank you!
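The snippet above is incomplete, but the usual fix for concat-related versions of this error is to build the concatenation with the Concatenate layer rather than the backend function, so the result stays a Keras tensor. A sketch with tf.keras and made-up shapes:

```python
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Concatenate, Dense

a = Input(shape=(10,))
b = Input(shape=(10,))

# In older Keras, K.concatenate([a, b]) returns a raw backend tensor and
# triggers "Output tensors to a Model must be Keras tensors";
# the Concatenate layer does not.
c = Concatenate(axis=-1)([a, b])

model = Model([a, b], Dense(1)(c))
```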
Can you guys help me out? I'm trying to deploy a Keras neural network on a Flask web service, but I get this error when I click on the predict button.

```python
import base64

app = Flask(__name__)

def get_model():
    ...

def preprocess_image(image, target_size):
    ...

print(" * Loading keras model...")

@app.route("/predict", methods=["POST", "GET"])
```

Predictions: dog: , cat:

Please help, anyone.
Hi, I have been struggling to address this problem even though similar problems have been solved, and I have had little success handling it. I will appreciate any ideas on my particular problem. Thank you.

```python
x = Dense(n_neurons, input_dim=n_feat, W_regularizer=l2(0.001), activation='relu')(input1)
y = Dense(n_neurons, input_dim=n_feat, activation='relu')(input2)
#concat_x = K.concat([x, input2], 1)  # + 0.01 * tf.nn.l2_loss(W0)
#x2 = Dense(n_neurons)(input2)
#my_lamb = Lambda(my_input, output_shape=my_output)(x2)
merg_x = merge([x, y], mode='concat', concat_axis=-1)
pred_out = Dense(1, activation='relu')(merg_x)
f_ = flip_gradient(merg_x, 1)  # f_ is a raw backend tensor here, not a Keras tensor
x = Dense(n_feat, activation='relu')(f_)
x = Dense(n_feat, activation='relu')(x)
dom_out = Dense(2, activation='softmax')(x)
print(K.shape(pred_out))
#mse2 = Dense(1, activation='relu')(mse)
```
TypeError: Output tensors to a Model must be Keras tensors. Found: Tensor("Softmax:0", shape=(?, 2), dtype=float32)
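For reference, here is a sketch of the fixed graph in tf.keras, with flip_gradient wrapped in a Lambda as suggested earlier in the thread. The flip_gradient body below is a hypothetical stand-in (identity on the forward pass, negated gradient on the backward pass, via the stop_gradient trick), since the thread's own implementation isn't shown:

```python
import tensorflow as tf
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Concatenate, Dense, Lambda

n_feat, n_neurons = 4, 8

input1 = Input(shape=(n_feat,))
input2 = Input(shape=(n_feat,))
x = Dense(n_neurons, activation='relu')(input1)
y = Dense(n_neurons, activation='relu')(input2)
merg_x = Concatenate(axis=-1)([x, y])

pred_out = Dense(1, activation='relu', name='pred_out')(merg_x)

# The fix: wrap the gradient-reversal op in a Lambda so its output is a
# Keras tensor. Forward: -t + 2t = t (identity); backward: only the -t
# term has a gradient, so gradients are negated.
f_ = Lambda(lambda t: -t + tf.stop_gradient(2.0 * t))(merg_x)
h = Dense(n_feat, activation='relu')(f_)
h = Dense(n_feat, activation='relu')(h)
dom_out = Dense(2, activation='softmax', name='dom_out')(h)

model = Model([input1, input2], [pred_out, dom_out])
```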