ValueError: Graph disconnected: cannot obtain value for tensor Tensor…The following previous layers were accessed without issue: #42017
Was able to reproduce the issue with TF v2.3 and TF-nightly. Please find the gist of it here. Thanks!
Any updates?
@gowthamkpr No, this issue is very different from that one. These are two completely unrelated issues.

```python
embedding_model = model1()
model2 = model2(embedding_model)
print(model2.summary())

input_data2 = np.zeros((1, 10, 100, 100, 3))
result = model2.predict(input_data2)
print(result)  # works
```

Do you see it? When I use the raw model2 to predict, I just input one tensor and there is no problem. But I must input two tensors when I want to get the output of intermediate layers of model1 and model2:

```python
output1 = model1_output_layer.get_output_at(0)
output2 = model2.get_layer('dense1').get_output_at(0)
output_tensors = [output1, output2]
submodel = tf.keras.Model([model1_input, model2_input], output_tensors)  # input two tensors
input_data1 = np.zeros((1, 100, 100, 3))
input_data2 = np.zeros((1, 10, 100, 100, 3))
result = submodel.predict([input_data1, input_data2])
```

model1 is part of model2, so we shouldn't need an extra tensor.
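For illustration, here is a self-contained stand-in for the setup above. The layer names, shapes, and the `TimeDistributed` wrapper are hypothetical guesses, since the original `model1`/`model2` definitions are not shown; only the input shapes follow the comment's `np.zeros` arrays.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-ins for model1/model2; shapes follow the input_data arrays.
model1_input = tf.keras.Input((100, 100, 3))
x = tf.keras.layers.Conv2D(4, 3, name='conv1')(model1_input)
pooled = tf.keras.layers.GlobalAveragePooling2D(name='pool')(x)
model1 = tf.keras.Model(model1_input, pooled)

model2_input = tf.keras.Input((10, 100, 100, 3))
frames = tf.keras.layers.TimeDistributed(model1)(model2_input)  # model1 nested in model2
dense = tf.keras.layers.Dense(8, name='dense1')(frames)
model2 = tf.keras.Model(model2_input, dense)

# The extraction model needs BOTH inputs, which is the complaint above:
output1 = model1.get_layer('pool').output    # tensor from model1's own graph
output2 = model2.get_layer('dense1').output  # tensor from model2's graph
submodel = tf.keras.Model([model1_input, model2_input], [output1, output2])

r1, r2 = submodel.predict(
    [np.zeros((1, 100, 100, 3)), np.zeros((1, 10, 100, 100, 3))], verbose=0)
```

Note that `output1` is reachable only from `model1_input`, not from `model2_input`, which is why Keras forces the extra input here.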
I have encountered this same issue as well. In general, it happens any time you use a functional sub-model as a layer in your model and then want to use a tensor from that sub-model in the outer model (e.g., exposing it as an extra output to debug or analyze a model). Example:

```python
import tensorflow as tf

inp1 = tf.keras.Input(1)
h1 = tf.keras.layers.Dense(1)(inp1)
out1 = tf.keras.layers.Dense(1)(h1)
model1 = tf.keras.Model(inputs=inp1, outputs=out1)

inp2 = tf.keras.Input(1)
h2 = model1(inp2)
out2 = tf.keras.layers.Dense(1)(h2)
model2 = tf.keras.Model(inputs=inp2, outputs=[out2, model1.layers[1].output])
```

The last line raises the "Graph disconnected" ValueError. The issue does not occur if the sub-model is at the same "level" as the outer model, or if you explicitly make the required tensor an output of the sub-model. It's not clear to me if there are any other ways around this, but this seems to be the intent of the functional model API.
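The second escape hatch mentioned above (explicitly making the required tensor an output of the sub-model) can be sketched against the same toy example; this rearrangement is my own illustration, not the commenter's code.

```python
import numpy as np
import tensorflow as tf

# Fix sketch: expose the hidden tensor as an explicit output of the
# sub-model instead of reaching into its layer graph from outside.
inp1 = tf.keras.Input(1)
h1 = tf.keras.layers.Dense(1)(inp1)
out1 = tf.keras.layers.Dense(1)(h1)
model1 = tf.keras.Model(inputs=inp1, outputs=[out1, h1])  # h1 is now an output

inp2 = tf.keras.Input(1)
h2, hidden = model1(inp2)  # both tensors now belong to model2's graph
out2 = tf.keras.layers.Dense(1)(h2)
model2 = tf.keras.Model(inputs=inp2, outputs=[out2, hidden])  # no disconnection

preds = model2.predict(np.zeros((1, 1)), verbose=0)
```

Because `hidden` comes from model1's call on `inp2` (rather than from model1's original build graph), it is reachable from model2's input and the "Graph disconnected" error does not occur.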
I am also experiencing this issue. I wish to perform feature extraction on my siamese network model (which contains two sub-models) and I am facing this issue.
As a workaround, you can add the embedding layers to the model's outputs, where A and B are the inputs to the sub-networks, and embeddingA and embeddingB are the outputs (the embeddings) of these sub-networks. Not beautiful, but it works in a custom train loop.
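A minimal sketch of this workaround, assuming a toy siamese setup (the layer sizes and the `Subtract` head are my assumptions; `A`, `B`, `embeddingA`, and `embeddingB` follow the comment's naming):

```python
import numpy as np
import tensorflow as tf

def make_embedder():
    # Hypothetical shared sub-network producing a 2-d embedding.
    inp = tf.keras.Input((4,))
    emb = tf.keras.layers.Dense(2, name="embedding")(inp)
    return tf.keras.Model(inp, emb)

embedder = make_embedder()  # one sub-network shared by both branches
A = tf.keras.Input((4,))
B = tf.keras.Input((4,))
embeddingA = embedder(A)
embeddingB = embedder(B)
diff = tf.keras.layers.Subtract()([embeddingA, embeddingB])
score = tf.keras.layers.Dense(1)(diff)

# Workaround: append the embeddings to the outputs so they stay reachable.
model = tf.keras.Model(inputs=[A, B], outputs=[score, embeddingA, embeddingB])

out, embA, embB = model.predict([np.zeros((1, 4)), np.zeros((1, 4))], verbose=0)
```

The extra outputs cost nothing at training time if you simply ignore them in a custom loop, and they make the embeddings available from `predict` without building a second extraction model.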
Can you take a look at the above workaround proposed by @Yannik1337 and let us know if it helps? Thanks!
This issue has been automatically marked as stale because it has no recent activity. It will be closed if no further activity occurs. Thank you.
Closing as stale. Please reopen if you'd like to work on this further.
I want to obtain the output of intermediate sub-model layers with tf2.keras. Here is a model composed of two sub-modules:
output:
And then, I want to get the output of intermediate layers of model1 and model2:
Running in TF 2.3, the error I am getting is:
But the following code works:
However, that is not what I want. This is strange: model1 is part of model2, so why do we need to input an extra tensor `input_data1`? Sometimes it is hard to get an extra tensor, especially for complex models. What should I do? Do we need a new API to support this functionality?