layer.output raises AttributeError because inbound nodes lost after call to activation function #34834
Comments
I am not using the Sequential model. I discovered the problem while subclassing a layer in which I needed to call tf.stop_gradient. [Note: subclassing a layer does not require the input_shape parameter. In my example I'm calling the model on a tf.keras.Input, which provides the appropriate input shape. Just to prove this to myself I've amended the code in my original post; it still fails.]

TL;DR: The inbound-nodes problem happens whenever you subclass (either Model or Layer) and use functions instead of layer classes for some of the steps in the call method. You can work around the problem by wrapping any TensorFlow function inside a tf.keras.layers.Lambda.

Detailed comments:
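For reference, the Lambda workaround described above can be sketched like this. This is a minimal illustration with made-up layer sizes and names (StopGradLayer is hypothetical), not the code from the original project:

```python
import tensorflow as tf


class StopGradLayer(tf.keras.layers.Layer):
    """Hypothetical subclassed layer that needs tf.stop_gradient in call()."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.dense = tf.keras.layers.Dense(4)
        # Wrapping the raw TF function in a Lambda layer keeps Keras's
        # inbound-node bookkeeping intact, so layer.output still works.
        self.stop_grad = tf.keras.layers.Lambda(tf.stop_gradient)

    def call(self, x):
        return self.stop_grad(self.dense(x))


inp = tf.keras.Input(shape=(3,))
model = tf.keras.Model(inp, StopGradLayer()(inp))
print(model.output.shape)  # (None, 4)

# stop_gradient really does block gradients through the wrapped op:
with tf.GradientTape() as tape:
    y = model(tf.ones((2, 3)))
grads = tape.gradient(y, model.trainable_variables)
print(all(g is None for g in grads))  # True
```

Because the Lambda is a proper Layer, Keras creates a node for it when it is called, and the surrounding model keeps a connected graph.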
Addendum: see my original post for code to reproduce the problem. Here is an example using the functional interface that works:
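(The functional-interface example itself did not survive in this copy of the thread. A minimal sketch of what such a working version looks like, with the activation applied as a ReLU layer rather than a bare function and layer sizes mirroring the MNIST repro code, might be:)

```python
import tensorflow as tf

inp = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Conv2D(32, 3)(inp)
x = tf.keras.layers.ReLU()(x)   # activation as a layer, not a bare function
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(128, activation='relu')(x)
out = tf.keras.layers.Dense(10, activation='softmax')(x)
model = tf.keras.Model(inp, out)

# Every layer has inbound nodes, so .output works for all of them:
for layer in model.layers[1:]:
    print(layer.name, layer.output.shape)
```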
The issue replicates with the given code; kindly find the Colab gist. Thanks!
Yes, I know that the ReLU layer works (I showed that in my code to reproduce the problem). There are two reasons why I don't think this should be closed.

First, tf.keras.activations.get('relu') returns the function, not the layer class. If only the layer class is supported, then it should be the one returned by get. I was using the get method so that I could change the activation function from a configuration file. The get method does not accept "ReLU", which would be another way to fix this part of the problem.

Second, as I said in my second post, the problem seems to occur whenever you use any TensorFlow function in a layer subclass or model subclass; it is not limited to relu. For some TensorFlow functions there is no Keras layer available, e.g. tf.stop_gradient(). I know there is a workaround (I explained how to do that in my second post), but this problem ought to be fixed, not just worked around. The error message is very obscure, and this was difficult to debug and reproduce. At the very least, the documentation and tutorials discussing the subclass approach should indicate that there are issues if you use TensorFlow functions in the pipeline, or the function should throw an exception when this is an invalid use of it.
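To illustrate the first point, a quick check (behavior as observed on TF 2.x; a small illustrative snippet, not from the original thread):

```python
import tensorflow as tf

# get() resolves the string to the plain activation *function*,
# not to the tf.keras.layers.ReLU layer class.
relu_fn = tf.keras.activations.get('relu')

print(callable(relu_fn))                           # True
print(isinstance(relu_fn, tf.keras.layers.Layer))  # False

# The function itself behaves as expected on eager tensors:
print(float(relu_fn(tf.constant(-3.0))))           # 0.0
print(float(relu_fn(tf.constant(2.0))))            # 2.0

# The layer class is a separate object and must be instantiated:
relu_layer = tf.keras.layers.ReLU()
print(isinstance(relu_layer, tf.keras.layers.Layer))  # True
```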
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model, backend as K

mnist = keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]

train_ds = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)


class MyModel(Model):
    def __init__(self):
        super(MyModel, self).__init__()
        self.conv1 = Conv2D(32, 3, activation='relu', input_shape=(28, 28, 1))
        self.flatten = Flatten()
        self.d1 = Dense(128, activation='relu')
        self.d2 = Dense(10, activation='softmax')

    def call(self, x):
        x = self.conv1(x)
        x = self.flatten(x)
        x = self.d1(x)
        return self.d2(x)

    # def get_layer_output(self):
    #     output = graph.get_tensor_by_name('output:0')


model = MyModel()

loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()

train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
    name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(
    name='test_accuracy')


@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images)
        loss = loss_object(labels, predictions)
    gradients = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(gradients, model.trainable_variables))
    train_loss(loss)
    train_accuracy(labels, predictions)


@tf.function
def test_step(images, labels):
    predictions = model(images)
    t_loss = loss_object(labels, predictions)
    test_loss(t_loss)
    test_accuracy(labels, predictions)


EPOCHS = 1
for epoch in range(EPOCHS):
    train_loss.reset_states()
    train_accuracy.reset_states()
    test_loss.reset_states()
    test_accuracy.reset_states()

    for images, labels in train_ds:
        train_step(images, labels)

    for test_images, test_labels in test_ds:
        test_step(test_images, test_labels)

    template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
    print(template.format(epoch + 1,
                          train_loss.result(),
                          train_accuracy.result() * 100,
                          test_loss.result(),
                          test_accuracy.result() * 100))


def get_all_outputs(model, input_data, learning_phase=1):
    outputs = [layer.output for layer in model.layers[1:]]  # exclude Input
    layers_fn = K.function([model.input, K.learning_phase()], outputs)
    return layers_fn([input_data, learning_phase])


outputs = get_all_outputs(model, "input_data", 1)
print(outputs)

The above sample code reproduces the error:

Traceback (most recent call last):
  File "/Volumes/SAMSUNG/ai/get-layer-output.py", line 102, in <module>
    outputs = get_all_outputs(model, "input_data", 1)
  File "/Volumes/SAMSUNG/ai/get-layer-output.py", line 97, in get_all_outputs
    outputs = [layer.output for layer in model.layers[1:]]  # exclude Input
  File "/Volumes/SAMSUNG/ai/get-layer-output.py", line 97, in <listcomp>
    outputs = [layer.output for layer in model.layers[1:]]  # exclude Input
  File "/Users/xoxoxo/Library/Python/3.6/lib/python/site-packages/tensorflow_core/python/keras/engine/base_layer.py", line 1553, in output
    raise AttributeError('Layer ' + self.name + ' has no inbound nodes.')
AttributeError: Layer flatten has no inbound nodes.

I am using Python 3.6.6 on Mac with TF 2.1
Was able to reproduce the issue with TF v2.2 and TF-nightly. Please find the attached gist. Thanks!
Hi @zzj0402... were you able to solve this issue with regard to the inbound nodes? I am trying to do the exact same thing and am running into similar problems. How did you get around this when using custom models in TF2?
Was able to reproduce the issue in TF v2.5; please find the gist here. Thanks!
@chahld @zzj0402 I solved this issue with a simple trick:

import tensorflow as tf


class MyModel(tf.keras.Model):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.dense0 = tf.keras.layers.Dense(10, name='dense0')
        self.dense1 = tf.keras.layers.Dense(10, name='dense1')
        self.dense2 = tf.keras.layers.Dense(10, name='dense2')

    def build(self, input_shape):
        self.dense2(self.dense1(self.dense0(
            tf.keras.layers.Input(shape=input_shape[1:], name="input_x"))))

    def call(self, x):
        x = self.dense0(x)
        # if you use this line it works
        x = tf.keras.layers.ReLU()(x)
        x = self.dense1(x)
        print('correct:', self.dense1.inbound_nodes)
        # if you use this line it doesn't work
        relu = tf.keras.activations.get('relu')
        x = relu(x)
        x = self.dense2(x)
        print('incorrect:', self.dense2.inbound_nodes)
        return x


def main():
    my_model = MyModel()
    inp = tf.keras.Input(shape=(5, 5, 1))
    out = my_model(inp)
    my_model.summary()
    my_model.compile(optimizer='adam',
                     loss='sparse_categorical_crossentropy')
    for l in my_model.layers:
        try:
            print(l.output)
        except AttributeError:
            print('EXCEPTION: {}.output raises attribute error'.format(l.name))


if __name__ == '__main__':
    main()

and this is the output:

Basically, I added a build method to your model and explicitly computed the output of each layer:

def build(self, input_shape):
    self.dense2(self.dense1(self.dense0(
        tf.keras.layers.Input(shape=input_shape, name="input_x"))))

my_model(tf.random.uniform([5, 5, 1]))

I hope that helps. Good luck!
@chahld please check #34834 (comment) and let us know if your issue got resolved or not? |
This issue has been automatically marked as stale because it has no recent activity. It will be closed if no further activity occurs. Thank you. |
Closing as stale. Please reopen if you'd like to work on this further. |
This issue still persists in TF 2.10 when building custom models that inherit from tf.keras.Model.
System information
- OS: Ubuntu 18.04
- TensorFlow installed from: pip install tensorflow-gpu
- TensorFlow version: v2.0.0-rc2-26-g64c3d38 2.0.0 and v2.0.0-beta0-16-g1d91213fe7 2.0.0-beta1
- Python version: 3.6.9
Describe the current behavior
Calling layer.output on a Keras layer that was called on the output of an activation function fails: the inbound nodes are not set up properly, so layer.output raises an AttributeError.
Describe the expected behavior
layer.output should return the output tensor
Code to reproduce the issue
Other info / logs
UPDATED: to include input_shape, which does not solve the problem.