Using hidden layer outputs in the loss function #43151
Thanks for your answer! I'm actually getting an error. In fact, I'm trying to reproduce this model: https://github.com/IAmSuyogJadhav/3d-mri-brain-tumor-segmentation-using-autoencoder-regularization/blob/master/model.py. It uses two model outputs: a dice loss for one output (GT) and a loss function (VAE) that depends on two layers (z_mean, z_var) of the model. I think I've tried everything to get it to work (a layer output in the loss function), so if someone can manage to make the standalone code from my first post work, with the ability to generalize to multiple outputs, that would be huge! Thanks.
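For context, the dice loss mentioned for the GT output is typically defined along these lines (a minimal sketch; the smoothing constant and the scalar reduction are conventional assumptions, not taken from the linked model):

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, smooth=1.0):
    # Soft dice loss: 1 - (2 * intersection + s) / (sum + s).
    # The smoothing term s avoids division by zero on empty masks.
    y_true = tf.cast(y_true, y_pred.dtype)
    intersection = tf.reduce_sum(y_true * y_pred)
    total = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred)
    return 1.0 - (2.0 * intersection + smooth) / (total + smooth)
```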
@ravikyram It seems to me that you have not reproduced the same user example with nightly, because your Colab gist is missing part of it. @Otakarlp As I told you, I think you could follow the documentation example I mentioned.
First, thank you both for trying to help me. Secondly, I want to make my mea culpa here: @bhack was absolutely right, and following his suggestion solved it. For those who are trying to add intermediate layer outputs to the loss function along a certain path of the model, please use the gist here.
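To make the thread self-contained, here is a minimal sketch of the add_loss pattern that resolves this class of problem (layer names, sizes, and the KL formulation are illustrative assumptions, not copied from the gist):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(64,))
h = layers.Dense(128, activation="relu")(inputs)
z_mean = layers.Dense(16, name="z_mean")(h)
z_log_var = layers.Dense(16, name="z_log_var")(h)
gt = layers.Dense(1, activation="sigmoid", name="gt")(h)

model = Model(inputs, gt)

# add_loss is called with a symbolic tensor built from the model's own
# graph, so the intermediate activations contribute to the total loss
# without leaking graph tensors into eager execution.
kl_loss = -0.5 * tf.reduce_mean(
    1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var)
)
model.add_loss(kl_loss)

# The compile-time loss handles the labeled output; the KL term added
# above is summed into the total loss automatically.
model.compile(optimizer="adam", loss="binary_crossentropy")
```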
Please make sure that this is a bug. As per our
GitHub Policy,
we only address code/doc bugs, performance issues, feature requests and
build/installation issues on GitHub.
System information
Describe the current behavior
The model's hidden layer outputs cannot be accessed outside of the function-building code.
Describe the expected behavior
To be able to use hidden layer outputs in my loss function.
Standalone code to reproduce the issue
https://colab.research.google.com/drive/1laEpykHax2QbAV4SB-8Srwfh9SvmAt4B?usp=sharing
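In case the Colab link goes stale, the failing pattern looks roughly like this sketch (layer sizes and names are illustrative; the loss closure over symbolic tensors is what triggers the exception in the logs below):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(64,))
h = layers.Dense(128, activation="relu")(inputs)
z_mean = layers.Dense(16, name="z_mean")(h)
z_var = layers.Dense(16, name="z_var")(h)
gt = layers.Dense(1, name="gt")(h)

model = Model(inputs, gt)

# The loss closes over symbolic (graph) tensors. During training the loss
# runs eagerly, so TF 2.x raises _SymbolicException when it receives them.
def vae_loss(z_mean, z_var):
    def loss(y_true, y_pred):
        kl = -0.5 * tf.reduce_mean(1.0 + z_var - tf.square(z_mean) - tf.exp(z_var))
        return tf.reduce_mean(tf.square(y_true - y_pred)) + kl
    return loss

model.compile(optimizer="adam", loss=vae_loss(z_mean, z_var))

x = np.random.rand(8, 64).astype("float32")
y = np.random.rand(8, 1).astype("float32")
model.fit(x, y, epochs=1)  # fails with _SymbolicException on TF 2.x
```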
Other info / logs
TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
@tf.function
def has_init_scope():
my_constant = tf.constant(1.)
with tf.init_scope():
added = my_constant * 2
The graph tensor has name: dense_6/Relu:0
During handling of the above exception, another exception occurred:
_SymbolicException Traceback (most recent call last)
9 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
72 raise core._SymbolicException(
73 "Inputs to eager execution function cannot be Keras symbolic "
---> 74 "tensors, but found {}".format(keras_symbolic_tensors))
75 raise e
76 # pylint: enable=protected-access
_SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found [<tf.Tensor 'dense_6/Relu:0' shape=(None, 128) dtype=float32>]
When I used tf.config.run_functions_eagerly(True), my loss came out as 0.0000e+00.