The loss function cannot contain the output of intermediate layers? #5563
Comments
Hello. You can usually pass an array of losses to compile. But to add a custom loss, you create a custom layer (returning anything, as you won't use its output), and inside its call you do add_loss(). Running code (not tested for bugs):
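The snippet itself did not survive the copy of this thread, so here is a minimal sketch in the same spirit, reconstructed from the names referenced in the later comments (CustomRegularization, bce, zero_loss, get_output_shape_for); the Keras 1-style API and all layer sizes are assumptions:

```python
# Hypothetical reconstruction, not the original snippet: a layer whose only job
# is to register a custom term built from intermediate tensors via add_loss().
from keras import backend as K
from keras.engine.topology import Layer
from keras.layers import Input, Dense
from keras.models import Model
from keras.objectives import binary_crossentropy

class CustomRegularization(Layer):
    def call(self, x, mask=None):
        ld, rd = x                                # two intermediate tensors
        bce = binary_crossentropy(ld, rd)         # per-sample custom term, shape (batch,)
        self.add_loss(K.mean(bce), x)             # added to the model's total loss
        # Return something so Keras has an output to wire up; its value is
        # ignored at training time because it is paired with zero_loss below.
        return K.reshape(bce, (-1, 1))

    def get_output_shape_for(self, input_shape):  # Keras 1 name (compute_output_shape in Keras 2)
        return (input_shape[0][0], 1)

def zero_loss(y_true, y_pred):
    # Dummy loss for the layer's output, so the term does not count twice.
    return K.zeros_like(y_pred)

# Usage sketch (sizes made up for illustration, autoencoder-like setup):
inp = Input(shape=(10,))
hidden = Dense(8, activation='relu')(inp)
out = Dense(10, activation='sigmoid')(hidden)
cr = CustomRegularization()([inp, out])

m = Model(input=inp, output=[out, cr])
m.compile(optimizer='adam', loss=['binary_crossentropy', zero_loss])
# m.fit(x, [x, dummy])  # dummy: any array of shape (len(x), 1); it is ignored.
```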
Thanks for your reply.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.
@unrealwill I need to compute a loss using the output of intermediate layers, and I used the code you gave above; part of the training output is as follows:
@unrealwill where is the function "get_output_shape_for" being called? When I run the above code, I get a dimension mismatch error, because the shape of the output of the custom regularization layer is the shape of bce and not the shape given by get_output_shape_for.
If I am calculating multiple custom losses throughout the model using this method, how can I return each loss separately when running m.fit()? For instance, in this example the first loss returned is the total loss, the second and third are the binary cross-entropy losses, and the last returned loss is just zero. So it is easy to recover the custom loss: it is just the total loss minus the binary cross-entropy losses. But what if I have two separate custom losses?
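One pattern that would surface each custom term in the fit() logs (my own suggestion, not something from this thread) is to skip add_loss and instead make every custom term its own model output, paired with an identity-style loss; Keras then both sums it into the total and reports it per output. A sketch, assuming each custom layer returns its per-sample term directly:

```python
from keras import backend as K

def identity_loss(y_true, y_pred):
    # y_pred already is the per-sample custom term; just average it.
    # (The layer should then NOT call self.add_loss, or the term counts twice.)
    return K.mean(y_pred, axis=-1)

# m = Model(input=inp, output=[out, custom_term_1, custom_term_2])
# m.compile(optimizer='adam',
#           loss=['binary_crossentropy', identity_loss, identity_loss])
# fit() then reports one loss entry per output, so each custom term is visible.
```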
@unrealwill: Thank you for your code. I was wondering about the role of zero_loss in your code. Here, the zero_loss function returns a tensor with zero values for all elements. What exactly does it do as the loss of the full model? I was very confused about this. If the loss is such a constant, what would happen during training? Also, do you have any reference for this?
@hoangcuong2011 Hello. First of all, this is a rather old thread to dig up. I'm not sure whether the code above still works or has rotted, but the principle (adding a custom loss the same way we add a regularization term) is still sound. Nowadays, when I use Keras, I write my losses in TensorFlow, so I'm not aware whether there is a newer, neater way of doing this staying Keras-only. If I remember correctly, the issue was mostly about getting Keras to compute a specific loss function. The zero_loss is a trick: it's just a way to ignore the output value (while not having it simplified away by the computation graph), because what matters is the self.add_loss call inside the CustomRegularization layer, which taps directly into the back-end (Theano or TensorFlow) at a lower level and lets us add any custom term. Thanks.
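For what it's worth, later Keras versions do offer a neater Keras-only route that avoids the dummy output entirely: the model itself exposes add_loss, so any tensor built from intermediate activations can be attached as an extra loss term before compiling. A minimal sketch assuming the Keras 2 functional API (layer sizes and the penalty term are made up):

```python
from keras import backend as K
from keras.layers import Input, Dense
from keras.models import Model

inp = Input(shape=(10,))
hidden = Dense(8, activation='relu')(inp)            # intermediate layer
out = Dense(10, activation='sigmoid')(hidden)

model = Model(inputs=inp, outputs=out)
# Any tensor built from intermediate activations can be added to the total loss.
model.add_loss(1e-3 * K.mean(K.square(hidden)))
model.compile(optimizer='adam', loss='binary_crossentropy')
```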
I need to add a loss term involving two intermediate layers. For example:
However, the error tells me that three_loss needs six inputs but is only given four. How can I solve this problem? To my knowledge, the inputs and targets passed to fit must be the actual data sets and observed targets, so I cannot add rd or ld to them.
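A sketch of one way around this (my illustration; rd, ld and three_loss are the names from the comment above, everything else is assumed): build the extra term inside the graph and attach it with the model-level add_loss shown earlier, so fit() still only sees the real inputs and targets:

```python
from keras import backend as K
from keras.layers import Input, Dense
from keras.models import Model

inp = Input(shape=(10,))
ld = Dense(8, activation='relu')(inp)        # first intermediate layer
rd = Dense(8, activation='relu')(ld)         # second intermediate layer
out = Dense(10, activation='sigmoid')(rd)

model = Model(inputs=inp, outputs=out)
# three_loss stands in for whatever term combines the two intermediate outputs;
# a mean squared difference is used here purely as a placeholder.
three_loss = K.mean(K.square(ld - rd))
model.add_loss(three_loss)
model.compile(optimizer='adam', loss='binary_crossentropy')
# model.fit(x, y) now needs only the real data; rd and ld stay inside the graph.
```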