tf.keras computes incorrect loss values with 3+D data #25970
Comments
@bersbersbers are you still seeing this issue? I was not able to repro this on the latest nightly.
@pavithrasv you are right,
Here's my
I tried to pin down when this issue was introduced and fixed:
So, introduced sometime in Aug/Sep 2018, due to missing
I am closing this issue as it has been fixed. Thank you for digging into the release details!
This bug is still present in the most current version. By the way, here is a reduced example in which the batch size does evenly divide the number of samples:
It outputs:
Note how the two reported values differ.
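For reference, when the reduction is done correctly, the loss and the metric should agree even for 3-D targets: taking the Poisson loss per element, reducing over the last axis, and then averaging over the remaining axes gives the same number as a flat element-wise mean. A minimal NumPy sketch of that invariant (shapes and values are made up for illustration, and the small epsilon Keras adds inside the log is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 3-D data: (batch, timesteps, features)
y_true = rng.poisson(3.0, size=(8, 5, 4)).astype(float)
y_pred = rng.uniform(0.5, 5.0, size=(8, 5, 4))

# Element-wise Poisson loss: y_pred - y_true * log(y_pred)
elementwise = y_pred - y_true * np.log(y_pred)

# Metric-style reduction: plain mean over every element.
metric_value = elementwise.mean()

# Loss-style reduction: per-sample mean over the last axis,
# then mean over the remaining (batch, timesteps) axes.
loss_value = elementwise.mean(axis=-1).mean()

# With equally sized samples and a correct reduction, the two agree.
assert np.isclose(metric_value, loss_value)
```

Any gap between the reported `loss` and the metric on unregularized 3-D data therefore points at an incorrect reduction, which is what this issue describes.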
Have you tried https://github.com/tensorflow/tensorflow/releases/tag/v2.0.0-alpha0? We will also have a TF 1.14 release very soon.
Yes, I have, see #25970 (comment). TF2 does not have that issue, but I don't want my research to rely on Alpha software currently :)
That is good news, thank you!
I can confirm that this bug has been fixed in
Output:
Thank you!
I am using tf 2.1.0 and experience the same problem. Can you suggest anything?
I am also getting a delta between the mse loss and mse metric values, but only when applying regularization (l2 or dropout).
I have the same problem with tensorflow 2.3.0 when using l1/l2 regularization.
I'm having the same issue with TensorFlow GPU 2.1.0 and no regularization. However, this happens only on the validation step.
@oO0oO0oO0o0o00 I have the same strange problem.
Same problem here.
When using weight regularization, it seems to me that it is normal that the two functions don't output the same results: regularization adds the squared values of all weights to the loss function, whereas the MSE metric only computes the MSE between the true output and the predicted one.
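To make that point concrete: under L2 regularization the training loss is the MSE plus the regularization penalty, so the two readouts should diverge by exactly the penalty term. A small NumPy sketch (the predictions, weight matrix, and l2 factor are made-up values):

```python
import numpy as np

# Made-up predictions and a made-up weight matrix.
y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
weights = np.array([[0.5, -0.3], [0.2, 0.7]])
l2_factor = 0.01  # e.g. as in keras.regularizers.l2(0.01)

mse_metric = np.mean((y_true - y_pred) ** 2)   # what the mse metric reports
l2_penalty = l2_factor * np.sum(weights ** 2)  # added by the regularizer
training_loss = mse_metric + l2_penalty        # what `loss` reports

# The gap between loss and metric is exactly the penalty.
assert np.isclose(training_loss - mse_metric, l2_penalty)
```

So for the regularized cases reported above, a `loss` larger than the `mse` metric is expected behavior, not this bug.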
I have a similar problem.
During validation of my model in the fit call, I get these outputs:
I wonder why val_loss is not the same as val_student_loss, since they are supposed to be the same. When I call model.evaluate() after training, val_loss and val_student_loss are the same.
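One common source of fit-vs-evaluate discrepancies (for the training loss, at least) is that the loss printed during fit is a running average over the epoch's batches, computed while the weights are still changing, whereas evaluate runs once with the final weights. Whether that explains the validation gap above is not certain, but the averaging effect itself is easy to illustrate with made-up per-batch losses:

```python
import numpy as np

# Made-up per-batch losses from one epoch: the model improves as it trains.
batch_losses = np.array([1.00, 0.80, 0.60, 0.45, 0.35])

# What the fit progress bar shows at epoch end: the running mean.
displayed_loss = batch_losses.mean()

# What evaluate would show: roughly the loss under the final weights.
final_loss = batch_losses[-1]

# The displayed running average lags behind the final-weights loss.
assert displayed_loss > final_loss
```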
System information
- Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes. For a minimal example, run the reproduction script and observe that `loss` and `poisson` values are different, and that `loss` values vary.
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
- TensorFlow installed from (source or binary): `pip install tensorflow`
- TensorFlow version (use command below): v1.13.0-rc1-19-gc865ec5621, 1.13.0-rc2
- Python version: 3.7.2 x64
- CUDA/cuDNN version: n/a
- GPU model and memory: n/a
Describe the current behavior
When fitting a model with `loss="poisson"`, the reported `loss` values are incorrect: they vary from epoch to epoch and do not match the `poisson` metric.

Describe the expected behavior
The reported `loss` and `poisson` values should be identical.

Code to reproduce the issue
See above.
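As a sanity check against the numbers Keras prints, the Poisson value can be recomputed by hand from a model's predictions. A sketch with made-up stand-in arrays (in practice, `model.predict` output would take the place of `y_pred`):

```python
import numpy as np

# Stand-ins for labels and a trained model's predictions (made-up values).
y_true = np.array([2.0, 0.0, 1.0, 3.0])
y_pred = np.array([1.5, 0.2, 0.9, 2.7])

eps = 1e-7  # Keras adds a small epsilon inside the log for stability
poisson = np.mean(y_pred - y_true * np.log(y_pred + eps))

# With a correct implementation, both the `loss` and the `poisson`
# metric reported by model.evaluate should equal this value.
print(poisson)
```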
Other info / logs
More code examples and investigations at https://stackoverflow.com/q/54802328/880783