loss and acc differ even though train and validation sets are identical #14
Comments
Could you be more specific, please? Edit: I think I understand your question now. It would still be good to see a minimal example that reproduces the issue, and to inspect differences between validation and training mode. (Dropout is the first thing that comes to mind, though that probably doesn't explain it.)
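As a minimal NumPy sketch of the dropout point above (illustrative only, not this repo's code; all names are made up): the same data and the same weights yield different losses depending on whether dropout is active, which is exactly the kind of train/validation-mode difference worth ruling out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and a fixed linear "model"; labels come from the weights themselves.
x = rng.normal(size=(512, 20))
w = rng.normal(size=20)
y = (x @ w > 0).astype(float)

def forward(x, w, training, p=0.5):
    h = x.copy()
    if training:
        # Inverted dropout: randomly zero features, scale the survivors.
        mask = rng.random(h.shape) > p
        h = h * mask / (1 - p)
    logits = h @ w
    return 1 / (1 + np.exp(-logits))  # sigmoid

def bce(y, p, eps=1e-7):
    # Binary cross-entropy, clipped for numerical stability.
    p = np.clip(p, eps, 1 - eps)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()

# Same data, same weights -- only the dropout mode differs.
train_mode_loss = bce(y, forward(x, w, training=True))
eval_mode_loss = bce(y, forward(x, w, training=False))
print(train_mode_loss, eval_mode_loss)
```

Here the training-mode loss is noisier and higher, even though nothing about the data or weights changed; Keras's `loss` is computed in training mode and `val_loss` in inference mode.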
I mean, I found gaps between loss and val_loss, and between acc and val_acc, as you can see in the sample data. I thought this came from an imbalanced distribution between the training and validation datasets, so I copied the training data to use as the validation data; that is, the training data is now identical to the validation data. But loss still differs from val_loss, and acc still differs from val_acc. In this setting loss should equal val_loss and acc should equal val_acc, since they share exactly the same data.
How can we reproduce this?
Any news? Can I close this?
Dear author,
I downloaded the code to train on the original data, but I found that acc and loss differed a lot from val_acc and val_loss. Then I set the training dataset as the validation dataset, so the same dataset is used for both training and validation. But I got the same result, as shown below:
Epoch 1/50
8/8 [=========] - 52s 7s/step - loss: 1.2012 - acc: 0.5000 - val_loss: 6.4256 - val_acc: 0.3926
Epoch 2/50
8/8 [=========] - 46s 6s/step - loss: 0.7563 - acc: 0.7617 - val_loss: 1.4548 - val_acc: 0.5596
Epoch 3/50
8/8 [=========] - 45s 6s/step - loss: 0.5647 - acc: 0.7969 - val_loss: 3.5613 - val_acc: 0.5557
Epoch 4/50
8/8 [=========] - 47s 6s/step - loss: 0.4402 - acc: 0.8496 - val_loss: 4.9303 - val_acc: 0.2559
Epoch 5/50
8/8 [=========] - 46s 6s/step - loss: 0.3777 - acc: 0.8672 - val_loss: 1.0182 - val_acc: 0.6807
Epoch 6/50
8/8 [=========] - 45s 6s/step - loss: 0.3009 - acc: 0.8945 - val_loss: 3.2592 - val_acc: 0.3340
Epoch 7/50
8/8 [=========] - 46s 6s/step - loss: 0.2769 - acc: 0.9053 - val_loss: 2.2627 - val_acc: 0.4609
Epoch 8/50
8/8 [=========] - 47s 6s/step - loss: 0.2585 - acc: 0.9150 - val_loss: 1.1746 - val_acc: 0.6348
Epoch 9/50
8/8 [=========] - 47s 6s/step - loss: 0.2096 - acc: 0.9316 - val_loss: 3.2337 - val_acc: 0.5039
Epoch 10/50
8/8 [=========] - 47s 6s/step - loss: 0.2602 - acc: 0.9131 - val_loss: 2.9752 - val_acc: 0.3994
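(Note: even with identical data, the two numbers are computed differently. Keras reports `loss` as a running average over the epoch's batches, taken while the weights are still changing, whereas `val_loss` is a single pass with the end-of-epoch weights. Training-mode layers such as dropout, or batch norm with stale moving statistics, can push the gap the other way, as in the log above. A toy illustration of the averaging effect, with invented numbers:)

```python
import numpy as np

# Hypothetical per-batch losses observed during one epoch while the
# weights keep improving (numbers are made up for illustration).
batch_losses = np.array([2.0, 1.5, 1.0, 0.7, 0.5, 0.4, 0.35, 0.3])

# Keras reports `loss` as the running mean over these batches.
reported_train_loss = batch_losses.mean()

# `val_loss` is computed once, after the epoch, with the final weights,
# so it tracks the end-of-epoch state rather than the epoch's average.
end_of_epoch_loss = batch_losses[-1]

print(reported_train_loss, end_of_epoch_loss)
```

So some gap between `loss` and `val_loss` is expected even on identical data; what still needs explaining in the log above is why `val_loss` is *higher*, which points at training-vs-inference mode differences rather than the averaging.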