Validation quality metrics "get stuck" #20
With certain splits of the training, validation and test data sets, I consistently observe strange behaviour that does not occur with a different split of the same overall data set: the metrics on the validation data are stuck from the very beginning of training at extremely poor values, while the metrics on the training data keep improving. Unfortunately, I cannot pin down which combination of images triggers this behaviour; it probably depends on a combination rather than on individual images. Is this behaviour known, and is there perhaps a solution for it?

Comments

Hi @saskra. Thanks for your interest in our project and for sharing this. It's a bit odd, but unless we can reproduce the exact issue, it will be difficult to solve. Maybe the split is somehow acting like an adversarial attack. What do you think?

Yes, this could well be such a case. Unfortunately, it is difficult for me to reproduce as well. A few examples:

Presumably it cannot be cleared up without considerable effort and the original data. I was mainly interested in knowing whether I am the only one seeing this, i.e. whether it is down to my data or my hardware.

Interesting points and observations. Sorry I can't suggest or add anything, as I haven't faced this myself. I hope this gets resolved and you get some significant findings.
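Not part of the original thread, but one common cause of split-dependent, stuck validation metrics is a validation split whose class distribution differs sharply from the training split. Below is a minimal Python sketch (all names and labels are hypothetical, assuming a classification setting with hashable labels) that flags such a skew by comparing per-class frequencies between two splits:

```python
from collections import Counter

def class_distribution(labels):
    """Return the relative frequency of each class in a label sequence."""
    counts = Counter(labels)
    total = len(labels)
    return {cls: n / total for cls, n in counts.items()}

def max_frequency_gap(train_labels, val_labels):
    """Largest absolute per-class frequency difference between two splits.

    A large gap suggests the validation split is not representative of
    the training split, which can leave validation metrics stuck while
    training metrics keep improving.
    """
    train_dist = class_distribution(train_labels)
    val_dist = class_distribution(val_labels)
    classes = set(train_dist) | set(val_dist)
    return max(abs(train_dist.get(c, 0.0) - val_dist.get(c, 0.0))
               for c in classes)

# Hypothetical labels: a balanced training split vs. two candidate
# validation splits, one skewed and one representative.
train = ["cat"] * 50 + ["dog"] * 50
val_bad = ["cat"] * 18 + ["dog"] * 2    # 90% cat: heavily skewed
val_ok = ["cat"] * 10 + ["dog"] * 10    # matches the training distribution

print(max_frequency_gap(train, val_bad))  # large gap: suspicious split
print(max_frequency_gap(train, val_ok))   # near zero: representative split
```

If a split with a large gap is the one that misbehaves, a stratified split (e.g. the `stratify` argument of scikit-learn's `train_test_split`) would be a natural next step; if the distributions match and the problem persists, the cause lies elsewhere.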