This repository has been archived by the owner on Mar 21, 2024. It is now read-only.
Training metrics are different for the same config and same seed for different runs #839
Closed
Problem summary
Training metrics from consecutive training runs with the same parameters differ for our own configs.
Code for reproduction
Actual outcome
We developed our configs against InnerEye commit 8495a2e.
When training on 8495a2e, the resulting metrics are identical across runs with the same seed.
When training on the latest commit, d902e02, the resulting metrics are slightly different across runs with the same seed.
We ran a separate test with the Lung.py config that ships with InnerEye, and its metrics are identical across runs on both 8495a2e and d902e02.
In our config we use only the random module from the standard library, which is supposed to be seeded by PyTorch Lightning's seed_everything method. We checked some of the randomly generated values in our module, and they were the same across runs.
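As a minimal sanity check (assuming seed_everything ultimately calls random.seed for the standard-library RNG, which matches what we observed), the stdlib random module alone is fully deterministic under a fixed seed:

```python
import random

def sample_with_seed(seed, n=5):
    """Reseed the stdlib RNG and draw n values, mimicking what our
    config's random calls would see after seed_everything(seed)."""
    random.seed(seed)
    return [random.random() for _ in range(n)]

run_a = sample_with_seed(42)
run_b = sample_with_seed(42)
# Identical sequences confirm the stdlib RNG is not the source of
# the run-to-run metric differences.
assert run_a == run_b
```

This is consistent with our check above, which suggests the nondeterminism between runs comes from somewhere other than the standard-library RNG.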
Error messages
No response
Expected outcome
We expect exactly the same results for all runs with the same config and seed value.
System info
System: Ubuntu 18.04.5 LTS
env.txt
AB#8327