Hi, I have tried to implement several MIL models, following an approach similar to your implementation. However, I noticed that the test performance can vary significantly, by up to 10%, just by changing the training seed (e.g., from seed 0 to seed 10). I suspect this is because the training set (Camelyon16) is small and the reported test performance is taken from the last epoch (50). Moreover, training instabilities seem to occur, especially with the AB-MIL and DS-MIL models.
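One way to make such comparisons less seed-dependent (not from the original report, just an illustrative sketch) is to fix all relevant RNGs per run and report the mean and standard deviation over several seeds rather than a single last-epoch score. In the sketch below, `train_and_evaluate` is a hypothetical placeholder for whatever training/evaluation script is actually used.

```python
import random
import statistics

import numpy as np
import torch


def set_seed(seed: int) -> None:
    """Seed Python, NumPy and PyTorch RNGs so a single run is reproducible."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


def train_and_evaluate(seed: int) -> float:
    """Hypothetical placeholder: train one MIL model and return its test AUC."""
    raise NotImplementedError("replace with the actual training/evaluation call")


if __name__ == "__main__":
    seeds = [0, 1, 2, 3, 4]
    aucs = []
    for seed in seeds:
        set_seed(seed)
        aucs.append(train_and_evaluate(seed))
    # Report mean +/- std over seeds instead of a single final-epoch number.
    print(f"AUC: {statistics.mean(aucs):.4f} +/- {statistics.stdev(aucs):.4f}")
```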