
Training Process Unstable #17

Closed
bryanwong17 opened this issue Jan 23, 2024 · 2 comments

Comments

@bryanwong17

Hi, I have tried to implement several MIL models, following an approach similar to your implementation. However, I noticed that the performance can vary significantly, by up to 10%, just by changing the training seed (e.g., from seed 0 to seed 10). I suspect this is due to the small size of the training dataset (Camelyon16) and the fact that the final test performance is taken from the last epoch (50). Moreover, training instabilities seem to occur, especially with the AB-MIL and DS-MIL models.
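
For reference, a minimal sketch (not part of this repository; `train_and_eval` is a hypothetical stand-in for your own training loop) of the usual ways to quantify and reduce this variance: fix every RNG source per run, select the checkpoint by validation metric rather than the last epoch, and report mean ± std over several seeds instead of a single number.

```python
# Sketch: measure seed sensitivity by repeating training over several seeds
# and reporting mean +/- std of the test metric.
import random
import statistics

import numpy as np
import torch


def seed_everything(seed: int) -> None:
    """Fix all common sources of randomness for one training run."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Deterministic cuDNN kernels trade some speed for reproducibility.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


def train_and_eval(seed: int) -> float:
    """Hypothetical stand-in for the actual MIL training/evaluation loop.

    It should return the test AUC (or accuracy) of the checkpoint with the
    best *validation* metric, rather than the last-epoch checkpoint.
    """
    seed_everything(seed)
    # ... build dataloaders, train the model, select best-validation checkpoint,
    # evaluate on the test set ...
    raise NotImplementedError


if __name__ == "__main__":
    seeds = [0, 1, 2, 3, 4]
    scores = [train_and_eval(s) for s in seeds]
    print(f"test metric over {len(seeds)} seeds: "
          f"{statistics.mean(scores):.4f} +/- {statistics.stdev(scores):.4f}")
```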

@HHHedo (Owner) commented Jan 23, 2024

Yes, we observe a similar phenomenon, and it is discussed in an issue of the dsmil repository. However, this is outside the scope of this repository.

@bryanwong17 (Author)

Thank you for the confirmation
