
Handling large bags during inference #17

Closed
harshith2794 opened this issue Jun 27, 2022 · 1 comment


@harshith2794

Hi,
I was wondering how large slides were handled during inference and training. Was there any limit on the bag size to prevent OOMs? If so, can you clarify how the predictions from multiple bags were aggregated?

Thanks in advance.

@szc19990412
Owner

During training and testing, we use a batch size of 1, i.e., each batch contains a single WSI bag, so bags of different sizes never need to be padded or truncated. In addition, the half-precision (16-bit) training supported by the PyTorch Lightning framework lets TransMIL process roughly 20× as many WSI features before running out of memory.
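
For readers unfamiliar with this setup, here is a minimal sketch of what the described scheme could look like. This is not the TransMIL repository's actual training script: the `WSIBagDataset` and `MeanPoolMIL` classes, the dummy data, and all hyperparameters are placeholders for illustration. Only the `batch_size=1` DataLoader and the `precision=16` Trainer flag reflect what the reply describes.

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, Dataset


class WSIBagDataset(Dataset):
    """Hypothetical dataset: one item = the pre-extracted patch features of one slide."""

    def __init__(self, bags, labels):
        self.bags = bags      # list of [n_patches, feat_dim] tensors, sizes vary per slide
        self.labels = labels

    def __len__(self):
        return len(self.bags)

    def __getitem__(self, idx):
        return self.bags[idx], self.labels[idx]


class MeanPoolMIL(pl.LightningModule):
    """Toy mean-pooling MIL classifier standing in for TransMIL."""

    def __init__(self, feat_dim=1024, n_classes=2):
        super().__init__()
        self.classifier = torch.nn.Linear(feat_dim, n_classes)

    def training_step(self, batch, batch_idx):
        bag, label = batch                          # bag: [1, n_patches, feat_dim]
        logits = self.classifier(bag.mean(dim=1))   # pool over the patch dimension
        return torch.nn.functional.cross_entropy(logits, label)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-4)


# Dummy bags of very different sizes; real bags would come from a feature extractor.
bags = [torch.randn(n, 1024) for n in (500, 8000, 20000)]
labels = [0, 1, 0]

# batch_size=1: each "batch" is a single bag, so variable-length bags
# never have to be stacked together.
loader = DataLoader(WSIBagDataset(bags, labels), batch_size=1, shuffle=True)

# precision=16 enables half-precision training, which the reply credits with
# fitting roughly 20x more patch features in memory. (Assumes a GPU is available.)
trainer = pl.Trainer(max_epochs=1, accelerator="gpu", devices=1, precision=16)
trainer.fit(MeanPoolMIL(), loader)
```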
