Hi,
I was wondering how large slides were handled during inference and training. Was there any limit on the bag size to prevent OOMs? If so, can you clarify how the predictions from multiple bags were aggregated?
Thanks in advance.
During training and testing, we used a batch size of 1, i.e. one whole slide (one bag) per step. In addition, half-precision (FP16) training through the PyTorch Lightning framework lets TransMIL process roughly 20 times as many WSI features without running out of memory.
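A minimal sketch of that setup, assuming pre-extracted patch features per slide: a DataLoader with `batch_size=1` so each step sees one whole bag, and a Lightning `Trainer` with `precision=16` for half-precision training. `BagDataset` and `SimpleMILClassifier` below are hypothetical stand-ins for the repository's actual feature dataset and TransMIL module, not the real code.

```python
# Sketch only: batch size 1 (one bag per step) + FP16 via PyTorch Lightning.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
import pytorch_lightning as pl


class BagDataset(Dataset):
    """Yields one bag of pre-extracted patch features per slide: [n_patches, feat_dim]."""

    def __init__(self, n_slides=8, feat_dim=1024):
        # Variable-length random bags stand in for real WSI feature files.
        self.bags = [torch.randn(int(torch.randint(500, 5000, (1,))), feat_dim)
                     for _ in range(n_slides)]
        self.labels = [i % 2 for i in range(n_slides)]

    def __len__(self):
        return len(self.bags)

    def __getitem__(self, idx):
        return self.bags[idx], self.labels[idx]


class SimpleMILClassifier(pl.LightningModule):
    """Mean-pooling MIL head; a hypothetical stand-in for TransMIL."""

    def __init__(self, feat_dim=1024, n_classes=2):
        super().__init__()
        self.head = nn.Linear(feat_dim, n_classes)

    def training_step(self, batch, batch_idx):
        feats, label = batch                # feats: [1, n_patches, feat_dim]
        slide_repr = feats.mean(dim=1)      # pool the whole bag in one pass
        return nn.functional.cross_entropy(self.head(slide_repr), label)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-4)


if __name__ == "__main__":
    # batch_size=1: each step processes exactly one slide, so bag sizes can
    # vary freely and no padding across slides is needed.
    loader = DataLoader(BagDataset(), batch_size=1, shuffle=True)
    trainer = pl.Trainer(
        max_epochs=1,
        precision=16,      # half precision roughly halves activation memory,
                           # which is what allows much larger bags per slide
        accelerator="auto",
        devices=1,
    )
    trainer.fit(SimpleMILClassifier(), loader)
```

With this scheme each slide is processed as a single bag, so there is no splitting into multiple bags and no aggregation of per-bag predictions; FP16 plus batch size 1 is what keeps large bags within GPU memory.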