This repository has been archived by the owner on Mar 21, 2024. It is now read-only.

Enable fine-tuning in Deepmil #650

Merged
merged 28 commits into from Feb 9, 2022

Conversation

harshita-s (Contributor)

This PR contains the following:

  • Fine-tuning can be enabled or disabled in Deepmil for PANDA (see the sketch after this list).
  • Correct way of initializing the HistoSSL encoder.
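As a rough illustration of the first bullet, here is a minimal sketch of how a boolean fine-tuning flag could freeze or unfreeze the encoder; the helper name setup_encoder and the flag name is_finetune are assumptions for illustration, not code taken from this PR.

import torch

def setup_encoder(encoder: torch.nn.Module, is_finetune: bool) -> torch.nn.Module:
    # Hypothetical sketch: when fine-tuning is disabled, freeze every encoder
    # weight so only the attention pooling and classifier head are trained.
    for param in encoder.parameters():
        param.requires_grad = is_finetune
    if not is_finetune:
        # Keep a frozen encoder in eval mode so BatchNorm statistics stay fixed.
        encoder.eval()
    return encoder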

ant0nsc previously approved these changes Feb 3, 2022

@ant0nsc (Contributor) left a comment

Code looks good - but do we have any way of testing it?

InnerEye/ML/Histopathology/models/deepmil.py (outdated review thread, resolved)
@vale-salvatelli (Contributor) left a comment

As discussed, I think we need to make this more robust:

  • make sure it can run on multi-GPU and doesn't hit memory errors
  • add tests (a minimal example is sketched after this list)

But I am happy to approve this as a first version so that more people can help out.
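A minimal sketch of one such test, assuming pytest and a tiny stand-in encoder; the freezing logic mirrors the snippet discussed below and is not the PR's actual test code.

import pytest
import torch

@pytest.mark.parametrize("is_finetune", [True, False])
def test_encoder_requires_grad(is_finetune: bool) -> None:
    # Tiny stand-in encoder; the real model would be a ResNet or HistoSSL backbone.
    encoder = torch.nn.Sequential(torch.nn.Linear(8, 4), torch.nn.ReLU())
    for param in encoder.parameters():
        param.requires_grad = is_finetune
    # Every encoder parameter should follow the fine-tuning flag.
    assert all(p.requires_grad == is_finetune for p in encoder.parameters())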

vale-salvatelli previously approved these changes Feb 8, 2022
# Replace the classification head with an identity and freeze the backbone.
histossl_encoder.fc = torch.nn.Sequential()
for param in histossl_encoder.parameters():
    param.requires_grad = False
return histossl_encoder, num_features  # type: ignore
A repository member left a comment

Minor suggestion to revert these changes in _get_encoder(), as setup_feature_extractor() now implements the same behaviour.
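For illustration, a guess at what a shared helper like setup_feature_extractor could look like, so that _get_encoder() does not repeat the freezing logic; the signature and body below are assumptions, not the repository's actual implementation.

from typing import Tuple
import torch

def setup_feature_extractor(encoder: torch.nn.Module,
                            num_features: int,
                            freeze: bool = True) -> Tuple[torch.nn.Module, int]:
    # Drop the classification head and, when requested, freeze the backbone,
    # mirroring the behaviour the reviewer says is now centralised here.
    if hasattr(encoder, "fc"):
        encoder.fc = torch.nn.Sequential()
    if freeze:
        for param in encoder.parameters():
            param.requires_grad = False
    return encoder, num_features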

@harshita-s marked this pull request as ready for review February 9, 2022 09:58

5 participants