Models Inconsistency #566

Open
8 tasks
blaginin opened this issue Mar 13, 2023 · 4 comments · May be fixed by #578
Labels
code readability, documentation, enhancement, refactoring

Comments

@blaginin
Collaborator

  • TIA Toolbox version: 1.3.3
  • Python version: 3.10.8

Description

Tiatoolbox provides several pre-trained models that are helpful for data processing. However, the models differ in how they handle input and output, which makes them confusing to use (especially when customizing):

  • Activation functions are applied at different stages in different models: sometimes inside the forward method (e.g. CNNModel), while in other cases forward returns the raw layer output and the activation is applied in infer_batch (e.g. UNetModel). See the sketch after this list.
  • Moreover, activation functions are hardcoded. To customize one, you can't simply change an attribute; you have to override an entire method (a different one for each model).
  • Data normalization is scattered across methods: HoVerNet performs it in forward, UNetModel in _transform, MicroNet in preproc, and the vanilla models leave it to the user.
  • Data preprocessing also lacks consistency. It should happen in the preproc_func/_preproc functions, yet UNetModel uses its own _transform, unrelated to the standard methods, even though its behavior could be implemented in _preproc.
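To make the first point concrete, here is a minimal illustrative sketch of the two activation patterns. These toy classes are stand-ins written for this issue, not the actual tiatoolbox implementations:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CNNStyleModel(nn.Module):
    """Pattern 1: the activation is hardcoded inside forward (CNNModel-like)."""

    def __init__(self, in_features: int = 8, num_classes: int = 2) -> None:
        super().__init__()
        self.fc = nn.Linear(in_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Activation applied here: callers always get probabilities.
        return F.softmax(self.fc(x), dim=-1)


class UNetStyleModel(nn.Module):
    """Pattern 2: forward returns raw logits; the activation lives in infer_batch."""

    def __init__(self, in_features: int = 8, num_classes: int = 2) -> None:
        super().__init__()
        self.fc = nn.Linear(in_features, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(x)  # raw logits, no activation

    @staticmethod
    def infer_batch(model: nn.Module, batch: torch.Tensor) -> torch.Tensor:
        # Activation applied here instead, only during batch inference.
        with torch.inference_mode():
            return F.softmax(model(batch), dim=-1)
```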

What to do

Refactoring the code will significantly improve readability:

  • Decompose the pipeline into small granular methods in ModelABC: one method for normalization, the activation function as an attribute, etc. (one possible shape is sketched after this list).
  • Explain ModelABC methods in their documentation: does infer_batch rely on postproc_func? Can infer_batch be used for training? How?
  • Reorganize custom model methods to match the new ModelABC structure.
  • Add a new page to the documentation explaining the Tiatoolbox models pipeline: how is it related to the PyTorch pipeline? How to evaluate a model? How to train a model?
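For discussion, here is one hypothetical shape the refactored ModelABC could take. All names and signatures below are assumptions to illustrate the idea, not a proposed final API:

```python
from abc import ABC, abstractmethod

import torch
import torch.nn as nn


class ModelABC(nn.Module, ABC):
    """Hypothetical shape: every pipeline step is a small, overridable piece."""

    def __init__(self, activation: nn.Module | None = None) -> None:
        super().__init__()
        # Activation as an attribute: swap it without overriding any method.
        self.activation = activation if activation is not None else nn.Identity()

    def normalize(self, batch: torch.Tensor) -> torch.Tensor:
        """One shared place for input normalization (placeholder scaling)."""
        return batch / 255.0

    @abstractmethod
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """Subclasses return raw (pre-activation) output."""

    def infer_batch(self, batch: torch.Tensor) -> torch.Tensor:
        """Inference path is always: normalize -> forward -> activation."""
        with torch.inference_mode():
            return self.activation(self(self.normalize(batch)))
```

A subclass would then only implement forward, and the activation could be swapped by passing, e.g., nn.Softmax(dim=1) to the constructor instead of overriding a model-specific method.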
@John-P John-P changed the title Models inconsistency Models Inconsistency Mar 16, 2023
@John-P John-P added documentation, enhancement, refactoring, and code readability labels Mar 16, 2023
@blaginin
Collaborator Author

I guess it will be fixed by #635

@shaneahmed shaneahmed linked a pull request Jan 12, 2024 that will close this issue
@Ahmad-Tamim-Hamad

I have my own ViT (vision transformer) model. I have trained it and saved the best weights. I want to run tiatoolbox on my own data; how can I use my own model and weights on patches and WSIs? Please help me.

@GeorgeBatch
Contributor

GeorgeBatch commented Aug 30, 2024

> I have my own ViT (vision transformer) model. I have trained it and saved the best weights. I want to run tiatoolbox on my own data; how can I use my own model and weights on patches and WSIs? Please help me.

@Tamim1992 You can wrap your ViT into a format compatible with tiatoolbox, as shown in this notebook: https://github.com/TissueImageAnalytics/tiatoolbox/blob/develop/examples/07-advanced-modeling.ipynb
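For reference, here is a minimal sketch of that wrapping pattern. The ModelABC import path and the infer_batch signature vary across tiatoolbox versions (e.g. on_gpu in 1.3.x vs device in later releases), so treat the details below as assumptions and follow the notebook for your version:

```python
# Sketch only: wrapping a custom ViT in the ModelABC pattern used by
# tiatoolbox engines. Import path and infer_batch signature are
# version-dependent assumptions; see the linked notebook.
import torch
from tiatoolbox.models.abc import ModelABC  # path may differ by version


class ViTWrapper(ModelABC):
    def __init__(self, vit: torch.nn.Module) -> None:
        super().__init__()
        self.vit = vit  # your trained ViT with loaded weights

    def forward(self, imgs: torch.Tensor) -> torch.Tensor:
        return self.vit(imgs)

    @staticmethod
    def infer_batch(model, batch_data, on_gpu):  # 'device' in newer releases
        device = "cuda" if on_gpu else "cpu"
        model = model.to(device).eval()
        with torch.inference_mode():
            output = model(batch_data.to(device).float())
        return output.cpu().numpy()
```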

@Ahmad-Tamim-Hamad

Thank you for your response. The model performs well at the patch level, but when I use it to generate overlays on my own data, the results aren't as good. Do you have any suggestions?

4 participants