
Add instructions on how to run WSI-level predictions #10

Closed
andreped opened this issue May 4, 2022 · 10 comments
Labels: enhancement (New feature or request)

andreped (Owner) commented May 4, 2022

As our TileImporter script does not support multi-class, the alternative is to run predictions at the WSI level.

This requires doing something slightly different when running predictions compared to what was shown in the tutorial video.

However, there does not seem to be any documentation for this. It should be added to assist users.

andreped added the enhancement (New feature or request) label May 4, 2022
SahPet (Collaborator) commented May 4, 2022

A tutorial/description of how to run multiclass predictions with DeepMIB at the WSI level (no patches needed) on rendered, downsampled whole slide images (WSIs) exported from QuPath, and how to import the predictions directly back into QuPath:

  • Download and install the latest MIB version, with support for creating TIFs directly from predictions: http://mib.helsinki.fi/web-update/MIB2_Win.exe

  • Download and install the latest QuPath version: https://github.com/qupath/qupath/releases/download/v0.3.2/QuPath-0.3.2-Windows.msi

  • This is an example from the PANDA dataset (whole slide images from prostate with corresponding masks, where label values 1, 2, 3, 4, 5 = "Stroma", "Benign", "Gleason3", "Gleason4", "Gleason5"), which can be downloaded here: https://www.kaggle.com/competitions/prostate-cancer-grade-assessment/data

  • In this example we've used a few images from the PANDA Radboud dataset:
    Screenshot_1672

  • Annotations can be imported using the same script that we'll use later to import downsampled WSI prediction tifs from DeepMIB, which can be found in the NoCodeSeg repository (created by @andreped): https://github.com/andreped/NoCodeSeg/blob/main/source/importStitchedTIFfromMIB.groovy

  • You will have to add a "Labels_" prefix to each file name (e.g. using Bulk Rename Utility, see below), or remove that prefix from the script a bit further down. The image below is slightly inaccurate, as FastPathology is set to "true"; it should be "false", with DeepMIB set to "true":
    Screenshot_1685
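If you'd rather not install Bulk Rename Utility, the "Labels_" prefixing can also be scripted. A minimal Python sketch (the `add_labels_prefix` helper and the example path are hypothetical, not part of the tutorial's tooling):

```python
from pathlib import Path

def add_labels_prefix(folder):
    """Prepend 'Labels_' to every file in `folder` (files already prefixed are skipped)."""
    renamed = []
    for f in sorted(Path(folder).iterdir()):
        if f.is_file() and not f.name.startswith("Labels_"):
            target = f.with_name("Labels_" + f.name)
            f.rename(target)
            renamed.append(target.name)
    return renamed

# Example (hypothetical path):
# add_labels_prefix(r"C:\QuPathProject\exported_masks")
```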

  • Say we've already trained a deep segmentation network in DeepMIB on the PANDA dataset, by exporting tiles from QuPath and training in DeepMIB, as described in this tutorial: https://youtu.be/9dTfUwnL6zY

  • The network was trained on patches exported from the PANDA WSIs with corresponding imported annotations. We've deleted the "Stroma" class (import value 1) annotations and created a combined "Tumor" class for "Gleason3", "Gleason4", and "Gleason5" with this QuPath script:

replaceClassification('Gleason3', 'Tumor')
replaceClassification('Gleason4', 'Tumor')
replaceClassification('Gleason5', 'Tumor')
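For reference, the same merging can be expressed outside QuPath on the raw PANDA mask values. A rough Python sketch, assuming the masks are 2-D arrays of the label values listed above; the output coding (0 = background/Stroma, 1 = Benign, 2 = Tumor) is my assumption, not from the tutorial:

```python
# Assumed remapping: Stroma (1) is dropped to background, and Gleason3/4/5
# (3, 4, 5) are merged into a single "Tumor" class, mirroring the QuPath script above.
REMAP = {0: 0, 1: 0, 2: 1, 3: 2, 4: 2, 5: 2}

def remap_mask(mask):
    """Remap a 2-D list of PANDA label values to the merged class coding."""
    return [[REMAP[v] for v in row] for row in mask]
```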
  • A lot of the PANDA dataset is incompletely or incorrectly labelled. Say we want to use the trained deep segmentation network to predict a few more WSIs that we've identified as incorrectly labelled, so that we can later correct the annotations in QuPath and add them to our training data.

  • First, we'll export a rendered 2x downsampled version of the PANDA WSIs we want to predict in DeepMIB using this script from Pete Bankhead, the creator of QuPath:

/**
 * Script to export a rendered (RGB) image in QuPath v0.2.0.
 *
 * This is much easier if the image is currently open in the viewer,
 * then see https://qupath.readthedocs.io/en/latest/docs/advanced/exporting_images.html
 *
 * The purpose of this script is to support batch processing (Run -> Run for project (without save)),
 * while using the current viewer settings.
 *
 * Note: This was written for v0.2.0 only. The process may change in later versions.
 *
 * @author Pete Bankhead
 */

import qupath.imagej.tools.IJTools
import qupath.lib.gui.images.servers.RenderedImageServer
import qupath.lib.gui.viewer.overlays.HierarchyOverlay
import qupath.lib.regions.RegionRequest

import static qupath.lib.gui.scripting.QPEx.*

// It is important to define the downsample!
// This is required to determine annotation line thicknesses
double downsample = 2

// Add the output file path here
String path = buildFilePath(PROJECT_BASE_DIR, 'Rendered_DS2_WSIs_040522', getProjectEntry().getImageName() + '.jpg')

// Request the current viewer for settings, and current image (which may be used in batch processing)
def viewer = getCurrentViewer()
def imageData = getCurrentImageData()

// Create a rendered server that includes a hierarchy overlay using the current display settings
def server = new RenderedImageServer.Builder(imageData)
    .downsamples(downsample)
    .layers(new HierarchyOverlay(viewer.getImageRegionStore(), viewer.getOverlayOptions(), imageData))
    .build()

// Write or display the rendered image
if (path != null) {
    mkdirs(new File(path).getParent())
    writeImage(server, path)
} else
    IJTools.convertToImagePlus(server, RegionRequest.createInstance(server)).getImage().show()

  • The 2x downsampled images are stored in a folder in the QuPath project. Remember to move the resultant jpg images into an "Images" folder before proceeding to DeepMIB prediction, as DeepMIB looks for an "Images" folder in the specified prediction folder:
    Screenshot_1682
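Moving the exports can also be scripted. A small Python sketch (the `move_into_images` helper is hypothetical; the "Images" folder name is the one DeepMIB expects, as described above):

```python
import shutil
from pathlib import Path

def move_into_images(folder):
    """Move all .jpg exports in `folder` into the 'Images' subfolder that DeepMIB looks for."""
    folder = Path(folder)
    images = folder / "Images"
    images.mkdir(exist_ok=True)
    moved = []
    for jpg in sorted(folder.glob("*.jpg")):
        shutil.move(str(jpg), str(images / jpg.name))
        moved.append(jpg.name)
    return moved
```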

  • We'll set this as the prediction folder in DeepMIB and create a "4_Results...." folder which will contain the tif files from the predictions.
    Screenshot_1675

  • We'll set the output from the prediction to the TIF compressed format and also tick "bigimage mode" (this prevents overloading the GPU when predicting on larger image files).
    Screenshot_1678

  • The resultant tifs can now be found here after pressing "Predict":
    Screenshot_1684

  • For this demo I've used jpgs converted from the original tiff images in the PANDA dataset to save some disk space. For some reason, ".jpg" is included in the filename after the rendered export from QuPath, so we'll remove ".jpg" from the filenames in Bulk Rename Utility (https://www.bulkrenameutility.co.uk/):
    Screenshot_1681
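This cleanup can likewise be scripted instead of using Bulk Rename Utility. A Python sketch (the `strip_jpg_token` helper is hypothetical); it only strips a ".jpg" embedded inside a name, so e.g. "slide1.jpg.tif" becomes "slide1.tif" while actual .jpg files are left alone:

```python
from pathlib import Path

def strip_jpg_token(folder):
    """Remove a stray '.jpg' embedded in filenames (e.g. 'slide1.jpg.tif' -> 'slide1.tif')."""
    renamed = []
    for f in sorted(Path(folder).iterdir()):
        # f.stem drops only the final extension, so 'slide1.jpg.tif' has stem 'slide1.jpg'
        if f.is_file() and ".jpg" in f.stem:
            target = f.with_name(f.name.replace(".jpg", "", 1))
            f.rename(target)
            renamed.append(target.name)
    return sorted(renamed)
```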

  • We'll also copy the path to the "ResultsModels" folder in our DeepMIB project directory and change the backslashes to forward slashes in Notepad first:
    Screenshot_1680
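Instead of editing the path in Notepad, the backslash-to-forward-slash conversion is a one-liner in Python (the example path is hypothetical):

```python
def to_forward_slashes(path):
    """Convert a copied Windows path to the forward-slash form used in the Groovy import script."""
    return path.replace("\\", "/")

# Example (hypothetical path):
# to_forward_slashes(r"C:\DeepMIB_project\ResultsModels")
# -> "C:/DeepMIB_project/ResultsModels"
```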

  • Now we're ready to import the WSI predictions into QuPath: just copy the path to the "ResultsModels" folder into the script from above and run it in batch mode for all the images you've predicted in DeepMIB (the script can be found here: https://github.com/andreped/NoCodeSeg/blob/main/source/importStitchedTIFfromMIB.groovy)
    Screenshot_1683

  • That's it. You've now predicted WSIs with a multiclass deep segmentation network in DeepMIB from downsampled WSIs exported from QuPath, and imported the multiclass predictions into QuPath, all without patch generation. You're now ready to correct your predictions in QuPath and expand your dataset further through this active learning process.

  • The multiclass-supporting import script above was created by @andreped.

andreped (Owner, Author) commented May 4, 2022

I assume @SahPet's comment above will be of interest to you, @pr4deepr, @ajr82, @aaronsathya.

I have moved this comment to its own wiki page:
https://github.com/andreped/NoCodeSeg/wiki/Tutorial-on-how-to-import-multiclass-predictions-from-MIB-into-QuPath

Great work, @SahPet !!

pr4deepr (Contributor) commented May 4, 2022

Great stuff.
Thanks for tagging me!

ajr82 commented May 4, 2022

Thank you @andreped and @SahPet!
We actually tried predicting (single-class) directly on WSIs (.svs) in DeepMIB, but it didn't work out well. Your workflow, with the downsampling and jpg files, will be much more efficient.
We also saw improvements in training after using your fast-stain-normalization technique on the tiles. Now, with the DeepLabV3Resnet18 architecture in the latest release of DeepMIB, things will likely improve further in our upcoming projects.
Thank you for your help!

andreped (Owner, Author) commented May 4, 2022

Great to hear, @ajr82!

I will be returning to the task of integrating stain normalization into FastPathology quite soon, which will enable you to perform stain normalization during deployment.

Ajaxels (Collaborator) commented May 4, 2022

@ajr82

We actually tried predicting (singleclass) directly on WSIs (.svs) in DeepMIB, but it didn't work out well. Your workflow, with the downsampling and jpg files will be much more efficient.

Have you been using the bigimage mode? We currently have a beta version of DeepMIB that works much better with large rasters.

ajr82 commented May 4, 2022

@Ajaxels Hi Ilya, not that much, but eventually we plan on using it. I will try the beta version; thanks for letting me know. Also, thank you very much for DeepMIB. DeepMIB and NoCodeSeg, along with QuPath, have made our ongoing projects much easier to work on (we are all pathology residents and pathologists)!

Ajaxels (Collaborator) commented May 4, 2022

@ajr82 That beta is not deposited yet; if you want, I can share the Matlab version.
But I also suggest testing at least the bigimage mode (there is a checkbox in the Predict tab).

aaronsathya commented May 4, 2022

Wow, fantastic. Awesome work, @SahPet and @andreped! We were able to get multiclass working through our email discussion, but having this tutorial will be significant moving forward.

andreped (Owner, Author) commented May 5, 2022

As this seems to have been solved, I am closing for now. If you have any other requests or issues, please let me know by opening a new issue.

andreped closed this as completed May 5, 2022

6 participants