Merge pull request #77 from finglis/0.4-wsinfer

0.4 wsinfer

petebankhead committed Sep 13, 2023
2 parents a07645d + 1e1a825 commit 74f5832
Showing 6 changed files with 146 additions and 29 deletions.
2 changes: 2 additions & 0 deletions docs/concepts/objects.md
@@ -74,12 +74,14 @@ Some of these performance issues have been addressed in v0.2.0, and it is now fe
Nevertheless, working with annotations remains rather more computationally expensive compared to working with detections.
:::

(concepts-tiles)=
:::{admonition} Special examples of detections
In addition to the types defined above, there are two more specialized detection subtypes:

> **Cell objects** <br />
> This has two ROIs - the main one represents the cell boundary, while a second (optional) ROI represents the nucleus.
>
>
> **Tile objects** <br />
> Differs from a standard detection in that a tile has less intrinsic 'meaning' in itself - i.e. it does not directly correspond to a recognizable structure within the image.
> See {doc}`../tutorials/superpixels` for an example of tiles in action.
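
For scripting, the difference between these subtypes is easy to see - cell objects expose a second, optional nucleus ROI alongside their boundary ROI.
A minimal Groovy sketch, using QuPath's standard `getCellObjects()` and `getNucleusROI()` scripting calls:

```groovy
// Minimal sketch: inspect the two ROIs of each cell object in the current image
for (def cell in getCellObjects()) {
    def boundary = cell.getROI()        // main ROI - the cell boundary
    def nucleus = cell.getNucleusROI()  // second ROI - may be null if no nucleus was detected
    println "Cell area: ${boundary.getArea()}; nucleus area: ${nucleus ? nucleus.getArea() : 'n/a'}"
}
```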
Binary file added docs/deep/images/wsinfer.png
Binary file added docs/deep/images/wsinfer_options.png
1 change: 1 addition & 0 deletions docs/deep/index.md
@@ -6,4 +6,5 @@
djl
bioimage
stardist
wsinfer
```
59 changes: 30 additions & 29 deletions docs/deep/stardist.md
@@ -7,6 +7,7 @@ It exists as a [Python library](https://github.com/mpicbg-csbd/stardist) and [Fi
This page describes how to start using StarDist 2D directly within QuPath as an alternative method of cell detection.

:::{admonition} Cite the paper!
:class: warning
If you use StarDist in a publication, be sure to cite it:

> - Uwe Schmidt, Martin Weigert, Coleman Broaddus, and Gene Myers. [Cell Detection with Star-convex Polygons](https://arxiv.org/abs/1806.03535). *International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)*, Granada, Spain, September 2018.
@@ -77,17 +78,17 @@ The following script applies the *he_heavy_augment.pb* StarDist model to a brigh
import qupath.ext.stardist.StarDist2D
// Specify the model file (you will need to change this!)
-var pathModel = '/path/to/he_heavy_augment.pb'
+def pathModel = '/path/to/he_heavy_augment.pb'
-var stardist = StarDist2D.builder(pathModel)
+def stardist = StarDist2D.builder(pathModel)
.threshold(0.5) // Prediction threshold
.normalizePercentiles(1, 99) // Percentile normalization
.pixelSize(0.5) // Resolution for detection
.build()
// Run detection for the selected objects
-var imageData = getCurrentImageData()
-var pathObjects = getSelectedObjects()
+def imageData = getCurrentImageData()
+def pathObjects = getSelectedObjects()
if (pathObjects.isEmpty()) {
Dialogs.showErrorMessage("StarDist", "Please select a parent object!")
return
@@ -117,18 +118,18 @@ The following script applies the *dsb2018_heavy_augment.pb* model to the DAPI ch
import qupath.ext.stardist.StarDist2D
// Specify the model file (you will need to change this!)
-var pathModel = '/path/to/dsb2018_heavy_augment.pb'
+def pathModel = '/path/to/dsb2018_heavy_augment.pb'
-var stardist = StarDist2D.builder(pathModel)
+def stardist = StarDist2D.builder(pathModel)
.threshold(0.5) // Probability (detection) threshold
.channels('DAPI') // Specify detection channel
.normalizePercentiles(1, 99) // Percentile normalization
.pixelSize(0.5) // Resolution for detection
.build()
// Run detection for the selected objects
-var imageData = getCurrentImageData()
-var pathObjects = getSelectedObjects()
+def imageData = getCurrentImageData()
+def pathObjects = getSelectedObjects()
if (pathObjects.isEmpty()) {
Dialogs.showErrorMessage("StarDist", "Please select a parent object!")
return
@@ -257,18 +258,18 @@ Another customization is to include the probability estimates as measurements fo
import qupath.ext.stardist.StarDist2D
// Specify the model file (you will need to change this!)
-var pathModel = '/path/to/he_heavy_augment.pb'
+def pathModel = '/path/to/he_heavy_augment.pb'
-var stardist = StarDist2D.builder(pathModel)
+def stardist = StarDist2D.builder(pathModel)
.threshold(0.1) // Prediction threshold
.normalizePercentiles(1, 99) // Percentile normalization
.pixelSize(0.5) // Resolution for detection
.includeProbability(true) // Include prediction probability as measurement
.build()
// Run detection for the selected objects
-var imageData = getCurrentImageData()
-var pathObjects = getSelectedObjects()
+def imageData = getCurrentImageData()
+def pathObjects = getSelectedObjects()
if (pathObjects.isEmpty()) {
Dialogs.showErrorMessage("StarDist", "Please select a parent object!")
return
@@ -303,9 +304,9 @@ A similar distance-based expansion can also be used with StarDist, with optional
import qupath.ext.stardist.StarDist2D
// Specify the model file (you will need to change this!)
-var pathModel = '/path/to/dsb2018_heavy_augment.pb'
+def pathModel = '/path/to/dsb2018_heavy_augment.pb'
-var stardist = StarDist2D.builder(pathModel)
+def stardist = StarDist2D.builder(pathModel)
.threshold(0.5) // Probability (detection) threshold
.channels('DAPI') // Select detection channel
.normalizePercentiles(1, 99) // Percentile normalization
@@ -318,8 +319,8 @@ var stardist = StarDist2D.builder(pathModel)
.build()
// Run detection for the selected objects
-var imageData = getCurrentImageData()
-var pathObjects = getSelectedObjects()
+def imageData = getCurrentImageData()
+def pathObjects = getSelectedObjects()
if (pathObjects.isEmpty()) {
Dialogs.showErrorMessage("StarDist", "Please select a parent object!")
return
@@ -368,7 +369,7 @@ There are even more options available than those described above.
Here is an example showing most of them:

```groovy
-var stardist = StarDist2D.builder(pathModel)
+def stardist = StarDist2D.builder(pathModel)
.threshold(0.5) // Probability (detection) threshold
.channels('DAPI') // Select detection channel
.normalizePercentiles(1, 99) // Percentile normalization
@@ -404,7 +405,7 @@ One of the most useful extra options to the builder is `preprocessing`, which ma
For example, rather than normalizing each image tile individually (as `normalizePercentiles` will do), we can normalize pixels using fixed values, for example with

```groovy
-var stardist = StarDist2D.builder(pathModel)
+def stardist = StarDist2D.builder(pathModel)
.threshold(0.5) // Prediction threshold
.preprocess( // Extra preprocessing steps, applied sequentially
ImageOps.Core.subtract(100),
@@ -423,13 +424,13 @@ If needed, we can add extra things like filters to reduce noise as well.

```groovy
// Get current image - assumed to have color deconvolution stains set
-var imageData = getCurrentImageData()
-var stains = imageData.getColorDeconvolutionStains()
+def imageData = getCurrentImageData()
+def stains = imageData.getColorDeconvolutionStains()
// Set everything up with single-channel fluorescence model
-var pathModel = '/path/to/dsb2018_heavy_augment.pb'
+def pathModel = '/path/to/dsb2018_heavy_augment.pb'
-var stardist = StarDist2D.builder(pathModel)
+def stardist = StarDist2D.builder(pathModel)
.preprocess( // Extra preprocessing steps, applied sequentially
ImageOps.Channels.deconvolve(stains),
ImageOps.Channels.extract(0),
@@ -456,17 +457,17 @@ It only requires a change to input a map linking StarDist prediction labels to Q

```groovy
// Define model and resolution
-var pathModel = "/path/to/classification/model.pb"
+def pathModel = "/path/to/classification/model.pb"
double pixelSize = 0.5
// Define a classification map, connecting prediction labels and classification names
-var classifications = [
+def classifications = [
0: 'Background',
1: 'Stroma',
2: 'Tumor'
]
-var stardist = StarDist2D.builder(pathModel)
+def stardist = StarDist2D.builder(pathModel)
.threshold(0.5)
.simplify(0)
.classificationNames(classifications) // Include names so that classifications can be applied
@@ -476,8 +477,8 @@ var stardist = StarDist2D.builder(pathModel)
.build()
// Run detection for the selected objects
-var imageData = getCurrentImageData()
-var pathObjects = getSelectedObjects()
+def imageData = getCurrentImageData()
+def pathObjects = getSelectedObjects()
if (pathObjects.isEmpty()) {
Dialogs.showErrorMessage("StarDist", "Please select a parent object!")
return
@@ -502,7 +503,7 @@ Unzipped examples from the [stardist-imagej repository](https://github.com/stard
You will also need to give QuPath the path to the *folder* containing the model files in this case, e.g.

```groovy
-var pathModel = '/path/to/dsb2018_heavy_augment' // A folder, not a file
+def pathModel = '/path/to/dsb2018_heavy_augment' // A folder, not a file
```

:::{admonition} Troubleshooting
@@ -537,7 +538,7 @@ To optimize StarDist using OpenVINO, download [QuPath OpenVINO Extension](https:
```groovy
// Specify the model directory (you will need to change this!)
def pathModel = '/path/to/converted_model.xml'
-var dnn = qupath.ext.openvino.OpenVINOTools.createDnnModel('/path/to/model.xml')
+def dnn = qupath.ext.openvino.OpenVINOTools.createDnnModel('/path/to/model.xml')
def stardist = StarDist2D.builder(dnn)
...
.build()
113 changes: 113 additions & 0 deletions docs/deep/wsinfer.md
@@ -0,0 +1,113 @@
(wsinfer-extension)=
# WSInfer

The [WSInfer QuPath extension](https://github.com/qupath/qupath-extension-wsinfer/) makes it possible to do patch-based deep learning inference for digital pathology, without any need for scripting.

It's a collaboration between the QuPath group (the extension) and Stony Brook University ([WSInfer](https://wsinfer.readthedocs.io/en/latest/)).

:::{admonition} Cite the paper!
:class: warning
If you use WSInfer and/or this extension in a publication, please make sure to cite our preprint at <https://arxiv.org/abs/2309.04631>.
:::

## Requirements

- QuPath [version 0.4](https://qupath.github.io/) (installation instructions [here](https://qupath.readthedocs.io/en/0.4/docs/intro/installation.html)).
- At least one whole slide image
- [WSInfer QuPath Extension](https://github.com/qupath/qupath-extension-wsinfer/releases)
- PyTorch (this can be downloaded while using the extension)

## Set-up

With QuPath installed and running, drag and drop the WSInfer extension into the application and restart QuPath.
Once installed, open up an image and run the extension via {menuselection}`Extensions --> WSInfer`.
You should see the window below:

:::{figure} images/wsinfer.png
:align: center
:class: shadow-image
:width: 40%

The WSInfer user interface
:::

:::{note}
You'll need internet access to start the extension and download the models.
:::

## Whole Slide Inference

### 1. Select a model

Select a model from the dropdown menu and click the download icon to start the download.
You should see a notification when the download is complete.

### 2. Create or select an annotation

Create an annotation, or select pre-existing annotations or tiles, that you wish to run the model on.
If this is your first time running WSInfer, it's recommended to keep the annotation small, to test the processing speed before running on a larger region.
This might take some time, depending on your computer's processing speed.

:::{admonition} Select tiles or annotations?
WSInfer assigns classifications to [tile objects](concepts-tiles).

Most of the time, you should draw/select annotations on the image before running WSInfer.
The WSInfer extension will then create the tiles that it needs.

The size of the automatically created tiles will match the patch size WSInfer uses for inference.
That's why the tiles generated for different models can have different sizes: it depends on the patch size used to train each model.

*Sometimes* you might want to reuse existing tiles, and append the measurements made by WSInfer to them.
This is especially useful if you want to run WSInfer multiple times using different models.
This is why there is also an option to select tiles, as an alternative to selecting annotations.

When you do that, WSInfer won't create new tiles - *but it will still use patches based on the resolution and patch size used to train the model*.
These patches don't necessarily have to correspond exactly to the tiles shown in QuPath - they might be bigger or smaller - but they should still be centered on the same pixels.
:::

### 3. Run

Check that you have an annotation selected, then click {guilabel}`Run`; if all the requirements are present, processing will begin.
If you don't have PyTorch yet, you will be prompted to download it (this may well be > 100 MB, so may take a while).
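
The same step can be scripted, which is handy for batch processing.
A short sketch, assuming the extension's `WSInfer.runInference` scripting method; the model name below is just an example from the WSInfer zoo, so check the extension's README for the exact identifiers:

```groovy
// Hedged sketch: the scripting equivalent of selecting annotations and clicking run
import qupath.ext.wsinfer.WSInfer

selectAnnotations()  // or selectTiles() to reuse existing tiles
WSInfer.runInference("kaczmarj/breast-tumor-resnet34.tcga-brca")  // model name is an assumption
```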

### 4. View Results

Once the progress bar is complete, the results can be visualized using the tools in the {guilabel}`View Results` section.

The {guilabel}`Measurement Maps` tool presents the score of each tile by color, and can be controlled with the three toggle buttons to show, hide, or fill the annotations and detections.

The slider can be used to increase or decrease the fill opacity so the tissue features can be seen under the WSInfer scores.

The {guilabel}`Results Table` provides details for each tile and the option to export for further analysis.
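
If you prefer scripting, the same per-tile measurements can be saved with QuPath's standard `saveDetectionMeasurements` function (the output path below is a placeholder):

```groovy
// Sketch: export all tile/detection measurements for analysis elsewhere
saveDetectionMeasurements('/path/to/wsinfer-results.tsv')
```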

## Additional Options

You can also use the additional options to specify where models should be stored, and the number of parallel threads used to read patches from the image (usually 1 or 2).

:::{figure} images/wsinfer_options.png
:align: center
:class: shadow-image
:width: 40%

WSInfer's additional options
:::

However, the most (potentially) exciting additional option is the {guilabel}`Preferred device`: the one that promises to (maybe) make things run much faster.

The options available will depend upon your computer's capabilities (at least as far as they could be discerned by Deep Java Library):

* **CPU**: This is generally the safest - and slowest - option, because it should be supported on all computers.
* **MPS**: This stands for *Metal Performance Shaders*, and should be available on recent Apple Silicon - it is the Mac version of GPU acceleration.
* **GPU**: This should appear if you have an NVIDIA GPU, CUDA... and some luck.

If either MPS or GPU works for you, it should reduce the time required for inference by a *lot*.
However, configuration for GPU can be tricky, as it will depend upon other hardware and software on your computer.


:::{admonition} PyTorch & CUDA versions
The WSInfer extension uses Deep Java Library to manage its PyTorch installation.
It won't automatically find any existing PyTorch you might have installed: Deep Java Library will download its own.

If you have a compatible GPU and want CUDA support, you'll need to ensure an appropriate CUDA version is installed *before* PyTorch is downloaded.
QuPath v0.4.x uses PyTorch 1.13.x by default, which is expected to work with CUDA 11.6 or 11.7.
:::
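
If you're unsure which PyTorch build was fetched, a quick sketch using Deep Java Library's `Engine` API can print the version and device; this assumes the PyTorch engine has already been downloaded:

```groovy
// Hedged sketch: ask Deep Java Library which PyTorch it is using, and on which device
import ai.djl.engine.Engine

def engine = Engine.getEngine("PyTorch")
println "PyTorch version: ${engine.getVersion()}"
println "Default device: ${engine.defaultDevice()}"  // e.g. cpu() or gpu(0)
```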
