
Improve pixel classifier measurement performance #1076

Merged: 6 commits into qupath:main, Oct 15, 2022

Conversation

petebankhead
Member

Also fixes a possible bug when making measurements from point annotations.
Inspired by this discussion: https://forum.image.sc/t/qupath-measure-pixel-classifier-area-per-cell-detection-for-wsis/72701

- Avoid creating a defensive copy for each tile, which could sometimes be a bottleneck.
- Add warnings to ImageServer about cached tiles and mutability.
- Filter out non-intersecting tiles earlier, so their predictions are never requested.
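The defensive-copy change can be illustrated with a minimal sketch. This is a hypothetical cache, not QuPath's actual `ImageServer` API: the old path copies the raster on every request, while the new path returns the cached image directly and relies on callers treating it as read-only (hence the warnings about mutability).

```java
import java.awt.image.BufferedImage;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch (names are not QuPath's API): a tile cache that can hand
// back the cached image directly instead of copying it for every request.
class TileCache {
    private final Map<String, BufferedImage> cache = new ConcurrentHashMap<>();

    void put(String key, BufferedImage img) {
        cache.put(key, img);
    }

    // Old behavior: copy every tile defensively -- safe, but costly per request.
    BufferedImage getTileCopy(String key) {
        BufferedImage img = cache.get(key);
        if (img == null)
            return null;
        BufferedImage copy = new BufferedImage(img.getWidth(), img.getHeight(), img.getType());
        img.copyData(copy.getRaster());
        return copy;
    }

    // New behavior: return the cached tile itself.
    // WARNING: callers must treat the returned image as read-only,
    // since it may be shared with every other reader of the cache.
    BufferedImage getTileDirect(String key) {
        return cache.get(key);
    }
}
```

Skipping the copy saves an allocation and a raster copy per tile, which matters when thousands of tiles are requested for a whole-slide image.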
@petebankhead
Member Author

petebankhead commented Oct 15, 2022

I tested performance using CMU-1.svs.
I used a very basic thresholder and simple classifier trained for 3 classes, saved for both classification and probability output - then ran the script at the bottom.

Using a Mac Studio (2022) with M1 Max and 32 GB RAM the processing time was:

| v0.3.0 | v0.4.0-SNAPSHOT |
| --- | --- |
| 593.9 s | 60.1 s |

The results are identical as far as I can tell. So... quite a substantial difference :)

Cell detection took close to 30 s, with 326,498 cells detected.

```groovy
def checkpoints = [:]

setImageType('BRIGHTFIELD_H_E')
setColorDeconvolutionStains('{"Name" : "H&E default", "Stain 1" : "Hematoxylin", "Values 1" : "0.65111 0.70119 0.29049", "Stain 2" : "Eosin", "Values 2" : "0.2159 0.8012 0.5581", "Background" : " 255 255 255"}')

clearAllObjects()

checkpoints << ['Tissue detection': System.currentTimeMillis()]

createAnnotationsFromPixelClassifier("Tissue detection", 10000.0, 0.0, "INCLUDE_IGNORED")

checkpoints << ['Cell detection': System.currentTimeMillis()]

selectAnnotations()
runPlugin('qupath.imagej.detect.cells.WatershedCellDetection', '{"detectionImageBrightfield": "Hematoxylin OD",  "requestedPixelSizeMicrons": 1.0,  "backgroundRadiusMicrons": 8.0,  "medianRadiusMicrons": 0.0,  "sigmaMicrons": 1.5,  "minAreaMicrons": 10.0,  "maxAreaMicrons": 400.0,  "threshold": 0.1,  "maxBackground": 2.0,  "watershedPostProcess": true,  "cellExpansionMicrons": 5.0,  "includeNuclei": true,  "smoothBoundaries": true,  "makeMeasurements": true}')

for (classifier in ['Some probability', 'Some classification']) {

    // Create annotation measurements
    checkpoints << ["Annotation measurements for $classifier": System.currentTimeMillis()]
    selectAnnotations()
    addPixelClassifierMeasurements(classifier, classifier)

    // Create cell measurements
    checkpoints << ["Cell measurements for $classifier": System.currentTimeMillis()]
    selectCells()
    addPixelClassifierMeasurements(classifier, classifier)
}
checkpoints << ["Done": System.currentTimeMillis()]
resetSelection()
println 'Done!'

def entries = checkpoints.entrySet() as List
println "Total time: \t${entries[-1].value - entries[0].value} ms"
for (int i = 0; i < entries.size()-1; i++) {
    println "    ${entries[i].key} \t${entries[i+1].value - entries[i].value}"
}
```

@petebankhead petebankhead added this to the v0.4.0 milestone Oct 15, 2022
- Fix problems getting pixel classification measurements at boundary tiles, where the mask image is larger than the tile image.
- Restrict prediction using annotation ROIs, rather than only their bounding boxes, as was previously the case.
@petebankhead
Member Author

Latest commit adds more options to restrict where live pixel classifier prediction is calculated.

Previously, it could be restricted to annotations - but using their full bounding box. This could sometimes still result in very large regions being processed.

[Screenshot: annotations_bounds]

Now it's also possible to restrict using the annotation ROI directly (i.e. the ROI shape intersects the tiled region that may be processed). This can reduce the amount of processing required substantially in some cases.

[Screenshot: annotations_only]

Both options still exist, since the more complex calculations to restrict the predicted regions could potentially slow things down in some cases.
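The trade-off between the two restriction modes can be sketched as follows. This is purely illustrative (the class and method names are not QuPath's API): checking a tile against a bounding box is a cheap rectangle test, while checking it against the ROI shape is more expensive per tile but lets far fewer tiles through when the annotation is diagonal or irregular.

```java
import java.awt.Rectangle;
import java.awt.geom.Area;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: restricting prediction tiles by an annotation's
// bounding box versus by its actual ROI shape. A diagonal annotation can have
// a bounding box covering many tiles the shape itself never touches.
class TileFilter {

    // Tiles to process when restricting by bounding box only (cheap test).
    static List<Rectangle> tilesInBounds(Rectangle bounds, int tileSize, int nx, int ny) {
        List<Rectangle> tiles = new ArrayList<>();
        for (int y = 0; y < ny; y++)
            for (int x = 0; x < nx; x++) {
                Rectangle tile = new Rectangle(x * tileSize, y * tileSize, tileSize, tileSize);
                if (tile.intersects(bounds))
                    tiles.add(tile);
            }
        return tiles;
    }

    // Tiles to process when restricting by the ROI shape itself
    // (more work per tile, but fewer tiles pass for non-rectangular ROIs).
    static List<Rectangle> tilesInShape(Area roi, int tileSize, int nx, int ny) {
        List<Rectangle> tiles = new ArrayList<>();
        for (int y = 0; y < ny; y++)
            for (int x = 0; x < nx; x++) {
                Rectangle tile = new Rectangle(x * tileSize, y * tileSize, tileSize, tileSize);
                if (roi.intersects(tile))
                    tiles.add(tile);
            }
        return tiles;
    }
}
```

For a thin diagonal ROI crossing a 4x4 tile grid, the bounding-box test passes all 16 tiles while the shape test passes only those touching the diagonal band, which is why keeping both options makes sense: for near-rectangular annotations the extra geometry work buys nothing.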

Avoid requesting all tiles for pixel classification measurements up front. This reduces the risk of out-of-memory errors when measuring large regions (e.g. the full image).
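The idea behind that last commit can be sketched with a lazy iterator. The types here are hypothetical (not QuPath's actual classes): rather than materializing every tile's pixels up front in one list, only the lightweight tile requests are held eagerly, and pixels are read one tile at a time as the iterator advances, so peak memory stays bounded even for a full-image region.

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Function;

// Hypothetical sketch: lazy tile reading for measurement passes.
// R = a lightweight tile request/descriptor, T = the (expensive) tile pixels.
class LazyTiles<R, T> implements Iterable<T> {
    private final List<R> requests;       // cheap descriptors, held eagerly
    private final Function<R, T> reader;  // reads pixels on demand

    LazyTiles(List<R> requests, Function<R, T> reader) {
        this.requests = requests;
        this.reader = reader;
    }

    @Override
    public Iterator<T> iterator() {
        Iterator<R> it = requests.iterator();
        return new Iterator<T>() {
            @Override public boolean hasNext() { return it.hasNext(); }
            // Pixels are only read here, one tile at a time, so at most one
            // tile's worth of image data needs to be live per consumer.
            @Override public T next() { return reader.apply(it.next()); }
        };
    }
}
```

A consumer can then accumulate measurements tile by tile and let each tile become garbage before the next is read.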
@petebankhead petebankhead merged commit fcbbf62 into qupath:main Oct 15, 2022
@petebankhead petebankhead deleted the pixel-measurements branch October 15, 2022 11:19