Yes! This method was intended as a starting point (until I forgot about it...): https://github.com/qupath/qupath/blob/main/qupath-core-processing/src/main/java/qupath/opencv/tools/OpenCVTools.java#L931 Basically, the idea was that you'd need to create a … But as you pointed out on the forum, we also need to think about per-channel and joint-channel normalization... so defining how it should look in the end sounds like the trickiest bit. Two other thoughts:
It wouldn't be guaranteed to be correct, but it is likely to be a better estimate than using the downsampled image alone.
Background
Features like the StarDist extension or the built-in cell detection work by tiling the image data and processing the tiles in parallel to save time.
For cell detection, local background subtraction per tile makes sense, because a hard threshold follows the preprocessing steps. It is more questionable for deep-learning approaches: because each tile is normalized individually, "empty" or very sparse tiles have their noise stretched across the full intensity range, and the model will dream up cells.
Moreover, the reference implementation of StarDist performs normalization on the full image before splitting it into tiles. It would be nice to reach that point in QuPath as well.
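A small NumPy sketch of the problem described above (the synthetic image and tile coordinates are made up for illustration; the 1/99.8 percentiles follow the defaults commonly used with StarDist):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic image: bright "cells" in one corner, near-empty noise elsewhere.
image = rng.normal(loc=10.0, scale=2.0, size=(256, 256))
image[:64, :64] += rng.uniform(100.0, 200.0, size=(64, 64))

def normalize(x, pmin=1.0, pmax=99.8):
    """Percentile normalization with assumed StarDist-style defaults."""
    lo, hi = np.percentile(x, [pmin, pmax])
    return (x - lo) / (hi - lo + 1e-20)

empty_tile = image[128:192, 128:192]  # a background-only tile

# Global normalization: percentiles from the whole image, applied per tile.
lo, hi = np.percentile(image, [1.0, 99.8])
global_norm = (empty_tile - lo) / (hi - lo + 1e-20)

# Per-tile normalization: the empty tile's own percentiles stretch pure
# noise across the full [0, 1] range -- the "dreamed up cells" scenario.
per_tile_norm = normalize(empty_tile)

print(global_norm.max())    # stays close to zero
print(per_tile_norm.max())  # close to 1
```

With global percentiles the empty tile stays near zero, so the model sees it as background; with per-tile percentiles the same noise fills the model's expected input range.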
Project
The challenge lies in computing a meaningful quantile (the most common normalization approach) on the data at the same resolution as the one used during processing.
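One direction mentioned above is estimating the quantiles from a downsampled version of the image, since reading the full-resolution data just to compute percentiles can be prohibitive. A minimal sketch of why this is a reasonable (but not guaranteed) estimate, using striding as a stand-in for reading a lower pyramid level:

```python
import numpy as np

rng = np.random.default_rng(1)

# Full-resolution image: a smooth gradient plus noise.
full = np.add.outer(np.linspace(0, 100, 2048), np.linspace(0, 100, 2048))
full += rng.normal(scale=5.0, size=full.shape)

# Downsample by striding (stand-in for a lower-resolution pyramid level).
downsampled = full[::8, ::8]

full_lo, full_hi = np.percentile(full, [1.0, 99.8])
down_lo, down_hi = np.percentile(downsampled, [1.0, 99.8])

# The downsampled estimate is close, though not exact: downsampling
# (especially with averaging) can clip the extreme tails that high
# percentiles probe, so some correction may still be needed.
print(abs(down_lo - full_lo) / (full_hi - full_lo))
print(abs(down_hi - full_hi) / (full_hi - full_lo))
```

This is only an illustration of the estimation error, not a proposed QuPath API; how to expose per-channel vs joint-channel variants of this is exactly the open design question.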
The goals of this project would be: