
Positive pixel count on multiple annotations #67

Closed
mnolan1989 opened this issue Apr 27, 2017 · 14 comments

@mnolan1989 commented Apr 27, 2017

Hi,

First of all, QuPath is a great piece of software and I'm really enjoying using it. However, I'm finding that the positive pixel count is unpredictable: I often need to run the same command multiple times on the same annotation before anything is actually measured.

Is anyone else having this problem?

Thanks!

Matt

@Svidro commented Apr 27, 2017

I have not noticed mixed results, but there are some problems you might be running into with this experimental tool. First, I have occasionally had it fail to save or complete when the data set is too large; on fine settings you can quickly get 20+ GB output files. There is also a problem with thresholds, which Peter is aware of, that can cause it to crash.
I sometimes have better luck tiling and using the Cytokeratin tool, if that is an option for you.

@petebankhead (Member) commented Apr 28, 2017

I'm afraid that some of the commands flagged 'experimental' are more experimental than others... and that's one that is more experimental than most. It was added as a very simple counting method, but I only used it myself for some TMAs to have a quick comparison of the results against 'full' cell-by-cell analysis. It turns out to have some troubles that need to be fixed, especially when used in other contexts.

The problem @Svidro mentions is that it requires at least one 'Hematoxylin' pixel before it can return anything.
Another strange feature is that, if you look at the hierarchy, the 'Positive' region is inside the 'Negative' one.
And a third is that the 'Num pixels' value is a count of pixels at the downsample level used. This isn't necessarily 'wrong', but it's not ideal, because the measurement name doesn't say which downsample was used. It would be preferable to have the value converted to µm².
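Converting the raw count into a calibrated area is simple arithmetic. A minimal sketch, where the 0.25 µm/px pixel size and the downsample of 4 are hypothetical values that would in practice come from the image metadata:

```python
def pixels_to_area_um2(num_pixels, pixel_size_um, downsample):
    """Convert a pixel count made at a given downsample into an area in µm²."""
    # Each counted pixel covers (pixel_size_um * downsample) µm on each side.
    side_um = pixel_size_um * downsample
    return num_pixels * side_um ** 2

# 10,000 pixels counted at downsample 4 on a 0.25 µm/px image:
area_um2 = pixels_to_area_um2(10_000, 0.25, 4)  # -> 10000.0 µm²
```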

Some of these problems arose because the command was initially designed to generate 'Any staining' and 'DAB' regions; at that time, it was logical to return nothing if 'Any staining' was 0. It was also logical to put the 'DAB' region inside 'Any staining' in the hierarchy. Unfortunately, these aspects weren't updated when 'Any staining' was switched to become 'Hematoxylin'.

Added to all that, the command doesn't handle fluorescence or other stain types. For all these reasons, I expect that this command will be replaced or substantially changed at some point.

Therefore, while you could maybe work around the limitations of the positive pixel command, I'd suggest trying to use other commands for now if possible.

@mnolan1989 (Author) commented Apr 28, 2017

Hi @Svidro and @petebankhead

Thanks for getting back to me. I'm using positive pixel count to estimate the extent of pathology in defined annotations (TDP-43 in ALS motor cortex, H-DAB slides). I can't use positive cell detection because the pathology is varied in shape and structure, and a sizeable proportion of it is extracellular. However, I'm finding that it is OK as long as each annotation is drawn, then counted, then another annotation drawn, and so on. If you draw multiple annotations and try to run them simultaneously, it doesn't like it. I'm recording my output as a ratio of positive pixels per µm², so for me the number of negative pixels is irrelevant.

The software is already better and more user friendly than the ImageScope package we were using before, so thank you!

@Svidro commented Apr 28, 2017

That does sound rather like the memory limits I have run into. Even with 90 GB of RAM committed to a single slide, I sometimes have to split things up a bit. I hope to test how a newer processor handles things soon with a lower RAM cap, though! Just finished building a new PC :] Depending on how fine you want your measurements to be, you might also look at using a classifier on SLICs; I think the command is in roughly the same menu area. I like that it gives me a little more flexibility in automatically weeding out black bits or other things I'm not interested in, without having to hand-draw every little bit.

@petebankhead petebankhead self-assigned this May 6, 2017

@petebankhead petebankhead added the bug label May 6, 2017

petebankhead added a commit to petebankhead/qupath that referenced this issue Jan 24, 2018

Fixed positive pixel count bug
Fixed bug that required at least one hematoxylin pixel so as to return
a result, as described at qupath#67

(Nevertheless, this command still isn’t really to be recommended and
requires more improvement)
@pyushkevich commented May 24, 2018

Hi @mnolan1989 ,

I am curious whether you were successful in using QuPath for the TDP-43 inclusion quantification, and if you would mind sharing the parameters you used. I am starting to experiment with labeling Tau tangles over a NISSL counterstain, and also need to label extracellular inclusions. I am finding that the positive pixel count does well, but it also picks up some background stain, as well as dirt on the slides.

Thanks in advance!

The software is amazing!

@petebankhead (Member) commented May 24, 2018

Thanks @pyushkevich :) I'm also curious as to whether this was solved. I'm chipping in to mention that the positive pixel count should be quite a bit better if you use the beta version described here (it involves compiling it, but it's not really a painful process...). You might also see some benefit from adjusting the stain vectors - but the staining you mention is new to me, and not something I've encountered before.

(In the longer term, I plan that there will be much better alternatives to the pixel count - but realistically that is still some months away...)

@Svidro commented May 24, 2018

An interesting variant of this (brace yourself, Pete, for more of my crazy), depending on what and how you are measuring things, is converting your measurement area into a "pathCellObject" (whether it is hand drawn, tiles, etc.) and then running Subcellular detection on it for a bit more control. The segmentation allows you to do things like add further color measurements to the created objects, which then allows further thresholding (e.g. removing objects with too much of a color you are not looking for, to get rid of black junk).

I can go into more specifics if that would be of interest.

@pyushkevich commented May 24, 2018

Thanks for the beta suggestion, I will check it out!

I attached an example of the data - it does not seem too different from some of the examples online.

[attached image]

Curious, do you offer or plan to offer a supervised learning-based object detection tool, sort of like Ilastik? I develop a 3D image segmentation tool, ITK-SNAP (for MRIs and CTs), and we have had success using random forests for segmentation. The user paints some examples and the software extrapolates to the rest of the image. Unlike Ilastik, we don't have the user generate engineered features; we just train on neighboring intensity values and let the random forest figure out which features are important and which aren't. The random forest code (C++) is fairly self-contained, in case it is of any interest:

https://sourceforge.net/p/c3d/git/ci/master/tree/itkextras/RandomForest/

Thanks again,
Paul
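The neighborhood-intensity random-forest idea described above can be sketched as follows (a minimal illustration using scikit-learn and a synthetic image, not the ITK-SNAP code; the feature radius, image, and labeling are all made up):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def neighborhood_features(image, radius=1):
    """Stack each pixel's (2r+1)^2 neighborhood intensities as its feature vector."""
    padded = np.pad(image, radius, mode="edge")
    h, w = image.shape
    feats = [
        padded[dy:dy + h, dx:dx + w]
        for dy in range(2 * radius + 1)
        for dx in range(2 * radius + 1)
    ]
    return np.stack(feats, axis=-1).reshape(h * w, -1)

# Synthetic image: a bright square on a dark, noisy background.
rng = np.random.default_rng(0)
image = rng.normal(0.1, 0.02, (32, 32))
image[8:24, 8:24] += 0.8

# "Paint" a few training pixels; everything else stays unlabeled.
X = neighborhood_features(image)
flat = image.ravel()
seed = rng.choice(image.size, size=200, replace=False)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X[seed], (flat[seed] > 0.5).astype(int))

# Extrapolate to every pixel of the image.
prediction = clf.predict(X).reshape(image.shape)
```

The forest sees only raw neighborhood intensities and decides for itself which of them matter, which is the point being made about avoiding hand-engineered features.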

@Svidro commented May 24, 2018

Not sure if this is what you are interested in, and I only did a quick run at different types of measurements, but I:

  1. Converted the image to a TIFF so that I could have pixel measurements (necessary for Subcellular detections)
  2. Created a whole-image annotation
  3. Converted that into a cell
  4. Fixed up my color vectors and ran a subcellular detection on DAB (did not really do a great job there)
  5. Added intensity features (most of them)

I found that the residual did a decent job of picking out what I think are the extraneous black dots. I imagine there are better color vector sets you could use to identify those areas and eliminate them from analysis. Not sure if this is what you are looking for, though, before I go too crazy with it.

[attached image]
@mnolan1989 (Author) commented May 24, 2018

In the end it actually worked great - a substantial amount of the paper we are about to submit made use of positive pixel detection (QuPath is referenced!)

Tau is normally more heterogeneously shaped than pTDP-43; I don't use it routinely, as I work on ALS. When using the positive pixel count tool I only quantified user-defined annotations, so I could choose where to place them and avoid any bits of crud on the slide. Tweaking the colour deconvolution for your DAB channel might help. If there's a lot of background, I would try raising the primary antibody dilution. Regardless of the antibody, I find that incubating the primary overnight at 4 °C pretty much always gives the best signal with minimal background.

Regarding the settings, I basically just played around with the parameters until I found settings that struck a balance between being specific enough and not taking too much time to complete after clicking run. I then copied the generated script and applied it to every section. Hope this helps!

@petebankhead (Member) commented May 25, 2018

@pyushkevich

> Curious, do you offer or plan to offer a supervised learning-based object detection tool, sort of like Ilastik?

Yes! That is indeed what I was obscurely referencing. I have a working prototype, but it is some way away from being useful (e.g. it shows a live overlay, but this can't readily be converted into any meaningful measurements or objects). I plan to write a bit more about it whenever I get time to work on it again and have a clearer idea of when it'll be ready.

I'll send you a message, it would be great to discuss further and perhaps incorporate some of your experience from ITK-SNAP if you're interested.

@Svidro
Thank you, creative as always and nothing I'd ever have come up with :)

@mnolan1989

> In the end it actually worked great - a substantial amount of the paper we are about to submit made use of positive pixel detection (QuPath is referenced!)

Great! Thanks for confirming... and for referencing :) I don't know if you saw that I mentioned on Twitter recently that just over half the papers using QuPath this year didn't reference the Sci Reports publication - it would be very good to turn that around!

And thanks also for the extra information on the lab side.

@pyushkevich commented May 25, 2018

Of course, I would be happy to discuss the ITK-SNAP experience, and I hope some of the code can be directly usable.

Regarding your suggestion, how do I actually convert an annotation area to a cell?

@Svidro commented May 25, 2018

To clarify, since you know more coding than I do: you are replacing an ROI with a pathCellObject that has exactly the same coordinates.
Here is the code from somewhere on the forum: https://gist.github.com/Svidro/5829ba53f927e79bb6e370a6a6747cfd#file-change-annotations-into-cell-objects-groovy

That script is designed to target second-"level" annotations, as it was written to ignore the top-level annotation and convert the hand-drawn annotations within it into cells. You will probably want to change line 8 of that script:

```groovy
def targets = getObjects{return it.getLevel()!=1 && it.isAnnotation()}
```

to use something like `getAnnotationObjects()` if you do not have any annotation hierarchy.
If your area is too large, the subcellular detection may fail (it will be obvious if it happens: you get no segmentation). I have had it work successfully over very large areas, but on a whole slide I had to create subdivisions; I am not 100% sure what the limits are. If you run into that problem, you could also create your annotation area, tile it, and then convert the tiles into cells.

@petebankhead (Member) commented Mar 9, 2019

Closing this because of the lack of activity, and because it is addressed in the latest milestone release (especially through the pixel classifier).
