
RFC: Add Napari viewer to overlay detected spots on images #822

Closed
wants to merge 13 commits

Conversation

kevinyamauchi
Collaborator

What does this PR do?

This PR prototypes using the Napari GUI to view detected spots overlaid on an ImageStack. The Napari GUI is still an early prototype, so it has some bugs and brittle parts, but the performance for viewing spots on 3D volumes is far better than the current notebook-based solutions.

More usability updates are coming to the Napari GUI (e.g., contrast sliders) that will definitely improve the UX. However, the API should remain the same in the near term, so show_spots_napari() will still work.

Demo

To try the viewer, first pip install --upgrade napari-gui, then open test_napari.ipynb in the notebooks directory. It demos the Allen Institute smFISH notebook using the Napari viewer. For convenience, the notebook loads a saved version of the 3D spot detection results (allen_decoded) from the notebooks directory, since spot detection takes a while. We will remove this after initial testing.
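
For orientation, here is a rough sketch of the demo flow in that notebook (the import paths, the loader for the saved spots, and the show_spots_napari() signature shown here are assumptions, not the notebook verbatim):

from starfish.intensity_table import IntensityTable   # assumed import path
from starfish.plot import show_spots_napari           # assumed location; see the open questions below

# primary_image: the experiment's primary ImageStack, loaded earlier in the notebook
allen_decoded = IntensityTable.load("allen_decoded.nc")   # assumed loader and filename for the saved spots
show_spots_napari(allen_decoded, primary_image)           # assumed argument order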

Open questions

  • Where should this function go? I also noticed that there is a stub for a show_spots() method in IntensityTable. Maybe it should go there?
  • Is there an easy way to tell what type of image the spot attributes come from (e.g., 3D stack, MIP)? It would be nice to be able to automatically choose the type of projection of the 5D ImageStack to use for the display.
  • Noob question: how do I get a full r, c, z, x, y coordinate for a given feature in IntensityTable?

@ambrosejcarr
Member

@kevinyamauchi Nick says the recent pypi release should be a stable version for us to release against. Does this require any additional updates?

Where should this function go? I also noticed that there is a stub for a show_spots() method in IntensityTable. Maybe it should go there?

For this, I think show_napari() probably makes sense, and it should optionally take an ImageStack so that we can display the results of filters or spot calls.
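
As a usage sketch of what I have in mind (the placement on IntensityTable and the parameter name are a proposal, not a settled API):

intensities.show_napari()                      # spots alone
intensities.show_napari(stack=filtered_image)  # spots overlaid on a filtered ImageStack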

Is there an easy way to tell what type of image the spot attributes come from (e.g., 3D stack, MIP)? It would be nice to be able to automatically choose the type of projection of the 5D ImageStack to use for the display.

Not to my knowledge, but there will be when we get around to our data provenance work in #820.

Noob question: how do I get a full r, c, z, x, y coordinate for a given feature in IntensityTable?

@ttung do you know the answer to this?

@kevinyamauchi
Collaborator Author

For this, I think show_napari() probably makes sense, and it should optionally take an ImageStack so that we can display the results of filters or spot calls.

Currently it does take an ImageStack and displays the markers on it. Do you mean that it should be able to display markers without an ImageStack as well? Or do you mean there should be a single function for displaying ImageStacks or spot calls in Napari that can display just spots, just images, or spots and images?

@kevinyamauchi Nick says the recent pypi release should be a stable version for us to release against. Does this require any additional updates?

It should be good to go, but let me verify.

@kevinyamauchi
Collaborator Author

I checked, and napari-gui 0.0.5.1 works, but while using it I discovered a bug with certain ImageStack shapes. I'll fix it and update in the next day or two.

@kevinyamauchi
Collaborator Author

kevinyamauchi commented Dec 15, 2018

@ambrosejcarr I fixed the bug and added support for projected spots along z if the image is also projected along z. Let me know what you think! test_napari.ipynb is probably a good place to start.

@ambrosejcarr
Member

ambrosejcarr commented Dec 16, 2018

This looks great. I made some changes so that all tests except those that touch the test_napari notebook pass:

  1. I made ImageStack and IntensityTable mandatory arguments. The code treats them as such, so I felt comfortable making the change.
  2. I made cosmetic changes to the code.

Currently it does take an ImageStack and displays the markers on it. Do you mean that it should be able to display markers without an ImageStack as well? Or do you mean there should be a single function for displaying ImageStacks or spot calls in Napari that can display just spots, just images, or spots and images?

I think we could combine this into show_stack_napari, where the spot-display code is triggered if an (optional) IntensityTable is also provided. What do you think about this?
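
As a usage sketch of what I mean (the spots= parameter name is a proposal here, not the final API):

primary_image.show_stack_napari(indices={})                  # image only
primary_image.show_stack_napari(indices={}, spots=decoded)   # image with the spot overlay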

I also:

  1. Rebased it against origin/master to pick up some recent efficiency upgrades.
  2. Added a docstring.

I think the parameters for spot calling in the Allen notebook must be way off, because the results look random, but I verified that it works on the ISS notebook. Really nice! Beyond the question around merging some of the logic into the show_stack_napari function, I think this is good to go.

@kevinyamauchi
Collaborator Author

Thanks for all the updates and for rebasing. I think all of your changes make sense.

I think we could combine this into show_stack_napari, where the spot-display code is triggered if an (optional) IntensityTable is also provided. What do you think about this?

This sounds good to me. I'll implement that tomorrow.

I think the parameters for spot calling in the Allen notebook must be way off, because the results look random, but I verified that it works on the ISS notebook. Really nice! Beyond the question around merging some of the logic into the show_stack_napari function, I think this is good to go.

Great! As I mentioned above, I'll move the spot display into the ImageStack.show_stack_napari() method. I was planning to remove the test_napari.ipynb notebook as well. Sound good?

@ambrosejcarr
Member

Sound good?

Sounds perfect. :-)

@kevinyamauchi
Collaborator Author

Hey @ambrosejcarr . I just made the changes we discussed.

One note about the display of the spots: in the plot.show_spots_napari() function, I had max-projected the input ImageStack along the round and channel axes. I saw your note regarding slicing in the comments, so I haven't implemented that yet because I wanted to get your thoughts.

As implemented, one would have to do the following to display spots on a multi-round/channel ImageStack:

mp = primary_image.max_proj(Indices.CH, Indices.ROUND)  # collapse the channel and round axes
mp.show_stack_napari(indices={}, spots=decoded)         # overlay the decoded spots on the projection

I think this is fairly reasonable, but if we wanted to make it a one-liner, we would have to do one of the following. Thoughts?

  • Use a method other than ImageStack.get_slice() that can also max-project to get the relevant slices
  • Continue using ImageStack.get_slice() and use numpy.amax() to max-project along the round and channel axes (see the sketch below)
  • Use ImageStack.max_proj() inside the show_stack_napari() method, though I think this is a marginal improvement over having the user call it in their notebook.
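
For reference, a minimal, untested sketch of the second option; it assumes that get_slice() with an empty selector returns the full 5D array plus its remaining axes, and that the axis order is (round, channel, z, y, x). Both of those are assumptions on my part:

import numpy as np

data, axes = primary_image.get_slice({})   # assumed: empty selector -> full (r, c, z, y, x) array
projected = np.amax(data, axis=(0, 1))     # max-project along the round and channel axes
# `projected` (z, y, x) would then be what show_stack_napari() displays the spots on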

@codecov-io

codecov-io commented Dec 16, 2018

Codecov Report

Merging #822 into master will decrease coverage by 0.16%.
The diff coverage is 9.09%.


@@            Coverage Diff             @@
##           master     #822      +/-   ##
==========================================
- Coverage   87.93%   87.77%   -0.17%     
==========================================
  Files         149      149              
  Lines        4891     4900       +9     
==========================================
  Hits         4301     4301              
- Misses        590      599       +9
Impacted Files                        Coverage Δ
starfish/imagestack/imagestack.py     80.37% <9.09%> (-1.73%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 1fafc61...2b4ef56.

@ambrosejcarr
Member

As implemented, one would have to do the following to display spots on a multi-round/channel ImageStack:

I think I'd find it most helpful for the stack to not be projected, but to explicitly visualize spots across each round/slice. That would have the benefit of being flexible across multiplex and single-molecule approaches.

What are your thoughts on this? I think it's a slightly different use case, but could be easier to implement. I'm happy to help with this if you think it's a good idea.

@kevinyamauchi
Collaborator Author

I think it's a good idea to display spots on the rounds/channels where they were detected, @ambrosejcarr. I had initially intended to implement it this way, but I wasn't sure how to extract the round and channel for a given feature from the IntensityTable, so some guidance there would be useful!

I think there are a few cases we need to consider (see below). What do you think?

  1. Multi-round smFISH: this is the most straightforward case, as each feature in the IntensityTable should have a unique [R, C, Z, X, Y] coordinate.
  2. Barcoded methods - raw detected spots: similar to the multi-round smFISH case.
  3. Barcoded methods - decoded spots (spot-wise detection): here each decoded spot is mapped to multiple [R, C] coordinates. I think the most straightforward thing we can do is project along [R, C] and display on the projected image. We could also consider showing the individual spots that contribute to a decoded spot and displaying them at the appropriate [R, C, Z, X, Y] coordinates (i.e., filter out all spots that weren't decoded).
  4. Barcoded methods - decoded spots (pixel-wise detection): I think we would have to project along [R, C] and display the decoded spots on the projected image. If we wanted to be really fancy, we could generate a [Z, X, Y] image from the decoded pixels, where the target is encoded as a color/intensity, and then overlay the spots on that. We should probably aim for the MVP first, though.

@ambrosejcarr
Member

ambrosejcarr commented Dec 18, 2018

I think it's a good idea to display spots on the rounds/channels where they were detected, @ambrosejcarr. I had initially intended to implement it this way, but I wasn't sure how to extract the round and channel for a given feature from the IntensityTable, so some guidance there would be useful!

I can definitely help mock something up here, which would support (1) below.
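
Something along these lines might work as a starting point. It is untested and assumes that IntensityTable is an xarray.DataArray with dims (features, c, r) and with z/y/x stored as coordinates on the features axis; those dimension and coordinate names are assumptions rather than a confirmed API:

import numpy as np
import pandas as pd

def feature_rczyx(intensities):
    # intensities.values: (n_features, n_channels, n_rounds), per the assumption above
    values = intensities.values
    n_features, n_ch, n_rounds = values.shape
    # take the brightest (channel, round) per feature as a stand-in for "where it was detected"
    flat_idx = values.reshape(n_features, -1).argmax(axis=1)
    c_idx, r_idx = np.unravel_index(flat_idx, (n_ch, n_rounds))
    return pd.DataFrame({
        "r": r_idx,
        "c": c_idx,
        "z": intensities.coords["z"].values,
        "y": intensities.coords["y"].values,
        "x": intensities.coords["x"].values,
    })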

Use cases:
I think your first three use cases could be combined into two that are easy to implement:

  1. spot-wise detection - view detected spots in the rounds/channels they were detected in. This is useful for:
    • multiplex methods where you want to view the spots that contribute to a barcode
    • sequential smFISH methods
    • pixel-based spot finders after connected component aggregation
  2. multiplex projection - view detected spots on a projection of the rounds and channels of the experiment.
    • note: this expects that spots are built from all rounds and channels; running it on smFISH would produce gibberish

Unsupported use cases:

  1. pixel-based decoding before aggregation

@kevinyamauchi
Collaborator Author

I think your summary captures the use cases, @ambrosejcarr. Looking forward to the mockup of extracting full RCZXY coordinates for features in an IntensityTable.

@neuromusic neuromusic added this to the 0.1.0 milestone Dec 19, 2018
@ambrosejcarr
Member

I think your summary captures the use cases, @ambrosejcarr. Looking forward to the mockup of extracting full RCZXY coordinates for features in an IntensityTable.

Great. I'll probably get to this over the weekend.

@ambrosejcarr ambrosejcarr self-assigned this Dec 20, 2018
@ambrosejcarr ambrosejcarr mentioned this pull request Jan 10, 2019
@ambrosejcarr
Member

I'm going to close this, since the referenced functionality was implemented in #928.

@ambrosejcarr ambrosejcarr deleted the ky-napari-spots branch February 12, 2019 16:01