
[WIP] support mixed 2D / 3D rendering #839

Closed

Conversation

sofroniewn
Contributor

Description

This PR will close #639 by supporting mixed 2D / 3D rendering for all layer types. I'm not quite sure about the API yet, and I haven't added full documentation / tests, but I wanted to get this going to see what it is like. It'll also probably intersect with some of the work on physical coordinates (see #763) and orthoviews (see #760), but I do like the functionality so far.

To make it work you must set viewer.dims.embedded = True; this will pop up a third slider corresponding to the "embedded" or "sliced" dimension. You must then set the specific layer that is to be embedded to have layer.dims.ndisplay = 2, while keeping the whole viewer in 3D rendering mode, i.e. viewer.dims.ndisplay = 3.
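The settings above can be sketched as a short workflow (pseudocode against this WIP branch's API; viewer.dims.embedded and the per-layer layer.dims do not exist in released napari):

```
viewer = napari.Viewer()
viewer.dims.ndisplay = 3        # whole viewer in 3D rendering mode
viewer.dims.embedded = True     # pops up the third, "embedded" slider
layer = viewer.add_image(volume)
layer.dims.ndisplay = 2         # render just this layer as a 2D slice
```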

Here are a couple gifs of the new functionality, note that the blending modes work really nicely here:
[GIF: mixed_2D_3D_stent]

and mixed surface + image rendering with some cryoET data

[GIF: mixed_2D_3D_cryoET]

Type of change

  • New feature (non-breaking change which adds functionality)

How has this been tested?

  • example: the test suite for my feature covers cases x, y, and z
  • example: all tests pass with my change

Final checklist:

  • My PR is the minimum possible work for the desired functionality
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works

@sofroniewn sofroniewn added the feature New feature or request label Dec 31, 2019
@sofroniewn sofroniewn added this to the 0.2.0 milestone Dec 31, 2019
@sofroniewn
Contributor Author

I think here we have to also consider the tradeoffs between this sort of functionality and supporting cropping of the data during 3D rendering - see comment here #846 (comment).

Once #846 is merged I might give cropping a try before we try to finish this PR. Maybe at first just single-ended cropping with our current sliders, though eventually we'll want to use the range slider (probably after the refactor in #844) to do double-ended cropping from both sides.

@GenevieveBuckley
Contributor

To clarify, can the 2D plane lie slightly off-axis or tilted relative to the volume array?

@sofroniewn
Contributor Author

To clarify, can the 2D plane lie slightly off-axis or tilted relative to the volume array?

Not really, this is mainly about slicing the array, and certainly doesn't have any of the concepts that are coming in #885. I think we'll want to wait on this until #885 goes in too, and I'm also still not sure that cropping functionality isn't more natural here too.

I do remember, though, that you were interested in this functionality, right? If so, can you describe a little more how you'd like to see this work / what functionality you need?

@GenevieveBuckley
Contributor

To clarify, can the 2D plane lie slightly off-axis or tilted relative to the volume array?

Not really, this is mainly about slicing the array, and certainly doesn't have any of the concepts that are coming in #885.

That's what I thought. Still a big step forward though!

I do remember, though, that you were interested in this functionality, right? If so, can you describe a little more how you'd like to see this work / what functionality you need?

Our lab combines scanning electron/ion beam imaging (2D images, often with some "perspective" tilt to the view) with fluorescence microscopy (3D volumes with colour channels) of the same samples. We want to:

  • spatially relate the two datasets and display them together
  • have support for control point matching, so users can do manual registration
  • have better support for affine transforms (for manual or automated image alignment & registration)
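Control point matching of the sort described reduces, mathematically, to estimating an affine transform from matched point pairs. A minimal numpy sketch (the helper name is hypothetical, not a napari API):

```python
import numpy as np

def estimate_affine_2d(src, dst):
    """Least-squares 2D affine (2x3 matrix) from matched control points.

    src, dst: (N, 2) arrays of matched points, N >= 3, not collinear.
    Solves dst ~= src @ A[:, :2].T + A[:, 2] as a linear system.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Homogeneous design matrix: rows of [x, y, 1]
    X = np.hstack([src, np.ones((len(src), 1))])
    # Solve X @ params ~= dst for the 3x2 parameter matrix
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return params.T  # rows: [a, b, tx], [c, d, ty]

# Recover a known transform from four matched points:
A = np.array([[0.0, -1.0, 5.0],
              [1.0,  0.0, 2.0]])  # 90-degree rotation + translation
pts = np.array([[0, 0], [1, 0], [0, 1], [2, 3]], dtype=float)
mapped = pts @ A[:, :2].T + A[:, 2]
est = estimate_affine_2d(pts, mapped)
print(np.allclose(est, A))  # True
```

With control points clicked in napari points layers, the two coordinate arrays would play the roles of `src` and `dst` here.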

@GenevieveBuckley
Contributor

Here's a great example of something I have now that I really want to use napari to improve.

This is image alignment using manual control point matching of two 2d images of different modalities. I've added some image transforms on top of this cpselect tool, and it's what we're using for now:
[Screenshot: cpselect control point matching interface]
What I like about this is that it's very easy to add/delete matched control points. It requires you to click once in each image, and won't let you accidentally add unmatched points if you click several times in the same image.

How could napari make this better?

  • A huge disadvantage right now is that users need to be able to adjust the image brightness/contrast while selecting matched control points, but need different display settings in the final figure for papers. We need to crank up the brightness in the fluorescence display to see the grid bars on the sample holder (used as reference markers) which are barely distinguishable from the auto-fluorescence background.
  • If napari gets good support for nD transforms (and mixing 2D/3D data), I might not have to use a maximum intensity projection to squash my 3D fluorescence volume into a 2D image. It would be more accurate overall, especially since the two cameras are often at very different angles relative to the sample.
  • I don't want to lose the ease of use of adding matched control points across images/layers, but I'm sure we can work on making sure that support is also added to napari.
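For context on the projection point above: a maximum intensity projection collapses the volume along one axis, which is exactly where the geometric information is lost. A minimal numpy illustration:

```python
import numpy as np

# Maximum intensity projection (MIP): keep the brightest voxel along z.
# Any out-of-plane geometry is flattened away, which is the source of
# inaccuracy when the two cameras view the sample from different angles.
volume = np.zeros((4, 3, 3))   # toy (z, y, x) fluorescence stack
volume[2, 1, 1] = 7.0          # one bright voxel mid-stack
mip = volume.max(axis=0)       # (y, x) image; z position of the voxel is gone
print(mip[1, 1])               # 7.0
```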

Longer term, I'd love to have our main microscope control GUI display be napari-based too. I can launch napari instances from my own PyQt GUI, so that's great. But I'm not sure about best practices for getting information back out of napari (so I'm following the pluggy discussions closely), or how to make data displayed in several separate napari instances play nicely together (napari doesn't have the right support for showing two workspaces side by side - grid mode isn't good for this).

@sofroniewn
Contributor Author

This is very helpful @GenevieveBuckley, I really appreciate the screenshot - it helps make everything a little more concrete for me. Once we get the physical coordinates and basic plugin stuff in, we can put some time towards the multicanvas stuff, which I think will be important for your usage, as I see how grid mode isn't sufficient. Keeping this use case in mind will be important. It's also great to see you weighing in and helping out with #885!!

I don't want to lose the ease of use of adding matched control points across images/layers, but I'm sure we can work on making sure that support is also added to napari.

Absolutely, this will make a great plugin! I think many people interested in multimodal registration will be excited about this.

@alisterburt
Contributor

@sofroniewn - I had a quick look at how this worked (I want to make animations with it)

What do you think is the main thing stopping this from moving forward? (besides the presence of a million other things to work on, of course 😆)

I found one hardcoded variable, dims.sliced, which would need sorting out, and in general the functionality would need to be exposed in a useful/intuitive way - do we have any idea what this may look like?

My immediate thought is

  • a checkbox connected to layer.dims.embedded
  • a spinbox/combobox for layer.dims.sliced

These would only be exposed in the 3D viewer mode.

I'm assuming there isn't something you fundamentally don't like about what you've done here?

```python
def sliced(self):
    """int: Dimension that is sliced if embedded."""
    if self.embedded and self.ndim >= 3:
        return self.order[-3]
```
Contributor

this will need switching to a variable rather than hardcoding
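A minimal sketch of what de-hardcoding could look like (a toy model, not napari code; the embedded/sliced names follow this PR, and the setter is a hypothetical addition):

```python
class Dims:
    """Toy dims model where the sliced dimension is a settable variable."""

    def __init__(self, ndim=3):
        self.ndim = ndim
        self.order = list(range(ndim))
        self.embedded = True
        self._sliced = None  # user choice; None means "use the default"

    @property
    def sliced(self):
        """int or None: dimension that is sliced if embedded."""
        if not (self.embedded and self.ndim >= 3):
            return None
        if self._sliced is not None:
            return self._sliced
        return self.order[-3]  # previous hardcoded value, now just a default

    @sliced.setter
    def sliced(self, value):
        self._sliced = value
```

A spinbox/combobox in the layer controls could then write through to this setter instead of the hardcoded order[-3].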

@sofroniewn
Contributor Author

What do you think is the main thing stopping this from moving forward? (besides the presence of a million other things to work on, of course 😆)
...
I'm assuming there isn't something you fundamentally don't like about what you've done here?

I'd say nothing fundamental, though I think we've got better machinery in place to deal with this now (like our world coordinate system) than we did back when this started, and we've improved some of the slider handling.

I think we've also come further along in our ideas of multiscale projection like #1820, so I'd want to step back and think about the overall API and user interactivity, before just cleaning up this PR and merging it.

We've also removed layer.dims from each layer, and are in general trying to move to a world where an individual layer doesn't know how it is sliced (i.e. slicing is something that happens to a layer rather than something that lives on a layer; see #1353).

For example, do we need an embedded property on each layer, or are there global controls we can use instead?

I had a quick look at how this worked (I want to make animations with it)

Can you share a little more detail about your particular need? Do you have multiple 3D volumes, 2D data, etc.? We might be able to get something simpler in faster.

@alisterburt
Contributor

thanks for the links - a goldmine of info, especially #1353!

Happy to take a step back and think about the API and interactivity, I'll add it as a discussion point for the next meeting if there aren't already too many other things 😃

re: the particular use case - I often find myself showing 2D slices when presenting and think this hides the 3D nature of the data. To get around this, I often have a slide like this where I show 2D and 3D renderings side by side...
[Image: slide showing 2D and 3D renderings side by side]

I'd basically like to not have to explain this; having the 2D slice moving through the 3D volume, then moving the camera to give the equivalent 2D view, basically solves the problem: you immediately get a feel for the scale of the z-steps and the 3D-ness of the data.

@jni
Member

jni commented Feb 8, 2021

@alisterburt side note: what software did you use to generate that isosurface? It's gorgeous!

@alisterburt
Contributor

alisterburt commented Feb 8, 2021

@jni more ChimeraX loveliness!
specific settings: ambient occlusion (64 evenly distributed point sources) with black silhouettes around the isosurface

edit: just had a quick dig and it turns out ChimeraX is open source

@jni
Member

jni commented Feb 9, 2021

edit: just had a quick dig and it turns out ChimeraX is open source

Under a rather restrictive license, unfortunately, so we can't make use of the source code:

Licensee agrees that it will use the Software, and any modifications, improvements, or derivatives to the Software that the Licensee may create (collectively, “Improvements”) solely for internal, non-commercial purposes and shall not distribute or transfer the Software or Improvements to any person or third parties without prior written permission from The Regents

I presume there's an OpenGL recipe somewhere for those rendering settings, though, so maybe we can add this kind of view to vispy/napari???

@sofroniewn
Contributor Author

I presume there's an OpenGL recipe somewhere for those rendering settings, though, so maybe we can add this kind of view to vispy/napari???

There are actually PRs already in progress at vispy that add better lighting with shading to the surface layer - see the discussion in vispy/vispy#1665 and vispy/vispy#1463 - so if we help get that finished we should get this nice functionality in napari too, without having to look at ChimeraX.

@sofroniewn sofroniewn added the stale PR that is not currently being worked on label Apr 15, 2021
@sofroniewn sofroniewn added the vispy Vispy integration label May 20, 2021
@sofroniewn
Contributor Author

It's been a while since this PR was made, we've now got some other vispy ones in progress like #1820, and we won't merge this as is, so I will close it now. If we want to pick up this conversation again we can go back to the original issue, #639.

@sofroniewn sofroniewn closed this May 20, 2021