[WIP] support mixed 2D / 3D rendering #839
Conversation
I think here we have to also consider the tradeoffs between this sort of functionality and supporting cropping of the data during 3D rendering - see comment here #846 (comment). Once #846 is merged I might give cropping a try before we try and finish this PR. Maybe at first just single-ended cropping with our current sliders, though eventually we'll want to use the range slider - probably after the refactor in #844 - to do double-ended cropping from both sides.
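The two cropping modes discussed above can be sketched in plain NumPy (function names like `crop_single_ended` are illustrative, not napari API):

```python
import numpy as np

def crop_single_ended(volume, crop_min):
    """Return a view of `volume` with the leading `crop_min[i]` planes
    removed along each axis - what a one-handle slider could drive."""
    slices = tuple(slice(lo, None) for lo in crop_min)
    return volume[slices]

def crop_double_ended(volume, crop_min, crop_max):
    """Bound both sides of each axis - what a future range slider could drive."""
    slices = tuple(slice(lo, hi) for lo, hi in zip(crop_min, crop_max))
    return volume[slices]

vol = np.zeros((64, 128, 128))
print(crop_single_ended(vol, (10, 0, 0)).shape)                  # (54, 128, 128)
print(crop_double_ended(vol, (10, 0, 0), (20, 128, 128)).shape)  # (10, 128, 128)
```

Because these are views rather than copies, a renderer could re-slice cheaply as the sliders move.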
To clarify, can the 2D plane lie slightly off-axis or tilted relative to the volume array?
Not really, this is mainly about slicing the array, and certainly doesn't have any of the concepts that are coming in #885. I think we'll want to wait on this until #885 goes in too, and I'm also still not sure that cropping functionality isn't more natural here too. I do remember though you were interested in this functionality, right? If so, can you describe a little more about how you'd like to see this work / what functionality you need?
That's what I thought. Still a big step forward though!
Our lab combines scanning electron/ion beam imaging (2D images, often with some "perspective" tilt to the view) with fluorescence microscopy (3D volumes with colour channels) of the same samples. We want to:
Here's a great example of a workflow I have now that I'd really like to use napari to improve. This is image alignment using manual control-point matching of two 2D images of different modalities. I've added some image transforms on top of this cpselect tool, and it's what we're using for now. How could napari make this better?
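As a rough illustration of what a control-point matching tool computes under the hood, here is a least-squares affine fit from matched 2D points (a generic sketch, not the cpselect or napari API):

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2D affine transform mapping src -> dst.

    src, dst: (N, 2) arrays of matched control points (N >= 3).
    Returns a 3x3 homogeneous transform matrix.
    (Illustrative sketch only.)
    """
    n = src.shape[0]
    # Homogeneous source coordinates: [x, y, 1]
    A = np.hstack([src, np.ones((n, 1))])
    # Solve A @ M ~= dst for the 3x2 parameter matrix M
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    T = np.eye(3)
    T[:2, :] = M.T
    return T

# Matched points related by a pure translation of (+5, -2)
src = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
dst = src + np.array([5., -2.])
print(np.round(estimate_affine(src, dst), 3))
```

With more than three point pairs the extra correspondences average out click error, which is exactly why manual control-point tools ask for several matches.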
Longer term, I'd love to have our main microscope control GUI display be napari based too. I can launch napari instances from my own PyQt GUI, so that's great. But I'm not sure about best practices to get information back out of napari (so I'm following the pluggy discussions closely), or how to make data displayed in several separate napari instances play nicely together (napari doesn't have the right support for showing two workspaces side by side - grid mode isn't good for this).
This is very helpful @GenevieveBuckley, I really appreciate the screenshot - helps make everything a little more concrete for me. Once we get the physical coordinates, and basic plugin stuff in, we can put some time towards the multicanvas stuff, which I think will be important for your usage, as I see how grid mode isn't sufficient. Keeping this use case in mind then will be important. It's also great to see you weighing in and helping out with #885!!
Absolutely, this will make a great plugin! I think many people interested in multimodal registration will be excited about this.
@sofroniewn - I had a quick look at how this worked (I want to make animations with it)
What do you think is the main thing stopping this from moving forward? (besides the presence of a million other things to work on, of course 😆)
I found one hardcoded variable, `dims.sliced`, which would need sorting out, and in general the functionality would need to be exposed in a useful/intuitive way - do we have any idea what this may look like?
My immediate thought is:
- a checkbox connected to `layer.dims.embedded`
- a spinbox/combobox for `layer.dims.sliced`
These would only be exposed in the 3D viewer mode.
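To make the widget proposal above concrete, here is a minimal, hypothetical model of the state those widgets would drive - `embedded`, `sliced`, and a toy callback mechanism standing in for napari's real events (none of these names are guaranteed to match the actual API):

```python
class MiniDims:
    """Toy stand-in for the proposed dims state (hypothetical, not napari API)."""

    def __init__(self, ndim=3):
        self.ndim = ndim
        self._embedded = False
        self._sliced = ndim - 3 if ndim >= 3 else None
        self._callbacks = []  # GUI widgets would register listeners here

    def on_change(self, cb):
        self._callbacks.append(cb)

    def _emit(self, name, value):
        for cb in self._callbacks:
            cb(name, value)

    @property
    def embedded(self):
        return self._embedded

    @embedded.setter
    def embedded(self, value):  # the checkbox would toggle this
        self._embedded = bool(value)
        self._emit('embedded', self._embedded)

    @property
    def sliced(self):
        # Only meaningful while embedded rendering is active
        return self._sliced if self._embedded else None

    @sliced.setter
    def sliced(self, axis):  # the spinbox/combobox would set this
        if not 0 <= axis < self.ndim:
            raise ValueError(f"axis {axis} out of range for ndim={self.ndim}")
        self._sliced = axis
        self._emit('sliced', axis)


events = []
dims = MiniDims(ndim=4)
dims.on_change(lambda name, value: events.append((name, value)))
dims.embedded = True   # checkbox checked
dims.sliced = 2        # combobox picks axis 2
print(dims.sliced, events)  # 2 [('embedded', True), ('sliced', 2)]
```

The point of the callback layer is that hiding/showing the widgets in 3D mode becomes just another listener, rather than logic baked into the model.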
I'm assuming there isn't something you fundamentally don't like about what you've done here?
    def sliced(self):
        """int: Dimension that is sliced if embedded."""
        if self.embedded and self.ndim >= 3:
            return self.order[-3]
this will need switching to a variable rather than hardcoding
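A minimal sketch of what that could look like - storing the sliced axis in a variable (here `_sliced_axis`, a hypothetical name) with the old `order[-3]` behaviour as the default:

```python
# Hypothetical sketch of the review suggestion: keep the sliced axis in a
# variable instead of hardcoding `order[-3]`. Not napari's actual Dims class.
class Dims:
    def __init__(self, ndim=3, embedded=False):
        self.ndim = ndim
        self.embedded = embedded
        self.order = tuple(range(ndim))
        # Default preserves the old behaviour: third-from-last displayed axis
        self._sliced_axis = -3

    @property
    def sliced(self):
        """int or None: dimension that is sliced if embedded."""
        if self.embedded and self.ndim >= 3:
            return self.order[self._sliced_axis]
        return None

    @sliced.setter
    def sliced(self, axis):
        # Store the position of `axis` within `order`, so reordering
        # dimensions keeps pointing at the same underlying axis
        self._sliced_axis = self.order.index(axis)


dims = Dims(ndim=4, embedded=True)
print(dims.sliced)  # 1, i.e. order[-3] of (0, 1, 2, 3)
dims.sliced = 0
print(dims.sliced)  # 0
```

A spinbox/combobox in the GUI could then write straight to the `sliced` setter.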
I'd say nothing fundamental, though I think we've got better machinery in place to deal with this now (like our world coordinate system) than we did back when this started, and we've improved some of the slider handling. I think we've also come further along in our ideas of multiscale projection like #1820, so I'd want to step back and think about the overall API and user interactivity before just cleaning up this PR and merging it. We've also removed
For example, do we need an
Can you share a little more detail about your particular need? Do you have multiple 3D volumes, 2D, etc.? We might be able to get something simpler in faster.
Thanks for the links - a goldmine of info, especially #1353! Happy to take a step back and think about the API and interactivity; I'll add it as a discussion point for the next meeting if there aren't already too many other things 😃 Re: the particular use case - I often find myself showing 2D slices when presenting and think this hides the 3D nature of the data. To get around this, I often have a slide like this where I show 2D and 3D renderings side by side... I'd basically like to not have to explain this; having the 2D slice moving through the 3D volume, then moving the camera to give the equivalent 2D view, basically solves the problem - you immediately get a feel for the scale of the z-steps and the 3D-ness of the data.
@alisterburt side note: what software did you use to generate that isosurface? It's gorgeous!
@jni more ChimeraX loveliness! edit: just had a quick dig and it turns out ChimeraX is open source
Under a rather restrictive license, unfortunately, so we can't make use of the source code:
I presume there's an OpenGL recipe somewhere for those rendering settings, though, so maybe we can add this kind of view to vispy/napari???
There are actually PRs already in progress at vispy that add better lighting with shading to the surface layer - see discussion in vispy/vispy#1665 and vispy/vispy#1463 - so if we help get those finished we should get this nice functionality in napari too, without having to look at ChimeraX.
Description
This PR will close #639 by supporting mixed 2D / 3D rendering for all layer types. I'm not quite sure about the API yet, and I haven't added full documentation / tests, but I wanted to get this going to see what it's like. It'll also probably intersect with some of the work on physical coordinates - see #763 - and also orthoviews, see #760, but I do like the functionality so far.
To make it work you must set `viewer.dims.embedded = True`; this will pop up a third slider corresponding to the "embedded" or "sliced" dimension. You must then set the specific layer that is to be embedded to have `layer.dims.ndisplay = 2` (while keeping the whole viewer in 3D rendering mode, i.e. `viewer.dims.ndisplay = 3`).
Here are a couple of gifs of the new functionality; note that the blending modes work really nicely here:
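Conceptually, the embedded layer renders just one plane of the volume, picked by the extra slider. A rough NumPy sketch of that slicing step (illustrative only; the real work happens in the Dims/layer slicing machinery):

```python
import numpy as np

# One layer shows the full 3D volume; the embedded layer shows only the plane
# at the current position of the extra "sliced"-dimension slider.
# `slider_position` and `sliced_axis` are illustrative names, not napari API.
volume = np.random.random((32, 64, 64))

slider_position = 10   # current value of the third, "embedded" slider
sliced_axis = 0        # the dimension this layer slices along

# The embedded layer renders this single 2D plane inside the 3D scene:
plane = np.take(volume, slider_position, axis=sliced_axis)
print(plane.shape)  # (64, 64)
```

Blending modes then composite this plane against the volume rendering of the other layers, which is why additive/translucent blending looks so good in the gifs.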
and mixed surface + image rendering with some cryoET data