
Add a sample_ray method to layers #2653

Closed · kevinyamauchi opened this issue May 4, 2021 · 4 comments · Fixed by #3037
Labels: feature (New feature or request)

@kevinyamauchi (Contributor) commented May 4, 2021

🚀 Feature

In support of 3D picking, I think it would be nice to add a method to the base layer that calculates the end points of the ray passing through the data volume, cast from a point on the canvas in the direction of the view.

Motivation

This ray (sample_ray?) could be used for 3D picking or for sampling/exploring data along the ray (e.g., plotting intensity, finding voxels with specific values).

Pitch

We add a method to the base layer class that, when given a point on the canvas (in canvas coordinates), finds the intersection between the ray cast from that point in the direction of the view and the axis-aligned bounding box of the data. In this context, "axis-aligned" means the bounding box is aligned with the axes of the data. For now, the bounding box is the extents of the data (layer._extent_data?).
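The ray/box intersection described above is commonly computed with the slab method: intersect the ray with each pair of axis-aligned planes and take the tightest entry/exit interval. Here is a minimal sketch of that technique (not napari's actual implementation; the function name and signature are illustrative):

```python
import numpy as np

def ray_aabb_intersection(origin, direction, bbox_min, bbox_max):
    """Intersect a ray with an axis-aligned bounding box (slab method).

    Returns (near_point, far_point) where the ray enters and exits the
    box, or None if it misses. `origin` and `direction` are in data
    coordinates; `direction` need not be normalized.
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    # Parametric distances to each pair of slab planes; errstate silences
    # the harmless divide-by-zero for axis-parallel ray components.
    with np.errstate(divide="ignore", invalid="ignore"):
        t_low = (np.asarray(bbox_min) - origin) / direction
        t_high = (np.asarray(bbox_max) - origin) / direction
    t_near = np.nanmax(np.minimum(t_low, t_high))
    t_far = np.nanmin(np.maximum(t_low, t_high))
    if t_near > t_far or t_far < 0:
        return None  # ray misses the box, or the box is behind the origin
    return origin + t_near * direction, origin + t_far * direction
```

The two returned points would play the role of the near_point/far_point end points of the sample ray.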

As a next step, we could add some basic 3D picking to the layers using the sample_ray method. For Labels, it's fairly straightforward. For Points and Shapes, we might need to be a bit more careful about performance (e.g., build an axis-aligned bounding box for each object and then test for intersection against those).
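For the Labels case, picking could amount to stepping along the ray between its two data-coordinate end points and returning the first non-background voxel. A sketch of that idea, assuming the end points are already known (the function name and the fixed-count sampling strategy are illustrative, not napari's implementation):

```python
import numpy as np

def first_nonzero_label(labels, near_point, far_point, n_samples=100):
    """3D picking for a labels volume: walk sample points along the ray
    from `near_point` to `far_point` (data coordinates) and return the
    first nonzero label hit, or 0 if the ray only crosses background.
    """
    near = np.asarray(near_point, dtype=float)
    far = np.asarray(far_point, dtype=float)
    for t in np.linspace(0, 1, n_samples):
        # Round to the nearest voxel index and clip to the volume extent.
        idx = np.clip(
            np.round(near + t * (far - near)).astype(int),
            0,
            np.array(labels.shape) - 1,
        )
        value = labels[tuple(idx)]
        if value != 0:
            return int(value)
    return 0
```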

I have made a (quick and dirty) prototype implementation of finding the intersections of the sample ray with the data bounding box (used to perform 3D picking) here (near_point and far_point are the end points of the sample_ray in data coordinates).

In action: [animated GIF: 3d_picking]

Alternatives

Additional context

@kevinyamauchi (Contributor, Author) commented:

@sofroniewn , where is the best place to get caught up on the napari transform system? In particular, I would like to get the transformation between the screen coordinates (i.e., coordinates returned by the event.pos in the mouse event) and the data coordinates.

@sofroniewn sofroniewn added this to the 0.4.9 milestone May 4, 2021
@sofroniewn sofroniewn added the feature New feature or request label May 4, 2021
@sofroniewn (Contributor) commented:

This is awesome, @kevinyamauchi! We've been trying to move cursor-position information off the layers, so I'd prefer that this not live directly on the base layer, but instead be something that can be calculated with the layer. One option would be to add viewer.cursor.ray, which would contain the ray in world coordinates (note that viewer.cursor.position contains the position in world coordinates in 2D; I'm actually not sure what it holds in 3D!).

Given the ray in world coordinates, each layer can map it back to make a ray in data coordinates.

> @sofroniewn , where is the best place to get caught up on the napari transform system? In particular, I would like to get the transformation between the screen coordinates (i.e., coordinates returned by the event.pos in the mouse event) and the data coordinates.

The key method for going from canvas coordinates (what you call screen coordinates above; the origin is in the top left of the canvas) to world coordinates is defined here:

    def _map_canvas2world(self, position):
        """Map position from canvas pixels into world coordinates.

        Parameters
        ----------
        position : 2-tuple
            Position in canvas (x, y).

        Returns
        -------
        coords : tuple
            Position in world coordinates, matches the total dimensionality
            of the viewer.
        """
        nd = self.viewer.dims.ndisplay
        transform = self.view.camera.transform.inverse
        mapped_position = transform.map(list(position))[:nd]
        position_world_slice = mapped_position[::-1]
        position_world = list(self.viewer.dims.point)
        for i, d in enumerate(self.viewer.dims.displayed):
            position_world[d] = position_world_slice[i]
        return tuple(position_world)

and updates the cursor here

    # Update the cursor position
    self.viewer.cursor.position = self._map_canvas2world(list(event.pos))

I think you could also use or adapt that method to build the ray and update viewer.cursor.ray.
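One way to adapt that method into a ray builder is to map the same canvas (x, y) through the inverse camera transform at two depths, giving the near and far end points of the view ray. A sketch of the idea, where `map_canvas2world_3d` is a hypothetical helper standing in for the 3D version of the canvas-to-world mapping, and z = 0 / z = 1 on the near/far clipping planes is an assumed convention:

```python
import numpy as np

def map_canvas2world_ray(map_canvas2world_3d, canvas_pos):
    """Build a view ray from a canvas click by mapping the same (x, y)
    at two depths.

    `map_canvas2world_3d` is assumed to map an (x, y, z) canvas
    coordinate into world coordinates, with z = 0 and z = 1 on the near
    and far clipping planes.
    """
    x, y = canvas_pos
    near = np.asarray(map_canvas2world_3d((x, y, 0.0)), dtype=float)
    far = np.asarray(map_canvas2world_3d((x, y, 1.0)), dtype=float)
    return near, far
```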

That cursor position then gets added to the mouse event as event.position; you could also add event.ray here:

    # Add the cursor position to the event
    event.position = self.viewer.cursor.position

And then inside each layer mouse function callback there is a line

    coordinates = layer.world_to_data(event.position)


and you then have something like

    data_ray = layer.world_to_data(event.ray)
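Since the ray can be stored as a pair of world-coordinate end points, mapping it into data coordinates reduces to mapping each end point. A sketch (the helper name is hypothetical; this holds as long as the layer's world-to-data transform is affine, which keeps straight lines straight):

```python
def world_ray_to_data(world_to_data, world_ray):
    """Map a ray, stored as a (near, far) pair of world-coordinate
    end points, into data coordinates by mapping each end point.
    """
    near, far = world_ray
    return world_to_data(near), world_to_data(far)
```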

How does this sound?

@kevinyamauchi (Contributor, Author) commented May 5, 2021

Thank you, @sofroniewn! This is super helpful. I agree that we should try to keep the cursor logic off of the layers, and I think the general approach you outlined looks good.

I have a couple of follow-up questions:

  • In napari, are "world coordinates" the same as vispy "scene coordinates"?
  • Why is the camera transform used in the _map_canvas2world() method? From this issue, it looks like the camera transform isn't the complete transformation from canvas to scene. From my reading/experimentation, I think ViewBox.scene.transform is what we want. That said, I'm still trying to wrap my head around the vispy coordinate systems and transforms, so I am not totally sure.

@sofroniewn sofroniewn modified the milestones: 0.4.9, 0.4 Jun 2, 2021
@sofroniewn (Contributor) commented:

> In napari, are "world coordinates" the same as vispy "scene coordinates"?

Yes, but "world coordinates" in napari are fully nD; I think in some places we say "world slice coordinates" to mean the "scene coordinates".

> From this issue, it looks like the camera transform isn't the complete transformation from canvas to scene.

Hmm, that could be. I think in 2D some of this might not matter, or the transforms might be equivalent, so we didn't notice; in 3D the differences might be more significant.

My hope, though, is that we can keep the changes localized to the _map_canvas2world method and make it do the right thing for 3D canvases.
