
feat(segmentation_user_layer) implemented getObjectPosition for MeshLayer #531

Merged · 3 commits merged into google:master from chrisj:cj-mesh-layer-move-to-segment · Mar 18, 2024

Conversation

@chrisj (Contributor) commented Feb 15, 2024

finds the closest loaded associated mesh vertex to the global position

I also did an implementation that just returns the first vertex (in the order that I iterate through them here) and one that finds the vertex closest to the mean vertex position. I still need feedback to confirm which is preferred, but this approach seems ideal since it minimizes movement.

I might be making an assumption on using vertexPositions. The type is EncodedVertexPositions, but it seems to work without any kind of decoding.

I'm not using localPosition when comparing the globalPosition with the vertexPositions, only globalToRenderLayerDimensions and transform.modelToRenderLayerTransform. I tried to invert how globalPosition is modified in moveToSegment, but I'm a little confused that I don't see how localPosition comes into play.
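A minimal sketch of the closest-vertex search described above. The names `layerPosition` and `findClosestVertex` are illustrative, not the PR's actual code; the real implementation first maps the global position into the render layer's space via globalToRenderLayerDimensions and transform.modelToRenderLayerTransform.

```typescript
// Find the index of the loaded mesh vertex closest to a target position.
// `layerPosition` is the target already expressed in the same space as the
// vertex data; `vertexPositions` is a packed [x0, y0, z0, x1, y1, z1, ...]
// array, as produced by the mesh fragment source.
function findClosestVertex(
  layerPosition: Float32Array,
  vertexPositions: Float32Array,
): number {
  let bestIndex = -1;
  let bestDistanceSq = Infinity;
  for (let i = 0; i < vertexPositions.length; i += 3) {
    const dx = vertexPositions[i] - layerPosition[0];
    const dy = vertexPositions[i + 1] - layerPosition[1];
    const dz = vertexPositions[i + 2] - layerPosition[2];
    const distanceSq = dx * dx + dy * dy + dz * dz;
    if (distanceSq < bestDistanceSq) {
      bestDistanceSq = distanceSq;
      bestIndex = i / 3;
    }
  }
  // Index of the closest loaded vertex, or -1 if no vertices are loaded.
  return bestIndex;
}
```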

@chrisj force-pushed the cj-mesh-layer-move-to-segment branch 2 times, most recently from fbafcf0 to ffc9e70 on February 15, 2024 at 23:07
@jbms (Collaborator) commented Feb 25, 2024

Thanks for this change!

> finds the closest loaded associated mesh vertex to the global position

> I also did an implementation that just returns the first vertex (in the order that I iterate through them here) and one that finds the vertex closest to the mean vertex position. I still need feedback to confirm which is preferred, but this approach seems ideal since it minimizes movement.

Finding the closest in order to minimize movement does seem preferable. I am a bit worried about UI hangs in the case of a very large mesh, though this is also only in response to a user action so it is less concerning than if it were happening without user action. Perhaps if the mesh is large the closest point calculation could stop after looking at a certain amount of mesh data, or randomly sample some number of points and pick the closest.
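A rough sketch of the random-sampling idea, using the same illustrative inputs as the sketch above (not actual Neuroglancer code): cap the work at `maxSamples` vertices and return the closest of the sampled ones.

```typescript
// For small meshes this degenerates to a full scan; for large meshes it
// examines at most `maxSamples` uniformly random vertices.
function findClosestVertexSampled(
  layerPosition: Float32Array,
  vertexPositions: Float32Array,
  maxSamples = 100_000,
): number {
  const vertexCount = vertexPositions.length / 3;
  const samples = Math.min(vertexCount, maxSamples);
  let bestIndex = -1;
  let bestDistanceSq = Infinity;
  for (let s = 0; s < samples; ++s) {
    const v =
      vertexCount <= maxSamples ? s : Math.floor(Math.random() * vertexCount);
    const i = v * 3;
    const dx = vertexPositions[i] - layerPosition[0];
    const dy = vertexPositions[i + 1] - layerPosition[1];
    const dz = vertexPositions[i + 2] - layerPosition[2];
    const distanceSq = dx * dx + dy * dy + dz * dz;
    if (distanceSq < bestDistanceSq) {
      bestDistanceSq = distanceSq;
      bestIndex = v;
    }
  }
  return bestIndex;
}
```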

> I might be making an assumption on using vertexPositions. The type is EncodedVertexPositions, but it seems to work without any kind of decoding.

See VertexPositionFormat: for the non-multiscale mesh the format is always float32, so no decoding is needed. For multiscale meshes, other formats are supported.

> I'm not using localPosition when comparing the globalPosition with the vertexPositions, only globalToRenderLayerDimensions and transform.modelToRenderLayerTransform. I tried to invert how globalPosition is modified in moveToSegment, but I'm a little confused that I don't see how localPosition comes into play.

localPosition would be needed if some of the dimensions of the mesh were marked as local dimensions rather than global dimensions. However, that isn't currently supported for mesh sources, so you don't have to worry about it.

@chrisj (Contributor, Author) commented Mar 5, 2024

Some performance numbers.

Using this neuron (10M vertices)
https://spelunker.cave-explorer.org/#!middleauth+https://global.daf-apis.com/nglstate/api/v1/6557565452288000

It takes 150 ms on average to calculate the closest vertex. It spikes up to 500-600 ms near the start, probably while the JIT compiler is working.

I can calculate the vertex count in under 4ms. We could use that to sample the vertices so that we only check something like 1M or 100K to keep things snappy.

@jbms (Collaborator) commented Mar 5, 2024

Some more thoughts on this:

  • Sampling sounds like a good solution. I think you could just sample a small number of points per fragment.
  • For graphene, as far as I understand, each fragment corresponds to a chunk with bounds that can be inferred from its identifier. Maybe this could be done entirely client side with no changes on the server. Then there is no need to look at the actual vertex data or even load the mesh fragments at all. Either special support could be added to the graphene datasource, or the existing single-resolution graphene meshes could be represented as multiscale meshes in Neuroglancer. It is perfectly fine to have a "multiscale" mesh in Neuroglancer that is actually just a single resolution, and treating it as a "multiscale" mesh would have the additional advantage that Neuroglancer would know about the location of the fragments and would only download and draw visible fragments.

@chrisj force-pushed the cj-mesh-layer-move-to-segment branch from 86cd8d1 to 743862c on March 6, 2024 at 21:51
@chrisj (Contributor, Author) commented Mar 6, 2024

I updated it so that it samples up to 100,000 vertices using a naive but fast approach. This brings the execution down to 8-15 ms.
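One plausible way to do the capped sampling is a simple stride over the packed vertex array; this is an illustration of the idea, not necessarily the exact code in this PR.

```typescript
// Cap the number of examined vertices by visiting every k-th vertex, where k
// is chosen so that at most ~100,000 vertices are touched.
const MAX_SAMPLED_VERTICES = 100_000;

function findClosestVertexStrided(
  layerPosition: Float32Array,
  vertexPositions: Float32Array,
): number {
  const vertexCount = vertexPositions.length / 3;
  const stride = Math.max(1, Math.ceil(vertexCount / MAX_SAMPLED_VERTICES));
  let bestIndex = -1;
  let bestDistanceSq = Infinity;
  for (let v = 0; v < vertexCount; v += stride) {
    const i = v * 3;
    const dx = vertexPositions[i] - layerPosition[0];
    const dy = vertexPositions[i + 1] - layerPosition[1];
    const dz = vertexPositions[i + 2] - layerPosition[2];
    const distanceSq = dx * dx + dy * dy + dz * dz;
    if (distanceSq < bestDistanceSq) {
      bestDistanceSq = distanceSq;
      bestIndex = v;
    }
  }
  return bestIndex;
}
```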

We could do the graphene-specific optimization, but I don't think it is necessary; hopefully other datasources can benefit from this as well.

@jbms jbms merged commit 8c95c63 into google:master Mar 18, 2024
13 of 19 checks passed