mesh.nearest.signed_distance out of memory error #976
I had the same problem. It seems to depend on the size (e.g. the number of vertices) of the input mesh. If you can work with a simplified mesh, you shouldn't run into memory issues. Maybe the acceleration structure being used is somehow inefficient; it looks to me like O(num_mesh_vertices * num_query_points).
Yeah, the broad-phase data structure is an rtree-bucket-per-triangle, so on large meshes it isn't great. It has the advantage of not needing to implement a data structure ourselves, but it is absolutely the limiting factor for performance and memory usage in signed distance (in a profile on some test data just now it was 8.0/12.9 seconds). In previous issues we had thrown around using an octree, fixed-voxel-bucket, or similar, but never found a suitable dependency or an easy way to implement it with vectorized checks. PRs or ideas definitely appreciated!
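For what it's worth, a fixed-voxel-bucket broad phase along those lines can be vectorized in numpy without a new dependency. This is only a minimal sketch: the `voxel_buckets` helper and the `pitch` parameter are assumptions, and it hashes centroids only, so a real version would also need to handle triangles that span multiple voxels.

```python
import numpy as np

def voxel_buckets(triangle_centroids, pitch=1.0):
    """Sketch of a fixed-voxel broad phase (hypothetical helper).

    Hashes each triangle's centroid into a voxel of edge length
    `pitch` using vectorized numpy ops, and returns a dict mapping
    integer voxel key -> array of triangle indices in that voxel.
    Ignores triangles that span multiple voxels.
    """
    keys = np.floor(np.asarray(triangle_centroids) / pitch).astype(np.int64)
    # sort triangle indices so identical voxel keys are adjacent
    order = np.lexsort(keys.T)
    sorted_keys = keys[order]
    # boundaries where the voxel key changes between consecutive rows
    change = np.any(np.diff(sorted_keys, axis=0) != 0, axis=1)
    splits = np.nonzero(change)[0] + 1
    groups = np.split(order, splits)
    return {tuple(keys[g[0]]): g for g in groups}
```

A narrow phase would then only test a query point against triangles in its own voxel and the neighboring ones, rather than against every triangle.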
How about using a scipy.spatial.KDTree? https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.html
Yeah I couldn't figure out a way to use
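For reference, one way a KD-tree could serve as a broad phase is to index triangle centroids and pull the k nearest candidate triangles per query point. This is a heuristic sketch (the `candidate_triangles` helper and `k` are assumptions): centroid distance does not bound the true point-to-triangle distance, which may be exactly the difficulty with using a KD-tree here.

```python
import numpy as np
from scipy.spatial import cKDTree

def candidate_triangles(triangle_centroids, query_points, k=8):
    """Hypothetical broad phase: KD-tree over triangle centroids.

    Returns, for each query point, the indices of the k triangles
    whose centroids are nearest. A correct narrow phase would still
    have to verify candidates, since the closest centroid does not
    guarantee the closest triangle.
    """
    tree = cKDTree(triangle_centroids)
    # query in one vectorized call; idx has shape (n_queries, k)
    _dist, idx = tree.query(query_points, k=k)
    return idx
```

Memory stays O(n_queries * k) instead of O(n_queries * n_triangles), which is the scaling the earlier comment flagged.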
While using
mesh.nearest.signed_distance
I found that for some meshes this method uses a lot of memory. As a result, all of my laptop's memory was consumed and the process was killed. I found a workaround: calculate the distances in chunks.

My suggestion: would it be beneficial to add an option for processing mesh.nearest.signed_distance iteratively in memory-constrained environments? Or maybe add a memory constraint to the algorithm used?
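The reporter's original chunking code isn't shown, but a wrapper along the lines described might look like this (a minimal sketch: `signed_distance_chunked` and `chunk_size` are hypothetical names, not part of trimesh's API; it only assumes `mesh.nearest.signed_distance` accepts an (n, 3) array and returns n distances):

```python
import numpy as np

def signed_distance_chunked(mesh, points, chunk_size=10_000):
    """Evaluate mesh.nearest.signed_distance in fixed-size chunks
    so peak memory scales with chunk_size, not with len(points).

    Hypothetical helper; chunk_size would need tuning per machine.
    """
    points = np.asarray(points, dtype=np.float64)
    out = np.empty(len(points), dtype=np.float64)
    for start in range(0, len(points), chunk_size):
        stop = start + chunk_size
        # each call only allocates the broad-phase state for one chunk
        out[start:stop] = mesh.nearest.signed_distance(points[start:stop])
    return out
```

This trades some speed (repeated broad-phase queries) for a bounded memory footprint, which matches the "iterative processing in memory-constrained environments" suggestion above.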