
axis-aligned bounding box around output triangle mesh is larger than expected #21

Closed
mikeroberts3000 opened this issue Nov 29, 2016 · 5 comments


mikeroberts3000 commented Nov 29, 2016

Hello there,

I have a question about the PoissonRecon command-line program. I'm interested in sampling the output voxel grid (i.e., output by the --voxel flag) over the output triangle mesh. In other words, I'd like to color each vertex of the output triangle mesh according to the corresponding value in the voxel grid.

I am aware that the output voxel grid sampled over the output triangle mesh should always be 0, because the triangle mesh is exactly the 0 level set of the voxel grid. But I still want to be able to sample the voxel grid in this way, because I ultimately want to compute derived quantities over the voxel grid and sample these derived quantities over the triangle mesh.

Sampling the voxel grid in this way requires care, because the voxel indices need to be mapped into the coordinate system of the output triangle mesh.

In my code, I assume that the axis-aligned bounding box (AABB) of the voxel grid is exactly 1.1x the size of the AABB of the input point cloud along each dimension (i.e., I am using the default value for the --scale parameter). I also assume that the AABB of the voxel grid has the same center as the AABB of the input point cloud. Together, these assumptions uniquely determine the mapping from triangle mesh coordinates to voxel indices.
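
To make this concrete, here is a rough sketch of the mapping I have in mind (the function and variable names here are my own, not from PoissonRecon):

import numpy as np

# Sketch of the mapping implied by my assumptions: the voxel grid's AABB is the
# input point cloud's AABB scaled by 1.1 about its center, and voxel indices are
# obtained by normalizing mesh coordinates into that box.
def mesh_coords_to_voxel_indices(mesh_vertices, input_points, grid_resolution, scale=1.1):
    points_min = input_points.min(axis=0)
    points_max = input_points.max(axis=0)
    center      = 0.5 * (points_min + points_max)
    half_extent = 0.5 * scale * (points_max - points_min)
    grid_min, grid_max = center - half_extent, center + half_extent
    normalized = (mesh_vertices - grid_min) / (grid_max - grid_min)
    return np.clip((normalized * grid_resolution).astype(int), 0, grid_resolution - 1)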

These assumptions also seem to imply that the AABB of the output triangle mesh should be at most 1.1x the size of the AABB of the input point cloud along each dimension. But I am finding that this isn't the case: I am getting a larger-than-expected AABB for the output triangle mesh. In my particular case, the size of the output triangle mesh's AABB along each dimension, relative to the input point cloud's AABB, is [ 1.02068782 1.16858709 1.34913611]. Note that the relative size of the output triangle mesh's AABB is noticeably larger than the maximum expected relative size of 1.1x along the last two dimensions.

Is the voxel grid's AABB indeed 1.1x the size of the input point cloud's AABB? If so, where do the output vertices that lie so far outside the AABB of the voxel grid come from? I understand that PoissonRecon implements marching cubes, but as far as I am aware, marching cubes only produces triangles that are interior to a voxel grid (i.e., marching cubes does not extrapolate).

This behavior is potentially problematic, because it suggests that I'm not aligning my voxel grid and my output triangle mesh correctly.

I'm using PoissonRecon version 9.0 on Mac, and I'm invoking it as follows:

./PoissonRecon --in pset_mvs_si.ply --out poisson_surface_si.ply --voxel poisson_function_si.bin --depth 9 --color 16 --density --verbose

In case it's helpful, I've uploaded my input point cloud and output triangle mesh below.

Input point cloud: https://www.dropbox.com/s/ybhjsj3iejiix7c/pset_mvs_si.ply?dl=0
Output triangle mesh: https://www.dropbox.com/s/8oqe0jtkxujohqi/poisson_surface_si.ply?dl=0

Cheers,
Mike

@mkazhdan (Owner)

Hi Mike,

I think there are two things going on:

  1. In the code, the voxel grid / octree discretizes the (scaled) bounding-cube, not the bounding-box, of the point-set (Lines 366-378 of PoissonRecon.cpp.)

  2. In general, the reconstructed mesh should fit pretty snugly around the input points, so the bounding box of the points should provide a good proxy for the bounding box of the triangle mesh. However, in the case that the points do not sample a water-tight surface, the mesh may not close and the triangles can extrude out to the boundary of the working cube.

Taken in conjunction, this could create a situation where the reconstructed surface extends to the sides of the bounding cube, which (on the narrower sides of your point-set) can be far from the sides of the bounding box, giving you the larger scale factors you are seeing.

Looking at your output, that appears to be the case. (If you can view the mesh in wire-frame, these triangles will appear larger, as they are extracted from coarser cells of the octree.)
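
For reference, here is a rough sketch of what (1) amounts to (just the idea, not the actual code at those lines):

import numpy as np

# Sketch of the behavior (not the actual PoissonRecon code): the octree / voxel grid
# discretizes a cube whose side is the scaled longest side of the point-set's bounding
# box, centered on the bounding box, rather than the (non-cubical) bounding box itself.
def bounding_cube(points, scale=1.1):
    points_min, points_max = points.min(axis=0), points.max(axis=0)
    center = 0.5 * (points_min + points_max)
    side = scale * (points_max - points_min).max()    # one side length for all three axes
    return center - 0.5 * side, center + 0.5 * side   # min / max corners of the cube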

-- Misha

@mikeroberts3000 (Author)

Ah, makes total sense, thanks so much for the pointers! Especially to the lines of code where this scaling stuff is happening :)


mikeroberts3000 commented Aug 22, 2017

Hi @mkazhdan,

I'd like to revisit this issue. I am again trying to bring the volume returned by the --voxel flag into the same coordinate system as my input point cloud, but I am having problems aligning these two representations. My question here is informed by your previous post, which was very helpful.

For debugging purposes, I have transformed my point cloud exactly as suggested by the GetPointXForm function (see here). After performing this transformation, my point cloud lies within the unit cube from (0,0,0) to (1,1,1). The exact coordinate-wise min and max of my transformed point cloud is as follows:

Coordinate-wise min of transformed point cloud: [ 0.04545449  0.21783513  0.14679344]
Coordinate-wise max of transformed point cloud: [ 0.95454544  0.78216463  0.85320663]

This intermediate result seems correct to me. First, my point cloud has been transformed so that it lies entirely within the unit cube from (0,0,0) to (1,1,1). Second, the aspect ratio of the transformed point cloud is the same as in my original point cloud. Third, there is a symmetric amount of padding along each coordinate, so my transformed point cloud lies at the center of the unit cube, which is sensible. Fourth, there is almost exactly 5% padding on each side along the widest coordinate, which makes sense because I'm using the default value of 1.1x for the --scale parameter. So far, so good.
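
For reference, here is the quick arithmetic behind that 5% figure:

scale = 1.1
# the widest axis of the point cloud occupies 1/scale of the unit cube,
# so the padding on each side of that axis is:
padding_per_side = 0.5 * (1.0 - 1.0 / scale)
print(padding_per_side)  # 0.0454545..., matching the observed min of 0.04545449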

However, I am having problems aligning the Poisson volume to the transformed point cloud without resorting to magic fudge factor constants. I would expect the Poisson volume to extend from (0,0,0) to (1,1,1), and for each voxel to be isotropic. But when I try to visualize the volume, I need to set its spatial extent using the following constants:

# note the magic fudge factor constants
poisson_func_extent_min_coords = array([0.00, 0.00, 0.15])
poisson_func_extent_max_coords = array([1.00, 0.78, 1.00])

Using these magic constants, the transformed point cloud and the volume line up almost perfectly, as shown in the images below. The zero level set of the volume, transformed to have the above spatial extent, is shown in green. The point cloud is shown in blue. The RGB lines are unit-length coordinate axes for XYZ. Note that the blue and green representations agree almost exactly on silhouette boundaries. I am rendering with a standard perspective projection using the Mayavi contour3d and points3d functions.

(Three screenshots omitted: the transformed point cloud overlaid on the zero level set of the volume.)

But where do these magic constants come from? They do not seem to follow directly from the aspect ratio of the transformed point cloud. Is there some other scaling mechanism in the PoissonRecon code that would make the volume output by the --voxel flag not extend from (0,0,0) to (1,1,1)?


mkazhdan commented Aug 22, 2017 via email


mikeroberts3000 commented Aug 22, 2017

Hi @mkazhdan, thanks so much for your help. I figured out the problem: it was caused by a bug on my end.

When visualizing isosurfaces using Mayavi's contour3d function, the extent parameter actually specifies the spatial extent of the triangle mesh obtained using marching cubes, not the spatial extent of the underlying volume, which can lead to some confusion. In case anyone else is reading this and having similar issues, the following code is wrong:

from numpy import array
import mayavi.mlab

poisson_func_extent_min_coords = array([0.0, 0.0, 0.0])
poisson_func_extent_max_coords = array([1.0, 1.0, 1.0])

poisson_func_extent = \
    poisson_func_extent_min_coords[0], poisson_func_extent_max_coords[0], \
    poisson_func_extent_min_coords[1], poisson_func_extent_max_coords[1], \
    poisson_func_extent_min_coords[2], poisson_func_extent_max_coords[2]

# might be correctly or incorrectly scaled depending on the data in poisson_func
mayavi.mlab.contour3d(poisson_func, extent=poisson_func_extent, opacity=0.5, contours=[0.0])

But the following code is right:

from numpy import mgrid
import mayavi.mlab

# poisson_func is the volume loaded from the --voxel output; poisson_func_size is its resolution
X, Y, Z = mgrid[0:1:1j*poisson_func_size, 0:1:1j*poisson_func_size, 0:1:1j*poisson_func_size]

# will be correctly scaled regardless of the data in poisson_func
mayavi.mlab.contour3d(X, Y, Z, poisson_func, opacity=0.5, contours=[0.0])

Anyway, now I'm getting a perfectly aligned volume and point cloud with no magic constants. Yay! 😄
