mergeVertices truncates coordinates which misses some neighbors #24621

Closed
erasta opened this issue Sep 10, 2022 · 9 comments · Fixed by #26746
Comments

@erasta
Contributor

erasta commented Sep 10, 2022

Describe the bug

BufferGeometryUtils.mergeVertices() truncates all coordinates, causing it to miss neighboring vertices that are closer than the threshold given to mergeVertices().

Two vertices should merge when all of their coordinates (x, y, z, u, v, nx, etc...) are within the threshold of each other. But if the two values fall on opposite sides of an integer multiple of the threshold, the vertices are not merged.
For example, if the threshold is 1e-4 and vertices a and b are identical except for x:

  • if a.x = 1 and b.x = 1.000001 they would merge.
  • if a.x = 1 and b.x = 0.999999 they would not merge.

This is caused by the hash created for each vertex, which uses the ~~ operator to truncate/floor each of the vertex coordinates.
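
A minimal sketch of the binning (assuming the default threshold of 1e-4 and the shiftMultiplier derived from it, as in BufferGeometryUtils):

const threshold = 1e-4;
const shiftMultiplier = Math.pow( 10, Math.log10( 1 / threshold ) ); // 1e4

const bin = ( v ) => ~ ~ ( v * shiftMultiplier ); // truncate toward zero

bin( 1.0 );      // 10000
bin( 1.000001 ); // 10000 -> same bin as 1.0, merges
bin( 0.999999 ); // 9999  -> different bin, does not merge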

(I wanted to hear others' thoughts on the matter before I just go ahead and solve this in a PR.)

To Reproduce

Steps to reproduce the behavior:

  1. Go to https://codesandbox.io/s/threejs-cubes-merge-problem-zhtbtt?file=/src/index.js
  2. See that the lower cube is not merged correctly.

Code

const material = new THREE.MeshStandardMaterial({ color: 0x00ff00, side: THREE.DoubleSide });
{
    // Top cube: first x-coordinate nudged up by 1e-6; merges correctly.
    let geometry = new THREE.BoxGeometry(1, 1, 1);
    geometry.deleteAttribute("normal");
    geometry.deleteAttribute("uv");
    geometry.attributes.position.array[0] += 0.000001;
    geometry = BufferGeometryUtils.mergeVertices(geometry);
    geometry.computeVertexNormals();
    const cube = new THREE.Mesh(geometry, material);
    cube.position.y = 2;
    scene.add(cube);
}
{
    // Middle cube: unmodified; merges correctly.
    let geometry = new THREE.BoxGeometry(1, 1, 1);
    geometry.deleteAttribute("normal");
    geometry.deleteAttribute("uv");
    geometry = BufferGeometryUtils.mergeVertices(geometry);
    geometry.computeVertexNormals();
    const cube = new THREE.Mesh(geometry, material);
    cube.position.y = 0;
    scene.add(cube);
}
{
    // Bottom cube: first x-coordinate nudged down by 1e-6; fails to merge.
    let geometry = new THREE.BoxGeometry(1, 1, 1);
    geometry.deleteAttribute("normal");
    geometry.deleteAttribute("uv");
    geometry.attributes.position.array[0] -= 0.000001;
    geometry = BufferGeometryUtils.mergeVertices(geometry);
    geometry.computeVertexNormals();
    const cube = new THREE.Mesh(geometry, material);
    cube.position.y = -2;
    scene.add(cube);
}

Expected behavior

There are three almost-identical cubes; mergeVertices() is applied to their positions (after removing normals and uvs).
The only difference between them is the x-coordinate of the first vertex: 0.500001 on the top cube, 0.5 on the middle, and 0.499999 on the bottom.
Since the differences are well below the 1e-4 default threshold of mergeVertices(), we would expect all three to merge the same way.
But the top and middle vertices are merged with their neighbors (both truncate to 0.5), while the bottom one is not (it truncates to 0.4999).

Screenshots

[Screenshot: the bottom cube renders differently because its vertices were not merged]

Platform:

  • Device: Desktop
  • OS: MacOS
  • Browser: Chrome
  • Three.js version: r144

(the problem exists on all platforms)

@donmccurdy
Collaborator

donmccurdy commented Sep 10, 2022

It sounds like we should be explicitly rounding to N digits, rather than truncating? That sounds like a bug to me as well, and a PR to fix it would be welcome I think. Thank you!

FWIW I recently implemented a version of this that explicitly compares against the tolerance value rather than rounding or truncating. It isn't as slow as O(n^2) if you sort the vertices first and are selective about which comparisons you make using sort order, but it's still quite a bit slower than what three.js currently does.

@gkjohnson
Collaborator

It sounds like we should be explicitly rounding to N digits, rather than truncating?

This will have the same problem - both truncating and rounding bin the space and merge vertices that share a bin. There will always be cases where two numbers are very close to each other but still fall on opposite sides of a bin boundary (whether the binning is rounding or truncation).

FWIW I recently implemented a version of this that explicitly compares against the tolerance value rather than rounding or truncating. It isn't as slow as O(n^2) if you sort the vertices first and are selective about which comparisons you make using sort order, but it's still quite a bit slower than what three.js currently does.

Performance is the reason I chose to use the truncation method. If there are other methods that are comparably performant, I can support a change (or possibly an option, if the performance is noticeably worse). I've been wondering if there's a way to rely on two binning methods (i.e. rounding and truncation) to enable a fast search for neighboring vertices without falling into this issue.
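
A rough 1D illustration of that idea (a sketch, not existing three.js code): two values within half a bin of each other always share at least one of the two keys, because the truncated and rounded grid boundaries interleave at half-threshold offsets.

const threshold = 1e-4;
const m = 1 / threshold;
const truncKey = ( v ) => Math.trunc( v * m );
const roundKey = ( v ) => Math.round( v * m );

// 0.999999 and 1.000001 straddle a truncation boundary...
truncKey( 0.999999 ); // 9999
truncKey( 1.000001 ); // 10000
// ...but share a rounding bin.
roundKey( 0.999999 ); // 10000
roundKey( 1.000001 ); // 10000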

@erasta
Contributor Author

erasta commented Sep 11, 2022

Hi @gkjohnson and thank you for writing this method!
I agree with you and @donmccurdy that:

It sounds like we should be explicitly rounding to N digits, rather than truncating?

This will have the same problem - both truncating and rounding bin the space and merge vertices that share a bin. There will always be cases where two numbers are very close to each other but still fall on opposite sides of a bin boundary (whether the binning is rounding or truncation).

Rounding would present the same problem, as the bin boundaries just shift down by half a threshold.
That would still be a bug, just a slightly less severe one: the bin around 1.0 is currently [1.0, 1.0001), and with rounding it would be [0.99995, 1.00005), which is less sensitive to an epsilon of 1e-6 being subtracted from integer values.

sort the vertices first and are selective about which comparisons you make using sort order

That sounds interesting; as far as I know there are at least 3 dimensions to sort the points by, and there could be many more. Can you elaborate on your sorting metric?

Performance is the reason I chose to use the truncation method.

The other methods I can think of are:

  1. Checking whether adjacent cells of the hashtable exist and comparing exact Euclidean distances - still O(1) per lookup, but with a constant factor of at least 26 extra cell checks (see the sketch after this list).
  2. Binning the cells with a larger threshold so each bin holds multiple vertices, then searching exhaustively inside a bin, or checking neighboring cells when close to an edge.
  3. Octree / BVH / other space-partitioning tree structures - this should solve the problem in theoretical O(n log n) time (the current hashtable search is supposedly O(n log n)), but the amount of allocations could degrade performance greatly. There's an Octree for triangles in the examples; how fast is it?
  4. A flat linear representation of an octree - I need to research whether this is possible, but it could be a winner.
  5. Intersecting binning methods (as suggested) would need at least 8 bin searches, one per truncated/rounded combination over the three axes (2^3 = 8), which is an optimization of (1) from x27 to x8.
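
A rough sketch of option 1, with assumed names (not existing three.js code): a hash grid with cell size equal to the threshold, probing the 27 surrounding cells and falling back to an exact per-coordinate comparison.

function findMergeTarget( cells, x, y, z, threshold ) {
    // cells: Map from "ix,iy,iz" strings to arrays of previously inserted vertices
    const ix = Math.floor( x / threshold );
    const iy = Math.floor( y / threshold );
    const iz = Math.floor( z / threshold );
    for ( let dx = -1; dx <= 1; dx++ ) {
        for ( let dy = -1; dy <= 1; dy++ ) {
            for ( let dz = -1; dz <= 1; dz++ ) {
                const cell = cells.get( `${ix + dx},${iy + dy},${iz + dz}` );
                if ( !cell ) continue;
                for ( const v of cell ) {
                    if ( Math.abs( v.x - x ) <= threshold &&
                         Math.abs( v.y - y ) <= threshold &&
                         Math.abs( v.z - z ) <= threshold ) return v; // exact check
                }
            }
        }
    }
    return null; // no neighbor within the threshold; insert as a new vertex
}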

@donmccurdy
Collaborator

donmccurdy commented Sep 11, 2022

That sounds interesting; as far as I know there are at least 3 dimensions to sort the points by, and there could be many more. Can you elaborate on your sorting metric?

I'm just sorting on one axis; you can see the implementation here:

https://github.com/donmccurdy/glTF-Transform/blob/8dd601845270686279dbf0b79f4e646bf6b89394/packages/functions/src/weld.ts#L115-L133

That could be improved, on average, by choosing the axis with the widest range.
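
For reference, a condensed sketch of that idea (my simplification with assumed names, not the linked code):

function weldBySort( positions, threshold = 1e-4 ) {
    // positions: flat [x0, y0, z0, x1, y1, z1, ...] array
    const count = positions.length / 3;
    const order = Array.from( { length: count }, ( _, i ) => i )
        .sort( ( a, b ) => positions[ 3 * a ] - positions[ 3 * b ] );

    const remap = new Array( count );
    for ( let i = 0; i < count; i++ ) {
        const vi = order[ i ];
        remap[ vi ] = vi;
        // walk backward only while candidates stay within the threshold on the sort axis
        for ( let j = i - 1; j >= 0; j-- ) {
            const vj = order[ j ];
            if ( positions[ 3 * vi ] - positions[ 3 * vj ] > threshold ) break;
            if ( Math.abs( positions[ 3 * vi + 1 ] - positions[ 3 * vj + 1 ] ) <= threshold &&
                 Math.abs( positions[ 3 * vi + 2 ] - positions[ 3 * vj + 2 ] ) <= threshold ) {
                remap[ vi ] = remap[ vj ]; // merge into the earlier vertex's representative
                break;
            }
        }
    }
    return remap; // remap[ v ] = index of the vertex that v merges into
}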

Even better (as you suggest) would be to build a BVH or similar, so you can search outward from any vertex. @gkjohnson would know better than me on this, but I believe the cost would be O(n log n) for this approach. It is probably more complex to implement, however, so I'm not sure we want to build and maintain that approach in BufferGeometryUtils...

There is one other, related, issue I've noticed in BufferGeometryUtils.mergeVertices — using the same tolerance for all vertex attributes isn't optimal. The choice of tolerance=1e-4 is generally a good default for positions, but a good default for vertex normals would probably be a bit larger. Ideally default tolerances would be based on the range of each attribute.

@gkjohnson
Collaborator

gkjohnson commented Sep 12, 2022

That would still be a bug, just a slightly less severe one: the bin around 1.0 is currently [1.0, 1.0001), and with rounding it would be [0.99995, 1.00005), which is less sensitive to an epsilon of 1e-6 being subtracted from integer values.

I'm not sure I follow this. Rounding vs truncation just shifts where the bin boundaries are - it will just change where failures occur. It may make some models work more correctly in practice (i.e. your box would merge correctly with rounding instead of truncation), but numerically it's the same issue.

using the same tolerance for all vertex attributes isn't optimal.

Agreed - I'd support changing the tolerance option to take a map of attribute names to tolerance values if it's posing a problem somewhere.
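
The option could look something like this (purely hypothetical shape - mergeVertices currently takes a single numeric tolerance):

// hypothetical per-attribute tolerances - not an existing API
geometry = BufferGeometryUtils.mergeVertices( geometry, {
    position: 1e-4,
    normal: 1e-2, // normals tolerate a larger threshold, per the comment above
    uv: 1e-4
} );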

--

Building a spatial data structure seems like overkill and expensive for this, especially for a three.js example file. three-mesh-bvh can be built relatively fast and performs fast spatial queries - I wouldn't be against adding a utility to that project for merging vertices, but I'd hope there'd be a simpler solution. I'd prefer to start by taking a look at how other engines, tools, or papers address this. Do we have any reference implementations we can look at? MeshLab and Blender are both open source, and there are certainly more.

@donmccurdy
Collaborator

donmccurdy commented Sep 14, 2022

I would guess that tools like Blender and MeshLab are building a half-edge or similar data structure, without GPU-related constraints around per-face normals on the same vertex. Perhaps there's some way to build and maintain such a data structure on top of BufferGeometry, which could be an interesting problem in its own right, but it may not be any cheaper or simpler than spatial indexing.

@erasta
Contributor Author

erasta commented Sep 14, 2022

building a half-edge or similar data structure, without GPU-related constraints around per-face normals on the same vertex.

@donmccurdy I actually just implemented a half-edge structure on top of three.js's ConvexHull sub-classes, and had to implement mergeVertices on positions only, since most built-in geometries are non-indexed. So the merging is not assisted by the DCEL but rather the other way around: the merge is needed to find edge twins. (If you want, I can merge my code into this repo, but I'm not sure that would be deemed fitting.)
I can report that @gkjohnson's truncation method is about 6x faster than using an Octree.
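
To illustrate why the merge comes first (a hedged sketch with assumed names, not my actual code): once vertices are remapped to canonical indices, twin half-edges can be paired through a map keyed by the directed edge.

function linkTwins( indices ) {
    // indices: flat triangle list using merged (canonical) vertex indices
    const open = new Map(); // "a,b" -> half-edge id still waiting for its twin
    const twin = new Array( indices.length ).fill( -1 );
    for ( let i = 0; i < indices.length; i += 3 ) {
        for ( let e = 0; e < 3; e++ ) {
            const a = indices[ i + e ];
            const b = indices[ i + ( e + 1 ) % 3 ];
            const id = i + e;
            const opposite = open.get( `${b},${a}` );
            if ( opposite !== undefined ) {
                twin[ id ] = opposite; // found the opposite half-edge
                twin[ opposite ] = id;
                open.delete( `${b},${a}` );
            } else {
                open.set( `${a},${b}`, id );
            }
        }
    }
    return twin; // twin[ h ] = paired half-edge id, or -1 on a boundary edge
}

Without the merge, coincident but duplicated vertices would get different indices, and the (b, a) lookup would never find the twin.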

Rounding vs truncation just shifts where the bin boundaries are - it will just change where failures occur.

I meant that it would cause the problem to appear less often, especially in parametric models. However, you are correct, this is not a viable solution to the problem.

@donmccurdy
Collaborator

... and had to implement mergeVertices on positions only, since most built-in geometries are non-indexed. So the merging is not assisted by the DCEL but rather the other way around: the merge is needed to find edge twins

An important distinction here is that you only need to merge based on vertex position to create the half-edge. You can (optionally) store per-face normals and other attributes for the same vertex in the half-edge. But yeah, if the geometry is non-indexed you still need to merge positions without the assistance of a half-edge, and in any case a half-edge structure is probably more complex than we want to add to BufferGeometryUtils.

I have no issue with rounding instead of truncating if it helps in real use cases.

@gkjohnson
Collaborator

gkjohnson commented Sep 14, 2022

I meant that it would cause the problem to appear less often, especially in parametric models.

I have no issue with rounding instead of truncating if it helps in real use cases.

I agree with this. Instead of rounding, we can also shift the bin boundaries away from the round-number locations where vertices are most likely to land naturally in models. Something like changing this line so that it shifts the boundaries by the threshold epsilon or a comparable value:

~ ~ ( attribute[ getters[ k ] ]( index ) * shiftMultiplier ) + 1

// or 

~ ~ ( ( attribute[ getters[ k ] ]( index ) + tolerance ) * shiftMultiplier ) 

I'm open to other thoughts on a better way to pick an offset for these bin boundaries.

@erasta changed the title from "mergeVertice truncates coordinates which misses some neighbors" to "mergeVertices truncates coordinates which misses some neighbors" Sep 14, 2022
@Mugen87 closed this as completed Sep 13, 2023