Overlap removal #289

Open
elalish opened this issue Nov 22, 2022 · 5 comments
Labels
enhancement New feature or request

Comments

elalish (Owner) commented Nov 22, 2022

This will be a large feature; I'm not sure when I'll have time to tackle it, but I want to write down my thoughts since I now have a sketch of the algorithm.

Removing overlaps would have two major uses: as the final stage of a manifold-fixing algorithm for non-manifold input meshes, and as a cleanup after Warp() that would take care of any intersections it produces. Technically it could also replace the Batch Boolean algorithm, but that's not a good idea as it probably won't be as fast.

The paper this library's Boolean algorithm is based on makes clear that it only works for two input manifolds, not for removing the self-intersections of a single manifold. The reason is that the paper's method of linking edges can be done independently and still guarantee a manifold result (which is why the algorithm is parallelizable), but it gives examples that break this guarantee in the single-manifold case.

However, the first half of the Boolean should still work basically the same. The collider will need to be updated so it can automatically skip intersections between a triangle and any edges that share a vertex with it. Then the calculation of intersection points and the winding numbers of vertices can proceed mostly as it currently does. The verts will check for shadowing with their neighboring triangles and use those neighbors' results to determine which two winding numbers this surface separates. Only verts separating winding 0 and 1 will be kept (strictly-positive fill rule).
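
As a small illustration of that collider tweak, here is a hedged sketch of the skip test with hypothetical index types (not Manifold's actual collider structures): any broad-phase edge-triangle candidate pair where the edge touches one of the triangle's verts is dropped before the narrow phase.

```cpp
#include <array>

// Hypothetical index-based types; the real collider structures differ.
struct Edge {
  int startVert, endVert;
};
using TriVerts = std::array<int, 3>;

// True if the edge shares a vertex with the triangle, in which case the
// collider should skip this edge-triangle candidate pair entirely.
bool SharesVert(const Edge& edge, const TriVerts& tri) {
  for (int v : tri) {
    if (v == edge.startVert || v == edge.endVert) return true;
  }
  return false;
}
```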

The difference is there are a few new vertices that will not be found by these intersections of triangles with edges. These new verts will be the intersection of three non-neighboring triangles, which will be findable because they are also the intersection of three new edges, and this intersection exists if and only if the three new edges share a single set of three triangles as neighbors.
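
To make that condition concrete, here is a hedged sketch with hypothetical types (not the library's API): tag each new edge with the unordered pair of triangles whose intersection produced it; a triple {A, B, C} can host a triplet vert only when all three pairwise intersection edges exist, so enumerating those triples gives the candidates, which the neighbor-sharing check described above then confirms.

```cpp
#include <algorithm>
#include <array>
#include <map>
#include <set>
#include <vector>

// Hypothetical representation: each new edge is the intersection of two
// non-neighboring triangles, identified by their mesh indices.
struct NewEdge {
  int triA, triB;
};

// Enumerate triangle triples {A, B, C} for which all three pairwise
// intersection edges exist - the candidates that can host a triplet vert.
std::vector<std::array<int, 3>> TripletCandidates(
    const std::vector<NewEdge>& newEdges) {
  std::map<int, std::set<int>> cuts;  // triangle -> triangles it intersects
  for (const NewEdge& e : newEdges) {
    cuts[e.triA].insert(e.triB);
    cuts[e.triB].insert(e.triA);
  }
  std::vector<std::array<int, 3>> triples;
  for (const NewEdge& e : newEdges) {
    const int a = std::min(e.triA, e.triB);
    const int b = std::max(e.triA, e.triB);
    // Any triangle c intersected by both a and b closes the triple; c > b
    // ensures each triple is reported exactly once.
    for (int c : cuts[a]) {
      if (c > b && cuts[b].count(c) > 0) triples.push_back({a, b, c});
    }
  }
  return triples;
}
```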

The bigger problem for manifoldness is that previously the new verts on a given partially-retained edge could be paired up arbitrarily (and independently) and still guarantee a manifold result. In the case of these new triple-intersections, that is no longer true - they depend on each other topologically. This means the assembly algorithm cannot be parallelized (that I know of), but by working serially it should be possible to start at an intersection and work our way along each seam, checking any previous neighbor pair-ups and basing the new pair-ups on those to maintain topological consistency.

Additionally, another type of new edge can form, which has only one intersection vert. This happens when a manifold is folded, producing an intersection curve that terminates on both ends instead of forming a loop. When a vert opposite the intersected edge (the other vert of either neighboring triangle) is shared with the triangle it is intersecting, this new edge will go from the intersection vert to that opposite vert.

pca006132 (Collaborator) commented

Apart from Warp, Refine can also cause overlaps when the surfaces are too close.

elalish (Owner, Author) commented Sep 11, 2023

The tricky part of making this robust is the pair-ups of these new triplet verts (the name I'll give to verts at the intersection of three tris). In the Boolean version, we know for each vert on a retained edge whether it is a start or an end vert, and this pre-arranged consistency is fundamentally where our manifoldness guarantee stems from. In the case of these triplet verts, it is not known for each new edge whether they are start or end verts, only that within each partially-retained face the two edges linking to that vert must be opposite (one start and one end), while on a given new edge there must be an equal number of start and end verts.

This looks like a kind of binary integer programming problem, except all the constraints are equalities and there is no cost function - we only want an arbitrary valid solution. I'm not too concerned about NP-hardness, since in practice these knots of related constraints will be quite small and local. I'm guessing NP-hard means there is no clever way to solve this, so we probably just need a little decision tree we can search exhaustively.

pca006132 (Collaborator) commented

No, the problem may just be NP-hard in general, but efficient algorithms exist for special cases. Many NP-hard graph problems can be solved efficiently on planar or tree-like graphs. You can have a look at parameterized algorithms if you are interested. One important property is whether the problem can be split into subproblems locally, and here it seems that it can.

elalish (Owner, Author) commented Sep 11, 2023

Well, actually, on further reflection, I believe this is in fact an XOR-SAT problem, which is in P because it can be solved by Gaussian elimination. Not sure if that's the route we want to take, but it's nice that this isn't as hard as it seemed.
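
For reference, a minimal sketch of XOR-SAT via Gaussian elimination over GF(2), with an illustrative data layout rather than a proposal for the library's internals: each constraint is a row of variable coefficients plus a parity bit, and a satisfying assignment (if one exists) falls out of the reduced system with free variables set to false.

```cpp
#include <optional>
#include <utility>
#include <vector>

// One constraint: XOR of the variables with coeff[v] == true equals rhs.
struct XorConstraint {
  std::vector<bool> coeff;  // size == numVars
  bool rhs;
};

// Gaussian elimination over GF(2). Returns one satisfying assignment, or
// nullopt if the system is inconsistent. O(rows * numVars^2), which is fine
// for small, local constraint knots.
std::optional<std::vector<bool>> SolveXorSat(std::vector<XorConstraint> rows,
                                             int numVars) {
  std::vector<int> pivotRowOfCol(numVars, -1);
  int r = 0;
  for (int c = 0; c < numVars && r < (int)rows.size(); ++c) {
    // Find a row at or below r with a coefficient in column c.
    int pivot = -1;
    for (int i = r; i < (int)rows.size(); ++i) {
      if (rows[i].coeff[c]) { pivot = i; break; }
    }
    if (pivot < 0) continue;
    std::swap(rows[r], rows[pivot]);
    // Eliminate column c from every other row (full reduction).
    for (int i = 0; i < (int)rows.size(); ++i) {
      if (i == r || !rows[i].coeff[c]) continue;
      for (int j = 0; j < numVars; ++j)
        rows[i].coeff[j] = rows[i].coeff[j] != rows[r].coeff[j];
      rows[i].rhs = rows[i].rhs != rows[r].rhs;
    }
    pivotRowOfCol[c] = r++;
  }
  // A zero row with rhs == true means the constraints are contradictory.
  for (const XorConstraint& row : rows) {
    bool anyCoeff = false;
    for (bool b : row.coeff) anyCoeff = anyCoeff || b;
    if (!anyCoeff && row.rhs) return std::nullopt;
  }
  // Free variables default to false; pivot variables read off their rows.
  std::vector<bool> x(numVars, false);
  for (int c = 0; c < numVars; ++c) {
    if (pivotRowOfCol[c] >= 0) x[c] = rows[pivotRowOfCol[c]].rhs;
  }
  return x;
}
```

Since these knots of related constraints are expected to be small and local, each one could be solved independently with its own tiny system.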

elalish (Owner, Author) commented May 14, 2024

Okay, upon further reflection, the above approach isn't going to work. While an XOR-SAT formulation might fix issues akin to figure 6.5 from Julian Smith's paper, I can't see a straightforward way to deal with the more common problems shown in figure 6.4.

I'm now inspired by his approach to robustifying the Bentley-Ottmann algorithm for 2D overlap removal, from Section 7.4 and after. Here, instead of using shadow functions and symbolic perturbation to make every comparison a reliable boolean value, there are three options: above, below, or on. On is any comparison that is not definitely above or below (think of which side of a line a vert is on - I'm not trying to imply a coordinate direction). For a vert on an edge, that edge is split, so degeneracies are effectively snapped together. This has an extra benefit: it keeps degeneracies from entering the output in the first place (very much unlike the Boolean), which means we shouldn't have to spend as much time in our serial simplification post-process.
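
A hedged 2D sketch of that three-way comparison (the names and ε handling are illustrative only): rather than forcing a boolean answer, the on result flags the vert for snapping and edge-splitting.

```cpp
#include <cmath>

struct Vec2 { double x, y; };

enum class Side { Above, Below, On };

// Classify vert p against the line through edge a->b with tolerance eps.
// The Above/Below labels just name the two half-planes (no coordinate
// direction implied); On means p is within eps of the line, so it is not
// reliably on either side and the edge should be split at / snapped to p.
// Assumes a != b (zero-length edges have already been collapsed).
Side Classify(const Vec2& a, const Vec2& b, const Vec2& p, double eps) {
  const double dx = b.x - a.x, dy = b.y - a.y;
  const double cross = dx * (p.y - a.y) - dy * (p.x - a.x);
  const double dist = cross / std::sqrt(dx * dx + dy * dy);  // signed distance
  if (dist > eps) return Side::Above;
  if (dist < -eps) return Side::Below;
  return Side::On;
}
```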

I'm not a huge fan of the sweep-line algorithm, especially since extending it to 3D sounds like an indexing nightmare and there's no opportunity for parallelism. However, I think with Julian's degenerate snapping there's no need - the sweep line was a nice way to get from O(n^2) to O(n log n), but we can do that with our BVH tree anyway (and it's parallel).

So, for the 2D problem (output an ε-valid polygon with e.g. a positive fill rule from an invalid polygon), it could look like this:

  1. Merge all verts that are within ε of each other. ε is chosen as the maximum of a safe bound on inexact arithmetic error and the smallest feature size that is desired to retain. Care must be taken with chains of nearby verts so they don't move too far: merged verts need to use a weighted-average position, and distances must be calculated against up-to-date positions. The broad phase can be parallel, but the narrow phase probably needs to be serial for this reason.
  2. Discard collapsed edges - the remaining set are the "original" edges.
  3. Every edge builds an ordered list of all verts that are within ε of it. The order is reliable even with inexact arithmetic because any verts that were near each other have already been collapsed.
  4. Find new edge-edge intersections - they cannot include any where an end-vert of one edge is in the vert list of the other. When sorting the new vert into each edge's list, if it is within ε of its neighbor, it is not inserted and that neighbor vert is inserted instead into the other edge's list. There can be no nearly-parallel cases, and so the cyclic ordering of the edges around an intersection vertex is always clear, even with inexact arithmetic.
  5. Break the edges into sub-edges according to their internal sorted vert list. None of these sub-edges intersect each other, but there can be duplications. Combine duplicates into a single edge with a multiplicity value that shows how much the winding number is incremented across this edge. Any edges with multiplicity zero (opposite directions cancel) are discarded.
  6. Output only the set of edges that separate winding numbers <=0 from >0. This can be done in parallel by ray-casting vertically or horizontally from the midpoint of each edge (to avoid the ray being parallel to its edge) to evaluate the winding number (see the sketch after this list). This is robust using just symbolic perturbation to check edge shadowing, since no verts or edges are close to the midpoint vert.
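
A hedged sketch of the step-6 test with illustrative types; it assumes a signed-multiplicity convention (winding to the left of a directed sub-edge minus winding to its right equals the multiplicity) and a non-horizontal edge - for (near-)horizontal edges the ray would be cast vertically instead, as noted in step 6.

```cpp
#include <vector>

struct Vec2 { double x, y; };

// A sub-edge from step 5: directed, with a signed multiplicity. Assumed
// convention: winding number to the left of the directed edge minus winding
// to its right equals multiplicity.
struct SubEdge {
  Vec2 v0, v1;
  int multiplicity;
};

// Winding number at p via a horizontal ray toward +x: every edge crossing
// the ray strictly to the right of p adds +multiplicity going up and
// -multiplicity going down (half-open rule avoids double-counted endpoints).
int Winding(const Vec2& p, const std::vector<SubEdge>& edges, int skip) {
  int w = 0;
  for (int i = 0; i < (int)edges.size(); ++i) {
    if (i == skip) continue;
    const SubEdge& e = edges[i];
    const bool up = e.v0.y <= p.y && p.y < e.v1.y;
    const bool down = e.v1.y <= p.y && p.y < e.v0.y;
    if (!up && !down) continue;
    const double t = (p.y - e.v0.y) / (e.v1.y - e.v0.y);
    const double xCross = e.v0.x + t * (e.v1.x - e.v0.x);
    if (xCross > p.x) w += up ? e.multiplicity : -e.multiplicity;
  }
  return w;
}

// Keep a sub-edge iff it separates winding <= 0 from winding > 0. Assumes
// edge i is not horizontal; a real implementation would cast vertically for
// (near-)horizontal edges instead.
bool KeepEdge(const std::vector<SubEdge>& edges, int i) {
  const SubEdge& e = edges[i];
  const Vec2 mid{(e.v0.x + e.v1.x) / 2, (e.v0.y + e.v1.y) / 2};
  const int wPlusX = Winding(mid, edges, i);  // winding on the +x side
  // The other side differs by +/- multiplicity depending on edge direction.
  const int wMinusX = wPlusX + (e.v1.y > e.v0.y ? 1 : -1) * e.multiplicity;
  return (wPlusX > 0) != (wMinusX > 0);
}
```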

For the 3D case, things get a bit more complicated, but follow the same general concept. In 2D, after step 1, verts went from having exactly 2 edges to having an even number of attached edges. Now edges will go from 2-manifold to even-manifold (in the number of triangles they're attached to). And instead of outputting sub-edges that divide winding numbers 0 and 1, we need to output sub-polygons of input triangles that divide winding numbers 0 and 1. These polygons will be triangulated as a post-process, just like in the Boolean. Steps:

  1. Same as 1 above.
  2. Same as 2 above, though triangles also get collapsed and discarded naturally. Edges with the same two verts are merged and create a list of their attached triangles (an even number).
  3. Same as 3 above.
  4. Same as 4 above - this is now another degenerate case, as edges would all be skew with input in general position.
  5. Every triangle builds a list of verts that are within ε of its interior. None need to be merged.
  6. Find new edge-tri intersections - they cannot include any pairs already covered by steps 4 or 5. Insert the new vert into the edge and tri lists, unless it is close to one of the existing verts in the list, in which case it is merged instead, as in 4.
  7. Add a new edge for each tri-tri intersection. The endpoints will be exactly the two verts that are shared between the edges of one tri and the edges and interior of the other tri, and vice versa. I believe this will always be true based on the merging we've done, but a proof would be nice.
  8. Add to this new edge vert list any verts from the interior of either triangle that are within ε of it.
  9. Find new intersection verts between new edges within each tri. Insert or merge the new vert into both edges' lists as in 4.
  10. Same as 2D step 5, but now each triangle also gets a list of halfedges and the sub-edges are pushed as halfedges into each of their triangles' lists (the new edges insert both a forward and a backward halfedge into each triangle).
  11. Each triangle now has a set of halfedges that looks like the output of 2D step 5. Join them into a valid polygon partition by following the cyclic ordering around each vert, which is unambiguous since no sub-edges are near parallel (see the sketch after this list).
  12. Find any polygons that are equivalent (reference all of the same vertices) and merge them into one with a multiplicity value, as in 2D step 5, discarding those that cancel out.
  13. Output only the polygons that separate winding numbers <=0 from >0. The polygons may not be convex, so a midpoint can't work for ray-casting. We could triangulate at this stage and use the midpoint of one of the interior edges, though it would mean doing twice as much triangulation as we need to. We could also ray-cast from a vert and handle the other faces that share it by checking face shadowing. That might be a better approach for 2D as well.
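
A hedged sketch of the step-11 joining, working in 2D coordinates within one triangle's plane with illustrative types (duplicate halfedges and multiplicities are ignored): sort the halfedges leaving each vert by angle, then trace loops by continuing, at the end vert of each halfedge, with the outgoing halfedge that is next clockwise from the reversed incoming direction. Because no two sub-edges at a vert are near-parallel, that angular order is unambiguous.

```cpp
#include <algorithm>
#include <cmath>
#include <map>
#include <vector>

struct Vec2 { double x, y; };

// A directed halfedge between vert indices, within one triangle's plane.
struct Halfedge {
  int startVert, endVert;
};

// Join a triangle's halfedges into closed polygon loops: at the end vert of
// each halfedge, continue with the outgoing halfedge that is next clockwise
// from the reversed incoming direction. This traces each face with its
// region kept to the left of its halfedges, partitioning the set into loops.
// Assumes every vert with an incoming halfedge also has an outgoing one.
std::vector<std::vector<int>> JoinLoops(const std::vector<Halfedge>& halfedges,
                                        const std::vector<Vec2>& verts) {
  auto angle = [&](int he) {
    const Vec2& a = verts[halfedges[he].startVert];
    const Vec2& b = verts[halfedges[he].endVert];
    return std::atan2(b.y - a.y, b.x - a.x);
  };
  // Outgoing halfedges of each vert, sorted by angle.
  std::map<int, std::vector<int>> outgoing;
  for (int i = 0; i < (int)halfedges.size(); ++i)
    outgoing[halfedges[i].startVert].push_back(i);
  for (auto& entry : outgoing) {
    std::sort(entry.second.begin(), entry.second.end(),
              [&](int a, int b) { return angle(a) < angle(b); });
  }
  // next[he] = successor of halfedge he in its polygon loop.
  std::vector<int> next(halfedges.size());
  for (int he = 0; he < (int)halfedges.size(); ++he) {
    const int v = halfedges[he].endVert;
    const Vec2& from = verts[halfedges[he].startVert];
    const double back = std::atan2(from.y - verts[v].y, from.x - verts[v].x);
    const std::vector<int>& out = outgoing[v];
    // Largest outgoing angle strictly below 'back', wrapping to the largest
    // overall - the next direction clockwise from the reversed incoming one.
    int best = out.back();
    for (auto it = out.rbegin(); it != out.rend(); ++it) {
      if (angle(*it) < back) { best = *it; break; }
    }
    next[he] = best;
  }
  // Walk the loops.
  std::vector<std::vector<int>> loops;
  std::vector<bool> used(halfedges.size(), false);
  for (int he = 0; he < (int)halfedges.size(); ++he) {
    if (used[he]) continue;
    loops.emplace_back();
    for (int cur = he; !used[cur]; cur = next[cur]) {
      used[cur] = true;
      loops.back().push_back(cur);
    }
  }
  return loops;
}
```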

This feels implementable - more serial than the Boolean, but still parallel where it really matters (the BVH broad phase). Curious if anyone can think of an example case where this could break down?
