
Manifold Library


This page describes the theory and interesting implementation details of these algorithms. As I haven't had time to write any of this up for proper peer review yet, this is as close to a paper as you'll get for now. Before the technical write-up, here is a road map of the features implemented so far and what could come next:

Road Map

  • Write a parallel mesh Boolean algorithm.
  • Write a parallel broad phase collider to speed up the Boolean.
  • Create a manifold triangulator.
  • Verify mesh Boolean's manifoldness guarantee.
  • Make triangulator geometrically robust to degeneracies.
  • Flesh out Manifold's API.
  • Change symbolic perturbation to improve behavior for coplanar faces.
  • Add extrusion constructors for Manifold.
  • Build interesting samples.
  • Upgrade to CUDA 11, Ubuntu 20.04, Assimp 5.0.
  • Import/Export 3MF & glTF.
  • Add mesh decimation to remove degenerate triangles.
  • Do a perf pass to look into simple optimizations.
  • Enable smooth interpolation of triangle meshes.
  • Add support for textures and other mesh properties.
  • Integrate code coverage results to the CI.
  • Add Python bindings.
  • Add a WASM build for web.
  • Build an SDF voxel system to generate complex Manifolds.
  • Build a web-based editor akin to OpenSCAD/OpenJSCAD.
  • Add C bindings.
  • Publish a glTF extension to save manifolds to disk.
  • Expand the Polygon API to mirror the Manifold API.
  • Add convex hull support.
  • Add slicing function to output Polygons from Manifolds.
  • Build a sweep-plane algorithm to remove self-overlaps from Manifolds.
  • Build an OpenSCAD cross compiler to import objects and libraries.
  • Publish a glTF extension for smooth interpolation of meshes.
  • Build a straight skeleton library for Polygon offsets and building roofs.

Manifoldness

What does it mean to be manifold and why should you care? Here we are dealing with triangle meshes, which are a simple form of B-rep, or boundary representation: a solid represented implicitly by defining only its boundary, or surface. The trouble is, this implicit definition of solidity is only valid if the boundary is manifold, meaning it has no gaps, holes, or edges. Think of a Möbius strip: it is a surface, but it does not represent the boundary of any solid.

This property is not important in 3D rendering, and is thus largely ignored in the animation world, and often even in the CAD world. It is important in engineering and manufacturing, because a non-manifold mesh does not define a solid, so any processing meant to analyze or build that object will either fail or become unreliable. Thus a large amount of middleware, both automated and manual, exists for fixing CAD models so they can be used downstream. Closing the design loop with powerful analysis tools like FEA and CFD in an automated optimization could vastly improve the efficiency of design, especially in conjunction with additive manufacturing's ability to easily build complex geometry. However, the lack of reliability of the computational geometry foundations, combined with the manual nature of CAD, means this level of round-trip automation is not currently feasible.

The goal of this library is to create a robust computational geometry foundation from which advanced use cases like the above can be realized.

Manifoldness definition

There are many definitions of manifoldness, most of them from the realm of mathematics. Common definitions will tell you things like: the set of manifold objects is not closed under Boolean operations (joining solids), because if you join two cubes along an edge, that edge will adjoin four surfaces instead of two, so the result is no longer manifold. This is not a terribly useful definition in the realm of computational geometry.

Instead, we choose here to separate the concepts of geometry and topology. We will define geometry as anything involving position in space; it is generally represented in floating-point and mostly refers to vertex properties like positions, normals, etc. (see Overlaps). Topology, on the other hand, is anything related to connection; it is generally represented as integers and applies to faces and edges, which refer to each other and to vertices by index.

The key thing to remember is that topology is exact, while geometry is inexact due to floating-point rounding. For this reason, we will choose a definition of manifoldness which relies solely on topology, so as to get consistent results.
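
As an illustration of this split (a minimal sketch, not the library's actual internal types), the data might be laid out like this:

```cpp
// Illustrative only: the point is that geometry lives in floats (inexact,
// subject to rounding) while topology lives in integer indices (exact).
#include <vector>

struct Geometry {               // geometric data: positions in space
  std::vector<float> position;  // 3 floats per vertex
  std::vector<float> normal;    // 3 floats per vertex (a vertex property)
};

struct Topology {               // topological data: pure connectivity
  std::vector<int> triVerts;    // 3 vertex indices per triangle
};
```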

As such we will use the same definition of manifoldness as is used in the 3D Manufacturing Format spec: 3MF Core Spec, section 4.1 (disclosure: I wrote most of the 3MF core spec while I was a member of their working group). Paraphrased:

Every edge of every triangle must contain the same two vertices (by index) as exactly one other triangle edge, and the start and end vertices must switch places between these two edges. The triangle vertices must appear in counter-clockwise order when viewed from the outside of the manifold.

This definition has the convenient property that the set of manifold meshes is closed under Boolean operations. Duplicate vertex positions are explicitly allowed, since position is a geometric property rather than a topological one.
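
For concreteness, here is a sketch of checking the edge-matching part of this definition; it is purely topological, since vertex positions are never read (the CCW orientation requirement is a separate check):

```cpp
#include <array>
#include <map>
#include <utility>
#include <vector>

// Every directed edge (a, b) must appear exactly once, and its reversed twin
// (b, a) must also appear exactly once.
bool IsManifold(const std::vector<std::array<int, 3>>& triVerts) {
  std::map<std::pair<int, int>, int> count;  // directed edge -> occurrences
  for (const auto& tri : triVerts)
    for (int i = 0; i < 3; ++i) ++count[{tri[i], tri[(i + 1) % 3]}];
  for (const auto& [edge, n] : count) {
    const auto rev = count.find({edge.second, edge.first});
    if (n != 1 || rev == count.end() || rev->second != 1) return false;
  }
  return true;
}
```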

Mesh Boolean

A mesh Boolean operation is the 3D equivalent of the logical Boolean operation, where every point in space is either true (inside the object) or false (outside). The result of a Boolean operation between two solids is another solid, where a Boolean OR is the union of the solids, AND NOT is the difference, while AND is the intersection.
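
In terms of this library's API, the mapping looks roughly like this (a sketch; see the headers for exact constructor signatures):

```cpp
#include "manifold.h"
using namespace manifold;

void BooleanExamples() {
  Manifold cube = Manifold::Cube({1, 1, 1});
  Manifold ball = Manifold::Sphere(0.6);
  Manifold u = cube + ball;  // OR: union
  Manifold d = cube - ball;  // AND NOT: difference
  Manifold i = cube ^ ball;  // AND: intersection
}
```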

The method used here to create a numerically stable, guaranteed manifold result comes from Julian Smith, whose dissertation includes not only several very powerful computational geometry algorithms, but also a very clear description of the numerical difficulties associated with computational geometry generally. I highly encourage reading it in its entirety if you want a solid grounding in this field. A very brief synopsis is that Euclidean geometry is based on real numbers, and does not hold in a finite-precision computer. Most existing methods either assume "general position" (none of the inputs are close to coincident or coplanar, which is rarely true in practice) or they use exact or arbitrary precision arithmetic, which is slow, complex, and often still suffers failures in edge cases.

Smith's approach was to instead accept the error inherent both in floating-point arithmetic and in floating-point representations (after all, even the inputs are rounded, so treating them as exact is somewhat meaningless) and focus instead on creating a result that is manifold by construction. One of the best things about his solution is that it does not rely on any kind of exact arithmetic kernel, but instead ensures that each geometric computation is based on those before it, in such a way that the same question is never asked in two different ways. This is key because while Euclidean geometry might say that one result will imply another, with floating-point error that is often no longer true, resulting in conflicting answers that derail an algorithm.

Novel contributions

My contributions, besides building this as an open source library, include parallelizing his algorithm and adding a parallel broad phase collider to get it down from O(n²) to O(n log n) - more detail below. I also changed his symbolic perturbation to give more useful and intuitive results. The Boolean does not account for any tolerance, so symbolic perturbation is used to break ties for geometry that is exactly floating-point equal in any of the three axes. For axis-aligned, coincident faces (which are very common), it is convenient to avoid creating infinitely-thin shards and holes. This version perturbs the first mesh in the surface normal direction for a Union operation and the opposite for a Difference or Intersection operation, thus ensuring that e.g. touching cubes are merged, equal-height differences produce through-holes, and a mesh minus itself produces an empty mesh.
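
For example (a sketch using the public API; the point is the coplanar-face behavior, not the exact calls):

```cpp
#include "manifold.h"
using namespace manifold;

void CoplanarExamples() {
  Manifold a = Manifold::Cube({1, 1, 1});
  Manifold b = a.Translate({1, 0, 0});  // shares an entire face with a
  Manifold merged = a + b;              // touching cubes merge into one solid
  Manifold empty = a - a;               // a mesh minus itself is empty
}
```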

A couple of key points glossed over in Smith's dissertation were the difficulties of triangulation and degenerate removal, which I have now overcome after much effort. My other contributions are in developing what I hope is a clear and pleasant API to use, including the ability to keep track of arbitrary properties on these manifolds as they undergo operations so that things like textures can be naturally supported without putting any restrictions on which properties or how they should be handled. I also created a way to smoothly refine a manifold, allowing simple triangle meshes to approach the arbitrary precision of NURBS, which are the basis of most CAD software.

Performance

It's difficult to apply the big-O notion of performance to the mesh Boolean operation because of its very nature. Even if we assume that both input meshes have O(n) triangles, the resulting number of intersections (new vertices/triangles) could be anywhere from zero to O(n²) for sufficiently contrived geometry. However, to make the analysis somewhat meaningful for realistic cases, it is convenient to consider spheres, where the input's O(n) refers to partitioning the surface into n triangles of roughly equal edge lengths. If the two input spheres have a non-trivial overlap, then it is easy to see that there will be O(√n) intersections, since n refers to a two-dimensional surface, while the intersections refer to a one-dimensional curve.
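
To make that scaling argument explicit: n roughly uniform triangles tiling a surface of area A have edge length ℓ ~ √(A/n), so an intersection curve of fixed length L crosses

$$\frac{L}{\ell} \sim L\sqrt{\frac{n}{A}} = O(\sqrt{n})$$

triangles, each crossing producing a constant number of new vertices and triangles.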

Smith's algorithm would still be O(n²), since each triangle of the first mesh would need to be checked for intersection with each triangle from the second mesh. By using a Bounding Volume Hierarchy (BVH) broad phase, this is reduced to O(n log n) work, and better yet it is parallelized across a GPU. The rest of the computation ideally involves just the intersections, and so is only O(√n), though in practice this often takes longer than the broad phase. Largely this is because the triangulation and degenerate removal steps have not been parallelized.

This code base makes extensive use of NVidia's Thrust library, which is convenient for the vector-based parallelization used here and also allows the code to be compiled for either CPU or GPU. This is especially important since many computers don't have an NVidia GPU, for instance the free CI built into GitHub Actions. However, optimizing code for GPU parallelization also helps performance on CPU-based machines, as these practices aid in efficient cache usage and in pipelining operations. Now that C++20 parallel algorithms are available, it's on the road map to switch, since that's what the Thrust library led to anyway and it will simplify and probably speed up the code.

Overlaps

While manifoldness is the exact, topological part of our system, this is a geometry library and the geometric aspects are the important and tricky parts to solve. The whole point of building a mesh Boolean algorithm is to take care of geometric overlaps in a manifold way. If that is not necessary for your application, you can simply use something like CSG (Constructive Solid Geometry), which tends to just take care of rendering the result correctly, rather than actually calculating all of the intersections.

Given that the whole point of the mesh Boolean is to remove overlaps, why not make that part of the guarantee? The reason is better described by Julian Smith, but to paraphrase: it's hard. Limited machine precision means it is nearly impossible to get consistent geometric results. This is a problem not just for a Boolean algorithm, but even for simple linear operations like rotation. If we loosely define a marginal mesh as one which is close to self-overlapping, but not quite (by some arbitrary metric), then the rounding involved with rotating it by some angle can easily make it cross over to actually self-overlapping.

This mesh Boolean algorithm guarantees manifold output, only requiring that all input is also manifold. In the case of a self-overlapping input mesh, it is possible the triangulator will build output surfaces incorrectly. However, this algorithm is designed to safely handle marginal input meshes. To be precise about this, we need a definition of valid geometry, so that we can say geometrically-valid input will produce geometrically-valid output.

Definition of ε-valid

We define non-overlapping geometry in the usual fashion: taking the inputs as exact, every point in space that is not on the boundary has a winding number of either 0 or 1 (outside or inside, respectively). An ε-valid input is one for which there exists a perturbation of its vertices (each by a magnitude < ε) such that the perturbed mesh is non-overlapping.
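
Stated symbolically, a mesh with vertices $v_i$ is ε-valid if

$$\exists\,\{\delta_i\}\ \text{with}\ \lVert\delta_i\rVert < \varepsilon\ \forall i, \quad \text{such that the mesh with vertices } v_i + \delta_i \text{ is non-overlapping.}$$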

For convenience, when describing a manifold as self-overlapping, what we mean precisely is: not ε-valid. ε is known as the manifold's precision, which is generally a bound on its accrued numerical error rather than the precision of its floating-point representation. This definition applies equally to 3D manifolds and 2D polygons.

By this definition, the inputs and results are considered approximate, bounded by their respective precisions which are tracked internally. Therefore, any geometric differences within this precision are considered "in the noise" and are not considered to affect the geometric validity of the result. This is used extensively by the triangulation and degenerate removal steps to simplify the manifold and increase robustness of the algorithm.

This library's goal is to always produce ε-valid output from ε-valid input meshes. This cannot be mathematically proven in the way of the manifoldness guarantee, but any example to the contrary will be taken as a serious bug. The manifoldness guarantee still applies even when the input is not ε-valid, and for many cases the resulting geometry is as expected, but it cannot be relied upon. The key is that marginal geometry is processed correctly, as this is very common in practice.

Removing self-overlaps

Of course wouldn't it be great to remove the overlaps from a single mesh rather than just between two independent meshes? Yes it would, but Julian Smith describes why his Boolean algorithm cannot be applied to the single-mesh case. A sweep-plane algorithm should be capable of this; it won't be as parallelizable or simple, but it's still on my road map.

Polygon triangulation

This mesh Boolean algorithm takes as input triangle meshes, but in its natural form, returns a polygonal mesh. Since the purpose here is to cycle back into further Boolean calculations, this necessitates the triangulation of these polygonal faces. This has turned out to be perhaps the trickiest part of this whole Boolean algorithm, in part because it was largely glossed over in the original paper.

Prior art

Why so tricky? A number of polygon triangulators exist, in both open and closed source. I had hoped to use one, but determined that in fact none of them met my needs. Probably the most promising was FIST, but it does not maintain manifoldness because instead of respecting the input edge orientation, it determines the orientation itself based on how it finds the contours to nest within one another.

Also promising was Earcut, based on Dave Eberly's code and writeup, but the problem here is the handling of degenerate cases and self-intersections. The Boolean is robust to, and tends to generate, degenerate shapes (e.g. zero-length edges, coincident edges/faces, etc.). Therefore it is critical that the triangulator take degenerate (ε-valid) polygons (and these are not necessarily simple polygons either; any depth of nested contours is possible) and output an ε-valid triangulation. This is a problem I have not seen any polygon triangulator attempt to solve. They generally throw up their hands when self-intersection is detected, which is a toss-up in the case of degeneracy.

My solution

The triangulators linked above work by ear-clipping, which is a method of triangulating a simple polygon that is fairly straight-forward. However, in order to handle non-simple polygons, they must first keyhole the contours together to form a simple polygon. Unfortunately this keyholing step is where most of the code complexity comes in, and tends to be the source of most of the bugs. My first polygon triangulator used monotone subdivision instead, which is a sweep-line algorithm. It has the advantage of being O(n log n) instead of O(n²) like ear-clipping. Unfortunately, it became very complicated to make a sweep-line algorithm handle the uncertainty of ε-valid input, since you can no longer rely on a consistent sort ordering.

Our current triangulator is based on Dave Eberly's Ear Clipping algorithm, but adjusted to handle ε-valid input by making small loops that only run in degenerate cases. If a right-left check is too close to call, the algorithm walks the neighbors until the check becomes certain. This algorithm is much simpler than monotone subdivision and therefore more robust. Sorting the ears improves the quality of the triangulation, and the algorithm naturally terminates with a manifold result even when the input is ε-invalid. The only downside is the O(n²) complexity, which is rarely a problem, but a big one when it is. We're working to add our BVH collider to fix this for large polygons.
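
For reference, this is what bare ear clipping looks like on a single simple, counter-clockwise polygon, with none of the ε-handling, ear sorting, or nested-contour support described above (an illustrative sketch only):

```cpp
#include <array>
#include <vector>

struct Vec2 { double x, y; };

// Signed area of triangle (o, a, b): positive if a->b turns left around o.
static double Cross(Vec2 o, Vec2 a, Vec2 b) {
  return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

static bool InTriangle(Vec2 p, Vec2 a, Vec2 b, Vec2 c) {
  return Cross(a, b, p) >= 0 && Cross(b, c, p) >= 0 && Cross(c, a, p) >= 0;
}

// O(n^2) ear clipping of a simple CCW polygon into triangles of vertex indices.
std::vector<std::array<int, 3>> EarClip(const std::vector<Vec2>& poly) {
  std::vector<int> idx(poly.size());
  for (size_t i = 0; i < idx.size(); ++i) idx[i] = int(i);
  std::vector<std::array<int, 3>> tris;
  while (idx.size() > 3) {
    for (size_t i = 0; i < idx.size(); ++i) {
      const int a = idx[(i + idx.size() - 1) % idx.size()];
      const int b = idx[i];
      const int c = idx[(i + 1) % idx.size()];
      if (Cross(poly[a], poly[b], poly[c]) <= 0) continue;  // reflex vertex
      bool ear = true;  // an ear contains no other polygon vertex
      for (int j : idx)
        if (j != a && j != b && j != c &&
            InTriangle(poly[j], poly[a], poly[b], poly[c])) { ear = false; break; }
      if (!ear) continue;
      tris.push_back({a, b, c});      // clip the ear
      idx.erase(idx.begin() + i);
      break;
    }
  }
  tris.push_back({idx[0], idx[1], idx[2]});
  return tris;
}
```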

Degenerate removal

Mesh simplification by edge collapse and edge swap is pretty well-trod territory, and considering I'm only operating on degenerate situations rather than doing general simplification, it seemed that it should be fairly straightforward. Still, it was educational for me since there are a lot of tricky edge collapse situations that break manifoldness, generally by making four or more triangles meet at one edge. I handle these by duplicating the vertices and splitting the edge. An interesting effect of this strategy is that it only reduces the genus (number of handles) of the mesh, which means it helps clean up the topology of the mesh as well as the degenerate triangles. Note that it can also split the mesh into separate manifolds, as increasing the number of closed meshes also decreases the genus. The only way the genus increases is when a mesh collapses so far that it is completely removed, which I also consider topological cleanup.

Topological cleanup is a key step in this algorithm because the mesh Boolean operates exactly using symbolic perturbation, but on rounded floating-point input, which means it tends to create not just a lot of degenerate triangles, but also non-Euclidean topology in geometric features thinner than ε. To avoid confusing other computational geometry algorithms down the line, it's important to remove these invisible bits of cruft.

I stress-test degenerate removal (and triangulation and the Boolean itself) by creating a Menger Sponge through three difference operations (cutting the holes through a cube in the X, Y, and Z directions). At the fourth order, this object can be shown to have a genus of 26,433. And since nearly every intersection is degenerate, even though the degenerate removal step in this case consumes a significant fraction of the run time, it still runs much faster than without it because it decreases the number of triangles being processed by the other steps by nearly an order of magnitude.
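
A first-order version of that test looks roughly like this (a sketch; the real sample recurses this cut pattern at 1/3 scale to reach higher orders):

```cpp
#include "manifold.h"
using namespace manifold;

// First-order Menger sponge: cut a square hole through the cube along each
// axis with three difference operations. Sizes are illustrative.
Manifold MengerOrder1() {
  Manifold cube = Manifold::Cube({3, 3, 3}, true);  // centered at the origin
  Manifold hole = Manifold::Cube({1, 1, 4}, true);  // Z-aligned square prism
  Manifold result = cube - hole;                    // Z-direction hole
  result = result - hole.Rotate(90, 0, 0);          // Y-direction hole
  result = result - hole.Rotate(0, 90, 0);          // X-direction hole
  return result;
}
```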

It turns out I cannot guarantee that all degenerate triangles (height < ε) are removed, since there are cases where the only way to do so would involve perturbing the shape of the object by more than ε. Therefore you cannot expect the output to be completely free of degenerate triangles, but rather to have the vast majority removed so as to increase the speed of many chained operations.

Vertex properties

Vertices and properties are not 1:1, since along a Boolean intersection curve each vertex will have different properties on each side. Graphics libraries are generally not concerned with manifoldness, so they keep the properties 1:1 with the vertices and situations like this are handled by duplicating these vertices and effectively carving up the mesh into disjoint primitives. However, this operation cannot in general be undone because merging the nearby vertices would be a geometric operation and floating-point error means it would not necessarily come back to a manifold topological result.

To address this gap and allow for easy interoperability with graphics libraries, we developed the MeshGL input/output structure. This contains flat buffers designed to drop straight into GL commands, with vertices duplicated as necessary so that all vertex properties are interleaved into a single buffer. However, it contains additional merge vectors which define the vertex duplication topologically, so that the manifold can be reliably and losslessly reproduced.
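
The shape of that structure is roughly as follows (a sketch of the main fields under my reading of the library; treat the real header as authoritative):

```cpp
#include <cstdint>
#include <vector>

// Sketch of the MeshGL idea: flat GL-ready buffers plus merge vectors that
// record, topologically, which duplicated vertices are really the same.
struct MeshGLSketch {
  uint32_t numProp = 3;               // floats per vertex (>= 3 for position)
  std::vector<float> vertProperties;  // interleaved: numProp floats per vertex
  std::vector<uint32_t> triVerts;     // 3 vertex indices per triangle
  // Vertex mergeFromVert[i] is a duplicate of mergeToVert[i]; applying these
  // merges losslessly recovers the manifold topology.
  std::vector<uint32_t> mergeFromVert;
  std::vector<uint32_t> mergeToVert;
};
```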

glTF interop

The above merge vectors have been added to a published glTF manifold extension, allowing glTF to become an efficient, lossless transmission format suitable for the input and output of this library. Within our repository you can also find a small JS library for glTF I/O, suitable for use with our WASM bindings. This is used by ManifoldCAD.org to generate glTF with the manifold extension.

Of course most existing glTF models do not use the new manifold extension, and many do not even attempt to represent a solid mesh. However, some glTFs look manifold and are only stopped by the requirement to duplicate vertices along property boundaries. For these we have the MeshGL.Merge method, which fills in the merge vectors by matching up all vertices along open edges within precision of each other. There is no guarantee this will produce a manifold mesh, but if it does, the result can be re-exported as a glTF with the manifold extension, ensuring its integrity from then on. You can try this live in our glTF example, which converts all the meshes it can to manifolds and exports them using the manifold extension.

Keeping track of relationships

Manifold has no understanding of vertex properties, only that there are n channels of floats that can be linearly interpolated over each triangle. It is up to the user to keep track of their meaning, for instance by keeping a mapping of channel indices to semantics such as UV coordinates or vertex colors. Manifold makes no assumption that different objects have consistent property channels, or even number of channels. Instead, it keeps the portions of a manifold from separate objects separate, so that there is no chance of mixing and matching properties that do not relate.

Each Manifold that is constructed from scratch gets a unique, sequential originalID. Any Manifold built from multiple Manifolds, e.g. by Boolean operations, will have each triangle refer back to the input face it is a subset of by its originalID and faceID. The faceID is a triangle index of the input mesh. If that triangle was coplanar with others (including the vertex properties), then they are considered a face and the index of the largest area triangle of that face is used as the faceID for all of those triangles.

The originalID is useful for reapplying materials, since objects with different materials may have been combined using Boolean operations. Internally, each copy of a given input is tracked separately to ensure their triangles do not get incorrectly merged together. In the output MeshGL this is visible as the separate triangle runs - each run is from exactly one input mesh instance. These instances are sorted by originalID so that if the distinction between copies is unimportant, you can easily pull an overall run of consistent material instead, e.g. for a draw call.

You can also use ReserveIDs to generate an input MeshGL with multiple triangle runs that will be kept through operations for reference later. Alternately, reference information can be removed by calling AsOriginal, which recalculates coplanar faces and collapses unnecessary edges. This can be useful e.g. if a new material is being applied uniformly to this object, or if no vertex properties are present and one wants to collapse all unnecessary edges that may have arisen from Boolean operations between coplanar faces.
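
A sketch of how these pieces fit together (method and member names as I understand the current API; check the headers for exact signatures):

```cpp
#include "manifold.h"
using namespace manifold;

// Reapplying per-input materials after a Boolean operation.
void MaterialTracking() {
  Manifold red = Manifold::Cube({1, 1, 1});
  Manifold blue = Manifold::Sphere(0.6);
  const int redID = red.OriginalID();    // unique, sequential per original
  const int blueID = blue.OriginalID();

  MeshGL out = (red - blue).GetMeshGL();
  // Each triangle run of `out` came from exactly one input instance:
  // out.runOriginalID[r] equals redID or blueID, so the correct material can
  // be bound per run (e.g. one draw call per run).
}
```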

Smooth mesh interpolation

Most commonly, the graphics world tends to work in faceted triangle meshes. However, both artists and CAD engineers tend to want to describe smooth shapes with arbitrary precision from relatively few control points. Most of the methods for this are based on quads of some kind: animators often use subdivision surfaces, while CAD tends to use NURBS patches (which are quads in the sense that their UV-map extents form a rectangle). The trouble is that when intersected they are no longer quads, and while any polygon can be triangulated, not everything can be divided into quads again. Even if it could, it would not necessarily be able to represent the same surface as its parent. For this reason, CAD software tends to avoid calculating explicit geometric intersections and instead retains the original NURBS quads and keeps their intersections implicit. This works fine for graphics purposes and pushes off the numerical difficulties of the Boolean operation to the analysis programs that take CAD data as input.

Goals of mesh interpolation

The primary goal is to attain G1 continuity across the mesh (continuous tangents), since the manifold mesh already yields G0 continuity. However, sharp creases should also be possible, as well as smooth transitions between an edge being smooth and sharp. Also, a sharp crease should be able to span multiple pairs of triangles and the curve of this crease should itself be smooth. There should be a function mapping the barycentric coordinates of a given triangle up to its interpolated surface and this function should be local: using only data associated with this triangle and its nearest neighbors. And finally, the function should be a true interpolant, meaning the surface contains the original vertices of the mesh.

G2 continuity (continuous curvature) is specifically a non-goal, which is somewhat unusual. My reasoning is that I would prefer (aesthetically) to focus on minimizing maximum curvature than ensuring the derivative of curvature is finite, which is the only guarantee G2 gives. As an example, a box with edges and corners rounded to circular arcs is not G2 continuous, but it is a popular and pleasing shape. Likewise, I would say minimal surfaces are a good point to aim toward in terms of what makes a smooth interpolant pleasing. They are defined as having zero mean curvature everywhere, which is not particularly practical for manifold solids, but I would say a close corollary would be a surface which minimizes the maximum absolute mean curvature given constraints on some positions and normals. A sphere is a simple example of such a surface.

Edge Beziers

I decided to take a transfinite approach to triangle interpolation, specifically using a side-vertex method. However, instead of basing the function directly on vertex normals, I instead specify two extra control points on each edge that interpolate the edge as a weighted cubic Bezier curve. The tilt of the tangent plane along this curve is defined by the two tangent vectors along the triangle's other two sides. This is done in such a way as to guarantee that if the nearest control points of two adjacent triangles are coplanar with their vertex and likewise for the other end of the edge, then the surface will be G1 continuous across this edge.

To make a surface G1 everywhere, simply ensure all the control points for each vertex are on its tangent plane. Creased edges appear where two different tangent planes meet; to ensure the crease curve itself is G1, simply ensure its control points are collinear with their vertex. By using weighted Bezier control points we can create circular arcs, which are good for minimizing maximum curvature. The triangular patch certainly doesn't make any guarantees about minimizing anything, but it is generally well-behaved with respect to the goals stated above, and is reasonably efficient to calculate.
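
For reference, a weighted (rational) cubic Bezier with control points $b_i$ and weights $w_i$ has the standard form

$$\mathbf{B}(t) = \frac{\sum_{i=0}^{3} w_i\, b_i \binom{3}{i}(1-t)^{3-i} t^{i}}{\sum_{i=0}^{3} w_i \binom{3}{i}(1-t)^{3-i} t^{i}},$$

and with suitable weights such a curve can represent a circular arc exactly, which is what makes the weighted form attractive here.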

Smoothing API

I consider the edge control points to be the real way to define the surface, as they are very flexible and the function to turn them into a surface is not particularly arbitrary. If I were to try to standardize mesh interpolation into a format, I would choose this type of representation. However, it is not always easy to choose the positions for all these control points, and doing so tends to be full of arbitrary decisions. Therefore I've included a Smooth constructor that calculates vertex normals from the geometry and then builds control points in their tangent planes, converging to circular arcs for symmetric edges.

Additionally, you can provide a sparse vector of sharpened half-edges, where you specify a smoothness between 0 and 1 (1 is the default) for each side of an edge. The edge is sharpened by moving the off-edge control points closer to their vertex, which tends to increase curvature along the specified edge. A crease is only formed at a smoothness of zero, when the cubic Bezier degenerates into a quadratic Bezier. Linked sharpened edges are also detected and their control points made collinear to ensure the edge curve is G1 continuous. Currently you cannot specify your own normals, but this could be easily added.

This higher-level API is simple to use, but certainly not optimal for all desired surfaces. It tends to work best when the mesh has enough resolution that the edges do not need to be highly curved and the triangles have fairly consistent edge lengths. It is ideal for sampling other surfaces (NURBS, marching cubes, level sets, etc.) and getting a high-quality approximation without too many vertices. I would love to see other approaches to generating these control points: anything from deep optimizations to manual design controls are possible.
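
A sketch of how this looks in code (the Smoothness fields and Smooth signature are my reading of the API; check the headers):

```cpp
#include <vector>
#include "manifold.h"
using namespace manifold;

// Build a smooth surface from a faceted MeshGL, fully creasing one half-edge.
Manifold SmoothExample(const MeshGL& faceted) {
  // 7 is an arbitrary half-edge index for illustration; smoothness 0 = crease.
  std::vector<Smoothness> sharpened = {{7, 0.0}};
  return Manifold::Smooth(faceted, sharpened);
}
```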

Use in the Boolean

The mesh Boolean operates only on flat triangles, so it ignores control points and interpolation. In order to get an intersection between curved meshes, the mesh must first be refined to the level of precision desired before the Boolean is run on this larger number of flat triangles. Currently the API has just a single function Refine(n) that splits every edge into n pieces and fits the new triangles onto the interpolated surface. Eventually I'd like to have functions that refine each edge by different amounts, to target length, precision, etc. It could even be possible for the Boolean to refine on the fly as its broad phase finds potential intersections, thus only refining where needed.
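
So a curved-curved intersection currently looks something like this (a sketch; the refinement level of 20 is an arbitrary illustrative choice):

```cpp
#include "manifold.h"
using namespace manifold;

// Refine the interpolated surfaces into flat triangles, then intersect.
Manifold CurvedIntersection(const Manifold& smoothA, const Manifold& smoothB) {
  return smoothA.Refine(20) ^ smoothB.Refine(20);
}
```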

Purpose of this library

So, why have I built this? Partly it's because I have become fascinated by computational geometry, and frustrated by the lack of performance and reliability even in extremely expensive state-of-the-art CAD systems. I saw an industry where most of the CAD kernels have their roots in the same 1970s Cambridge research lab, yet are so entrenched that disruption is nearly impossible. I decided open source was the only thing that stood a chance against not-invented-here and might attract enough collaborators to eventually become something useful.

This problem convinced me to become a software engineer (my PhD is in aerospace) because those were the tools I needed to solve it. I spent years working on it on and off by myself because I honestly didn't know if what I was promising was possible, and the world already has plenty of unreliable mesh Boolean libraries. I've finally released it because I now have confidence that it really is something new.

So, what about you? Do you have an idea for a new CAD application, analysis or optimization tool? Are you looking for a high-performance and reliable kernel? Or do you want to contribute to this kernel? This is really just an MVP; there is a whole world of things to try, of which my road map is just a teaser. I welcome you! I intentionally put this under an open license - I don't have much desire to pursue commercialization as my last startup experience left me with a bad taste in my mouth, but I would be overjoyed to help others who want to try. I'll be happy with recognition.