
Clash detection features #4374

Merged
merged 6 commits into from
Mar 26, 2024

Conversation

Moult
Contributor

@Moult Moult commented Feb 29, 2024

No description provided.

@Moult
Contributor Author

Moult commented Feb 29, 2024

This supersedes #4282 - after purging all the irrelevant chunking and Python stuff to make it easier to review.

@aothms
Member

aothms commented Mar 1, 2024

I'm trying to get the line count down on IfcGeomTree.h because I still get lost in the huge set of added lines. Do you think sharing the code of the bvh clashing and iteration makes sense? Have a look at what I committed to test_intersection_2(), which should be equivalent to test_intersection(). I think we can apply the same to the others. I still hope we can reduce the length of the function bodies even further and factor out shared logic into the util module.

@Moult
Contributor Author

Moult commented Mar 2, 2024

It's a good idea to move the generic functions out to bvh_utils.h :)

Is test_intersection_2 equivalent? There's this logic which seems to be missing at the end of all the BVH tests?

                if (protrusion > tolerance) {
                    return {0, tA, tB, protrusion, protrusion_point, surface_point};
                }

                if (pierce > tolerance) {
                    return {1, tA, tB, pierce, pierce_point1, pierce_point2};
                }

                return {-1, tA, tB, 0, {0, 0, 0}, {0, 0, 0}};

I must admit I would personally find the code more confusing if it indirected through a process_bvh_intersections function, especially because there would then also be a corresponding process_bvh_collisions and process_bvh_clearance. But if it is functionally equivalent and the early returns still work then go for it :)
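For illustration, a shared helper for that tail, were it factored out, might look like the minimal sketch below. The clash struct, field names, and std::optional return type are assumptions for a self-contained example; the original also returns tA/tB, which are omitted here.

```cpp
#include <array>
#include <cassert>
#include <optional>

using Pt = std::array<double, 3>;

// Illustrative stand-in for the real clash record; field names are assumptions.
struct clash {
    int type;        // 0 = protrusion, 1 = pierce
    double distance;
    Pt p1, p2;
};

// Hypothetical helper factoring out the shared tail of the BVH tests quoted
// above; an empty optional replaces the original {-1, ...} sentinel.
std::optional<clash> finalize_clash(double tolerance,
        double protrusion, const Pt& protrusion_point, const Pt& surface_point,
        double pierce, const Pt& pierce_point1, const Pt& pierce_point2) {
    if (protrusion > tolerance) {
        return clash{0, protrusion, protrusion_point, surface_point};
    }
    if (pierce > tolerance) {
        return clash{1, pierce, pierce_point1, pierce_point2};
    }
    return std::nullopt;
}
```

Each BVH test would then end in a single call to this helper, keeping the early-return behaviour intact.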

@Moult
Contributor Author

Moult commented Mar 3, 2024

BTW is the reason you deleted the write_h5 because you think it should be done in Python instead? In that case should C++ expose a buffer of AABB/OBB/BVH data too?

@aothms
Member

aothms commented Mar 3, 2024

BTW is the reason you deleted the write_h5 because you think it should be done in Python instead

Well, we could, but more so because it should be a separate file/serializer.

I think we should regardless offer a better interface to the tree though, to be able to traverse it from client-side code, both py as well as cpp.

There's this logic which seems to be missing at the end of all the BVH tests?

Ok, right, I was in a hurry and didn't complete it properly. Thanks for the heads-up.

function, especially because there would then also be a corresponding ...

No, that's the thing... I thought this logic was shared between the various clash functions; maybe there are one or more arguments to the lambda that need to be added or that go unused in some of the prototypes. Otherwise consider it as me trying to understand the shared and distinct bits of logic; writing it as such is the quickest way for me to understand.

@Moult
Contributor Author

Moult commented Mar 3, 2024 via email

@aothms
Member

aothms commented Mar 5, 2024

I continued this exercise as it's very helpful for me to try and understand the logic: 71ad98b. They looked like three big blobs to me at first, but now I understand at least conceptually how they operate. The deviating part that prevents rewriting all three as a clean triangle-to-triangle predicate, sped up by the BVH, is the "pierce shape" routine.

The logic there is a bit funky, because protrusions have priority over pierces, but pierces are executed first because if there are no points contained in B the triangle cannot be protruding B. And most of this runs at triangle level, but involves individual verts, which requires caching because verts are shared by triangles. But you also don't want to do the containment test on all vertices first because you can still encounter a protruding vertex causing an early return.

I wonder though about the following. Wouldn't it be possible that in quite a few cases you keep re-evaluating the same vertex for protrusion distance? You cache whether it's inside B, but you don't cache the protrusion distance you found for it. So, let's say in the case below the red vertex is inside the cube but not by an amount exceeding the tolerance ratio. You would keep reevaluating that point for every triangle that uses that vertex. Or am I still not understanding things correctly?

[image]
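A per-vertex cache of the computed distance would avoid the repeated evaluation, roughly along the lines of the sketch below. This is a sketch only: as the reply further down notes, the distance can depend on which triangle the vertex is evaluated from, so a real cache would need a richer key. All names here are made up.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of caching the protrusion distance per vertex so that triangles
// sharing a vertex don't recompute it. v_id indexes a contiguous vertex
// range; NaN marks "not yet computed".
class protrusion_cache {
    std::vector<double> distances_;
public:
    explicit protrusion_cache(std::size_t num_verts)
        : distances_(num_verts, std::nan("")) {}

    // compute is only invoked the first time a given v_id is requested.
    template <typename F>
    double get(int v_id, F&& compute) {
        if (std::isnan(distances_[v_id])) {
            distances_[v_id] = compute();  // evaluated once per vertex
        }
        return distances_[v_id];
    }
};
```

The same flat-vector layout would also work for the containment caches, since v_id is contiguous.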

I wonder whether the code isn't easier to follow and more performant if we execute the protrusion first and rewrite it as a vertex-to-triangle predicate, because that is what it is. I guess pierces are also rather rare, but shapes touching each other with some tolerance ratio is very common.

For pierces I would try to see if it makes sense to rewrite them based on trianglesIntersect() and checking whether one triangle in shape A intersects with two or more triangles in shape B. I don't think that with the BVH acceleration it would have widely different performance characteristics than the current approach based on counting the number of vertices outside, but it would allow us to get rid of a bunch of code paths.

There's something else though that maybe you can clarify for me, where maybe it makes more sense in general to count the number of intersections. Consider the situation below: a cube with an intersecting triangle, where the intersection between the plane of the triangle and the cube is drawn in dashed lines. The triangle is not piercing, because there is one vertex inside, but the protrusion result is minimal or not even beyond the tolerance. Could it be that nothing is reported at all in this case? If we instead take all intersections for the red triangle and calculate the length between them, we do get something. Alternatively, maybe you can consider also checking for pierces when there is one vertex contained.

[image]

In short, I find the coupling between calculating protrusion and piercing on the one hand ingenious, but it's hard to gather intuition on whether it's worth it in terms of performance and maintainability.

Something else I noticed. You do the triangle-ray intersection along the triangle normal, but that is a bit of a special case I assume. If you take P' = P - (P - v0).dot(norm) * norm, you have the projection P' of P on the triangle plane. You can take the barycentric coordinates of P' to see if it's contained [0]. The distance (P - v0).dot(norm) you have already calculated then. Things are already pretty optimized, so it's not going to make a difference of night and day, but it's just something I noticed. Especially since we already store the normal, so we don't need to recalculate it.

[0] https://gamedev.stackexchange.com/a/23745
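The projection-plus-barycentric test described above could look like the following self-contained sketch, using plain arrays instead of gp_Pnt/gp_Vec and assuming n is the unit triangle normal. Names are illustrative, not from the PR.

```cpp
#include <array>
#include <cassert>

using Vec3 = std::array<double, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a[0] - b[0], a[1] - b[1], a[2] - b[2]};
}
static double dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

struct plane_projection {
    double distance;  // signed distance (P - v0).dot(n)
    bool inside;      // whether the projection P' falls within the triangle
};

// Project P onto the plane of triangle (v0, v1, v2) with unit normal n and
// test containment of the projection P' via barycentric coordinates.
static plane_projection project_onto_triangle(
        const Vec3& P, const Vec3& v0, const Vec3& v1, const Vec3& v2, const Vec3& n) {
    double d = dot(sub(P, v0), n);
    Vec3 Pp = {P[0] - d * n[0], P[1] - d * n[1], P[2] - d * n[2]};
    Vec3 e0 = sub(v1, v0), e1 = sub(v2, v0), e2 = sub(Pp, v0);
    double d00 = dot(e0, e0), d01 = dot(e0, e1), d11 = dot(e1, e1);
    double d20 = dot(e2, e0), d21 = dot(e2, e1);
    double denom = d00 * d11 - d01 * d01;
    double v = (d11 * d20 - d01 * d21) / denom;
    double w = (d00 * d21 - d01 * d20) / denom;
    return {d, v >= 0.0 && w >= 0.0 && (1.0 - v - w) >= 0.0};
}
```

The signed distance comes out of the same dot product the containment test needs, which is the saving being pointed at.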

Some things you/I still might want to try:

  • points_in_b_cache + points_not_in_b_cache: try changing to std::vector<uint8_t> with 0 meaning untested, 1 meaning inside, 2 meaning outside. The int used as key, v_id, is a contiguous range.
  • Instead of:
    std::unordered_map<T, bool> is_manifold_;
    std::unordered_map<T, std::vector<std::array<int, 3>>> tris_;
    std::unordered_map<T, std::vector<gp_Pnt>> verts_;
    std::unordered_map<T, std::vector<gp_Vec>> normals_;
    
    do
      struct shape_data {
          bool is_manifold;
          std::vector<std::array<int, 3>> tris;
          std::vector<gp_Pnt> verts;
          std::vector<gp_Vec> normals;
      };
      std::unordered_map<T, shape_data> 
    
    Poor CPU must be getting tired of looking up the same key in different maps.
  • instead of storing
    double protrusion = -std::numeric_limits<double>::infinity();
    std::array<double, 3> protrusion_point;
    std::array<double, 3> surface_point;
    
    essentially this is already more or less the data structure of clash. So what we can do is to add a comparison operator
      bool operator<(const clash& other) const {
          return distance < other.distance;
      }
    
    and just assign clashes if (current_clash < global_clash) global_clash = current_clash.
  • As you see, I changed clash to boost::optional<clash> so that we don't need to rely on -1.
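Combining a few of these points, a self-contained sketch could look as follows. std::optional stands in for boost::optional, plain arrays for gp_Pnt/gp_Vec, field names are illustrative, and whether the smaller or the larger clash should win depends on the predicate (largest protrusion vs. smallest clearance).

```cpp
#include <array>
#include <cassert>
#include <cstdint>
#include <optional>
#include <unordered_map>
#include <vector>

using Vec3 = std::array<double, 3>;

// One map entry per shape instead of four parallel maps keyed on the same T.
struct shape_data {
    bool is_manifold;
    std::vector<std::array<int, 3>> tris;
    std::vector<Vec3> verts;
    std::vector<Vec3> normals;
};

// Comparable clash record, so the running best can be tracked by assignment
// instead of separate distance/point variables.
struct clash {
    double distance;
    Vec3 p1, p2;
    bool operator<(const clash& other) const { return distance < other.distance; }
};

// Tri-state containment cache replacing the two map-based caches; since
// v_id is a contiguous range, a flat vector suffices.
enum containment : std::uint8_t { untested = 0, inside = 1, outside = 2 };
```

An empty std::optional<clash> then replaces the -1 sentinel: `if (!global_clash || current_clash < *global_clash) global_clash = current_clash;`.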

Let me know what you think based on all of this. It's really exciting that we have this now. I don't want to slow things down.

@Moult
Contributor Author

Moult commented Mar 6, 2024

Wouldn't it be possible that in quite a few cases you keep re-evaluating the same vertex for protrusion distance?

Yes, this is true. We do re-evaluate the same vertex again and again for protrusion distance. This is wasteful, however there is a slight trick to it: when we evaluate the vertex, we only compare to certain faces (which differ based on the normal of the triangle to which the vertex belongs), and we only compare to triangles in that BVH leaf (depending on the triangle the vertex is evaluated from, it may consider different BVH leaves).

In short, it is wasteful, it could be cached further, but there is more to consider when writing the caching logic.

I guess pierces are also rather rare

I don't think so, e.g. a pipe going completely through a column, or a pipe going completely through a beam or duct, etc. From test models I recall finding a surprisingly significant number of meaningful pierces.

I wonder whether the code isn't easier to follow and more performant if ...

Completely agree, the current entanglement of piercing and protrusion is confusing :(

No clue about performance though. The code is what it is because it ... uh, evolved that way :) I started writing protrusions first then bolted on piercing.

The triangle is not piercing, because there is one vertex inside, but the protrusion result is minimal or not even beyond the tolerance. Could it be that nothing is reported at all in this case?

In this particular example the protrusion will be measured to the nearest black surface triangle that does not have the same approximate normal as the red triangle. So in this scenario I believe there will be a non-minimal protrusion result above the tolerance.

But you're right, when somehow there is a protrusion > 0 && protrusion < tolerance it'll cause pierce checks to be skipped which is wrong. Pierce checks should only be skipped if protrusion > tolerance (this is arguable, some people might argue that pierce checks always have priority over protrusion but for now I find the opposite to give more intuitive results - later we can make this configurable). So this predicate needs to change to if (protrusion <= tolerance && pierce != max_protrusion)

In short, I find the coupling between calculating protrusion and piercing on the one hand ingenious, but it's hard to gather intuition on whether it's worth it in terms of performance and maintainability.

Is it OK to leave the messy code as is, just temporarily, as a baseline to measure both performance and correctness against? I completely agree with all your suggestions to improve it; I'm just scared that refactoring might have unintended behaviour changes. E.g. for this weekend's release, have two code paths (the original mess plus the WIP cleaned version) and by the next release aim to completely delete the original mess?

Something else I noticed. You do the triangle-ray intersection along the triangle normal ...

I really struggle with this type of math. I trust you 100% :) go for it!

Some things you/I still might want to try

All those changes make perfect sense! Let me know if you'd like me to implement any of them, or if it's easier for you to make them.

In the meantime I can work on the Python IfcClash interface, and building a nicer UI in Blender to navigate clash results quickly and visualise clashing points and measured distances. This will make it easier to audit the results and check for breaking changes.

Should I still aim for a release this weekend?

@aothms
Member

aothms commented Mar 6, 2024

I think we can backspace my commits and save them for later.

What I think we can also add as later discussion points is how the topology affects outcomes. The top left is registered as a pierce, the bottom left as a protrusion. The top right is registered only as the cube piercing the triangle; the bottom right is in addition registered as the triangle piercing the cube. I think this is another reason to consider rewriting pierces as triangle intersection. A pierce could be seen as any path of an intersection, followed by zero or more vertices contained in the volume, followed by another intersection.

[image]

What we do need to work around is:

  • Reintroduce cleaning of triangulations
  • Compatibility with OCCT 7.5 (MapNodeArray)

Both of these can be solved by directly feeding the tree the triangulation element, but I'm a bit puzzled by the code we have there now and how this reordering of triangles should be handled. Do you have a proposal?

@Moult
Contributor Author

Moult commented Mar 6, 2024

I can have a shot at implementing it with triangle reordering, but the thing I don't know is how to get the aabb and obb from the triangulation element.

If you show me how to do that part I can try to write the rest of the function.

@aothms
Member

aothms commented Mar 6, 2024

I'm working on porting the logic over here:

vs = numpy.array(elem.geometry.verts).reshape((-1, 3))
fs = numpy.array(elem.geometry.faces).reshape((-1, 3))

def _():
    for f in fs:
        p, q, r = vs[f]
        pq = q - p
        pr = r - p
        pq /= numpy.linalg.norm(pq)
        pr /= numpy.linalg.norm(pr)
        pqr = numpy.cross(pq, pr)
        pqr /= numpy.linalg.norm(pqr)
        yield pqr

tri_norms = numpy.array(list(_()))

def _():
    for f in fs:
        p, q, r = vs[f]
        pq = q - p
        pr = r - p
        pqr = numpy.cross(pq, pr)
        yield numpy.linalg.norm(pqr) / 2.

tri_areas = numpy.array(list(_()))

_, inv, cnts = numpy.unique(numpy.int_(tri_norms * 1000), return_counts=True, return_inverse=True, axis=0)
di = utils.make_default(sorted((j, i) for i, j in enumerate(inv)))
summed_area = [v[1] for v in sorted((k, sum(tri_areas[v])) for k, v in di.items())]
sorted_summed_areas = numpy.argsort(summed_area)
V = numpy.average(tri_norms[di[sorted_summed_areas[-1]]], axis=0)
candidates = []
for i in range(1, min(10, len(cnts))):
    ref = numpy.average(tri_norms[di[sorted_summed_areas[-i]]], axis=0)
    candidates.append((abs(ref @ V), ref))
if not candidates:
    refs = [(0, 0, 1), (1, 0, 0)]
    for ref in refs:
        candidates.append((abs(ref @ V), ref))
ref = min(candidates, key=operator.itemgetter(0))[1]
Y = numpy.cross(V, ref)
X = numpy.cross(V, Y)
M = numpy.array((X, -Y, V))
Mi = numpy.linalg.inv(M)
vsi = numpy.array([v @ Mi for v in vs])
vsimi = vsi.min(axis=0)
vsima = vsi.max(axis=0)

I only have a rough draft; I hope to be able to continue later today. My attempt uses the most prominent face directions to construct the reference frame. OCCT uses PCA. My naive guess/hope is that my approach actually works better for typical orthogonal elements with few vertices but large faces.

void add_triangulation_element(IfcGeom::TriangulationElement* elem) {

    Bnd_Box B;
    auto& trsf = elem->transformation().data();
    auto& vs = elem->geometry().verts();
    for (size_t i = 0; i < vs.size(); i += 3) {
        gp_Pnt p(vs[i + 0], vs[i + 1], vs[i + 2]);
        B.Add(p.Transformed(trsf));
    }

    {
        auto& fs = elem->geometry().faces();
        std::unordered_map<std::tuple<int, int, int>, std::vector<size_t>> quantized_normal_counts;

        std::vector<double> tri_areas;
        std::vector<gp_XYZ> tri_norms;
        for (size_t i = 0; i < fs.size(); i += 3) {
            gp_Pnt p(vs[3 * fs[i + 0] + 0], vs[3 * fs[i + 0] + 1], vs[3 * fs[i + 0] + 2]);
            gp_Pnt q(vs[3 * fs[i + 1] + 0], vs[3 * fs[i + 1] + 1], vs[3 * fs[i + 1] + 2]);
            gp_Pnt r(vs[3 * fs[i + 2] + 0], vs[3 * fs[i + 2] + 1], vs[3 * fs[i + 2] + 2]);
            auto cross = (q.XYZ() - p.XYZ()).Crossed(r.XYZ() - p.XYZ());
            auto mag = cross.Modulus();
            tri_areas.push_back(mag / 2.);
            cross /= mag;
            tri_norms.push_back(cross);
            auto quantized = std::make_tuple(
                static_cast<int>(cross.X() * 1000),
                static_cast<int>(cross.Y() * 1000),
                static_cast<int>(cross.Z() * 1000)
            );
            quantized_normal_counts[quantized].push_back(i / 3);
        }

        std::vector<std::pair<double, decltype(quantized_normal_counts)::const_iterator>> area_to_it;

        for (auto it = quantized_normal_counts.cbegin(); it != quantized_normal_counts.cend(); ++it) {
            double area_sum = 0.;
            for (auto& i : it->second) {
                area_sum += tri_areas[i];
            }
            area_to_it.push_back({ area_sum, it });
        }

        std::sort(area_to_it.begin(), area_to_it.end(), [](auto& p1, auto& p2) { return p1.first < p2.first; });

        auto calc_average_norm = [&tri_norms](const std::vector<size_t>& idxs) {
            gp_XYZ normal_sum;
            for (auto& i : idxs) {
                normal_sum.Add(tri_norms[i]);
            }
            normal_sum.Normalize();
            return normal_sum;
        };

        auto Z = calc_average_norm(area_to_it.back().second->second);

        std::vector<std::pair<double, gp_XYZ>> candidates;

        size_t num_candidates = 0;
        for (auto it = ++area_to_it.rbegin(); it != area_to_it.rend() && num_candidates < 10; ++it, ++num_candidates) {
            auto ref = calc_average_norm(it->second->second);
            candidates.push_back({ std::abs(Z.Dot(ref)), ref });
        }

        if (candidates.empty()) {
            {
                gp_XYZ ref(0, 0, 1);
                candidates.push_back({ std::abs(Z.Dot(ref)), ref });
            }
            {
                gp_XYZ ref(1, 0, 0);
                candidates.push_back({ std::abs(Z.Dot(ref)), ref });
            }
        }

        auto X = std::min_element(candidates.begin(), candidates.end(), [](auto& p1, auto& p2) { return p1.first < p2.first; })->second;

        gp_Trsf trsf2;
        gp_Ax3 ax3(gp::Origin(), Z, X);
        trsf2.SetTransformation(gp::XOY(), ax3);

        Bnd_Box tmp;
                
        for (size_t i = 0; i < vs.size(); i += 3) {
            gp_Pnt p(vs[i + 0], vs[i + 1], vs[i + 2]);
            tmp.Add(p.Transformed(trsf2));
        }

        gp_Pnt cent = (tmp.CornerMax().XYZ() + tmp.CornerMin().XYZ()) / 2;
        auto halfsize = tmp.CornerMax().XYZ() - cent.XYZ();
        Bnd_OBB obb;

        obb.SetXComponent(ax3.XDirection(), halfsize.X());
        obb.SetYComponent(ax3.YDirection(), halfsize.Y());
        obb.SetZComponent(ax3.Direction(), halfsize.Z());
        obb.SetCenter(cent.Transformed(trsf2.Inverted()));
    }
}

@Moult
Contributor Author

Moult commented Mar 6, 2024

Cheers, I'll take a closer look tomorrow. BTW you probably already noticed, but OCCT has an "is_optimal" arg or something when creating an OBB; I have it disabled (enabled is far too slow).

@Moult
Contributor Author

Moult commented Mar 6, 2024

I just tried to compile this branch and got quite a few errors:

/home/dion/Projects/ifcopenshell/src/ifcgeom/../ifcgeom_schema_agnostic/IfcGeomTree.h:145:73: error: there are no arguments to ‘clash_bvh’ that depend on a template parameter, so a declaration of ‘clash_bvh’ must be available [-fpermissive]
  145 |                 std::unordered_map<int, std::vector<int>> bvh_clashes = clash_bvh(bvh_a, bvh_b, extend);
      |                                                                         ^~~~~~~~~
/home/dion/Projects/ifcopenshell/src/ifcgeom/../ifcgeom_schema_agnostic/IfcGeomTree.h:145:73: note: (if you use ‘-fpermissive’, G++ will accept your code, but allowing the use of an undeclared name is deprecated)
/home/dion/Projects/ifcopenshell/src/ifcgeom/../ifcgeom_schema_agnostic/IfcGeomTree.h: In lambda function:
/home/dion/Projects/ifcopenshell/src/ifcgeom/../ifcgeom_schema_agnostic/IfcGeomTree.h:268:46: error: ‘tolerance’ is not captured
  268 |                                 if (p_dist > tolerance && p_dist > pierce) {
      |                                              ^~~~~~~~~
/home/dion/Projects/ifcopenshell/src/ifcgeom/../ifcgeom_schema_agnostic/IfcGeomTree.h:204:17: note: the lambda has no capture-default
  204 |                 ](
      |                 ^
/home/dion/Projects/ifcopenshell/src/ifcgeom/../ifcgeom_schema_agnostic/IfcGeomTree.h:174:89: note: ‘double tolerance’ declared here
  174 |             boost::optional<clash> test_intersection_2(const T& tA, const T& tB, double tolerance, bool check_all = true) {
      |                                                                                  ~~~~~~~^~~~~~~~~
/home/dion/Projects/ifcopenshell/src/ifcgeom/../ifcgeom_schema_agnostic/IfcGeomTree.h:273:42: error: ‘check_all’ is not captured
  273 |                                     if (!check_all) {
      |                                          ^~~~~~~~~
... etc etc etc

Have you forgotten to commit something or is my environment more strict / different / or?

@Moult
Contributor Author

Moult commented Mar 6, 2024

OK here is a variant of add_element that feeds the tree a triangulation element directly. It's missing the portion of code which does AABB, OBB, and the max_protrusion (based on OBB).

I haven't tested it and I don't even know if it compiles (see above) but fingers crossed it works and that should allow us to remove the hacky commented out BRepTools::Clean(s);.

In case you're wondering, yes, it is wasteful to essentially have original_tris/normal and tris/normal. A better solution might be to subclass BVH_Triangulation and overload the function that does the swapping.

        void add_element(IfcGeom::TriangulationElement* elem) {
            // TODO: Add aabb and obb
            // TODO: add max_protrusion from obb

            const auto& t = elem->product();
            const std::vector<double>& matrix = elem->transformation().matrix().data();
            const std::vector<double>& elem_verts_local = elem->geometry().verts();
            const std::vector<int>& elem_faces = elem->geometry().faces();
            std::vector<double> elem_verts;
            apply_matrix_to_flat_verts(elem_verts_local, matrix, elem_verts);

            int original_tris_index = 0;
            std::vector<std::array<int, 3>> original_tris;
            std::vector<gp_Pnt> verts;
            std::vector<gp_Vec> original_normals;

            // Attempt to copy exactly what BRepExtrema_TriangleSet is doing under the hood.
            const auto builder = new BVH_LinearBuilder<Standard_Real, 3> (BVH_Constants_LeafNodeSizeDefault, BVH_Constants_MaxTreeDepth);
            BVH_Triangulation<Standard_Real, 3> triangulation(builder);

            for (int i=0; i<elem_verts.size(); i+=3) {
                triangulation.Vertices.push_back (BVH_Vec3d (elem_verts[i], elem_verts[i+1], elem_verts[i+2]));
                verts.push_back(gp_Pnt(elem_verts[i], elem_verts[i+1], elem_verts[i+2]));
            }

            for (int i=0; i<elem_faces.size(); i+=3) {
                const auto& v1_pnt = verts[elem_faces[i]];
                const auto& v2_pnt = verts[elem_faces[i+1]];
                const auto& v3_pnt = verts[elem_faces[i+2]];
                gp_Vec dir1(v1_pnt, v2_pnt);
                gp_Vec dir2(v1_pnt, v3_pnt);
                gp_Vec cross_product = dir1.Crossed(dir2);
                if (cross_product.Magnitude() > Precision::Confusion()) {
                    triangulation.Elements.push_back (BVH_Vec4i (
                        elem_faces[i], elem_faces[i+1], elem_faces[i+2], original_tris_index
                    ));
                    original_tris_index++;
                    original_tris.push_back({
                        elem_faces[i], elem_faces[i+1], elem_faces[i+2]
                    });
                    original_normals.push_back(cross_product.Normalized());
                }
            }

            triangulation.MarkDirty();
            const auto bvh = triangulation.BVH();

            // After BVH is constructed, triangles are reordered
            std::vector<std::array<int, 3>> tris(triangulation.Size());
            std::vector<gp_Vec> normals(triangulation.Size());

            for (int i=0; i<triangulation.Size(); ++i) {
                const auto& el = triangulation.Elements[i];
                tris[i] = original_tris[el[3]];
                normals[i] = original_normals[el[3]];
            }

            bvhs_[t] = bvh;
            is_manifold_[t] = IfcGeom::util::is_manifold(s); // NB: 's' is left over from the shape-based variant; a triangulation-based check is still needed here
            tris_[t] = std::move(tris);
            verts_[t] = std::move(verts);
            normals_[t] = std::move(normals);
        }

@aothms
Member

aothms commented Mar 6, 2024

Ok, that's the nice thing about timezones: IfcOpenShell is worked on 24h a day. I committed the combination of our add_element()s. I don't know if, as part of your investigations, you have collected some metrics on good and bad obbs. Could be there are still some bugs, in addition to general uncertainty about the effectiveness of this approach. I'd be happy to keep refining this. Still requires adding the max protrusion.

@Moult
Contributor Author

Moult commented Mar 26, 2024

Just returning to this - I've built the frontend bit and I've noticed that the results after 796760a seem to be a little different. For an intersection test, only collision clash types are returned, which suggests that the new is_manifold always returns false. I haven't investigated yet; would you mind taking a peek? It could be obvious to you, but it'll probably require a lot of deciphering for me :)

@aothms
Member

aothms commented Mar 26, 2024

You're 100% correct, the code was not correct. It's working now, but it does assume weld_vertices=true because we only look at the indices.

#include <vector>
#include <set>
#include <iostream>

static bool is_manifold(const std::vector<int>& fs) {
    // @nb this assumes geometry is processed with the WELD_VERTICES setting
    std::set<std::pair<size_t, size_t>> dict;
    for (size_t i = 0; i < fs.size(); i += 3) {
        for (size_t j = 0; j < 3; ++j) {
            auto k = (j + 1) % 3;
            auto it = dict.find({ fs[i + j], fs[i + k] });
            if (it != dict.end()) {
                dict.erase(it);
            } else {
                dict.insert({ fs[i + k], fs[i + j] });
            }
        }
    }
    return dict.empty();
}

int main(int, char**) {
    std::cout << is_manifold({ 0,1,2 }) << is_manifold({ 1, 0, 3, 2, 1, 3, 2, 3, 5, 4, 2, 5, 4, 5, 7, 6, 4, 7, 6, 7, 0, 1, 6, 0, 3, 0, 5, 0, 7, 5, 4, 1, 2, 4, 6, 1 }) << is_manifold({ 2, 3, 5, 4, 2, 5, 4, 5, 7, 6, 4, 7, 6, 7, 0, 1, 6, 0, 3, 0, 5, 0, 7, 5, 4, 1, 2, 4, 6, 1 });
}

@Moult
Contributor Author

Moult commented Mar 26, 2024

Nice! It seems to "work"! There's obviously still a lot of work to be done, both on the general code cleaning (i.e. feature-clash3) as well as on the actual collision logic (especially with the new viewer it's easier to review clashes and spot questionable results).

Do you think it's possible to merge this? (Maybe BRepTools::Clean(s); should be uncommented?) I can finalise polishing up the Blender side of things and do a BBIM release after an openbot build / update the geom tree docs, so that at least users can access it and start playing with it, and then we can target another round of cleanups for the next release cycle?

[screenshot]

@aothms aothms merged commit 24257d2 into v0.7.0 Mar 26, 2024
3 of 5 checks passed
@aothms
Member

aothms commented Mar 26, 2024

That UI looks absolutely stunning! I've started a build.

@aothms
Member

aothms commented Mar 27, 2024

Hm, the build failed due to chocolatey errors. Let's try again:


Chocolatey installed 0/1 packages. 1 packages failed.
 See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).

Failures
 - visualstudio2017buildtools - visualstudio2017buildtools not installed. The package was not found with the source(s) listed.
