Clash detection features #4374
Conversation
This supersedes #4282 - after purging all the irrelevant chunking and Python stuff to make it easier to review. |
I'm trying to get the line count down on IfcGeomTree.h because I still get lost in the huge set of added lines. Do you think sharing the code of the BVH clashing and iteration makes sense? Have a look at what I committed to test_intersection_2(), which should be equivalent to test_intersection(). I think we can apply the same to the others. I still hope we can reduce the length of the function bodies even further and factor out shared logic into the util module. |
It's a good idea to move the generic functions out to bvh_utils.h :) Is test_intersection_2 equivalent? There's this logic which seems to be missing at the end of all the BVH tests:

if (protrusion > tolerance) {
    return {0, tA, tB, protrusion, protrusion_point, surface_point};
}
if (pierce > tolerance) {
    return {1, tA, tB, pierce, pierce_point1, pierce_point2};
}
return {-1, tA, tB, 0, {0, 0, 0}, {0, 0, 0}};

I must admit personally I would find the code more confusing if it did indirect to a function, especially because there would then also be a corresponding ... |
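For what it's worth, a minimal sketch of what that shared tail could look like if it were factored into bvh_utils.h. The struct and helper names here are hypothetical stand-ins, not the actual IfcGeomTree.h types:

```cpp
#include <array>

// Hypothetical stand-in for the real point type.
using point_t = std::array<double, 3>;

struct clash_result {
    int type;        // -1 = none, 0 = protrusion, 1 = pierce
    int tA;          // triangle index in shape A
    int tB;          // triangle index in shape B
    double distance;
    point_t p1;
    point_t p2;
};

// Shared tail of the BVH clash tests: protrusion takes priority over
// pierce, and nothing is reported when neither exceeds the tolerance.
inline clash_result finalize_clash(
    double tolerance, int tA, int tB,
    double protrusion, const point_t& protrusion_point, const point_t& surface_point,
    double pierce, const point_t& pierce_point1, const point_t& pierce_point2)
{
    if (protrusion > tolerance) {
        return {0, tA, tB, protrusion, protrusion_point, surface_point};
    }
    if (pierce > tolerance) {
        return {1, tA, tB, pierce, pierce_point1, pierce_point2};
    }
    return {-1, tA, tB, 0., {0., 0., 0.}, {0., 0., 0.}};
}
```

Each clash test would then end with a single call to this helper instead of repeating the three-branch return.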
BTW is the reason you deleted the write_h5 because you think it should be done in Python instead? In that case should C++ expose a buffer of AABB/OBB/BVH data too? |
Well we could, but moreso because it should be a separate file/serializer. I think we should regardless offer a better interface to the tree, to be able to traverse it from client-side code, both py as well as cpp.
Ok, right, I was in a hurry and didn't complete it properly. Thanks for the heads-up.
No that's the thing... I thought this logic is shared between the various clash functions; maybe there are one or more arguments to the lambda that need to be added, or that go unused in some of the prototypes. Otherwise consider it as me trying to understand the shared and distinct bits of logic - writing it as such is the quickest way for me to understand. |
From memory they look suspiciously repetitive until you notice the little early returns and shortcuts specific to each clash type, but I could be wrong.
|
I continued this exercise as it's very helpful for me to try and understand the logic: 71ad98b. They looked like three big blobs to me at first, but now I understand at least conceptually how they operate. The deviating part that prevents rewriting all three as a clean triangle-to-triangle predicate, sped up by the BVH, is the "pierce shape" routine. The logic there is a bit funky, because protrusions have priority over pierces, but pierces are executed first, because if there are no points contained in B the triangle cannot be protruding into B. And most of this runs at triangle level, but involves individual verts, which requires caching because verts are shared by triangles. But you also don't want to do the containment test on all vertices first, because you can still encounter a protruding vertex causing an early return.

I wonder though about the following. Wouldn't it be possible that in quite a few cases you keep re-evaluating the same vertex for protrusion distance? You cache whether it's inside B, but you don't cache the protrusion distance you found for it. So, let's say in the case below the red vertex is inside the cube, but not by an amount exceeding the tolerance ratio. You would keep re-evaluating that point for every triangle that uses that vertex. Or am I still not understanding things correctly?

I wonder whether the code isn't easier to follow and more performant if we execute the protrusion first and rewrite it as a vertex-to-triangle predicate, because that is what it is. I guess pierces are also rather rare, whereas shapes touching each other within some tolerance ratio is very common. For pierces I would try whether it makes sense to rewrite them based on trianglesIntersect() and seeing if one triangle in shape A intersects with two or more triangles in shape B. I don't think that with the BVH acceleration it would have widely different performance characteristics than the current approach based on counting the number of vertices outside.
But it would allow us to get rid of a bunch of code paths. There's something else though that maybe you can clarify for me, where maybe it makes more sense in general to count the number of intersections. Consider the situation below: a cube with an intersecting triangle, where the intersection between the plane of the triangle and the cube is drawn in dashed lines. The triangle is not piercing, because there is one vertex inside, but the protrusion result is minimal or not even beyond the tolerance. Could it be that nothing is reported at all in this case? If we instead take all intersections for the red triangle and calculate the length between them, would we then get something? Alternatively maybe you can consider also checking for pierces when there is one vertex contained.

In short, I find the coupling between calculating protrusion and piercing ingenious, but it's hard to gather intuition on whether it's worth it in terms of performance and maintainability.

Something else I noticed: you do the triangle-ray intersection along the triangle normal, but that is a bit of a special case I assume. If you take [0]

[0] https://gamedev.stackexchange.com/a/23745

Some things you/me still might want to try:
Let me know what you think based on all of this. It's really exciting that we have this now. I don't want to slow things down. |
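The trianglesIntersect()-based pierce test suggested above might look roughly like this. This is only a sketch: the intersection predicate is passed in as a parameter here, whereas the real code would call the existing triangle-triangle routine and restrict the candidate triangles of B via the BVH:

```cpp
#include <functional>
#include <vector>

// A triangle of A "pierces" B when it intersects two or more triangles of B.
// triangles_intersect is a hypothetical stand-in for the real predicate.
inline bool pierces(
    int tri_a,
    const std::vector<int>& tris_b, // candidate triangles of B (from the BVH)
    const std::function<bool(int, int)>& triangles_intersect)
{
    int hits = 0;
    for (int tri_b : tris_b) {
        if (triangles_intersect(tri_a, tri_b) && ++hits >= 2) {
            return true; // early out on the second intersection
        }
    }
    return false;
}
```

The early out keeps the cost close to the current vertex-counting approach in the common case where no second intersection exists.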
Yes, this is true. We do re-evaluate the same vertex again and again for protrusion distance. This is wasteful; however, there is a slight trick to it: when we evaluate the vertex, we only compare it to certain faces (which differ based on the normal of the triangle to which the vertex belongs), and we only compare to triangles in that BVH leaf (depending on the triangle the vertex is evaluated from, it may consider different BVH leaves). In short, it is wasteful and it could be cached further, but there is more to consider when writing the caching logic.
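If we do decide to cache, a bare-bones sketch of memoizing the protrusion distance per vertex index - with hypothetical names, and deliberately ignoring the per-face/per-leaf subtleties mentioned above, which a real cache key would have to encode:

```cpp
#include <unordered_map>

// Sketch: memoize the protrusion distance per vertex index, so a vertex
// shared by many triangles is only evaluated once. NB: the real evaluation
// depends on which faces and BVH leaves are compared, so the cache key would
// likely need to be richer than just the vertex index.
struct protrusion_cache {
    std::unordered_map<int, double> distance_by_vertex;
    int evaluations = 0; // counts how often the expensive path actually ran

    template <typename F>
    double get(int vertex_idx, F&& evaluate) {
        auto it = distance_by_vertex.find(vertex_idx);
        if (it != distance_by_vertex.end()) {
            return it->second; // cache hit: no re-evaluation
        }
        ++evaluations;
        const double d = evaluate(vertex_idx);
        distance_by_vertex.emplace(vertex_idx, d);
        return d;
    }
};
```

Calling get() twice for the same vertex runs the evaluation callback only once.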
I don't think so - e.g. a pipe going completely through a column, or a pipe going completely through a beam or duct, etc. From test models I recall finding a surprising and significant number of meaningful pierces.
Completely agree, the current entanglement of piercing and protrusion is confusing :( No clue about performance though. The code is what it is because it ... uh, evolved that way :) I started writing protrusions first and then bolted on piercing.
In this particular example the protrusion will be measured to the nearest black surface triangle that does not have approximately the same normal as the red triangle. So in this scenario I believe there will be a non-minimal protrusion result above the tolerance. But you're right: when somehow there is a protrusion > 0 && protrusion < tolerance, it'll cause pierce checks to be skipped, which is wrong. Pierce checks should only be skipped if protrusion > tolerance (this is arguable; some people might argue that pierce checks always have priority over protrusion, but for now I find the opposite gives more intuitive results - later we can make this configurable). So this predicate needs to change to
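If I follow correctly, the described fix would be along these lines. This is only a sketch - the real predicate sits inline in the clash code rather than in a named helper like this:

```cpp
// The pierce check should only be skipped once a protrusion actually
// exceeds the tolerance, not as soon as any protrusion > 0 is found.
inline bool skip_pierce_check(double protrusion, double tolerance) {
    return protrusion > tolerance; // was effectively: protrusion > 0
}
```

With this, a sub-tolerance protrusion no longer suppresses the pierce check.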
Is it OK to leave the messy code as-is temporarily, as a baseline to measure both performance and correctness against? I completely agree with all your suggestions to improve it; I'm just scared that refactoring might introduce unintended behaviour changes. E.g. for this weekend's release have two code paths, the original mess plus the WIP cleaned version, and by the next release aim to completely delete the original mess?
I really struggle with this type of math. I trust you 100% :) go for it!
All those changes make perfect sense! Let me know if you'd like me to implement any of them, or if it's easier for you to make them. In the meantime I can work on the Python IfcClash interface, and on building a nicer UI in Blender to navigate clash results quickly and visualise clashing points and measured distances. This will make it easier to audit the results and check for breaking changes. Should I still aim for a release this weekend? |
I think we can backspace my commits and save them for later. What I think we can also add as later discussion points is how the topology affects outcomes. The top left is registered as a pierce, the bottom left as a protrusion. The top right is registered only as the cube piercing the triangle; the bottom right is in addition registered as the triangle piercing the cube. I think this is another reason to consider rewriting pierces as triangle intersection. Pierces could be seen as any path of an intersection, followed by zero or more vertices contained in the volume, followed by another intersection. What we do need to work around is:
Both of these can be solved by directly feeding the tree the triangulation element, but I'm a bit puzzled by the code we have there now and how this reordering of triangles should be handled. Do you have a proposal? |
I can have a shot at implementing it with triangle reordering, but the thing I don't know is how to get the aabb and obb from the triangulation element. If you show me how to do that part, I can try to write the rest of the function. |
I'm working on porting the logic over here: IfcOpenShell/src/exterior-shell-extractor/main.py Lines 215 to 263 in 28c7da2
I only have a rough draft; I hope to be able to continue later today. My attempt uses the most prominent face directions to construct the reference frame. OCCT uses PCA. My naive guess/hope is that my approach actually works better for typical orthogonal elements with few vertices but large faces.

void add_triangulation_element(IfcGeom::TriangulationElement* elem) {
Bnd_Box B;
auto& trsf = elem->transformation().data();
auto& vs = elem->geometry().verts();
for (size_t i = 0; i < vs.size(); i += 3) {
gp_Pnt p(vs[i + 0], vs[i + 1], vs[i + 2]);
B.Add(p.Transformed(trsf));
}
{
auto& fs = elem->geometry().faces();
// std::map rather than std::unordered_map, because std::tuple has no std::hash specialization
std::map<std::tuple<int, int, int>, std::vector<size_t>> quantized_normal_counts;
std::vector<double> tri_areas;
std::vector<gp_XYZ> tri_norms;
for (size_t i = 0; i < fs.size(); i += 3) {
gp_Pnt p(vs[3 * fs[i + 0] + 0], vs[3 * fs[i + 0] + 1], vs[3 * fs[i + 0] + 2]);
gp_Pnt q(vs[3 * fs[i + 1] + 0], vs[3 * fs[i + 1] + 1], vs[3 * fs[i + 1] + 2]);
gp_Pnt r(vs[3 * fs[i + 2] + 0], vs[3 * fs[i + 2] + 1], vs[3 * fs[i + 2] + 2]);
auto cross = (q.XYZ() - p.XYZ()).Crossed(r.XYZ() - p.XYZ());
auto mag = cross.Modulus();
tri_areas.push_back(mag / 2.);
cross /= mag;
tri_norms.push_back(cross);
auto quantized = std::make_tuple(
static_cast<int>(cross.X() * 1000),
static_cast<int>(cross.Y() * 1000),
static_cast<int>(cross.Z() * 1000)
);
quantized_normal_counts[quantized].push_back(i / 3);
}
std::vector<std::pair<double, decltype(quantized_normal_counts)::const_iterator>> area_to_it;
for (auto it = quantized_normal_counts.cbegin(); it != quantized_normal_counts.cend(); ++it) {
double area_sum = 0.;
for (auto& i : it->second) {
area_sum += tri_areas[i];
}
area_to_it.push_back({ area_sum, it });
}
std::sort(area_to_it.begin(), area_to_it.end(), [](auto& p1, auto& p2) { return p1.first < p2.first; });
auto calc_average_norm = [&tri_norms](const std::vector<size_t>& idxs) {
gp_XYZ normal_sum;
for (auto& i : idxs) {
normal_sum.Add(tri_norms[i]);
}
normal_sum.Normalize();
return normal_sum;
};
auto Z = calc_average_norm(area_to_it.back().second->second);
std::vector<std::pair<double, gp_XYZ>> candidates;
size_t num_candidates = 0;
for (auto it = ++area_to_it.rbegin(); it != area_to_it.rend() && num_candidates < 10; ++it, ++num_candidates) {
auto ref = calc_average_norm(it->second->second);
candidates.push_back({ std::abs(Z.Dot(ref)), ref });
}
if (candidates.empty()) {
{
gp_XYZ ref(0, 0, 1);
candidates.push_back({ std::abs(Z.Dot(ref)), ref });
}
{
gp_XYZ ref(1, 0, 0);
candidates.push_back({ std::abs(Z.Dot(ref)), ref });
}
}
auto X = std::min_element(candidates.begin(), candidates.end(), [](auto& p1, auto& p2) { return p1.first < p2.first; })->second;
gp_Trsf trsf2;
gp_Ax3 ax3(gp::Origin(), Z, X);
trsf2.SetTransformation(gp::XOY(), ax3);
Bnd_Box tmp;
for (size_t i = 0; i < vs.size(); i += 3) {
gp_Pnt p(vs[i + 0], vs[i + 1], vs[i + 2]);
tmp.Add(p.Transformed(trsf2));
}
gp_Pnt cent = (tmp.CornerMax().XYZ() + tmp.CornerMin().XYZ()) / 2;
auto halfsize = tmp.CornerMax().XYZ() - cent.XYZ();
Bnd_OBB obb;
obb.SetXComponent(ax3.XDirection(), halfsize.X());
obb.SetYComponent(ax3.YDirection(), halfsize.Y());
obb.SetZComponent(ax3.Direction(), halfsize.Z());
obb.SetCenter(cent.Transformed(trsf2.Inverted()));
}
} |
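The "most prominent face direction" selection above, stripped of the OCCT types, is essentially: quantize each triangle normal so nearly-parallel triangles fall into the same bucket, accumulate area per bucket, and take the bucket with the largest total area as the reference direction. A reduced sketch with plain structs standing in for gp_XYZ:

```cpp
#include <map>
#include <tuple>
#include <vector>

struct vec3 { double x, y, z; };

// Return the quantized normal bucket with the largest accumulated triangle
// area. Quantization to 1/1000 merges normals of nearly coplanar triangles.
inline std::tuple<int, int, int> dominant_quantized_normal(
    const std::vector<vec3>& tri_normals, // unit normals, one per triangle
    const std::vector<double>& tri_areas)
{
    std::map<std::tuple<int, int, int>, double> area_per_bucket;
    for (std::size_t i = 0; i < tri_normals.size(); ++i) {
        auto key = std::make_tuple(
            static_cast<int>(tri_normals[i].x * 1000),
            static_cast<int>(tri_normals[i].y * 1000),
            static_cast<int>(tri_normals[i].z * 1000));
        area_per_bucket[key] += tri_areas[i];
    }
    auto best = area_per_bucket.begin();
    for (auto it = area_per_bucket.begin(); it != area_per_bucket.end(); ++it) {
        if (it->second > best->second) best = it;
    }
    return best->first;
}
```

In the full version above, this winning bucket yields the Z direction, and the least-parallel runner-up candidate yields X.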
Cheers, I'll take a closer look tomorrow. BTW you probably already noticed, but OCCT has an "is_optimal" arg or something when creating an OBB; I have it disabled (enabled is far too slow). |
I just tried to compile this branch and got quite a few errors:
Have you forgotten to commit something or is my environment more strict / different / or? |
OK here is a variant of add_element that feeds the tree a triangulation element directly. It's missing the portion of code which does AABB, OBB, and the max_protrusion (based on OBB). I haven't tested it and I don't even know if it compiles (see above), but fingers crossed it works, and that should allow us to remove the hacky commented out

In case you're wondering, yes, it is wasteful to essentially have

void add_element(IfcGeom::TriangulationElement* elem) {
// TODO: Add aabb and obb
// TODO: add max_protrusion from obb
const auto& t = elem->product();
const std::vector<double>& matrix = elem->transformation().matrix().data();
const std::vector<double>& elem_verts_local = elem->geometry().verts();
const std::vector<int>& elem_faces = elem->geometry().faces();
std::vector<double> elem_verts;
apply_matrix_to_flat_verts(elem_verts_local, matrix, elem_verts);
int original_tris_index = 0;
std::vector<std::array<int, 3>> original_tris;
std::vector<gp_Pnt> verts;
std::vector<gp_Vec> original_normals;
// Attempt to copy exactly what BRepExtrema_TriangleSet is doing under the hood.
const auto builder = new BVH_LinearBuilder<Standard_Real, 3> (BVH_Constants_LeafNodeSizeDefault, BVH_Constants_MaxTreeDepth);
BVH_Triangulation<Standard_Real, 3> triangulation(builder);
for (int i=0; i<elem_verts.size(); i+=3) {
triangulation.Vertices.push_back (BVH_Vec3d (elem_verts[i], elem_verts[i+1], elem_verts[i+2]));
verts.push_back(gp_Pnt(elem_verts[i], elem_verts[i+1], elem_verts[i+2]));
}
for (int i=0; i<elem_faces.size(); i+=3) {
const auto& v1_pnt = verts[elem_faces[i]];
const auto& v2_pnt = verts[elem_faces[i+1]];
const auto& v3_pnt = verts[elem_faces[i+2]];
gp_Vec dir1(v1_pnt, v2_pnt);
gp_Vec dir2(v1_pnt, v3_pnt);
gp_Vec cross_product = dir1.Crossed(dir2);
if (cross_product.Magnitude() > Precision::Confusion()) {
triangulation.Elements.push_back (BVH_Vec4i (
elem_faces[i], elem_faces[i+1], elem_faces[i+2], original_tris_index
));
original_tris_index++;
original_tris.push_back({
elem_faces[i], elem_faces[i+1], elem_faces[i+2]
});
original_normals.push_back(cross_product.Normalized());
}
}
triangulation.MarkDirty();
const auto bvh = triangulation.BVH();
// After BVH is constructed, triangles are reordered
std::vector<std::array<int, 3>> tris(triangulation.Size());
std::vector<gp_Vec> normals(triangulation.Size());
for (int i=0; i<triangulation.Size(); ++i) {
const auto& el = triangulation.Elements[i];
tris[i] = original_tris[el[3]];
normals[i] = original_normals[el[3]];
}
bvhs_[t] = bvh;
is_manifold_[t] = IfcGeom::util::is_manifold(s); // @nb `s` is not defined in this function; this line doesn't compile as-is
tris_[t] = std::move(tris);
verts_[t] = std::move(verts);
normals_[t] = std::move(normals);
} |
Ok, that's the nice thing about timezones: IfcOpenShell is worked on 24h a day. I committed the combination of our add_element()s. I don't know if, as part of your investigations, you have collected some metrics on good and bad OBBs. Could be there are still some bugs, in addition to the general uncertainty about the effectiveness of this approach. I'd be happy to keep refining this. It still requires adding the max protrusion. |
Just returning to this - I've built the frontend bit and I've noticed that the results after 796760a seem to be a little different. For an intersection test, only collision clash types are returned, which suggests that the new is_manifold always returns false. I haven't investigated yet; would you mind taking a peek? It could be obvious to you, but it'll probably require a lot of deciphering for me :) |
You're 100% correct, the code was not correct. It is working now, but does assume weld_vertices=true because we only look at the indices.

#include <vector>
#include <set>
#include <iostream>
static bool is_manifold(const std::vector<int>& fs) {
// @nb this assumes geometry is processed with the WELD_VERTICES setting
std::set<std::pair<size_t, size_t>> dict;
for (size_t i = 0; i < fs.size(); i += 3) {
for (size_t j = 0; j < 3; ++j) {
auto k = (j + 1) % 3;
auto it = dict.find({ fs[i + j], fs[i + k] });
if (it != dict.end()) {
dict.erase(it);
} else {
dict.insert({ fs[i + k], fs[i + j] });
}
}
}
return dict.empty();
}
int main(int, char**) {
std::cout << is_manifold({ 0,1,2 }) << is_manifold({ 1, 0, 3, 2, 1, 3, 2, 3, 5, 4, 2, 5, 4, 5, 7, 6, 4, 7, 6, 7, 0, 1, 6, 0, 3, 0, 5, 0, 7, 5, 4, 1, 2, 4, 6, 1 }) << is_manifold({ 2, 3, 5, 4, 2, 5, 4, 5, 7, 6, 4, 7, 6, 7, 0, 1, 6, 0, 3, 0, 5, 0, 7, 5, 4, 1, 2, 4, 6, 1 });
} |
Nice! It seems to "work"! There's obviously still a lot of work to be done, both on the general code cleaning (i.e. feature-clash3) as well as in the actual collision logic (especially with the new viewer it's easier to review clashes and spot questionable results). Do you think it's possible to merge this? (Maybe |
That UI looks absolutely stunning! I've started a build. |
Hm, build failed due to chocolatey errors. Let's try again:
|