Use involved_face_index_diff for surgical index updates in YgorMeshesHoles.cc#50
Conversation
…es.cc and YgorMeshesRefinement.cc

Convert FillBoundaryChainsByZippering() to apply a surgical diff for newly added faces when the index is already valid, falling back to a full rebuild when the index is absent. Convert loop_subdivide() to compute and apply a diff at the end of each iteration instead of clearing the index and rebuilding from scratch. The initial rebuild is still performed on the first iteration to handle any prior modifications to the mesh. Add unit tests that verify the diff-based index matches a full rebuild for both hole filling and subdivision operations.

Agent-Logs-Url: https://github.com/hdclark/Ygor/sessions/12a742ef-e1eb-40e9-888a-4471bfddf506
Co-authored-by: hdclark <934858+hdclark@users.noreply.github.com>
Pull request overview
This PR replaces full involved_faces index rebuilds with diff-based incremental updates after topology changes in hole filling and Loop subdivision, and adds regression tests to validate index correctness against recreate_involved_face_index().
Changes:
- Update `FillBoundaryChainsByZippering` to apply a diff for newly appended faces when an index is present; otherwise rebuild.
- Update `loop_subdivide` to rebuild only on the first iteration and then maintain `involved_faces` via diffs per iteration.
- Add tests that compare diff-updated `involved_faces` against a full rebuild for hole filling and refinement.
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| src/YgorMeshesHoles.cc | Switches post-zippering index maintenance from full rebuild to diff-based additions when index is present. |
| src/YgorMeshesRefinement.cc | Replaces per-iteration/full-end rebuilds in loop_subdivide with diff-based remove/add updates. |
| tests2/YgorMeshesHoles.cc | Adds subcases to verify zippering diff/rebuild paths match recreate_involved_face_index(). |
| tests2/YgorMeshesRefinement.cc | Adds subcases to verify loop_subdivide diff-based index matches full rebuild after 1/2 iterations and on an icosahedron. |
```cpp
if(fvsm.involved_faces.size() == fvsm.vertices.size()){
    // Index is valid; apply surgical update for the newly added faces only.
    involved_face_index_diff<I> diff;
    for(auto f_idx = N_faces_before; f_idx < static_cast<I>(fvsm.faces.size()); ++f_idx){
        for(const auto &v : fvsm.faces[f_idx]){
            diff.entries_to_add.emplace_back(v, f_idx);
        }
    }
    fvsm.apply_involved_face_index_diff(diff);
}else{
    fvsm.recreate_involved_face_index();
}
```
The condition involved_faces.size() == vertices.size() doesn’t actually prove the index is current—only that it’s sized consistently. If involved_faces can be stale-but-sized (e.g., prior face edits without clearing/updating), applying a diff here will preserve incorrect existing associations and produce a wrong index. Consider gating the diff path on a stronger “index is known-valid/up-to-date” signal (e.g., an explicit validity flag/generation counter), or conservatively falling back to recreate_involved_face_index() unless the caller/function can guarantee the index is current.
Suggested change:

```diff
-if(fvsm.involved_faces.size() == fvsm.vertices.size()){
-    // Index is valid; apply surgical update for the newly added faces only.
-    involved_face_index_diff<I> diff;
-    for(auto f_idx = N_faces_before; f_idx < static_cast<I>(fvsm.faces.size()); ++f_idx){
-        for(const auto &v : fvsm.faces[f_idx]){
-            diff.entries_to_add.emplace_back(v, f_idx);
-        }
-    }
-    fvsm.apply_involved_face_index_diff(diff);
-}else{
-    fvsm.recreate_involved_face_index();
-}
+// Conservatively recreate the involved-face index to ensure it is up to date.
+fvsm.recreate_involved_face_index();
```
gate the diff path on a stronger “index is known-valid/up-to-date” signal
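One way to provide such a "known-valid" signal is a pair of generation counters. The following is only a hypothetical sketch: neither `topology_generation` nor `index_generation` exists in the current `fv_surface_mesh` API; it just illustrates the gating idea the comment asks for.

```cpp
#include <cstdint>

// Hypothetical validity tracker: the topology counter is bumped on every
// face/vertex edit, and the index counter is snapshotted whenever the
// involved-face index is rebuilt or diff-updated in lockstep.
struct index_validity {
    uint64_t topology_generation = 0;
    uint64_t index_generation = 0;

    void note_topology_change(){ ++topology_generation; }
    void note_index_rebuilt(){ index_generation = topology_generation; }

    // The diff fast-path is only safe when the index tracked every change.
    bool index_is_current() const {
        return index_generation == topology_generation;
    }
};
```

A stale-but-correctly-sized index then fails `index_is_current()` and falls back to a full rebuild, which a size comparison alone cannot guarantee.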
```cpp
if(holes.has_nonmanifold_edges) return false;

const auto eps_sq = eps * eps;
const auto N_faces_before = static_cast<I>(fvsm.faces.size());
```
This mixes size_t container sizes with I via static_cast<I>(fvsm.faces.size()) and uses auto f_idx = N_faces_before (deducing I). That can truncate/wrap if sizes approach/exceed I’s range and also makes the loop bounds dependent on narrowing casts. Prefer iterating with size_t (for(size_t f_idx = N_faces_before; f_idx < fvsm.faces.size(); ++f_idx)) and only casting f_idx to I at the point of storing it in the diff entries.
```cpp
for(auto f_idx = N_faces_before; f_idx < static_cast<I>(fvsm.faces.size()); ++f_idx){
    for(const auto &v : fvsm.faces[f_idx]){
        diff.entries_to_add.emplace_back(v, f_idx);
    }
}
```
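The `size_t`-first iteration recommended in that comment could be sketched as follows, using stand-in types (the real `fvsm` and diff types live in Ygor; `I` here is just a local alias for the mesh index type):

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

using I = uint32_t; // stand-in for the mesh's index type

// Collect (vertex, face) associations for faces appended at/after N_faces_before.
std::vector<std::pair<I,I>> entries_for_new_faces(
        const std::vector<std::vector<I>> &faces,
        size_t N_faces_before){
    std::vector<std::pair<I,I>> entries;
    // Iterate with size_t so the loop bound never narrows or wraps...
    for(size_t f_idx = N_faces_before; f_idx < faces.size(); ++f_idx){
        for(const auto &v : faces[f_idx]){
            // ...and cast to I only at the point of storage.
            entries.emplace_back(v, static_cast<I>(f_idx));
        }
    }
    return entries;
}
```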
```cpp
// Compute the diff before replacing faces so we can reference the old face data.
involved_face_index_diff<I> diff;
diff.new_vertex_count = static_cast<I>(fvsm.vertices.size()) - N_orig_verts;

// Remove associations for all old faces.
for(size_t f_idx = 0; f_idx < N_orig_faces; ++f_idx){
    for(const auto &v : fvsm.faces[f_idx]){
        diff.entries_to_remove.emplace_back(v, static_cast<I>(f_idx));
    }
}

// Add associations for all new faces.
for(size_t f_idx = 0; f_idx < new_faces.size(); ++f_idx){
    for(const auto &v : new_faces[f_idx]){
        diff.entries_to_add.emplace_back(v, static_cast<I>(f_idx));
    }
}
```
In loop_subdivide, every iteration “diffs” by removing all old face associations and adding all new ones, which is effectively a full rebuild expressed as many individual remove/add operations. If apply_involved_face_index_diff() performs per-entry search/erase in per-vertex vectors, this can be measurably slower than rebuilding from scratch, especially for high-valence vertices (many removals). Consider either (a) keeping recreate_involved_face_index() here since the topology replacement is total, or (b) adding/using a specialized fast-path in the diff application for “replace all faces” that clears/repacks adjacency in linear time.
implement this suggestion, using a full index rebuild when it would not be practical to use a surgical diff. Fully recreating the index from scratch is a good idea when the entire mesh is being altered (as is the case in Loop subdivision).
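For reference, the linear-time "replace all faces" fast path from option (b) could look roughly like this hypothetical helper over stand-in adjacency lists; a full `recreate_involved_face_index()` presumably does something equivalent internally, which is why it is the practical choice when topology replacement is total:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

using I = uint32_t; // stand-in for the mesh's index type

// Linear-time full repack: instead of many per-entry search/erase operations,
// drop every old association and repopulate from the new face list in one pass.
void repack_involved_faces(std::vector<std::vector<I>> &involved_faces,
                           const std::vector<std::vector<I>> &faces,
                           size_t N_vertices){
    involved_faces.assign(N_vertices, {});
    for(size_t f_idx = 0; f_idx < faces.size(); ++f_idx){
        for(const auto &v : faces[f_idx]){
            involved_faces[v].push_back(static_cast<I>(f_idx));
        }
    }
}
```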
```cpp
for(size_t v = 0; v < mesh.vertices.size(); ++v){
    auto got = mesh.involved_faces[v];
    auto expected = reference.involved_faces[v];
    std::sort(got.begin(), got.end());
    std::sort(expected.begin(), expected.end());
    REQUIRE(got == expected);
}
```
The “sort and compare involved_faces vs recreate reference” loop is duplicated across multiple subcases (and similarly in tests2/YgorMeshesHoles.cc). To reduce repetition and keep future updates consistent, consider extracting this into a small helper (e.g., a local lambda require_involved_faces_matches_recreate(mesh)).
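Such a helper might be distilled as follows. This is a sketch over bare adjacency lists; the real tests would take the mesh plus a `recreate_involved_face_index()` reference and use doctest `REQUIRE` rather than a bool return:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

using I = uint32_t; // stand-in for the mesh's index type

// Order within each per-vertex face list is not significant, so sort both
// sides before comparing, exactly as the duplicated subcase loops do.
bool involved_faces_match(std::vector<std::vector<I>> got,
                          std::vector<std::vector<I>> expected){
    if(got.size() != expected.size()) return false;
    for(size_t v = 0; v < got.size(); ++v){
        std::sort(got[v].begin(), got[v].end());
        std::sort(expected[v].begin(), expected[v].end());
        if(got[v] != expected[v]) return false;
    }
    return true;
}
```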
@copilot apply changes based on the comments in this thread
Agent-Logs-Url: https://github.com/hdclark/Ygor/sessions/a95af4b3-c53c-4551-aac3-03327e8c8520
Co-authored-by: hdclark <934858+hdclark@users.noreply.github.com>
Implemented the requested changes from that review thread and pushed them in commit 1c41ed2. Specifically: stronger validity gating and `size_t`-safe face iteration in zippering, a full `involved_faces` rebuild in `loop_subdivide` for full topology replacement, and helper-lambda deduplication of the `involved_faces`/recreate comparisons in both test files.
`fv_surface_mesh` operations in `YgorMeshesHoles.cc` and `YgorMeshesRefinement.cc` were reviewed for post-topology `involved_faces` maintenance. `FillBoundaryChainsByZippering` now uses `apply_involved_face_index_diff()` when safe, while `loop_subdivide` uses full `recreate_involved_face_index()` rebuilds because each iteration replaces the full face topology.

Changes

`FillBoundaryChainsByZippering`
- Gates the diff path on a check that `involved_faces` is both correctly sized and consistent with current mesh connectivity.
- Uses `size_t` for face-index iteration and casts to `I` only when storing diff entries.

`loop_subdivide`
- Calls `recreate_involved_face_index()` after replacing faces each iteration, since the topology change is global and a full rebuild is the practical/efficient choice.

Tests (`tests2/YgorMeshesHoles.cc`, `tests2/YgorMeshesRefinement.cc`)
- A shared helper compares the maintained `involved_faces` against a fresh `recreate_involved_face_index()` result.
- `YgorMeshesRemeshing.cc`

Tests
Full suite passes: 290 test cases, 99554 assertions.
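The connectivity-consistency gate described for `FillBoundaryChainsByZippering` might be sketched as below. This is a hypothetical stand-alone helper; the actual check lives inside the function and its exact form is not shown in this thread:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

using I = uint32_t; // stand-in for the mesh's index type

// Returns true only when the index is correctly sized and every face is
// recorded in each of its vertices' adjacency lists. (Stale *extra* entries
// would need a reverse check as well; this is a one-directional spot check.)
bool index_consistent(const std::vector<std::vector<I>> &involved_faces,
                      const std::vector<std::vector<I>> &faces,
                      size_t N_vertices){
    if(involved_faces.size() != N_vertices) return false;
    for(size_t f_idx = 0; f_idx < faces.size(); ++f_idx){
        for(const auto &v : faces[f_idx]){
            const auto &lst = involved_faces[v];
            if(std::find(lst.begin(), lst.end(), static_cast<I>(f_idx)) == lst.end()){
                return false;
            }
        }
    }
    return true;
}
```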