mesh size approximation #764
Comments
I take it this is memory footprint, rather than dimensions?
Yes, by the size of the mesh I actually mean the number of vertices :) Perhaps I've been reading too much complexity theory recently.
Basically, the use case is that we cache Manifold results so that subsequent evaluations run faster when the underlying code of a sub-tree hasn't changed, or for repeated equivalent code (like a geometry version of common subexpression elimination).
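The caching scheme described here can be sketched in a few lines. This is an illustrative Python mock-up, not OpenSCAD's actual code; `GeometryCache` and the string-keyed evaluator are hypothetical names. The idea is that a result is keyed by a hash of its sub-tree's source, so unchanged or repeated sub-trees evaluate only once:

```python
# Hypothetical sketch of CSE-style geometry caching: results are keyed by a
# hash of the sub-tree's source, so identical sub-trees evaluate only once.
import hashlib

class GeometryCache:
    def __init__(self):
        self._cache = {}

    def key(self, subtree_source: str) -> str:
        # Any change to the sub-tree's code yields a new key.
        return hashlib.sha256(subtree_source.encode()).hexdigest()

    def get_or_evaluate(self, subtree_source, evaluate):
        k = self.key(subtree_source)
        if k not in self._cache:
            self._cache[k] = evaluate(subtree_source)
        return self._cache[k]

cache = GeometryCache()
calls = []
def fake_eval(src):
    calls.append(src)
    return f"mesh({src})"

a = cache.get_or_evaluate("cube(1)", fake_eval)
b = cache.get_or_evaluate("cube(1)", fake_eval)  # cache hit: not re-evaluated
assert a == b and len(calls) == 1
```

The second lookup never calls the evaluator, which is the "geometry CSE" effect described above.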
Why do you need an estimate? Why not just run that sub-tree and measure its size? Is this to support the …
This is run for every intermediate CSG result in our CSG tree, and I was under the impression that evaluating the Manifold tree would be too expensive to run that often. Note: we run this often because CGAL is really slow. With Manifold, we could potentially limit cache insertion to operations taking more than some threshold computation time.
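Threshold-based cache insertion could look like the following sketch (an assumption about how it might work, not OpenSCAD's actual code; `evaluate_with_caching` and `THRESHOLD_SECONDS` are hypothetical names):

```python
# Sketch of threshold-based cache insertion: only results whose evaluation
# exceeded a time budget are cached; cheap results don't pollute the cache.
import time

THRESHOLD_SECONDS = 0.01  # assumed tuning knob

def evaluate_with_caching(key, evaluate, cache):
    if key in cache:
        return cache[key]
    start = time.perf_counter()
    result = evaluate()
    elapsed = time.perf_counter() - start
    if elapsed >= THRESHOLD_SECONDS:
        cache[key] = result  # only expensive results enter the cache
    return result
```

One caveat, raised later in this thread: with lazy evaluation, the measured time is only meaningful if something actually forces the operation to run.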
Well, it sounds like both OpenSCAD and Manifold have an internal CSG tree. If you're doing fancy caching with your tree, then it probably has more benefit than ours, in which case you might want to circumvent our CSG tree (by forcing execution via e.g. …)
For now, OpenSCAD doesn't do any more powerful CSG tree rewriting, so they probably don't want complete control over this (yet).
Perhaps we’re using Manifold in a slightly awkward way. I need to learn something about how Manifold actually works :) |
OK, reading some code, it looks like, when performing an operation on Manifold objects, the work is deferred rather than executed eagerly. This kind of makes the concept of caching Manifold operations moot... until we actually hit some expensive operation (like minkowski, which will perform a Nef3 minkowski operation inline). Since we may not want to rely on knowing which operations are expensive, perhaps caching should really be done based on observed resource usage, not on a per-node basis. I'll go back and think about this for a bit. If any of my observations above are grossly inaccurate, I'd appreciate some guidance :)
Yeah, Manifold is very lazy about evaluation, and it also does some interesting reordering of the CSG tree to help optimize. I didn't realize OpenSCAD did any fancy caching - I thought it was all user-driven with the …
Probably not. The main issue is that you will not know the resource usage of an operation until you eventually evaluate it. If nothing forces the evaluation, everything will look unbelievably fast, which is one common issue when users try to benchmark Manifold's performance. Consider the following:

```
union() {
    expensive_thing();
    simple_thing();
}
```

If OpenSCAD does not cache the …
Right, but in that case, shouldn't we cache manifold::Mesh objects, rather than manifold::Manifold objects as we do today?
No, but if you want to actually cache a manifold, just make sure to call NumVerts on it; that forces evaluation. If you don't do that, then all you're caching is a promise of a CSG tree, but you probably already have that outside of Manifold. Although I probably shouldn't have much confidence here without getting familiar with your code.
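The promise-then-force behaviour described here can be sketched language-agnostically. This is a Python illustration of the concept only, not Manifold's implementation; `LazyNode` and `num_vert` are hypothetical names echoing the NumVerts call mentioned above, and the vertex arithmetic is deliberately simplified:

```python
# Illustrative sketch of lazy CSG evaluation: operations merely build a tree
# of promises; querying a concrete property (the vertex count) forces the
# whole subtree to evaluate.
class LazyNode:
    def __init__(self, op, children=(), leaf_verts=0):
        self.op = op
        self.children = children
        self.leaf_verts = leaf_verts
        self._result = None  # None until evaluation is forced

    def union(self, other):
        # Cheap: records the operation without computing anything.
        return LazyNode("union", (self, other))

    def num_vert(self):
        # Forcing point: first query evaluates and memoizes the subtree.
        # (Real boolean ops don't simply sum vertices; this is a stand-in.)
        if self._result is None:
            self._result = self.leaf_verts + sum(
                c.num_vert() for c in self.children)
        return self._result

a = LazyNode("leaf", leaf_verts=8)
b = LazyNode("leaf", leaf_verts=6)
u = a.union(b)              # instant: just a promise
assert u._result is None    # nothing evaluated yet
assert u.num_vert() == 14   # the query forces evaluation
```

Caching `u` before calling `num_vert` would cache only the promise, which is the pitfall the comment warns about.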
Caching the promise without forcing evaluation is fine; we take that into consideration and make sure there will not be double evaluation.
Let me know if this discussion is better suited elsewhere. Reading more code, I think I get it: once the subtree is dereferenced, it's lost, unless someone else decides to hang onto a shared_ptr to interesting subtrees. So what OpenSCAD needs to do in the …
Yes, your understanding is correct. |
Thanks for the confirmation! I guess we're back to square one in terms of this ticket: we don't have any way of estimating the size of a ManifoldNode until it's evaluated. We could proactively evaluate every node we create (bottom-up), but it sounds like we may then kill any optimizations done by Manifold.
Considering the point is to make it faster to re-evaluate when someone changes their script, another approach would be to choose cache points based on usage or code shape (modules without many parameters, or whose parameters haven't changed much), rather than execution time. Otherwise, the simplest approximation is probably to just sum the number of verts of all the ancestor meshes, but it'll be very approximate.
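The sum-of-verts approximation suggested here amounts to a simple tree walk. A minimal sketch, assuming a dict-based stand-in for the CSG tree where evaluated leaf meshes carry a vertex count (`estimate_size`, `verts`, and `children` are hypothetical names):

```python
def estimate_size(node):
    """Approximate a node's mesh size by summing the vertex counts of its
    evaluated leaf meshes, without evaluating any boolean operation."""
    if node.get("verts") is not None:  # already-evaluated leaf mesh
        return node["verts"]
    return sum(estimate_size(child) for child in node.get("children", []))

tree = {"children": [
    {"verts": 8},                                   # e.g. a cube
    {"children": [{"verts": 482}, {"verts": 8}]},   # unevaluated boolean
]}
assert estimate_size(tree) == 498
```

As noted, this is only a rough bound: a boolean result can have far fewer vertices than its operands (or more, where intersections cut new edges), so the estimate is best treated as a cache-sizing heuristic rather than a prediction.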
I think what Manifold can do now is just give them the sum of the number of verts. Any more advanced cache system should be implemented on the OpenSCAD side and tailored to their use cases.
@kintel is this still needed if we don't cache manifold geometries and just cache polysets? |
Let's pause this request for now, until we have a better sense of how to manage such caching. |
Add a mesh size method that can approximate the size of the mesh without actually triggering the underlying CSG evaluation. Maybe just sum the size of the operands. This could help OpenSCAD compute the cache size.
@kintel