Added : VHACD & karimnaaji qhull implementation #781
base: master
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Thanks; what are your next steps? Do you need anything from us right now? And have you managed to sign the CLA (the individual one should be instant)?
In this PR I have not changed any other files; I only added the two new implementations plus some test cases. I am not sure why some test cases are failing; could you suggest what I can try to resolve them? As for my further plans, I am looking into adding other implementations suggested by zalo, such as https://github.com/tomilov/quickhull and https://github.com/andreacasalino/Fast-Quick-hull (I am almost done with another implementation, which I will prioritize first). I also plan to add implementations of Graham's scan and Chan's algorithm, as they seem like suitable alternatives, and to optimize the quickhull code using parallelization. And yes, I have now signed the individual CLA.
Codecov Report
Attention: Patch coverage is
Additional details and impacted files:

@@            Coverage Diff            @@
##           master     #781      +/-  ##
==========================================
- Coverage   91.84%   74.10%   -17.74%
==========================================
  Files          37       64       +27
  Lines        4976    12115     +7139
  Branches        0     1737     +1737
==========================================
+ Hits         4570     8978     +4408
- Misses        406     3137     +2731

View full report in Codecov by Sentry.
It looks like you've already made progress - most of it is building with tests passing. It looks like MSVC may be a bit pickier about implicit type conversions. And the fact that the WASM build is running out of memory is not a great sign; this probably implies the memory usage of that algorithm doesn't scale well. It looks like you're investigating several different approaches to convex hull, but at some point you'll want to pick one to focus on optimizing. What is your sense so far? How do you plan to choose between them?
For the WASM, I realised I wasn't freeing memory in one place, so I will see if fixing that solves the issue. Apart from that, I'm not sure why build_windows is failing; the error message isn't clear to me, since I haven't made any changes to minimize_testcase.cpp. Maybe it's a memory issue as well; I will fix that and see. As suggested, I am currently exploring different approaches. I think part of this project is about exploring approaches so that we can narrow the options down to two or three; then I'll spend time understanding both the theory of each algorithm in depth and the code, to account for edge cases, and once we have the options we can study them properly to identify bottlenecks to optimize. About choosing: I was reading about some common algorithms and found that Graham's scan and Chan's algorithm are fairly fast and handle some edge cases that quickhull can't. While narrowing down to two or three, we can use some complex test cases to measure the time taken by each algorithm, look at cases where quickhull suffers to see whether another algorithm performs better there, and also consider the cases where the new algorithm suffers, to understand the tradeoffs better. I would appreciate your input on these thoughts, so that I can better understand your expectations.
That sounds good - more speed with less maintenance is ideal. As far as correctness, I'd love to see some examples of incorrect solutions. Keep in mind that our library does inexact geometry, so I'm less picky about how nearly coplanar points are handled, so long as the result is sane. You'll notice we tend to check our test cases on volume and surface area - I think this is a good approach for Hull as well. If all the points are coplanar, I'm equally okay with returning a flat manifold or an empty manifold - same volume after all.
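The volume/surface-area comparison described above can be sketched generically. This is not Manifold's API; it is a minimal divergence-theorem computation over a triangle soup, and the `Vec3`/`Tri` aliases and `MeshProperties` name are made up here for illustration:

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical minimal types, not from the Manifold codebase.
using Vec3 = std::array<double, 3>;
using Tri = std::array<Vec3, 3>;

static Vec3 Sub(const Vec3& a, const Vec3& b) {
  return {a[0] - b[0], a[1] - b[1], a[2] - b[2]};
}
static Vec3 Cross(const Vec3& a, const Vec3& b) {
  return {a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]};
}
static double Dot(const Vec3& a, const Vec3& b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

struct MeshProps { double volume, area; };

// Each outward-oriented triangle contributes a signed tetrahedron
// (origin, v0, v1, v2) to the volume and half its cross-product
// magnitude to the surface area.
MeshProps MeshProperties(const std::vector<Tri>& tris) {
  MeshProps p{0.0, 0.0};
  for (const Tri& t : tris) {
    Vec3 n = Cross(Sub(t[1], t[0]), Sub(t[2], t[0]));
    p.area += 0.5 * std::sqrt(Dot(n, n));
    p.volume += Dot(t[0], n) / 6.0;  // signed tet volume against the origin
  }
  return p;
}
```

Two hull outputs can then be compared by checking that their volumes and areas agree within a relative tolerance, regardless of how each triangulated the surface.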
Yeah, when I said incorrect solutions I was mainly referring to coplanar points, but I understand your argument as to why that isn't an issue. The other case I was considering involved point deletions (I can't quite recall where I read about it), but since we are mainly checking volume and surface area, that shouldn't be a major problem either. I will still try to find cases, to see if we can identify some edge case. Also, regarding this PR: I have now passed all checks except build_windows and the TBB one, and I am working on identifying the issues, but it's taking some time. I will update you once it's done so you can review it and I can use that feedback to improve.
@elalish, I attempted to fix the issues with build_windows but haven't succeeded yet. I have switched to Windows to work on it and hope to get it done, but with the proposal deadline coming closer I wanted to prioritize writing a good proposal, to convey my thoughts better. I was wondering if you could review my code (I know it doesn't pass all checks yet, but as far as I understand it's merely an implicit typecasting issue) and give me some feedback, so that I can take it into account in my proposal. I also went through the guidelines on the opencax GSoC website, and since they were quite general (for example, they ask for an IRC username even though IRC isn't a mode of communication used by Manifold), I was wondering if there are separate guidelines I need to consider for this project (I will follow the opencax ones as well). PS: Sorry, I have been busy with institute exams lately; I will certainly get back to fixing this issue soon.
This is our first year in GSoC, so we don't really have any additional guidelines. As long as your proposal demonstrates your interest and shows some related experience from courses or other work, it should be fine. And you can replace the IRC username with your GitHub handle.
I have submitted a draft of my GSoC proposal on the website and would appreciate it if you could go through it once and give me feedback.
which website did you submit to? |
GSoC website - https://summerofcode.withgoogle.com/proposals , under BRL-CAD organization |
Alright, sorry I was not aware of that, I will send an email now |
I've pushed the code for the menger sponge rotation test as well as the high quality sphere test. Just to clarify: by a high quality sphere, you meant a sphere with a very large number of circular segments, right? While testing, my local machine couldn't handle more than 6000 circular segments; would that be enough? Similarly, for checking lots of rotation values on the menger sponge, the time taken per iteration is not very feasible on my machine. Is there a way I can run this on some external server faster than my machine, so as to test a higher quality sphere and more rotations? (Like how one sometimes uses Google Colab for ML tasks.)
I have a spare machine that I rarely use, it is not a high-end machine, just i5-13600 with 32GB of RAM, but I can add more RAM if needed. The problem though is that the department policy does not allow others to ssh into it (and it is behind a firewall, I have no proxy server to bridge that), but you can send me the script to run. And can you complete the CGAL perf test? I think it is precise enough when we consider 32 bit floating points. |
Yeah, I'm able to run the CGAL perf test (the number of circular segments was 4096 for that test); I can run up to 6000. Regardless, your spare machine has higher specifications than mine, so I can maybe increase the count to 10000 and then ask you to run it once. For the menger sponge, I'll modify the test code and ask you to run it, since your machine can run it faster. Also, since we can execute bash commands in Colab, like cloning a repository, maybe I can get our code to run there as well. I will try that and get back to you.
With the spheres specifically, it may be more helpful to look at vertex count, so you can tell how many points are being dropped. The surface area and volume calculations may be getting some rounding error since it's adding lots of small values. |
Note that CGAL probably uses precise arithmetic, while we only use floating point. When things differ by orders of magnitude, the floating point precision issue is more apparent.
btw, thinking about it: is this really using the TPU, or does the TPU configuration just have more RAM? I know nothing about TPUs; I thought they were just accelerators that require special kernels to work, similar to GPU programming.
I'm not sure, the TPU configuration has significantly more RAM, and it did end up using around 50GB of RAM, which wasn't there in either the GPU or CPU configuration. So it probably doesn't use the TPU, just the extra RAM. |
…ripts for statistical analysis
Note: Hull2 is the VHACD implementation, and Hull is our current implementation.
1750623.stl (unnormalized values): [table in original comment]
39202.stl (unnormalized values): [table in original comment]
These are two cases where the volume and area outputs of our current implementation differ from the VHACD and CGAL outputs (which agree with each other). Hull3 and Hull4 error out on these hulls (although that is a fault of those implementations). I identified these cases while looking for volume and area errors above 1%; there are more occurrences when checking at 0.1%, but I think going through these cases could be a good starting point for identifying why there's a difference.
The 5 segfault cases that occurred in all algorithms were due to corrupted files or files with an incorrect format; I verified that the error was in the ImportMesh function call, so I think we can safely ignore them in further analysis. Also, for the cases where our algorithm warned about dropping horizon edges, I checked the outputs and they matched the other algorithms, so the outputs are accurate even when warnings are present. For reference, here are the normalized starter statistics for the algorithms:
Mainly for the time metric, we discussed that it's helpful to omit cases with very small times, to more accurately identify the differences between algorithms, so I've filtered out times less than 1 ms (0.001 seconds).
From these results it's clear that Hull3 and Hull4 are not suitable alternatives. Despite Hull4 showing a lot of "success" counts, it actually returns an empty hull when it can't compute the output, so its volume and area results contain many zeroes, ultimately skewing the mean volume and area metrics. Despite several attempts, I've not been able to fix this, so I think it's safe to drop Hull4 from our list of options. Hull2 is certainly slower but quite accurate, so it is still a suitable alternative, especially considering the volume and area discrepancies in the cases from the prior comment #781 (comment). With a basic idea of how to compare the algorithms, I now plan to integrate the other implementations we discussed and run tests, so that we have a larger set of algorithms to compare.
Keeping this in mind, we discussed how to simplify the problem to identify the issue. As suggested by @pca006132, I plan to experiment with a sort of binary search where I remove half the points at random and check the volume and area, to identify the set of points that could have caused the issue so we can evaluate it better. I also plan to subtract the two convex hulls and look at the output, to see if there's a clear visual indicator of the issue. @elalish, these are a couple of ideas we had for narrowing down the issue; if there are other ideas you think we should look into, I will try to test them as well. PS: @pca006132, were you able to find anything about the epsilon value you mentioned earlier?
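The randomized halving idea can be sketched as a small test-case reducer. Everything here is hypothetical: `ReducePoints` and the `stillBad` predicate (standing in for "the two hulls still disagree on volume/area") are made-up names, not Manifold API:

```cpp
#include <algorithm>
#include <cassert>
#include <random>
#include <vector>

// Repeatedly keep a random half of the points while the caller-supplied
// predicate keeps returning true; the returned set still triggers the
// failure but is (usually) much smaller.
template <typename Point, typename Pred>
std::vector<Point> ReducePoints(std::vector<Point> pts, Pred stillBad,
                                int maxRounds = 100) {
  std::mt19937 rng(42);  // fixed seed keeps reductions reproducible
  for (int round = 0; round < maxRounds && pts.size() > 4; ++round) {
    std::vector<Point> half = pts;
    std::shuffle(half.begin(), half.end(), rng);
    half.resize((half.size() + 1) / 2);
    if (stillBad(half)) pts = std::move(half);  // keep the smaller failing set
  }
  return pts;
}
```

Since a replacement is accepted only when the predicate still fails, the invariant that the result reproduces the problem is preserved by construction.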
No, I haven't had time to do that yet. For randomly removing points to reduce the test case, it was about the warning from quickhull that says it failed to resolve horizon edges.
Excellent work! So it looks like our current hull is the fastest, but may have some significant accuracy problems. I very much like the idea of taking the difference of the two hulls - save it to a GLB and link it here. I think that will tell us a lot quickly.
…ats to keep a filter on time
Sorry for the delay; I've added the GLB files here. [Note: hull1 refers to our current implementation, hull2 refers to the VHACD implementation, and hull1_hull2 is the file obtained by subtracting hull2 from hull1, and similarly for hull2_hull1.]
Great, do you see where the problem is occurring? Would be nice to put some renders up here, perhaps with a colored overlay, so we can see where the difference is.
As discussed, after testing the other algorithms, we have decided to continue with our current implementation and the VHACD implementation. (Hull5 and Hull6 didn't give satisfactory results in our tests; for Hull7 (https://github.com/melax/sandbox/blob/master/include/hull.h) the time complexity wasn't feasible in theory; and Hull8 (https://github.com/andoresu47/ConvexHull3D) had constraints, like requiring that no 3 points are colinear, which are not feasible.) As for the cases that gave different volumes and surface areas between our implementation and the VHACD implementation, here are the important observations. 39202.stl: from these images it's clear that the excess volume and area come from the piece contained inside the object; this extra piece is produced by our current implementation and is clearly something that should not be present. The VHACD implementation does not have this issue. Similarly for 1750623.stl: the extra piece is contained inside the object, and it is also produced by our implementation but not by VHACD. I hope these renders highlight the problem clearly. I plan to gradually reduce the number of points that reproduce this issue, to investigate what caused it. On a side note, for the "Failed to solve horizon edge" warning in our current algorithm: I've added two .bin files containing the points that trigger it, along with their GLB files, and I'm attaching photos of the hulls rendered in Blender as well. I don't think these renders help much in identifying the cause, so I plan to run the code under gdb to better identify what causes the warning and whether it actually affects the output (by comparing against the VHACD implementation and CGAL).
points_1411702.bin: Also, I was thinking I should post the same example for the volume discrepancy as an issue on the repo of our current implementation, to get some ideas on what could cause it, while I work on reducing the number of points. Does that sound alright?
Nice figures. Yeah, you should open an issue upstream and ping us. For debugging, you can probably try adding some print statements to print the edge/point IDs the algorithm is processing, as well as a list of neighbors (or something similar in their algorithm). This would allow you to highlight specific edges/points in the mesh for visualization, and that should help with debugging. Think of sorting algorithm visualizations; you probably want something similar but with a mesh. I am not sure that directly using GDB to debug this kind of algorithmic issue will help you much.
Yeah, so the bug is that our quickhull implementation is generating a very non-convex shape for this convex hull operation. Good find! Does it have any interior verts? It kind of looks like it only generated incorrect triangles. Any idea where the algorithm is going off the rails? Have you filed this as a bug upstream yet? We should try to get their opinion sooner rather than later.
After some testing and a bit of brute force, I was able to reduce the number of points for our problematic volume down to 18, which will make testing significantly easier. The points are added to the hull in each algorithm in this order:
Hull1 point base: -18.2869 31.6738 2.175
Hull2 point base: -17.6332 -17.342 89.9628
The first 5 points forming the hull are the same for both algorithms, so we can safely check the points after them. I have also highlighted the two differing points (these points lead to the excess volume); these 5 points are coplanar in our case. I will open the upstream issue tonight after our discussion; I wanted to reduce the test case to a smaller size and have some basic idea before posting, hence the delay.
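The coplanarity observation can be checked numerically with a signed point-to-plane distance; a value near zero (within an epsilon) for each candidate point means the points are effectively coplanar, and the sign says which side of the plane a point lies on. A minimal sketch with illustrative names:

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3d = std::array<double, 3>;

// Signed distance from p to the plane through a, b, c (normal from the
// cross product of two edge vectors, normalized to unit length).
double PlaneDistance(const Vec3d& a, const Vec3d& b, const Vec3d& c,
                     const Vec3d& p) {
  auto sub = [](const Vec3d& u, const Vec3d& v) {
    return Vec3d{u[0] - v[0], u[1] - v[1], u[2] - v[2]};
  };
  Vec3d ab = sub(b, a), ac = sub(c, a), ap = sub(p, a);
  Vec3d n = {ab[1] * ac[2] - ab[2] * ac[1],
             ab[2] * ac[0] - ab[0] * ac[2],
             ab[0] * ac[1] - ab[1] * ac[0]};
  double len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
  return (n[0] * ap[0] + n[1] * ap[1] + n[2] * ap[2]) / len;
}
```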
Note: Hull1 is our current implementation, Hull2 is the VHACD implementation.
I've created the issue upstream (akuukka/quickhull#23) and will now start working on a check for whether the intermediate output of the algorithm is a convex hull. I will create a pull request upstream when I've made some progress.
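One way such a convexity check could look: a triangle mesh is convex exactly when no vertex lies strictly outside the supporting plane of any face (within an epsilon, since the library uses inexact geometry). This is a generic sketch with assumed types, not the quickhull library's internal representation:

```cpp
#include <array>
#include <cassert>
#include <cmath>
#include <vector>

using V3 = std::array<double, 3>;
struct Face { int a, b, c; };  // vertex indices, CCW when seen from outside

// Returns true if no vertex is more than eps outside any face plane.
bool IsConvex(const std::vector<V3>& verts, const std::vector<Face>& faces,
              double eps = 1e-9) {
  for (const Face& f : faces) {
    const V3 &a = verts[f.a], &b = verts[f.b], &c = verts[f.c];
    V3 ab = {b[0] - a[0], b[1] - a[1], b[2] - a[2]};
    V3 ac = {c[0] - a[0], c[1] - a[1], c[2] - a[2]};
    V3 n = {ab[1] * ac[2] - ab[2] * ac[1],
            ab[2] * ac[0] - ab[0] * ac[2],
            ab[0] * ac[1] - ab[1] * ac[0]};  // outward (unnormalized) normal
    double len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
    if (len == 0) continue;  // skip degenerate faces
    for (const V3& p : verts) {
      double d = (n[0] * (p[0] - a[0]) + n[1] * (p[1] - a[1]) +
                  n[2] * (p[2] - a[2])) / len;
      if (d > eps) return false;  // p lies outside this face's plane
    }
  }
  return true;
}
```

Run over the intermediate hull after each point insertion, this would catch the non-convex output from the 18-point case the moment it first appears.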
For issue #89 (Optimize Convex Hull Algorithm), I have added the karimnaaji implementation (https://github.com/karimnaaji/3d-quickhull/tree/master). The VHACD implementation from zalo's reference is also added, though I added that mainly as an attempt to get used to the codebase. I have written comments about where the algorithm has issues with larger hulls; there are segfaults that occur in functions of the header files, and I have tried to identify the faulty functions and noted them in comments as well.