Why offload the points and weights to CPU before DT? #7
Comments
Hi, in the current implementation, we first run non-differentiable DT (on CPU) on the weighted point set.
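The pattern being described can be sketched roughly as follows. This is a minimal illustration, not the actual DMesh code: `scipy.spatial.Delaunay` stands in for the CGAL weighted DT used in `cgalwdt.py`, and `triangulate_on_cpu` is a hypothetical helper name.

```python
import torch
from scipy.spatial import Delaunay  # stand-in for the CGAL weighted DT


def triangulate_on_cpu(points: torch.Tensor) -> torch.Tensor:
    """Offload points to CPU, run (non-differentiable) DT, return connectivity."""
    # The triangulation is purely combinatorial and produces no gradients,
    # so it is wrapped in no_grad and executed on the CPU.
    with torch.no_grad():
        pts_cpu = points.detach().cpu().numpy()   # GPU -> CPU copy
        simplices = Delaunay(pts_cpu).simplices   # integer connectivity
    # Move the integer connectivity back to the points' device; gradients
    # flow only through later, differentiable stages that consume it.
    return torch.as_tensor(simplices, dtype=torch.long, device=points.device)


pts = torch.rand(100, 3)
tets = triangulate_on_cpu(pts)
print(tets.shape)  # (num_tetrahedra, 4) for 3D input
```

Only the connectivity (which tetrahedra exist) comes out of this step; the differentiable parts of the pipeline then operate on the original GPU tensors.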
Got it, thanks for your nice reply! BTW, I don't know much about the implementation of DT. Is running DT on the CPU a large overhead in computation time compared with executing it on CUDA (if that is possible)?
Yes, unfortunately, the DT running on CPU is our computational bottleneck for large-scale point clouds (> 50K points). We also searched for a possible CUDA implementation of DT; there are several papers on it, but we could not find one suitable for our paper. So we assume we would need a whole new approach if we truly wanted to handle a very large point cloud (~1M points).
Thanks again for your nice reply! :)
dmesh/diffdt/cgalwdt.py
Lines 26 to 38 in 8a76623
I notice that the points and weights are offloaded to the CPU before Delaunay triangulation. Wasn't this process executed in CUDA? And why is the differentiable DT run under a `torch.no_grad()` context?