Question about the difference between PVCNN and SPVCNN #19
Hello, we actually didn't use sparse 3D convolution in PVCNN. Besides this apparent difference, another important difference between SPVCNN and PVCNN is that the voxelization / devoxelization operations are more costly for sparse tensors, so we don't want to have too many SPVConv layers. Best,
Hi @kentangSJTU, I had a quick follow-up on this discussion:
Do you mean that the voxelization/devoxelization operations for sparse tensors in the SPVConv layers are more time-consuming than those in PVConv (i.e. with dense 3D convolution)? Any further insight would be very helpful.
Hi @chaitjo, thanks for your interest. For dense tensors, we can directly infer a point's memory location from its coordinates (coordinate (x, y, z) corresponds to memory location x * y_max * z_max + y * z_max + z). For sparse tensors, however, there is no such property, and we have to rely on a hash-map query to obtain the memory location of a point given its coordinates. Therefore, voxelization and devoxelization can be slower. Best,
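To make the contrast concrete, here is a minimal Python sketch (with hypothetical helper names, not code from either repository) of the two lookup schemes described above: dense tensors locate a voxel by arithmetic alone, while sparse tensors need a hash map from coordinates to storage rows.

```python
def dense_offset(x, y, z, y_max, z_max):
    # Dense tensor: the memory location follows directly from the
    # coordinates, with no lookup table required.
    return x * y_max * z_max + y * z_max + z

def build_sparse_index(coords):
    # Sparse tensor: only occupied voxels are stored, so we need a hash
    # map from coordinates to the row where that voxel's features live.
    return {tuple(c): row for row, c in enumerate(coords)}

# Dense: constant-time arithmetic per point.
assert dense_offset(2, 1, 3, y_max=4, z_max=5) == 2 * 4 * 5 + 1 * 5 + 3

# Sparse: one hash-map query per point during (de)voxelization,
# which is what makes these operations slower in SPVConv layers.
index = build_sparse_index([(0, 0, 0), (2, 1, 3), (7, 7, 7)])
assert index[(2, 1, 3)] == 1
```

The per-point hash query is cheap individually, but it is repeated for every point at every voxelization/devoxelization step, which is why limiting the number of SPVConv layers matters.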
Thanks for the explanation!
Awesome performance!
If the NAS part is excluded, is the only addition in SPVCNN, compared to PVCNN, the sparse 3D convolution? I had thought you already used sparse 3D convolution in PVCNN when I read that paper some time ago. After all, you had already cited sparse 3D convolution and SECOND in PVCNN.
If the answer is yes, I deeply regret having missed this idea.