Table 1 in your paper says the training time is < 12 hours per network, but it takes about 19 hours when I train the UNDC float network with point cloud input (GPU: RTX A6000). Also, the batch size must be 1 in the data loader, and I would like to know why, since the data loader and network could be changed to process multiple shapes in one batch.
Please change the title to a summary of the issue.
12 hours is for training on SDF inputs, since that table is comparing NMC and NDC. It takes longer to train on point cloud inputs.
The batch size is 1 because each shape may have a different size, e.g., one shape is 4x4x4 and another is 3x4x5, so you cannot put them in the same batch. Of course, you can use some tricks to fit more shapes into a batch; I just used one shape per batch for simplicity.
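One such trick (not part of the released code, just a sketch of the general idea) is a custom `collate_fn` that pads each variable-size grid up to the largest size in the batch before stacking:

```python
import torch
import torch.nn.functional as F

def pad_collate(batch):
    """Hypothetical collate_fn: pad variable-size 3D grids
    (e.g., 4x4x4 and 3x4x5) to the max extent per dimension
    so they can be stacked into a single batch tensor."""
    max_dims = [max(t.shape[d] for t in batch) for d in range(3)]
    padded = []
    for t in batch:
        pad = []
        # F.pad takes pad amounts in reverse dimension order
        for d in reversed(range(3)):
            pad.extend([0, max_dims[d] - t.shape[d]])
        padded.append(F.pad(t, pad))
    return torch.stack(padded)

grids = [torch.ones(4, 4, 4), torch.ones(3, 4, 5)]
batch = pad_collate(grids)
print(batch.shape)  # torch.Size([2, 4, 4, 5])
```

Note that the loss would then need masking so the padded regions do not contribute to the gradients, which is part of why batch size 1 is simpler.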
hmax233 changed the title from "Thanks for your great work!" to "About training time and batch size" on Nov 5, 2022.