
About training time and batchsize #9

Closed
hmax233 opened this issue Nov 5, 2022 · 2 comments

Comments

hmax233 commented Nov 5, 2022

Table 1 in your paper says the training time is < 12 hours per network, but it takes about 19 hours when I train the UNDC float network with point cloud input (on an RTX A6000 GPU). Also, the batch size must be 1 in the data loader, and I want to know why, since the data loader and network could actually be changed to process multiple shapes in one batch.

czq142857 (Owner) commented:

Please change the title to a summary of the issue.

12 hours is for training on SDF inputs, since that table is comparing NMC and NDC. It takes longer to train on point cloud inputs.

The batch size is 1 because each shape may have a different size, e.g., one shape is 4x4x4 and another 3x4x5, so you cannot put them in the same batch. Of course, you can do some tricks to have more shapes in a batch. I just used one shape per batch for simplicity.
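
(For readers wondering what such a trick might look like: below is a minimal, hypothetical PyTorch sketch, not this repository's actual code. It shows a custom `collate_fn` that pads each grid in a batch to the largest size and returns a boolean mask so padded voxels can be excluded from the loss. The names `pad_collate` and `dataset` are illustrative assumptions.)

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader

def pad_collate(grids):
    # Hypothetical collate_fn: pad variable-size voxel grids of shape
    # (C, X, Y, Z) to the largest X/Y/Z in the batch so they can be stacked.
    max_x = max(g.shape[1] for g in grids)
    max_y = max(g.shape[2] for g in grids)
    max_z = max(g.shape[3] for g in grids)
    padded, masks = [], []
    for g in grids:
        # F.pad orders padding from the last dim backwards:
        # (z_before, z_after, y_before, y_after, x_before, x_after)
        pad = (0, max_z - g.shape[3],
               0, max_y - g.shape[2],
               0, max_x - g.shape[1])
        padded.append(F.pad(g, pad))
        mask = torch.zeros(max_x, max_y, max_z, dtype=torch.bool)
        mask[:g.shape[1], :g.shape[2], :g.shape[3]] = True  # real voxels
        masks.append(mask)
    return torch.stack(padded), torch.stack(masks)

# Usage sketch (assumes `dataset` yields one (C, X, Y, Z) grid per item):
# loader = DataLoader(dataset, batch_size=4, collate_fn=pad_collate)
# for batch, mask in loader:
#     per-voxel losses would then be multiplied by `mask` before reduction.
```

The trade-off is that padding wastes compute on the padded voxels and the loss must mask them out, which is presumably why one shape per batch was the simpler choice.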

hmax233 changed the title from "Thanks for your great work!" to "About training time and batchsize" on Nov 5, 2022
hmax233 (Author) commented Nov 5, 2022

Thanks for your reply!

hmax233 closed this as completed Nov 5, 2022