Question about the training time on DTU #4
Hello,

First of all, thanks for your excellent work and code release!

When I was trying to repeat the experiments with the given docker environment on my local workstation (a single RTX 3090), I noticed that the training process (200k iterations, i.e. 20w, with the default hyper-parameters) on a single DTU scan (dtu_scan24) took about 90 minutes to complete, which is much longer than the training time reported in the conclusion of the paper ($\approx$ 30 mins). I'm curious whether this is normal. Is it because the default parameters in the code are not the same as the parameters you used to measure the training time, or is there some other reason? I'd appreciate an answer.

Comments

Hi there! Thanks a lot for mentioning this issue! 90 minutes is definitely way too long and indicates an issue somewhere. There are a couple of things that may have contributed to the long training time.

I hope no further performance regressions have crept in due to the refactoring, so I will keep the issue open until I double-check everything and also merge the async branch. One tangential point: you can get significantly faster training by compressing the schedule using …