Training on NTU 60 XSUB #10
Comments
I modified the dataloader to sample only one sequence per video, and here is part of the training log.
Hi, I think the training time is reasonable. One reason is that we use all frames with all combinations, which is a common bottleneck in video processing. Selecting a fixed frame combination for each video can significantly reduce training time, but may slightly affect accuracy. Another reason is that we did not implement PSTConv in a parallel way like the standard convolutions in torch.nn; point tubes are constructed sequentially over time. Best
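The fixed-frame-combination idea mentioned above could be sketched as follows. This is a hypothetical helper, not the repository's actual dataloader code: the function name `fixed_frame_indices`, the seeding scheme, and the clip length are all illustrative assumptions. Seeding a per-video RNG with the video id means every epoch reuses the same clip, which trades a little accuracy for much faster training.

```python
import random

def fixed_frame_indices(video_id: str, num_frames: int, clip_len: int = 23):
    """Pick one fixed frame combination per video (illustrative sketch).

    Seeding the RNG with the video id makes the choice deterministic,
    so the same clip is reused every epoch instead of enumerating
    all frame combinations.
    """
    rng = random.Random(video_id)  # per-video seed -> reproducible choice
    if num_frames <= clip_len:
        # pad short videos by repeating the last frame index
        idx = list(range(num_frames))
        idx += [num_frames - 1] * (clip_len - num_frames)
        return idx
    # otherwise sample one contiguous window of clip_len frames
    start = rng.randrange(num_frames - clip_len + 1)
    return list(range(start, start + clip_len))
```

A dataloader's `__getitem__` could call this once per video instead of iterating over every possible starting frame, shrinking each epoch to one clip per video.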
Thanks for your quick reply; that resolves my question.
Thanks for your awesome work and generous code sharing.
Recently, I have tried to reproduce the results on NTU 60 XSUB benchmark with one 3090 GPU.
However, training seems really slow: it would take more than a week to train 20 epochs with the original settings.
I have tried enlarging the batch size, but it is still quite slow.
So, what is the reasonable training time for NTU 60 dataset?
Is there anything I have missed?
Looking forward to your reply.
Thanks