Memory cost for training #21
Comments
Hi @FrontierBreaker, training on Replica sequences with images downsampled to (340, 600) should take about 3 GB of GPU memory. You can change the resolution here.
Thank you for your rapid reply! So, how about training with the original-resolution images? Also, are the results in the main paper produced from the original-resolution images on Replica? Thank you!
2-3 GB of memory is for half resolution. The original resolution (680, 1200) takes around 9 GB of GPU memory. In the paper, we indicate which results use full resolution vs. half resolution.
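The resolution numbers above can be sanity-checked with simple arithmetic. This is a back-of-envelope sketch, not SplaTAM's actual memory accounting: the 3 GB and 9 GB figures come from the thread, and the point is that halving each image dimension quarters the pixel count, while total memory grows less than 4x because of fixed overheads (model parameters, CUDA context, and so on).

```python
# Illustrative pixel-count comparison for the two resolutions in this thread.
full_res = (680, 1200)   # Replica original resolution (H, W)
half_res = (340, 600)    # downsampled resolution reported at ~3 GB

full_pixels = full_res[0] * full_res[1]
half_pixels = half_res[0] * half_res[1]

# Halving both dimensions quarters the per-frame pixel count.
ratio = full_pixels / half_pixels
print(ratio)  # 4.0

# Reported memory grows ~3x (3 GB -> 9 GB), i.e. less than 4x,
# consistent with some resolution-independent overhead.
reported_memory_ratio = 9 / 3
print(reported_memory_ratio < ratio)  # True
```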
Closing this for now. Feel free to reopen it in case of any discrepancies.
Hi, thanks for your interest in our work. An additional comment regarding the GPU memory requirement: we store the keyframes on the GPU to prevent data-transfer (CPU to GPU) and data-read overhead during map optimization using overlapping-view keyframes. Therefore, GPU memory usage could be reduced with further optimizations to the code.
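To make the keyframe-residency cost concrete, here is a rough estimate of what a GPU-resident RGB-D keyframe buffer occupies. The channel layout (RGB + depth as float32) and keyframe count are illustrative assumptions for this sketch, not SplaTAM's exact storage format:

```python
def keyframe_buffer_bytes(n_keyframes, height, width,
                          channels=4,         # assumed: RGB (3) + depth (1)
                          bytes_per_value=4): # assumed: float32 storage
    """Rough size of keeping n RGB-D keyframes resident on the GPU."""
    return n_keyframes * height * width * channels * bytes_per_value

# e.g. 100 hypothetical full-resolution (680x1200) keyframes:
size = keyframe_buffer_bytes(100, 680, 1200)
print(f"{size / (1024 ** 3):.2f} GiB")  # ~1.22 GiB
```

This is why offloading keyframes to CPU memory (at the cost of transfer latency during optimization) is one of the code-level optimizations hinted at above.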
Hi, thanks for the wonderful work! Something strange happened in my experiment. I only changed the dataset from Replica to my custom dataset with only 500 images (shape 680x1200), but memory keeps increasing during the experiment to more than 20 GB. It is only 9 GB when I run on the Replica dataset. Looking forward to your reply!
Hi @JayKarhade @Nik-V9 @jywu511, I believe the issue at hand is related to the adaptive Gaussian kernel expansion mechanism. In my recent investigation into the robustness of current SLAM models (https://github.com/Xiaohao-Xu/SLAM-under-Perturbation), I found that as scene complexity increases (for example, with more perturbations and objects), more Gaussian kernels must be added to SplaTAM to maintain high-quality reconstruction, due to its explicit modeling of the scene. Although SplaTAM achieves SoTA performance on standard SLAM datasets, there still appears to be a gap to close for real-world applications.
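The link between densification and memory growth can be sketched numerically. The per-Gaussian parameter layout below (position, rotation quaternion, scale, opacity, RGB color) is a common 3D Gaussian Splatting parametrization used here as an assumption, not necessarily SplaTAM's exact one, and it counts only parameter storage, not gradients or optimizer state:

```python
# Hypothetical per-Gaussian budget: position (3) + quaternion (4)
# + scale (3) + opacity (1) + RGB (3) = 14 float32 values.
PARAMS_PER_GAUSSIAN = 14
BYTES_PER_PARAM = 4  # float32

def gaussian_memory_mib(num_gaussians):
    """Parameter-only memory for a given number of Gaussian kernels."""
    return num_gaussians * PARAMS_PER_GAUSSIAN * BYTES_PER_PARAM / (1024 ** 2)

# Densification on a cluttered scene multiplies the kernel count:
for n in (1_000_000, 5_000_000, 20_000_000):
    print(f"{n:>11,} Gaussians -> {gaussian_memory_mib(n):7.0f} MiB")
```

With gradients plus Adam's two moment buffers, peak usage during optimization is roughly 3-4x the parameter storage, so an unbounded kernel count on a complex custom scene can plausibly explain the growth from 9 GB to 20+ GB reported above.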
Very cool work; thanks for sharing & testing SplaTAM in this setup!
Hello, I appreciate your outstanding work. I would like to inquire about the GPU memory requirements associated with training/SLAM. Specifically, I'm interested in understanding the amount of memory needed for conducting experiments on the Replica dataset. Thank you!