
Memory cost for training #21

Closed
FrontierBreaker opened this issue Dec 11, 2023 · 8 comments
@FrontierBreaker

Hello, I appreciate your outstanding work. I would like to inquire about the GPU memory requirements associated with training/SLAM. Specifically, I'm interested in understanding the amount of memory needed for conducting experiments on the Replica dataset. Thank you!

@JayKarhade
Collaborator

Hi @FrontierBreaker, training Replica sequences with images downsampled to (340, 600) should take about 3 GB of GPU memory.

You can change the resolution here:

https://github.com/spla-tam/SplaTAM/blob/bbaf5cc5754bf1034b33902007872c694e412a31/configs/replica/splatam.py#L51C31-L51C31
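For reference, a minimal sketch of what that change looks like. The key names `desired_image_height` / `desired_image_width` are assumptions for illustration; check the linked config line for the actual names used by SplaTAM:

```python
# Hypothetical sketch of a resolution override in configs/replica/splatam.py.
# The key names below are assumptions, not verified against the real config.
config = dict(
    desired_image_height=680 // 2,   # 340: half of Replica's native 680
    desired_image_width=1200 // 2,   # 600: half of Replica's native 1200
)
print(config["desired_image_height"], config["desired_image_width"])
```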

@FrontierBreaker
Author

FrontierBreaker commented Dec 11, 2023

Thank you for the rapid reply! What about training with images at the original resolution? Also, were the results in the main paper produced at the original resolution on Replica? Thank you!

@JayKarhade
Collaborator

The 2-3 GB figure is for half resolution. The original resolution (680, 1200) takes around 9 GB of GPU memory.

In the paper, we indicate which results use full resolution vs half resolution.
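As a rough sanity check on those numbers (a back-of-the-envelope sketch, not SplaTAM's actual memory model):

```python
# Half resolution has one quarter of the pixels of full resolution.
full_pixels = 680 * 1200
half_pixels = 340 * 600
pixel_ratio = full_pixels / half_pixels
print(pixel_ratio)  # 4.0

# Reported memory grows ~3x (3 GB -> 9 GB), i.e. sub-linearly in pixel
# count, which is consistent with fixed overheads such as model weights
# and the CUDA context being resolution-independent.
memory_ratio = 9 / 3
print(memory_ratio)  # 3.0
```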

@JayKarhade
Collaborator

Closing this for now. Feel free to reopen it in case of any discrepancies.

@Nik-V9
Contributor

Nik-V9 commented Dec 26, 2023

Hi, thanks for your interest in our work. An additional note regarding the GPU memory requirement:

We store the keyframes on the GPU to avoid CPU-to-GPU transfer and data-read overhead during map optimization with overlapping-view keyframes. GPU memory usage could therefore be reduced with further optimization of the code.
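The trade-off described above can be sketched in plain Python (a toy model; `Keyframe`, the device strings, and the cache class are purely illustrative, not SplaTAM's actual code):

```python
from dataclasses import dataclass

@dataclass
class Keyframe:
    frame_id: int
    device: str = "cuda"  # SplaTAM-style: keep keyframes resident on GPU

class KeyframeCache:
    """Toy cache contrasting GPU-resident keyframes (fast access, more GPU
    memory) with CPU-resident ones moved to GPU on demand (leaner, slower)."""

    def __init__(self, store_on_gpu: bool = True):
        self.store_on_gpu = store_on_gpu
        self.frames: list[Keyframe] = []

    def add(self, frame_id: int) -> None:
        device = "cuda" if self.store_on_gpu else "cpu"
        self.frames.append(Keyframe(frame_id, device))

    def fetch_for_optimization(self, frame_id: int) -> Keyframe:
        kf = self.frames[frame_id]
        if kf.device != "cuda":
            kf.device = "cuda"  # would be a costly CPU->GPU copy per access
        return kf
```

With `store_on_gpu=False`, each optimization step pays a transfer cost instead of holding every keyframe in GPU memory, which is the optimization direction hinted at above.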

@jywu511

jywu511 commented Feb 29, 2024

Hi, thanks for the wonderful work! Something strange happened in my experiment: I only changed the dataset from Replica to a custom dataset with 500 images (680x1200), yet memory keeps increasing during the run to more than 20 GB. It is only 9 GB when I run on Replica. Looking forward to your reply!
Best regards

@Xiaohao-Xu

Hi @JayKarhade @Nik-V9 @jywu511, I believe the issue at hand is related to the adaptive Gaussian kernel expansion mechanism. In my recent investigation on the robustness of current SLAM models (https://github.com/Xiaohao-Xu/SLAM-under-Perturbation), I have found that as the complexity of the scene increases (for example, with more perturbations and objects), it becomes necessary to add more Gaussian kernels to SplaTAM. This ensures a higher quality reconstruction due to its explicit modeling of the scene. Although SplaTAM performs well on standard SLAM datasets with SoTA performance, there still appears to be a gap that needs to be addressed when it comes to real-world applications.
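The growth mechanism described above can be illustrated with a toy densification loop (illustrative numbers and threshold only; this is not SplaTAM's actual densification logic):

```python
def densify(num_gaussians: int, per_pixel_error: float,
            threshold: float = 0.1, growth: float = 0.2) -> int:
    """Toy rule: while reconstruction error stays high, add more Gaussians."""
    if per_pixel_error > threshold:
        return int(num_gaussians * (1 + growth))
    return num_gaussians

# In a complex or perturbed scene the error stays above threshold for
# longer, so the map (and hence GPU memory) keeps growing.
n = 100_000
for error in [0.5, 0.4, 0.3, 0.2, 0.05]:  # error decays as the map improves
    n = densify(n, error)
print(n)  # 207360: grew on the first four steps only
```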

@Nik-V9
Contributor

Nik-V9 commented Mar 20, 2024

> @Xiaohao-Xu: I believe the issue at hand is related to the adaptive Gaussian kernel expansion mechanism. [...]

Very cool work; Thanks for sharing & testing SplaTAM in this setup!
