
Growing memory #30

Closed
sjtuytc opened this issue Dec 23, 2021 · 3 comments


sjtuytc commented Dec 23, 2021

Hi, I found that when I use 8 2080Ti GPUs to train this model, the GPU memory occupation is around 6/8 at the initial stage, but the GPUs soon run out of memory. Do you have an explanation for this? And what is the suggested setup for training the MOTR model?

@dbofseuofhust (Collaborator)

> Hi, I found that when I use 8 2080Ti GPUs to train this model, the GPU memory occupation is around 6/8 at the initial stage, but the GPUs soon run out of memory. Do you have an explanation for this? And what is the suggested setup for training the MOTR model?

Hi~, thanks for your attention!
We train MOTR by gradually increasing the number of sampled frames per clip from 2 to 5, so you need GPUs with at least 24 GB of memory (e.g., P40 or V100).
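For readers hitting the same issue, here is a minimal sketch of how such a progressive frame schedule typically behaves. Only the 2 → 5 growth of `--sampler_lengths` comes from the reply above; the milestone epochs (`sampler_steps`) below are placeholders, not the repo's defaults:

```python
# Minimal sketch of a progressive clip-length schedule (not MOTR's exact code).
# The 2 -> 5 growth of sampler_lengths comes from the reply above; the
# milestone epochs in sampler_steps are placeholders, not the repo's defaults.

def frames_for_epoch(epoch, sampler_steps=(50, 90, 150), sampler_lengths=(2, 3, 4, 5)):
    """Return the number of frames sampled per training clip at a given epoch."""
    stage = sum(epoch >= step for step in sampler_steps)
    return sampler_lengths[stage]

# Each extra frame means another forward/backward pass kept in memory for the
# clip, so peak GPU memory grows at every milestone.
for epoch in (0, 60, 100, 160):
    print(epoch, frames_for_epoch(epoch))
```

This is why memory looks fine early in training and then runs out later: the clip length, and with it the peak memory, steps up at each milestone.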

@ZhangSongxi

> Hi, I found that when I use 8 2080Ti GPUs to train this model, the GPU memory occupation is around 6/8 at the initial stage, but the GPUs soon run out of memory. Do you have an explanation for this? And what is the suggested setup for training the MOTR model?
>
> Hi~, thanks for your attention! We train MOTR by gradually increasing the number of sampled frames per clip from 2 to 5, so you need GPUs with at least 24 GB of memory (e.g., P40 or V100).

Hi~, I use a 3090 to train this model, but when training reaches epoch 150, the GPU runs out of memory. How can I solve this problem?


quxu91 commented Apr 28, 2022

I suppose you can make the '--sampler_lengths' argument smaller, e.g., replace [2, 3, 4, 5] with [2, 3, 4, 4].
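A hedged illustration of that change, assuming `--sampler_lengths` is parsed as a list of integers (the argparse setup below is illustrative, not copied from the MOTR repo):

```python
# Illustrative only: assumes --sampler_lengths is parsed as a list of ints.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--sampler_lengths', type=int, nargs='+', default=[2, 3, 4, 5])

# Default schedule: the final training stage samples 5 frames per clip.
default_args = parser.parse_args([])
print(default_args.sampler_lengths)   # [2, 3, 4, 5]

# Suggested change: cap the final stage at 4 frames to lower peak GPU memory,
# at the cost of slightly less temporal context in the last stage.
reduced_args = parser.parse_args('--sampler_lengths 2 3 4 4'.split())
print(reduced_args.sampler_lengths)   # [2, 3, 4, 4]
```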
