GPU out of memory when running infer_vos.sh/py #4

Closed
chyphen7 opened this issue Dec 23, 2021 · 1 comment

@chyphen7 commented Dec 23, 2021

I am trying to run bash ./launch/infer_vos.sh ytvos, but I am getting "GPU out of memory" errors. I tried reducing batch_size to 8, 4, 2, and 1, but still get the error. I have an NVIDIA K2000 with only 4 GB of GPU memory. Any suggestions/advice on how to get around the issue? Thanks.

@arnike (Collaborator) commented Jan 26, 2022

Hi, this implementation requires at least 12 GB of GPU memory. The memory bottleneck is the context frames: depending on the test set, up to 20 previous predictions may be accumulated in the context, and those have to stay in GPU memory. You could move some of the computation to the CPU, but that would dramatically slow down inference (and it is already quite slow on GPU).

Nikita
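To see why the accumulated context dominates memory, here is a back-of-envelope sketch. The frame resolution, channel count, and dtype below are illustrative assumptions (not taken from this repository); the point is only that ~20 full-resolution prediction tensors quickly add up to several gigabytes on their own, before counting model weights and activations.

```python
def context_memory_gb(num_frames, height=480, width=910,
                      channels=128, bytes_per_elem=4):
    """Rough GPU memory (GiB) needed to keep `num_frames` context
    predictions resident, assuming fp32 tensors of the given shape.
    All shape parameters are hypothetical, for illustration only."""
    total_bytes = num_frames * height * width * channels * bytes_per_elem
    return total_bytes / 1024**3

# With these assumed sizes, 20 accumulated context frames alone
# already exceed a 4 GB card's capacity.
print(f"{context_memory_gb(20):.2f} GiB")
```

On a 4 GB card like the K2000, this budget is exhausted by the context alone, which is why shrinking batch_size does not help: the context accumulation, not the batch, is what fills the memory.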

@arnike arnike closed this as completed Jan 26, 2022