
CUDA out of memory #6

Closed
Reagan1311 opened this issue Aug 7, 2022 · 2 comments
@Reagan1311

I have GPUs with 11 GB of memory each, and I get a CUDA out-of-memory error when I load more than three images (it happens while computing the ViT attention, at attn = (q @ k.transpose(-2, -1)) * self.scale).

I could increase the stride or decrease the load size, but that would also degrade performance.

Since the code processes only a single image at a time, is there a way to run the program across multiple GPUs?
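A back-of-the-envelope estimate shows why the attention map is the bottleneck. A minimal sketch, assuming hypothetical numbers (400-pixel images, 8-pixel patches, stride 4, 6 heads, fp32); the real model and config may differ:

```python
# Rough estimate of the attention-map memory for one ViT layer.
# All defaults below are illustrative assumptions, not the repo's config.
def attn_memory_gb(image_size=400, patch=8, stride=4, heads=6, dtype_bytes=4):
    # Number of patch tokens along one spatial dimension at this stride.
    tokens_per_dim = (image_size - patch) // stride + 1
    n = tokens_per_dim ** 2 + 1  # +1 for the CLS token
    # attn = q @ k.transpose(-2, -1) produces an (heads, n, n) matrix.
    return heads * n * n * dtype_bytes / 1024 ** 3

print(f"{attn_memory_gb():.2f} GB per image")  # ~2.15 GB
```

With these numbers a single image costs about 2 GB of attention maps alone, so holding more than three images on an 11 GB card runs out of memory. The quadratic n² term is also why a larger stride (fewer tokens) helps so much.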

@ShirAmir
Owner

What resolution are your images? The code should work fine on images of roughly 300–400 pixels per dimension.
There are no current plans to support multiple GPUs in this codebase. You could, however, split the code into two scripts: the first extracts features for a single image on the GPU and stores them; the second loads the features of different images and applies the applications to them using the CPU alone. This removes the need to hold many images on the GPU in parallel.
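The two-script split could be sketched as below. This is a minimal illustration of the pattern, not the repo's actual API: the random projection stands in for the ViT forward pass (which would run on the GPU), and all function names are hypothetical.

```python
import os
import tempfile
import numpy as np

# Stage 1 ("extract_features" script): run the heavy extractor on one
# image at a time and persist the result, so nothing stays on the GPU.
def extract_and_save(image, path):
    rng = np.random.default_rng(0)
    # Placeholder for model(image) -- a fixed random projection to 64-d.
    w = rng.standard_normal((image.shape[-1], 64)).astype(np.float32)
    feats = image @ w
    np.save(path, feats)  # saved to disk; the in-memory copy can be freed

# Stage 2 ("apply_application" script): CPU-only, loads saved descriptors
# for as many images as needed and applies the downstream computation.
def load_features(paths):
    return [np.load(p) for p in paths]

tmp = tempfile.mkdtemp()
img = np.ones((10, 128), dtype=np.float32)
path = os.path.join(tmp, "feats.npy")
extract_and_save(img, path)
feats = load_features([path])[0]
print(feats.shape)  # (10, 64)
```

With PyTorch the same pattern would use `torch.save(feats.cpu(), path)` after the forward pass and `torch.load(path, map_location="cpu")` in the second script, so only one image's activations ever occupy the GPU.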

@Reagan1311
Author

Thanks for the reply, I will close the issue now : )
