minimum GPU memory required for running vision_transformer #17

Open
amiltonwong opened this issue Nov 3, 2020 · 1 comment

Comments

@amiltonwong

Hi, authors,

What is the minimum GPU memory required for running vision_transformer during inference and training, respectively?

@andsteing
Collaborator

The flag settings in the README were tested on a host with 8 V100 GPUs attached, i.e. 16 GB of memory per GPU. If you have fewer GPUs or less memory, you can adapt the --accum_steps flag accordingly (e.g. --accum_steps=$((8*16)) if you have a single GPU with 16 GB). For inference you'll need less memory, so you can try decreasing --accum_steps until you run out of memory. Note that you can also change the --batch_size flag, but for training you will then also need to find a new value for the learning rate.
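The scaling logic above can be sketched as a small helper: gradient accumulation splits the effective batch into micro-batches, so halving the total GPU memory available roughly doubles the number of accumulation steps needed. This is a minimal sketch, assuming accumulation steps scale inversely with total memory; the function name and the baseline value of 16 accumulation steps are hypothetical, not taken from the README.

```python
import math

def scaled_accum_steps(base_accum_steps, base_gpus, base_mem_gb, gpus, mem_gb):
    """Scale gradient-accumulation steps for a smaller setup.

    Assumes the micro-batch that fits in memory shrinks in proportion
    to total GPU memory, so accum_steps must grow by the same factor
    to keep the effective batch size (and learning rate) unchanged.
    """
    base_capacity = base_gpus * base_mem_gb  # reference total memory
    my_capacity = gpus * mem_gb              # your total memory
    return math.ceil(base_accum_steps * base_capacity / my_capacity)

# Reference setup from the comment: 8 V100 GPUs with 16 GB each.
# A hypothetical baseline of 16 accumulation steps on that host
# becomes 8*16 = 128 steps on a single 16 GB GPU, matching the
# --accum_steps=$((8*16)) example above.
print(scaled_accum_steps(16, base_gpus=8, base_mem_gb=16, gpus=1, mem_gb=16))
```

With 4 GPUs of 16 GB the same formula gives 32 steps, since only a quarter of the reference memory is missing per step, not seven eighths.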
