
As the epoch increased, so did the GPU memory #4

Closed
clytze0216 opened this issue Jan 20, 2021 · 1 comment

clytze0216 commented Jan 20, 2021

Hi,
Thanks for your great work!
When fine-tuning on VQA, I ran into the following problem:
As the epochs increase, so does the GPU memory usage; eventually it exceeds the GPU's maximum memory and training stops with an out-of-memory error.

Also, when training with multiple GPUs, GPU 0 uses more memory than any of the others.

This problem has been bothering me for a long time. Do you know what might be causing it?

Thanks for your reply~:)
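
For context, one common cause of this kind of epoch-over-epoch memory growth in PyTorch training loops (a general sketch, not taken from this repository's code) is keeping the computation graph alive by accumulating loss tensors instead of their detached values:

```python
import torch
import torch.nn as nn

# Minimal sketch of the leak pattern; the model, data, and sizes here are
# hypothetical placeholders, not the VQA fine-tuning code from this repo.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(128, 2).to(device)
optimizer = torch.optim.Adam(model.parameters())
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    running_loss = 0.0
    for step in range(100):
        x = torch.randn(32, 128, device=device)
        y = torch.randint(0, 2, (32,), device=device)
        loss = criterion(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Leak: `running_loss += loss` would keep every step's graph alive,
        # so memory grows with each epoch. Using .item() (or .detach()) frees it.
        running_loss += loss.item()
    print(f"epoch {epoch}: mean loss {running_loss / 100:.4f}")
```

As for GPU 0 using more memory than the others: that imbalance is typical of `torch.nn.DataParallel`, which gathers outputs and computes the loss on the default device; `torch.nn.parallel.DistributedDataParallel` usually balances memory more evenly.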

@zhegan27
Owner

@clytze0216 Thanks for your interest in our work. I have not met this problem before. When you turn off adversarial training, i.e., perform standard training, do you still see the same problem? This can help us identify where the problem is.

Best,
Zhe
