
CUDA out of memory #42

Open
pppwzj opened this issue Nov 5, 2022 · 5 comments

Comments

@pppwzj

pppwzj commented Nov 5, 2022

Your team has done an excellent job. When I train on four NVIDIA RTX 2080 GPUs with batch_size set to the minimum of 4, the run always fails with 'CUDA out of memory'. Are there any parameters in the model I can reduce to solve this problem? Thank you very much.

@pppwzj pppwzj changed the title CUDA of memory CUDA out of memory Nov 5, 2022
@lkeab
Collaborator

lkeab commented Nov 5, 2022

During training, you can reduce the parameter here and here to save memory.
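As a rough back-of-envelope (all constants below are hypothetical, not from this repo), activation memory tends to scale roughly linearly with both the batch size and a per-image instance/proposal limit, which is why reducing either helps:

```python
# Rough back-of-envelope: activation memory scales roughly linearly with
# batch size and with a per-image proposal/instance limit, so lowering
# either shrinks that part of the footprint. All numbers are illustrative.

def activation_mem_gb(batch_size, limit, per_instance_mb=40.0, base_gb=3.0):
    """Estimate peak activation memory in GB under hypothetical constants."""
    return base_gb + batch_size * limit * per_instance_mb / 1024.0

full = activation_mem_gb(batch_size=4, limit=30)     # ~7.7 GB
reduced = activation_mem_gb(batch_size=4, limit=10)  # ~4.6 GB
print(f"limit=30: {full:.1f} GB, limit=10: {reduced:.1f} GB")
```

The point of the sketch is only the scaling behavior: the fixed `base_gb` part (weights, optimizer state) does not shrink when the limit is lowered, so very small GPUs can still run out of memory.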

@pppwzj
Author

pppwzj commented Nov 10, 2022

> During training, you can reduce the parameter here and here to save memory.

How can I save memory during testing? Thank you.

@pppwzj
Author

pppwzj commented Nov 11, 2022

> During training, you can reduce the parameter here and here to save memory.

Could you tell me whether reducing the parameter LIMIT will have any effect on the model? Will it reduce the performance? Thank you.

@lkeab
Collaborator

lkeab commented Nov 15, 2022

The performance decrease is limited as long as you don't reduce it drastically. For inference, you can refer to here.
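One general way to cap peak memory at test time, independent of any repo-specific setting, is to run inference over small chunks instead of the whole batch. A minimal sketch, where `run_model` is a hypothetical stand-in for the real forward pass:

```python
# Sketch: cap peak memory at test time by feeding the model small chunks
# (down to one image at a time) instead of the full batch. `run_model` is
# a hypothetical placeholder for the repo's actual inference call.

def run_model(images):
    # Placeholder forward pass: returns one result per input image.
    return [img * 2 for img in images]

def chunked_inference(images, chunk_size=1):
    """Run inference so that only `chunk_size` inputs are resident at once."""
    results = []
    for i in range(0, len(images), chunk_size):
        results.extend(run_model(images[i:i + chunk_size]))
    return results

batch = [1, 2, 3, 4, 5]
assert chunked_inference(batch, chunk_size=2) == run_model(batch)
```

In PyTorch specifically, also wrapping the loop in `torch.no_grad()` avoids storing activations for backpropagation, which typically cuts inference memory substantially.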

@perp

perp commented Nov 29, 2022

I reduced both limits from 30 to 10, but I still get CUDA out of memory on a 2080 Ti (12G).
