How to evaluate the model memory efficiently? #52
Comments
This is not going to be a full solution, but I have gotten CodeGen-16B-multi to work on an A6000/48GB. The script we used to pull it off is here: https://github.com/nuprl/MultiPL-E/blob/main/inference/codegen.py Note the crazy code for the stopping criteria; IIRC it was necessary to get things to work.
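For illustration, the stopping logic referred to above usually boils down to truncating a completion at the first occurrence of any stop sequence. The sketch below is hypothetical (the stop sequences and function name are illustrative, not copied from codegen.py):

```python
# Illustrative stop sequences; the actual set used by MultiPL-E may differ.
STOP_SEQUENCES = ["\ndef", "\nclass", "\nif __name__"]

def truncate_at_stop(completion: str, stop_sequences=STOP_SEQUENCES) -> str:
    """Cut a generated completion at the earliest stop sequence, if any."""
    cut = len(completion)
    for stop in stop_sequences:
        idx = completion.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return completion[:cut]

print(truncate_at_stop("    return x\ndef other():"))  # keeps only "    return x"
```

Doing this as post-processing is simpler than a custom `StoppingCriteria`, but the latter saves generation time by halting decoding early instead of generating to the max length and trimming afterwards.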
Can you make sure that FP16 is set, and follow the memory consumption up until …
@loubnabnl I set fp16 in the …
@Godofnothing we found a bug which made the memory consumption higher than necessary. Can you try running the evaluation with the code from this PR #61? You now need to specify …
Closing this issue, as I tried loading CodeGen-16B in mixed precision and it fits in under 40GB of GPU memory.
Sorry for the long delay. I've pulled the latest version of the code, and the model successfully fits onto 40GB. Thanks for your help and responses.
Thanks for the great work and convenient benchmarking tool!
I would like to evaluate the `CodeGen-16B` model on the `humaneval` benchmark. At my disposal are A6000 GPUs with 48GB of memory each. The evaluation script crashes due to CUDA out of memory here (i.e. `accelerator.prepare`) even with the smallest batch size of 1. Since this is model evaluation, I would expect most of the memory to be occupied by the model parameters (no optimizer states).

Naively, this model should fit onto a single GPU if loaded in half precision, since 2 × 16 = 32 < 48. However, even when setting mixed precision to `fp16` in `accelerate launch`, I still face an OOM error. What measures would you suggest to fit the model onto a single GPU?
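The back-of-the-envelope arithmetic above can be sketched as a small helper. This estimates the weights alone and deliberately ignores activations, the KV cache, the CUDA context, and allocator fragmentation, which is why real usage runs noticeably higher:

```python
def model_weight_gb(num_params: float, bytes_per_param: int) -> float:
    """Rough memory footprint of the model weights alone, in GB.

    Ignores activations, KV cache, CUDA context, and fragmentation,
    so actual peak usage will exceed this estimate.
    """
    return num_params * bytes_per_param / 1e9

# CodeGen-16B has roughly 16e9 parameters.
fp32 = model_weight_gb(16e9, 4)  # 64.0 GB -> cannot fit on a 48GB A6000
fp16 = model_weight_gb(16e9, 2)  # 32.0 GB -> fits, leaving room for activations
```

This also hints at the failure mode: if the checkpoint is first materialized in fp32 before being cast to fp16 (as `accelerate`'s mixed-precision setting alone can do, since it does not change the dtype the weights are loaded in), the peak footprint is the fp32 figure, which already exceeds 48GB.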