CUDA OOM when loading large models #99
Hi @Tianwei-She, thanks for using MII. It looks like you're seeing this problem because we try to load the model in fp32 onto each GPU before converting it to fp16 here. In general, MII is not the most efficient with GPU memory when running multi-GPU, because:

- the model is loaded in fp32 rather than the user-specified dtype, and
- a full copy of the model is loaded onto each GPU rather than being staged in system memory before it is partitioned.
Here's a PR to address some of these inefficiencies by loading with the user-specified dtype and allowing the user to use system memory to load the model before distributing it across GPUs. Please give #105 a try and let me know if that fixes your problem. The script you shared should work with these changes; if it doesn't, try enabling the new system-memory loading option. Note: unfortunately, we will still need to load the entire model before it can be partitioned across the GPUs.
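For concreteness, the suggestion above corresponds to a deployment along these lines. This is a minimal sketch assuming the MII v0.0.x `mii.deploy` API; `dtype`, `load_with_sys_mem`, and `tensor_parallel` are the config keys discussed in this thread and in #105, so double-check the exact names against the merged PR.

```python
import mii

# Minimal sketch, assuming the MII v0.0.x mii.deploy API.
# dtype / load_with_sys_mem are the options referenced above (added in #105);
# verify the exact key names against the merged PR.
mii.deploy(
    task="text-generation",
    model="bigscience/bloom-7b1",
    deployment_name="bloom7b1_deployment",
    mii_config={
        "dtype": "fp16",            # load directly in half precision instead of fp32
        "load_with_sys_mem": True,  # stage weights in system RAM before moving them to GPUs
        "tensor_parallel": 8,       # split the model across the available GPUs
    },
)
```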
Closing due to inactivity. #105 has been merged; please reopen if you are seeing the same error with the latest DeepSpeed-MII.
@mrwyattii Same error in v0.0.4.
@wangshankun what kind of GPU are you trying to run on? The OPT-30b model is ~60GB in size. From the screenshot you shared, it looks like you only have 22GB of GPU memory available and will not be able to run a model this large.
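(Rough sizing: 30 billion parameters × 2 bytes per parameter in fp16 is already ~60 GB for the weights alone, before activations and KV cache, so an unpartitioned copy cannot fit in 22 GB.)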
@mrwyattii It is precisely because GPU memory is insufficient that I want to keep the model in host memory, which is why I configured CPU offload and `load_with_sys_mem`.
Same question... Does ZeRO offload actually work in MII? I have been having a lot of difficulty trying to get DeepSpeed-MII to do any sort of CPU or NVMe offload.
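For reference, the MII releases around v0.0.4 exposed ZeRO-Inference through separate deploy arguments. The sketch below assumes the `enable_deepspeed`/`enable_zero`/`ds_config` parameters of that older `mii.deploy` API and a standard DeepSpeed ZeRO stage-3 offload config; treat the argument names as assumptions and check them against your installed version.

```python
import mii

# Hedged sketch of ZeRO-Inference CPU offload with the older mii.deploy API.
# enable_deepspeed / enable_zero / ds_config are assumed v0.0.x arguments;
# the ds_config body uses standard DeepSpeed ZeRO stage-3 offload keys.
ds_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_param": {
            "device": "cpu",     # use "nvme" plus an "nvme_path" for NVMe offload
            "pin_memory": True,
        },
    },
    "train_micro_batch_size_per_gpu": 1,
}

mii.deploy(
    task="text-generation",
    model="facebook/opt-30b",
    deployment_name="opt30b_zero_offload",
    enable_deepspeed=False,  # take the ZeRO-Inference path instead of kernel injection
    enable_zero=True,
    ds_config=ds_config,
    mii_config={"dtype": "fp16"},
)
```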
I'm trying out deepspeed-mii on a local machine (8 GPUs with 23 GB VRAM each). Smaller models like `bloom-560m` and `EleutherAI/gpt-neo-2.7B` worked well. However, I got CUDA OOM errors when loading larger models, like `bloom-7b1`. For some even larger models like `EleutherAI/gpt-neox-20b`, the server just crashed without any specific error messages or logs.

I've tried DeepSpeed inference before, and it worked fine on these models.
I use this script to deploy models
Is there something I should change in my deployment script?
Thanks!