Multi-GPU inference: the query gets stuck when using my own provider #97
Hi @Aaron-mindverse, it's not clear why your provider is hanging with multi-GPU. If you copy and paste the code (or provide a link to a branch with the modifications) I can test it on my side. But I would first recommend taking a look at our existing provider in `DeepSpeed-MII/mii/models/providers/llm.py` (line 25 at commit 747072b): use this as a template for creating your own provider.
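For orientation, a provider in MII of that era is essentially a function that builds and returns an inference pipeline for each rank. Below is a minimal sketch of what a custom provider modeled on that template might look like; the function name `my_provider`, the exact signature, and the dtype choice are assumptions for illustration, not the verbatim MII API:

```python
import os

import torch
from transformers import pipeline


def my_provider(model_path, model_name, task_name, mii_config):
    """Hypothetical custom provider sketched after the hf_provider template."""
    # Each process launched by MII calls the provider, so pin the pipeline to
    # the GPU matching this process's local rank; DeepSpeed inference is then
    # applied to the returned pipeline's model to shard it across ranks.
    local_rank = int(os.getenv("LOCAL_RANK", "0"))
    inference_pipeline = pipeline(
        task_name,
        model=model_name,
        device=local_rank,
        torch_dtype=torch.float16,  # match the dtype you pass to MII/DeepSpeed
        framework="pt",
    )
    return inference_pipeline
```

A common source of multi-GPU hangs is a provider that hard-codes `device=0` (or loads the model on only one rank), so that the other ranks never reach DeepSpeed's collective calls.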
Thank you so much, this worked for me, although I had to modify a lot of DeepSpeed code specifically tailored for the Bloom model, such as `get_transformer_name`.
@Aaron-mindverse I don't think you should need to modify DS to get the OPT models running. In general, the OPT models will work with the HuggingFace provider. Could you share the modified code so I can help debug why you're running into Bloom-specific code paths on the DeepSpeed side?
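For context, OPT models can usually be served with the stock HuggingFace provider without touching DeepSpeed internals. A rough sketch of a legacy-style MII deployment is shown below; the model name, deployment name, and the exact `mii_config` keys are assumptions and may differ between MII releases:

```python
import mii

# Hypothetical multi-GPU deployment of an OPT model using the built-in
# HuggingFace provider; "tensor_parallel" controls how many GPUs the model
# is sharded across (key names may vary across MII versions).
mii.deploy(
    task="text-generation",
    model="facebook/opt-1.3b",
    deployment_name="opt_deployment",
    mii_config={"tensor_parallel": 2, "dtype": "fp16"},
)

# Query the deployment once it is up.
generator = mii.mii_query_handle("opt_deployment")
result = generator.query({"query": ["DeepSpeed is"]}, max_new_tokens=64)
print(result)
```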
I modified this file with my own defined provider. There are no problems with a single GPU, but the query gets stuck with multiple GPUs. Here is the file I mainly modified:
![image](https://user-images.githubusercontent.com/115475227/201680825-3aad1c41-0ed8-49d2-9d45-8fcd642f9dfe.png)
"DeepSpeed-MII/mii/models/providers/huggingface.py"
Now it gets stuck here:
![image](https://user-images.githubusercontent.com/115475227/201681187-b90608ed-67a9-42b9-b5ca-9f73f285d90c.png)
Here is my DeepSpeed version:
![image](https://user-images.githubusercontent.com/115475227/201681852-a8b62de0-b710-45e9-af69-db2f8405a6ac.png)
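When a multi-GPU query hangs like this, a quick sanity check is to confirm that every rank actually reaches the model-loading step and is bound to its own GPU. A small diagnostic snippet that could be dropped into the modified provider is sketched below (the environment-variable names follow the usual torch/DeepSpeed launcher conventions and are an assumption about how your job is launched):

```python
import os

import deepspeed
import torch

# Print where each rank is and which device it will use before loading the
# model; if one rank never prints, it likely never reached this point and the
# other ranks will block forever in a collective call.
local_rank = int(os.getenv("LOCAL_RANK", "0"))
world_size = int(os.getenv("WORLD_SIZE", "1"))
print(
    f"[local_rank {local_rank} / world_size {world_size}] "
    f"deepspeed={deepspeed.__version__} torch={torch.__version__} "
    f"cuda_device={torch.cuda.current_device()}",
    flush=True,
)
```

Running `ds_report` on the node also prints the installed DeepSpeed and torch versions plus which DeepSpeed ops are compiled, which is easier to share than a screenshot.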