Multi-gpu inference, the query gets stuck when using my own provider #97

Closed
Aaron-mindverse opened this issue Nov 14, 2022 · 3 comments
Aaron-mindverse commented Nov 14, 2022

I modified this file to add my own provider. It works fine on a single GPU, but the query hangs when running on multiple GPUs. The file I mainly modified is:
"DeepSpeed-MII/mii/models/providers/huggingface.py"
[screenshot: the modified provider code]

It is now stuck here:
[screenshot: the hung query]

Here is my DeepSpeed version:
[screenshot: deepspeed version output]

mrwyattii (Contributor) commented

Hi @Aaron-mindverse, it's not clear why your provider is hanging with multi-GPU. If you copy and paste the code (or provide a link to a branch with the modifications), I can test it on my side. But first I would recommend taking a look at our BloomPipeline and load_hf_llm:

class BloomPipeline(object):

Use this as a template for creating your OPTProvider and it should work with multi-GPU. Thanks!
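To make the suggested template concrete, here is a minimal, hypothetical sketch of the pipeline shape that the BloomPipeline pattern points at: a callable object that tokenizes, generates, and decodes. All names below (TextGenPipeline, the stand-in model/tokenizer) are illustrative, not part of the MII API. In a real provider, `model` would be the result of `deepspeed.init_inference(...)`, and the key multi-GPU rule is that every tensor-parallel rank must call the pipeline, otherwise the collectives inside the sharded forward pass never complete and the query hangs.

```python
class TextGenPipeline:
    """Sketch of a pipeline provider in the style of MII's BloomPipeline.

    In a real provider, `model` would be a Hugging Face model wrapped by
    deepspeed.init_inference(...) so each tensor-parallel rank holds its
    shard. Every rank must execute __call__, or the collective ops inside
    the sharded model will block forever (a common cause of multi-GPU
    queries getting stuck). Here, model and tokenizer are injected so the
    skeleton can be exercised without GPUs.
    """

    def __init__(self, model, tokenizer):
        self.model = model
        self.tokenizer = tokenizer

    def __call__(self, prompts, **generate_kwargs):
        # tokenize -> generate -> decode, the same path on every rank
        token_batches = self.tokenizer(prompts)
        output_ids = self.model.generate(token_batches, **generate_kwargs)
        return self.tokenizer.decode(output_ids)


# Stand-in model/tokenizer implementing the duck-typed interface,
# purely so the skeleton above can be run end to end.
class EchoTokenizer:
    def __call__(self, prompts):
        return [p.split() for p in prompts]

    def decode(self, token_batches):
        return [" ".join(tokens) for tokens in token_batches]


class EchoModel:
    def generate(self, token_batches, **kwargs):
        # Pretend "generation" appends one token to each prompt.
        return [tokens + ["<done>"] for tokens in token_batches]


pipe = TextGenPipeline(EchoModel(), EchoTokenizer())
print(pipe(["hello world"]))  # ['hello world <done>']
```

The design point is that the pipeline holds no GPU-specific logic itself; all sharding lives in the model object it wraps, which keeps the single-GPU and multi-GPU code paths identical.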

Aaron-mindverse (Author) commented Nov 28, 2022


Thank you so much, this worked for me, although I had to modify quite a bit of DeepSpeed code that is specifically tailored to the Bloom model, such as 'get_transformer_name':

https://github.com/microsoft/DeepSpeed/blob/21c28029648df4c98acd6331a8fdc0d29f297fa7/deepspeed/module_inject/replace_module.py#L124

mrwyattii (Contributor) commented

@Aaron-mindverse I don't think you should need to modify DeepSpeed to get the OPT models running. In general, the OPT models will work with the HuggingFace provider. Could you share the modified code? I can help debug why you're hitting Bloom-specific code paths on the DeepSpeed side.
