try to test multi xpu with example #11091
Hi @K-Alex13, if you have downloaded the model from https://huggingface.co/Qwen/Qwen1.5-14B-Chat/tree/main, please just replace the model path in the example with the path of the downloaded model.
Yes, I already used this method; the error comes up after the process you mentioned.
And the files the error reports as missing are also not among the Qwen/Qwen1.5-14B-Chat files.
How do I use them?
Sorry, but I have no idea what 'them' refers to. The ME TEE Library (libmetee/libmetee-dev) is a C library for accessing CSE/CSME/GSC firmware via the HECI interface.
I installed the packages you mentioned above and tried to use xpu-smi; the same error comes up.
By the way, I would like to know whether this method uses the two GPUs as one bigger GPU for inference, or whether it just puts the model on two different GPUs separately and runs inference separately.
Maybe you could try these steps?

```
sudo apt-get autoremove libmetee-dev
sudo apt-get autoremove libmetee
sudo apt-get install libmetee
sudo apt-get install libmetee-dev
sudo apt-get install xpu-smi
```
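After reinstalling, a quick sanity check could be run before retrying the example. This is a hedged sketch, not part of the original steps: it assumes `xpu-smi` ended up on the `PATH` and uses its `discovery` subcommand to list detected GPUs, falling back to a message if the binary is absent.

```shell
# Sketch: verify xpu-smi is installed and can enumerate Intel GPUs.
# Assumes the apt install above succeeded; prints a fallback otherwise.
if command -v xpu-smi >/dev/null 2>&1; then
    xpu-smi discovery   # lists detected Intel GPU devices
else
    echo "xpu-smi not found"
fi
```

If `discovery` still fails with the same error, the problem is likely in the driver/firmware stack rather than the Python example.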
The model is split and placed on two GPUs, so each GPU needs less memory for inference. In this way, you can treat the two GPUs as one bigger GPU.
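To make the idea above concrete, here is a minimal, self-contained sketch of how layer sharding halves per-device memory. This is an illustration only, not the actual ipex-llm partitioning code: the greedy `partition_layers` helper is hypothetical, and real frameworks assign transformer layers to devices with more sophisticated balancing.

```python
def partition_layers(layer_sizes, n_devices=2):
    """Greedily assign each layer to the currently least-loaded device.

    layer_sizes: list of per-layer memory costs (e.g. in GB).
    Returns (assignment, loads): which device each layer lands on,
    and the total memory held by each device.
    """
    loads = [0] * n_devices
    assignment = []
    for size in layer_sizes:
        dev = loads.index(min(loads))  # pick the least-loaded device
        assignment.append(dev)
        loads[dev] += size
    return assignment, loads

# Example: 8 layers of 2 GB each split across 2 GPUs ->
# each GPU holds only 8 GB instead of the full 16 GB.
assignment, loads = partition_layers([2] * 8, n_devices=2)
print(assignment)  # → [0, 1, 0, 1, 0, 1, 0, 1]
print(loads)       # → [8, 8]
```

During inference the activations flow through the layers in order, crossing from one device to the other at the partition boundaries, which is why the two GPUs behave like one larger one.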
Due to the Hugging Face download problem, I downloaded the model from the following link:
https://huggingface.co/Qwen/Qwen1.5-14B-Chat/tree/main
I replaced the model with the downloaded one, and the issue came up. I am not sure what is going wrong; please help me.
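For anyone hitting the same download problem, the replacement step can be sketched as below. This is an assumption-laden illustration: the local directory `~/models/Qwen1.5-14B-Chat` is a hypothetical download location, and the point is simply that the repo id string is swapped for a local path when the directory exists.

```python
import os

# Original Hugging Face repo id used by the example script.
repo_id = "Qwen/Qwen1.5-14B-Chat"

# Hypothetical path where the model files were downloaded manually.
local_dir = os.path.expanduser("~/models/Qwen1.5-14B-Chat")

# Prefer the local copy when it exists; otherwise fall back to the repo id.
model_path = local_dir if os.path.isdir(local_dir) else repo_id
print(model_path)
```

Note that the local directory must contain the complete set of files from the repo (config, tokenizer, and all weight shards); a partial download is one common cause of "missing file" errors like the one reported above.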