
try to test multi xpu with example #11091

Open
K-Alex13 opened this issue May 21, 2024 · 14 comments

@K-Alex13

[screenshot]
Due to a Hugging Face download problem, I downloaded the model from the following link:
https://huggingface.co/Qwen/Qwen1.5-14B-Chat/tree/main
I replaced the model with the model's URL, and this issue came up. I'm not sure what is going wrong. Please help me.
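
(Aside: if downloads from the Hub keep failing in the browser, one alternative is the huggingface_hub command-line downloader. This is just a sketch; /path/to/Qwen1.5-14B-Chat is a placeholder for wherever you want the files to land:)

pip install -U huggingface_hub
huggingface-cli download Qwen/Qwen1.5-14B-Chat --local-dir /path/to/Qwen1.5-14B-Chat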

@plusbang plusbang self-assigned this May 22, 2024
@plusbang
Contributor

Hi @K-Alex13, if you have downloaded the model from https://huggingface.co/Qwen/Qwen1.5-14B-Chat/tree/main, please just replace 'Qwen/Qwen1.5-14B-Chat' with your local model folder path here: https://github.com/intel-analytics/ipex-llm/blob/main/python/llm/example/GPU/Deepspeed-AutoTP/run_qwen_14b_arc_2_card.sh#L38.
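
Concretely, the edited line would look something like the sketch below (the script name deepspeed_autotp.py and the flag --repo-id-or-model-path are assumptions based on the example's conventions; /path/to/Qwen1.5-14B-Chat is a placeholder for your local folder):

# In run_qwen_14b_arc_2_card.sh, point the model argument at the local folder:
python deepspeed_autotp.py --repo-id-or-model-path '/path/to/Qwen1.5-14B-Chat' --low-bit 'sym_int4'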

@K-Alex13
Author

Yes, I already used this method; the error comes up after the step you mentioned.

@K-Alex13
Author

Also, the missing files mentioned in the error are not among the Qwen/Qwen1.5-14B-Chat files.

@plusbang
Contributor

Also, the missing files mentioned in the error are not among the Qwen/Qwen1.5-14B-Chat files.

If model.safetensors.index.json is not in your local folder, such an error message would still occur. You may need to check whether all model files are present and complete in your local model folder.
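
For example, a quick sanity check from the shell (a sketch; MODEL_DIR is a placeholder for your local folder):

MODEL_DIR=/path/to/Qwen1.5-14B-Chat
# Compare this listing against the file list on the Hub page:
ls "$MODEL_DIR"
# The index file the error complains about should be present:
test -f "$MODEL_DIR/model.safetensors.index.json" && echo "index file found" || echo "index file MISSING"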

@K-Alex13
Author

[screenshot]
What is the function of low-bit here? I think with 4-bit initialization the GPU memory needed will be less than 16 GB, so I am not sure whether two GPUs are being used here. Also, can you please tell me how to check GPU usage during inference?

@K-Alex13
Author

[screenshot]
Why did GPU 0 not output inference results while GPU 1 did?

@plusbang
Contributor

What is the function of low-bit here? I think with 4-bit initialization the GPU memory needed will be less than 16 GB, so I am not sure whether two GPUs are being used here. Also, can you please tell me how to check GPU usage during inference?

  • As we introduced in the README, you could specify other low-bit optimizations (such as fp8) through --low-bit.
  • If you want to monitor GPU usage, you could use a tool named xpu-smi. Use sudo apt install xpu-smi to install it, then run sudo xpu-smi stats -d 0 to check the memory usage of GPU 0 (see the sketch after this list).
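
For example, a rough way to keep an eye on both cards during inference (a sketch; device IDs 0 and 1 and the exact label of the memory row are assumptions that may differ on your setup):

# Refresh memory stats for both GPUs every second:
watch -n 1 'sudo xpu-smi stats -d 0 | grep -i memory; sudo xpu-smi stats -d 1 | grep -i memory'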

Why did GPU 0 not output inference results while GPU 1 did?

  • Both GPUs did inference, but we only print the inference result of RANK 0 here. In the log, [0] corresponds to the output of RANK 0, while [1] is RANK 1.

@K-Alex13
Author

[screenshot]
Still not working.

@plusbang
Contributor

[screenshot] Still not working.

According to your screenshot, maybe you could try sudo apt install libmetee and sudo apt install libmetee-dev.

@K-Alex13
Author

How do I use them?

@plusbang
Contributor

How do I use them?

Sorry, but I'm not sure what 'them' refers to. The ME TEE library (libmetee/libmetee-dev) is a C library for accessing CSE/CSME/GSC firmware, which the xpu-smi tool seems to need. Can you use xpu-smi now?

@K-Alex13
Author

I installed the packages you mentioned above and tried to use xpu-smi; the same error comes up.

@K-Alex13
Author

By the way, I want to know whether this method uses two GPUs as one bigger GPU for inference, or whether it just puts the model on two different GPUs separately and runs inference separately.

@plusbang
Contributor

I installed the packages you mentioned above and tried to use xpu-smi; the same error comes up.

Maybe you could try these steps?

sudo apt-get autoremove libmetee-dev
sudo apt-get autoremove libmetee
sudo apt-get install libmetee
sudo apt-get install libmetee-dev
sudo apt-get install xpu-smi

By the way, I want to know whether this method uses two GPUs as one bigger GPU for inference, or whether it just puts the model on two different GPUs separately and runs inference separately.

The model is partitioned and placed on the two GPUs, so each GPU needs less memory for inference. In this way, you can treat the two GPUs as one bigger GPU.
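
As a rough back-of-envelope (assuming sym_int4, i.e. about 0.5 bytes per weight, and ignoring activations and the KV cache):

14B parameters x 0.5 bytes/parameter ≈ 7 GB of weights in total
7 GB split across 2 GPUs ≈ 3.5 GB of weights per GPU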
