Cannot use GPU #47
[?] Parameter count (smaller is faster, larger is more capable): 7B
[?] Quality (lower is faster, higher is more capable): Low | Size: 3.01 GB, RAM usage: 5.51 GB
[?] Use GPU? (Large models might crash on GPU, but will run more quickly) (...: y
I'm hitting the same issue.
Hey @hotwa and @mathpopo! I've been seeing this across several systems. We're going to change interface packages for CodeLlama in the next week or so, and hopefully that should solve this problem. I'll keep you updated. In the meantime, this works for some users:
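The exact command isn't preserved in this excerpt. A fix commonly suggested for this class of problem at the time was reinstalling llama-cpp-python built with CUDA (cuBLAS) support, since a CPU-only build silently ignores the GPU setting. A sketch of that workaround (flags as documented in llama-cpp-python's installation notes for CUDA builds; not necessarily the command originally posted here):

```shell
# Force a rebuild of llama-cpp-python with cuBLAS (NVIDIA GPU) support.
# Requires the CUDA toolkit and a C/C++ compiler to be installed.
CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 \
  pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir
```

After the rebuild, the llama.cpp startup log should report BLAS = 1 and show layers being offloaded to the GPU.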
Let me know if the GPU gets used after rebuilding.
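One way to check whether the GPU is actually being used is to watch `nvidia-smi` while the model generates. The snippet below is a small sketch for parsing its CSV query output (`nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv,noheader,nounits`); the sample line is illustrative, not output captured from this thread.

```python
def parse_gpu_stats(line: str) -> dict:
    """Parse one nvidia-smi CSV row of the form '<utilization %>, <memory MiB>'."""
    util, mem = (field.strip() for field in line.split(","))
    return {"utilization_pct": int(util), "memory_used_mib": int(mem)}

# Hypothetical nvidia-smi output while the model is generating.
sample = "87, 3012"
stats = parse_gpu_stats(sample)
print(stats)  # near-zero utilization during generation suggests a CPU-only build
```

If utilization stays near 0% and GPU memory barely moves while tokens are streaming, the model is almost certainly running on the CPU.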
@KillianLucas sorry, it still doesn't work. (base) chenxin@chenxin-Nitro-AN515-52:
@KillianLucas is it like that?
yes, I can.
This is now a duplicate of #168. If you still need help, please leave a comment on that issue.
I select local mode, download Llama, and select GPU, but the GPU is not used at all; everything runs on the CPU.