
GPU usage #15
Open · AntouanK opened this issue Mar 13, 2023 · 4 comments
@AntouanK

Hi there.
Thanks for sharing this project.

Does this use the GPU, or just the CPU?

@richardmon

Although I had nothing to do with the creation of this project, I can see that it uses llama.cpp, a project that focuses on CPU vectorization to run the model, so my best guess is that no GPU is needed.

@AntouanK (Author)

I see.

Actually, I have an RTX 4090, so I was hoping I could use it to speed things up.

@farrael004

Same here. I hope that if GPU use is not yet supported, it will be soon.

@richardmon

@AntouanK @farrael004 You can take a look at this issue, where they discuss inference on consumer-grade GPUs: meta-llama/llama#4
