feat(llama.cpp): Vulkan, Kompute, SYCL #1647
Tracker for: ggerganov/llama.cpp#5138, and also ROCm.
mudler added a commit that referenced this issue on Jan 29, 2024
This was referenced on Jan 29, 2024
mudler added a commit that referenced this issue on Jan 30, 2024
mudler changed the title from "llama.cpp Vulkan, Kompute, SYCL" to "feat(llama.cpp): Vulkan, Kompute, SYCL" on Jan 31, 2024
mudler added a commit that referenced this issue on Feb 1, 2024
mudler added a commit that referenced this issue on Feb 1, 2024
* feat(sycl): Add sycl support (#1647)
* onekit: install without prompts
* set cmake args only in grpc-server
  Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
* cleanup
* fixup sycl source env
* Cleanup docs
* ci: runs on self-hosted
* fix typo
* bump llama.cpp
* llama.cpp: update server
* adapt to upstream changes
* adapt to upstream changes
* docs: add sycl

Signed-off-by: Ettore Di Giacinto <mudler@localai.io>
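The commit above mentions setting cmake args only in the grpc-server build. For context, here is a hedged sketch of how these backends were typically switched on when compiling llama.cpp itself around early 2024; the option names (LLAMA_VULKAN, LLAMA_SYCL, LLAMA_KOMPUTE, LLAMA_HIPBLAS) are upstream llama.cpp flags of that period, not necessarily the exact arguments LocalAI passes:

```sh
# Sketch only: upstream llama.cpp build-time switches as of early 2024.
# LocalAI's actual cmake args are set in its grpc-server build and may differ.

# Vulkan backend
cmake -B build -DLLAMA_VULKAN=ON

# SYCL backend (the oneAPI environment must be sourced first, which the
# "fixup sycl source env" commit above hints at)
source /opt/intel/oneapi/setvars.sh
cmake -B build -DLLAMA_SYCL=ON -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx

# Kompute backend (noted later in this thread as the piece still missing)
cmake -B build -DLLAMA_KOMPUTE=ON

# ROCm / HIP backend
cmake -B build -DLLAMA_HIPBLAS=ON

cmake --build build --config Release
```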
The merge requests linked to this issue appear to have been merged upstream. Does that mean LocalAI already supports Vulkan, or are there additional tasks to complete before that?

Only Kompute is missing as of now.