
backend: update to latest llama.cpp commit #1883

Merged
merged 1 commit on Jan 29, 2024

Conversation

cebtenzzre (Member)

Now that our Vulkan backend has been merged into llama.cpp, we have a much smaller diff. This PR includes the updated llama.cpp as well as the necessary changes to make GPT4All compatible.

@cebtenzzre added the backend (gpt4all-backend issues) and vulkan labels on Jan 29, 2024
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
@manyoso manyoso merged commit 38c6149 into main Jan 29, 2024
6 of 10 checks passed
@koech-v

koech-v commented Feb 1, 2024

Intel Arc got Windows support 2 days ago...

@cebtenzzre (Member, Author)

Intel Arc got Windows support 2 days ago...

If you're talking about SYCL, that's a completely different backend that we don't support. GPT4All is built on top of llama.cpp's Kompute-based Vulkan backend.

@koech-v

koech-v commented Feb 2, 2024

Intel Arc got Windows support 2 days ago...

If you're talking about SYCL, that's a completely different backend that we don't support. GPT4All is built on top of llama.cpp's Kompute-based Vulkan backend.

That's okay. I'll look around for one with support.
