CPPLLAMA_VERSION?=XXX in Makefile is not used for llama.cpp build #1436

Closed
wuxxin opened this issue Dec 13, 2023 · 2 comments · Fixed by #1440
Labels: bug (Something isn't working), good first issue (Good for newcomers), high prio, up for grabs (Tickets that no-one is currently working on)

Comments

wuxxin (Contributor) commented Dec 13, 2023

LocalAI version: Master (7641f92)

Describe the bug
The current llama.cpp commit pinned in the top-level Makefile is:

CPPLLAMA_VERSION?=8a7b2fa528f130631a5f43648481596ab320ed5a

but backend/cpp/llama/Makefile uses a default of

LLAMA_VERSION?=d9b33fe95bd257b36c84ee5769cc048230067d6f

which is from around November 2023.

CPPLLAMA_VERSION is not forwarded to LLAMA_VERSION, so commit d9b33* of llama.cpp is built instead of 8a7b* (or whatever is defined in CPPLLAMA_VERSION).
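A possible fix (just a sketch, not the actual LocalAI Makefile; the target and binary names below are placeholders) would be for the top-level Makefile to pass the pinned commit down when it invokes the backend build, since a command-line assignment to the sub-make overrides the ?= default in backend/cpp/llama/Makefile:

# top-level Makefile (sketch only)
CPPLLAMA_VERSION?=8a7b2fa528f130631a5f43648481596ab320ed5a

# "backend-llama-cpp" and "grpc-server" are placeholder names; the point is
# forwarding CPPLLAMA_VERSION as LLAMA_VERSION to the backend sub-make
backend-llama-cpp:
	$(MAKE) -C backend/cpp/llama LLAMA_VERSION=$(CPPLLAMA_VERSION) grpc-server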

To Reproduce
make CPPLLAMA_VERSION=948ff137ec37f1ec74c02905917fa0afc9b97514 build

Expected behavior
LocalAI builds llama.cpp from commit 948ff*.

Actual result
LocalAI builds llama.cpp from commit d9b33*.
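To check which commit actually got fetched, the checkout can be inspected directly; the path below assumes llama.cpp is cloned under backend/cpp/llama/llama.cpp, which may differ in your tree:

git -C backend/cpp/llama/llama.cpp rev-parse HEAD
# prints d9b33fe95bd257b36c84ee5769cc048230067d6f instead of the expected 948ff137ec37f1ec74c02905917fa0afc9b97514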

Additional Resources
I made an Arch Linux package for LocalAI,
https://aur.archlinux.org/packages?K=localai-git
with CPU, CUDA, and ROCm variants.
See https://github.com/wuxxin/aur-packages/blob/main/localai-git/PKGBUILD
for the patches, including one for this issue.

wuxxin added the bug label on Dec 13, 2023
mudler added the high prio, up for grabs, and good first issue labels on Dec 14, 2023
mudler removed their assignment on Dec 14, 2023
mudler (Owner) commented Dec 14, 2023

good catch @wuxxin! Do you feel like taking a stab at it? I'm currently busy with #1435 trying to bring diffusers on par; this issue would be my next one otherwise.

localai-bot (Contributor) commented:

⚠️⚠️⚠️⚠️⚠️

Hi! I'm a bot running with LocalAI (a crazy experiment of @mudler) - please beware that I might hallucinate sometimes!

but... I can also be funny or helpful 😸 and I can generally provide good tips or point out places to look in the documentation or the code, based on what you wrote in the issue.

Don't engage in conversation with me,
I don't support replying (yet)!

⚠️⚠️⚠️⚠️⚠️

It seems like there is a mismatch between CPPLLAMA_VERSION and LLAMA_VERSION in the llama.cpp backend. This could be causing issues when building the backend. To fix this, you can update LLAMA_VERSION to match CPPLLAMA_VERSION when building the llama.cpp backend. Here is an example of how to do this:

# LocalAI v2.0.0
#
# Pin the llama.cpp commit once in a shell variable
CPPLLAMA_VERSION=8a7b2fa528f130631a5f43648481596ab320ed5a
# Pass it as both CPPLLAMA_VERSION and LLAMA_VERSION so the backend
# Makefile's ?= default is overridden
make CPPLLAMA_VERSION=$CPPLLAMA_VERSION LLAMA_VERSION=$CPPLLAMA_VERSION build

This should ensure that the llama.cpp backend is built with the correct version of llama.cpp, and should fix any issues related to this mismatch.

Sources:
