compilation fails for "examples/grpc-server" #1196
Comments
Try to install libabsl-dev. |
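The missing-dependency errors in this thread can be probed up front before attempting a build. Here's a minimal sketch; the pkg-config module names (`absl_base`, `protobuf`, `grpc++`) are the common ones on Debian/Ubuntu and are an assumption — adjust for your distro:

```shell
# Sketch: check whether the native deps this thread keeps hitting
# (Abseil, protobuf, gRPC) are visible to pkg-config.
# Prints "MISSING" if the package (or pkg-config itself) is absent.
check_deps() {
  for pkg in absl_base protobuf grpc++; do
    if pkg-config --exists "$pkg" 2>/dev/null; then
      echo "$pkg: found"
    else
      echo "$pkg: MISSING"
    fi
  done
}
check_deps
```

If any line prints MISSING, install the corresponding dev package (e.g. `libabsl-dev` as suggested above) before retrying the build.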
I also got this bug following the same https://localai.io/howtos/easy-setup-docker-gpu/ ... unfortunately I didn't find this `apt install libabsl-dev` step and am in the process of a new pull/build; I'll give it a try if the issue persists. Could the package be missing from the Quay build? |
No luck with the `apt install libabsl-dev` command; I think the GPU installation instructions are borked for the moment. |
Pretty sure the bug spawns from instructions: image: quay.io/go-skynet/local-ai:master-cublas-cuda12 |
No, that was not the issue... Man, this rabbit hole has got me in a death spiral... depressed, even. My ineptitude screams at every docker recompose and compile. Docker has always been a weak spot in my IT brain; I know I shouldn't have had to download these packages this many times over... installing with cublas... I've got to be missing an important memo here... Before I checked the CUDA boxes, the Docker image seemed to be installing the CUDA libraries on its own... I'm going to try a no-frills install and check whether CUDA acceleration works, then work within the working base package to try to get the CUDA libraries upgraded. |
@nbollman It's not just you - I'm also having the same problem attempting to compile the image with documented defaults. |
Same here. FYI: I got the same error when installing/building locally, so it is not docker/docker-compose specific. EDIT: on the local build I was able to solve the absl error message, but almost the same error followed, this time for protobuf, and I haven't been able to solve the latter until now. |
@mounta11n how did you resolve absl with local build? |
@nbollman I was able to make some progress by:
|
|
Fixed by installing these deps: brew install grpc protobuf abseil |
Workaround: instead of trying to compile and run LocalAI directly on the host, I utilized a prebuilt Docker image:

```shell
docker run -d --name api \
  --gpus '"device=0"' \
  --restart=always \
  -p 8080:8080 \
  -e GALLERIES='[{"name":"model-gallery", "url":"github:go-skynet/model-gallery/index.yaml"}, {"url": "github:go-skynet/model-gallery/huggingface.yaml","name":"huggingface"}]' \
  -e DEBUG=true \
  -e MODELS_PATH=/models \
  -e THREADS=8 \
  -e BUILD_TYPE=cublas \
  -e REBUILD=true \
  -e CMAKE_ARGS="-DLLAMA_CUBLAS=on -DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_AVX=OFF -DLLAMA_FMA=OFF" \
  -e FORCE_CMAKE=1 \
  -v $PWD/models:/models \
  -t quay.io/go-skynet/local-ai:v1.30.0-cublas-cuda12 \
  /usr/bin/local-ai
```

Here's a breakdown of the notable changes:
By utilizing the above Docker run command, I was able to get LocalAI running successfully with CUDA support. I hope this helps! Let me know if you face any other challenges. |
Hi there,
If it can be useful, I can create a separate script with all the commands and create a PR containing it. |
I am also now hanging on the protobuf error... has anyone solved it (not on Mac)? |
I have seen that the build now fails. |
The issue was fixed. With the code in the master branch, it is possible to build gRPC locally by running the command: I think it is possible to close the issue. |
For clarification for other noobs like me: that means adding |
results here in some error on current master branch (
|
@EchedeyLR can you share your .env ? |
Is exactly the default one at https://github.com/mudler/LocalAI/blob/v1.40.0/.env with GO_TAGS and REBUILD uncommented. |
If this is needed, why was it not included in the example? |
AFAIK it's not a fix but a workaround until the team figures out why this started happening. Personally, here's what I needed to add at the end of my .env:
(I also set REBUILD=true and BUILD_TYPE=cublas) |
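The exact .env lines from that comment were lost during extraction, so here is a hypothetical reconstruction based on the docker-run workaround earlier in this thread — treat every value as an assumption and tune the `LLAMA_*` flags to your CPU:

```shell
# Hypothetical .env fragment (reconstructed, not the commenter's actual
# lines). REBUILD forces a recompile inside the container; the CMAKE_ARGS
# disable SIMD paths that may not exist on older CPUs.
REBUILD=true
BUILD_TYPE=cublas
CMAKE_ARGS="-DLLAMA_CUBLAS=on -DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_AVX=OFF -DLLAMA_FMA=OFF"
```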
Would this work in the container image? I am also worried about these fixed CMAKE_ARGS. OpenBLAS has not built for me in go-llama for several months; at some point a version just stopped working, and I thought that was related to the instruction set, since everything still worked on another computer. I thought this was auto-detected rather than hard-coded; hard-coding it just makes it difficult for people to run it on not-exactly-modern computers (and no, I am not talking about something from 2008; I mean even 2014). |
|
Is this needed in the docs? Please ping me if so. |
We still need to confirm this is the issue. I have yet to try that solution, and another user reported it as non-working. |
My problem was slightly different in that it was giving me errors for absl::check, absl::log, etc. It turns out Conda installs its own libraries and binaries, which meant it had its own incompatible version of absl. I completely uninstalled Anaconda3, then I cloned grpc. I built and installed abseil-cpp, protobuf (inside grpc/third_party), and grpc from the folder above into /usr/local; after all libraries were installed, I built LocalAI as usual. Here are my build args for LocalAI: Building as I type this; seems to be OK so far. Ubuntu Server 22.04, Xeon X5650 |
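The from-source route described above can be sketched roughly as follows. It is shown as a dry run (commands are echoed, not executed), since the exact clone command and build args in that comment were lost; the release tag and CMake flags here are assumptions based on the standard gRPC CMake install flow:

```shell
# Dry-run sketch: build gRPC (with its bundled abseil-cpp and protobuf
# submodules) and install into /usr/local. The run() helper only echoes
# each command; replace its body with "$@" to actually execute them.
run() { echo "+ $*"; }

GRPC_TAG=v1.58.0   # assumption: any recent release tag should work
run git clone --recurse-submodules -b "$GRPC_TAG" https://github.com/grpc/grpc
run cmake -S grpc -B grpc/cmake/build \
    -DgRPC_INSTALL=ON -DgRPC_BUILD_TESTS=OFF \
    -DCMAKE_INSTALL_PREFIX=/usr/local
run cmake --build grpc/cmake/build -j4
run sudo cmake --install grpc/cmake/build
```

Installing to a single prefix like /usr/local avoids the mixed-version absl problem the commenter hit with Conda, since the build then resolves headers and libraries from one place.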
Just a little report, coming from #1386 using Notable comment there from @B4ckslash
|
Thank you so much, this worked in my Ubuntu WSL. |
Could you provide a script for how to do it? I am facing the same build problems. |
I got everything fixed, "solution":
I am using the model: It works like a charm 🪄 🌟 |
LocalAI version:
45370c2
Environment, CPU architecture, OS, and Version:
Linux fedora 6.5.6-300.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Oct 6 19:57:21 UTC 2023 x86_64 GNU/Linux
Describe the bug
After failures with CUDA and Docker in
#1178
I tried to compile and run LocalAI directly on the host:
make BUILD_TYPE=cublas build
To Reproduce
make BUILD_TYPE=cublas build
Expected behavior
Successful build, binaries running with CUDA support
Logs
Additional context
I also tried
CMAKE_ARGS="-DLLAMA_AVX512=OFF" make BUILD_TYPE=cublas build
because my CPU doesn't support AVX512. Maybe interesting:
But the error message seems more like something is missing than it's the wrong gcc.
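Whether flags like AVX512 actually need to be disabled can be checked directly against the CPU. A minimal sketch (Linux-only, reads /proc/cpuinfo; the flag names are the usual x86 feature names matching the `LLAMA_*` CMake options):

```shell
# Print whether the CPU advertises each SIMD feature relevant to the
# LLAMA_* CMake flags. Prints "no" for a flag if /proc/cpuinfo is
# unavailable or the feature is absent.
have_flag() {
  if grep -qw "$1" /proc/cpuinfo 2>/dev/null; then
    echo "$1: yes"
  else
    echo "$1: no"
  fi
}
for f in avx avx2 avx512f fma f16c; do have_flag "$f"; done
```

Any flag reporting "no" is a candidate for a `-DLLAMA_<FLAG>=OFF` entry in CMAKE_ARGS.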
EDIT: searched for absl, found and installed
sudo dnf install python3-absl-py.noarch
... doesn't help.