Server crashes after a call to handlers #86
```
llama.cpp: loading model from /models/koala-7B-4bit-128g.bin
goroutine 7 [syscall]:
goroutine 1 [IO wait]:
goroutine 2 [force gc (idle)]:
goroutine 3 [GC sweep wait]:
goroutine 4 [GC scavenge wait]:
goroutine 5 [finalizer wait]:
goroutine 6 [sleep]:
rax 0x0
```
I got the same problem, can anyone help?
This looks to me like you don't have AVX. Did you build the image locally or pull it? See also #88
I pulled the latest image; the host that runs the image supports AVX.
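For reference, a minimal sketch of how AVX support can be checked from Go, using the golang.org/x/sys/cpu package (this is an illustrative check, not part of LocalAI; on Linux, `grep avx /proc/cpuinfo` gives the same information):

```go
package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

// Prints whether the host CPU advertises the AVX/AVX2 feature flags.
// The thread above suggests the prebuilt images need AVX, so a "false"
// here would explain a crash when llama.cpp loads the model.
func main() {
	fmt.Println("AVX: ", cpu.X86.HasAVX)
	fmt.Println("AVX2:", cpu.X86.HasAVX2)
}
```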
I am also having the same issue. The strange thing is that it was working last Friday.
Can you file a separate issue for this? I've just cleaned up the code in the bindings so it sits much closer to upstream; that might be a regression. Are you using a build from master? Where did you find the model, so I can test it here?
Can you try the images from master?
Missing logs here from the OP. Please provide more detailed logs. See also: https://localai.io/faq/#everything-is-slow-how-come
Closing the issue now due to inactivity. Reopen if necessary.
I can see there are three endpoints using two handlers.
If I call the path /models I get this:
{"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}
If I try the other handler I get "Empty reply from server" and the container crashes.
Looking at the logs of the container I see:
But nothing more.
I have tried both of the usage examples here: https://github.com/go-skynet/LocalAI#usage
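For context, the two calls boil down to something like the following Go sketch (the endpoint path and payload mirror the README usage example; the base URL and exact request body are assumptions, not a verbatim copy of what was run):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	base := "http://localhost:8080" // assumed default LocalAI address

	// First handler: listing models works and returns the JSON above.
	if body, err := call(http.MethodGet, base+"/models", nil); err != nil {
		fmt.Println("models error:", err)
	} else {
		fmt.Println("models:", body)
	}

	// Second handler: a chat completion request like the README usage
	// example; this is the call that returns "Empty reply from server"
	// while the container crashes.
	payload := []byte(`{"model":"ggml-gpt4all-j","messages":[{"role":"user","content":"How are you?"}],"temperature":0.9}`)
	if body, err := call(http.MethodPost, base+"/v1/chat/completions", payload); err != nil {
		fmt.Println("chat error:", err)
	} else {
		fmt.Println("chat:", body)
	}
}

// call issues an HTTP request and returns the response body as a string.
func call(method, url string, payload []byte) (string, error) {
	req, err := http.NewRequest(method, url, bytes.NewReader(payload))
	if err != nil {
		return "", err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	return string(b), err
}
```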
Any idea what I'm missing?
I have also tried other models, with similar behavior.
Thanks.