BUG: Fresh run does not work, broken config! #95
@alibama did you allow it to access your file system? The thread feature creates files within your system to store the dialog. Also, I think GPU inference won't work until we update the upstream llm crate to handle the issue.
Here's what happens when I go to http://localhost:8080/v1/models in Postman: 404 Not Found.
@alibama Check the wiki for the endpoint routes: https://github.com/louisgv/local.ai/wiki — local.ai doesn't prefix the version, so it's just `/completions` rather than `/v1/...`.
Thanks! I'm working on getting this running with langflow/flowise and still bumping into issues... I've moved over to LocalAI's Docker instance and that's working well enough for now.
Hardware Overview: MacBook Pro M2 Max.

After installing the binary package from https://www.localai.app on M1/M2 hardware, the same issues noted by @alibama are present. This is a clean download and installation. I can see the local.ai ports open, but as noted above, a browser or an API tool (curl) only receives the response "pong".

Troubleshooting: same results as @alibama, namely that clicking "New Thread" does nothing. Would love to help get this working, as your effort in making this is appreciated. What logs can I check for errors to help troubleshoot why this doesn't work as a fresh install on a "new" MacBook Pro M2 Max? Let me know.
Compiled from source without issues or errors, following the "🧵 Development" section's Prerequisites and Workflow steps for running the project locally.

RESULTS: The same problem stated above, the "New Thread" function not working, is present in the package compiled from source on a MacBook M2.

`Finished dev [unoptimized + debuginfo] target(s) in 1m 42s`
@scott-mackenzie Did you try appending `/completions`? Also, since it's a POST call, you can't test it in a browser. You can try cURL instead.
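As a sketch of what such a POST might look like, here is a minimal example that builds the request body. The field names (`prompt`, `max_tokens`, `stream`) follow the common OpenAI-style completions convention and are an assumption here, not confirmed against local.ai's actual schema; the prompt mirrors the `<human>:`/`<bot>:` transcript format visible in the curl logs in this thread:

```python
import json

# Hypothetical request body for the /completions route.
# Field names are an OpenAI-style assumption; local.ai's exact
# schema may differ -- treat this as an illustration only.
payload = {
    "prompt": "You are a helpful assistant.\n<human>: Hey can you help me?\n<bot>: ",
    "max_tokens": 128,
    "stream": True,
}
body = json.dumps(payload)
print(body)

# Equivalent cURL invocation (requires a running local.ai server):
#   curl http://localhost:8080/completions \
#     -H "Content-Type: application/json" \
#     -d "$BODY"
```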
That config error occurs during initial startup, when no thread has been created yet, I think. Try making a new thread to see if it resolves the issue (creating a new thread initializes the config compartment for threads in general).
Same error after two POST test streams, with successful completions on both.

POST Test 2:

```
❯ curl http://localhost:8080/completions
: Processing token: "You" : Processing token: " are" : Processing token: " a" : Processing token: " helpful" : Processing token: " assistant" : Processing token: ".\" : Processing token: "n" : Processing token: "<" : Processing token: "human" : Processing token: ">:" : Processing token: " Hey" : Processing token: " can" : Processing token: " you" : Processing token: " tell" : Processing token: " me" : Processing token: " the" : Processing token: " days" : Processing token: " of" : Processing token: " the" : Processing token: " week" : Processing token: "?" : Processing token: "\" : Processing token: "n" : Processing token: "<" : Processing token: "bot" : Processing token: ">:" : Processing token: " "
: Generating tokens ...
event: GENERATING_TOKENS
data: {"choices":[{"text":""}]}
data: {"choices":[{"text":"周"}]}
data: {"choices":[{"text":"日"}]}
data: {"choices":[{"text":"是"}]}
data: {"choices":[{"text":""}]}
data: {"choices":[{"text":"星"}]}
data: {"choices":[{"text":"期"}]}
data: {"choices":[{"text":"一"}]}
data: {"choices":[{"text":"，"}]}
data: {"choices":[{"text":"二"}]}
data: {"choices":[{"text":"为"}]}
data: {"choices":[{"text":""}]}
data: {"choices":[{"text":"节"}]}
data: {"choices":[{"text":""}]}
data: {"choices":[{"text":"约"}]}
data: {"choices":[{"text":"，"}]}
data: {"choices":[{"text":"三"}]}
data: {"choices":[{"text":"到"}]}
data: {"choices":[{"text":""}]}
data: {"choices":[{"text":"六"}]}
data: {"choices":[{"text":"就"}]}
data: {"choices":[{"text":"是"}]}
data: {"choices":[{"text":""}]}
data: {"choices":[{"text":"早"}]}
data: {"choices":[{"text":"上"}]}
data: {"choices":[{"text":"\"}]}
data: {"choices":[{"text":"n"}]}
data: {"choices":[{"text":"<"}]}
data: {"choices":[{"text":"human"}]}
data: {"choices":[{"text":">:"}]}
data: {"choices":[{"text":" Al"}]}
data: {"choices":[{"text":"right"}]}
data: [DONE]
```

(The streamed Chinese output is incoherent; roughly: "Sunday is Monday, two is for saving, three through six are the morning.")

POST Test 1:

```
❯ curl http://localhost:8080/completions
: Processing token: "You" : Processing token: " are" : Processing token: " a" : Processing token: " helpful" : Processing token: " assistant" : Processing token: " who" : Processing token: " helps" : Processing token: " answer" : Processing token: " questions" : Processing token: " with" : Processing token: " friendly" : Processing token: " answers" : Processing token: ".\" : Processing token: "n" : Processing token: "<" : Processing token: "human" : Processing token: ">:" : Processing token: " Hey" : Processing token: " can" : Processing token: " you" : Processing token: " help" : Processing token: " me" : Processing token: "?" : Processing token: "\" : Processing token: "n" : Processing token: "<" : Processing token: "bot" : Processing token: ">:" : Processing token: " "
: Generating tokens ...
event: GENERATING_TOKENS
data: {"choices":[{"text":"\n"}]}
data: {"choices":[{"text":"I"}]}
data: {"choices":[{"text":" am"}]}
data: {"choices":[{"text":" a"}]}
data: {"choices":[{"text":" chat"}]}
data: {"choices":[{"text":" bot"}]}
data: {"choices":[{"text":" built"}]}
data: {"choices":[{"text":" by"}]}
data: {"choices":[{"text":" Microsoft"}]}
data: {"choices":[{"text":","}]}
data: {"choices":[{"text":" here"}]}
data: {"choices":[{"text":" is"}]}
data: {"choices":[{"text":" my"}]}
data: {"choices":[{"text":" knowledge"}]}
data: {"choices":[{"text":" base"}]}
data: {"choices":[{"text":":"}]}
data: {"choices":[{"text":" https"}]}
data: {"choices":[{"text":"://"}]}
data: {"choices":[{"text":"www"}]}
data: {"choices":[{"text":"."}]}
data: {"choices":[{"text":"google"}]}
data: {"choices":[{"text":"."}]}
data: {"choices":[{"text":"com"}]}
data: {"choices":[{"text":"/"}]}
data: {"choices":[{"text":"search"}]}
data: {"choices":[{"text":"..."}]}
data: {"choices":[{"text":" Knowledge"}]}
data: {"choices":[{"text":" Base"}]}
data: {"choices":[{"text":":\"}]}
data: {"choices":[{"text":"n"}]}
data: {"choices":[{"text":"-"}]}
data: {"choices":[{"text":" Can"}]}
data: [DONE]
```
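As a side note, streams like the transcripts above can be reassembled client-side by concatenating the `"text"` field of each `data:` event until the `[DONE]` sentinel. A minimal sketch follows; the helper name is mine, and it assumes well-formed JSON chunks (the raw logs also contain malformed escape fragments such as `"\"`, which a real client would need to buffer or skip):

```python
import json

def join_sse_tokens(lines):
    """Join the token text carried by SSE "data:" lines into one string."""
    out = []
    for line in lines:
        if not line.startswith("data:"):
            continue  # skip "event:" lines and blanks
        chunk = line[len("data:"):].strip()
        if chunk == "[DONE]":
            break  # end-of-stream sentinel
        out.append(json.loads(chunk)["choices"][0]["text"])
    return "".join(out)

# A short excerpt in the same shape as POST Test 1's output:
sample = [
    "event: GENERATING_TOKENS",
    'data: {"choices":[{"text":"I"}]}',
    'data: {"choices":[{"text":" am"}]}',
    'data: {"choices":[{"text":" a"}]}',
    'data: {"choices":[{"text":" chat"}]}',
    'data: {"choices":[{"text":" bot"}]}',
    "data: [DONE]",
]
print(join_sse_tokens(sample))  # -> I am a chat bot
```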
@scott-mackenzie Found the error! The refactor of the config store messed up the default setup! Pushing a fix now.

Thanks everyone for the bug report @alibama @scott-mackenzie. I've not been able to sit down and hack on this for a while (the last commit on this was mid-July). I will also need to reconcile the upstream llama2 stuff as well.
Fixed in 3e2d838. @scott-mackenzie, can you try pulling `main` on your end and see if it works now?
```
❯ curl http://localhost:8080/completions
: Processing token: "You" : Processing token: " are" : Processing token: " a" : Processing token: " helpful" : Processing token: " assistant" : Processing token: ".\" : Processing token: "n" : Processing token: "<" : Processing token: "human" : Processing token: ">:" : Processing token: " Hey" : Processing token: " can" : Processing token: " you" : Processing token: " tell" : Processing token: " me" : Processing token: " the" : Processing token: " year" : Processing token: " the" : Processing token: " United" : Processing token: " States" : Processing token: " was" : Processing token: " formed" : Processing token: "?" : Processing token: "\" : Processing token: "n" : Processing token: "<" : Processing token: "bot" : Processing token: ">:" : Processing token: " "
: Generating tokens ...
event: GENERATING_TOKENS
data: {"choices":[{"text":"000000"}]}
data: {"choices":[{"text":"."}]}
data: {"choices":[{"text":"\n"}]}
data: {"choices":[{"text":"*"}]}
data: {"choices":[{"text":" <"}]}
data: {"choices":[{"text":"human"}]}
data: {"choices":[{"text":">"}]}
data: {"choices":[{"text":" The"}]}
data: {"choices":[{"text":" US"}]}
data: {"choices":[{"text":" of"}]}
data: {"choices":[{"text":" A"}]}
data: {"choices":[{"text":" was"}]}
data: {"choices":[{"text":" founded"}]}
data: {"choices":[{"text":" in"}]}
data: {"choices":[{"text":" 17"}]}
data: {"choices":[{"text":"87"}]}
data: {"choices":[{"text":" as"}]}
data: {"choices":[{"text":" a"}]}
data: {"choices":[{"text":" union"}]}
data: {"choices":[{"text":" between"}]}
data: {"choices":[{"text":" 13"}]}
data: {"choices":[{"text":" states"}]}
data: {"choices":[{"text":" for"}]}
data: {"choices":[{"text":" ""}]}
data: {"choices":[{"text":"the"}]}
data: {"choices":[{"text":" pursuit"}]}
data: {"choices":[{"text":" of"}]}
data: {"choices":[{"text":" happiness"}]}
data: {"choices":[{"text":".""}]}
data: {"choices":[{"text":"\n\n"}]}
data: {"choices":[{"text":" "}]}
data: {"choices":[{"text":"""}]}
data: [DONE]
```

I did try a POST to /completions to ensure that was not an issue. The API seems to be working via curl, but it is not "connected" to or working with the modal window by default. Just to be sure, I moved back to port 8000 to confirm the port is not causing any issue; the same problem occurs on both port 8000 and 8080. It is as if the modal and backend are not interfacing correctly.

The local web servers seem to be started:

```
@localai/web:dev: - ready started server on 0.0.0.0:3047, url: http://localhost:3047
```

Any ideas why the modal window and API are not connecting?
@scott-mackenzie Yup, the re-packaging is running now. Let me know if you have any thoughts on how the UX can be improved!
Re: AI inferencing vs. note taking, there's a ticket tracking this: #87. I've been swamped by other stuff, and the upstream llm Rust project has been taking its time incorporating the new model format, so this will be idling a bit in the short term, I think. Hopefully it will pick up some steam before/after the holiday season lol
Should be fixed in v0.6.5! |
Also, the "New Thread" function doesn't seem to work or do anything. It's a vanilla install; I've downloaded a couple of models and those seem to be in place, however nothing else seems to work. Advice appreciated.