local exllamav2 (TabbyAPI) KeyError: 'stop' #44
Comments
Hmmm, I'll try using tabbyAPI to replicate the bug.
I am getting the same thing with LM Studio running locally; however, I wonder if it has to do with the template. EDIT: It actually has to do with the response returned by the local AI. In my case, I integrated the openai python package and declared the client as such:
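For reference, a minimal sketch of what such a client declaration typically looks like when pointed at a local LM Studio server; the base_url, port, and api_key values here are assumptions, not the commenter's actual code:

```python
# Hypothetical example: point the openai client at a local LM Studio server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # LM Studio's default local endpoint (assumed)
    api_key="not-needed",                 # local backends typically ignore the key
)
```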
Same here with LM Studio. The problem is that it uses
I had to modify process_LLM to the following to get it to work (mind you, I added debug statements, and I am pretty sure you can remove a lot of the if statements around next_token, haha). I also had to check for empty messages, because for whatever reason many empty messages with no content were being appended, so I stripped all of those out. The reason there is no PR is that I have no idea how this change would break the other implementations:
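As a rough illustration only of the approach described above (skipping empty messages and guarding against missing fields in streamed chunks so a local backend can't trigger `KeyError: 'stop'`), a sketch might look like the following. The function name, model name, and endpoint are assumptions, not the actual patch:

```python
# Hypothetical sketch, not the commenter's patch: stream a chat completion,
# drop empty messages, and avoid assuming every field is present in a chunk.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:5000/v1", api_key="not-needed")  # assumed local endpoint

def process_llm(messages):
    # Strip out messages that were appended with no content.
    messages = [m for m in messages if m.get("content")]

    stream = client.chat.completions.create(
        model="local-model",   # placeholder model name
        messages=messages,
        stream=True,
    )

    full_reply = ""
    for chunk in stream:
        choice = chunk.choices[0]
        # Some local servers omit fields in streamed deltas; guard instead of
        # indexing directly into the response.
        next_token = getattr(choice.delta, "content", None)
        if next_token:
            full_reply += next_token
        if choice.finish_reason == "stop":
            break
    return full_reply
```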
Thanks for sharing the project! The interrupt feature is really impressive! :)
I'm getting an error on Ubuntu 22.04 when trying a different backend, with a fresh install of tabbyAPI: