Wingman v2.0.8: Local model is not supported #27
Comments
I'm unfamiliar with LM Studio, but I just downloaded it and started a local inference server running. Can you provide some more information regarding your setup? KoboldCpp support is almost finished; I'm still porting this functionality from the previous major version.
Thank you, it is working now. I was using "http://localhost:1234/v1"; after changing it to "http://localhost:1234/v1/chat/completions", it works. I was using the Wingman 1.3.8 preview, which was excellent, and the upgrade went well. Thank you.
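For reference, here is a minimal sketch of a request against the corrected endpoint, assuming LM Studio's OpenAI-compatible local server on its default port 1234 and Node 18+ for the global fetch; the model id is a placeholder and not from this thread.

```typescript
// Hypothetical sketch: call LM Studio's chat completions endpoint directly.
const endpoint = "http://localhost:1234/v1/chat/completions";

async function chat(prompt: string): Promise<string> {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // placeholder; LM Studio serves whichever model is loaded
      messages: [{ role: "user", content: prompt }],
      temperature: 0.7,
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}

chat("Hello").then(console.log);
```

Note that the base URL "http://localhost:1234/v1" alone is not a request target; the client must post to a concrete route such as /v1/chat/completions.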
This is fixed now, and the completion stream returned by LM Studio should correctly end the response. Pushing a release with this bug fix now.
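As a rough illustration of how such a streamed response ends, here is a hedged sketch of a client reading an OpenAI-compatible SSE stream (which LM Studio emulates) and stopping at the final "data: [DONE]" event; the endpoint and model id are assumptions, and this is not Wingman's actual implementation.

```typescript
// Hypothetical sketch: consume a streaming chat completion until the
// server signals the end of the stream with "data: [DONE]".
async function streamChat(prompt: string): Promise<void> {
  const response = await fetch("http://localhost:1234/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "local-model", // placeholder model id
      messages: [{ role: "user", content: prompt }],
      stream: true,
    }),
  });

  const reader = response.body!.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break; // connection closed by the server
    for (const line of decoder.decode(value).split("\n")) {
      if (!line.startsWith("data: ")) continue;
      const payload = line.slice("data: ".length).trim();
      if (payload === "[DONE]") return; // explicit end-of-stream marker
      const chunk = JSON.parse(payload);
      process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
    }
  }
}
```

A client that never sees this terminating event will keep waiting, which matches the "response never ends" symptom the fix addresses.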
Excellent! Glad it's working. Have fun!
Wingman v2.0.8: Local models are not supported. The LM Studio and KoboldCpp URLs are not working; please provide documentation, a tutorial, or a video about enabling local models. Thank you