Love this plugin so far! Streaming works well with Ollama, but I'd love to be able to use this with LM Studio to enable streaming with local LLMs on Windows too. I've found LM Studio's server setup to be much easier than Ollama's: more model choices are provided, and it works on Windows.
Since LM Studio has an OpenAI-style API, I've seemingly been able to get it working with both the LocalAI and OpenAI base URL settings under the Advanced tab. This is great, but I have to wait for inference to finish before I get any output from LM Studio.
Streaming itself does work otherwise; I believe all the API request needs is `"stream": true` added on the LM Studio side.
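Something like this is what I was imagining (a minimal sketch, assuming LM Studio's default local server port of 1234 and the standard OpenAI chat completions endpoint; the model name is a placeholder, since LM Studio serves whatever model is currently loaded):

```ts
// Sketch: enabling streaming on an LM Studio chat completion request.
// Assumes LM Studio's local server is running on its default port (1234).
const response = await fetch("http://localhost:1234/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "local-model", // placeholder; LM Studio uses the loaded model
    messages: [{ role: "user", content: "Hello" }],
    stream: true, // the one flag this request is asking for
  }),
});
```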
I'm not sure how much work is needed to handle the streaming within the plugin itself. I tried to make this change quickly to the API calls in a fork, but completions and streaming seem to be configured differently, so more work would be required.
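In case it helps, here's a rough sketch of what consuming the stream might look like, continuing from the request above (assuming LM Studio follows the standard OpenAI server-sent-events format of `data: {...}` lines ending with `data: [DONE]`; `appendToEditor` is a hypothetical stand-in for however the plugin renders tokens):

```ts
// Hypothetical sink for streamed tokens; the real plugin would append to the note.
const appendToEditor = (text: string) => console.log(text);

const reader = response.body!.getReader();
const decoder = new TextDecoder();
let buffered = "";

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffered += decoder.decode(value, { stream: true });

  // Each SSE event arrives as a "data: {...}" line, newline-delimited.
  const lines = buffered.split("\n");
  buffered = lines.pop() ?? ""; // keep any partial line for the next chunk
  for (const line of lines) {
    if (!line.startsWith("data: ")) continue;
    const payload = line.slice(6).trim();
    if (payload === "[DONE]") continue;
    const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (delta) appendToEditor(delta);
  }
}
```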
For reference, Obsidian Copilot has this implemented.
Thanks for the plugin - great work!