Replies: 11 comments 1 reply
-
LM Studio added better embedding support recently. I'm going to test it out and perhaps post a video about it.
-
We need to add "LM Studio" as an embedding provider and support /v1/embeddings.
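For reference, LM Studio's local server exposes an OpenAI-compatible API, so a /v1/embeddings request would look roughly like the sketch below. The model name and port here are illustrative assumptions (1234 is LM Studio's stock server port), not settings verified against any particular version:

```python
import json

# Build an OpenAI-compatible /v1/embeddings request.
# Base URL and model name are examples; substitute your own.
def build_embeddings_request(base_url, model, texts):
    url = base_url.rstrip("/") + "/v1/embeddings"
    body = json.dumps({"model": model, "input": texts}).encode("utf-8")
    return url, body

url, payload = build_embeddings_request(
    "http://localhost:1234",  # LM Studio's default local server port
    "text-embedding-nomic-embed-text-v1.5",
    ["hello from my vault"],
)

# Actually sending it requires a running server with an embedding model loaded,
# e.g. with urllib.request:
#   req = urllib.request.Request(url, data=payload,
#                                headers={"Content-Type": "application/json"})
#   resp = json.loads(urllib.request.urlopen(req).read())
#   vector = resp["data"][0]["embedding"]
```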
-
Hi, I have the same problem. I've added an embeddings model in LM Studio (I tried both nomic-ai and bge-large), then added a custom model in the QA section of the Obsidian Copilot settings (I specified the embeddings model as the name and the usual localhost:11434/v1 as the base URL).
However, nothing happens after that. Obsidian Copilot keeps showing the following notification.
And when I ask something in Vault QA mode, Copilot waits indefinitely...
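One thing worth double-checking in setups like the one above: 11434 is Ollama's stock port, while LM Studio's local server defaults to 1234, so a base URL of localhost:11434/v1 only reaches LM Studio if its port was changed. A tiny sanity check (the port numbers are the stock defaults; custom configurations will differ):

```python
from urllib.parse import urlparse

# Stock defaults: LM Studio serves on 1234, Ollama on 11434.
# A base URL pointing at 11434 usually targets Ollama, not LM Studio.
LMSTUDIO_DEFAULT_PORT = 1234
OLLAMA_DEFAULT_PORT = 11434

def port_of(base_url):
    return urlparse(base_url).port

print(port_of("http://localhost:11434/v1"))  # 11434 -> Ollama's default
print(port_of("http://localhost:1234/v1"))   # 1234  -> LM Studio's default
```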
-
I was able to solve the issue by upgrading LM Studio to version 0.3.2.
-
I am also using the latest version.
-
In LM Studio I have the port configured. As for the QA settings, they are the same except for the model and provider. But it seems the QA settings don't matter; it works anyway...
-
Thank you. With these settings, it worked fine.
-
I'm having trouble with QA using LM Studio too. The tips Morig kindly wrote for us don't seem to work for QA. I've been going back and forth for hours trying to figure out what works and what doesn't, and there isn't much help anywhere on how to do this. Maybe LM Studio or Copilot or both were updated and broke this; I can't figure it out. Any suggestions?
Thanks.
-
I would like to add that the API token should be made optional.
-
@Armandeus66 It's difficult to say what exactly is wrong with your setup, but try the following steps:
While experimenting, enable Verbose Logging in the LM Studio server. It will log all requests and errors, which can help you understand what's wrong. By the way, I agree that Obsidian Copilot should have an LM Studio option in the QA LLM provider menu.
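To make those verbose logs easier to read: on success the server should return an OpenAI-style embeddings response, and pulling the vectors out of it looks like the sketch below. The field names follow the OpenAI embeddings response format; the model name and the numbers are made up for illustration:

```python
import json

# Shape of an OpenAI-compatible embeddings response, i.e. what verbose
# logging should show the server returning on success. Values are fake.
sample_response = json.loads("""
{
  "object": "list",
  "data": [{"object": "embedding", "index": 0, "embedding": [0.01, -0.02, 0.03]}],
  "model": "text-embedding-nomic-embed-text-v1.5"
}
""")

# One vector per input string, in request order.
vectors = [item["embedding"] for item in sample_response["data"]]
print(len(vectors), len(vectors[0]))  # 1 3
```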
-
Thank you very much for that information! EDIT: I have the latest version of LM Studio. I am running the default text-embedding-nomic-embed-text-v1.5 that came with LM Studio for QA. Everything is running with CORS on. My QA settings are:
Model name: text-embedding-nomic-embed-text-v1.5
Provider: 3rd Party
Base URL: http://localhost:11434/v1
API key: lm-studio (necessary?)
The QA throws an error and I can't get it to run.
-
I see lm-studio in general settings but not in QA settings.
So far it works great for chat with lm-studio but I can't get it to work with QA mode.
Being able to use a local LLM to index my entire vault sounds like the best deal ever.
Any suggestion?
Thanks