How is the project private? #905
Replies: 4 comments
-
@andrzejZdobywca You can use LOCAL models.
-
@logancyang where can I find the privacy policy? I've seen it written that it is "on the website" but I cannot find it on the website. Can you please provide a direct link?
-
https://github.com/logancyang/obsidian-copilot/blob/master/local_copilot.md
-
Agree with OP, it's a little misleading. You would think only the embedding model needs to run locally (privately), but what actually happens is that the embedding model indexes your vault, and then an external OpenAI API request sends that content out to OpenAI to do the heavy lifting. I'm experimenting with a local LLM via LM Studio for running both Chat and Embed.
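For anyone trying the same local setup: LM Studio exposes an OpenAI-compatible server, so keeping both chat and embedding local mostly comes down to pointing the plugin's requests at a localhost base URL instead of api.openai.com. This is just a sketch of what such a request payload looks like; the port, model name, and function names here are assumptions, not the plugin's actual code.

```python
import json

# Assumed default LM Studio local server address; yours may differ.
LOCAL_BASE_URL = "http://localhost:1234/v1"

def build_chat_request(note_text: str, question: str) -> dict:
    """Build an OpenAI-style chat payload. Sent to a local server,
    the note contents never leave the machine."""
    return {
        "url": f"{LOCAL_BASE_URL}/chat/completions",
        "payload": {
            # LM Studio serves whatever model is loaded locally.
            "model": "local-model",
            "messages": [
                {"role": "system", "content": "Answer using the note."},
                {"role": "user", "content": f"{question}\n\n{note_text}"},
            ],
        },
    }

req = build_chat_request("My private note contents", "Summarize this note")
print(req["url"])
print(json.dumps(req["payload"], indent=2))
```

Because only the base URL changes, the same OpenAI-shaped request works against either endpoint; the privacy difference is entirely in where it is sent.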
-
I am a little confused by how you market the product. You say it's privacy-first and focused on local storage, but you are using cloud providers.
Even if the data is stored locally, and even if you index it locally, you still have to send the content of the note to OpenAI or Claude servers. That completely misses the point of privacy. I feel like you are misleading users.