Hello. Thank you for open-sourcing this awesome tool.
I was watching the video by Matthew Berman, and he suggested improving the open-source nature of the project by replacing the OpenAI embeddings with the open-source Nomic embeddings.
Thanks for the reply @ipty. @AltayYuzeir I can confirm that Nomic and other local embedding models do work. If you use a local embedding model together with local chat completions, time-to-first-token can increase considerably depending on your machine. I recommend decreasing the values associated with the RAG process at the bottom of the config when using local inference. Cheers!
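To illustrate the kind of tuning meant above, here is a minimal sketch of what "decreasing the RAG values" might look like. The key names below are purely illustrative assumptions, not the project's actual config schema; check the bottom of your own config file for the real names.

```yaml
# Hypothetical RAG settings -- key names are illustrative only.
rag:
  chunk_size: 256   # smaller chunks embed faster on local hardware
  top_k: 3          # retrieving fewer chunks shortens the prompt,
                    # which reduces time-to-first-token with local models
```

The general idea: with local inference, every retrieved chunk adds prompt tokens the local model must process before producing its first token, so lowering retrieval counts and chunk sizes trades some recall for noticeably snappier responses.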