Would it be possible to migrate to other search engines or LLMs? #22
Comments
Yep, it's possible. For the search engine part, check out e.g. the search_with_bing() function and the photon's init() function. We currently support Bing, Google, and https://serper.dev/; it's probably easy to swap in your own search engine. For the LLM, you can replace the openai client to connect to other OpenAI-compatible servers. The related-questions part requires a bit of care, as your LLM server needs to support function calling / structured output. All Lepton LLM endpoints support this (with custom models too) out of the box. With others, you might need a bit of adjustment, or you can simply turn off related questions.
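As an illustration of the point above, a minimal sketch of the request an OpenAI-compatible server would receive, including the optional function-calling entry that related questions depend on. The function and tool names here are hypothetical placeholders, not the project's actual code:

```python
# Sketch only: builds a chat-completions payload in the shape OpenAI-compatible
# servers accept. The model name and the "ask_related_questions" tool are
# placeholder assumptions for illustration.
def chat_payload(model: str, question: str, want_related: bool = False) -> dict:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": question}],
    }
    if want_related:
        # Related questions require the target server to support
        # function calling / structured output, as noted above.
        payload["tools"] = [{
            "type": "function",
            "function": {"name": "ask_related_questions",
                         "parameters": {"type": "object"}},
        }]
    return payload
```

If the server you swap in does not accept a `tools` field, that is the part you would adjust or drop.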
I have successfully deployed on lepton.ai, and its lightning-fast response has left a deep impression on me :) For local deployment, I guess we need to modify the following two parts, right?
search_with_lepton/search_with_lepton.py Lines 256 to 260 in a6ac6da
search_with_lepton/search_with_lepton.py Line 215 in a6ac6da
For online deployment, is it not possible to switch to other non-Lepton-hosted models (even ones I deployed on Lepton myself)?
https://dashboard.lepton.ai/workspace/olcdfyso/explore/detail/search-by-lepton
For local deployment, you just need to do (in the command line):
and make sure you are logged in to your workspace. For other non-Lepton-hosted models, see above - essentially it is this line https://github.com/leptonai/search_with_lepton/blob/db27467/search_with_lepton.py#L257 You might want to start with the environment variable RELATED_QUESTIONS=False when using other API endpoints.
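A sketch of how such an environment-variable switch could be read in Python; the exact parsing in the project may differ, and the accepted values here are an assumption:

```python
import os

# Sketch: treat RELATED_QUESTIONS as on unless it is explicitly set to
# "false" or "0" (case-insensitive). This mirrors the suggestion above to
# start with RELATED_QUESTIONS=False for non-Lepton endpoints.
def related_questions_enabled(env=os.environ) -> bool:
    return env.get("RELATED_QUESTIONS", "true").strip().lower() not in ("false", "0")
```

With the feature off, the swapped-in LLM server no longer needs to support function calling.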
I'd like to implement an internal conversational search with a custom search engine and LLMs. Would it be easy to do so? (i.e., is there a pluggable interface/plugin system?)
Or is it tightly coupled to Bing search and Lepton LLMs?