Enhancement: Change OPENAI_REVERSE_PROXY to only expect a network host / port. #1027
Comments
I noticed the CHATGPT_REVERSE_PROXY variable, so I set:

CHATGPT_REVERSE_PROXY: http://host.docker.internal:8070/v1
OPENAI_REVERSE_PROXY: http://host.docker.internal:8070/v1/chat/completions

just to see what would happen, and it polled the models as expected and was able to post chats as expected. This gets me where I wanted to be.
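As a rough sketch, that working combination written out as docker-compose environment entries (the host and port come from the values above and are assumptions about where your LocalAI instance is listening) might look like:

```yaml
# Sketch of the workaround described above: both variables point at the same
# LocalAI instance, but at different path depths.
# host.docker.internal:8070 is an assumption; use wherever your LocalAI listens.
environment:
  # Base /v1 path: in the test above, model polling worked with this set.
  CHATGPT_REVERSE_PROXY: "http://host.docker.internal:8070/v1"
  # Full chat completions path: in the test above, posting chats worked with this set.
  OPENAI_REVERSE_PROXY: "http://host.docker.internal:8070/v1/chat/completions"
```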
Hi, thanks for using LibreChat! This is related to #1026, except for the point about fetching models. What endpoint does LocalAI use for fetching models? I can try setting it up today to test.
Actually I'm a little confused here. It should be fetching models correctly with this:
I'm testing LocalAI in a minute.
In LocalAI, I see the models @
Relevant LocalAI docs: https://localai.io/advanced/index.html. For now, as a quick patch, I can add an environment variable to force the prompt payload, but to make the UI setup more flexible, I can make this an option on the frontend soon.
Thanks for looking into this so quickly! I'll have to check this out.
Contact Details
No response
What features would you like to see added?
Thanks so much for this great project. There aren't many working front ends like yours, let alone ones with all these great features. I do have a question, similar to the comment below, about how best to point LibreChat at my LocalAI container. It's already pretty much working, but I was wondering how feasible the change below might be.
I don't know how dumb an idea this might be, but is it possible to alter the code so that OPENAI_REVERSE_PROXY is not expected to provide any API paths (/v1, /v1/chat/completions), and those are just presumed? I'm guessing the path is built in for other use cases I'm not aware of, but in the event it's not, it would make integrating with LocalAI fully automatic.

More details
#403 (comment)
In the comment above, you point out how to hard-code the models in the event that you are running your own reverse proxy (or using the OPENAI_REVERSE_PROXY env var as a way to access local LLMs, e.g. via LocalAI).
If I set OPENAI_REVERSE_PROXY to a URL ending in /v1, LibreChat will grab the model list from LocalAI without issue from /v1/models, but will fail to post to /v1 because that's not the right endpoint.
If I set OPENAI_REVERSE_PROXY to a URL ending in /v1/chat/completions, then my posts go through and I can chat with the LocalAI models, but only if I've hard-coded them ahead of time.
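To make the trade-off concrete, here is a minimal sketch contrasting the two configurations described above; the host and port are assumptions, so substitute wherever your LocalAI instance is reachable:

```yaml
# Option 1 (assumed URL): the model list is fetched from /v1/models,
# but chat posts fail because /v1 is not the chat completions endpoint.
OPENAI_REVERSE_PROXY: "http://host.docker.internal:8070/v1"

# Option 2 (assumed URL): chat posts succeed,
# but the model list has to be hard-coded ahead of time.
OPENAI_REVERSE_PROXY: "http://host.docker.internal:8070/v1/chat/completions"
```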
Which components are impacted by your request?
No response
Pictures
No response
Code of Conduct