
adding a custom api end point #331

Closed
BhagatHarsh opened this issue Jun 22, 2023 · 19 comments
Comments

@BhagatHarsh

I use a custom, free API endpoint, so I would like to add a feature similar to:

export OPENAI_API_KEY=[your api key]

so that I could also have:

export OPENAI_API_BASE=[your custom api base url]

If you don't export it, the default base URL will be used.
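The requested behavior could be sketched like this (a minimal illustration only, assuming a Python codebase; the function name `resolve_api_base` and constant `DEFAULT_API_BASE` are hypothetical, not part of gpt-engineer):

```python
import os

# Official OpenAI endpoint, used whenever OPENAI_API_BASE is not exported.
DEFAULT_API_BASE = "https://api.openai.com/v1"

def resolve_api_base():
    """Return the exported OPENAI_API_BASE, or fall back to the default URL."""
    return os.environ.get("OPENAI_API_BASE", DEFAULT_API_BASE)
```

A user who exports nothing would keep talking to api.openai.com; exporting the variable redirects requests without touching the code.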

Is it appropriate for me to work on this? If it is already implemented, please let me know.

@mgrist

mgrist commented Jun 22, 2023

As far as I know, you need an OpenAI API key to use this tool. You can get one for free on the OpenAI website. I am not sure what you mean by a custom, free API endpoint.

@BhagatHarsh
Author

@mgrist, thank you for replying.
There are ways to get around the paywall: some people provide a free ChatGPT API through reverse-proxy servers, so the endpoint URL is different but behaves like OpenAI's. You can look further into it here: repo

@shubham-attri
Contributor

@BhagatHarsh, I do get the point you are trying to make here, but getting around the paywall with proxy servers is unauthorized access to a paid service, which is unethical and may violate terms of service or legal agreements. It's important to respect the rights and policies set by the service provider and to maintain the integrity of the project. Of course, there could be support for running the model on a local machine or on Azure services, so we can avoid paying for tokens.

@BhagatHarsh
Author

@shubham-attri, completely agreed; that is why I asked before making a PR.

But does the feature itself violate any policies here?

All I want is a way to change the api_base via export instead of editing the code every time.

How people use it is at their discretion.

@bsu3338

bsu3338 commented Jun 23, 2023

FastChat has an OpenAI-compatible API interface for open-source models:
https://github.com/lm-sys/FastChat/blob/main/docs/openai_api.md

Helicone.ai also requires changing the api_base to use their product:
https://docs.helicone.ai/quickstart/integrate-in-one-minute

I am not supporting unethical use, but I do see use cases for adding an api_base option. You would also have to allow the user to define their own model.

@fat-tire

Also see https://localai.io

Along these lines, I'm also wondering about the TERMS_OF_USE.md file, which doesn't seem to exist, and how it differs from the provisions outlined in the LICENSE.

> @BhagatHarsh, I do get the point you are trying to make here, but trying to get away with proxy servers and getting around is unethical and unauthorized access to paid services or content is considered unethical and may violate terms of service or legal agreements. It's important to respect the rights and policies set by the service provider and let's maintain the integrity of the project. Ofc there can be support to run it on local machine or Azure services so that we can run the model locally and get around without paying for tokens.

@jet-georgi-velev

Same problem here: it doesn't seem to consider OPENAI_API_BASE when running. You have to edit ai.py in order to use a different instance of GPT instead of the OpenAI one.

To the folks mentioning unethical practices and other "spooky" nonsense: Microsoft and others offer private instances of the OpenAI models where your data stays private and is not shared, unlike when using OpenAI's API.

@mgrist

mgrist commented Jun 26, 2023

@jet-georgi-velev
I didn't think about other providers using OpenAI models, so this feature request seems pretty valid to me. Thanks for the insight!

@JinchuLi2002

@jet-georgi-velev
I see this can be a useful feature, especially for working with locally deployed LLMs.

In fact, setting the API base in the environment should work by itself, except that the current version verifies model availability via OpenAI by default, which is not what we want if we're just "borrowing" the OpenAI API for local inference and not actually contacting OpenAI's service.

I see this issue has been around for a few days now, so I put together a very short PR that should solve it.
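The fix described here could be sketched roughly as follows (a hedged illustration, not the actual PR; the function name `choose_model` and the fallback model name are hypothetical):

```python
import os

FALLBACK_MODEL = "gpt-3.5-turbo"  # hypothetical fallback when a model is unavailable

def choose_model(requested, available_openai_models):
    """With a custom OPENAI_API_BASE exported, trust the user's model name
    instead of checking it against OpenAI's catalogue; otherwise keep the
    OpenAI-side availability check and fall back if the model is missing."""
    if os.environ.get("OPENAI_API_BASE"):
        return requested  # custom/local endpoint: skip the OpenAI check
    return requested if requested in available_openai_models else FALLBACK_MODEL
```

The point is only that the availability check should be conditional on the endpoint, so local inference is never blocked by a query to api.openai.com.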

@jet-georgi-velev

jet-georgi-velev commented Jun 29, 2023 via email

@JinchuLi2002

@jet-georgi-velev
Hi, thanks for the reply! Yeah, I should've made clear that my PR is only for (non-Azure) OpenAI-compatible LLMs; I am using local inference and had encountered the same issue as the OP.

@SumitKumarDev10

I don't think having a base API option is that good of an idea. It violates their Terms of Use:

> The sharing of API keys is against the Terms of Use. As you begin experimenting, you may want to expand API access to your team. OpenAI does not support the sharing of API keys.

For more information, see Best Practices for API Key Safety | OpenAI Help Centre.

@JinchuLi2002

JinchuLi2002 commented Jun 30, 2023

@SumitKumarDev10
Hi Sumit, I think there's some misunderstanding here.

  1. It's not about sharing API secret keys, but merely about adding an option to send the query to a custom URL (e.g. if you set up a local LLM on your own GPU, say with FastChat, you can make openai.ChatCompletion.create() send your query to http://localhost:8000 or wherever it's deployed) rather than the default api.openai.com/v1.
  2. The OpenAI API client itself supports switching API endpoints via export OPENAI_API_BASE=; it's just that gpt-engineer had some bugs that blocked its proper use.
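Point 1 can be made concrete with a short sketch. To keep it self-contained, a SimpleNamespace stands in for the real `openai` module here; the assumption (true of the 0.x-era client discussed in this thread) is that the client exposes module-level `api_key`/`api_base` settings that every request then uses:

```python
import os
from types import SimpleNamespace

# Stand-in for the 0.x `openai` module, which exposed module-level settings.
openai = SimpleNamespace(api_key=None, api_base="https://api.openai.com/v1")

# Point the client at a local OpenAI-compatible server (e.g. FastChat),
# falling back to the official endpoint when nothing is exported.
openai.api_key = os.environ.get("OPENAI_API_KEY", "dummy")  # often ignored by local servers
openai.api_base = os.environ.get("OPENAI_API_BASE", openai.api_base)

# A call like openai.ChatCompletion.create(model=..., messages=[...])
# would then go to openai.api_base instead of api.openai.com.
```

No key is shared with anyone; the exported value only configures where this user's own requests are sent.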

@SumitKumarDev10

SumitKumarDev10 commented Jun 30, 2023 via email

@noxiouscardiumdimidium

noxiouscardiumdimidium commented Jul 2, 2023

Thank you, Jinchu, for correcting me and clearing up my confusion. I never knew what an LLM was, so your knowledge and experience on these topics is quite fascinating, at least for a beginner like me.


Yeah, when you're using the exports locally, you're using a fictitious key you just made up. If the option is implemented properly, all it does is verify that you're allowing your own device to interface with another local port on your own machine. It cannot be used to access OpenAI or any other paid service, so these keys never need to be shared with any person, port, or outside machine not explicitly defined and allowed by the end user and owner.

The default "api-key" for textgen is "dummy", i.e. a valueless placeholder: as long as both instances have "dummy" set as the key, they can validate their connection. Such keys have no monetary value, so there's no point in trading them and zero harm if you do; being worthless, they are incapable of being used in any form of theft, misappropriation, or trading of digital credits as part of digital money laundering. Every stipulation about sharing real keys in no way applies to an infinite supply of random characters. If I tell you I sometimes use I-AM-NUMBER-1, neither of us is capable of causing or suffering a legally actionable, quantifiable "damage-in-fact" xD

@SumitKumarDev10

@noxiouscardiumdimidium I am sure you have written something valuable, interesting, and fascinating, but I am sorry: I am still a beginner and don't really know what you are talking about. Please don't take this reply offensively; I am just being honest.

@noxiouscardiumdimidium

> @noxiouscardiumdimidium I am sure you have written something valuable, interesting and fascinating but I am sorry because I am still as beginner and don't really know what you are talking about. Please don't take this reply offensively. It is just that I am being honest.

I know; I made a clearer one. It's in Discussions, under "Gpt-Engineer+Textgen". The point of the legal breakdown is that OpenAI allows this, and the server and GitHub rules only apply to real keys with monetary value, not to security passwords, which is what you're actually exporting. The only thing OpenAI asks of anyone using their API client for local LLM support is to confirm that the end user has given permission to access the local port, by having both endpoints export the same key.

@SumitKumarDev10

SumitKumarDev10 commented Jul 3, 2023 via email

@AntonOsika
Collaborator

A PR is open for this, so closing already to keep things tidy 🏃
