Adding a custom API endpoint #331
Comments
As far as I know, you need an OpenAI API key to use this tool; you can get one for free on the OpenAI website. I am not sure what you mean by a custom, free API endpoint. |
@BhagatHarsh, I do see the point you are trying to make here, but trying to get around with proxy servers amounts to unauthorized access to paid services or content, which is unethical and may violate terms of service or legal agreements. It's important to respect the rights and policies set by the service provider, so let's maintain the integrity of the project. Of course, there could be support for running the model locally or on Azure services, so that we can get around paying for tokens. |
@shubham-attri Completely agreed; that is why I asked before making a PR. But does the feature violate any policies here? All I want is a way to change the api_base via export instead of editing the code every time. How people use it is at their discretion. |
FastChat provides an OpenAI-compatible API interface to open-source models, and Helicone.ai also requires changing the api_base to use their product. I am not supporting unethical use, but I do see use cases for adding an api_base option. You would also have to allow the user to define their own model. |
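To make the suggestion above concrete, here is a minimal sketch of reading both the endpoint and the model name from environment variables instead of hard-coding them. The variable name `MODEL_NAME` is illustrative, not an existing gpt-engineer setting:

```python
import os

def client_config(env=None):
    """Build client settings from the environment so the user can point at
    any OpenAI-compatible backend (FastChat, LocalAI, a Helicone proxy)
    and name their own model without editing source code."""
    env = os.environ if env is None else env
    return {
        # Fall back to the official endpoint when nothing is exported.
        "api_base": env.get("OPENAI_API_BASE", "https://api.openai.com/v1"),
        # Local servers expose their own model names (e.g. a Vicuna build).
        "model": env.get("MODEL_NAME", "gpt-4"),
    }
```

With nothing exported this yields the stock OpenAI configuration; with `OPENAI_API_BASE=http://localhost:8000/v1` it targets a local server.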
Also see https://localai.io. Along these lines, I'm also wondering about the TERMS_OF_USE.md that doesn't seem to exist, and how it differs from the provisions outlined in the LICENSE. |
Same problem here; it doesn't seem to consider. As for the folks mentioning unethical practices and other "spooky" nonsense: Microsoft and others offer private instances of the OpenAI models where your data is private and not shared, unlike when using OpenAI's API. |
@jet-georgi-velev |
@jet-georgi-velev I see this can be a useful feature, especially for working with locally deployed LLMs. In fact, setting the API base in the environment should work by itself, except that the current version verifies model availability via OpenAI by default, which is not what we want if we're just "borrowing OpenAI's API" for local inference and not actually contacting OpenAI's service. I see this issue has been around for a few days now, so I put together a very short PR that should solve it. |
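A minimal sketch of the kind of fix described: only run the model-availability check when the client is actually pointed at OpenAI. The helper name is hypothetical and not taken from the actual PR:

```python
DEFAULT_API_BASE = "https://api.openai.com/v1"

def should_verify_model(api_base, default_base=DEFAULT_API_BASE):
    """Check model availability against OpenAI only when we are really
    talking to OpenAI; a custom base (local FastChat, Azure, a proxy)
    may not implement the model-listing endpoint at all."""
    return api_base.rstrip("/") == default_base.rstrip("/")
```

The idea is simply that a non-default base implies local or third-party inference, so the OpenAI-specific check is skipped.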
@JinchuLi2002 I've checked your PR and it won't work, at least for the Azure implementation, since that doesn't use `model` but `deployment_id`. I've got a more thorough patch, but I'm away and can submit it on Tuesday.
|
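The Azure distinction mentioned above can be sketched like this; the helper is hypothetical and not the actual patch:

```python
def completion_kwargs(backend, name):
    """Azure's flavour of the API selects a deployment via `deployment_id`,
    while OpenAI and OpenAI-compatible servers select one via `model`,
    so the keyword passed to ChatCompletion.create() must vary by backend."""
    key = "deployment_id" if backend == "azure" else "model"
    return {key: name}
```

A patch that only handles `model` would therefore break the Azure code path, which is the objection raised above.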
@jet-georgi-velev |
I think having a base API is not that good of an idea.
For more information, go to: |
@SumitKumarDev10
|
@SumitKumarDev10 Hi Sumit, I think there's some misunderstanding here.
1. It's not about sharing API secret keys, but merely about adding an option to send the query to a custom URL (say you set up a local LLM on your own GPU, like FastChat; you can then make openai.ChatCompletion.create() send your query to http://localhost:8000 or wherever it's deployed).
2. The OpenAI API itself supports switching API endpoints via export OPENAI_API_BASE=; it's just that gpt-engineer has some bugs that block the proper use of it. |
Thank you, Jinchu, for correcting me and clearing up my confusion. I never knew what an LLM was, so your knowledge and experience on these topics is quite fascinating, at least for a beginner like me. |
Yeah, when you're using the exports locally, you're using a fictitious key you just made up. If this is allowed to be implemented properly, all it does is verify that you're allowing your own device to interface with another local port on your own machine. It cannot be used to access OpenAI or any other paid service, so such keys never need to be shared with any person, port, or outside machine not explicitly defined and allowed by the end user and owner. The default "API key" for textgen is "dummy", a valueless placeholder; as long as both instances have "dummy" set as the key, they can validate their connection. These keys have no monetary value, so there's no point in trading them and zero harm if you do: because they have no value, they are incapable of being used in any form of theft, misappropriation, or laundering of digital credits. Every stipulation about sharing real keys in no way applies to an infinite supply of random characters. If I tell you I sometimes use I-AM-NUMBER-1, neither of us is capable of causing or suffering a legally actionable, quantifiable damage-in-fact. xD |
@noxiouscardiumdimidium I am sure you have written something valuable, interesting, and fascinating, but I am sorry: I am still a beginner and don't really know what you are talking about. Please don't take this reply offensively; I am just being honest. |
I know; I made a clearer one. It's in the discussions, under "Gpt-Engineer+Textgen". The point of the legal breakdown is that OpenAI allows this, and the server and GitHub rules only apply to real keys with monetary value, not to security passwords, which is what you're actually exporting. The only thing OpenAI asks in order to use their API for local LLM support is to confirm that the end user has given permission to access the local port, by both endpoints exporting the same key. |
Ok, Thank You
|
A PR is open for this; closing already to keep things tidy 🏃 |
I use a custom, free API endpoint, so I would like to add a feature similar to:
so I would like to have:
If you don't export it, it will use the default base URL.
Is it appropriate for me to work on this? If it is already implemented, please let me know.
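The requested fallback behaviour could look something like this sketch; the function name is illustrative, not existing gpt-engineer code:

```python
import os

def resolve_api_base(env=None):
    """Return the endpoint the client should target: the value of
    OPENAI_API_BASE if it was exported, otherwise the official OpenAI
    base URL, so users who export nothing see no change in behaviour."""
    env = os.environ if env is None else env
    return env.get("OPENAI_API_BASE", "https://api.openai.com/v1")
```

Exporting `OPENAI_API_BASE=http://localhost:8000/v1` before launching would then redirect all requests to the custom endpoint.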