
How to configure API using Azure OpenAI? #179

Open
peiyaoli opened this issue Aug 18, 2023 · 8 comments
Labels: enhancement (New feature or request), question (Further information is requested)

Comments

@peiyaoli

Hi, we are using GPT models provided through Azure. How should we configure the API token for that? Many thanks!

@caufieldjh (Member) commented Aug 22, 2023

Hi @peiyaoli, thanks for your question. OntoGPT does not currently have a way to interface with the Azure OpenAI service. It looks like that feature will land in the llm package very soon, though: simonw/llm#178.
Once it's available in llm, we'll add it to OntoGPT.
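For context, and independent of OntoGPT: Azure OpenAI needs an endpoint, API version, key, and deployment name rather than a single OpenAI token. Below is a minimal sketch using the openai Python package (version 1.0 or later); the resource and deployment names are placeholders.

from openai import AzureOpenAI  # requires openai>=1.0

client = AzureOpenAI(
    api_key="your-azure-key",                               # key from the Azure portal
    api_version="2023-05-15",                               # a version your resource supports
    azure_endpoint="https://my-resource.openai.azure.com",  # placeholder endpoint
)

# "my-gpt-35-deployment" is the Azure *deployment* name, not the model name
response = client.chat.completions.create(
    model="my-gpt-35-deployment",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)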

@peiyaoli (Author)

Many thanks

@caufieldjh added the enhancement (New feature or request) and question (Further information is requested) labels on Sep 21, 2023
@ishaan-jaff

Hi @peiyaoli @caufieldjh, I believe we can help with this issue. I'm the maintainer of LiteLLM: https://github.com/BerriAI/litellm

TL;DR:
We let you use any LLM as a drop-in replacement for gpt-3.5-turbo.
If you don't have access to a given LLM, you can use the LiteLLM proxy to make requests to it.

You can use LiteLLM in the following ways:

With your own API key:

This calls the provider API directly:

from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["COHERE_API_KEY"] = "your-cohere-key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)

Using the LiteLLM proxy with a LiteLLM key

This is useful if you don't have direct access to a model (e.g. Claude) but want to use the open-source LiteLLM proxy to access it:

from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "sk-litellm-5b46387675a944d2"  # [OPTIONAL] replace with your OpenAI key
os.environ["COHERE_API_KEY"] = "sk-litellm-5b46387675a944d2"  # [OPTIONAL] replace with your Cohere key

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)

@peiyaoli (Author) commented Oct 9, 2023

Many thanks, bro!

@peiyaoli (Author)

Hi @caufieldjh, just want to check whether this feature has been added yet.

@caufieldjh (Member)

Hi @peiyaoli, it looks like the change made to llm hasn't been merged, for whatever reason.
I do like the LiteLLM solution that @ishaan-jaff posted above, but I haven't been able to implement it yet.
This looks like it would be a popular feature, so I'll prioritize it and post an update on this issue once it's available.

@cmungall (Member)

@caufieldjh

I think we should adopt litellm ASAP

My original plan was to use llm, as we do in curategpt, since it provides plugins for various models. But it looks like we never did this and instead implemented variants of the SPIRES engine for gpt4all. I'm not a huge fan of that approach, as it violates DRY.

I'm also less of a fan of my original strategy because it's starting to feel like Simon doesn't have much time to maintain llm or respond to community PRs.

litellm is particularly nice for us here because it simulates the OpenAI API, so we barely have to modify any of the original knowledge engine code to get it to work. It can be used either directly or via a proxy server that is very easy to run.
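A sketch of what the proxy route could look like, assuming a proxy started locally (e.g. with `litellm --model azure/my-gpt-35-deployment`; the deployment name is a placeholder and the listening port varies by LiteLLM version) and the stock OpenAI client pointed at it:

import openai

# Point the standard OpenAI client at the local LiteLLM proxy;
# the proxy holds the real Azure credentials, so the key here is a dummy.
client = openai.OpenAI(
    api_key="anything",
    base_url="http://localhost:4000",  # adjust to the port your proxy reports
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the proxy maps this to the configured backend
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)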

@ishaan-jaff thanks for this great tool!

@caufieldjh (Member)

> My original plan was to use llm, as we do in curategpt, as it provides plugins for various models.

This is how gpt4all is currently implemented: it uses llm-gpt4all. The way it's called will change a bit in #306, but it's still calling that module. I do agree that the llm package feels like it's falling behind w.r.t. open models, GPU support, integration with other infrastructure, etc. litellm looks like it checks all the boxes, so I'll prioritize implementing it.
