
Adding OPENAI_API_BASE #114

Closed
StudyingLover opened this issue Aug 15, 2023 · 2 comments

Comments

@StudyingLover

I hope you can add support for setting my own API endpoint (OPENAI_API_BASE) in the settings.

@ishaan-jaff

Hi @StudyingLover, I’m the maintainer of LiteLLM. We let you create a proxy server to call 100+ LLMs, and I think it can solve your problem.

Try it here: https://docs.litellm.ai/docs/proxy_server. I'd love your feedback on how we can make it better for you.

Usage

import openai
openai.api_base = "http://0.0.0.0:8000/" # proxy url
print(openai.ChatCompletion.create(model="test", messages=[{"role":"user", "content":"Hey!"}]))
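
Since the request is specifically about OPENAI_API_BASE, here is a minimal sketch of the same call configured through that environment variable instead of openai.api_base. It assumes the pre-1.0 openai Python SDK, which reads OPENAI_API_BASE and OPENAI_API_KEY from the environment when it is imported, plus a LiteLLM proxy already running at the URL shown; the key is a placeholder, since the proxy holds the real provider credentials.

import os

# Set the base URL and a placeholder key before importing openai,
# because the pre-1.0 SDK reads these environment variables at import time.
os.environ["OPENAI_API_BASE"] = "http://0.0.0.0:8000/"  # LiteLLM proxy URL
os.environ["OPENAI_API_KEY"] = "sk-anything"            # placeholder; the proxy holds the real keys

import openai

response = openai.ChatCompletion.create(
    model="test",
    messages=[{"role": "user", "content": "Hey!"}],
)
print(response)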

Creating a proxy server

Hugging Face Models

$ export HUGGINGFACE_API_KEY=my-api-key #[OPTIONAL]
$ litellm --model huggingface/bigcode/starcoder  # example huggingface/<org>/<model> id

Anthropic

$ export ANTHROPIC_API_KEY=my-api-key
$ litellm --model claude-instant-1

PaLM

$ export PALM_API_KEY=my-palm-key
$ litellm --model palm/chat-bison
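
As an illustration (not part of the original thread), here is a sketch of calling the Anthropic proxy started above from the same pre-1.0 openai client, assuming the proxy listens on port 8000 and returns standard OpenAI-shaped responses; the model name and port simply mirror the examples above and may differ in your setup.

import openai

openai.api_key = "sk-anything"            # placeholder; the proxy was started with the real ANTHROPIC_API_KEY
openai.api_base = "http://0.0.0.0:8000/"  # the locally running LiteLLM proxy

# The proxy accepts an OpenAI-style request and forwards it to claude-instant-1.
response = openai.ChatCompletion.create(
    model="claude-instant-1",
    messages=[{"role": "user", "content": "What does a proxy server do?"}],
)
print(response["choices"][0]["message"]["content"])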

@samyakkkk
Contributor

This will be worked on in the future. Closing this for now.
