
[FEATURE] Allow using multiple OpenAI API keys #118

Closed

gabrielmbmb opened this issue on Nov 27, 2023 · 1 comment
Labels
enhancement (New feature or request), team: ml (A tag for the ML team)

Comments

@gabrielmbmb
Member

Is your feature request related to a problem? Please describe.
The OpenAI API enforces rate limits per account and per model, and these can be reached quite easily when generating a new dataset.

Describe the solution you'd like
Allow providing more than one OPENAI_API_KEY (from different OpenAI accounts) so that requests to the OpenAI API can be "load balanced" across several accounts.
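For illustration only, here is a minimal sketch of what rotating requests across several keys could look like on the caller side. It is not the project's implementation; it assumes the openai>=1.0 Python client and hypothetical environment variables OPENAI_API_KEY_1 and OPENAI_API_KEY_2.

import itertools
import os

from openai import OpenAI

# Hypothetical env var names; any scheme for supplying several keys would work.
api_keys = [os.environ["OPENAI_API_KEY_1"], os.environ["OPENAI_API_KEY_2"]]

# One client per key, cycled round-robin so requests are spread across accounts.
clients = itertools.cycle([OpenAI(api_key=key) for key in api_keys])

def chat(messages, model="gpt-3.5-turbo"):
    # Each call uses the next client, so each account's rate limit is only hit
    # by a fraction of the traffic.
    client = next(clients)
    return client.chat.completions.create(model=model, messages=messages)

response = chat([{"role": "user", "content": "Generate one instruction example."}])
print(response.choices[0].message.content)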

@gabrielmbmb added the enhancement and team: ml labels on Nov 27, 2023
@krrishdholakia

Hey @gabrielmbmb, I'm the maintainer of LiteLLM. It lets you create an OpenAI-compatible Router to maximize throughput by load balancing + queuing (beta).

I'd love your feedback on whether this solves your issue.

Here's the quick start

import asyncio
import os

from litellm import Router

model_list = [{  # list of model deployments
    "model_name": "gpt-3.5-turbo",  # model alias
    "litellm_params": {  # params for litellm completion/embedding call
        "model": "azure/chatgpt-v-2",  # actual model name
        "api_key": os.getenv("AZURE_API_KEY"),
        "api_version": os.getenv("AZURE_API_VERSION"),
        "api_base": os.getenv("AZURE_API_BASE")
    }
}, {
    "model_name": "gpt-3.5-turbo",
    "litellm_params": {  # params for litellm completion/embedding call
        "model": "azure/chatgpt-functioncalling",
        "api_key": os.getenv("AZURE_API_KEY"),
        "api_version": os.getenv("AZURE_API_VERSION"),
        "api_base": os.getenv("AZURE_API_BASE")
    }
}, {
    "model_name": "gpt-3.5-turbo",
    "litellm_params": {  # params for litellm completion/embedding call
        "model": "gpt-3.5-turbo",
        "api_key": os.getenv("OPENAI_API_KEY"),
    }
}]

router = Router(model_list=model_list)

async def main():
    # openai.ChatCompletion.create replacement
    response = await router.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hey, how's it going?"}],
    )
    print(response)

asyncio.run(main())
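For the multiple-OpenAI-keys case in this issue specifically, the same Router pattern could register the same OpenAI model several times under one alias, once per key. A sketch, assuming the keys live in hypothetical environment variables OPENAI_API_KEY_1 and OPENAI_API_KEY_2:

import asyncio
import os

from litellm import Router

# Two deployments of the same OpenAI model, each using a key from a different account.
model_list = [
    {
        "model_name": "gpt-3.5-turbo",  # shared alias used by callers
        "litellm_params": {
            "model": "gpt-3.5-turbo",
            "api_key": os.getenv("OPENAI_API_KEY_1"),  # hypothetical env var
        },
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "gpt-3.5-turbo",
            "api_key": os.getenv("OPENAI_API_KEY_2"),  # hypothetical env var
        },
    },
]

router = Router(model_list=model_list)

async def main():
    # Requests to the alias are spread across the two underlying keys.
    response = await router.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "Hey, how's it going?"}],
    )
    print(response)

asyncio.run(main())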
