Is your feature request related to a problem? Please describe.
The OpenAI API enforces rate limits per account and model that can be reached quite easily when generating a new dataset.
Describe the solution you'd like
Allow providing more than one OPENAI_API_KEY (from different OpenAI accounts) to "load balance" the requests to the OpenAI API between more than one account.
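One way to sketch the requested behavior, without assuming anything about this project's internals, is a simple round-robin rotation over a pool of keys. The `KeyPool` helper and the `OPENAI_API_KEYS` environment variable below are hypothetical names, not existing APIs:

```python
import itertools
import os


class KeyPool:
    """Hypothetical round-robin pool of API keys for spreading requests
    across several OpenAI accounts."""

    def __init__(self, keys):
        if not keys:
            raise ValueError("at least one API key is required")
        # itertools.cycle yields the keys in order, wrapping around forever.
        self._cycle = itertools.cycle(keys)

    def next_key(self):
        # Each call returns the next key in round-robin order.
        return next(self._cycle)


# Keys could be read from a (hypothetical) comma-separated env var, e.g.:
# pool = KeyPool(os.environ["OPENAI_API_KEYS"].split(","))
pool = KeyPool(["sk-account-1", "sk-account-2"])
print(pool.next_key())  # sk-account-1
print(pool.next_key())  # sk-account-2
print(pool.next_key())  # sk-account-1
```

Round-robin is the simplest policy; a fuller implementation would also track per-key rate-limit responses and temporarily skip exhausted keys.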
Hey @gabrielmbmb, I'm the maintainer of LiteLLM. We let you create an OpenAI-compatible Router to maximize throughput by load balancing + queuing (beta).
I'd love to get your feedback if this solves your issue.
Here's the quick start:
import os

from litellm import Router

model_list = [
    {  # list of model deployments
        "model_name": "gpt-3.5-turbo",  # model alias
        "litellm_params": {  # params for litellm completion/embedding call
            "model": "azure/chatgpt-v-2",  # actual model name
            "api_key": os.getenv("AZURE_API_KEY"),
            "api_version": os.getenv("AZURE_API_VERSION"),
            "api_base": os.getenv("AZURE_API_BASE"),
        },
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "azure/chatgpt-functioncalling",
            "api_key": os.getenv("AZURE_API_KEY"),
            "api_version": os.getenv("AZURE_API_VERSION"),
            "api_base": os.getenv("AZURE_API_BASE"),
        },
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {
            "model": "gpt-3.5-turbo",
            "api_key": os.getenv("OPENAI_API_KEY"),
        },
    },
]

router = Router(model_list=model_list)

# openai.ChatCompletion.create replacement (call from an async context)
response = await router.acompletion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
)
print(response)