[Question]: Is there a way we can select an OpenAI key dynamically? #7377
🤖 Hello, Yes, there is a way to dynamically switch between multiple OpenAI API keys on the fly. The LlamaIndex framework allows for this by passing the api_key argument directly when constructing the OpenAI object:

from llama_index.llms.openai import OpenAI
# Initialize with the first API key
openai_obj1 = OpenAI(api_key='sk-yourfirstapikey')
# Use the OpenAI object
# ...
# Initialize with the second API key
openai_obj2 = OpenAI(api_key='sk-yoursecondapikey')
# Use the second OpenAI object
# ...

This method allows you to switch API keys dynamically within your code. However, please be aware that this method is not thread-safe: if you are using multiple threads, you should ensure that the API key is set correctly in each thread. Additionally, the LlamaIndex framework does support initializing multiple OpenAI objects with different API keys simultaneously. This is evident from the embedding class's constructor, which accepts an api_key through its **kwargs. Here is the relevant code:

def __init__(
self,
mode: str = OpenAIEmbeddingMode.TEXT_SEARCH_MODE,
model: str = OpenAIEmbeddingModelType.TEXT_EMBED_ADA_002,
deployment_name: Optional[str] = None,
embed_batch_size: int = DEFAULT_EMBED_BATCH_SIZE,
callback_manager: Optional[CallbackManager] = None,
**kwargs: Any,
) -> None:
validate_openai_api_key(
kwargs.get("api_key", None), kwargs.get("api_type", None)
)
self._query_engine = get_engine(mode, model, _QUERY_MODE_MODEL_DICT)
self._text_engine = get_engine(mode, model, _TEXT_MODE_MODEL_DICT)
super().__init__(
embed_batch_size=embed_batch_size,
callback_manager=callback_manager,
model_name=model,
deployment_name=deployment_name,
openai_kwargs=kwargs,
)

In this code, the api_key is read from kwargs and validated by validate_openai_api_key, so each embedding instance can carry its own key. I hope this helps! Let me know if you have any other questions.
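Building on the pattern above, the "on the fly" part can be kept separate from the client objects entirely: a small key-selection helper decides which key to use, and the chosen key is then passed to whichever constructor needs it. A minimal sketch using only the standard library (the key values and helper name are illustrative, not part of llama_index):

```python
from itertools import cycle

# Illustrative pool of keys; replace with your real keys.
API_KEYS = ["sk-firstkey", "sk-secondkey"]
_key_iter = cycle(API_KEYS)

def next_api_key() -> str:
    """Return keys round-robin; each call advances to the next key."""
    return next(_key_iter)
```

The returned key can then be passed as api_key to OpenAI (or to the embedding class) at construction time, so switching keys is just a matter of constructing a new client with the next key.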
|
Cool, thank you |
Actually, it doesn't work |
@rendyfebry What's the issue? Just tested locally myself, seems to work fine |
Thanks for testing that. When I do it standalone like that, it seems to work, but when I use that with the rest of the system it doesn't work.
|
Found one problem here: when the embed model is resolved in https://github.com/jerryjliu/llama_index/blob/main/llama_index/embeddings/utils.py#L52, the API key is not passed down to the default embed model. For now, I will pass the embed_model to ServiceContext myself, but it would be great if we could pass the api_key down the line automatically. |
Since the embed model and the llm are completely separate abstractions, it's a bit tricky to nicely pass the OpenAI key to both of them under the hood, since the key is specific to OpenAI. For now, I think having to initialize both the embed model and the llm explicitly is a fair compromise for this use case :) |
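The compromise described above can be wrapped in a small factory so the key is supplied in one place. A sketch assuming the llama_index API of this era (ServiceContext.from_defaults, and that OpenAIEmbedding forwards api_key through **kwargs as shown in the constructor earlier); imports are kept local to the function so the sketch stays self-contained:

```python
def build_service_context(api_key: str):
    """Build a ServiceContext whose llm and embed model both use api_key.

    Sketch only: assumes llama_index is installed and that both
    constructors accept api_key, as discussed in this thread.
    """
    from llama_index import ServiceContext
    from llama_index.embeddings import OpenAIEmbedding
    from llama_index.llms import OpenAI

    llm = OpenAI(api_key=api_key)
    embed_model = OpenAIEmbedding(api_key=api_key)
    return ServiceContext.from_defaults(llm=llm, embed_model=embed_model)
```

Switching keys on the fly then amounts to calling build_service_context with a different key and using the returned context for subsequent queries.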
Question Validation
Question
As per the documentation, the only possible way to inject an OpenAI key is by using the environment variable OPENAI_API_KEY. If, let's say, I have multiple API keys and want to change between them on the fly, what's the best way to do that?
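For completeness, the environment-variable route mentioned above can also be changed at runtime, though os.environ is process-global, so it is unsuitable when different requests need different keys concurrently. A minimal sketch (the key value is a placeholder):

```python
import os

def use_api_key(key: str) -> None:
    # Process-global: affects every client created after this call,
    # in every thread, which is why it is not a good fit for per-request keys.
    os.environ["OPENAI_API_KEY"] = key

use_api_key("sk-yourfirstapikey")
```

Passing api_key explicitly at construction time, as discussed in the comments above, avoids this global state.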