[Bug]: openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: xxxxx********************xxxx #10501
Comments
Does that work for you?
Thank you for the quick response. It does not work for me; it still gives the same error.
what if you do this with your API key

```python
service_context = ServiceContext.from_defaults(
    llm=OpenAI(temperature=0, model="gpt-3.5-turbo", api_key="sk-...")
)
print(service_context.llm.complete("Hello!"))
```

If that still doesn't work, I think you need to generate a new API key
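One thing worth ruling out at this step: the openai client falls back to the `OPENAI_API_KEY` environment variable when no `api_key` argument is passed, so a stale or revoked key lingering in the environment can produce exactly this 401 even though the key you think you're using is valid. A pure-Python sanity check of that precedence (no llama-index needed; `key_source` is an illustrative helper, not a library function):

```python
import os

def key_source(explicit_key=None):
    """Report which API key would be used: an explicitly passed key
    wins, otherwise the OPENAI_API_KEY environment variable."""
    if explicit_key:
        return ("argument", explicit_key)
    env_key = os.environ.get("OPENAI_API_KEY")
    if env_key:
        return ("environment", env_key)
    return ("none", None)

# Simulate a stale key lingering in the environment:
os.environ["OPENAI_API_KEY"] = "sk-old-revoked-key"
print(key_source())                    # → ('environment', 'sk-old-revoked-key')
print(key_source("sk-new-valid-key"))  # → ('argument', 'sk-new-valid-key')
```

If the environment variable wins, either unset it or pass `api_key=` explicitly as in the snippet above.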
I am using a key generated by my organization. It works for other calls, as shown below. Is there any other reason why it could fail in the above case? Log below:
However, it's important to note that not all Azure AI services have the same level of support for customer managed keys. You should always check the documentation for the specific service you're interested in to confirm whether this feature is supported and to understand any limitations or requirements.
🤖 Based on the information you've provided, it seems like you're using the Azure OpenAI service, which has a different authentication mechanism compared to the standard OpenAI API. The Azure OpenAI service uses Azure AD for token authentication, and the token might have expired, leading to the "Incorrect API key provided" error. In the LlamaIndex codebase, here is the relevant code:

```python
def _get_credential_kwargs(self, **kwargs: Any) -> Dict[str, Any]:
    if self.use_azure_ad:
        self._azure_ad_token = refresh_openai_azuread_token(self._azure_ad_token)
        self.api_key = self._azure_ad_token.token
    return {
        "api_key": self.api_key,
        "max_retries": self.max_retries,
        "timeout": self.timeout,
        "azure_endpoint": self.azure_endpoint,
        "azure_deployment": self.azure_deployment,
        "api_version": self.api_version,
        "default_headers": self.default_headers,
        "http_client": self._http_client,
        **kwargs,
    }
```

You can find this method in the llama_index/llms/azure_openai.py file. Ensure that […]. If you're still facing issues, could you please provide more details about your setup? Specifically, it would be helpful to know how you're initializing the […].

Sources

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
@lenka1 ohhhh you are using azure! You should follow the azure setup here
Thank you for the information. Unfortunately I am still getting the 401 error with the call:

```
File "/home/tlmalla/.local/lib/python3.10/site-packages/openai/_base_client.py", line 959, in _request
```
```python
llm = AzureChatOpenAI(
```
Did you follow the guide exactly? Set up the llm and embedding model, and set the global service context? The error message above is for embeddings
@lenka1 , I'm sad to see that we aren't aligned after multiple rounds of clarification. To ensure we can be on the same page, please share a complete script. The script should:
Clarifications:
I encounter a similar issue. llama-index==0.9.48

```python
llm = AzureOpenAI(
```

I use the following code to test, and the llm works fine. However, when I use an agent (OpenAIAgent or ReActAgent), it shows the error message:
@YanZhangADS what tools are you using? Are they query engines? Did you properly set up azure for any of them? See the general azure setup here (note it's for v0.10.x syntax; before that you would set a global service context)
Thanks for your reply! This tool is a PandasQueryEngine object (from llama_index.query_engine import PandasQueryEngine). It extracts information from a local csv file. No embedding or indexing is used. "Did you properly setup azure for any them?" -- No, I did not set up azure on this tool.
@YanZhangADS the pandas query engine uses an LLM to write pandas queries. You'll need to pass in the LLM if you want to use it with azure 👍
Thanks @logan-markewich for pointing out the direction! My issue is resolved. For llama-index==0.9.48, if you use AzureOpenAI you need to add the following. Reference: https://docs.llamaindex.ai/en/latest/module_guides/supporting_modules/service_context_migration.html#

```python
from llama_index import ServiceContext, set_global_service_context
```
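For readers hitting the same thing: the pre-0.10.x pattern those imports belong to can be sketched as below. This is a minimal configuration sketch, assuming llama-index 0.9.x is installed and that `llm` and `embed_model` are already-configured Azure instances; it is not runnable without valid Azure credentials.

```python
from llama_index import ServiceContext, set_global_service_context

# Build a service context carrying the Azure-configured models, then
# register it globally so engines and agents pick it up by default
# instead of falling back to the default OpenAI client (which causes
# the 401 above).
service_context = ServiceContext.from_defaults(
    llm=llm,
    embed_model=embed_model,
)
set_global_service_context(service_context)
```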
I'm going to close this issue. I think there's more than enough details for @lenka1 to get it straightened out. The docs I shared above give an exact example when using the latest version, otherwise @YanZhangADS gave a good example too. |
Bug Description
I am calling the aevaluate method for AnswerRelevancyEvaluator and keep getting this message: openai.AuthenticationError: Error code: 401 - {'error': {'message': 'Incorrect API key provided: xxxxx********************xxxx. The same key works for other OpenAI calls. Please help.
```python
judges["answer_relevancy"] = AnswerRelevancyEvaluator(
    service_context=ServiceContext.from_defaults(
        llm=OpenAI(temperature=0, model="gpt-3.5-turbo")
    )
)

eval_tasks.append(
    await judges["answer_relevancy"].aevaluate(
        query=query,
        response=answer,
        sleep_time_in_seconds=1.0,
    )
)
```
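A quick debugging aid for a 401 like this: the error body masks the rejected key, leaving only the first five and last four characters visible. Reproducing that mask locally lets you compare the key shown in the error against the key you believe is being sent. The `mask_key` helper below is illustrative, matching the shape of the mask in this error, not the API's actual masking code:

```python
def mask_key(key: str) -> str:
    """Mask an API key the way the 401 error above displays it:
    first 5 and last 4 characters visible, the middle starred out."""
    if len(key) <= 9:
        # Too short to show anything safely; star the whole key.
        return "*" * len(key)
    return key[:5] + "*" * 20 + key[-4:]

expected = "sk-abcdefghijklmnopqrstuvwxyz123456"
print(mask_key(expected))  # → sk-ab********************3456
```

If the visible prefix/suffix in the error do not match your intended key, some other key (an environment variable, a differently-configured service context) is being sent instead.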
Version
llama-index==0.9.44
Steps to Reproduce
None
Relevant Logs/Tracebacks