Issue: openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}} #14018
Comments
🤖 Hello @arunkumars27, nice to meet you! I'm Dosu, a bot here to help you navigate issues, answer questions, and guide you in contributing to the LangChain repository. While we're waiting for a human maintainer, feel free to ask me anything. Based on the error message, the resource you're trying to access cannot be found. This could be due to several causes: an incorrect OpenAI API endpoint or key, an incorrect model name or deployment, an incorrect Azure Cosmos DB connection string, or an Azure Cosmos DB collection or index that does not exist. Here are the settings to double-check:
os.environ["OPENAI_API_BASE"] = "YOUR_OPEN_AI_ENDPOINT" # https://example.openai.azure.com/
os.environ["OPENAI_API_KEY"] = "YOUR_OPEN_AI_KEY"
os.environ["OPENAI_EMBEDDINGS_DEPLOYMENT"] = "smart-agent-embedding-ada" # the deployment name for the embedding model
os.environ["OPENAI_EMBEDDINGS_MODEL_NAME"] = "text-embedding-ada-002" # the model name
CONNECTION_STRING = "AZURE COSMOS DB MONGO vCORE connection string"
INDEX_NAME = "izzy-test-index"
NAMESPACE = "izzy_test_db.izzy_test_collection"
DB_NAME, COLLECTION_NAME = NAMESPACE.split(".")

Please double-check these points and try again. If the error persists, it would help to have more specific details about when and where the error occurs. I also found some similar issues in the LangChain repository that might be helpful:
These issues suggest checking the configuration points above. I hope this helps! If you have any more questions or need further clarification, feel free to ask. -Dosu
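For context on why a wrong or missing deployment name surfaces as a 404: Azure OpenAI routes requests by deployment, so the request path must include the deployment name (Azure's documented pattern is `/openai/deployments/{deployment}/embeddings?api-version=...`). The helper below is a hypothetical illustration of that URL construction, not part of any library:

```python
# Hypothetical helper showing how an Azure OpenAI embeddings URL is built.
# When the deployment name is missing, the path lacks the /deployments/ segment
# and Azure answers 404 "Resource not found".
def build_azure_embeddings_url(endpoint, deployment, api_version):
    base = endpoint.rstrip("/")
    if deployment:
        return f"{base}/openai/deployments/{deployment}/embeddings?api-version={api_version}"
    # Without a deployment, clients fall back to a path Azure does not serve.
    return f"{base}/openai/embeddings?api-version={api_version}"

good = build_azure_embeddings_url(
    "https://example.openai.azure.com/", "smart-agent-embedding-ada", "2023-05-15"
)
bad = build_azure_embeddings_url("https://example.openai.azure.com/", None, "2023-05-15")
print(good)
print(bad)
```

Comparing the two printed URLs against the 404s reported below makes the failure mode easy to spot.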
I'm having the same issue: the embeddings endpoint returns a 404 Resource Not Found error. The attempted URL was "my-resource.openai.azure.com/openai/embeddings"; I've also tried "my-resource.openai.azure.com/embeddings" just in case.
You might be missing the deployment name when instantiating the embeddings client:

embedding = AzureOpenAIEmbeddings(
    openai_api_version="2023-05-15",  # use a valid Azure OpenAI API version
    openai_api_key="xxxxxxxxxxxxxxxxxxxxxxxxx",
    # model_name="gpt-35-turbo",
    azure_endpoint="https://ggggggggggggggggggggggg.openai.azure.com/",
    azure_deployment=...,  # <- add your deployment name here
)
This helped me immensely. I was using OpenAIEmbeddings previously, and switching to AzureOpenAIEmbeddings solved my issue (I did include azure_deployment, set to the name of my embeddings deployment).
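If you want to fail fast instead of waiting for the 404 from the service, a small preflight check on the Azure settings can catch the common mistakes in the comments above. This is a hypothetical sketch using only the standard library, not a LangChain or OpenAI API:

```python
# Hypothetical preflight check for Azure OpenAI settings (not a library API).
from urllib.parse import urlparse

def check_azure_settings(endpoint, deployment, api_version):
    """Return a list of configuration problems; empty means the basics look sane."""
    problems = []
    parsed = urlparse(endpoint)
    if parsed.scheme != "https" or not parsed.netloc.endswith(".openai.azure.com"):
        problems.append("endpoint should look like https://<resource>.openai.azure.com/")
    if not deployment:
        problems.append("azure_deployment is required; without it Azure returns 404")
    if not api_version:
        problems.append("openai_api_version is required, e.g. '2023-05-15'")
    return problems

# Simulate the misconfiguration from this thread: deployment name left empty.
print(check_azure_settings("https://example.openai.azure.com/", "", "2023-05-15"))
```

Running such a check before instantiating the client turns a vague remote 404 into a specific local message.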
Hello, I ran into the same issue. I am using:

from langchain.chat_models import ChatOpenAI
from langchain.schema import (HumanMessage, SystemMessage)
import os
os.environ["OPENAI_API_KEY"] = "xxx"
chat = ChatOpenAI(
model="gpt-4",
temperature=0.5,
max_tokens=200
)
messages = [
SystemMessage(content='You are a creative assistant'),
HumanMessage(content='Hello')
]
response = chat(messages)
print(response)

My environment: python version:

Here is my error stack:

File "/Users/paulchess/Desktop/Home/ai/langchain-practice/src/langcha.py", line 18, in <module>
response = chat(messages)
^^^^^^^^^^^^^^
File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 636, in __call__
generation = self.generate(
^^^^^^^^^^^^^^
File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 382, in generate
raise e
File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 372, in generate
self._generate_with_cache(
File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 528, in _generate_with_cache
return self._generate(
^^^^^^^^^^^^^^^
File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/langchain_community/chat_models/openai.py", line 435, in _generate
response = self.completion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/langchain_community/chat_models/openai.py", line 352, in completion_with_retry
return self.client.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/openai/_utils/_utils.py", line 272, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 645, in create
return self._post(
^^^^^^^^^^^
File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/openai/_base_client.py", line 1088, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/openai/_base_client.py", line 853, in request
return self._request(
^^^^^^^^^^^^^^
File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/openai/_base_client.py", line 930, in _request
raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}

I found it's the same error as in this issue. However, when I use the OpenAI API directly it works fine; the working code is below:

import os
from openai import OpenAI
os.environ["OPENAI_API_KEY"] = "xxx"
client = OpenAI()
response = client.chat.completions.create(
model="gpt-4",
temperature=0.5,
max_tokens=100,
messages=[
        {"role": "system", "content": "You are an AI assistant."},
{"role": "user", "content": "Hello"}
]
)
print(response.choices[0].message.content)
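One common cause of this split behavior (the raw OpenAI client works while LangChain's ChatOpenAI 404s) is a stale base-URL environment variable: the openai>=1.0 SDK reads OPENAI_BASE_URL, while LangChain's ChatOpenAI wrapper also honors the older OPENAI_API_BASE. Whether your installed versions read exactly these variables can vary, so treat this as a diagnostic checklist, not a spec. A stdlib-only sketch of the divergence:

```python
# Diagnostic sketch: which base URL each client would likely resolve.
# Assumption: openai>=1.0 reads OPENAI_BASE_URL, while LangChain's ChatOpenAI
# also falls back to the older OPENAI_API_BASE variable.
import os

def effective_base_urls():
    default = "https://api.openai.com/v1"
    return {
        "openai_sdk": os.environ.get("OPENAI_BASE_URL", default),
        "langchain_chatopenai": os.environ.get(
            "OPENAI_API_BASE", os.environ.get("OPENAI_BASE_URL", default)
        ),
    }

# Simulate a stale setting left over from an earlier Azure experiment.
os.environ.pop("OPENAI_BASE_URL", None)
os.environ["OPENAI_API_BASE"] = "https://my-resource.openai.azure.com/"
urls = effective_base_urls()
print(urls)
```

If the two entries differ, the raw SDK and the LangChain wrapper are talking to different hosts, which would explain a 404 from one but not the other; unsetting the stale variable usually resolves it.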
Issue you'd like to raise.
I use the code below to load data, split it, embed it, and finally push it into a vector store. During that process,
I'm getting openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}. After this method failed, I also tried the AzureCosmosDBVectorSearch vector store; it failed and returned the same error. Kindly help with this.
Suggestion:
No response