
Issue: openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}} #14018

Closed
arunkumars27 opened this issue Nov 29, 2023 · 8 comments
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature 🔌: chroma Primarily related to ChromaDB integrations Ɑ: embeddings Related to text embedding models module Ɑ: vector store Related to vector store module

Comments

@arunkumars27

Issue you'd like to raise.

I use the code below to load a PDF, split it, embed the chunks, and finally push them into a vector store. During that process, I get openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}. After this failed, I also tried the AzureCosmosDBVectorSearch vector store, and it failed with the same error. Kindly help with this.

from langchain.embeddings.azure_openai import AzureOpenAIEmbeddings
from langchain.vectorstores.azure_cosmos_db import AzureCosmosDBVectorSearch
from langchain.vectorstores.chroma import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.document_loaders import PyPDFLoader

# Load PDF
loaders = [
    PyPDFLoader("ai.pdf")
]
docs = []
for loader in loaders:
    docs.extend(loader.load())

# Define the text splitter
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1500,
    chunk_overlap=150
)

# Create a split of the document using the text splitter
res_splits = text_splitter.split_documents(docs)

embedding = AzureOpenAIEmbeddings(
    openai_api_version="1699-02-30",
    openai_api_key="xxxxxxxxxxxxxxxxxxxxxxxxx",
    #   model_name="gpt-35-turbo",
    azure_endpoint="https://ggggggggggggggggggggggg.openai.azure.com/")
persist_directory = 'docs/chroma/'



#  Create the vector store
vectordb = Chroma.from_documents(
    documents=res_splits,
    embedding=embedding,
    persist_directory=persist_directory
)

print(vectordb._collection.count())



dosubot bot commented Nov 29, 2023

🤖

Hello @arunkumars27,

Nice to meet you! I'm Dosu, a bot here to help you navigate issues, answer questions, and guide you in contributing to the LangChain repository. While we're waiting for a human maintainer to assist, feel free to ask me anything. I'm here to help!

Based on the error message you're encountering, it seems the resource you're trying to access cannot be found. This can happen for several reasons: an incorrect OpenAI API endpoint or key, an incorrect model name or deployment, an incorrect Azure Cosmos DB connection string, or an Azure Cosmos DB collection or index that does not exist.

Here are some steps you can take to troubleshoot this issue:

  1. Ensure that the OpenAI API endpoint and key are correctly set in the environment variables. The endpoint should be a valid URL and the key should be the one provided by Azure.

     os.environ["OPENAI_API_BASE"] = "YOUR_OPEN_AI_ENDPOINT"  # e.g. https://example.openai.azure.com/
     os.environ["OPENAI_API_KEY"] = "YOUR_OPEN_AI_KEY"

  2. Ensure that the model name and deployment match the ones provided by Azure. If they are incorrect, the OpenAI API will not be able to find the requested resource.

     os.environ["OPENAI_EMBEDDINGS_DEPLOYMENT"] = "smart-agent-embedding-ada"  # deployment name for the embedding model
     os.environ["OPENAI_EMBEDDINGS_MODEL_NAME"] = "text-embedding-ada-002"  # model name

  3. Ensure that the connection string is the one provided by Azure Cosmos DB. If it is incorrect, the MongoDB client will not be able to connect to the database.

     CONNECTION_STRING = "AZURE COSMOS DB MONGO vCORE connection string"

  4. Ensure that the collection and index exist in Azure Cosmos DB. If they do not exist, create them before running the code.

     INDEX_NAME = "izzy-test-index"
     NAMESPACE = "izzy_test_db.izzy_test_collection"
     DB_NAME, COLLECTION_NAME = NAMESPACE.split(".")

Please double-check these points and try again. If the error persists, it would be helpful to have more specific details about when and where the error occurs.

Also, I found some similar issues in the LangChain repository that might be helpful:

  1. InvalidRequestError: Resource not found. when running pandas_dataframe_agent over AzureOpenAI
  2. Azure OpenAI - Exception has occurred: InvalidRequestError Resource not found
  3. Unable to use gpt-3.5-turbo deployed on Azure OpenAI with langchain embeddings.

These issues suggest checking the openai_api_base URL, ensuring it is correctly formatted and points to the correct Azure resource, and ensuring that the deployment_name parameter is correctly filled in the AzureOpenAI function.
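For context on why a missing deployment name produces a 404: Azure OpenAI routes each request through a per-deployment URL path, so the client needs the deployment name to build a valid URL. Below is a minimal sketch of how that URL is composed (the resource and deployment names are hypothetical, and the path shape follows Azure OpenAI's documented REST pattern):

```python
# Sketch: the URL shape an Azure OpenAI embeddings request targets.
# Without a deployment name, the "deployments/<name>" segment is missing
# and the service responds with 404 Resource Not Found.

def azure_embeddings_url(endpoint: str, deployment: str, api_version: str) -> str:
    """Compose the per-deployment embeddings URL from its three parts."""
    return (
        f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
        f"/embeddings?api-version={api_version}"
    )

url = azure_embeddings_url(
    "https://my-resource.openai.azure.com/",  # hypothetical resource endpoint
    "my-embedding-deployment",                # your embedding deployment name
    "2023-05-15",
)
print(url)
```

This is why dchinn1's attempted URL, "my-resource.openai.azure.com/openai/embeddings", was rejected: it lacks the deployments/<name> segment entirely.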

I hope this helps! If you have any more questions or need further clarification, feel free to ask.

-Dosu


@dosubot dosubot bot added Ɑ: embeddings Related to text embedding models module Ɑ: vector store Related to vector store module 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature 🔌: chroma Primarily related to ChromaDB integrations labels Nov 29, 2023
@dchinn1

dchinn1 commented Dec 1, 2023

I'm having the same issue: the embeddings endpoint returns a 404 Resource Not Found error. The URL attempted was "my-resource.openai.azure.com/openai/embeddings"; I've also tried "my-resource.openai.azure.com/embeddings" just in case.

@mspronesti
Contributor

You might be missing the deployment name when instantiating the AzureOpenAIEmbeddings object.

embedding = AzureOpenAIEmbeddings(
    openai_api_version="1699-02-30",
    openai_api_key="xxxxxxxxxxxxxxxxxxxxxxxxx",
    #   model_name="gpt-35-turbo",
    azure_endpoint="https://ggggggggggggggggggggggg.openai.azure.com/",
    azure_deployment=...,  # <- add your deployment name here
)

@dchinn1

dchinn1 commented Dec 4, 2023

You might be missing the deployment name when instantiating the AzureOpenAIEmbeddings object.

embedding = AzureOpenAIEmbeddings(
    openai_api_version="1699-02-30",
    openai_api_key="xxxxxxxxxxxxxxxxxxxxxxxxx",
    #   model_name="gpt-35-turbo",
    azure_endpoint="https://ggggggggggggggggggggggg.openai.azure.com/",
    azure_deployment=...,  # <- add your deployment name here
)

This helped me immensely. I was using OpenAIEmbeddings previously, and switching to the AzureOpenAIEmbeddings seems to have solved my issue (I did include the azure_deployment name, which was the name of my embeddings deployment).

@PaulChess

PaulChess commented Dec 28, 2023

Hello, I hit the same issue when using ChatOpenAI, and I'm not using Azure.
Here is my source code:
Here is my source code:

from langchain.chat_models import ChatOpenAI
from langchain.schema import (HumanMessage, SystemMessage)
import os

os.environ["OPENAI_API_KEY"] = "xxx"

chat = ChatOpenAI(
  model="gpt-4",
  temperature=0.5,
  max_tokens=200
)

messages = [
  SystemMessage(content='You are a creative assistant'),
  HumanMessage(content='Hello')
]

response = chat(messages)

print(response)

My environment: Python 3.11, LangChain 0.0.352.

Here is my error stack:

  File "/Users/paulchess/Desktop/Home/ai/langchain-practice/src/langcha.py", line 18, in <module>
    response = chat(messages)
               ^^^^^^^^^^^^^^
  File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 636, in __call__
    generation = self.generate(
                 ^^^^^^^^^^^^^^
  File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 382, in generate
    raise e
  File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 372, in generate
    self._generate_with_cache(
  File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 528, in _generate_with_cache
    return self._generate(
           ^^^^^^^^^^^^^^^
  File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/langchain_community/chat_models/openai.py", line 435, in _generate
    response = self.completion_with_retry(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/langchain_community/chat_models/openai.py", line 352, in completion_with_retry
    return self.client.create(**kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/openai/_utils/_utils.py", line 272, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 645, in create
    return self._post(
           ^^^^^^^^^^^
  File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/openai/_base_client.py", line 1088, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/openai/_base_client.py", line 853, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/Users/paulchess/Desktop/Home/ai/langchain-practice/venv/lib/python3.11/site-packages/openai/_base_client.py", line 930, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'code': '404', 'message': 'Resource not found'}}

I found the same thing happens with AzureChatOpenAI.

However, when I use the OpenAI API directly it works fine; the working code is below:

import os
from openai import OpenAI

os.environ["OPENAI_API_KEY"] = "xxx"

client = OpenAI()

response = client.chat.completions.create(
  model="gpt-4",
  temperature=0.5,
  max_tokens=100,
  messages=[
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Hello"}
  ]
)

print(response.choices[0].message.content)

@PaulChess

I resolved the problem myself. Here is my working code:

import os
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
  HumanMessage,
  SystemMessage
)

os.environ["OPENAI_API_KEY"] = "sk-xxx"

chat = ChatOpenAI(
  model="gpt-4",
  temperature=0.5,
  max_tokens=100,
)

messages = [
  SystemMessage(content="You are an AI assistant."),
  HumanMessage(content="Hello")
]

response = chat(messages)

print(response)

I printed the ChatOpenAI object that was returned and found that the openai_api_base param was still set to the value from when I previously used Microsoft Azure:
[screenshot of the printed ChatOpenAI configuration]

So I cleared OPENAI_API_BASE from the environment before constructing the model:

os.environ.pop('OPENAI_API_BASE', None)

And it works~
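The fix above can be made a little more defensive: check for the leftover variable, drop it, and only then build the client. A minimal sketch (the stale value below is hypothetical, standing in for a real Azure endpoint left over from an earlier setup):

```python
import os

# Simulate a stale OPENAI_API_BASE left over from an earlier Azure setup.
# When this is set, the OpenAI client sends requests to the Azure host
# instead of the default https://api.openai.com, producing a 404.
os.environ["OPENAI_API_BASE"] = "https://old-resource.openai.azure.com/"  # hypothetical leftover

# Remove it (no-op if it isn't set) before constructing ChatOpenAI.
stale = os.environ.pop("OPENAI_API_BASE", None)
if stale:
    print(f"Removed stale OPENAI_API_BASE: {stale}")

assert "OPENAI_API_BASE" not in os.environ
```

Doing the pop with a default of None means the cleanup is safe to run whether or not the variable exists.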

@dchinn1

dchinn1 commented Feb 10, 2024

Hey there, I have a problem that I'm badly stuck on. Please see if I can be helped.

This is the code:

[screenshots of the code]

And this is the error:

[screenshot of the error]

You posted your API key; rotate it immediately, and you should remove it from the post.

@PaulChess

The picture is broken 😂

@dosubot dosubot bot added the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Jun 1, 2024
@dosubot dosubot bot closed this as not planned Won't fix, can't repro, duplicate, stale Jun 8, 2024
@dosubot dosubot bot removed the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Jun 8, 2024