
NotFoundError: 404 status code (no body) error, when hosted inference URL is used inside OpenAI #4542

Closed
BakingBrains opened this issue Feb 27, 2024 · 2 comments
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature

Comments

@BakingBrains
```
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
    temperature: 0.9,
    openAIApiKey: "xyz",
    configuration: {
      baseURL: "http://localhost:PORT/v1",
    },
});

const res = await model.call(
  "Tell me a story about a boy"
);
console.log({ res });
```


**Error encountered**

```
NotFoundError: 404 status code (no body)
    at APIError.generate (file:///D:/pycharmprojects/API_SERVICES/DUMP/node_modules/openai/error.mjs:49:20)
    at OpenAI.makeStatusError (file:///D:/pycharmprojects/API_SERVICES/DUMP/node_modules/openai/core.mjs:256:25)
    at OpenAI.makeRequest (file:///D:/pycharmprojects/API_SERVICES/DUMP/node_modules/openai/core.mjs:299:30)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async file:///D:/pycharmprojects/API_SERVICES/DUMP/node_modules/@langchain/openai/dist/llms.js:472:29
    at async RetryOperation._fn (D:\pycharmprojects\API_SERVICES\DUMP\node_modules\p-retry\index.js:50:12)
```


Any suggestions here?

Thank you

dosubot bot commented Feb 27, 2024

🤖

Based on the error message you've shared, it seems the server is unable to find the resource you're requesting. This could be due to an incorrect endpoint, or because the model you're trying to use is not available.

In your code, you're trying to use a baseURL of "http://localhost:PORT/v1". Please ensure that you have a server running on this URL and that it's correctly configured to handle the requests from the OpenAI model. If you're using a local server for testing, replace "PORT" with the actual port number where your server is running.

If you're trying to use a hosted model from OpenAI, you should use the correct baseURL for OpenAI's API. Here's an example of how to do it:

```
import { OpenAI } from "@langchain/openai";

const model = new OpenAI({
    temperature: 0.9,
    openAIApiKey: "xyz",
    configuration: {
      baseURL: "https://api.openai.com/v1",
    },
});

const res = await model.call(
  "Tell me a story about a boy"
);
console.log({ res });
```

In this example, the baseURL is set to "https://api.openai.com/v1", which is the correct baseURL for OpenAI's API.

Also, please ensure that you're using the correct OpenAI API key. The key should be passed as the openAIApiKey parameter when creating a new instance of the OpenAI class.

If you're still encountering the issue after checking these points, it would be helpful if you could provide more details about your setup and the exact steps you're taking when the error occurs. This will help in identifying the root cause of the issue and finding a suitable solution.

For more information, you can refer to the LangChainJS documentation and the OpenAI API documentation.

I hope this helps! If you have any more questions, feel free to ask.


@dosubot dosubot bot added the auto:bug Related to a bug, vulnerability, unexpected error with an existing feature label Feb 27, 2024
@BakingBrains
Author

I am using Hugging Face text-generation-inference endpoints, so the URL is "http://x.x.x.x:PORT/v1".
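For reference (an editor's note, not from the thread): text-generation-inference's OpenAI-compatible layer exposes the chat route at /v1/chat/completions and may not serve the legacy /v1/completions route that the completions-style OpenAI class calls, which would produce exactly this 404. A sketch of pointing the chat class at a TGI server instead (x.x.x.x:PORT stands in for the real host and port, and the API key is a dummy value since TGI does not check it) might look like:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({
  temperature: 0.9,
  openAIApiKey: "xyz", // dummy value; the client requires a key even if the server ignores it
  configuration: {
    // Hypothetical endpoint; replace x.x.x.x:PORT with the actual TGI host and port.
    baseURL: "http://x.x.x.x:PORT/v1",
  },
});

const res = await model.invoke("Tell me a story about a boy");
console.log(res.content);
```

This targets /v1/chat/completions rather than /v1/completions, so it is worth confirming with curl which of the two routes the TGI version in use actually serves.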

@dosubot dosubot bot added the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Jun 1, 2024
@dosubot dosubot bot closed this as not planned Won't fix, can't repro, duplicate, stale Jun 8, 2024
@dosubot dosubot bot removed the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Jun 8, 2024