
OpenAI node error #6263

Closed
andersonjeccel opened this issue May 16, 2023 · 4 comments

@andersonjeccel

Describe the bug
The node throws an error when I try to set the maximum number of tokens.

ERROR: Bad request - please check your parameters
Request failed with status code 400

And every time I select the gpt-3.5-turbo model, the output comes back using the gpt-3.5-turbo-0301 model.

To Reproduce
Steps to reproduce the behavior:

  1. Create an OpenAI node and fill it with credentials
  2. Enter a prompt
  3. Add the Max Tokens option
  4. Fill in 4096 or any number below that
  5. The error mentioned above will happen

If you don't use the Max Tokens option, you will see that the output comes back with the wrong model anyway.

Expected behavior
The output should come back with the most up-to-date model, and the node should also allow limiting tokens.

Environment (please complete the following information):

  • OS: Debian 11 Server
  • n8n Version: 0.227.1
  • Node.js Version: 16.19.1
  • Database system: n8n default on Docker
  • Operation mode: own

Additional context
I already tried searching for this, but didn't find a solution.


Joffcom commented May 17, 2023

Hi @andersonjeccel,

I have taken a look at both of the issues reported, and I don't believe there are any bugs here. Below is what I have found; let me know if there is anything you disagree with or if you have any questions.

  1. gpt-3.5-turbo returning as gpt-3.5-turbo-0301 - This appears to be working as expected. It confused me for a bit, so I decided to remove n8n from the equation and ran a test using Postman instead. In my test I was sending the example cURL data they provide here: https://platform.openai.com/docs/api-reference/making-requests

Even though I was sending gpt-3.5-turbo, in the response data I was still seeing gpt-3.5-turbo-0301. Oddly, this is also what the OpenAI example linked above shows, so based on this I don't believe there is an n8n issue here; it is just how OpenAI works.

  2. Setting the Max Tokens to 4096 fails - I again used the example cURL request in Postman to check this. If I set max_tokens to 4096, along with the 400 Request failed message I actually get back some content telling me what has gone wrong:
{
    "error": {
        "message": "This model's maximum context length is 4097 tokens. However, you requested 4110 tokens (14 in the messages, 4096 in the completion). Please reduce the length of the messages or completion.",
        "type": "invalid_request_error",
        "param": "messages",
        "code": "context_length_exceeded"
    }
}

With this one, while the node is returning an error, we should probably do a better job and actually return the response body; a sketch of the failing call is below. I also tested this in n8n with a number less than 4096 (I went with 1000), and that was processed correctly, so the option itself does appear to be working.
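For reference, the failing call can be reproduced outside n8n roughly like this (a sketch based on the OpenAI example linked above; $OPENAI_API_KEY is a placeholder for a real key):

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Say this is a test!"}],
        "max_tokens": 4096
      }'
# The messages already cost 14 tokens, so this asks for
# 14 + 4096 = 4110 tokens against the model's 4097-token limit,
# which is why the API answers with the 400 error shown above.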

@andersonjeccel
Author

Hi @Joffcom,
About the first issue: OK, got it, and thanks for your patience!

About the second: every time I try to set any number for the maximum number of tokens, I get that 400 error.
I spent hours trying to work out what could be happening before creating this issue on GitHub, with no success.
How can I see what's happening on my n8n installation?
Maybe a log of what OpenAI returned instead of the "error 400" message...

Another question: could environment settings generate errors like this?
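In case it matters, I guess I could raise the log level on my Docker setup with something like the following (assuming n8n's documented N8N_LOG_LEVEL and N8N_LOG_OUTPUT variables apply to my install):

docker run -it --rm \
  -p 5678:5678 \
  -e N8N_LOG_LEVEL=debug \
  -e N8N_LOG_OUTPUT=console \
  n8nio/n8n
# 'debug' is more verbose than the default 'info' level.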


Joffcom commented May 18, 2023

Hey @andersonjeccel,

We really should do a better job of showing the actual error messages. Can you share the prompt you are using so I can give it a test?

Could you also try the below prompt with a token size of 400 and see what happens?

Say this is a test!
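If you want to rule n8n out entirely, the same test against the API directly would look roughly like this (again a sketch; $OPENAI_API_KEY is a placeholder):

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Say this is a test!"}],
        "max_tokens": 400
      }'
# 400 completion tokens leaves plenty of room inside the 4097-token
# context, and per point 1 above the response "model" field will
# still read gpt-3.5-turbo-0301.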


Joffcom commented May 18, 2023

Hey @andersonjeccel,

Quick update: the issue with the error body not being returned has been fixed in #6270. The fix should be available in a release soon.

Joffcom closed this as completed Jun 16, 2023