OpenAI node error #6263
Hi @andersonjeccel, I have taken a look at both of the issues reported and I don't believe there are any bugs here. Below is what I have found; let me know if there is anything you disagree with or if you have any questions.
In the response data even though I was sending
With this one, while the node is returning an error, we should probably do a better job and actually return the response body. I also tested this in n8n with a number less than 4096 (I went with 1000) and it was correctly processed, so this does appear to be working.
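For context on why a value like 1000 succeeds while larger ones fail: the API rejects requests where `max_tokens` does not fit the model's context window, and that comes back as an opaque HTTP 400. Below is a minimal sketch of validating the value locally before sending. The endpoint and field names (`model`, `messages`, `max_tokens`) follow the public OpenAI chat completions API; `build_payload` and the 4096 limit for gpt-3.5-turbo at the time of this issue are assumptions for illustration.

```python
# Sketch: build a chat completions request body and sanity-check
# max_tokens locally, so an out-of-range value fails with a clear
# message instead of an opaque HTTP 400 from the API.

CONTEXT_WINDOW = 4096  # gpt-3.5-turbo context size at the time of this issue


def build_payload(prompt: str, max_tokens: int) -> dict:
    """Build the body for POST https://api.openai.com/v1/chat/completions."""
    if not 1 <= max_tokens < CONTEXT_WINDOW:
        raise ValueError(
            f"max_tokens must be between 1 and {CONTEXT_WINDOW - 1}, "
            f"got {max_tokens}"
        )
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


# 1000 is the value the maintainer reported as working in n8n.
payload = build_payload("Say hello", 1000)
```

Note that this only catches the obvious out-of-range case; the prompt's own token count also counts against the context window, so a request can still be rejected server-side even when `max_tokens` alone is in range.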
Hi @Joffcom, about the second issue: every time I try to set any number for the maximum number of tokens, I get that 400 error. Another question: could environment settings generate errors like this?
Hey @andersonjeccel, we really should do a better job of showing the actual error messages. Can you share the prompt you are using so I can give it a test? Could you also try the prompt below with a token size of 400 and see what happens?
Hey @andersonjeccel, quick update: the issue with the error not being returned has been fixed in #6270. This should be available in a release soon.
Describe the bug
The node throws an error when I try to set a maximum number of tokens.
ERROR: Bad request - please check your parameters
Request failed with status code 400
And every time I select the gpt-3.5-turbo model, the output comes out using the gpt-3.5-turbo-0301 model.
To Reproduce
Steps to reproduce the behavior:
If you don't use the Limit Tokens option, you will see that the output still comes from the wrong model anyway.
Expected behavior
The output should come from the most up-to-date model, and the node should also allow limiting tokens.
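On the model question, it may help to know that `gpt-3.5-turbo` is an alias that OpenAI resolves server-side to a dated snapshot, so the response's `model` field reporting `gpt-3.5-turbo-0301` is expected API behavior rather than the node picking a wrong model. A sketch of checking this on a response body (the JSON here is hand-written to illustrate the shape of a chat completions reply, not a captured response):

```python
import json

# Illustrative response body, shaped like an OpenAI chat completions
# reply. This is a hand-written example, not real API output.
raw = """{
  "id": "chatcmpl-example",
  "model": "gpt-3.5-turbo-0301",
  "choices": [{"message": {"role": "assistant", "content": "Hello!"}}]
}"""

resp = json.loads(raw)

# The alias "gpt-3.5-turbo" resolves server-side to a dated snapshot,
# which is why the reported model carries a suffix like "-0301".
assert resp["model"].startswith("gpt-3.5-turbo")
print(resp["model"])
```

So requesting `gpt-3.5-turbo` and seeing `gpt-3.5-turbo-0301` in the output are consistent: the suffix just identifies which snapshot the alias currently points at.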
Environment (please complete the following information):
Additional context
I already tried searching for this, but didn't find a solution.