Describe the bug
openai.createCompletion({}) throws an error with the message "Request failed with status code 400" for the following call:
const response = await openai.createCompletion({ model: "text-davinci-003", prompt: p, max_tokens, temperature });
Where
p = "Devin: Hello, how can I help you? you: What can you do for me Devin: I can help you with any questions you may have about our products or services. I can also provide you with information about our company and answer any other questions you may have. you: Okay tell me about your company Devin: Sure! Our company is a leading provider of innovative technology solutions. We specialize in developing custom software and hardware solutions for businesses of all sizes. We have alto"
max_tokens = 4000
temperature = 0.0
My configuration is correct: all calls with a prompt shorter than 478 characters work, but once I go past that length, the call fails every time.
To Reproduce
Call
const response = await openai.createCompletion({ model: "text-davinci-003", prompt: p, max_tokens, temperature });
with p set to any string longer than 478 characters, such as the example string above.
Code snippets
Error response given back to me:
`{"message":"Request failed with status code 400","name":"Error","stack":"Error: Request failed with status code 400\n at createError (node_modules/axios/lib/core/createError.js:16:15)\n at settle (node_modules/axios/lib/core/settle.js:17:12)\n at IncomingMessage.handleStreamEnd (node_modules/axios/lib/adapters/http.js:322:11)\n at IncomingMessage.emit (node:events:539:35)\n at endReadableNT (node:internal/streams/readable:1345:12)\n at processTicksAndRejections (node:internal/process/task_queues:83:21)","config":{"transitional":{"silentJSONParsing":true,"forcedJSONParsing":true,"clarifyTimeoutError":false},"transformRequest":[null],"transformResponse":[null],"timeout":0,"xsrfCookieName":"XSRF-TOKEN","xsrfHeaderName":"X-XSRF-TOKEN","maxContentLength":-1,"maxBodyLength":-1,"headers":{"Accept":"application/json, text/plain, */*","Content-Type":"application/json","User-Agent":"OpenAI/NodeJS/3.1.0","Authorization":"Bearer sk-***","Content-Length":553},"method":"post","data":"{\"model\":\"text-davinci-003\",\"prompt\":\"Devin: Hello, how can I help you? you: What can you do for me Devin: I can help you with any questions you may have about our products or services. I can also provide you with information about our company and answer any other questions you may have. you: Okay tell me about your company Devin: Sure! Our company is a leading provider of innovative technology solutions. We specialize in developing custom software and hardware solutions for businesses of all sizes. We have alto\",\"max_tokens\":4000,\"temperature\":0}","url":"https://api.openai.com/v1/completions"},"status":400}`
The above was printed using JSON.stringify, FYI.
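The stringified axios error above hides the API's own explanation of the 400. A helper along these lines can surface it; this is a sketch assuming the v3 library's axios-style error shape (`err.response.data.error.message`), and `apiErrorMessage` is a hypothetical name:

```javascript
// Pull the OpenAI API's own error message out of an axios-style error
// (the openai v3 Node library wraps axios), falling back to the generic
// "Request failed with status code 400" when no response body is present.
function apiErrorMessage(err) {
  if (err.response && err.response.data && err.response.data.error) {
    return err.response.data.error.message;
  }
  return err.message;
}

// Usage inside a catch block:
//   try { await openai.createCompletion({ ... }); }
//   catch (err) { console.error(apiErrorMessage(err)); }
```

Logging that message instead of the wrapper error would show the API's actual complaint about the request.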
OS
macos
Node version
node 16
Library version
3.1.0
You've hit the token limit. It is 4096 tokens, including both the prompt and the completion. Here max_tokens is set to 4000, which means your prompt has to be 96 tokens or fewer.
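One way to stay under the limit is to derive max_tokens from the prompt length instead of hard-coding 4000. A minimal sketch, assuming a rough heuristic of ~4 characters per token for English text (exact counts require a BPE tokenizer; `estimateTokens` and `safeMaxTokens` are hypothetical helper names):

```javascript
// text-davinci-003's context window (prompt + completion combined).
const CONTEXT_WINDOW = 4096;

// Rough token estimate: ~4 characters per token for English text.
// This is an approximation, not a real BPE tokenization.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

// Budget for the completion: whatever the context window has left
// after the (estimated) prompt tokens, floored at zero.
function safeMaxTokens(prompt, contextWindow = CONTEXT_WINDOW) {
  return Math.max(0, contextWindow - estimateTokens(prompt));
}
```

The call would then become something like `openai.createCompletion({ model: "text-davinci-003", prompt: p, max_tokens: safeMaxTokens(p), temperature })`, so a longer prompt automatically gets a smaller completion budget instead of overflowing the window.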