CreateCompletion fails with prompts > 478 characters #46

Closed
smaldd14 opened this issue Jan 10, 2023 · 3 comments
Labels
bug Something isn't working

Comments

@smaldd14

Describe the bug

openai.createCompletion({}) throws an error with message "Request failed with status code 400" with the following call:
const response = await openai.createCompletion({ model: "text-davinci-003", prompt: p, max_tokens, temperature });
Where
p = "Devin: Hello, how can I help you? you: What can you do for me Devin: I can help you with any questions you may have about our products or services. I can also provide you with information about our company and answer any other questions you may have. you: Okay tell me about your company Devin: Sure! Our company is a leading provider of innovative technology solutions. We specialize in developing custom software and hardware solutions for businesses of all sizes. We have alto"
max_tokens = 4000
temperature = 0.0

My configuration is set up correctly: all calls with a prompt shorter than 478 characters work, but once I pass that character limit, every call fails.

To Reproduce

  1. Call
    const response = await openai.createCompletion({ model: "text-davinci-003", prompt: p, max_tokens, temperature });
    with p set to any string longer than 478 characters (the example string above works); a fuller sketch of the call follows below.
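
For completeness, a minimal standalone sketch of the failing call, assuming the standard v3 Configuration/OpenAIApi setup and an OPENAI_API_KEY environment variable (both assumptions; the short prompt is a placeholder for the full string quoted above):

```js
// Minimal reproduction sketch (openai npm package v3.x).
// Assumes OPENAI_API_KEY is set in the environment; the prompt below is a
// placeholder for the full 478+ character string quoted above.
const { Configuration, OpenAIApi } = require("openai");

const configuration = new Configuration({ apiKey: process.env.OPENAI_API_KEY });
const openai = new OpenAIApi(configuration);

const p = "Devin: Hello, how can I help you? ..."; // replace with the long prompt
const max_tokens = 4000;
const temperature = 0.0;

async function repro() {
  const response = await openai.createCompletion({
    model: "text-davinci-003",
    prompt: p,
    max_tokens,
    temperature,
  });
  console.log(response.data.choices[0].text);
}

repro().catch((err) => console.error(err.response ? err.response.data : err.message));
```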

Code snippets

Error response given back to me:
```
{"message":"Request failed with status code 400","name":"Error","stack":"Error: Request failed with status code 400\n    at createError (node_modules/axios/lib/core/createError.js:16:15)\n    at settle (node_modules/axios/lib/core/settle.js:17:12)\n    at IncomingMessage.handleStreamEnd (node_modules/axios/lib/adapters/http.js:322:11)\n    at IncomingMessage.emit (node:events:539:35)\n    at endReadableNT (node:internal/streams/readable:1345:12)\n    at processTicksAndRejections (node:internal/process/task_queues:83:21)","config":{"transitional":{"silentJSONParsing":true,"forcedJSONParsing":true,"clarifyTimeoutError":false},"transformRequest":[null],"transformResponse":[null],"timeout":0,"xsrfCookieName":"XSRF-TOKEN","xsrfHeaderName":"X-XSRF-TOKEN","maxContentLength":-1,"maxBodyLength":-1,"headers":{"Accept":"application/json, text/plain, */*","Content-Type":"application/json","User-Agent":"OpenAI/NodeJS/3.1.0","Authorization":"Bearer sk-***","Content-Length":553},"method":"post","data":"{\"model\":\"text-davinci-003\",\"prompt\":\"Devin: Hello, how can I help you? you: What can you do for me Devin: I can help you with any questions you may have about our products or services. I can also provide you with information about our company and answer any other questions you may have. you: Okay tell me about your company Devin: Sure! Our company is a leading provider of innovative technology solutions. We specialize in developing custom software and hardware solutions for businesses of all sizes. We have alto\",\"max_tokens\":4000,\"temperature\":0}","url":"https://api.openai.com/v1/completions"},"status":400}
```

FYI, the above was printed using JSON.stringify.
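
For what it's worth, the axios wrapper hides the API's own 400 body; wrapping the same call in try/catch and logging error.response.data surfaces it. A small sketch, reusing the variable names from above:

```js
// Sketch: log the API's error body instead of the generic axios message.
// Intended to live inside the same async function as the call above.
try {
  const response = await openai.createCompletion({
    model: "text-davinci-003",
    prompt: p,
    max_tokens,
    temperature,
  });
  console.log(response.data.choices[0].text);
} catch (error) {
  if (error.response) {
    // The 400 body from the API explains why the request was rejected.
    console.error(error.response.status, error.response.data);
  } else {
    console.error(error.message);
  }
}
```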

OS

macos

Node version

node 16

Library version

3.1.0

@smaldd14 smaldd14 added the bug label on Jan 10, 2023
@smaldd14
Author

@schnerd Any idea what could be going on here?

@smaldd14
Author

It seems like this issue has been resolved. I can now call createCompletion with prompts > 478 characters. The API seems to be changing every day...

@ericwangqing

You've hit the token limit. It is 4096 tokens, including both the prompt and the completion. Here your completion is set to up to 4000 tokens, which means your prompt has to be fewer than 96 tokens.
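
A rough way to stay inside the limit is to budget max_tokens from an estimate of the prompt length; a sketch, assuming ~4 characters per token (an exact count would need a tokenizer such as the gpt-3-encoder package):

```js
// Rough sketch: keep prompt tokens + completion tokens within the 4096-token limit.
// The length/4 heuristic is only an approximation of the prompt's token count;
// a real tokenizer (e.g. the gpt-3-encoder package) gives exact numbers.
const MODEL_CONTEXT_LIMIT = 4096;

const estimatedPromptTokens = Math.ceil(p.length / 4);
const maxCompletionTokens = Math.max(1, MODEL_CONTEXT_LIMIT - estimatedPromptTokens);

const response = await openai.createCompletion({
  model: "text-davinci-003",
  prompt: p,
  max_tokens: maxCompletionTokens, // leaves room for the prompt
  temperature: 0.0,
});
```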
