
ChatCompletionStream.fromReadableStream errors due to missing finish_reason for choice #499

Closed
eliasm307 opened this issue Nov 14, 2023 · 10 comments
Labels
bug

Comments

@eliasm307

Confirm this is a Node library issue and not an underlying OpenAI API issue

  • This is an issue with the Node library

Describe the bug

When trying to use the API described here https://github.com/openai/openai-node/blob/2242688f14d5ab7dbf312d92a99fa4a7394907dc/examples/stream-to-client-browser.ts

I'm getting an error at the following point:

[screenshot of the error]

where the actual choices look like this:

[screenshot of the streamed choices]

It looks like the code expects finish_reason to be populated, but the finish details are now in a property called finish_details?
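Concretely, the final streamed chunk looked something like this at the time (an illustrative sketch based on the screenshots above; the exact contents of finish_details are an assumption):

// Illustrative final chunk: finish_reason is null, and the information
// the SDK expects there sits in finish_details instead (shape assumed).
const lastChunk = {
  id: 'chatcmpl-...',
  object: 'chat.completion.chunk',
  choices: [
    {
      index: 0,
      delta: {},
      finish_reason: null,              // what the SDK checks
      finish_details: { type: 'stop' }, // what the API actually sent
    },
  ],
};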

To Reproduce

Set up a server that responds with chat completion streams.

Then in the client try to use the ChatCompletionStream.fromReadableStream API, e.g.:

import { ChatCompletionStream } from 'openai/lib/ChatCompletionStream';

const runner = ChatCompletionStream.fromReadableStream(res.body);
await runner.finalChatCompletion(); // throws: missing finish_reason for choice 0
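For completeness, a minimal sketch of the server side described above, assuming a runtime with Fetch-style Response objects (the handler name, model, and prompt here are placeholders):

import OpenAI from 'openai';

const openai = new OpenAI();

// Stream a chat completion back to the browser as a ReadableStream,
// which the client then rehydrates with fromReadableStream().
export async function handler(): Promise<Response> {
  const stream = openai.beta.chat.completions.stream({
    model: 'gpt-4-vision-preview',
    messages: [{ role: 'user', content: 'Say hello' }],
  });
  return new Response(stream.toReadableStream());
}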

Code snippets

No response

OS

Windows

Node version

18.12.1

Library version

4.16.1

eliasm307 added the bug label Nov 14, 2023
petrgazarov added a commit to petrgazarov/openai-node that referenced this issue Nov 15, 2023
@petrgazarov

I'm seeing this only when using the gpt-4-vision-preview model. Other models still return the finish_reason param, it seems.

Here is a workaround patch: petrgazarov@093fcbd. I'm happy to send a PR if the maintainers are interested.
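A rough sketch of the idea behind such a workaround (types and field shapes assumed here, not copied from the linked commit): fill in finish_reason from finish_details before the SDK finalizes the completion.

// Hypothetical normalization helper: copy finish_details.type into
// finish_reason when the latter is missing, so the SDK's finalization
// step finds the field it expects. Field shapes are assumptions.
type StreamedChoice = {
  finish_reason?: string | null;
  finish_details?: { type?: string } | null;
  [key: string]: unknown;
};

function normalizeChoice(choice: StreamedChoice): StreamedChoice {
  if (choice.finish_reason == null && choice.finish_details?.type) {
    return { ...choice, finish_reason: choice.finish_details.type };
  }
  return choice;
}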

@eliasm307
Author

@petrgazarov thanks for looking into this. I'm also getting this issue when using gpt-4-vision-preview; I didn't think it was relevant, so I didn't mention it initially.

I haven't had this issue before, so I guess that's the key difference.

@rattrayalex
Collaborator

Thanks for reporting, we're investigating.

@jonluca

jonluca commented Dec 10, 2023

Is this something that openai needs to fix upstream? This is causing streamed responses using gpt-4-vision-preview and vercel/ai to break.

@rattrayalex
Collaborator

Hey @jonluca! Yes, this needs to be fixed on the backend. Let me check with the team about this; I'm sorry that's still happening.

@jonluca

jonluca commented Dec 11, 2023

Yeah, the API returns finish_details. I'm actually not getting an exception now though, so I'm not sure what was done internally to fix/clean that up.

[screenshot of the streamed response showing finish_details]

@vitorfdl

I'm still receiving the error when using the openai package.

@rattrayalex
Collaborator

@vitorfdl what is the error you're getting?

@iterprise

iterprise commented Dec 15, 2023

const messages = [
  {
    role: 'user',
    content: [
      {
        type: 'image_url',
        image_url: 'bla',
      },
    ],
  },
];

const stream = await openai.beta.chat.completions.stream({
  model,
  messages,
  stream: true,
});

stream.on('content', (delta, snapshot) => {
  console.log(delta);
});

stream.finalChatCompletion().then(() => {});

I got an error:

(node:1824057) UnhandledPromiseRejectionWarning: Error: missing finish_reason for choice 0
    at /node_modules/openai/lib/ChatCompletionStream.mjs:213:23
    at Array.map (<anonymous>)
    at finalizeChatCompletion (/node_modules/openai/lib/ChatCompletionStream.mjs:211:26)
    at ChatCompletionStream._ChatCompletionStream_endRequest (/node_modules/openai/lib/ChatCompletionStream.mjs:107:16)
    at ChatCompletionStream._createChatCompletion (/node_modules/openai/lib/ChatCompletionStream.mjs:58:141)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async ChatCompletionStream._runChatCompletion (/node_modules/openai/lib/AbstractChatCompletionRunner.mjs:312:16)
(Use `electron --trace-warnings ...` to show where the warning was created)
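As an aside, the error surfaces as an UnhandledPromiseRejectionWarning because the bare .then(() => {}) above attaches no rejection handler; awaiting the promise in a try/catch (a minimal sketch, using the same stream variable) raises the same error as a catchable exception.

// Same call, but with the rejection handled so the failure is a
// catchable exception rather than an unhandled-rejection warning:
try {
  await stream.finalChatCompletion();
} catch (err) {
  console.error(err); // Error: missing finish_reason for choice 0
}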

@rattrayalex
Collaborator

Thanks! Great news, this has been fixed in the API. Please file a new issue if you see any further problems.
