Conversation

paulhdk

@paulhdk paulhdk commented Aug 14, 2024

In the /chat/completions endpoint definition under x-oaiMeta it says:

x-oaiMeta:
    name: Create chat completion
    group: chat
    returns: |
        Returns a [chat completion](/docs/api-reference/chat/object) object, or a streamed sequence of [chat completion chunk](/docs/api-reference/chat/streaming) objects if the request is streamed.

The "streamed sequence of chat completion chunk objects" refers, I believe, to the CreateChatCompletionStreamResponse type, which is defined in the spec but never explicitly linked to the createChatCompletion() operation.

Because the spec never declares that createChatCompletion() may return a stream of chunk objects, tools such as the swift-openapi-generator package, which I have been using to implement the OpenAI OpenAPI spec, cannot generate correct code for streamed responses.
For more context, see the corresponding issue in the swift-openapi-generator repo.

This PR links the CreateChatCompletionStreamResponse type to the createChatCompletion() function.
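
For illustration, such a link is typically expressed by declaring a second response media type on the operation. The following is a minimal sketch of what that could look like, assuming the schema names used in this spec; it is not necessarily the exact diff in this PR:

```yaml
paths:
  /chat/completions:
    post:
      operationId: createChatCompletion
      responses:
        "200":
          description: OK
          content:
            # Non-streamed response: a single chat completion object.
            application/json:
              schema:
                $ref: "#/components/schemas/CreateChatCompletionResponse"
            # Streamed response: server-sent events carrying chunk objects.
            text/event-stream:
              schema:
                $ref: "#/components/schemas/CreateChatCompletionStreamResponse"
```

With both media types declared, a generator can emit distinct decoding paths for JSON and event-stream responses instead of only the non-streamed case.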

@PSchmiedmayer

Thank you @paulhdk for creating the PR; it would be amazing to see this merged to allow generators to pick up on this information and correctly parse streaming requests.

paulhdk added a commit to paulhdk/SpeziLLM that referenced this pull request Aug 27, 2024
endpoint

This enables swift-openapi-generator to generate streamed responses.

See: openai/openai-openapi#311
paulhdk added a commit to paulhdk/SpeziLLM that referenced this pull request Sep 13, 2024
endpoint

This enables swift-openapi-generator to generate streamed responses.

See: openai/openai-openapi#311
paulhdk added a commit to StanfordSpezi/SpeziLLM that referenced this pull request Dec 16, 2024
endpoint

This enables swift-openapi-generator to generate streamed responses.

See: openai/openai-openapi#311
@PSchmiedmayer

@kwhinnery-openai Wondering why this PR was closed; would it make sense to add this to the specification so that streams can be parsed properly based on the spec? It would be great to have this in here so we can avoid maintaining a copy of the spec for our client stubs. Thank you for the support and for maintaining this repo!

CC: @paulhdk.
