AI Transport: Add a guide for token streaming using the OpenAI SDK #3024
base: AIT-129-AIT-Docs-release-branch
Conversation
Force-pushed from deae500 to 3d12fdf
> You should see publisher output similar to the following:
>
> ```text
@GregHolmes is there some sort of collapsible component that I can use for this (like a `<details>`, I guess)? I'd like to include this output but not force the user to scroll through it all if just skimming.
> This is only a representative example for a simple "text in, text out" use case, and may not reflect the exact sequence of events that you'll observe from the OpenAI API. It also does not handle response generation errors or refusals.
> </Aside>
>
> 1. `{ type: 'response.created', response: { id: 'resp_abc123', … }, … }`: This gives us the response ID, which we'll include in all messages that we publish to Ably for this response. When we receive this event, we'll publish an Ably message named `start`.
@GregHolmes this list, and these JSON events, came out looking a bit cramped and ugly — any suggestions on what I could do to improve this?
I think having all the braces makes this look more complicated than it is, and it's not helping the spacing. If you put it into a table, you could have a column with the heading `type` for the OpenAI messages and just put `response.in_progress` or `response.output_item.added`, then a column for any other interesting fields and a column with the Ably message mapping...?
Alternatively, keep the list format but just put the type name into the text and highlight important fields separately, e.g.
3. `response.output_item.added` with `item.type = 'reasoning'`: we'll ignore this event since we're only interested in messages
The more I think about this, the more I think we should skip this list. I love the technical detail, but it isn't reflected in the code (because we're handling only the specific events we need) and it doesn't aid understanding of Ably. So, I would suggest making the "Understanding Responses API events" section into a clear description of the flow of relevant events we get back from OpenAI, then have a table showing how we'll map those relevant messages to Ably events.
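For anyone skimming this thread, the mapping being discussed looks roughly like this in code. Only the `start` message name appears in the quoted list; the `token` and `end` names and the payload shapes are illustrative assumptions, not the guide's actual code:

```javascript
// Sketch of the event-to-Ably mapping under discussion. `channel` is an Ably
// channel, `event` is one item from the OpenAI response stream, and `state`
// holds the response ID captured from response.created.
async function relayEvent(channel, event, state) {
  switch (event.type) {
    case 'response.created':
      state.responseId = event.response.id;
      await channel.publish('start', { responseId: state.responseId });
      break;
    case 'response.output_text.delta':
      await channel.publish('token', { responseId: state.responseId, text: event.delta });
      break;
    case 'response.completed':
      await channel.publish('end', { responseId: state.responseId });
      break;
    default:
      // response.in_progress, reasoning output items, etc. are ignored.
      break;
  }
}
```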
Force-pushed from 3d12fdf to 9876218
Force-pushed from 9876218 to 281713a
Force-pushed from 281713a to 4f188f3
Force-pushed from 0e663af to 52e32d8
> **Software requirements:**
>
> - Node.js 20 or higher
>
> We'll be using the OpenAI SDK v4.x.
Nit - I wouldn't put this in pre-reqs, because you're going to cover installing it below. Instead, I'd mention the version inline right before the install snippet at line 43
Sure — I've added the following in the installing section (the only reason to mention the version number was to highlight that we aren't covering all possible OpenAI SDK versions):
<Aside data-type="note">
We're using version 4.x of the OpenAI SDK. Some details of interacting with the OpenAI SDK may diverge from those given here if you're using a different major version.
</Aside>
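For reference, the client setup the guide relies on under v4.x is minimal; a sketch, noting that the SDK reads `OPENAI_API_KEY` from the environment when no key is passed explicitly:

```javascript
import OpenAI from 'openai';

// v4.x defaults to process.env.OPENAI_API_KEY when no apiKey option is given.
const openai = new OpenAI();
```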
> 1. `{ type: 'response.created', response: { id: 'resp_abc123', … }, … }`: This gives us the response ID, which we'll include in all messages that we publish to Ably for this response. When we receive this event, we'll publish an Ably message named `start`.
> 2. `{ type: 'response.in_progress', … }`: We'll ignore this event.
> 3. `{ type: 'response.output_item.added', output_index: 0, item: { type: 'reasoning', … }, … }`: We'll ignore this event since its `item.type` is not `message`.
I think we're ignoring this event because we're not processing reasoning, we're only processing the response message. Either say this here or put it in some summary text at the top of the list
> ## Understanding Responses API events <a id="understanding-events"/>
>
> OpenAI's Responses API streams results as a series of events when you set `stream: true`. The primary event type for text generation is `response.output_text.delta`, which contains incremental text content.
I think this is missing a sentence describing the overall response structure hierarchy (an output array of content items, and one of those content items is the response, which is streamed as `response.output_text.delta` events). That will then make the list of events flow more naturally, because I'm expecting the hierarchy.
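A minimal sketch of that hierarchy as seen from the SDK, where the model name and prompt are placeholders:

```javascript
import OpenAI from 'openai';

const openai = new OpenAI();

// With stream: true, responses.create() returns an async iterable of events.
const stream = await openai.responses.create({
  model: 'gpt-4o-mini', // placeholder model
  input: 'Say hello in five words.',
  stream: true,
});

for await (const event of stream) {
  if (event.type === 'response.output_text.delta') {
    // The response text itself arrives incrementally in these delta events.
    process.stdout.write(event.delta);
  } else {
    // Lifecycle and structural events: response.created, response.in_progress,
    // response.output_item.added, response.completed, and so on.
    console.log(`\n[${event.type}]`);
  }
}
```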
> **Key points**:
>
> - **Multiple concurrent responses are handled correctly**: The subscriber receives interleaved tokens for three concurrent AI responses, and correctly pieces together the three separate messages:
I never saw interleaved responses when I ran this script, any idea why? Luck, location, something else? It's not particularly important but I just want to make sure there isn't a change in the example code that is causing the behaviour
I was only seeing them intermittently when I first wrote it, and now I'm unable to get any at all, too! I think it would be good if users could observe this behaviour. One option would be to add small random delays before processing each event, what do you think?
(It's not ideal and distracts from the content of the guide)
Alternatively, we could spin up two publisher instances at the same time: `node publisher.mjs & node publisher.mjs` — on testing this locally it seems to give interleaved events more reliably. But again it complicates the guide.
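However the demo is run, the reason interleaving is safe is that the subscriber keys its buffers on the response ID. A rough sketch, with the channel name and payload fields assumed rather than taken from the guide:

```javascript
import Ably from 'ably';

const realtime = new Ably.Realtime(process.env.ABLY_API_KEY);
const channel = realtime.channels.get('ai-responses'); // assumed channel name

// Buffer tokens per response ID so concurrent streams don't get mixed up.
const buffers = new Map();

await channel.subscribe((message) => {
  const { responseId, text } = message.data ?? {};
  switch (message.name) {
    case 'start':
      buffers.set(responseId, '');
      break;
    case 'token':
      buffers.set(responseId, (buffers.get(responseId) ?? '') + text);
      break;
    case 'end':
      console.log(`Response ${responseId}:`, buffers.get(responseId));
      buffers.delete(responseId);
      break;
  }
});
```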
> ### Publisher code
>
> Add the following code to `publisher.mjs`:
I think this would benefit from a quick summary of what the code is doing e.g. "The publisher needs to send the prompt to OpenAI and then process the response stream, publishing relevant events to the Ably channel". Same for the subscriber code snippet
Zak has some good examples in #3018
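Along the lines of that suggested summary, the publisher's whole job fits on one screen. A sketch, not the guide's actual snippet, with the channel name and model as placeholders:

```javascript
import OpenAI from 'openai';
import Ably from 'ably';

// The publisher sends the prompt to OpenAI, then processes the response
// stream, publishing the relevant events to the Ably channel.
const openai = new OpenAI();
const realtime = new Ably.Realtime(process.env.ABLY_API_KEY);
const channel = realtime.channels.get('ai-responses'); // assumed channel name

const stream = await openai.responses.create({
  model: 'gpt-4o-mini', // placeholder model
  input: 'Write a one-line poem about carrot cake',
  stream: true,
});

for await (const event of stream) {
  // Relay only the events we care about (see the mapping sketch above).
  if (event.type === 'response.output_text.delta') {
    await channel.publish('token', { text: event.delta });
  }
}

realtime.close();
```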
Force-pushed from 4f188f3 to cabebd0
@rainbowFi I've updated the publisher to add an additional prompt ("Write a one-line poem about carrot cake"); I missed this out of the original code but it was used when generating the responses shown here.
Force-pushed from cabebd0 to f827802
Description
This guide provides a concrete example of how to implement the message-per-token pattern that Mike documented in #3014.
I initially got Claude to generate this but replaced a fair chunk of its output. I trusted that its prose is consistent with our tone of voice and AI Transport marketing position (whether mine is, I have no idea) and in general trusted its judgements about how to structure the document. I would definitely welcome opinions on all of the above, especially from those familiar with how we usually write docs.
I have tried to avoid repeating too much content from the message-per-token page and have in particular not tried to give an example of hydration since it seems like a provider-agnostic concept.