Description
Here's a script that reproduces the issue:
#!/usr/bin/env ruby
# frozen_string_literal: true
require_relative "../../lib/openai"
require "monitor"
client = OpenAI::Client.new
MODEL = "o4-mini"
INSTRUCTIONS = "You are a creative storyteller."
begin
  lock = Monitor.new

  # First request
  response_id = ""
  lock.enter
  stream = client.responses.stream(
    model: MODEL,
    instructions: INSTRUCTIONS,
    input: "Tell me a very short story about a robot learning to paint."
  )
  stream.each do |event|
    case event
    when OpenAI::Models::Responses::ResponseCreatedEvent
      response_id = event.response.id if response_id.empty?
    when OpenAI::Streaming::ResponseTextDeltaEvent
      print(event.delta)
    when OpenAI::Streaming::ResponseTextDoneEvent
      puts
      puts("---")
      lock.exit
    end
  end

  # Second request
  lock.enter
  stream = client.responses.stream(
    model: MODEL,
    instructions: INSTRUCTIONS,
    input: "Tell me another one!",
    previous_response_id: response_id
  )
  stream.each do |event|
    case event
    when OpenAI::Streaming::ResponseTextDeltaEvent
      print(event.delta)
    when OpenAI::Streaming::ResponseTextDoneEvent
      lock.exit
    end
  end
end

This fails with the following error:
/Users/garriguv/Developer/openai/openai-ruby/lib/openai/internal/transport/base_client.rb:410:in `send_request': {:url=>"https://api.openai.com/v1/responses/resp_060184248f04d3090068ed6a286e188194b078199a93f23877?stream=true", :status=>400, :body=>{:error=>{:message=>"This response cannot be streamed because it was not created with background=true.", :type=>"invalid_request_error", :param=>"stream", :code=>"invalid_request_error"}}} (OpenAI::Errors::BadRequestError)
from /Users/garriguv/Developer/openai/openai-ruby/lib/openai/internal/transport/base_client.rb:483:in `request'
from /Users/garriguv/Developer/openai/openai-ruby/lib/openai/resources/responses.rb:401:in `retrieve_streaming_internal'
from /Users/garriguv/Developer/openai/openai-ruby/lib/openai/resources/responses.rb:201:in `stream'
from examples/responses/streaming_previous_response.rb:41:in `<main>'
I think this is because the stream method in the responses.rb resource, when given previous_response_id, attempts to resume streaming an existing response instead of creating a new one.
This code path is executed:

openai-ruby/lib/openai/resources/responses.rb, lines 199 to 205 at 2f30172:

retrieve_params = params.slice(:include, :request_options)
raw_stream = retrieve_streaming_internal(
  previous_response_id,
  params: retrieve_params,
  unwrap: unwrap
)
When this is the code path that should be executed:

openai-ruby/lib/openai/resources/responses.rb, lines 207 to 218 at 2f30172:

parsed[:stream] = true
raw_stream = @client.request(
  method: :post,
  path: "responses",
  headers: {"accept" => "text/event-stream"},
  body: parsed,
  stream: OpenAI::Internal::Stream,
  model: OpenAI::Models::Responses::ResponseStreamEvent,
  unwrap: unwrap,
  options: options
)
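Condensed, the dispatch inside stream effectively amounts to this (a simplified sketch based on the excerpts above, not the verbatim source):

# Simplified sketch of the dispatch in Responses#stream, based on the
# excerpts above. The resume branch hits /responses/{id}?stream=true,
# which the API rejects unless the response was created with
# background: true.
if previous_response_id
  # resume streaming an existing (background) response
else
  # create a new streaming response via POST /responses
end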
Modifying the stream method to only resume when both previous_response_id and starting_after are provided solves the issue:

- if previous_response_id
+ if previous_response_id && starting_after

This would be a breaking change, so I'm not sure how you'd prefer to handle it. I think it would be nicer to differentiate between creating and resuming a streaming response.
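For instance, the two operations could live behind separate entry points. This is only a hypothetical sketch of what such an API could look like; the method name resume_stream, its signature, and last_event_sequence_number are invented here for illustration:

# Hypothetical API split (resume_stream is invented, not part of the SDK):

# Always creates a new streaming response, even when chaining from a
# previous one via previous_response_id.
stream = client.responses.stream(
  model: MODEL,
  input: "Tell me another one!",
  previous_response_id: response_id
)

# Explicitly resumes the stream of an existing background response.
stream = client.responses.resume_stream(
  response_id,
  starting_after: last_event_sequence_number
)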
Interestingly, both the Python SDK (https://github.com/openai/openai-python/blob/e5f93f5daee9f3fc7646833ac235b1693f192a56/src/openai/resources/responses/responses.py#L1052-L1070) and the Node SDK (https://github.com/openai/openai-node/blob/master/src/lib/responses/ResponseStream.ts#L179-L191) have similar logic, so maybe I'm just not using the API correctly?
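As a stopgap, chaining through the non-streaming create path seems to avoid the problem, since it always issues a POST (an untested sketch; it assumes responses.create accepts the same parameters as stream):

# Untested workaround sketch: fall back to a non-streaming create, which
# takes the POST /responses path and accepts previous_response_id.
response = client.responses.create(
  model: MODEL,
  instructions: INSTRUCTIONS,
  input: "Tell me another one!",
  previous_response_id: response_id
)
# extract the text from response.output as needed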
What's your take?