Merged
2 changes: 1 addition & 1 deletion .release-please-manifest.json
Original file line number Diff line number Diff line change
@@ -1,3 +1,3 @@
{
".": "0.35.2"
".": "0.36.0"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 135
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-eeba8addf3a5f412e5ce8d22031e60c61650cee3f5d9e587a2533f6818a249ea.yml
openapi_spec_hash: 0a4d8ad2469823ce24a3fd94f23f1c2b
config_hash: 0bb1941a78ece0b610a2fbba7d74a84c
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-ca24bc4d8125b5153514ce643c4e3220f25971b7d67ca384d56d493c72c0d977.yml
openapi_spec_hash: c6f048c7b3d29f4de48fde0e845ba33f
config_hash: b876221dfb213df9f0a999e75d38a65e
8 changes: 8 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,13 @@
# Changelog

## 0.36.0 (2025-11-13)

Full Changelog: [v0.35.2...v0.36.0](https://github.com/openai/openai-ruby/compare/v0.35.2...v0.36.0)

### Features

* **api:** gpt 5.1 ([26ece0e](https://github.com/openai/openai-ruby/commit/26ece0eb68486e40066c89f626b9a83c4f274889))

## 0.35.2 (2025-11-05)

Full Changelog: [v0.35.1...v0.35.2](https://github.com/openai/openai-ruby/compare/v0.35.1...v0.35.2)
2 changes: 1 addition & 1 deletion Gemfile.lock
@@ -11,7 +11,7 @@ GIT
PATH
remote: .
specs:
openai (0.35.2)
openai (0.36.0)
connection_pool

GEM
36 changes: 21 additions & 15 deletions README.md
@@ -15,7 +15,7 @@ To use this gem, install via Bundler by adding the following to your application
<!-- x-release-please-start-version -->

```ruby
gem "openai", "~> 0.35.2"
gem "openai", "~> 0.36.0"
```

<!-- x-release-please-end -->
@@ -30,7 +30,10 @@ openai = OpenAI::Client.new(
api_key: ENV["OPENAI_API_KEY"] # This is the default and can be omitted
)

chat_completion = openai.chat.completions.create(messages: [{role: "user", content: "Say this is a test"}], model: :"gpt-5")
chat_completion = openai.chat.completions.create(
messages: [{role: "user", content: "Say this is a test"}],
model: :"gpt-5.1"
)

puts(chat_completion)
```
@@ -42,7 +45,7 @@ We provide support for streaming responses using Server-Sent Events (SSE).
```ruby
stream = openai.responses.stream(
input: "Write a haiku about OpenAI.",
model: :"gpt-5"
model: :"gpt-5.1"
)

stream.each do |event|
@@ -340,7 +343,7 @@ openai = OpenAI::Client.new(
# Or, configure per-request:
openai.chat.completions.create(
messages: [{role: "user", content: "How can I get the name of the current day in JavaScript?"}],
model: :"gpt-5",
model: :"gpt-5.1",
request_options: {max_retries: 5}
)
```
@@ -358,7 +361,7 @@ openai = OpenAI::Client.new(
# Or, configure per-request:
openai.chat.completions.create(
messages: [{role: "user", content: "How can I list all files in a directory using Python?"}],
model: :"gpt-5",
model: :"gpt-5.1",
request_options: {timeout: 5}
)
```
@@ -393,7 +396,7 @@ Note: the `extra_` parameters of the same name override the documented parameters.
chat_completion =
openai.chat.completions.create(
messages: [{role: "user", content: "How can I get the name of the current day in JavaScript?"}],
model: :"gpt-5",
model: :"gpt-5.1",
request_options: {
extra_query: {my_query_parameter: value},
extra_body: {my_body_parameter: value},
@@ -441,20 +444,23 @@ You can provide typesafe request parameters like so:
```ruby
openai.chat.completions.create(
messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(content: "Say this is a test")],
model: :"gpt-5"
model: :"gpt-5.1"
)
```

Or, equivalently:

```ruby
# Hashes work, but are not typesafe:
openai.chat.completions.create(messages: [{role: "user", content: "Say this is a test"}], model: :"gpt-5")
openai.chat.completions.create(
messages: [{role: "user", content: "Say this is a test"}],
model: :"gpt-5.1"
)

# You can also splat a full Params class:
params = OpenAI::Chat::CompletionCreateParams.new(
messages: [OpenAI::Chat::ChatCompletionUserMessageParam.new(content: "Say this is a test")],
model: :"gpt-5"
model: :"gpt-5.1"
)
openai.chat.completions.create(**params)
```
@@ -464,25 +470,25 @@ openai.chat.completions.create(**params)
Since this library does not depend on `sorbet-runtime`, it cannot provide [`T::Enum`](https://sorbet.org/docs/tenum) instances. Instead, we provide "tagged symbols", which are always primitives at runtime:

```ruby
# :minimal
puts(OpenAI::ReasoningEffort::MINIMAL)
# :"in-memory"
puts(OpenAI::Chat::CompletionCreateParams::PromptCacheRetention::IN_MEMORY)

# Revealed type: `T.all(OpenAI::ReasoningEffort, Symbol)`
T.reveal_type(OpenAI::ReasoningEffort::MINIMAL)
# Revealed type: `T.all(OpenAI::Chat::CompletionCreateParams::PromptCacheRetention, Symbol)`
T.reveal_type(OpenAI::Chat::CompletionCreateParams::PromptCacheRetention::IN_MEMORY)
```

Enum parameters have a "relaxed" type, so you can either pass in enum constants or their literal value:

```ruby
# Using the enum constants preserves the tagged type information:
openai.chat.completions.create(
reasoning_effort: OpenAI::ReasoningEffort::MINIMAL,
prompt_cache_retention: OpenAI::Chat::CompletionCreateParams::PromptCacheRetention::IN_MEMORY,
# …
)

# Literal values are also permissible:
openai.chat.completions.create(
reasoning_effort: :minimal,
prompt_cache_retention: :"in-memory",
# …
)
```
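The "tagged symbol" design above is why enum constants and literal symbols are interchangeable. As a standalone sketch (a hypothetical mirror of the SDK's `PromptCacheRetention` constants, not the SDK module itself), the constant is nothing more than a plain Ruby `Symbol` at runtime:

```ruby
# Minimal sketch of the tagged-symbol pattern: the enum "constant" is a
# plain Symbol, so it compares equal to the literal spelling of the value.
module PromptCacheRetention
  IN_MEMORY = :"in-memory" # hypothetical mirror of the SDK constant
end

puts PromptCacheRetention::IN_MEMORY                   # in-memory
puts PromptCacheRetention::IN_MEMORY == :"in-memory"   # true
puts PromptCacheRetention::IN_MEMORY.is_a?(Symbol)     # true
```

Because the value is a primitive, serialization and comparison need no special handling; only static type checkers see the extra tag.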
9 changes: 9 additions & 0 deletions lib/openai.rb
@@ -528,15 +528,19 @@
require_relative "openai/models/response_format_text"
require_relative "openai/models/response_format_text_grammar"
require_relative "openai/models/response_format_text_python"
require_relative "openai/models/responses/apply_patch_tool"
require_relative "openai/models/responses/computer_tool"
require_relative "openai/models/responses/custom_tool"
require_relative "openai/models/responses/easy_input_message"
require_relative "openai/models/responses/file_search_tool"
require_relative "openai/models/responses/function_shell_tool"
require_relative "openai/models/responses/function_tool"
require_relative "openai/models/responses/input_item_list_params"
require_relative "openai/models/responses/input_token_count_params"
require_relative "openai/models/responses/input_token_count_response"
require_relative "openai/models/responses/response"
require_relative "openai/models/responses/response_apply_patch_tool_call"
require_relative "openai/models/responses/response_apply_patch_tool_call_output"
require_relative "openai/models/responses/response_audio_delta_event"
require_relative "openai/models/responses/response_audio_done_event"
require_relative "openai/models/responses/response_audio_transcript_delta_event"
@@ -576,6 +580,9 @@
require_relative "openai/models/responses/response_function_call_arguments_done_event"
require_relative "openai/models/responses/response_function_call_output_item"
require_relative "openai/models/responses/response_function_call_output_item_list"
require_relative "openai/models/responses/response_function_shell_call_output_content"
require_relative "openai/models/responses/response_function_shell_tool_call"
require_relative "openai/models/responses/response_function_shell_tool_call_output"
require_relative "openai/models/responses/response_function_tool_call_item"
require_relative "openai/models/responses/response_function_tool_call_output_item"
require_relative "openai/models/responses/response_function_web_search"
@@ -634,10 +641,12 @@
require_relative "openai/models/responses/response_web_search_call_searching_event"
require_relative "openai/models/responses/tool"
require_relative "openai/models/responses/tool_choice_allowed"
require_relative "openai/models/responses/tool_choice_apply_patch"
require_relative "openai/models/responses/tool_choice_custom"
require_relative "openai/models/responses/tool_choice_function"
require_relative "openai/models/responses/tool_choice_mcp"
require_relative "openai/models/responses/tool_choice_options"
require_relative "openai/models/responses/tool_choice_shell"
require_relative "openai/models/responses/tool_choice_types"
require_relative "openai/models/responses/web_search_preview_tool"
require_relative "openai/models/responses/web_search_tool"
12 changes: 6 additions & 6 deletions lib/openai/internal/type/enum.rb
@@ -19,23 +19,23 @@ module Type
# @example
# # `chat_model` is a `OpenAI::ChatModel`
# case chat_model
# when OpenAI::ChatModel::GPT_5
# when OpenAI::ChatModel::GPT_5_1
# # ...
# when OpenAI::ChatModel::GPT_5_MINI
# when OpenAI::ChatModel::GPT_5_1_2025_11_13
# # ...
# when OpenAI::ChatModel::GPT_5_NANO
# when OpenAI::ChatModel::GPT_5_1_CODEX
# # ...
# else
# puts(chat_model)
# end
#
# @example
# case chat_model
# in :"gpt-5"
# in :"gpt-5.1"
# # ...
# in :"gpt-5-mini"
# in :"gpt-5.1-2025-11-13"
# # ...
# in :"gpt-5-nano"
# in :"gpt-5.1-codex"
# # ...
# else
# puts(chat_model)
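The updated doc comment relies on Ruby's `case`/`in` pattern matching over symbols. A self-contained sketch (plain symbols only, no SDK required; the classification labels are illustrative) shows how that dispatch behaves:

```ruby
# Pattern-match a model identifier the same way the enum doc comment does:
# symbols match against symbol literals with `in` (Ruby 2.7+).
def classify_model(chat_model)
  case chat_model
  in :"gpt-5.1" then "base gpt-5.1"
  in :"gpt-5.1-2025-11-13" then "dated snapshot"
  in :"gpt-5.1-codex" then "codex variant"
  else "other model: #{chat_model}"
  end
end

puts classify_model(:"gpt-5.1")  # base gpt-5.1
puts classify_model(:"gpt-4o")   # other model: gpt-4o
```

The `else` arm plays the same role as the fallback branch in the doc comment: unrecognized but valid symbols still flow through rather than raising.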
15 changes: 9 additions & 6 deletions lib/openai/models/batch_create_params.rb
@@ -16,9 +16,10 @@ class BatchCreateParams < OpenAI::Internal::Type::BaseModel

# @!attribute endpoint
# The endpoint to be used for all requests in the batch. Currently
# `/v1/responses`, `/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions`
# are supported. Note that `/v1/embeddings` batches are also restricted to a
# maximum of 50,000 embedding inputs across all requests in the batch.
# `/v1/responses`, `/v1/chat/completions`, `/v1/embeddings`, `/v1/completions`,
# and `/v1/moderations` are supported. Note that `/v1/embeddings` batches are also
# restricted to a maximum of 50,000 embedding inputs across all requests in the
# batch.
#
# @return [Symbol, OpenAI::Models::BatchCreateParams::Endpoint]
required :endpoint, enum: -> { OpenAI::BatchCreateParams::Endpoint }
Expand Down Expand Up @@ -83,16 +84,18 @@ module CompletionWindow
end

# The endpoint to be used for all requests in the batch. Currently
# `/v1/responses`, `/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions`
# are supported. Note that `/v1/embeddings` batches are also restricted to a
# maximum of 50,000 embedding inputs across all requests in the batch.
# `/v1/responses`, `/v1/chat/completions`, `/v1/embeddings`, `/v1/completions`,
# and `/v1/moderations` are supported. Note that `/v1/embeddings` batches are also
# restricted to a maximum of 50,000 embedding inputs across all requests in the
# batch.
module Endpoint
extend OpenAI::Internal::Type::Enum

V1_RESPONSES = :"/v1/responses"
V1_CHAT_COMPLETIONS = :"/v1/chat/completions"
V1_EMBEDDINGS = :"/v1/embeddings"
V1_COMPLETIONS = :"/v1/completions"
V1_MODERATIONS = :"/v1/moderations"

# @!method self.values
# @return [Array<Symbol>]
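To make the enum change concrete, here is a small sketch (a hypothetical guard, not part of the SDK) that mirrors the endpoint list after this change, including the newly added `/v1/moderations`:

```ruby
# Batch endpoints supported after this change, per the updated enum.
SUPPORTED_BATCH_ENDPOINTS = [
  :"/v1/responses",
  :"/v1/chat/completions",
  :"/v1/embeddings",
  :"/v1/completions",
  :"/v1/moderations" # newly added in this release
].freeze

# Hypothetical check one might run before submitting a batch.
def valid_batch_endpoint?(endpoint)
  SUPPORTED_BATCH_ENDPOINTS.include?(endpoint.to_sym)
end

puts valid_batch_endpoint?("/v1/moderations") # true
puts valid_batch_endpoint?("/v1/images")      # false
```

Note that the 50,000-input cap called out in the doc comment applies only to `/v1/embeddings` batches; this sketch checks endpoint membership, not request limits.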
14 changes: 9 additions & 5 deletions lib/openai/models/beta/assistant_create_params.rb
@@ -51,12 +51,16 @@ class AssistantCreateParams < OpenAI::Internal::Type::BaseModel
# @!attribute reasoning_effort
# Constrains effort on reasoning for
# [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
# supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
# supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
# reasoning effort can result in faster responses and fewer tokens used on
# reasoning in a response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
# - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
# reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
# calls are supported for all reasoning values in gpt-5.1.
# - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
# support `none`.
# - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
14 changes: 9 additions & 5 deletions lib/openai/models/beta/assistant_update_params.rb
@@ -51,12 +51,16 @@ class AssistantUpdateParams < OpenAI::Internal::Type::BaseModel
# @!attribute reasoning_effort
# Constrains effort on reasoning for
# [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
# supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
# supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
# reasoning effort can result in faster responses and fewer tokens used on
# reasoning in a response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
# - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
# reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
# calls are supported for all reasoning values in gpt-5.1.
# - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
# support `none`.
# - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
16 changes: 10 additions & 6 deletions lib/openai/models/beta/threads/run_create_params.rb
@@ -109,12 +109,16 @@ class RunCreateParams < OpenAI::Internal::Type::BaseModel
# @!attribute reasoning_effort
# Constrains effort on reasoning for
# [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
# supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
# effort can result in faster responses and fewer tokens used on reasoning in a
# response.
#
# Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
# effort.
# supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
# reasoning effort can result in faster responses and fewer tokens used on
# reasoning in a response.
#
# - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
# reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
# calls are supported for all reasoning values in gpt-5.1.
# - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
# support `none`.
# - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
#
# @return [Symbol, OpenAI::Models::ReasoningEffort, nil]
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
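The defaults bulleted in the updated `reasoning_effort` doc comments can be summarized in a sketch (a hypothetical helper reflecting only the documented behavior, not SDK code):

```ruby
# Documented default reasoning effort per model family:
# gpt-5.1 defaults to :none, gpt-5-pro defaults to (and only supports)
# :high, and models before gpt-5.1 default to :medium.
def default_reasoning_effort(model)
  model = model.to_s
  if model.start_with?("gpt-5.1")
    :none
  elsif model.start_with?("gpt-5-pro")
    :high
  else
    :medium
  end
end

puts default_reasoning_effort("gpt-5.1")   # none
puts default_reasoning_effort("gpt-5-pro") # high
puts default_reasoning_effort("gpt-4o")    # medium
```

Per the doc comments, `:none` is only accepted by `gpt-5.1`-family models; passing it to earlier models is documented as unsupported, which a stricter version of this helper could also validate.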