2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
{
".": "2.7.2"
".": "2.8.0"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 136
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-eeba8addf3a5f412e5ce8d22031e60c61650cee3f5d9e587a2533f6818a249ea.yml
-openapi_spec_hash: 0a4d8ad2469823ce24a3fd94f23f1c2b
-config_hash: 0bb1941a78ece0b610a2fbba7d74a84c
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-ca24bc4d8125b5153514ce643c4e3220f25971b7d67ca384d56d493c72c0d977.yml
+openapi_spec_hash: c6f048c7b3d29f4de48fde0e845ba33f
+config_hash: b876221dfb213df9f0a999e75d38a65e
13 changes: 13 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,18 @@
# Changelog

+## 2.8.0 (2025-11-13)
+
+Full Changelog: [v2.7.2...v2.8.0](https://github.com/openai/openai-python/compare/v2.7.2...v2.8.0)
+
+### Features
+
+* **api:** gpt 5.1 ([8d9f2ca](https://github.com/openai/openai-python/commit/8d9f2cab4cb2e12f6e2ab1de967f858736a656ac))
+
+
+### Bug Fixes
+
+* **compat:** update signatures of `model_dump` and `model_dump_json` for Pydantic v1 ([c7bd234](https://github.com/openai/openai-python/commit/c7bd234b18239fcdbf0edb1b51ca9116c0ac7251))
+
## 2.7.2 (2025-11-10)

Full Changelog: [v2.7.1...v2.7.2](https://github.com/openai/openai-python/compare/v2.7.1...v2.7.2)
9 changes: 9 additions & 0 deletions api.md
@@ -732,12 +732,16 @@ Types:

```python
from openai.types.responses import (
+ApplyPatchTool,
ComputerTool,
CustomTool,
EasyInputMessage,
FileSearchTool,
+FunctionShellTool,
FunctionTool,
Response,
+ResponseApplyPatchToolCall,
+ResponseApplyPatchToolCallOutput,
ResponseAudioDeltaEvent,
ResponseAudioDoneEvent,
ResponseAudioTranscriptDeltaEvent,
@@ -774,6 +778,9 @@ from openai.types.responses import (
ResponseFunctionCallArgumentsDoneEvent,
ResponseFunctionCallOutputItem,
ResponseFunctionCallOutputItemList,
+ResponseFunctionShellCallOutputContent,
+ResponseFunctionShellToolCall,
+ResponseFunctionShellToolCallOutput,
ResponseFunctionToolCall,
ResponseFunctionToolCallItem,
ResponseFunctionToolCallOutputItem,
@@ -836,10 +843,12 @@ from openai.types.responses import (
ResponseWebSearchCallSearchingEvent,
Tool,
ToolChoiceAllowed,
+ToolChoiceApplyPatch,
ToolChoiceCustom,
ToolChoiceFunction,
ToolChoiceMcp,
ToolChoiceOptions,
+ToolChoiceShell,
ToolChoiceTypes,
WebSearchPreviewTool,
WebSearchTool,
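The additions cover two new tool families: shell execution (`FunctionShellTool`, `ResponseFunctionShellToolCall`, `ToolChoiceShell`) and patch application (`ApplyPatchTool`, `ResponseApplyPatchToolCall`, `ToolChoiceApplyPatch`). A minimal sketch of pulling a few of them in, assuming an installed openai 2.8.0 (names taken verbatim from the list above):

```python
# Newly exported Responses types from this release; see the api.md diff above.
from openai.types.responses import (
    ApplyPatchTool,
    FunctionShellTool,
    ResponseApplyPatchToolCall,
    ResponseFunctionShellToolCall,
    ToolChoiceApplyPatch,
    ToolChoiceShell,
)

# These are typed models/aliases, useful for annotating handlers of Responses output.
def handle_shell_call(item: ResponseFunctionShellToolCall) -> None:
    print(item.type)  # discriminator field, per the SDK's tagged-union convention
```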
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -1,6 +1,6 @@
[project]
name = "openai"
version = "2.7.2"
version = "2.8.0"
description = "The official Python library for the openai API"
dynamic = ["readme"]
license = "Apache-2.0"
41 changes: 29 additions & 12 deletions src/openai/_models.py
@@ -282,32 +282,41 @@ def model_dump(
mode: Literal["json", "python"] | str = "python",
include: IncEx | None = None,
exclude: IncEx | None = None,
+context: Any | None = None,
by_alias: bool | None = None,
exclude_unset: bool = False,
exclude_defaults: bool = False,
exclude_none: bool = False,
+exclude_computed_fields: bool = False,
round_trip: bool = False,
warnings: bool | Literal["none", "warn", "error"] = True,
-context: dict[str, Any] | None = None,
-serialize_as_any: bool = False,
fallback: Callable[[Any], Any] | None = None,
+serialize_as_any: bool = False,
) -> dict[str, Any]:
"""Usage docs: https://docs.pydantic.dev/2.4/concepts/serialization/#modelmodel_dump
Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.
Args:
mode: The mode in which `to_python` should run.
-If mode is 'json', the dictionary will only contain JSON serializable types.
-If mode is 'python', the dictionary may contain any Python objects.
-include: A list of fields to include in the output.
-exclude: A list of fields to exclude from the output.
+If mode is 'json', the output will only contain JSON serializable types.
+If mode is 'python', the output may contain non-JSON-serializable Python objects.
+include: A set of fields to include in the output.
+exclude: A set of fields to exclude from the output.
+context: Additional context to pass to the serializer.
by_alias: Whether to use the field's alias in the dictionary key if defined.
-exclude_unset: Whether to exclude fields that are unset or None from the output.
-exclude_defaults: Whether to exclude fields that are set to their default value from the output.
-exclude_none: Whether to exclude fields that have a value of `None` from the output.
-round_trip: Whether to enable serialization and deserialization round-trip support.
-warnings: Whether to log warnings when invalid fields are encountered.
+exclude_unset: Whether to exclude fields that have not been explicitly set.
+exclude_defaults: Whether to exclude fields that are set to their default value.
+exclude_none: Whether to exclude fields that have a value of `None`.
+exclude_computed_fields: Whether to exclude computed fields.
+While this can be useful for round-tripping, it is usually recommended to use the dedicated
+`round_trip` parameter instead.
+round_trip: If True, dumped values should be valid as input for non-idempotent types such as Json[T].
+warnings: How to handle serialization errors. False/"none" ignores them, True/"warn" logs errors,
+"error" raises a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError].
+fallback: A function to call when an unknown value is encountered. If not provided,
+a [`PydanticSerializationError`][pydantic_core.PydanticSerializationError] error is raised.
serialize_as_any: Whether to serialize fields with duck-typing serialization behavior.
Returns:
A dictionary representation of the model.
@@ -324,6 +333,8 @@ def model_dump(
raise ValueError("serialize_as_any is only supported in Pydantic v2")
if fallback is not None:
raise ValueError("fallback is only supported in Pydantic v2")
+if exclude_computed_fields != False:
+    raise ValueError("exclude_computed_fields is only supported in Pydantic v2")
dumped = super().dict( # pyright: ignore[reportDeprecated]
include=include,
exclude=exclude,
Expand All @@ -340,15 +351,17 @@ def model_dump_json(
self,
*,
indent: int | None = None,
+ensure_ascii: bool = False,
include: IncEx | None = None,
exclude: IncEx | None = None,
+context: Any | None = None,
by_alias: bool | None = None,
exclude_unset: bool = False,
exclude_defaults: bool = False,
exclude_none: bool = False,
+exclude_computed_fields: bool = False,
round_trip: bool = False,
warnings: bool | Literal["none", "warn", "error"] = True,
-context: dict[str, Any] | None = None,
fallback: Callable[[Any], Any] | None = None,
serialize_as_any: bool = False,
) -> str:
@@ -380,6 +393,10 @@ def model_dump_json(
raise ValueError("serialize_as_any is only supported in Pydantic v2")
if fallback is not None:
raise ValueError("fallback is only supported in Pydantic v2")
+if ensure_ascii != False:
+    raise ValueError("ensure_ascii is only supported in Pydantic v2")
+if exclude_computed_fields != False:
+    raise ValueError("exclude_computed_fields is only supported in Pydantic v2")
return super().json( # type: ignore[reportDeprecated]
indent=indent,
include=include,
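On Pydantic v1 these two methods are shimmed onto the deprecated `.dict()`/`.json()`, so the new v2-only keywords are accepted by the signature but rejected at runtime, as the added `raise` branches show. A minimal sketch of that behavior, assuming a trivial model built on the SDK's `BaseModel`:

```python
# A minimal sketch of the compat guard above, assuming Pydantic v1 is installed.
from openai import BaseModel  # the SDK's compat BaseModel patched in this diff

class Job(BaseModel):
    id: str
    status: str

job = Job(id="job_123", status="queued")  # hypothetical values

# Under Pydantic v1 the shim rejects the new keyword at runtime instead of
# silently ignoring it; under Pydantic v2 the call goes straight to Pydantic
# (recent v2 releases support the flag natively).
try:
    job.model_dump(exclude_computed_fields=True)
except ValueError as exc:
    print(exc)  # -> "exclude_computed_fields is only supported in Pydantic v2"
```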
2 changes: 1 addition & 1 deletion src/openai/_version.py
@@ -1,4 +1,4 @@
# File generated from our OpenAPI spec by Stainless. See CONTRIBUTING.md for details.

__title__ = "openai"
__version__ = "2.7.2" # x-release-please-version
__version__ = "2.8.0" # x-release-please-version
4 changes: 4 additions & 0 deletions src/openai/lib/_parsing/_responses.py
@@ -108,6 +108,10 @@ def parse_response(
or output.type == "image_generation_call"
or output.type == "code_interpreter_call"
or output.type == "local_shell_call"
or output.type == "shell_call"
or output.type == "shell_call_output"
or output.type == "apply_patch_call"
or output.type == "apply_patch_call_output"
or output.type == "mcp_list_tools"
or output.type == "exec"
or output.type == "custom_tool_call"
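With these branches, `parse_response` passes shell and apply-patch items through unparsed, alongside the other tool-call item types. A hedged consumer-side sketch that checks the same discriminators when collecting only message text (the `message`/`output_text` handling assumes the existing Responses output shape):

```python
# Item types that carry no message text; strings match the checks added above.
PASS_THROUGH_TYPES = {
    "shell_call",
    "shell_call_output",
    "apply_patch_call",
    "apply_patch_call_output",
}

def collect_output_text(outputs) -> list[str]:
    """Gather assistant text, skipping tool-call items that have none."""
    texts: list[str] = []
    for output in outputs:
        if output.type in PASS_THROUGH_TYPES:
            continue
        if output.type == "message":
            texts.extend(part.text for part in output.content if part.type == "output_text")
    return texts
```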
22 changes: 14 additions & 8 deletions src/openai/resources/batches.py
@@ -46,7 +46,9 @@ def create(
self,
*,
completion_window: Literal["24h"],
endpoint: Literal["/v1/responses", "/v1/chat/completions", "/v1/embeddings", "/v1/completions"],
endpoint: Literal[
"/v1/responses", "/v1/chat/completions", "/v1/embeddings", "/v1/completions", "/v1/moderations"
],
input_file_id: str,
metadata: Optional[Metadata] | Omit = omit,
output_expires_after: batch_create_params.OutputExpiresAfter | Omit = omit,
@@ -65,9 +67,10 @@
is supported.
endpoint: The endpoint to be used for all requests in the batch. Currently
-`/v1/responses`, `/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions`
-are supported. Note that `/v1/embeddings` batches are also restricted to a
-maximum of 50,000 embedding inputs across all requests in the batch.
+`/v1/responses`, `/v1/chat/completions`, `/v1/embeddings`, `/v1/completions`,
+and `/v1/moderations` are supported. Note that `/v1/embeddings` batches are also
+restricted to a maximum of 50,000 embedding inputs across all requests in the
+batch.
input_file_id: The ID of an uploaded file that contains requests for the new batch.
@@ -261,7 +264,9 @@ async def create(
self,
*,
completion_window: Literal["24h"],
endpoint: Literal["/v1/responses", "/v1/chat/completions", "/v1/embeddings", "/v1/completions"],
endpoint: Literal[
"/v1/responses", "/v1/chat/completions", "/v1/embeddings", "/v1/completions", "/v1/moderations"
],
input_file_id: str,
metadata: Optional[Metadata] | Omit = omit,
output_expires_after: batch_create_params.OutputExpiresAfter | Omit = omit,
@@ -280,9 +285,10 @@
is supported.
endpoint: The endpoint to be used for all requests in the batch. Currently
-`/v1/responses`, `/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions`
-are supported. Note that `/v1/embeddings` batches are also restricted to a
-maximum of 50,000 embedding inputs across all requests in the batch.
+`/v1/responses`, `/v1/chat/completions`, `/v1/embeddings`, `/v1/completions`,
+and `/v1/moderations` are supported. Note that `/v1/embeddings` batches are also
+restricted to a maximum of 50,000 embedding inputs across all requests in the
+batch.
input_file_id: The ID of an uploaded file that contains requests for the new batch.
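With the widened `Literal`, a moderation batch is created like any other. A minimal sketch, assuming `file-abc123` is a previously uploaded JSONL file of `/v1/moderations` requests:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

batch = client.batches.create(
    completion_window="24h",      # currently the only supported window
    endpoint="/v1/moderations",   # newly accepted by this release
    input_file_id="file-abc123",  # hypothetical ID of the uploaded request file
)
print(batch.id, batch.status)
```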
58 changes: 37 additions & 21 deletions src/openai/resources/beta/assistants.py
@@ -98,12 +98,16 @@ def create(
reasoning_effort: Constrains effort on reasoning for
[reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
-effort can result in faster responses and fewer tokens used on reasoning in a
-response.
+supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
+reasoning effort can result in faster responses and fewer tokens used on
+reasoning in a response.
-Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
-effort.
+- `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
+reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
+calls are supported for all reasoning values in gpt-5.1.
+- All models before `gpt-5.1` default to `medium` reasoning effort, and do not
+support `none`.
+- The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
response_format: Specifies the format that the model must output. Compatible with
[GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
@@ -308,12 +312,16 @@ def update(
reasoning_effort: Constrains effort on reasoning for
[reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
-effort can result in faster responses and fewer tokens used on reasoning in a
-response.
+supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
+reasoning effort can result in faster responses and fewer tokens used on
+reasoning in a response.
-Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
-effort.
+- `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
+reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
+calls are supported for all reasoning values in gpt-5.1.
+- All models before `gpt-5.1` default to `medium` reasoning effort, and do not
+support `none`.
+- The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
response_format: Specifies the format that the model must output. Compatible with
[GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
@@ -557,12 +565,16 @@ async def create(
reasoning_effort: Constrains effort on reasoning for
[reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
-effort can result in faster responses and fewer tokens used on reasoning in a
-response.
+supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
+reasoning effort can result in faster responses and fewer tokens used on
+reasoning in a response.
-Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
-effort.
+- `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
+reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
+calls are supported for all reasoning values in gpt-5.1.
+- All models before `gpt-5.1` default to `medium` reasoning effort, and do not
+support `none`.
+- The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
response_format: Specifies the format that the model must output. Compatible with
[GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
@@ -767,12 +779,16 @@ async def update(
reasoning_effort: Constrains effort on reasoning for
[reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-supported values are `minimal`, `low`, `medium`, and `high`. Reducing reasoning
-effort can result in faster responses and fewer tokens used on reasoning in a
-response.
-Note: The `gpt-5-pro` model defaults to (and only supports) `high` reasoning
-effort.
+supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
+reasoning effort can result in faster responses and fewer tokens used on
+reasoning in a response.
+- `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
+reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
+calls are supported for all reasoning values in gpt-5.1.
+- All models before `gpt-5.1` default to `medium` reasoning effort, and do not
+support `none`.
+- The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
response_format: Specifies the format that the model must output. Compatible with
[GPT-4o](https://platform.openai.com/docs/models#gpt-4o),
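A minimal sketch of the newly documented effort level, assuming access to `gpt-5.1` (the instructions string is illustrative):

```python
from openai import OpenAI

client = OpenAI()

assistant = client.beta.assistants.create(
    model="gpt-5.1",
    reasoning_effort="none",  # gpt-5.1's default; models before gpt-5.1 reject `none`
    instructions="Answer concisely.",
)
print(assistant.id)
```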