Commit 22f646e

feat(api): gpt-5.1-codex-max and responses/compact

Parent: c2a3cd5

65 files changed: +1029 additions, -248 deletions


.stats.yml (4 additions, 4 deletions)

@@ -1,4 +1,4 @@
-configured_endpoints: 136
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-a7e92d12ebe89ca019a7ac5b29759064eefa2c38fe08d03516f2620e66abb32b.yml
-openapi_spec_hash: acbc703b2739447abc6312b2d753631c
-config_hash: b876221dfb213df9f0a999e75d38a65e
+configured_endpoints: 137
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-fe8a79e6fd407e6c9afec60971f03076b65f711ccd6ea16457933b0e24fb1f6d.yml
+openapi_spec_hash: 38c0a73f4e08843732c5f8002a809104
+config_hash: 2c350086d87a4b4532077363087840e7

api.md (5 additions, 0 deletions)

@@ -733,6 +733,7 @@ Types:
 ```python
 from openai.types.responses import (
     ApplyPatchTool,
+    CompactedResponse,
     ComputerTool,
     CustomTool,
     EasyInputMessage,
@@ -752,6 +753,8 @@ from openai.types.responses import (
     ResponseCodeInterpreterCallInProgressEvent,
     ResponseCodeInterpreterCallInterpretingEvent,
     ResponseCodeInterpreterToolCall,
+    ResponseCompactionItem,
+    ResponseCompactionItemParam,
     ResponseCompletedEvent,
     ResponseComputerToolCall,
     ResponseComputerToolCallOutputItem,
@@ -861,6 +864,7 @@ Methods:
 - <code title="get /responses/{response_id}">client.responses.<a href="./src/openai/resources/responses/responses.py">retrieve</a>(response_id, \*\*<a href="src/openai/types/responses/response_retrieve_params.py">params</a>) -> <a href="./src/openai/types/responses/response.py">Response</a></code>
 - <code title="delete /responses/{response_id}">client.responses.<a href="./src/openai/resources/responses/responses.py">delete</a>(response_id) -> None</code>
 - <code title="post /responses/{response_id}/cancel">client.responses.<a href="./src/openai/resources/responses/responses.py">cancel</a>(response_id) -> <a href="./src/openai/types/responses/response.py">Response</a></code>
+- <code title="post /responses/compact">client.responses.<a href="./src/openai/resources/responses/responses.py">compact</a>(\*\*<a href="src/openai/types/responses/response_compact_params.py">params</a>) -> <a href="./src/openai/types/responses/compacted_response.py">CompactedResponse</a></code>

 ## InputItems

@@ -914,6 +918,7 @@ from openai.types.realtime import (
     InputAudioBufferClearedEvent,
     InputAudioBufferCommitEvent,
     InputAudioBufferCommittedEvent,
+    InputAudioBufferDtmfEventReceivedEvent,
     InputAudioBufferSpeechStartedEvent,
     InputAudioBufferSpeechStoppedEvent,
     InputAudioBufferTimeoutTriggered,
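
For orientation, here is a hedged usage sketch of the new endpoint. The method name and return type come directly from the api.md entry above (`POST /responses/compact` -> `CompactedResponse`); the keyword argument shown is a guess at what `response_compact_params` might accept and is not confirmed by this commit.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative only: compact an existing Responses conversation.
# `previous_response_id` is a hypothetical parameter name; the real accepted
# fields are defined in src/openai/types/responses/response_compact_params.py,
# which is not shown in this excerpt.
compacted = client.responses.compact(
    previous_response_id="resp_123",  # hypothetical
)
print(type(compacted).__name__)  # expected: CompactedResponse
```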

src/openai/lib/_parsing/_responses.py (1 addition, 0 deletions)

@@ -103,6 +103,7 @@ def parse_response(
     or output.type == "file_search_call"
     or output.type == "web_search_call"
     or output.type == "reasoning"
+    or output.type == "compaction"
     or output.type == "mcp_call"
     or output.type == "mcp_approval_request"
     or output.type == "image_generation_call"

src/openai/resources/beta/assistants.py (16 additions, 12 deletions)

@@ -98,16 +98,17 @@ def create(
     reasoning_effort: Constrains effort on reasoning for
       [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-      supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-      reasoning effort can result in faster responses and fewer tokens used on
-      reasoning in a response.
+      supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+      Reducing reasoning effort can result in faster responses and fewer tokens used
+      on reasoning in a response.

     - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
       reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
       calls are supported for all reasoning values in gpt-5.1.
     - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
       support `none`.
     - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+    - `xhigh` is currently only supported for `gpt-5.1-codex-max`.

     response_format: Specifies the format that the model must output. Compatible with
       [GPT-4o](https://platform.openai.com/docs/models#gpt-4o),

The same `reasoning_effort` docstring update is repeated verbatim in the other overloads of this resource: `@@ -312,16 +313,17 @@ def update(`, `@@ -565,16 +567,17 @@ async def create(`, and `@@ -779,16 +782,17 @@ async def update(`.
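
A minimal usage sketch of the documented change, assuming `gpt-5.1-codex-max` is an accepted model name and that `reasoning_effort` on `client.beta.assistants.create` accepts the new `xhigh` value as described above; the assistant name and instructions are placeholders.

```python
from openai import OpenAI

client = OpenAI()

# Hedged illustration of the docstring above: request the new `xhigh`
# reasoning effort, which the docs say is only supported on gpt-5.1-codex-max.
assistant = client.beta.assistants.create(
    model="gpt-5.1-codex-max",
    name="Code reviewer",  # placeholder
    instructions="Review diffs and point out likely bugs.",  # placeholder
    reasoning_effort="xhigh",
)
print(assistant.id)
```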

src/openai/resources/beta/threads/runs/runs.py (24 additions, 18 deletions)

@@ -169,16 +169,17 @@ def create(
     reasoning_effort: Constrains effort on reasoning for
       [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
-      supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing
-      reasoning effort can result in faster responses and fewer tokens used on
-      reasoning in a response.
+      supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`.
+      Reducing reasoning effort can result in faster responses and fewer tokens used
+      on reasoning in a response.

     - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported
       reasoning values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool
       calls are supported for all reasoning values in gpt-5.1.
     - All models before `gpt-5.1` default to `medium` reasoning effort, and do not
       support `none`.
     - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+    - `xhigh` is currently only supported for `gpt-5.1-codex-max`.

     response_format: Specifies the format that the model must output. Compatible with
       [GPT-4o](https://platform.openai.com/docs/models#gpt-4o),

The same docstring update is repeated verbatim in the remaining `create` overloads of this file: `@@ -330,16 +331,17 @@ def create(`, `@@ -487,16 +489,17 @@ def create(`, `@@ -1620,16 +1623,17 @@ async def create(`, `@@ -1781,16 +1785,17 @@ async def create(`, and `@@ -1938,16 +1943,17 @@ async def create(`.
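
Similarly, a brief sketch of passing the new effort level when creating a run, under the same assumptions; the thread and assistant IDs are placeholders.

```python
from openai import OpenAI

client = OpenAI()

# Hedged illustration: `model` overrides the assistant's model for this run,
# and `xhigh` is documented above as supported only for gpt-5.1-codex-max.
run = client.beta.threads.runs.create(
    thread_id="thread_abc123",   # placeholder
    assistant_id="asst_abc123",  # placeholder
    model="gpt-5.1-codex-max",
    reasoning_effort="xhigh",
)
print(run.status)
```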
