2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
 {
-  ".": "4.8.0"
+  ".": "4.9.0"
 }
8 changes: 4 additions & 4 deletions .stats.yml
@@ -1,4 +1,4 @@
-configured_endpoints: 135
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-ca24bc4d8125b5153514ce643c4e3220f25971b7d67ca384d56d493c72c0d977.yml
-openapi_spec_hash: c6f048c7b3d29f4de48fde0e845ba33f
-config_hash: b876221dfb213df9f0a999e75d38a65e
+configured_endpoints: 136
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-fe8a79e6fd407e6c9afec60971f03076b65f711ccd6ea16457933b0e24fb1f6d.yml
+openapi_spec_hash: 38c0a73f4e08843732c5f8002a809104
+config_hash: 2c350086d87a4b4532077363087840e7
19 changes: 19 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,24 @@
 # Changelog
 
+## 4.9.0 (2025-12-04)
+
+Full Changelog: [v4.8.0...v4.9.0](https://github.com/openai/openai-java/compare/v4.8.0...v4.9.0)
+
+### Features
+
+* **api:** gpt-5.1-codex-max and responses/compact ([651c44f](https://github.com/openai/openai-java/commit/651c44f570ba07784d715a382d94b255fd3afa60))
+
+
+### Bug Fixes
+
+* **api:** align types of input items / output items for typescript ([9202c69](https://github.com/openai/openai-java/commit/9202c695d939def7c9598e9ee75999b8ebd87e32))
+* **client:** cancel okhttp call when future cancelled ([c665e21](https://github.com/openai/openai-java/commit/c665e21c83123931baed5b21b9bbaa96a4d77495))
+
+
+### Documentation
+
+* remove `$` for better copy-pasteabality ([66f7a4b](https://github.com/openai/openai-java/commit/66f7a4b3d2b88fc3e80c1552d0a0df86cd45c1ff))
+
 ## 4.8.0 (2025-11-13)
 
 Full Changelog: [v4.7.2...v4.8.0](https://github.com/openai/openai-java/compare/v4.7.2...v4.8.0)
18 changes: 9 additions & 9 deletions README.md
@@ -2,16 +2,16 @@
 
 <!-- x-release-please-start-version -->
 
-[![Maven Central](https://img.shields.io/maven-central/v/com.openai/openai-java)](https://central.sonatype.com/artifact/com.openai/openai-java/4.8.0)
-[![javadoc](https://javadoc.io/badge2/com.openai/openai-java/4.8.0/javadoc.svg)](https://javadoc.io/doc/com.openai/openai-java/4.8.0)
+[![Maven Central](https://img.shields.io/maven-central/v/com.openai/openai-java)](https://central.sonatype.com/artifact/com.openai/openai-java/4.9.0)
+[![javadoc](https://javadoc.io/badge2/com.openai/openai-java/4.9.0/javadoc.svg)](https://javadoc.io/doc/com.openai/openai-java/4.9.0)
 
 <!-- x-release-please-end -->
 
 The OpenAI Java SDK provides convenient access to the [OpenAI REST API](https://platform.openai.com/docs) from applications written in Java.
 
 <!-- x-release-please-start-version -->
 
-The REST API documentation can be found on [platform.openai.com](https://platform.openai.com/docs). Javadocs are available on [javadoc.io](https://javadoc.io/doc/com.openai/openai-java/4.8.0).
+The REST API documentation can be found on [platform.openai.com](https://platform.openai.com/docs). Javadocs are available on [javadoc.io](https://javadoc.io/doc/com.openai/openai-java/4.9.0).
 
 <!-- x-release-please-end -->
 
@@ -24,7 +24,7 @@ The REST API documentation can be found on [platform.openai.com](https://platfor
 ### Gradle
 
 ```kotlin
-implementation("com.openai:openai-java:4.8.0")
+implementation("com.openai:openai-java:4.9.0")
 ```
 
 ### Maven
@@ -33,7 +33,7 @@ implementation("com.openai:openai-java:4.8.0")
 <dependency>
     <groupId>com.openai</groupId>
     <artifactId>openai-java</artifactId>
-    <version>4.8.0</version>
+    <version>4.9.0</version>
 </dependency>
 ```
 
@@ -1310,13 +1310,13 @@ The SDK uses the standard [OkHttp logging interceptor](https://github.com/square
 Enable logging by setting the `OPENAI_LOG` environment variable to `info`:
 
 ```sh
-$ export OPENAI_LOG=info
+export OPENAI_LOG=info
 ```
 
 Or to `debug` for more verbose logging:
 
 ```sh
-$ export OPENAI_LOG=debug
+export OPENAI_LOG=debug
 ```
 
 ## ProGuard and R8
@@ -1342,7 +1342,7 @@ If you're using Spring Boot, then you can use the SDK's [Spring Boot starter](ht
 #### Gradle
 
 ```kotlin
-implementation("com.openai:openai-java-spring-boot-starter:4.8.0")
+implementation("com.openai:openai-java-spring-boot-starter:4.9.0")
 ```
 
 #### Maven
@@ -1351,7 +1351,7 @@ implementation("com.openai:openai-java-spring-boot-starter:4.8.0")
 <dependency>
     <groupId>com.openai</groupId>
     <artifactId>openai-java-spring-boot-starter</artifactId>
-    <version>4.8.0</version>
+    <version>4.9.0</version>
 </dependency>
 ```
 
2 changes: 1 addition & 1 deletion build.gradle.kts
@@ -8,7 +8,7 @@ repositories {
 
 allprojects {
     group = "com.openai"
-    version = "4.8.0" // x-release-please-version
+    version = "4.9.0" // x-release-please-version
 }
 
 subprojects {
3 changes: 3 additions & 0 deletions gradle.properties
@@ -2,6 +2,9 @@ org.gradle.caching=true
 org.gradle.configuration-cache=true
 org.gradle.parallel=true
 org.gradle.daemon=false
+kotlin.daemon.enabled=false
+kotlin.compiler.execution.strategy=in-process
+kotlin.incremental=false
 # These options improve our compilation and test performance. They are inherited by the Kotlin daemon.
 org.gradle.jvmargs=\
   -Xms2g \
1 change: 1 addition & 0 deletions openai-java-client-okhttp/build.gradle.kts
@@ -11,4 +11,5 @@ dependencies {
 
     testImplementation(kotlin("test"))
     testImplementation("org.assertj:assertj-core:3.25.3")
+    testImplementation("com.github.tomakehurst:wiremock-jre8:2.35.2")
 }
@@ -13,6 +13,7 @@ import java.io.IOException
 import java.io.InputStream
 import java.net.Proxy
 import java.time.Duration
+import java.util.concurrent.CancellationException
 import java.util.concurrent.CompletableFuture
 import javax.net.ssl.HostnameVerifier
 import javax.net.ssl.SSLSocketFactory
@@ -29,8 +30,8 @@ import okhttp3.Response
 import okhttp3.logging.HttpLoggingInterceptor
 import okio.BufferedSink
 
-class OkHttpClient private constructor(private val okHttpClient: okhttp3.OkHttpClient) :
-    HttpClient {
+class OkHttpClient
+private constructor(@JvmSynthetic internal val okHttpClient: okhttp3.OkHttpClient) : HttpClient {
 
     override fun execute(request: HttpRequest, requestOptions: RequestOptions): HttpResponse {
         val call = newCall(request, requestOptions)
@@ -50,20 +51,25 @@ class OkHttpClient private constructor(private val okHttpClient: okhttp3.OkHttpC
     ): CompletableFuture<HttpResponse> {
         val future = CompletableFuture<HttpResponse>()
 
-        request.body?.run { future.whenComplete { _, _ -> close() } }
-
-        newCall(request, requestOptions)
-            .enqueue(
-                object : Callback {
-                    override fun onResponse(call: Call, response: Response) {
-                        future.complete(response.toResponse())
-                    }
-
-                    override fun onFailure(call: Call, e: IOException) {
-                        future.completeExceptionally(OpenAIIoException("Request failed", e))
-                    }
-                }
-            )
+        val call = newCall(request, requestOptions)
+        call.enqueue(
+            object : Callback {
+                override fun onResponse(call: Call, response: Response) {
+                    future.complete(response.toResponse())
+                }
+
+                override fun onFailure(call: Call, e: IOException) {
+                    future.completeExceptionally(OpenAIIoException("Request failed", e))
+                }
+            }
+        )
+
+        future.whenComplete { _, e ->
+            if (e is CancellationException) {
+                call.cancel()
+            }
+            request.body?.close()
+        }
 
         return future
     }
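The reworked `executeAsync` registers a `whenComplete` callback so that cancelling the returned future also cancels the in-flight OkHttp call (and closes the request body either way). The propagation pattern can be sketched in plain Java; the `Call` class here is a hypothetical stand-in for `okhttp3.Call`, so the sketch runs without any OkHttp dependency:

```java
import java.util.concurrent.CancellationException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

public class CancelPropagation {
    /** Hypothetical stand-in for an in-flight HTTP call (not okhttp3.Call). */
    static final class Call {
        private final AtomicBoolean cancelled = new AtomicBoolean(false);

        void cancel() {
            cancelled.set(true);
        }

        boolean isCanceled() {
            return cancelled.get();
        }
    }

    static CompletableFuture<String> executeAsync(Call call) {
        CompletableFuture<String> future = new CompletableFuture<>();
        // CompletableFuture.cancel(...) completes the future with a
        // CancellationException; observing it here lets us propagate the
        // cancellation down to the underlying call.
        future.whenComplete((response, e) -> {
            if (e instanceof CancellationException) {
                call.cancel();
            }
        });
        return future;
    }

    public static void main(String[] args) {
        Call call = new Call();
        CompletableFuture<String> future = executeAsync(call);
        future.cancel(false);
        System.out.println(call.isCanceled()); // prints "true"
    }
}
```

Before this fix, cancelling the future only closed the request body; the OkHttp call kept running until it completed on its own.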
@@ -0,0 +1,44 @@
+package com.openai.client.okhttp
+
+import com.github.tomakehurst.wiremock.client.WireMock.*
+import com.github.tomakehurst.wiremock.junit5.WireMockRuntimeInfo
+import com.github.tomakehurst.wiremock.junit5.WireMockTest
+import com.openai.core.http.HttpMethod
+import com.openai.core.http.HttpRequest
+import org.assertj.core.api.Assertions.assertThat
+import org.junit.jupiter.api.BeforeEach
+import org.junit.jupiter.api.Test
+import org.junit.jupiter.api.parallel.ResourceLock
+
+@WireMockTest
+@ResourceLock("https://github.com/wiremock/wiremock/issues/169")
+internal class OkHttpClientTest {
+
+    private lateinit var baseUrl: String
+    private lateinit var httpClient: OkHttpClient
+
+    @BeforeEach
+    fun beforeEach(wmRuntimeInfo: WireMockRuntimeInfo) {
+        baseUrl = wmRuntimeInfo.httpBaseUrl
+        httpClient = OkHttpClient.builder().build()
+    }
+
+    @Test
+    fun executeAsync_whenFutureCancelled_cancelsUnderlyingCall() {
+        stubFor(post(urlPathEqualTo("/something")).willReturn(ok()))
+        val responseFuture =
+            httpClient.executeAsync(
+                HttpRequest.builder()
+                    .method(HttpMethod.POST)
+                    .baseUrl(baseUrl)
+                    .addPathSegment("something")
+                    .build()
+            )
+        val call = httpClient.okHttpClient.dispatcher.runningCalls().single()
+
+        responseFuture.cancel(false)
+
+        // Should have cancelled the underlying call
+        assertThat(call.isCanceled()).isTrue()
+    }
+}
@@ -258,6 +258,8 @@ private constructor(
 
         @JvmField val GPT_5_PRO_2025_10_06 = of("gpt-5-pro-2025-10-06")
 
+        @JvmField val GPT_5_1_CODEX_MAX = of("gpt-5.1-codex-max")
+
         @JvmStatic fun of(value: String) = ResponsesOnlyModel(JsonField.of(value))
     }
 
@@ -276,6 +278,7 @@ private constructor(
         GPT_5_CODEX,
         GPT_5_PRO,
         GPT_5_PRO_2025_10_06,
+        GPT_5_1_CODEX_MAX,
     }
 
     /**
@@ -301,6 +304,7 @@ private constructor(
         GPT_5_CODEX,
         GPT_5_PRO,
         GPT_5_PRO_2025_10_06,
+        GPT_5_1_CODEX_MAX,
        /**
         * An enum member indicating that [ResponsesOnlyModel] was instantiated with an unknown
         * value.
@@ -330,6 +334,7 @@ private constructor(
             GPT_5_CODEX -> Value.GPT_5_CODEX
             GPT_5_PRO -> Value.GPT_5_PRO
             GPT_5_PRO_2025_10_06 -> Value.GPT_5_PRO_2025_10_06
+            GPT_5_1_CODEX_MAX -> Value.GPT_5_1_CODEX_MAX
             else -> Value._UNKNOWN
         }
 
@@ -357,6 +362,7 @@ private constructor(
             GPT_5_CODEX -> Known.GPT_5_CODEX
             GPT_5_PRO -> Known.GPT_5_PRO
             GPT_5_PRO_2025_10_06 -> Known.GPT_5_PRO_2025_10_06
+            GPT_5_1_CODEX_MAX -> Known.GPT_5_1_CODEX_MAX
             else -> throw OpenAIInvalidDataException("Unknown ResponsesOnlyModel: $value")
         }
}

Expand Down
Original file line number Diff line number Diff line change
Expand Up @@ -45,14 +45,15 @@ private constructor(
/**
* Constrains effort on reasoning for
* [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently supported
* values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing reasoning effort can
* result in faster responses and fewer tokens used on reasoning in a response.
* values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`. Reducing reasoning effort
* can result in faster responses and fewer tokens used on reasoning in a response.
* - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported reasoning
* values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool calls are supported for
* all reasoning values in gpt-5.1.
* - All models before `gpt-5.1` default to `medium` reasoning effort, and do not support
* `none`.
* - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
* - `xhigh` is currently only supported for `gpt-5.1-codex-max`.
*
* @throws OpenAIInvalidDataException if the JSON field has an unexpected type (e.g. if the
* server responded with an unexpected value).
Expand Down Expand Up @@ -144,14 +145,16 @@ private constructor(
/**
* Constrains effort on reasoning for
* [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently
* supported values are `none`, `minimal`, `low`, `medium`, and `high`. Reducing reasoning
* effort can result in faster responses and fewer tokens used on reasoning in a response.
* supported values are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`. Reducing
* reasoning effort can result in faster responses and fewer tokens used on reasoning in a
* response.
* - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported reasoning
* values for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool calls are supported
* for all reasoning values in gpt-5.1.
* - All models before `gpt-5.1` default to `medium` reasoning effort, and do not support
* `none`.
* - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
* - `xhigh` is currently only supported for `gpt-5.1-codex-max`.
*/
fun effort(effort: ReasoningEffort?) = effort(JsonField.ofNullable(effort))

@@ -10,13 +10,14 @@ import com.openai.errors.OpenAIInvalidDataException
 /**
  * Constrains effort on reasoning for
  * [reasoning models](https://platform.openai.com/docs/guides/reasoning). Currently supported values
- * are `none`, `minimal`, `low`, `medium`, and `high`. Reducing reasoning effort can result in
- * faster responses and fewer tokens used on reasoning in a response.
+ * are `none`, `minimal`, `low`, `medium`, `high`, and `xhigh`. Reducing reasoning effort can result
+ * in faster responses and fewer tokens used on reasoning in a response.
  * - `gpt-5.1` defaults to `none`, which does not perform reasoning. The supported reasoning values
  *   for `gpt-5.1` are `none`, `low`, `medium`, and `high`. Tool calls are supported for all
  *   reasoning values in gpt-5.1.
  * - All models before `gpt-5.1` default to `medium` reasoning effort, and do not support `none`.
  * - The `gpt-5-pro` model defaults to (and only supports) `high` reasoning effort.
+ * - `xhigh` is currently only supported for `gpt-5.1-codex-max`.
  */
 class ReasoningEffort @JsonCreator private constructor(private val value: JsonField<String>) :
     Enum {
@@ -42,6 +43,8 @@ class ReasoningEffort @JsonCreator private constructor(private val value: JsonFi
 
         @JvmField val HIGH = of("high")
 
+        @JvmField val XHIGH = of("xhigh")
+
         @JvmStatic fun of(value: String) = ReasoningEffort(JsonField.of(value))
     }
 
@@ -52,6 +55,7 @@ class ReasoningEffort @JsonCreator private constructor(private val value: JsonFi
         LOW,
         MEDIUM,
         HIGH,
+        XHIGH,
     }
 
     /**
@@ -69,6 +73,7 @@ class ReasoningEffort @JsonCreator private constructor(private val value: JsonFi
         LOW,
         MEDIUM,
         HIGH,
+        XHIGH,
         /**
          * An enum member indicating that [ReasoningEffort] was instantiated with an unknown value.
          */
@@ -89,6 +94,7 @@ class ReasoningEffort @JsonCreator private constructor(private val value: JsonFi
             LOW -> Value.LOW
             MEDIUM -> Value.MEDIUM
             HIGH -> Value.HIGH
+            XHIGH -> Value.XHIGH
             else -> Value._UNKNOWN
         }
 
@@ -107,6 +113,7 @@ class ReasoningEffort @JsonCreator private constructor(private val value: JsonFi
             LOW -> Known.LOW
             MEDIUM -> Known.MEDIUM
             HIGH -> Known.HIGH
+            XHIGH -> Known.XHIGH
             else -> throw OpenAIInvalidDataException("Unknown ReasoningEffort: $value")
         }

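Both enum diffs above follow the SDK's forward-compatibility convention: a string-backed class whose `value()` maps unrecognized strings to an `_UNKNOWN` member while `known()` throws, so a client built before `xhigh` existed still deserializes newer API responses. A minimal Java sketch of that convention (the `Effort` class here is hypothetical, not the SDK type):

```java
/** Hypothetical string-backed "open enum" that tolerates values newer than this client. */
final class Effort {
    enum Value { NONE, MINIMAL, LOW, MEDIUM, HIGH, XHIGH, _UNKNOWN }

    private final String raw;

    private Effort(String raw) {
        this.raw = raw;
    }

    static Effort of(String raw) {
        return new Effort(raw);
    }

    /** Never throws: unrecognized server values map to _UNKNOWN. */
    Value value() {
        switch (raw) {
            case "none": return Value.NONE;
            case "minimal": return Value.MINIMAL;
            case "low": return Value.LOW;
            case "medium": return Value.MEDIUM;
            case "high": return Value.HIGH;
            case "xhigh": return Value.XHIGH;
            default: return Value._UNKNOWN;
        }
    }

    /** Throws on unrecognized values, for callers that must handle every case. */
    Value known() {
        Value v = value();
        if (v == Value._UNKNOWN) {
            throw new IllegalArgumentException("Unknown Effort: " + raw);
        }
        return v;
    }
}
```

With this shape, an older client receiving a value added later (say, a future effort level) gets `_UNKNOWN` from `value()` instead of a deserialization failure, and only opts into strictness by calling `known()`.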