Merged
2 changes: 1 addition & 1 deletion .release-please-manifest.json
Original file line number Diff line number Diff line change
@@ -1,3 +1,3 @@
{
".": "4.6.1"
".": "4.7.0"
}
6 changes: 3 additions & 3 deletions .stats.yml
@@ -1,4 +1,4 @@
configured_endpoints: 135
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-a4bb37d110a22c2888f53e21281434686a6fffa3e672a40f2503ad9bd2759063.yml
openapi_spec_hash: 2d59eefb494dff4eea8c3d008c7e2070
config_hash: 50ee3382a63c021a9f821a935950e926
openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/openai%2Fopenai-3c5d1593d7c6f2b38a7d78d7906041465ee9d6e9022f0651e1da194654488108.yml
openapi_spec_hash: 0a4d8ad2469823ce24a3fd94f23f1c2b
config_hash: 032995825500a503a76da119f5354905
14 changes: 14 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,19 @@
# Changelog

## 4.7.0 (2025-11-04)

Full Changelog: [v4.6.1...v4.7.0](https://github.com/openai/openai-java/compare/v4.6.1...v4.7.0)

### Features

* **api:** Realtime API token_limits, Hybrid searching ranking options ([bd9bcfd](https://github.com/openai/openai-java/commit/bd9bcfdd560cfc8df2a9336d162a0ee1f6604b84))
* **api:** remove InputAudio from ResponseInputContent ([630fecf](https://github.com/openai/openai-java/commit/630fecf8f0e04ce82ac0f0df9b5d60df4edd0655))


### Bug Fixes

* **api:** docs updates ([3e970ec](https://github.com/openai/openai-java/commit/3e970ec8c9b3895b087f5722a0bfae23ca9e4e2c))

## 4.6.1 (2025-10-20)

Full Changelog: [v4.6.0...v4.6.1](https://github.com/openai/openai-java/compare/v4.6.0...v4.6.1)
14 changes: 7 additions & 7 deletions README.md
@@ -2,16 +2,16 @@

<!-- x-release-please-start-version -->

[![Maven Central](https://img.shields.io/maven-central/v/com.openai/openai-java)](https://central.sonatype.com/artifact/com.openai/openai-java/4.6.1)
[![javadoc](https://javadoc.io/badge2/com.openai/openai-java/4.6.1/javadoc.svg)](https://javadoc.io/doc/com.openai/openai-java/4.6.1)
[![Maven Central](https://img.shields.io/maven-central/v/com.openai/openai-java)](https://central.sonatype.com/artifact/com.openai/openai-java/4.7.0)
[![javadoc](https://javadoc.io/badge2/com.openai/openai-java/4.7.0/javadoc.svg)](https://javadoc.io/doc/com.openai/openai-java/4.7.0)

<!-- x-release-please-end -->

The OpenAI Java SDK provides convenient access to the [OpenAI REST API](https://platform.openai.com/docs) from applications written in Java.

<!-- x-release-please-start-version -->

The REST API documentation can be found on [platform.openai.com](https://platform.openai.com/docs). Javadocs are available on [javadoc.io](https://javadoc.io/doc/com.openai/openai-java/4.6.1).
The REST API documentation can be found on [platform.openai.com](https://platform.openai.com/docs). Javadocs are available on [javadoc.io](https://javadoc.io/doc/com.openai/openai-java/4.7.0).

<!-- x-release-please-end -->

@@ -24,7 +24,7 @@ The REST API documentation can be found on [platform.openai.com](https://platfor
### Gradle

```kotlin
implementation("com.openai:openai-java:4.6.1")
implementation("com.openai:openai-java:4.7.0")
```

### Maven
@@ -33,7 +33,7 @@ implementation("com.openai:openai-java:4.6.1")
<dependency>
<groupId>com.openai</groupId>
<artifactId>openai-java</artifactId>
<version>4.6.1</version>
<version>4.7.0</version>
</dependency>
```

@@ -1342,7 +1342,7 @@ If you're using Spring Boot, then you can use the SDK's [Spring Boot starter](ht
#### Gradle

```kotlin
implementation("com.openai:openai-java-spring-boot-starter:4.6.1")
implementation("com.openai:openai-java-spring-boot-starter:4.7.0")
```

#### Maven
@@ -1351,7 +1351,7 @@ implementation("com.openai:openai-java-spring-boot-starter:4.6.1")
<dependency>
<groupId>com.openai</groupId>
<artifactId>openai-java-spring-boot-starter</artifactId>
<version>4.6.1</version>
<version>4.7.0</version>
</dependency>
```

2 changes: 1 addition & 1 deletion build.gradle.kts
@@ -8,7 +8,7 @@ repositories {

allprojects {
group = "com.openai"
version = "4.6.1" // x-release-please-version
version = "4.7.0" // x-release-please-version
}

subprojects {
Original file line number Diff line number Diff line change
@@ -38,16 +38,20 @@ private constructor(
private val _json: JsonValue? = null,
) {

/** Unconstrained free-form text. */
fun text(): Optional<JsonValue> = Optional.ofNullable(text)

/** A grammar defined by the user. */
fun grammar(): Optional<Grammar> = Optional.ofNullable(grammar)

fun isText(): Boolean = text != null

fun isGrammar(): Boolean = grammar != null

/** Unconstrained free-form text. */
fun asText(): JsonValue = text.getOrThrow("text")

/** A grammar defined by the user. */
fun asGrammar(): Grammar = grammar.getOrThrow("grammar")

fun _json(): Optional<JsonValue> = Optional.ofNullable(_json)
@@ -130,9 +134,11 @@ private constructor(

companion object {

/** Unconstrained free-form text. */
@JvmStatic
fun ofText() = CustomToolInputFormat(text = JsonValue.from(mapOf("type" to "text")))

/** A grammar defined by the user. */
@JvmStatic fun ofGrammar(grammar: Grammar) = CustomToolInputFormat(grammar = grammar)
}

@@ -142,8 +148,10 @@
*/
interface Visitor<out T> {

/** Unconstrained free-form text. */
fun visitText(text: JsonValue): T

/** A grammar defined by the user. */
fun visitGrammar(grammar: Grammar): T

/**
@@ -202,6 +210,7 @@ private constructor(
}
}

/** A grammar defined by the user. */
class Grammar
@JsonCreator(mode = JsonCreator.Mode.DISABLED)
private constructor(
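The newly documented `text`/`grammar` variants above follow the SDK's usual tagged-union shape: nullable variant fields, `of…` factories, `is…`/`as…` accessors, and a `Visitor`. A minimal, hypothetical stand-in (plain `String`s instead of the SDK's `JsonValue` and `Grammar` types; the class and method names here are illustrative, not the real API) sketches that shape:

```java
// Simplified sketch of the tagged-union + visitor pattern used by
// CustomToolInputFormat. Exactly one variant field is non-null.
class InputFormatSketch {
    private final String text;    // non-null for the "text" variant
    private final String grammar; // non-null for the "grammar" variant

    private InputFormatSketch(String text, String grammar) {
        this.text = text;
        this.grammar = grammar;
    }

    /** Unconstrained free-form text. */
    static InputFormatSketch ofText() {
        return new InputFormatSketch("text", null);
    }

    /** A grammar defined by the user. */
    static InputFormatSketch ofGrammar(String grammar) {
        return new InputFormatSketch(null, grammar);
    }

    boolean isText() { return text != null; }

    boolean isGrammar() { return grammar != null; }

    /** Mirrors the SDK's visitText/visitGrammar pair. */
    interface Visitor<T> {
        T visitText();
        T visitGrammar(String grammar);
    }

    /** Dispatches to the visitor method matching the active variant. */
    <T> T accept(Visitor<T> visitor) {
        return isText() ? visitor.visitText() : visitor.visitGrammar(grammar);
    }
}
```

The real class additionally carries an unknown-variant `_json` fallback so new server-side variants do not break deserialization.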
Original file line number Diff line number Diff line change
@@ -72,6 +72,8 @@ private constructor(
* A summary of the reasoning performed by the model. This can be useful for debugging and
* understanding the model's reasoning process. One of `auto`, `concise`, or `detailed`.
*
* `concise` is only supported for `computer-use-preview` models.
*
* @throws OpenAIInvalidDataException if the JSON field has an unexpected type (e.g. if the
* server responded with an unexpected value).
*/
@@ -187,6 +189,8 @@ private constructor(
/**
* A summary of the reasoning performed by the model. This can be useful for debugging and
* understanding the model's reasoning process. One of `auto`, `concise`, or `detailed`.
*
* `concise` is only supported for `computer-use-preview` models.
*/
fun summary(summary: Summary?) = summary(JsonField.ofNullable(summary))

@@ -406,6 +410,8 @@ private constructor(
/**
* A summary of the reasoning performed by the model. This can be useful for debugging and
* understanding the model's reasoning process. One of `auto`, `concise`, or `detailed`.
*
* `concise` is only supported for `computer-use-preview` models.
*/
class Summary @JsonCreator private constructor(private val value: JsonField<String>) : Enum {

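The note added in this hunk constrains the `concise` summary value to `computer-use-preview` models. A hedged sketch of a client-side guard for that rule (a hypothetical helper, not part of the SDK, which forwards whatever value you set and lets the server validate it):

```java
import java.util.Set;

// Hypothetical pre-flight check encoding the documented summary rules.
class SummaryGuard {
    private static final Set<String> ALLOWED = Set.of("auto", "concise", "detailed");

    /** Returns whether `summary` is a documented value usable with `model`. */
    static boolean isSupported(String summary, String model) {
        if (!ALLOWED.contains(summary)) {
            return false; // must be one of `auto`, `concise`, `detailed`
        }
        // Per the updated docs, `concise` is only supported for
        // `computer-use-preview` models.
        return !summary.equals("concise") || model.startsWith("computer-use-preview");
    }
}
```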
Original file line number Diff line number Diff line change
@@ -25,17 +25,14 @@ import kotlin.io.path.name
/**
* Upload a file that can be used across various endpoints. Individual files can be up to 512 MB,
* and the size of all files uploaded by one organization can be up to 1 TB.
*
* The Assistants API supports files up to 2 million tokens and of specific file types. See the
* [Assistants Tools guide](https://platform.openai.com/docs/assistants/tools) for details.
*
* The Fine-tuning API only supports `.jsonl` files. The input also has certain required formats for
* fine-tuning [chat](https://platform.openai.com/docs/api-reference/fine-tuning/chat-input) or
* [completions](https://platform.openai.com/docs/api-reference/fine-tuning/completions-input)
* models.
*
* The Batch API only supports `.jsonl` files up to 200 MB in size. The input also has a specific
* required [format](https://platform.openai.com/docs/api-reference/batch/request-input).
* - The Assistants API supports files up to 2 million tokens and of specific file types. See the
* [Assistants Tools guide](https://platform.openai.com/docs/assistants/tools) for details.
* - The Fine-tuning API only supports `.jsonl` files. The input also has certain required formats
* for fine-tuning [chat](https://platform.openai.com/docs/api-reference/fine-tuning/chat-input)
* or [completions](https://platform.openai.com/docs/api-reference/fine-tuning/completions-input)
* models.
* - The Batch API only supports `.jsonl` files up to 200 MB in size. The input also has a specific
* required [format](https://platform.openai.com/docs/api-reference/batch/request-input).
*
* Please [contact us](https://help.openai.com/) if you need to increase these storage limits.
*/
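The limits reflowed into a list above lend themselves to a client-side pre-flight check. This hypothetical sketch assumes the documented "MB" figures are binary megabytes (the spec does not say which); the server enforces the real limits regardless, so this only fails fast before uploading:

```java
// Hypothetical pre-upload checks mirroring the documented limits:
// 512 MB per file generally; 200 MB `.jsonl` files for the Batch API.
class UploadLimitCheck {
    static final long FILE_MAX_BYTES = 512L * 1024 * 1024;  // 512 MB (assumed binary)
    static final long BATCH_MAX_BYTES = 200L * 1024 * 1024; // 200 MB (assumed binary)

    /** General per-file limit for the Files endpoint. */
    static boolean fitsFileLimit(long sizeBytes) {
        return sizeBytes <= FILE_MAX_BYTES;
    }

    /** The Batch API only supports `.jsonl` files up to 200 MB. */
    static boolean acceptableForBatch(String filename, long sizeBytes) {
        return filename.endsWith(".jsonl") && sizeBytes <= BATCH_MAX_BYTES;
    }
}
```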
Original file line number Diff line number Diff line change
@@ -80,7 +80,9 @@ private constructor(
fun background(): Optional<Background> = body.background()

/**
* Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for `gpt-image-1`. Unsupported for `gpt-image-1-mini`. Supports `high` and `low`. Defaults to `low`.
* Control how much effort the model will exert to match the style and features, especially
* facial features, of input images. This parameter is only supported for `gpt-image-1`.
* Unsupported for `gpt-image-1-mini`. Supports `high` and `low`. Defaults to `low`.
*
* @throws OpenAIInvalidDataException if the JSON field has an unexpected type (e.g. if the
* server responded with an unexpected value).
@@ -429,7 +431,9 @@
}

/**
* Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for `gpt-image-1`. Unsupported for `gpt-image-1-mini`. Supports `high` and `low`. Defaults to `low`.
* Control how much effort the model will exert to match the style and features, especially
* facial features, of input images. This parameter is only supported for `gpt-image-1`.
* Unsupported for `gpt-image-1-mini`. Supports `high` and `low`. Defaults to `low`.
*/
fun inputFidelity(inputFidelity: InputFidelity?) = apply {
body.inputFidelity(inputFidelity)
@@ -903,7 +907,9 @@
fun background(): Optional<Background> = background.value.getOptional("background")

/**
* Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for `gpt-image-1`. Unsupported for `gpt-image-1-mini`. Supports `high` and `low`. Defaults to `low`.
* Control how much effort the model will exert to match the style and features, especially
* facial features, of input images. This parameter is only supported for `gpt-image-1`.
* Unsupported for `gpt-image-1-mini`. Supports `high` and `low`. Defaults to `low`.
*
* @throws OpenAIInvalidDataException if the JSON field has an unexpected type (e.g. if the
* server responded with an unexpected value).
@@ -1297,7 +1303,10 @@
}

/**
* Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for `gpt-image-1`. Unsupported for `gpt-image-1-mini`. Supports `high` and `low`. Defaults to `low`.
* Control how much effort the model will exert to match the style and features,
* especially facial features, of input images. This parameter is only supported for
* `gpt-image-1`. Unsupported for `gpt-image-1-mini`. Supports `high` and `low`.
* Defaults to `low`.
*/
fun inputFidelity(inputFidelity: InputFidelity?) =
inputFidelity(MultipartField.of(inputFidelity))
@@ -1981,7 +1990,9 @@
}

/**
* Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for `gpt-image-1`. Unsupported for `gpt-image-1-mini`. Supports `high` and `low`. Defaults to `low`.
* Control how much effort the model will exert to match the style and features, especially
* facial features, of input images. This parameter is only supported for `gpt-image-1`.
* Unsupported for `gpt-image-1-mini`. Supports `high` and `low`. Defaults to `low`.
*/
class InputFidelity @JsonCreator private constructor(private val value: JsonField<String>) :
Enum {
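The rewrapped doc comment above packs several rules into one sentence: `input_fidelity` is supported only for `gpt-image-1` (not `gpt-image-1-mini`), accepts `high` or `low`, and defaults to `low`. A hedged sketch of those rules as a standalone resolver (a hypothetical helper, not SDK code; the SDK leaves validation to the server):

```java
// Hypothetical resolver for the documented `input_fidelity` rules.
class InputFidelityRules {
    static String resolve(String model, String requested) {
        if (!model.equals("gpt-image-1")) {
            // Unsupported for `gpt-image-1-mini` and other models.
            throw new IllegalArgumentException("input_fidelity is unsupported for " + model);
        }
        if (requested == null) {
            return "low"; // documented default
        }
        if (!requested.equals("high") && !requested.equals("low")) {
            throw new IllegalArgumentException("input_fidelity must be `high` or `low`");
        }
        return requested;
    }
}
```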
Original file line number Diff line number Diff line change
@@ -221,8 +221,17 @@ private constructor(
fun tracing(): Optional<RealtimeTracingConfig> = tracing.getOptional("tracing")

/**
* Controls how the realtime conversation is truncated prior to model inference. The default is
* `auto`.
* When the number of tokens in a conversation exceeds the model's input token limit, the
* conversation will be truncated, meaning messages (starting from the oldest) will not be included
* in the model's context. A 32k context model with 4,096 max output tokens can only include
* 28,224 tokens in the context before truncation occurs. Clients can configure truncation
* behavior to truncate with a lower max token limit, which is an effective way to control token
* usage and cost. Truncation will reduce the number of cached tokens on the next turn (busting
* the cache), since messages are dropped from the beginning of the context. However, clients
* can also configure truncation to retain messages up to a fraction of the maximum context
* size, which will reduce the need for future truncations and thus improve the cache rate.
* Truncation can be disabled entirely, which means the server will never truncate but would
* instead return an error if the conversation exceeds the model's input token limit.
*
* @throws OpenAIInvalidDataException if the JSON field has an unexpected type (e.g. if the
* server responded with an unexpected value).
@@ -666,8 +675,18 @@
tracing(RealtimeTracingConfig.ofTracingConfiguration(tracingConfiguration))

/**
* Controls how the realtime conversation is truncated prior to model inference. The default
* is `auto`.
* When the number of tokens in a conversation exceeds the model's input token limit, the
* conversation will be truncated, meaning messages (starting from the oldest) will not be
* included in the model's context. A 32k context model with 4,096 max output tokens can
* only include 28,224 tokens in the context before truncation occurs. Clients can configure
* truncation behavior to truncate with a lower max token limit, which is an effective way
* to control token usage and cost. Truncation will reduce the number of cached tokens on
* the next turn (busting the cache), since messages are dropped from the beginning of the
* context. However, clients can also configure truncation to retain messages up to a
* fraction of the maximum context size, which will reduce the need for future truncations
* and thus improve the cache rate. Truncation can be disabled entirely, which means the
* server will never truncate but would instead return an error if the conversation exceeds
* the model's input token limit.
*/
fun truncation(truncation: RealtimeTruncation) = truncation(JsonField.of(truncation))

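The expanded truncation doc is, at its core, budget arithmetic: the usable input budget is the context window minus the tokens reserved for output. A small sketch of that arithmetic, assuming the simple subtraction model (note the doc's own example, 28,224 input tokens with 4,096 max output, then implies a 32,320-token window, so its "32k" is not exactly 32 × 1024; treat the formula, not the constants, as the takeaway):

```java
// Sketch of the input-token budget implied by the truncation docs.
class TruncationBudget {
    /** Tokens available for conversation context once output tokens are reserved. */
    static long inputBudget(long contextWindow, long maxOutputTokens) {
        return contextWindow - maxOutputTokens;
    }

    /**
     * True when the conversation exceeds the budget, i.e. when truncation
     * occurs (or an error is returned, if truncation is disabled).
     */
    static boolean wouldTruncate(long conversationTokens, long contextWindow, long maxOutputTokens) {
        return conversationTokens > inputBudget(contextWindow, maxOutputTokens);
    }
}
```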
Original file line number Diff line number Diff line change
@@ -42,7 +42,7 @@ private constructor(
private val _json: JsonValue? = null,
) {

/** Default tracing mode for the session. */
/** Enables tracing and sets default values for tracing configuration options. Always `auto`. */
fun auto(): Optional<JsonValue> = Optional.ofNullable(auto)

/** Granular configuration for tracing. */
@@ -53,7 +53,7 @@

fun isTracingConfiguration(): Boolean = tracingConfiguration != null

/** Default tracing mode for the session. */
/** Enables tracing and sets default values for tracing configuration options. Always `auto`. */
fun asAuto(): JsonValue = auto.getOrThrow("auto")

/** Granular configuration for tracing. */
@@ -144,7 +144,9 @@

companion object {

/** Default tracing mode for the session. */
/**
* Enables tracing and sets default values for tracing configuration options. Always `auto`.
*/
@JvmStatic fun ofAuto() = RealtimeTracingConfig(auto = JsonValue.from("auto"))

/** Granular configuration for tracing. */
@@ -159,7 +161,9 @@
*/
interface Visitor<out T> {

/** Default tracing mode for the session. */
/**
* Enables tracing and sets default values for tracing configuration options. Always `auto`.
*/
fun visitAuto(auto: JsonValue): T

/** Granular configuration for tracing. */
Original file line number Diff line number Diff line change
@@ -22,8 +22,17 @@ import java.util.Objects
import java.util.Optional

/**
* Controls how the realtime conversation is truncated prior to model inference. The default is
* `auto`.
* When the number of tokens in a conversation exceeds the model's input token limit, the
* conversation will be truncated, meaning messages (starting from the oldest) will not be included in
* the model's context. A 32k context model with 4,096 max output tokens can only include 28,224
* tokens in the context before truncation occurs. Clients can configure truncation behavior to
* truncate with a lower max token limit, which is an effective way to control token usage and cost.
* Truncation will reduce the number of cached tokens on the next turn (busting the cache), since
* messages are dropped from the beginning of the context. However, clients can also configure
* truncation to retain messages up to a fraction of the maximum context size, which will reduce the
* need for future truncations and thus improve the cache rate. Truncation can be disabled entirely,
* which means the server will never truncate but would instead return an error if the conversation
* exceeds the model's input token limit.
*/
@JsonDeserialize(using = RealtimeTruncation.Deserializer::class)
@JsonSerialize(using = RealtimeTruncation.Serializer::class)