diff --git a/docs/src/content/docs/guides/models.mdx b/docs/src/content/docs/guides/models.mdx
index 0476d427..945e0b91 100644
--- a/docs/src/content/docs/guides/models.mdx
+++ b/docs/src/content/docs/guides/models.mdx
@@ -29,6 +29,53 @@ In day‑to‑day work you normally only interact with model **names** and occas
title="Specifying a model per‑agent"
/>
+## Default model
+
+If you don't specify a model when initializing an `Agent`, the default model is used. The default is currently [`gpt-4.1`](https://platform.openai.com/docs/models/gpt-4.1), which offers a strong balance of predictability and low latency for agentic workflows.
+
+If you want to switch to other models like [`gpt-5`](https://platform.openai.com/docs/models/gpt-5), there are two ways to configure your agents.
+
+First, if you want to consistently use a specific model for all agents that do not set a custom model, set the `OPENAI_DEFAULT_MODEL` environment variable before running your agents.
+
+```bash
+export OPENAI_DEFAULT_MODEL=gpt-5
+node my-awesome-agent.js
+```
+
+Second, you can set a default model on a `Runner` instance. Any agent run through that `Runner` that does not set its own model uses the `Runner`'s default.
+
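+A minimal sketch of the `Runner`-level default (this assumes the `Runner` constructor accepts a `model` in its run config; `gpt-4.1-mini` is used here only as an illustrative model name):
+
+```ts
+import { Agent, Runner } from '@openai/agents';
+
+// Agents run through this runner that don't set their own model
+// use the runner's default model.
+const runner = new Runner({ model: 'gpt-4.1-mini' });
+
+const agent = new Agent({
+  name: 'Assistant',
+  instructions: 'You are a helpful assistant.',
+  // No model set here, so the runner's default applies.
+});
+
+const result = await runner.run(agent, 'Hello!');
+console.log(result.finalOutput);
+```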
+
+
+### GPT-5 models
+
+When you use any of GPT-5's reasoning models ([`gpt-5`](https://platform.openai.com/docs/models/gpt-5), [`gpt-5-mini`](https://platform.openai.com/docs/models/gpt-5-mini), or [`gpt-5-nano`](https://platform.openai.com/docs/models/gpt-5-nano)) this way, the SDK applies sensible `modelSettings` by default. Specifically, it sets both `reasoning.effort` and `verbosity` to `"low"`. To adjust the reasoning effort for the default model, pass your own `modelSettings`:
+
+```ts
+import { Agent } from '@openai/agents';
+
+const myAgent = new Agent({
+ name: 'My Agent',
+ instructions: "You're a helpful agent.",
+ modelSettings: {
+ reasoning: { effort: 'minimal' },
+ text: { verbosity: 'low' },
+ },
+ // If OPENAI_DEFAULT_MODEL=gpt-5 is set, passing only modelSettings works.
+ // It's also fine to pass a GPT-5 model name explicitly:
+ // model: 'gpt-5',
+});
+```
+
+For lower latency, either [`gpt-5-mini`](https://platform.openai.com/docs/models/gpt-5-mini) or [`gpt-5-nano`](https://platform.openai.com/docs/models/gpt-5-nano) with `reasoning.effort` set to `"minimal"` will often return responses faster than the default settings. However, some built-in tools in the Responses API (such as file search and image generation) do not support `"minimal"` reasoning effort, which is why the Agents SDK defaults to `"low"`.
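+For example, a lower-latency setup along these lines (a sketch using the settings described above):
+
+```ts
+import { Agent } from '@openai/agents';
+
+const fastAgent = new Agent({
+  name: 'Fast Agent',
+  instructions: "You're a helpful agent.",
+  model: 'gpt-5-mini',
+  // "minimal" trades some reasoning depth for faster responses;
+  // avoid it if the agent relies on built-in tools that don't support it.
+  modelSettings: {
+    reasoning: { effort: 'minimal' },
+  },
+});
+```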
+
+### Non-GPT-5 models
+
+If you pass a non-GPT-5 model name without custom `modelSettings`, the SDK falls back to generic `modelSettings` that are compatible with any model.
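+For instance, explicitly passing a non-GPT-5 model name leaves `modelSettings` alone (a sketch; `gpt-4.1-mini` is just an illustrative model name):
+
+```ts
+import { Agent } from '@openai/agents';
+
+// No GPT-5-specific defaults (reasoning effort, verbosity) are applied here.
+const agent = new Agent({
+  name: 'My Agent',
+  instructions: "You're a helpful agent.",
+  model: 'gpt-4.1-mini',
+});
+```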
+
---
## The OpenAI provider
@@ -52,16 +99,6 @@ endpoints:
You can also plug your own `OpenAI` client via `setDefaultOpenAIClient(client)` if you need
custom networking settings.
-### Default model
-
-The OpenAI provider defaults to `gpt‑4o`. Override per agent or globally:
-
-
-
---
## ModelSettings
@@ -79,6 +116,8 @@ The OpenAI provider defaults to `gpt‑4o`. Override per agent or globally:
| `truncation` | `'auto' \| 'disabled'` | Token truncation strategy. |
| `maxTokens` | `number` | Maximum tokens in the response. |
| `store` | `boolean` | Persist the response for retrieval / RAG workflows. |
+| `reasoning.effort` | `'minimal' \| 'low' \| 'medium' \| 'high'` | Reasoning effort for reasoning models such as gpt-5. |
+| `text.verbosity` | `'low' \| 'medium' \| 'high'` | Output text verbosity for models such as gpt-5. |
Attach settings at either level: