
backport v6 #5

Open
callycodes wants to merge 2206 commits into ZenningAI:main from vercel:main

Conversation

@callycodes

Background

Summary

Manual Verification

Checklist

  • Tests have been added / updated (for bug fixes / features)
  • Documentation has been added / updated (for bug fixes / features)
  • A patch changeset for relevant packages has been added (for bug fixes / features - run pnpm changeset in the project root)
  • Formatting issues have been fixed (run pnpm prettier-fix in the project root)
  • I have reviewed this pull request (self-review)

Future Work

Related Issues

gr2m and others added 30 commits April 2, 2026 09:06
…els (#14056)

## Background

The `@ai-sdk/anthropic` provider enforces temperature/topP mutual
exclusivity for all models, matching a constraint in the Anthropic API.
However, providers like Minimax use the Anthropic-compatible API
endpoint with non-Anthropic models (e.g. `MiniMax-M2.7`) that require
both `temperature` and `top_p` to be set simultaneously.

## Summary

- Introduce an `isAnthropicModel` variable derived from `isKnownModel ||
modelId.startsWith('claude-')`
- Only enforce temperature/topP mutual exclusivity for Anthropic models
- Non-Anthropic models using the Anthropic-compatible API can now send
both parameters
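
The gating described above can be sketched as follows; `isKnownModel` stands in for the provider's internal known-model lookup, and the helper name is illustrative rather than the provider's actual code:

```typescript
// Sketch of the check described above. `isKnownModel` stands in for the
// provider's internal known-model lookup; the helper name is ours.
function shouldEnforceTemperatureTopPExclusivity(
  modelId: string,
  isKnownModel: boolean,
): boolean {
  // Only Anthropic models (known IDs or `claude-*`) keep the constraint;
  // Anthropic-compatible models like MiniMax may send both parameters.
  const isAnthropicModel = isKnownModel || modelId.startsWith('claude-');
  return isAnthropicModel;
}
```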

## Related Issues

Port of #14052
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## @ai-sdk/amazon-bedrock@5.0.0-beta.19

### Patch Changes

-   Updated dependencies [f57c702]
    -   @ai-sdk/anthropic@4.0.0-beta.15

## @ai-sdk/anthropic@4.0.0-beta.15

### Patch Changes

- f57c702: fix(anthropic): allow both temperature and topP for
non-Anthropic models using the Anthropic-compatible API

The temperature/topP mutual exclusivity check now only applies to known
Anthropic models (model IDs starting with `claude-`). Non-Anthropic
models using the Anthropic-compatible API (e.g. Minimax) can now send
both parameters as required by their APIs.

## @ai-sdk/google-vertex@5.0.0-beta.29

### Patch Changes

-   Updated dependencies [f57c702]
    -   @ai-sdk/anthropic@4.0.0-beta.15

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
This is an automated update of the gateway model settings files.

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…#14064)

## Background

`streamModelCall` implements the retry behavior. However, to support
external loop control it should stay simple, with the retry concern
handled outside of it.

## Summary

Move `retry` behavior from `streamModelCall` to `streamText`. Add a
regression test.
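
As a rough sketch of the separation, the outer loop can wrap the inner call in a generic retry helper; names here are illustrative, and the SDK's actual retry logic is async and stream-aware:

```typescript
// Illustrative retry wrapper; the SDK's actual retry logic is async,
// stream-aware, and lives in `streamText` after this change.
function withRetries<T>(fn: () => T, maxRetries: number): T {
  let lastError: unknown;
  // One initial attempt plus up to `maxRetries` retries.
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return fn();
    } catch (error) {
      lastError = error;
    }
  }
  throw lastError;
}
```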

## Related Issues

Towards #13570
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## ai@7.0.0-beta.59

### Patch Changes

-   Updated dependencies [4552cbf]
    -   @ai-sdk/gateway@4.0.0-beta.30

## @ai-sdk/angular@3.0.0-beta.59

### Patch Changes

-   ai@7.0.0-beta.59

## @ai-sdk/gateway@4.0.0-beta.30

### Patch Changes

- 4552cbf: chore(provider/gateway): update gateway model settings files

## @ai-sdk/langchain@3.0.0-beta.59

### Patch Changes

-   ai@7.0.0-beta.59

## @ai-sdk/llamaindex@3.0.0-beta.59

### Patch Changes

-   ai@7.0.0-beta.59

## @ai-sdk/otel@1.0.0-beta.5

### Patch Changes

-   ai@7.0.0-beta.59

## @ai-sdk/react@4.0.0-beta.59

### Patch Changes

-   ai@7.0.0-beta.59

## @ai-sdk/rsc@3.0.0-beta.60

### Patch Changes

-   ai@7.0.0-beta.59

## @ai-sdk/svelte@5.0.0-beta.59

### Patch Changes

-   ai@7.0.0-beta.59

## @ai-sdk/vue@4.0.0-beta.59

### Patch Changes

-   ai@7.0.0-beta.59

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Background

Adds docs for the tool timeout feature (#13365, #13536)

## Summary

- add `toolMs` and `tools` to timeout configuration JSDoc comments in
`call-settings.ts` and `core-events.ts`
- add tool timeout examples to settings guide
(`content/docs/03-ai-sdk-core/25-settings.mdx`)
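
A hypothetical shape for the settings documented above; the `toolMs` and `tools` names come from this change, while the type layout and the resolution helper below are purely illustrative:

```typescript
// Hypothetical timeout settings shape; only `toolMs` and `tools` are
// taken from this change, the rest is an illustrative assumption.
type TimeoutSettings = {
  ms?: number; // overall call timeout (assumed)
  toolMs?: number; // default timeout for tool execution
  tools?: Record<string, number>; // per-tool overrides
};

// Per-tool override wins over the shared default.
function effectiveToolTimeout(
  settings: TimeoutSettings,
  toolName: string,
): number | undefined {
  return settings.tools?.[toolName] ?? settings.toolMs;
}
```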

## Checklist

- [ ] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [ ] A _patch_ changeset for relevant packages has been added (run
`pnpm changeset` in root)
- [x] I have reviewed this pull request (self-review)
## Background

The `activeTools` parameter of `streamModelCall` was not used.

## Summary

Remove `activeTools` parameter from `streamModelCall`. Add JSDoc to
`streamModelCall`
…owner of model settings workflow (#13982)

## Background

The bot's
[commit](b06dc24#diff-a234187cf8615e214b88afef6276b23a8b53bf5265d811e5d305288014af94e4R175-R183)
left a duplicate `PR_COUNT=` assignment, causing a shell syntax error in
the notify job. Sorry, I should've reviewed it more closely!

Also updating owner ahead of my last day at Vercel.

## Summary

- Removed duplicate line
- Restored `GITHUB_OUTPUT` write for Slack step
- Updated owner to Rohan T.

## Testing

Tested with my test Slack channel

## Checklist

- [ ] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [ ] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)
## Summary
- Adds a blockquote to all 35 provider package READMEs pointing Vercel
users to the AI Gateway as an alternative to installing
provider-specific packages

## Test plan
- [ ] Verify blockquotes render correctly on npm
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## ai@7.0.0-beta.60

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/alibaba@2.0.0-beta.15

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs
-   Updated dependencies [38fc777]
    -   @ai-sdk/openai-compatible@3.0.0-beta.13

## @ai-sdk/amazon-bedrock@5.0.0-beta.20

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs
-   Updated dependencies [38fc777]
    -   @ai-sdk/anthropic@4.0.0-beta.16

## @ai-sdk/angular@3.0.0-beta.60

### Patch Changes

-   Updated dependencies [38fc777]
    -   ai@7.0.0-beta.60

## @ai-sdk/anthropic@4.0.0-beta.16

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/assemblyai@3.0.0-beta.11

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/azure@4.0.0-beta.20

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs
-   Updated dependencies [38fc777]
    -   @ai-sdk/openai@4.0.0-beta.20

## @ai-sdk/baseten@2.0.0-beta.13

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs
-   Updated dependencies [38fc777]
    -   @ai-sdk/openai-compatible@3.0.0-beta.13

## @ai-sdk/black-forest-labs@2.0.0-beta.10

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/bytedance@2.0.0-beta.10

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/cerebras@3.0.0-beta.13

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs
-   Updated dependencies [38fc777]
    -   @ai-sdk/openai-compatible@3.0.0-beta.13

## @ai-sdk/codemod@4.0.0-beta.1

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/cohere@4.0.0-beta.11

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/deepgram@3.0.0-beta.10

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/deepinfra@3.0.0-beta.13

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs
-   Updated dependencies [38fc777]
    -   @ai-sdk/openai-compatible@3.0.0-beta.13

## @ai-sdk/deepseek@3.0.0-beta.12

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/elevenlabs@3.0.0-beta.10

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/fal@3.0.0-beta.10

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/fireworks@3.0.0-beta.13

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs
-   Updated dependencies [38fc777]
    -   @ai-sdk/openai-compatible@3.0.0-beta.13

## @ai-sdk/gladia@3.0.0-beta.10

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/google@4.0.0-beta.22

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/google-vertex@5.0.0-beta.30

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs
-   Updated dependencies [38fc777]
    -   @ai-sdk/anthropic@4.0.0-beta.16
    -   @ai-sdk/google@4.0.0-beta.22
    -   @ai-sdk/openai-compatible@3.0.0-beta.13

## @ai-sdk/groq@4.0.0-beta.13

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/huggingface@2.0.0-beta.13

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs
-   Updated dependencies [38fc777]
    -   @ai-sdk/openai-compatible@3.0.0-beta.13

## @ai-sdk/hume@3.0.0-beta.10

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/klingai@4.0.0-beta.11

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/langchain@3.0.0-beta.60

### Patch Changes

-   Updated dependencies [38fc777]
    -   ai@7.0.0-beta.60

## @ai-sdk/llamaindex@3.0.0-beta.60

### Patch Changes

-   Updated dependencies [38fc777]
    -   ai@7.0.0-beta.60

## @ai-sdk/lmnt@3.0.0-beta.10

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/luma@3.0.0-beta.10

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/mistral@4.0.0-beta.12

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/moonshotai@3.0.0-beta.13

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs
-   Updated dependencies [38fc777]
    -   @ai-sdk/openai-compatible@3.0.0-beta.13

## @ai-sdk/openai@4.0.0-beta.20

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/openai-compatible@3.0.0-beta.13

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/otel@1.0.0-beta.6

### Patch Changes

-   Updated dependencies [38fc777]
    -   ai@7.0.0-beta.60

## @ai-sdk/perplexity@4.0.0-beta.13

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/prodia@2.0.0-beta.13

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/react@4.0.0-beta.60

### Patch Changes

-   Updated dependencies [38fc777]
    -   ai@7.0.0-beta.60

## @ai-sdk/replicate@3.0.0-beta.11

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/revai@3.0.0-beta.11

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs

## @ai-sdk/rsc@3.0.0-beta.61

### Patch Changes

-   Updated dependencies [38fc777]
    -   ai@7.0.0-beta.60

## @ai-sdk/svelte@5.0.0-beta.60

### Patch Changes

-   Updated dependencies [38fc777]
    -   ai@7.0.0-beta.60

## @ai-sdk/togetherai@3.0.0-beta.13

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs
-   Updated dependencies [38fc777]
    -   @ai-sdk/openai-compatible@3.0.0-beta.13

## @ai-sdk/vercel@3.0.0-beta.13

### Patch Changes

-   Updated dependencies [38fc777]
    -   @ai-sdk/openai-compatible@3.0.0-beta.13

## @ai-sdk/vue@4.0.0-beta.60

### Patch Changes

-   Updated dependencies [38fc777]
    -   ai@7.0.0-beta.60

## @ai-sdk/xai@4.0.0-beta.21

### Patch Changes

-   38fc777: Add AI Gateway hint to provider READMEs
-   Updated dependencies [38fc777]
    -   @ai-sdk/openai-compatible@3.0.0-beta.13

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…support (#13887)

## Background

The xAI video API supports five generation modes, but the SDK only
implemented three (text-to-video, image-to-video, editing). Video
extension (`POST /v1/videos/extensions`) and reference-to-video/R2V
(`POST /v1/videos/generations` with `reference_images`) were missing.
The xAI provider docs were also outdated, still referencing
`grok-4-fast-non-reasoning` and missing `grok-4.20` models,
`grok-imagine-image-pro`, and the new video modes.

## Summary

**Provider (`@ai-sdk/xai`):**
- Add video extension support via `extensionUrl` provider option → `POST
/v1/videos/extensions`
- Add reference-to-video (R2V) support via `referenceImageUrls` provider
option (1–7 images) → `POST /v1/videos/generations`.
- Surface `progress` field in provider metadata
- Add `error` to response schema (defensive)
- Fix `prompt: options.prompt ?? ''` for image-to-video without text
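
The 1–7 image constraint on `referenceImageUrls` can be checked up front; this validator is illustrative, not the provider's actual code:

```typescript
// Illustrative guard for the R2V constraint described above (1-7
// reference images); the helper name is ours, not the provider's.
function validateReferenceImageUrls(urls: string[]): string[] {
  if (urls.length < 1 || urls.length > 7) {
    throw new Error(
      `referenceImageUrls must contain between 1 and 7 images, got ${urls.length}`,
    );
  }
  return urls;
}
```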

**Documentation:**
- Add Video Extension and R2V sections with code examples
- Document `extensionUrl` and `referenceImageUrls` provider options
- Document `grok-imagine-image-pro` with `resolution` (`1k`/`2k`) and
`quality` options
- Update all code examples to `grok-4.20-reasoning` /
`grok-4.20-non-reasoning`
- Update model capabilities table with grok-4.20 models
- Update video capabilities table with Extension and R2V columns
- Switch examples and docs from `{ videos }` to `{ video }` (singular).
I feel like this is better since xAI does not allow generating multiple
videos, but let me know.

**Examples:**
- New: `extend.ts`, `extend-warnings.ts`, `reference-images.ts`
- Updated: `basic.ts`, `edit-concurrent.ts`, `edit-warnings.ts` with
null-safe metadata access and `{ video }` pattern

**Tests:** 62 total (up from ~25), covering all new modes, validation,
edge cases, polling, and metadata.

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

- Surface `error.message` from API in `failed` status handler (xAI
doesn't document this field yet)
- Consider adding optional `url` property to core `GeneratedFile` to
avoid `providerMetadata` dance for chaining

---------

Co-authored-by: Jaaneek <Jaaneek@users.noreply.github.com>
Co-authored-by: Felix Arntz <felix.arntz@vercel.com>
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## @ai-sdk/xai@4.0.0-beta.22

### Patch Changes

- f51c95e: feat(provider/xai): add video extension and
reference-to-video (R2V) support

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Background

#13988

`URL.href` always appends a trailing slash to pathless URLs. When an MCP
server's Protected Resource Metadata returns `"resource":
"https://mcp.example.com"`, the OAuth `resource` parameter gets sent as
`https://mcp.example.com/`, which breaks auth servers that do exact
string matching.

As specified in the [MCP
spec](https://modelcontextprotocol.io/specification/2025-11-25/basic/authorization#canonical-server-uri),
the canonical resource URI should not have a pathname that is just a
trailing slash.

## Summary

Added a utility function that strips the trailing slash when the
pathname is just a slash.
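
A minimal sketch of such a utility; the actual helper in the PR may differ in name and edge-case handling:

```typescript
// Strip the slash that URL.href auto-appends to pathless URLs, so the
// OAuth `resource` parameter matches the metadata string exactly.
// Sketch only; the PR's actual utility may handle more edge cases.
function canonicalResourceUri(resource: string): string {
  const url = new URL(resource);
  if (url.pathname === '/' && url.search === '' && url.hash === '') {
    return url.href.slice(0, -1); // drop the trailing '/'
  }
  return url.href;
}
```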

## Manual Verification

N/A

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Related Issues

fixes #13988
…es as supported per provider (#13816)

## Background

The AI SDK supports passing media files inline or via URL, but has no
way to upload files directly to a provider or reference previously
uploaded files across providers. Some providers return internal file IDs
(not URLs) from their upload APIs, and switching providers
mid-conversation requires a way to map the same logical file to
different provider-specific identifiers.

## Summary

Introduces `uploadFile` as a top-level function and `ProviderReference`
(`Record<string, string>`) as the provider-independent way to reference
uploaded files.

- **New spec types**: `SharedV4ProviderReference` (provider package),
`FilesV4` interface with `uploadFile` method, `UploadFileResult`;
`mediaType` and `filename` are top-level parameters as they're widely
supported and used
- **New top-level API**: `uploadFile({ files, data, mediaType?,
filename?, providerOptions? })` in the `ai` package, with auto-detection
of media type from file bytes when not provided
- **Provider implementations**: `files()` interface on Anthropic,
Google, OpenAI, and xAI providers, each implementing
`FilesV4.uploadFile`
- Other providers don't support uploading files, or they only support
uploading files for batch inference (`*.jsonl`), which we don't support
anyway.
- **Provider reference support in messages**:
`LanguageModelV4FilePart.data` now accepts `SharedV4ProviderReference`
in addition to `DataContent`; providers that support file references
(Anthropic, Google, OpenAI, xAI) resolve them via
`resolveProviderReference`; all other providers throw
`UnsupportedFunctionalityError`
- **Spec cleanup**: `file-id` and `image-file-id` tool result output
types replaced with `file-reference` and `image-file-reference` using
`SharedV4ProviderReference` instead of `string | Record<string, string>`
- **Uploading from URL is not supported** — no provider supports this,
and auto-downloading is questionable; callers should fetch first
- **`reasoning-file` was not touched** — it is model-generated as part
of reasoning output, so provider references are not applicable
- **Docs included** — Docs about `uploadFile` and `ProviderReference`,
and a new architecture guide are included
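
The media-type auto-detection mentioned above is, per the PR, an internal heuristic; a minimal magic-bytes sketch (signatures abbreviated, helper name ours) looks like:

```typescript
// Minimal magic-bytes sniffing sketch; the SDK's internal signature
// list (`documentMediaTypeSignatures`) is richer than this.
function detectMediaType(bytes: Uint8Array): string | undefined {
  const startsWith = (sig: number[]) =>
    bytes.length >= sig.length && sig.every((b, i) => bytes[i] === b);
  if (startsWith([0x25, 0x50, 0x44, 0x46])) return 'application/pdf'; // %PDF
  if (startsWith([0x89, 0x50, 0x4e, 0x47])) return 'image/png';
  if (startsWith([0xff, 0xd8, 0xff])) return 'image/jpeg';
  return undefined; // caller falls back to an explicit mediaType
}
```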

### Design decisions

- `ProviderReference` is a plain `Record<string, string>` rather than a
wrapper class, keeping it simple to create and merge
- The `isLikelyText` heuristic for media type detection and the
`documentMediaTypeSignatures` are kept internal (not exported) — they
work well enough for `uploadFile` but are not general-purpose utilities
- `resolveProviderReference` (provider-utils) does the lookup by
provider name and throws with a clear error listing available providers
when the reference doesn't contain an entry for the current provider
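
A sketch of that lookup, under the assumption that the signature is roughly `(reference, providerName)`:

```typescript
type ProviderReference = Record<string, string>;

// Sketch of the lookup described above; the real resolveProviderReference
// in provider-utils may have a different signature and error type.
function resolveProviderReference(
  reference: ProviderReference,
  provider: string,
): string {
  const id = reference[provider];
  if (id === undefined) {
    throw new Error(
      `No reference for provider "${provider}". ` +
        `Available providers: ${Object.keys(reference).join(', ')}`,
    );
  }
  return id;
}
```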

### Open questions

1. Should we include a `type` property in `ProviderReference` to
distinguish different kinds of provider references (e.g. file vs skill,
see #12855)?
2. Should `mediaType` and `filename` be top-level fields in the
`uploadFile` result object?
- They're currently top-level request parameters, but in the response
they're in `providerMetadata`.
3. `mergeProviderReferences` is currently inlined in an example
(`multi-provider.ts`) — should we offer this as a utility, or leave it
for later?
4. `file-id` and `image-file-id` were removed in `toModelOutput` return
value — should we deprecate them instead and/or offer auto-migration via
codemod?
5. Out of scope: supporting providers that allow uploading files solely
for batch inference (e.g. Cohere, Groq, Mistral), which we don't support
at a provider level yet anyway - probably leave for later?

## Manual Verification

Upload file examples were added for all 4 supported providers
(Anthropic, Google, OpenAI, xAI), each with image, PDF, and text
variants.

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

Reuse the new `ProviderReference` approach for #12855.

## Related Issues

Fixes #12995
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## ai@7.0.0-beta.61

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider
-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6
    -   @ai-sdk/gateway@4.0.0-beta.31

## @ai-sdk/alibaba@2.0.0-beta.16

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider
-   Updated dependencies [c29a26f]
    -   @ai-sdk/openai-compatible@3.0.0-beta.14
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/amazon-bedrock@5.0.0-beta.21

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider
-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/anthropic@4.0.0-beta.17
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/angular@3.0.0-beta.61

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   ai@7.0.0-beta.61

## @ai-sdk/anthropic@4.0.0-beta.17

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider
-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/assemblyai@3.0.0-beta.12

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/azure@4.0.0-beta.21

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider
-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6
    -   @ai-sdk/openai@4.0.0-beta.21

## @ai-sdk/baseten@2.0.0-beta.14

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/openai-compatible@3.0.0-beta.14
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/black-forest-labs@2.0.0-beta.11

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/bytedance@2.0.0-beta.11

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/cerebras@3.0.0-beta.14

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/openai-compatible@3.0.0-beta.14
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/cohere@4.0.0-beta.12

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider
-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/deepgram@3.0.0-beta.11

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/deepinfra@3.0.0-beta.14

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/openai-compatible@3.0.0-beta.14
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/deepseek@3.0.0-beta.13

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/devtools@1.0.0-beta.6

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/elevenlabs@3.0.0-beta.11

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/fal@3.0.0-beta.11

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/fireworks@3.0.0-beta.14

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/openai-compatible@3.0.0-beta.14
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/gateway@4.0.0-beta.31

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/gladia@3.0.0-beta.11

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/google@4.0.0-beta.23

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider
-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/google-vertex@5.0.0-beta.31

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/openai-compatible@3.0.0-beta.14
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/anthropic@4.0.0-beta.17
    -   @ai-sdk/provider@4.0.0-beta.6
    -   @ai-sdk/google@4.0.0-beta.23

## @ai-sdk/groq@4.0.0-beta.14

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider
-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/huggingface@2.0.0-beta.14

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider
-   Updated dependencies [c29a26f]
    -   @ai-sdk/openai-compatible@3.0.0-beta.14
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/hume@3.0.0-beta.11

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/klingai@4.0.0-beta.12

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/langchain@3.0.0-beta.61

### Patch Changes

-   Updated dependencies [c29a26f]
    -   ai@7.0.0-beta.61

## @ai-sdk/llamaindex@3.0.0-beta.61

### Patch Changes

-   Updated dependencies [c29a26f]
    -   ai@7.0.0-beta.61

## @ai-sdk/lmnt@3.0.0-beta.11

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/luma@3.0.0-beta.11

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/mcp@2.0.0-beta.14

### Patch Changes

- 1e89d62: fix(mcp): strip trailing slash from OAuth resource parameter
-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/mistral@4.0.0-beta.13

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider
-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/moonshotai@3.0.0-beta.14

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/openai-compatible@3.0.0-beta.14
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/open-responses@2.0.0-beta.12

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider
-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/openai@4.0.0-beta.21

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider
-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/openai-compatible@3.0.0-beta.14

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider
-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/otel@1.0.0-beta.7

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider@4.0.0-beta.6
    -   ai@7.0.0-beta.61

## @ai-sdk/perplexity@4.0.0-beta.14

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider
-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/prodia@2.0.0-beta.14

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider
-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/provider@4.0.0-beta.6

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider

## @ai-sdk/provider-utils@5.0.0-beta.10

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider
-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/react@4.0.0-beta.61

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   ai@7.0.0-beta.61

## @ai-sdk/replicate@3.0.0-beta.12

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/revai@3.0.0-beta.12

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/rsc@3.0.0-beta.62

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6
    -   ai@7.0.0-beta.61

## @ai-sdk/svelte@5.0.0-beta.61

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   ai@7.0.0-beta.61

## @ai-sdk/togetherai@3.0.0-beta.14

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/openai-compatible@3.0.0-beta.14
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/valibot@3.0.0-beta.10

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10

## @ai-sdk/vercel@3.0.0-beta.14

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/openai-compatible@3.0.0-beta.14
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

## @ai-sdk/vue@4.0.0-beta.61

### Patch Changes

-   Updated dependencies [c29a26f]
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   ai@7.0.0-beta.61

## @ai-sdk/xai@4.0.0-beta.23

### Patch Changes

- c29a26f: feat(provider): add support for provider references and
uploading files as supported per provider
-   Updated dependencies [c29a26f]
    -   @ai-sdk/openai-compatible@3.0.0-beta.14
    -   @ai-sdk/provider-utils@5.0.0-beta.10
    -   @ai-sdk/provider@4.0.0-beta.6

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
This is an automated update of the gateway model settings files.

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## ai@7.0.0-beta.62

### Patch Changes

-   Updated dependencies [11746ca]
    -   @ai-sdk/gateway@4.0.0-beta.32

## @ai-sdk/angular@3.0.0-beta.62

### Patch Changes

-   ai@7.0.0-beta.62

## @ai-sdk/gateway@4.0.0-beta.32

### Patch Changes

- 11746ca: chore(provider/gateway): update gateway model settings files

## @ai-sdk/langchain@3.0.0-beta.62

### Patch Changes

-   ai@7.0.0-beta.62

## @ai-sdk/llamaindex@3.0.0-beta.62

### Patch Changes

-   ai@7.0.0-beta.62

## @ai-sdk/otel@1.0.0-beta.8

### Patch Changes

-   ai@7.0.0-beta.62

## @ai-sdk/react@4.0.0-beta.62

### Patch Changes

-   ai@7.0.0-beta.62

## @ai-sdk/rsc@3.0.0-beta.63

### Patch Changes

-   ai@7.0.0-beta.62

## @ai-sdk/svelte@5.0.0-beta.62

### Patch Changes

-   ai@7.0.0-beta.62

## @ai-sdk/vue@4.0.0-beta.62

### Patch Changes

-   ai@7.0.0-beta.62

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…ate (#14090)

## Background

Google updated their Gemini API to use lowercase service tier values
(`standard`, `flex`, `priority`) instead of the previous uppercase
prefixed format (`SERVICE_TIER_STANDARD`, `SERVICE_TIER_FLEX`,
`SERVICE_TIER_PRIORITY`). The old values no longer work. See
googleapis/js-genai@9bdc2ae.

Also outlined in their docs:
- https://ai.google.dev/gemini-api/docs/priority-inference
- https://ai.google.dev/gemini-api/docs/flex-inference

## Summary

- Updated the `serviceTier` provider option enum in google to accept
`'standard' | 'flex' | 'priority'`
- Updated tests, examples, and documentation to reflect the new values
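
For callers migrating existing configuration, the value change can be sketched with a small helper. This is not part of the PR or the SDK — the `migrateServiceTier` name and error handling are invented for illustration; only the `'standard' | 'flex' | 'priority'` union comes from the change above.

```typescript
// Maps old uppercase values (e.g. SERVICE_TIER_FLEX) to the new lowercase
// enum. Helper name and error handling are illustrative, not part of the SDK.
type ServiceTier = 'standard' | 'flex' | 'priority';

function migrateServiceTier(value: string): ServiceTier {
  const normalized = value.replace(/^SERVICE_TIER_/, '').toLowerCase();
  if (
    normalized === 'standard' ||
    normalized === 'flex' ||
    normalized === 'priority'
  ) {
    return normalized;
  }
  throw new Error(`Unknown service tier: ${value}`);
}

console.log(migrateServiceTier('SERVICE_TIER_FLEX')); // flex
```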

## Manual Verification

Ran the `generate-text` and `stream-text` service tier examples against
the live Gemini API.

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

N/A

## Related Issues

N/A
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## @ai-sdk/google@4.0.0-beta.24

### Patch Changes

- 55db546: fix(provider/google): fix Gemini service tier enum after
upstream update

## @ai-sdk/google-vertex@5.0.0-beta.32

### Patch Changes

-   Updated dependencies [55db546]
    -   @ai-sdk/google@4.0.0-beta.24

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…anguage (#14069)

## Summary
- Clarify that `zeroDataRetention` and `disallowPromptTraining` filters
are **not applied** when using BYOK credentials
- Clarify that these filters **are honored** when BYOK credentials fail
and the request falls back to system credentials
- Updated JSDoc in `gateway-provider-options.ts` and documentation in
`00-ai-gateway.mdx`

## Test plan
- [ ] Review updated language in docs and JSDoc for accuracy
- [ ] Verify no functional code changes (docs/comments only)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
## Summary

- Changed `error.message` to `error.toString()` in both copies of
`getErrorMessage()`
- This preserves the error type prefix (`Error:`, `TypeError:`,
`RangeError:`, etc.) so the model can distinguish a failed tool call
from a normal string result
- Added comprehensive test suite covering all branches

## Breaking change

This is a behavior change for all callers of `getErrorMessage`. The
output for `Error` instances now includes the type prefix:

```ts
const error = new TypeError("API crashed");

// Before: "API crashed"
error.message

// After: "TypeError: API crashed"
error.toString()
```

Downstream consumers that embed this string in user-facing messages,
structured logs, or serialized API responses will see the prefix. This
is intentional — the previous behavior made tool errors
indistinguishable from successful string results when forwarded to the
model.

## Edge case: empty error message

When `error.message` is an empty string, `error.toString()` returns just
the error name (e.g. `"Error"` or `"TypeError"`), whereas the previous
behavior returned `""`. This is a net improvement — an empty string
gives no signal at all, while `"Error"` at least tells the model
something went wrong.
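
A minimal sketch of the updated helper, assuming the branch structure described above — the `Error` branch is the change made by this PR, while the null/string/fallback branches are an approximation of the real implementation in `provider-utils`:

```typescript
// Sketch only: the real function lives in @ai-sdk/provider-utils; the
// null/string/fallback branches here are assumptions based on this PR.
function getErrorMessage(error: unknown): string {
  if (error == null) {
    return 'unknown error';
  }
  if (typeof error === 'string') {
    return error;
  }
  if (error instanceof Error) {
    // Previously `error.message`; toString() keeps the type prefix.
    return error.toString();
  }
  return JSON.stringify(error);
}

console.log(getErrorMessage(new TypeError('API crashed'))); // TypeError: API crashed
console.log(getErrorMessage(new Error(''))); // Error
```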

## Note on file duplication

Both `packages/provider/src/errors/get-error-message.ts` and
`packages/provider-utils/src/get-error-message.ts` are identical copies.
Consolidating them into a single shared module would be a good follow-up
but is out of scope for this fix.

## Changes

- `packages/provider/src/errors/get-error-message.ts`
- `packages/provider-utils/src/get-error-message.ts`
- `packages/provider-utils/src/get-error-message.test.ts` (new)

## Test plan

- [x] 15 tests covering: null/undefined, strings,
Error/TypeError/RangeError, empty messages, custom error subclasses,
plain objects/numbers/booleans/arrays
- [x] All pass (`pnpm test:node` in provider-utils — 386/386)

Closes #14002

---------

Co-authored-by: Murat Aslan <murataslan1@users.noreply.github.com>
Co-authored-by: Felix Arntz <felix.arntz@vercel.com>
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## ai@7.0.0-beta.63

### Patch Changes

- 6fd51c0: fix(provider): preserve error type prefix in getErrorMessage
-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7
    -   @ai-sdk/gateway@4.0.0-beta.33

## @ai-sdk/alibaba@2.0.0-beta.17

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7
    -   @ai-sdk/openai-compatible@3.0.0-beta.15

## @ai-sdk/amazon-bedrock@5.0.0-beta.22

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7
    -   @ai-sdk/anthropic@4.0.0-beta.18

## @ai-sdk/angular@3.0.0-beta.63

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   ai@7.0.0-beta.63

## @ai-sdk/anthropic@4.0.0-beta.18

### Patch Changes

- 6fd51c0: fix(provider): preserve error type prefix in getErrorMessage
-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/assemblyai@3.0.0-beta.13

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/azure@4.0.0-beta.22

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7
    -   @ai-sdk/openai@4.0.0-beta.22

## @ai-sdk/baseten@2.0.0-beta.15

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7
    -   @ai-sdk/openai-compatible@3.0.0-beta.15

## @ai-sdk/black-forest-labs@2.0.0-beta.12

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/bytedance@2.0.0-beta.12

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/cerebras@3.0.0-beta.15

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7
    -   @ai-sdk/openai-compatible@3.0.0-beta.15

## @ai-sdk/cohere@4.0.0-beta.13

### Patch Changes

- 6fd51c0: fix(provider): preserve error type prefix in getErrorMessage
-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/deepgram@3.0.0-beta.12

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/deepinfra@3.0.0-beta.15

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7
    -   @ai-sdk/openai-compatible@3.0.0-beta.15

## @ai-sdk/deepseek@3.0.0-beta.14

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/devtools@1.0.0-beta.7

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/elevenlabs@3.0.0-beta.12

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/fal@3.0.0-beta.12

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/fireworks@3.0.0-beta.15

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7
    -   @ai-sdk/openai-compatible@3.0.0-beta.15

## @ai-sdk/gateway@4.0.0-beta.33

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/gladia@3.0.0-beta.12

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/google@4.0.0-beta.25

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/google-vertex@5.0.0-beta.33

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7
    -   @ai-sdk/openai-compatible@3.0.0-beta.15
    -   @ai-sdk/anthropic@4.0.0-beta.18
    -   @ai-sdk/google@4.0.0-beta.25

## @ai-sdk/groq@4.0.0-beta.15

### Patch Changes

- 6fd51c0: fix(provider): preserve error type prefix in getErrorMessage
-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/huggingface@2.0.0-beta.15

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7
    -   @ai-sdk/openai-compatible@3.0.0-beta.15

## @ai-sdk/hume@3.0.0-beta.12

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/klingai@4.0.0-beta.13

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/langchain@3.0.0-beta.63

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   ai@7.0.0-beta.63

## @ai-sdk/llamaindex@3.0.0-beta.63

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   ai@7.0.0-beta.63

## @ai-sdk/lmnt@3.0.0-beta.12

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/luma@3.0.0-beta.12

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/mcp@2.0.0-beta.15

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/mistral@4.0.0-beta.14

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/moonshotai@3.0.0-beta.15

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7
    -   @ai-sdk/openai-compatible@3.0.0-beta.15

## @ai-sdk/open-responses@2.0.0-beta.13

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/openai@4.0.0-beta.22

### Patch Changes

- 6fd51c0: fix(provider): preserve error type prefix in getErrorMessage
-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/openai-compatible@3.0.0-beta.15

### Patch Changes

- 6fd51c0: fix(provider): preserve error type prefix in getErrorMessage
-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/otel@1.0.0-beta.9

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider@4.0.0-beta.7
    -   ai@7.0.0-beta.63

## @ai-sdk/perplexity@4.0.0-beta.15

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/prodia@2.0.0-beta.15

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/provider@4.0.0-beta.7

### Patch Changes

- 6fd51c0: fix(provider): preserve error type prefix in getErrorMessage

## @ai-sdk/provider-utils@5.0.0-beta.11

### Patch Changes

- 6fd51c0: fix(provider): preserve error type prefix in getErrorMessage
-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/react@4.0.0-beta.63

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   ai@7.0.0-beta.63

## @ai-sdk/replicate@3.0.0-beta.13

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/revai@3.0.0-beta.13

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7

## @ai-sdk/rsc@3.0.0-beta.64

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7
    -   ai@7.0.0-beta.63

## @ai-sdk/svelte@5.0.0-beta.63

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   ai@7.0.0-beta.63

## @ai-sdk/togetherai@3.0.0-beta.15

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7
    -   @ai-sdk/openai-compatible@3.0.0-beta.15

## @ai-sdk/valibot@3.0.0-beta.11

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11

## @ai-sdk/vercel@3.0.0-beta.15

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7
    -   @ai-sdk/openai-compatible@3.0.0-beta.15

## @ai-sdk/vue@4.0.0-beta.63

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   ai@7.0.0-beta.63

## @ai-sdk/xai@4.0.0-beta.24

### Patch Changes

-   Updated dependencies [6fd51c0]
    -   @ai-sdk/provider-utils@5.0.0-beta.11
    -   @ai-sdk/provider@4.0.0-beta.7
    -   @ai-sdk/openai-compatible@3.0.0-beta.15

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
…ta (#14016)

## Summary
- Add `promptTokensDetails` and `candidatesTokensDetails` to the Gemini
response `usageSchema` so per-modality token counts (TEXT, IMAGE, AUDIO,
VIDEO) are no longer stripped by Zod parsing
- These fields now flow through to `usage.raw`, enabling downstream
consumers to distinguish token usage by modality

## Why
Gemini charges different rates for different input modalities (e.g.
audio input is $0.50/1M tokens vs $0.25/1M for text/image/video). The
ai-gateway needs per-modality token counts to bill correctly.
Previously, `promptTokensDetails` was present in the Gemini API response
but stripped during Zod schema validation, making it impossible to
differentiate modalities downstream.
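
For illustration, a downstream consumer could index the details by modality once they reach `usage.raw`. The payload shape below follows the Gemini API field names; the sample token counts are invented:

```typescript
// Sample raw usage payload after the schema change; the values are made up.
const rawUsage = {
  promptTokensDetails: [
    { modality: 'TEXT', tokenCount: 120 },
    { modality: 'AUDIO', tokenCount: 480 },
  ],
};

// Index token counts by modality, e.g. for per-modality billing.
const tokensByModality = Object.fromEntries(
  rawUsage.promptTokensDetails.map(d => [d.modality, d.tokenCount] as const),
);

console.log(tokensByModality.AUDIO); // 480
```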

## Validation
- Ran the `generate-text/google/image.ts` example and confirmed
`promptTokensDetails` now appears in `usage.raw` with both `TEXT` and
`IMAGE` modality entries
- All 328 existing tests pass; 10 snapshots updated to include the new
fields

---------

Co-authored-by: Felix Arntz <felix.arntz@vercel.com>
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## @ai-sdk/google@4.0.0-beta.26

### Patch Changes

- a05109d: feat(provider/google): preserve per-modality token details in
usage data

## @ai-sdk/google-vertex@5.0.0-beta.34

### Patch Changes

-   Updated dependencies [a05109d]
    -   @ai-sdk/google@4.0.0-beta.26

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Background

When working on new features or bug fixes, it's useful to have a skill
that can automatically identify what examples should be added to
`examples/ai-functions` based on the current branch changes. I've been
doing this manually a lot.

## Summary

- Adds an internal `add-function-examples` skill that reviews current
branch changes and creates function examples

## Checklist

- [ ] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [ ] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

N/A

## Related Issues

N/A
## Summary

- Adds `hipaaCompliant` gateway provider option to restrict routing to
providers that have signed a BAA with Vercel for HIPAA compliance
- Includes schema definition, tests (hipaaCompliant alone + combined
with zeroDataRetention), documentation with example, and ai-functions
example
- Also improves `zeroDataRetention` documentation with more detailed
descriptions
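
The option shape described above can be sketched as a plain options object. This is illustrative only — the option names come from this PR, but the surrounding `generateText` call and model id are omitted:

```typescript
// Shape of the new gateway provider options as described in this PR.
// Only the options object is shown; the model call around it is omitted.
const providerOptions = {
  gateway: {
    hipaaCompliant: true, // only route to providers with a signed BAA
    zeroDataRetention: true, // the two restrictions can be combined
  },
};

console.log(Object.keys(providerOptions.gateway)); // [ 'hipaaCompliant', 'zeroDataRetention' ]
```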

## Test plan

- [ ] Verify `hipaaCompliant` option is passed through in provider
options
- [ ] Verify combined `zeroDataRetention` + `hipaaCompliant` options
work together
- [ ] Run gateway language model tests: `cd packages/gateway && pnpm
test`

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## ai@7.0.0-beta.64

### Patch Changes

-   Updated dependencies [71b0e7d]
    -   @ai-sdk/gateway@4.0.0-beta.34

## @ai-sdk/angular@3.0.0-beta.64

### Patch Changes

-   ai@7.0.0-beta.64

## @ai-sdk/gateway@4.0.0-beta.34

### Patch Changes

- 71b0e7d: feat (provider/gateway): add hipaaCompliant gateway provider
option

## @ai-sdk/langchain@3.0.0-beta.64

### Patch Changes

-   ai@7.0.0-beta.64

## @ai-sdk/llamaindex@3.0.0-beta.64

### Patch Changes

-   ai@7.0.0-beta.64

## @ai-sdk/otel@1.0.0-beta.10

### Patch Changes

-   ai@7.0.0-beta.64

## @ai-sdk/react@4.0.0-beta.64

### Patch Changes

-   ai@7.0.0-beta.64

## @ai-sdk/rsc@3.0.0-beta.65

### Patch Changes

-   ai@7.0.0-beta.64

## @ai-sdk/svelte@5.0.0-beta.64

### Patch Changes

-   ai@7.0.0-beta.64

## @ai-sdk/vue@4.0.0-beta.64

### Patch Changes

-   ai@7.0.0-beta.64

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
… messages (#14124)

## Summary
- Fix subject-verb disagreement: `Environment variables is not
supported` → `Environment variables are not supported` in
`load-api-key.ts` and `load-setting.ts` (the `fal` provider already uses
the correct `are` form)
- Fix plural inconsistency: `code execution tools is` → `code execution
tool is` in `google-prepare-tools.ts` (all other references use singular
form)

---------

Co-authored-by: yuj <yuj@ztjzsoft.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Aayush Kapoor <83492835+aayush-kapoor@users.noreply.github.com>
Co-authored-by: Aayush Kapoor <aayushkapoor34@gmail.com>
This PR was opened by the [Changesets
release](https://github.com/changesets/action) GitHub action. When
you're ready to do a release, you can merge this and the packages will
be published to npm automatically. If you're not ready to do a release
yet, that's fine, whenever you add more changesets to main, this PR will
be updated.

⚠️⚠️⚠️⚠️⚠️⚠️

`main` is currently in **pre mode** so this branch has prereleases
rather than normal releases. If you want to exit prereleases, run
`changeset pre exit` on `main`.

⚠️⚠️⚠️⚠️⚠️⚠️

# Releases
## ai@7.0.0-beta.65

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   @ai-sdk/gateway@4.0.0-beta.35

## @ai-sdk/alibaba@2.0.0-beta.18

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   @ai-sdk/openai-compatible@3.0.0-beta.16

## @ai-sdk/amazon-bedrock@5.0.0-beta.23

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   @ai-sdk/anthropic@4.0.0-beta.19

## @ai-sdk/angular@3.0.0-beta.65

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   ai@7.0.0-beta.65

## @ai-sdk/anthropic@4.0.0-beta.19

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/assemblyai@3.0.0-beta.14

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/azure@4.0.0-beta.23

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   @ai-sdk/openai@4.0.0-beta.23

## @ai-sdk/baseten@2.0.0-beta.16

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   @ai-sdk/openai-compatible@3.0.0-beta.16

## @ai-sdk/black-forest-labs@2.0.0-beta.13

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/bytedance@2.0.0-beta.13

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/cerebras@3.0.0-beta.16

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   @ai-sdk/openai-compatible@3.0.0-beta.16

## @ai-sdk/cohere@4.0.0-beta.14

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/deepgram@3.0.0-beta.13

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/deepinfra@3.0.0-beta.16

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   @ai-sdk/openai-compatible@3.0.0-beta.16

## @ai-sdk/deepseek@3.0.0-beta.15

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/elevenlabs@3.0.0-beta.13

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/fal@3.0.0-beta.13

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/fireworks@3.0.0-beta.16

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   @ai-sdk/openai-compatible@3.0.0-beta.16

## @ai-sdk/gateway@4.0.0-beta.35

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/gladia@3.0.0-beta.13

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/google@4.0.0-beta.27

### Patch Changes

- 46d1149: chore(provider-utils,google): fix grammar errors in error and
warning messages
-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/google-vertex@5.0.0-beta.35

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   @ai-sdk/google@4.0.0-beta.27
    -   @ai-sdk/anthropic@4.0.0-beta.19
    -   @ai-sdk/openai-compatible@3.0.0-beta.16

## @ai-sdk/groq@4.0.0-beta.16

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/huggingface@2.0.0-beta.16

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   @ai-sdk/openai-compatible@3.0.0-beta.16

## @ai-sdk/hume@3.0.0-beta.13

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/klingai@4.0.0-beta.14

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/langchain@3.0.0-beta.65

### Patch Changes

-   ai@7.0.0-beta.65

## @ai-sdk/llamaindex@3.0.0-beta.65

### Patch Changes

-   ai@7.0.0-beta.65

## @ai-sdk/lmnt@3.0.0-beta.13

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/luma@3.0.0-beta.13

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/mcp@2.0.0-beta.16

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/mistral@4.0.0-beta.15

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/moonshotai@3.0.0-beta.16

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   @ai-sdk/openai-compatible@3.0.0-beta.16

## @ai-sdk/open-responses@2.0.0-beta.14

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/openai@4.0.0-beta.23

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/openai-compatible@3.0.0-beta.16

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/otel@1.0.0-beta.11

### Patch Changes

-   ai@7.0.0-beta.65

## @ai-sdk/perplexity@4.0.0-beta.16

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/prodia@2.0.0-beta.16

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/provider-utils@5.0.0-beta.12

### Patch Changes

- 46d1149: chore(provider-utils,google): fix grammar errors in error and
warning messages

## @ai-sdk/react@4.0.0-beta.65

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   ai@7.0.0-beta.65

## @ai-sdk/replicate@3.0.0-beta.14

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/revai@3.0.0-beta.14

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/rsc@3.0.0-beta.66

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   ai@7.0.0-beta.65

## @ai-sdk/svelte@5.0.0-beta.65

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   ai@7.0.0-beta.65

## @ai-sdk/togetherai@3.0.0-beta.16

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   @ai-sdk/openai-compatible@3.0.0-beta.16

## @ai-sdk/valibot@3.0.0-beta.12

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12

## @ai-sdk/vercel@3.0.0-beta.16

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   @ai-sdk/openai-compatible@3.0.0-beta.16

## @ai-sdk/vue@4.0.0-beta.65

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   ai@7.0.0-beta.65

## @ai-sdk/xai@4.0.0-beta.25

### Patch Changes

-   Updated dependencies [46d1149]
    -   @ai-sdk/provider-utils@5.0.0-beta.12
    -   @ai-sdk/openai-compatible@3.0.0-beta.16

Co-authored-by: vercel-ai-sdk[bot] <225926702+vercel-ai-sdk[bot]@users.noreply.github.com>
## Summary

Fixes #13449.

`@ai-sdk/mcp` hardcoded `globalThis.fetch` in its HTTP and SSE
transports, breaking same-app MCP calls on runtimes that block
self-fetches (e.g. Cloudflare Workers error `1042` via Nitro).

- Adds an optional `fetch` field to `MCPTransportConfig` and both
`HttpMCPTransport` / `SseMCPTransport` constructors
- Falls back to `globalThis.fetch` when not provided (no breaking
change)
- Forwards the custom fetch to OAuth `auth()` calls so token
discovery/exchange uses the same fetch implementation
- Covers both `http` and `sse` transport types as requested in the issue

**Usage:**
```ts
createMCPClient({
  transport: {
    type: 'http',
    url,
    fetch: event.fetch, // e.g. Nitro / Cloudflare Workers request-local fetch
  },
})
```

## Test plan

- [x] Added `describe('custom fetch', ...)` tests to both
`mcp-http-transport.test.ts` and `mcp-sse-transport.test.ts` verifying
the custom fetch function is called for GET (start) and POST (send)
operations
- [x] All 171 existing tests continue to pass (`pnpm test:node`)
- [x] Package builds cleanly (`pnpm build` in `packages/mcp`)

---------

Co-authored-by: Aayush Kapoor <83492835+aayush-kapoor@users.noreply.github.com>
Co-authored-by: Aayush Kapoor <aayushkapoor34@gmail.com>
gr2m and others added 30 commits April 19, 2026 06:02
## Background

The JSDoc in `tool.ts` was inaccurate and lacked descriptions.

## Summary

Improve JSDoc.
## Background

Specifying `needsApproval` does not make sense for provider-executed
tools which run automatically on the provider side (see #14480 ). In
order to filter out provider-executed tools, we need to first
distinguish provider-defined from provider-executed tools.

A provider tool can be:
- provider-defined: the provider specifies input (and sometimes output)
schemas, but the execution function is user-defined. The tool is
executed by the AI SDK.
- provider-executed: the provider specifies input and output schemas.
There is no user-defined execution function. The tool is executed by the
provider as part of their response.

## Summary

* introduce `isProviderExecuted` flag on provider tools
* split `ProviderToolFactory` and related factory functions into
`ProviderDefinedToolFactory` and `ProviderExecutedToolFactory`
* update provider tools

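The split above can be sketched as a discriminated flag on the tool definition. This is an illustrative sketch only — the type and field names below (other than `isProviderExecuted`, which the PR introduces) are assumptions, not the actual SDK types:

```typescript
// Hypothetical shapes illustrating provider-defined vs. provider-executed tools.
interface ProviderToolBase {
  name: string;
  isProviderExecuted: boolean;
}

interface ProviderDefinedTool extends ProviderToolBase {
  isProviderExecuted: false;
  // executed locally by the AI SDK with a user-supplied function
  execute: (input: unknown) => Promise<unknown>;
}

interface ProviderExecutedTool extends ProviderToolBase {
  isProviderExecuted: true;
  // no execute function: the provider runs the tool and returns the output
}

type ProviderTool = ProviderDefinedTool | ProviderExecutedTool;

// Only provider-defined tools are candidates for local execution:
function needsLocalExecution(tool: ProviderTool): tool is ProviderDefinedTool {
  return !tool.isProviderExecuted;
}
```

This is the distinction that later makes it possible to exclude provider-executed tools from approval handling.
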
## Future Work

* exclude provider-executed tools from `toolNeedsApproval` function

## Related Issues

Separates provider-executed from provider-defined tools to enable fixing
the limitation from #14480
## Background

We want to enhance the tool approval function to allow for automatic
accept or reject without user interaction. The name `toolApproval` is
shorter and much better aligned with that desired API.

## Summary

* rename `toolNeedsApproval` to `toolApproval`
* rename `ToolNeedsApprovalConfiguration` to `ToolApprovalConfiguration`

## Future Work

* change tool approval function outputs to "not-applicable", "approved",
"rejected", "user-approval"

## Related Issues

Builds on #14480
## Background

The telemetry spans needed more visibility into the steps that were
running, and needed to be aligned with GenAI semantic conventions.

## Summary

`GenAIOpenTelemetry` now creates an explicit `agent_step` span for each
generateText / streamText step, with both `chat` and `execute_tool` parented
under that step span.

## Manual Verification

ran the example
`examples/ai-functions/src/generate-text/anthropic/subagent-with-telemetry.ts`
before and after the change

### Before

<img width="474" height="349" alt="image"
src="https://github.com/user-attachments/assets/c886c674-78a6-42de-8d3b-355142bfc6ab"
/>

### After

<img width="941" height="990" alt="image"
src="https://github.com/user-attachments/assets/a10b91fd-dc40-46cf-8b24-80556ae5c1e2"
/>

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

The duration of the `chat` span might need new events so that the model
call's duration can be calculated accurately.
<!--
Welcome to contributing to AI SDK! We're excited to see your changes.

We suggest you read the following contributing guide we've created
before submitting:

https://github.com/vercel/ai/blob/main/CONTRIBUTING.md
-->

## Summary

<!-- What did you change? -->

Document the published `@nozomioai/nia-ai-sdk` package in the community
providers section so users can discover Nia's tool, middleware, and
streaming workflows.

## Checklist

<!--
Do not edit this list. Leave items unchecked that don't apply. If you
need to track subtasks, create a new "## Tasks" section

Please check if the PR fulfills the following requirements:
-->

- [ ] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [ ] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)
## Background

The default tool call denial message `Tool execution denied.` led to
OpenAI GPT-5 models rejecting further calls to the same tool with
different inputs.

## Summary

Change the default message `Tool execution denied.` to the more precise
`Tool call execution denied.`

## Related Issues
Discovered during #14643
## Background

Many user-facing agents, e.g. coding agents, allow users to set up
default approvals for certain actions (e.g. executing specific shell
commands). In such cases, there needs to be a way to automatically
approve or deny the execution of certain tool calls.

## Summary

Support automatic approval or denial of tool call executions.

* tool approval configurations return a `ToolApprovalStatus`, which can
be `'not-applicable' | 'approved' | 'denied' | 'user-approval'`
* support automatic approval ('approved', 'denied') in `streamText` and
`generateText`
* add optional `isAutomatic` flag to `ToolApprovalRequest` to indicate
when there are automatic responses

## Example
```ts
const agent = new ToolLoopAgent({
  model: openai('gpt-5.4-mini'),
  // context engineering required to make sure the model does not retry
  // the tool execution if it is not approved for a particular tool call:
  instructions:
    'When a tool call was not approved by the user, ' +
    'do not retry the tool call with the same input. ' +
    'Just say that the tool execution was not approved. ' +
    'You can call a denied tool call with a different input.',
  tools: { weather: weatherTool },
  toolApproval: {
    weather: ({ location }) => {
      const locationLower = location.toLowerCase();
      if (locationLower.includes('san francisco') || locationLower === 'sf') {
        return 'approved';
      }

      if (locationLower.includes('new york') || locationLower === 'nyc') {
        return 'denied';
      }

      return 'user-approval';
    },
  },
});
```
 
## Limitations
* UI messages do not support automatic tool approvals yet.
* Automatic tool approvals do not support approval reasons yet.

## Manual Verification
* [x] generateText
`examples/ai-functions/src/generate-text/openai/tool-approval.ts`
* [x] streamText
`examples/ai-functions/src/stream-text/openai/tool-approval.ts`
* [x] agent
`examples/ai-functions/src/agent/openai/generate-tool-approval.ts`

## Future Work

* add automatic tool approvals to UI messages
* support reason in automatic accept/deny

## Related Issues

Build on #14642
### Why

The OpenAI code-interpreter example prompt contains two small typos in a
single sentence:

- `and and return` (duplicated "and")
- `the sum all the results` (missing "of")

These are user-facing strings in a runnable example — first-time readers
tend to copy-paste example prompts verbatim, so small mistakes
propagate. Also, the duplicated word makes the sentence ungrammatical
and slightly changes the model's expected behavior (the LLM has to
silently correct it before responding).

### What

Single-line change in
`examples/ai-functions/src/stream-text/openai/code-interpreter-tool.ts`:

```diff
-      'Simulate rolling two dice 10000 times and and return the sum all the results.',
+      'Simulate rolling two dice 10000 times and return the sum of all the results.',
```

No logic, behavior, or API surface is touched. Examples are not released
per `CONTRIBUTING.md`, so no changeset is needed.

1 file changed, 1 insertion(+), 1 deletion(-).

---
_Part of open-source blockchain work from
[kcolbchain.com](https://kcolbchain.com) — maintained by [Abhishek
Krishna](https://abhishekkrishna.com). PR opened via [kcolbchain
contrib-bot](https://github.com/kcolbchain/kcolbchain.github.io/blob/master/deploy/contrib-bot/README.md)._

Co-authored-by: abhicris <abhicris@users.noreply.github.com>
Co-authored-by: Claude Opus 4.7 <noreply@anthropic.com>
## Background

The `eventsource-parser` performance was significantly increased in
rexxars/eventsource-parser#27

## Summary

Bump `eventsource-parser` dependency to `^3.0.8`.
## Background

Automatic tool approvals were introduced in #14643 . However, they are
not supported in UI messages yet.

## Summary

Add support for automatic tool approval in UI messages.

* add approval response ui message chunk
* add isAutomatic flag to approval part of ui tool messages
* add tool approval response parts when mapping ui messages to model
messages

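As a rough sketch of how a consumer could branch on the new flag, the snippet below uses a hypothetical approval-part shape — the field names other than `isAutomatic` are assumptions for illustration, not the exact UI message types:

```typescript
// Hypothetical shape of a tool approval part in a UI tool message.
type ApprovalPart = {
  type: 'tool-approval';
  approved: boolean;
  isAutomatic?: boolean; // true when the approval/denial happened automatically
  reason?: string;
};

// Render a human-readable description, distinguishing automatic
// approvals from user-driven ones.
function describeApproval(part: ApprovalPart): string {
  const actor = part.isAutomatic ? 'automatically' : 'by the user';
  const verb = part.approved ? 'approved' : 'denied';
  return `Tool call ${verb} ${actor}${part.reason ? `: ${part.reason}` : ''}`;
}
```
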
## Example UI

<img width="528" height="713" alt="image"
src="https://github.com/user-attachments/assets/66a80d50-1ead-4865-a101-fb56cd257a3f"
/>

## Manual Verification

- [x] run e2e example http://localhost:3000/chat/tool-approval

## Related Issues

Builds on #14643

---------

Co-authored-by: vercel[bot] <35613825+vercel[bot]@users.noreply.github.com>
## Background

There was unnecessary data being passed through the core lifecycle
events that could be stripped away.

## Summary

- from the tool execution event: step number, messages, provider, and modelId
were removed
- from the step start event: timeout, headers, and stopWhen data were removed

## Manual Verification

na

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)
## Background

Tool approvals support a `reason` that gives more detail on why they were
approved or denied. However, this functionality is currently only
available for user approvals, not for automatic approvals.

## Summary

Add approval reason support to automatic tool approvals.

## Manual Verification

* [x] generateText
`examples/ai-functions/src/generate-text/openai/tool-approval.ts`
* [x] streamText
`examples/ai-functions/src/stream-text/openai/tool-approval.ts`
* [x] agent
`examples/ai-functions/src/agent/openai/generate-tool-approval.ts`
* [x] ui e2e example http://localhost:3000/chat/tool-approval

## Related Issues

Builds on #14643 and #14659
## Background

We were passing telemetry options/data to every core event, which was not
needed (at least not for the user-facing events).

## Summary

Changed it so that the telemetry data is passed when unifying user-facing
callbacks with the provider-facing ones, so that provider implementations
of the `Telemetry` interface still get all the information.

This is verified by some tests breaking in `ai` and no tests breaking in
`otel` / `devtools` package

## Manual Verification

verified by running the example
`examples/ai-functions/src/generate-text/anthropic/subagent-with-telemetry.ts`
- the functionID defined in this example is still observed in the final
trace

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)

## Future Work

- look into adding a `type: event-type` property to core-events
…ms (#14662)

# Preserve invalid tool call errors in workflow UI streams

WorkflowAgent currently assumes every tool call must end with a
workflow-generated tool result. That breaks the invalid-input path from
AI SDK core: an invalid call already emits `tool-input-error` and
`tool-output-error`, but workflow later synthesizes a fallback result
and writes it back as `tool-output-available`. The UI reducer treats
that as the final state and overwrites the original error.

This change teaches WorkflowAgent to keep invalid tool calls out of
workflow-side execution and success-style result synthesis. Valid
executable and provider-executed calls still flow through the existing
loop, but invalid calls are carried forward only as `error-text`
continuation results for the next model step. That preserves model-side
recovery behavior without emitting a synthetic `tool-result` chunk that
would mutate the UI into an `output-available` state. The final workflow
result now reports `toolResults` only for tools that actually executed.

The test coverage adds a regression around this lifecycle. It verifies
that an invalid call is not executed, that workflow does not emit a
synthetic `tool-result` write for it, that the iterator still receives
the error continuation result, and that the package returns no executed
tool result for the invalid call.

## Testing

- `pnpm --filter @ai-sdk/workflow test:node`
- `pnpm --filter @ai-sdk/workflow type-check`
## Background

When developing agents or agent frameworks on top of AI SDK, you often
want a single function that controls tool approvals. This enables you to
e.g. implement your own mechanism for saving user-defined
auto-approvals.

## Summary

* rename `ToolApprovalFunction` type to `SingleToolApprovalFunction`
* change `toolContext` argument to `context`
* add `GenericToolApprovalFunction`

## Example

```ts
const result = await generateText({
  model: openai('gpt-5.4-mini'),
  // context engineering required to make sure the model does not retry
  // the tool execution if it is not approved for a particular tool call:
  system:
    'When a tool call was not approved by the user, ' +
    'do not retry the tool call with the same input. ' +
    'Just say that the tool execution was not approved. ' +
    'You can call a denied tool call with a different input.',
  tools: { weather: weatherTool },
  toolApproval: ({ toolCall, tools, toolsContext, messages }) => {
    if (!toolCall.dynamic && toolCall.toolName === 'weather') {
      const locationLower = toolCall.input.location.toLowerCase();
      if (
        locationLower.includes('san francisco') ||
        locationLower === 'sf'
      ) {
        return 'approved';
      }

      if (locationLower.includes('new york') || locationLower === 'nyc') {
        return { type: 'denied', reason: 'blocked by policy' };
      }

      return 'user-approval';
    }

    return 'not-applicable';
  },
  messages,
  stopWhen: isStepCount(5),
});
```

## Manual Verification

- [x] generateText
`examples/ai-functions/src/generate-text/openai/tool-approval-generic.ts`

## Future Work

* allow returning `undefined` as alias for `not-applicable`

## Related Issues

Builds on #14643
## Background

Having to return the tool approval status `not-applicable` can lead to
unnecessary boilerplate. We should allow users to be more concise if
they want, since they can enforce explicit return values by restricting
the return types of their tool approval functions.

## Summary

Allow tool approval functions to return `undefined`. It is treated
similarly to `not-applicable`.

## Example

```ts
const agent = new ToolLoopAgent({
  model: openai('gpt-5.4-mini'),
  // context engineering required to make sure the model does not retry
  // the tool execution if it is not approved for a particular tool call:
  instructions:
    'When a tool call was not approved by the user, ' +
    'do not retry the tool call with the same input. ' +
    'Just say that the tool execution was not approved. ' +
    'You can call a denied tool call with a different input.',
  tools: { weather: weatherTool },
  toolApproval: ({ toolCall, tools, toolsContext, messages }) => {
    if (!toolCall.dynamic && toolCall.toolName === 'weather') {
      const locationLower = toolCall.input.location.toLowerCase();
      if (locationLower.includes('san francisco') || locationLower === 'sf') {
        return 'approved';
      }

      if (locationLower.includes('new york') || locationLower === 'nyc') {
        return { type: 'denied', reason: 'blocked by policy' };
      }

      return 'user-approval';
    }

    // no additional return needed
  },
});
```

## Manual Verification

- [x] run
`examples/ai-functions/src/agent/openai/generate-tool-approval-generic.ts`

## Related Issues

Builds on #14690 and #14643
## Background

Runtime context can hold additional typed information (agent state) that
can be useful in deciding whether to automatically approve tool
executions.

## Summary

* rename `context` to `toolContext` on `SingleToolApprovalFunction`
* add `runtimeContext` to `SingleToolApprovalFunction` and
`GenericToolApprovalFunction`

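A minimal sketch of a single-tool approval function that consults runtime context. The `RuntimeContext` shape and the `autoApprovedCommands` field are assumptions for illustration — only the `runtimeContext` argument and the status union come from this changelog:

```typescript
// Hypothetical agent state carried in runtime context.
type RuntimeContext = { autoApprovedCommands: Set<string> };

type ApprovalStatus = 'not-applicable' | 'approved' | 'denied' | 'user-approval';

// Sketch of a single-tool approval function for a shell tool:
// auto-approve commands the user has previously allowed,
// and fall back to interactive approval otherwise.
function approveShellCommand({
  input,
  runtimeContext,
}: {
  input: { command: string };
  runtimeContext: RuntimeContext;
}): ApprovalStatus {
  if (runtimeContext.autoApprovedCommands.has(input.command)) {
    return 'approved';
  }
  return 'user-approval';
}
```
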
## Manual Verification

- [x] run
`examples/ai-functions/src/agent/openai/generate-tool-approval-generic.ts`

## Related Issues

Builds on #14690 and #14643
## Background

Tool approval in AI SDK 7 is defined in the `toolApproval` property of
`generateText`, `streamText`, and `ToolLoopAgent`.

Different applications might need different approval strategies for the
same tools. It is the responsibility of the agent or app developer, not
the tool developer, to define approval strategies. Therefore the tool
execution approval mechanism was redesigned in AI SDK 7.

## Summary

Deprecate `needsApproval` on `Tool`.

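The direction of the migration can be sketched as follows: approval strategies move out of the `Tool` definition (`needsApproval`) and into an app-level `toolApproval` map keyed by tool name. The helper and map below are illustrative, not SDK API:

```typescript
type ApprovalStatus = 'not-applicable' | 'approved' | 'denied' | 'user-approval';

// App-level approval strategies, keyed by tool name.
// Previously this intent lived on each Tool as `needsApproval`.
const toolApproval: Record<string, () => ApprovalStatus> = {
  weather: () => 'user-approval', // was: needsApproval: true on the tool
  time: () => 'not-applicable',   // was: needsApproval: false / omitted
};

// Resolve the approval status for a tool call; tools without a
// configured strategy are treated as not requiring approval.
function resolveApproval(toolName: string): ApprovalStatus {
  return toolApproval[toolName]?.() ?? 'not-applicable';
}
```
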
## Future Work

* remove `needsApproval` on `Tool` in future AI SDK major versions
(earliest AI SDK 8).

## Related Issues

Builds on #14690 and #14643
## Background

We had to rename the event names to align with consistent nomenclature
across the codebase.

## Summary

Generate Text events
- OnStartEvent -> GenerateTextStartEvent
- OnStepStartEvent -> GenerateTextStepStartEvent
- OnChunkEvent -> ChunkEvent
- OnStepFinishEvent -> GenerateTextStepEndEvent
- OnFinishEvent -> GenerateTextEndEvent

Rerank Events
- RerankOnStartEvent -> RerankStartEvent
- RerankOnFinishEvent -> RerankEndEvent
- RerankStartEvent -> RerankingModelCallStartEvent
- RerankFinishEvent -> RerankingModelCallEndEvent

Embed Events
- EmbedOnStartEvent -> EmbedStartEvent
- EmbedOnFinishEvent -> EmbedEndEvent
- EmbedStartEvent -> EmbeddingModelCallStartEvent
- EmbedFinishEvent -> EmbeddingModelCallEndEvent

## Manual Verification

na

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)
## Background

As a follow-up to PRs #14654 and
#14606, we continued removing and
aligning the event data sent at each callback in the lifecycle.

## Summary

- `prompt` property removed and `StandardizedPrompt` used instead
- `stepNumber` removed and made to derive from `steps`
- `interface` -> `type` used
- properties that overlap with `languageModelCallOptions` were
stripped and made to use that type
 
## Manual Verification

na

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)
…s when top-level reasoning parameter is set (#14711)

## Background

When a user set the top-level `reasoning` parameter alongside a partial
`providerOptions.bedrock.reasoningConfig`, the provider was ignoring the
custom config entirely and using only the values derived from
`reasoning`. This made it impossible to set or override individual
fields (e.g. `display`, `budgetTokens`) while still benefiting from the
automatic effort/budget mapping.

## Summary

- `resolveBedrockReasoningConfig` now spreads
`bedrockOptions.reasoningConfig` on top of the derived fields, so
explicit user values win while anything omitted is still derived from
the top-level `reasoning` parameter.
- When the merged result ends up with `type: 'disabled'`, derived
`maxReasoningEffort`/`budgetTokens` are stripped to avoid emitting
conflicting API fields — mirroring the same guard in
`anthropic-messages-language-model`.

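The merge behavior described above can be sketched like this. The config type and function name are assumptions loosely following the field names in this description, not the provider's actual internals:

```typescript
// Hypothetical reasoning config shape for illustration.
type ReasoningConfig = {
  type?: 'enabled' | 'disabled';
  budgetTokens?: number;
  maxReasoningEffort?: string;
  display?: boolean;
};

// Sketch: explicit user values win over fields derived from the
// top-level `reasoning` parameter; when the merged result is disabled,
// derived budget/effort fields are stripped to avoid conflicting
// API fields.
function mergeReasoningConfig(
  derived: ReasoningConfig,
  userConfig: ReasoningConfig | undefined,
): ReasoningConfig {
  const merged: ReasoningConfig = { ...derived, ...userConfig };
  if (merged.type === 'disabled') {
    delete merged.maxReasoningEffort;
    delete merged.budgetTokens;
  }
  return merged;
}
```
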
## Manual Verification

Run the new examples added in this PR.

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)
…mbols and files (#14712)

## Background

The Anthropic provider had a redundant `messages` affix in several
internal symbol and file names (e.g. `AnthropicMessagesLanguageModel`,
`AnthropicMessagesModelId`, `anthropic-messages-language-model.ts`).
While the Anthropic API uses `/messages` as its root, there is no other
variant, so including it in the name is not helpful.

## Summary

- Rename files, types, and classes accordingly
- Maintain back compat for package-level exports by re-exporting as
deprecated aliases under the old name

## Manual Verification

N/A

## Checklist

- [ ] Tests have been added / updated (for bug fixes / features)
- [ ] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)
…15+ (#14731)

## Background

Three `streamText` tests behave differently in Node.js 24.15+, breaking
our ability to run them locally.

## Summary

Selectively disable the failing tests in Node 24.15+

## Future Work

Re-enable and fix the disabled tests.
## Background

The telemetry event hierarchy was changed as part of PR
#14614.

There was future work to define new event types that would limit the
boundaries of each event and keep scope clean (for example, the duration
of the `chat` span in the linked PR extends across tool execution when it
shouldn't).

## Summary

New event types `LanguageModelCallStart` and `LanguageModelCallEnd` were
added; they limit the scope of the event to fire only when a model is
called and to end before tool execution happens.

## Manual Verification

Haven't verified the shape yet.

## Checklist

- [x] Tests have been added / updated (for bug fixes / features)
- [x] Documentation has been added / updated (for bug fixes / features)
- [x] A _patch_ changeset for relevant packages has been added (for bug
fixes / features - run `pnpm changeset` in the project root)
- [x] I have reviewed this pull request (self-review)
… multi-turn requests (#14739)

Replacement of the PR #14729 since we
now need signed commits.

## Background

DeepSeek released two model families with **opposite** requirements for
`reasoning_content` in multi-turn chat completions:

- **`deepseek-reasoner` (R1)** — per the [DeepSeek
docs](https://api-docs.deepseek.com/guides/reasoning_model), callers
must **not** echo `reasoning_content` from previous assistant turns. The
current converter (introduced in #10785) correctly strips it via ``if
(index <= lastUserMessageIndex) break``.
- **`deepseek-v4` / `deepseek-v4-pro` (V4 thinking mode)** — the API
**requires** every assistant message to carry a `reasoning_content`
field. Multi-turn requests without it fail with:

``400 The `reasoning_content` in the thinking mode must be passed back
to the API.``

The current converter applies the R1 stripping rule to V4 too, which
breaks every V4 conversation after the first turn.

## Reproduction

```ts
import { deepseek } from '@ai-sdk/deepseek';
import { isStepCount, streamText } from 'ai';
import { printFullStream } from '../../lib/print-full-stream';
import { run } from '../../lib/run';
import { weatherTool } from '../../tools/weather-tool';

run(async () => {
  const model = deepseek('deepseek-v4-pro');

  console.log('\n=== TURN 1 (tool call) ===');
  const t1 = streamText({
    model,
    tools: { weather: weatherTool },
    stopWhen: isStepCount(3),
    messages: [
      { role: 'user', content: 'What is the weather in San Francisco?' },
    ],
  });
  await printFullStream({ result: t1 });

  const t1Messages = (await t1.response).messages;

  console.log('\n\n=== TURN 2 (replay + new user turn) ===');
  const t2 = streamText({
    model,
    tools: { weather: weatherTool },
    stopWhen: isStepCount(3),
    messages: [
      { role: 'user', content: 'What is the weather in San Francisco?' },
      ...t1Messages,
      { role: 'user', content: 'How about in New York?' },
    ],
  });
  await printFullStream({ result: t2 });
});
```

## Fix

In `convertToDeepSeekChatMessages`:

1. Add a `modelId` parameter (the function is internal — only callers
are the test file and `deepseek-chat-language-model.ts`).
2. Detect V4 with `modelId.includes('deepseek-v4')`.
3. For V4: skip the strip-on-old-turn `break` so prior reasoning is
preserved, and back-fill `reasoning_content: ''` when an assistant
message has no `reasoning` part at all.
4. For R1 / other models: behavior unchanged.

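The four steps above can be sketched roughly as follows. Message shapes are simplified and the function name is illustrative — this is not the actual converter code, only the R1-strip vs. V4-back-fill branching it describes:

```typescript
// Simplified assistant message shape for illustration.
type AssistantMessage = {
  role: 'assistant';
  content: string;
  reasoning_content?: string;
};

// Sketch of the per-model branching:
// - R1 (and other non-V4 models): strip reasoning from previous turns.
// - V4: preserve prior reasoning, and back-fill an empty
//   reasoning_content when an assistant turn has none.
function normalizeAssistantMessage(
  message: AssistantMessage,
  modelId: string,
  isOldTurn: boolean,
): AssistantMessage {
  const isV4 = modelId.includes('deepseek-v4');
  if (!isV4 && isOldTurn) {
    // R1 rule: do not echo reasoning_content from previous assistant turns
    return { role: message.role, content: message.content };
  }
  if (isV4 && message.reasoning_content === undefined) {
    // V4 rule: every assistant message must carry reasoning_content
    return { ...message, reasoning_content: '' };
  }
  return message;
}
```
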
## Test plan

- New unit tests in `convert-to-deepseek-chat-messages.test.ts`:
  - V4 with prior assistant turn → output preserves `reasoning_content`
- V4 with assistant turn lacking a reasoning part → output emits
`reasoning_content: ''`
- Existing R1/`deepseek-chat` tests continue to pass unchanged.
- All 29 tests in `@ai-sdk/deepseek` pass locally.
- Manual: a `streamText` multi-turn loop with `deepseek-v4-pro` no
longer 400s after turn 1.