
think: add beforeStep hook and TurnConfig.output passthrough #1394

Merged
threepointone merged 1 commit into main from think-before-step-and-output on Apr 26, 2026

Conversation


threepointone commented Apr 26, 2026

Summary

Resolves #1363 and #1383.

  • beforeStep(ctx: PrepareStepContext): StepConfig | void — new lifecycle hook wired to the AI SDK's streamText({ prepareStep }) so subclasses can make per-step decisions (force a tool on step 0, switch to a cheaper model after tool results land, trim tool-heavy messages on later steps). Use beforeTurn for turn-wide assembly and beforeStep when the decision depends on step number or previous step results.
  • TurnConfig.output — new optional field forwarded to streamText. Accepts the AI SDK's structured-output spec (Output.object({ schema }), Output.text()) so a single agent can keep tools enabled on intermediate turns and return schema-validated structured output on a designated turn — without losing tools at model construction. Combine with activeTools: [] for providers (e.g. workers-ai-provider) that strip tools when responseFormat: "json" is active.
  • New re-exports from @cloudflare/think: PrepareStepFunction, PrepareStepResult, PrepareStepContext, StepConfig.

Naming

PrepareStepContext (not StepPrepareContext) — matches the AI SDK's PrepareStepFunction / prepareStep and avoids a confusable collision with the existing StepContext (which remains the completed-step result passed to onStepFinish). StepConfig mirrors TurnConfig.

Subclass-only beforeStep (no extension dispatch)

Intentional. The prepareStep event surface includes a live LanguageModel instance which is not JSON-safe to snapshot, and a returned override could include the same — there's no useful "snapshot, override" contract for sandboxed extensions. All other extension hook subscriptions are unchanged.

AI SDK boundary limitations (documented in docs/think/lifecycle-hooks.md)

These are AI SDK constraints, not Think-imposed:

  • No abortSignal in PrepareStepContext.
  • output and maxSteps cannot be overridden per step — set those at the turn level via TurnConfig.
  • experimental_context is typed unknown; users narrow it themselves.
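Since output and maxSteps are turn-level only, a designated structured-output turn might be configured like the following sketch. This assumes the AI SDK's Output helpers and a zod schema; the schema fields and maxSteps value are illustrative, not from this PR:

```typescript
import { Output } from "ai";
import { z } from "zod";

// Sketch of a turn-level config for a designated structured-output turn.
// `output` and `maxSteps` live here because they cannot be overridden per
// step. activeTools: [] works around providers (e.g. workers-ai-provider)
// that strip tools when responseFormat: "json" is active.
const turnConfig = {
  maxSteps: 1,
  activeTools: [],
  output: Output.object({
    schema: z.object({ answer: z.string(), confidence: z.number() }),
  }),
};
```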

beforeStep returning void/undefined/null is normalized to {} (defer to top-level settings) so subclass type-violations don't trip the AI SDK.

Test plan

  • tsc -p tsconfig.json --noEmit (packages/think) — clean
  • tsc -p src/tests/tsconfig.json --noEmit (packages/think tests) — clean
  • tsc --noEmit in examples/assistant — clean
  • npm test in packages/think — 259/259 passing (5 new this PR, 254 baseline preserved)

New regression coverage:

  • beforeStep receives the prepareStep context before each step
  • beforeStep can override the model for a step
  • beforeStep async returns are awaited before the step continues
  • beforeStep fires once per step across a tool-call loop with previousStepCount / previousToolResultCount accumulating across steps (verified via ThinkToolsTestAgent's tool-call → answer flow)
  • TurnConfig.output is accepted and forwarded to streamText

Made-with: Cursor


changeset-bot Bot commented Apr 26, 2026

🦋 Changeset detected

Latest commit: 6a0ac58

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package:
  • @cloudflare/think — Patch



devin-ai-integration (bot) left a comment


✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no potential bugs to report.

View in Devin Review to see 4 additional findings.


threepointone merged commit a0a0d17 into main Apr 26, 2026
1 check failed
threepointone deleted the think-before-step-and-output branch April 26, 2026 20:02
github-actions (bot) mentioned this pull request Apr 26, 2026
threepointone added a commit that referenced this pull request Apr 26, 2026:
Add a new section summarizing four post-#1384 maintenance PRs (#1393–#1396) and their effects on the multi-session assistant plan. Notes include:
  • facet bootstrap via explicit FacetStartupOptions.id (#1393), which removes the storage write/setName shim and makes MyAssistant.name resolve natively
  • the new beforeStep hook and TurnConfig.output passthrough (#1394)
  • SubmitConcurrencyController moved into agents/chat (#1395)
  • message-reconciler moved into agents/chat, with Think now reconciling incoming messages (#1396)
Clarifies that the chat-shared-layer has been incrementally hoisted into agents/chat and highlights the lack of a vitest+workers harness for examples/assistant, recommending a minimal test harness before hoisting useAgentChat.


Development

Successfully merging this pull request may close these issues.

I can't open a PR, so here is my commit implementing beforeStep
