Conversation

@jpkraemer
Contributor

@jpkraemer jpkraemer commented Jan 7, 2026

🎯 Changes

Fix so that a part with the thought flag set is output as a thinking chunk instead of content. Fixes: #209

βœ… Checklist

  • I have followed the steps in the Contributing guide.
  • I have tested this code locally with pnpm run test:pr.

πŸš€ Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).

Summary by CodeRabbit

  • Bug Fixes

    • Gemini Text streaming now separates "thinking" chunks from assistant content and avoids spurious thinking updates.
  • New Features

    • Advanced thinking options exposed (thinking level and optional budget) for Gemini models.
    • Provider config surface simplified for easier option use.
  • Examples / UI

    • Default Gemini model in the chat example updated to "Gemini 2.5 - Flash".
    • Example chat demonstrates enabling thinking output with a thinking budget.
  • Chores

    • Changesets added to mark patch releases.


@coderabbitai
Contributor

coderabbitai bot commented Jan 7, 2026

πŸ“ Walkthrough

Walkthrough

The Gemini text adapter's streaming now emits explicit "thinking" chunks for thought parts, and the model/provider option shapes were flattened and expanded (GeminiGenerationConfigOptions β†’ GeminiCommonConfigOptions; advanced thinking options added). Examples and model metadata were updated, a minor stream-processor guard was tightened, and changesets were added.

Changes

Cohort / File(s) Summary
Release Management
.changeset/*
.changeset/flat-buttons-shave.md, .changeset/goofy-cities-push.md
Add changesets marking patch releases for @tanstack/ai-gemini (and @tanstack/ai) describing fixes to Gemini thought/chunk processing.
Gemini Text Adapter
packages/typescript/ai-gemini/src/adapters/text.ts
Streaming logic: parts with part.thought now yield a thinking chunk (type 'thinking') instead of being appended to the accumulated assistant content; the options mapping was updated to accept and merge modelOptions (including thinkingConfig) into the request config (see the consumer-side sketch after this table).
Provider Types & Model Metadata
packages/typescript/ai-gemini/src/text/text-provider-options.ts, packages/typescript/ai-gemini/src/model-meta.ts
Public API/type surface reorganized: GeminiGenerationConfigOptions renamed/flattened to GeminiCommonConfigOptions; added GeminiThinkingAdvancedOptions; updated ExternalTextProviderOptions and model metadata mappings to include common config and advanced thinking options.
Tests
packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
Tests updated to the flattened provider options shape (fields moved from generationConfig.* to top-level provider options) and to reflect thinkingConfig location changes.
Examples
examples/ts-react-chat/src/routes/api.tanchat.ts, examples/ts-react-chat/src/lib/model-selection.ts
Example model selection updated (Gemini model β†’ gemini-2.5-flash); example chat options now pass modelOptions (thinkingConfig: { includeThoughts: true, thinkingBudget: 100 }) for Gemini and modelOptions: {} for Grok.
Stream Processor
packages/typescript/ai/src/activities/chat/stream/processor.ts
Tightened delta check from chunk.delta !== '' to chunk.delta && chunk.delta !== '' to avoid treating undefined/null delta as valid.
Misc
packages/typescript/ai-openai/live-tests/tool-test-empty-object.ts
Removed an extraneous blank console.log() in a live test.
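
For downstream consumers, the practical effect of the streaming change above is that thought parts now arrive as their own chunk type. The TypeScript sketch below shows one way a consumer might route the two chunk kinds; only the 'thinking' and 'content' chunk types come from the summary above, while the StreamChunk type and the consume helper are illustrative assumptions rather than the library's actual API.

// Illustrative consumer-side handling; StreamChunk and consume are assumptions.
type StreamChunk =
  | { type: 'thinking'; delta: string; content: string }
  | { type: 'content'; delta: string; content: string }

async function consume(stream: AsyncIterable<StreamChunk>) {
  let thinking = ''
  let answer = ''
  for await (const chunk of stream) {
    if (chunk.type === 'thinking') {
      thinking += chunk.delta // thought parts no longer end up in the answer
    } else if (chunk.type === 'content') {
      answer = chunk.content // accumulated assistant text
    }
  }
  return { thinking, answer }
}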

Sequence Diagram(s)

sequenceDiagram
  participant Client
  participant Adapter as Gemini Adapter
  participant Provider as Gemini API/Stream
  Client->>Adapter: send chat request (with modelOptions)
  Adapter->>Provider: open streaming request (config includes modelOptions.thinkingConfig)
  Provider-->>Adapter: stream part (may include part.text, part.thought)
  alt part.thought present
    Adapter-->>Client: emit { type: "thinking", text: part.text }
  else content only
    Adapter-->>Client: accumulate and emit { type: "content", delta, content }
  end
  Provider-->>Adapter: stream end
  Adapter-->>Client: emit final content/end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • jherr

Poem

🐰 I hopped in code at break of dawn,

split my thoughts from words till dawn,
thinking now hops as its own song,
content stays steady, clean, and strong.
πŸ₯•βœ¨

πŸš₯ Pre-merge checks | βœ… 3
βœ… Passed checks (3 passed)
  • Title check (βœ… Passed): The title clearly and specifically describes the main change: fixing the output of thinking chunks for the Gemini Text adapter, which aligns with the core functionality modified in the changeset.
  • Description check (βœ… Passed): The description follows the template structure with completed sections for changes, checklist items, and release impact, clearly explaining the fix and referencing the related issue.
  • Docstring Coverage (βœ… Passed): No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



πŸ“œ Recent review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between 1613d8d and 1247138.

πŸ“’ Files selected for processing (2)
  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
  • packages/typescript/ai-openai/live-tests/tool-test-empty-object.ts
πŸ’€ Files with no reviewable changes (1)
  • packages/typescript/ai-openai/live-tests/tool-test-empty-object.ts
🧰 Additional context used
πŸ““ Path-based instructions (3)
**/*.{ts,tsx}

πŸ“„ CodeRabbit inference engine (CLAUDE.md)

**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from /adapters subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with toolDefinition() and Zod schema inference
Implement isomorphic tool system using toolDefinition() with .server() and .client() implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Files:

  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
**/*.test.ts

πŸ“„ CodeRabbit inference engine (CLAUDE.md)

Write unit tests using Vitest alongside source files with .test.ts naming convention

Files:

  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
**/*.{ts,tsx,js,jsx}

πŸ“„ CodeRabbit inference engine (CLAUDE.md)

Use camelCase for function and variable names throughout the codebase

Files:

  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
🧠 Learnings (5)
πŸ“š Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Applied to files:

  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
πŸ“š Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Applied to files:

  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
πŸ“š Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size

Applied to files:

  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
πŸ“š Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters

Applied to files:

  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
πŸ“š Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety

Applied to files:

  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts


@nx-cloud

nx-cloud bot commented Jan 8, 2026

View your CI Pipeline Execution β†— for commit 1247138

  • nx affected --targets=test:sherif,test:knip,tes... (βœ… Succeeded in 2m 40s, View β†—)
  • nx run-many --targets=build --exclude=examples/** (βœ… Succeeded in 39s, View β†—)

☁️ Nx Cloud last updated this comment at 2026-01-08 11:55:15 UTC

@pkg-pr-new

pkg-pr-new bot commented Jan 8, 2026

Open in StackBlitz

@tanstack/ai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai@210

@tanstack/ai-anthropic

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-anthropic@210

@tanstack/ai-client

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-client@210

@tanstack/ai-devtools-core

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-devtools-core@210

@tanstack/ai-gemini

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-gemini@210

@tanstack/ai-grok

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-grok@210

@tanstack/ai-ollama

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-ollama@210

@tanstack/ai-openai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-openai@210

@tanstack/ai-preact

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-preact@210

@tanstack/ai-react

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react@210

@tanstack/ai-react-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react-ui@210

@tanstack/ai-solid

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid@210

@tanstack/ai-solid-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid-ui@210

@tanstack/ai-svelte

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-svelte@210

@tanstack/ai-vue

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue@210

@tanstack/ai-vue-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue-ui@210

@tanstack/react-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/react-ai-devtools@210

@tanstack/solid-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/solid-ai-devtools@210

commit: 1247138

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
.changeset/goofy-cities-push.md (1)

1-6: Changeset structure looks good; minor wording suggestion.

The changeset correctly declares patch updates for both affected packages. The description could use slightly more formal wording for consistency with professional changelog entries.

✨ Optional wording improvement
-fixed an issue with gemini and thought chunks processing
+Resolves an issue with Gemini and thought chunks processing
examples/ts-react-chat/src/routes/api.tanchat.ts (1)

100-104: Empty modelOptions is unnecessary for Grok adapter.

The modelOptions: {} on line 103 has no effect. Consider removing it to reduce noise, or this may be intentional to maintain a consistent structure across adapters.

♻️ Optional: Remove empty modelOptions
         grok: () =>
           createChatOptions({
             adapter: grokText((model || 'grok-3') as 'grok-3'),
-            modelOptions: {},
           }),
πŸ“œ Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between ff58494 and 1613d8d.

πŸ“’ Files selected for processing (7)
  • .changeset/goofy-cities-push.md
  • examples/ts-react-chat/src/lib/model-selection.ts
  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-gemini/src/adapters/text.ts
  • packages/typescript/ai-gemini/src/model-meta.ts
  • packages/typescript/ai-gemini/src/text/text-provider-options.ts
  • packages/typescript/ai/src/activities/chat/stream/processor.ts
🧰 Additional context used
πŸ““ Path-based instructions (5)
**/*.{ts,tsx}

πŸ“„ CodeRabbit inference engine (CLAUDE.md)

**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from /adapters subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with toolDefinition() and Zod schema inference
Implement isomorphic tool system using toolDefinition() with .server() and .client() implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Files:

  • packages/typescript/ai/src/activities/chat/stream/processor.ts
  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • examples/ts-react-chat/src/lib/model-selection.ts
  • packages/typescript/ai-gemini/src/adapters/text.ts
  • packages/typescript/ai-gemini/src/text/text-provider-options.ts
  • packages/typescript/ai-gemini/src/model-meta.ts
**/*.{ts,tsx,js,jsx}

πŸ“„ CodeRabbit inference engine (CLAUDE.md)

Use camelCase for function and variable names throughout the codebase

Files:

  • packages/typescript/ai/src/activities/chat/stream/processor.ts
  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • examples/ts-react-chat/src/lib/model-selection.ts
  • packages/typescript/ai-gemini/src/adapters/text.ts
  • packages/typescript/ai-gemini/src/text/text-provider-options.ts
  • packages/typescript/ai-gemini/src/model-meta.ts
examples/**

πŸ“„ CodeRabbit inference engine (CLAUDE.md)

Examples are not built by Nx and should be run independently from their directories with pnpm dev or pnpm install && pnpm dev

Files:

  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • examples/ts-react-chat/src/lib/model-selection.ts
packages/typescript/*/src/adapters/*.ts

πŸ“„ CodeRabbit inference engine (CLAUDE.md)

Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Files:

  • packages/typescript/ai-gemini/src/adapters/text.ts
packages/typescript/*/src/model-meta.ts

πŸ“„ CodeRabbit inference engine (CLAUDE.md)

Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Files:

  • packages/typescript/ai-gemini/src/model-meta.ts
🧠 Learnings (7)
πŸ“š Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Applied to files:

  • packages/typescript/ai/src/activities/chat/stream/processor.ts
  • packages/typescript/ai-gemini/src/adapters/text.ts
πŸ“š Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Applied to files:

  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • examples/ts-react-chat/src/lib/model-selection.ts
  • packages/typescript/ai-gemini/src/adapters/text.ts
  • packages/typescript/ai-gemini/src/text/text-provider-options.ts
  • packages/typescript/ai-gemini/src/model-meta.ts
πŸ“š Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Applied to files:

  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-gemini/src/adapters/text.ts
  • packages/typescript/ai-gemini/src/text/text-provider-options.ts
  • packages/typescript/ai-gemini/src/model-meta.ts
πŸ“š Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size

Applied to files:

  • examples/ts-react-chat/src/routes/api.tanchat.ts
  • packages/typescript/ai-gemini/src/adapters/text.ts
  • packages/typescript/ai-gemini/src/text/text-provider-options.ts
πŸ“š Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters

Applied to files:

  • packages/typescript/ai-gemini/src/adapters/text.ts
  • packages/typescript/ai-gemini/src/text/text-provider-options.ts
πŸ“š Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety

Applied to files:

  • packages/typescript/ai-gemini/src/adapters/text.ts
  • packages/typescript/ai-gemini/src/text/text-provider-options.ts
  • packages/typescript/ai-gemini/src/model-meta.ts
πŸ“š Learning: 2025-12-27T20:22:51.232Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:51.232Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.

Applied to files:

  • packages/typescript/ai-gemini/src/model-meta.ts
🧬 Code graph analysis (4)
examples/ts-react-chat/src/routes/api.tanchat.ts (1)
packages/typescript/ai-grok/src/adapters/text.ts (1)
  • grokText (500-506)
packages/typescript/ai-gemini/src/adapters/text.ts (1)
packages/typescript/ai/src/types.ts (1)
  • TextOptions (565-650)
packages/typescript/ai-gemini/src/text/text-provider-options.ts (3)
packages/typescript/ai-anthropic/src/text/text-provider-options.ts (1)
  • ExternalTextProviderOptions (111-118)
packages/typescript/ai-openai/src/text/text-provider-options.ts (1)
  • ExternalTextProviderOptions (237-242)
packages/typescript/ai-gemini/src/index.ts (2)
  • GeminiThinkingOptions (73-73)
  • GeminiStructuredOutputOptions (72-72)
packages/typescript/ai-gemini/src/model-meta.ts (1)
packages/typescript/ai-gemini/src/text/text-provider-options.ts (5)
  • GeminiCommonConfigOptions (25-165)
  • GeminiCachedContentOptions (167-172)
  • GeminiStructuredOutputOptions (174-217)
  • GeminiThinkingOptions (219-234)
  • GeminiThinkingAdvancedOptions (236-246)
πŸͺ› LanguageTool
.changeset/goofy-cities-push.md

[style] ~5-~5: Consider using a different verb for a more formal wording.
Context: ...mini': patch '@tanstack/ai': patch --- fixed an issue with gemini and thought chunks...

(FIX_RESOLVE)

πŸ”‡ Additional comments (11)
packages/typescript/ai/src/activities/chat/stream/processor.ts (1)

735-745: LGTM! Correct fix for undefined delta handling.

The added truthy check prevents an undefined or null delta from being incorrectly appended to thinking content. Without this guard, chunk.delta !== '' would evaluate to true for undefined, causing the fallback logic on lines 737-744 to be bypassed incorrectly.
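
A minimal sketch of the tightened check, assuming simplified surrounding logic (the real processor's accumulation and fallback code is more involved):

// Sketch only: the guard rejects undefined/null/empty deltas before appending.
function appendThinking(
  accumulated: string,
  chunk: { delta?: string | null; content?: string },
): string {
  if (chunk.delta && chunk.delta !== '') {
    // A real incremental delta: append it.
    return accumulated + chunk.delta
  }
  // Undefined, null, or empty delta: fall back to the chunk's full content if present.
  return chunk.content ?? accumulated
}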

examples/ts-react-chat/src/lib/model-selection.ts (1)

38-42: LGTM! Model updated to support thinking features.

The switch to gemini-2.5-flash aligns with the PR's thinking support enhancements. This model is properly defined in model-meta.ts with thinking capabilities.

packages/typescript/ai-gemini/src/text/text-provider-options.ts (2)

219-246: Well-structured composition of thinking options.

The separation of GeminiThinkingOptions (for includeThoughts and thinkingBudget) and GeminiThinkingAdvancedOptions (for thinkingLevel) enables clean per-model type safety. The intersection in ExternalTextProviderOptions correctly merges both thinkingConfig shapes into a unified interface.


248-254: ExternalTextProviderOptions composition looks correct.

The union properly includes GeminiCommonConfigOptions, GeminiThinkingOptions, GeminiThinkingAdvancedOptions, and GeminiStructuredOutputOptions, aligning with the pattern used by other providers (OpenAI, Anthropic) per the relevant code snippets.
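
A hedged sketch of how these option types might compose; the field names below are examples only, and whether the real ExternalTextProviderOptions is modeled as an intersection or a merged interface is an assumption here:

// Placeholder shapes, not the real type surface.
interface GeminiCommonConfigOptions {
  topK?: number
  stopSequences?: Array<string>
  // ...other flattened generation-config fields
}
interface GeminiThinkingOptions {
  thinkingConfig?: { includeThoughts?: boolean; thinkingBudget?: number }
}
interface GeminiThinkingAdvancedOptions {
  thinkingConfig?: { thinkingLevel?: string }
}
interface GeminiStructuredOutputOptions {
  responseSchema?: unknown
  responseJsonSchema?: unknown
}

// Combination of all four option groups, per the comments above; intersecting the
// two thinkingConfig shapes merges them into a single unified object type.
type ExternalTextProviderOptions = GeminiCommonConfigOptions &
  GeminiThinkingOptions &
  GeminiThinkingAdvancedOptions &
  GeminiStructuredOutputOptions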

packages/typescript/ai-gemini/src/model-meta.ts (4)

1-9: LGTM! Import updates align with renamed types.

The imports correctly reflect the renaming of GeminiGenerationConfigOptions to GeminiCommonConfigOptions and the addition of GeminiThinkingAdvancedOptions.


78-86: Correct per-model type composition for Gemini 3 Pro.

The provider options correctly include both GeminiThinkingOptions and GeminiThinkingAdvancedOptions, enabling both basic thinking configuration and advanced thinking level settings for this model.


912-969: GeminiChatModelProviderOptionsByName correctly mirrors model definitions.

The type map properly reflects which models support GeminiThinkingAdvancedOptions (Gemini 3 models) versus those with only GeminiThinkingOptions (Gemini 2.5 models). This maintains compile-time type safety for per-model configurations. Based on learnings, this aligns with the pattern for per-model type safety.


189-196: Exclusion of GeminiThinkingAdvancedOptions for Gemini 2.5 Pro is correct.

Gemini 2.5 Pro does not support the thinkingLevel parameter defined in GeminiThinkingAdvancedOptions. It uses thinkingBudget instead (supported by GeminiThinkingOptions), with a range of 128–32,768 tokens. Gemini 3 models support both parameters, hence the inclusion of both option types. The current type configuration is intentional and accurate.
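
Sketch of the per-model map this implies; the model keys, the import path, and the exact intersections below are illustrative assumptions, not the contents of the real model-meta.ts. Only the pattern (Gemini 3 models get advanced thinking options, Gemini 2.5 models do not) comes from the review notes above.

// Illustrative only; reuses the real exported option types.
import type {
  GeminiCommonConfigOptions,
  GeminiThinkingOptions,
  GeminiThinkingAdvancedOptions,
} from './text/text-provider-options' // path assumed relative to model-meta.ts

type GeminiChatModelProviderOptionsByNameSketch = {
  // Gemini 3 models: basic thinking (thinkingBudget) plus advanced (thinkingLevel)
  'gemini-3-pro-preview': GeminiCommonConfigOptions &
    GeminiThinkingOptions &
    GeminiThinkingAdvancedOptions
  // Gemini 2.5 models: thinkingBudget only, no thinkingLevel
  'gemini-2.5-pro': GeminiCommonConfigOptions & GeminiThinkingOptions
  'gemini-2.5-flash': GeminiCommonConfigOptions & GeminiThinkingOptions
}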

examples/ts-react-chat/src/routes/api.tanchat.ts (1)

93-98: Thinking configuration correctly enabled for Gemini adapter.

The modelOptions with thinkingConfig: { includeThoughts: true, thinkingBudget: 100 } properly demonstrates the new thinking support. Note that thinkingBudget: 100 is relatively lowβ€”this may be intentional for demo purposes to keep thinking concise.
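
For reference, a hedged sketch of what the Gemini branch of the example might look like; the geminiText factory name, the import paths, and the createChatOptions signature are assumptions by analogy with the Grok branch quoted elsewhere in this review. Only the thinkingConfig values come from the diff under review.

// Assumed imports and factory name; sketch of the example's Gemini branch.
import { createChatOptions } from '@tanstack/ai'
import { geminiText } from '@tanstack/ai-gemini/adapters'

const geminiChatOptions = createChatOptions({
  adapter: geminiText('gemini-2.5-flash'),
  modelOptions: {
    thinkingConfig: {
      includeThoughts: true, // stream thought parts as thinking chunks
      thinkingBudget: 100, // small budget, per the demo note above
    },
  },
})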

packages/typescript/ai-gemini/src/adapters/text.ts (2)

216-236: Core fix: Correctly emits thinking chunks for "thought" parts.

This is the main fix for issue #209. Parts with part.thought === true now emit type: 'thinking' chunks instead of being accumulated into content. The chunk structure includes both content and delta set to the same value (part.text), which is appropriate since thinking parts arrive as complete units rather than accumulated.
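
A simplified sketch of the emission branch described above; field and variable names are assumptions, and the adapter's actual code also handles tool calls and other part kinds:

// Sketch only: thought parts yield a 'thinking' chunk, plain text accumulates as 'content'.
async function* emitChunks(parts: Array<{ text?: string; thought?: boolean }>) {
  let accumulated = ''
  for (const part of parts) {
    if (part.thought && part.text) {
      // content and delta carry the same value, since thought parts arrive whole
      yield { type: 'thinking' as const, content: part.text, delta: part.text }
    } else if (part.text) {
      accumulated += part.text
      yield { type: 'content' as const, delta: part.text, content: accumulated }
    }
  }
}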


491-519: Verify modelOpts spreading doesn't leak unintended properties into config.

Line 500 spreads ...modelOpts directly into the config object. If modelOptions contains properties that aren't valid Gemini API config fields (e.g., nested thinkingConfig before it's processed), this could cause issues.

The code extracts thinkingConfig on line 495, but then spreads the entire modelOpts (including the original thinkingConfig) on line 500, only to override it with the processed version on lines 504-512. This results in redundant/overwritten properties.

Consider extracting only the relevant properties or omitting thinkingConfig from the spread:

♻️ Suggested improvement
 private mapCommonOptionsToGemini(
   options: TextOptions<GeminiTextProviderOptions>,
 ) {
   const modelOpts = options.modelOptions
   const thinkingConfig = modelOpts?.thinkingConfig
+  const { thinkingConfig: _, ...restModelOpts } = modelOpts ?? {}
   const requestOptions: GenerateContentParameters = {
     model: options.model,
     contents: this.formatMessages(options.messages),
     config: {
-      ...modelOpts,
+      ...restModelOpts,
       temperature: options.temperature,
       topP: options.topP,
       maxOutputTokens: options.maxTokens,
       thinkingConfig: thinkingConfig
         ? {
             ...thinkingConfig,
             thinkingLevel: thinkingConfig.thinkingLevel
               ? (thinkingConfig.thinkingLevel as ThinkingLevel)
               : undefined,
           }
         : undefined,
       systemInstruction: options.systemPrompts?.join('\n'),
       tools: convertToolsToProviderFormat(options.tools),
     },
   }

   return requestOptions
 }

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/typescript/ai-gemini/tests/gemini-adapter.test.ts (1)

172-251: Critical: Structural mismatch between providerOptions definition and test expectations.

The providerOptions object is defined with a nested generationConfig structure (lines 174-202), but the test expectations (lines 230-251) read fields directly from the top level of providerOptions. This mismatch will cause the test to fail because fields like providerOptions?.stopSequences, providerOptions?.topK, etc., are undefinedβ€”they're actually nested at providerOptions.generationConfig.stopSequences, providerOptions.generationConfig.topK, etc.

Based on the PR's flattening of options (GeminiGenerationConfigOptions β†’ GeminiCommonConfigOptions), the providerOptions object definition should be flattened to match the expectations.

πŸ”§ Proposed fix to flatten the providerOptions definition
     const providerOptions: GeminiTextProviderOptions = {
       safetySettings,
-      generationConfig: {
-        stopSequences: ['<done>', '###'],
-        responseMimeType: 'application/json',
-        responseSchema,
-        responseJsonSchema,
-        responseModalities: ['TEXT'],
-        candidateCount: 2,
-        topK: 6,
-        seed: 7,
-        presencePenalty: 0.2,
-        frequencyPenalty: 0.4,
-        responseLogprobs: true,
-        logprobs: 3,
-        enableEnhancedCivicAnswers: true,
-        speechConfig: {
-          voiceConfig: {
-            prebuiltVoiceConfig: {
-              voiceName: 'Studio',
-            },
+      stopSequences: ['<done>', '###'],
+      responseMimeType: 'application/json',
+      responseSchema,
+      responseJsonSchema,
+      responseModalities: ['TEXT'],
+      candidateCount: 2,
+      topK: 6,
+      seed: 7,
+      presencePenalty: 0.2,
+      frequencyPenalty: 0.4,
+      responseLogprobs: true,
+      logprobs: 3,
+      enableEnhancedCivicAnswers: true,
+      speechConfig: {
+        voiceConfig: {
+          prebuiltVoiceConfig: {
+            voiceName: 'Studio',
           },
         },
-        thinkingConfig: {
-          includeThoughts: true,
-          thinkingBudget: 128,
-        },
-        imageConfig: {
-          aspectRatio: '1:1',
-        },
       },
+      thinkingConfig: {
+        includeThoughts: true,
+        thinkingBudget: 128,
+      },
+      imageConfig: {
+        aspectRatio: '1:1',
+      },
       cachedContent: 'cachedContents/weather-context',
     } as const
πŸ“œ Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

πŸ“₯ Commits

Reviewing files that changed from the base of the PR and between 1613d8d and 1247138.

πŸ“’ Files selected for processing (2)
  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
  • packages/typescript/ai-openai/live-tests/tool-test-empty-object.ts
πŸ’€ Files with no reviewable changes (1)
  • packages/typescript/ai-openai/live-tests/tool-test-empty-object.ts
🧰 Additional context used
πŸ““ Path-based instructions (3)
**/*.{ts,tsx}

πŸ“„ CodeRabbit inference engine (CLAUDE.md)

**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from /adapters subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with toolDefinition() and Zod schema inference
Implement isomorphic tool system using toolDefinition() with .server() and .client() implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses

Files:

  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
**/*.test.ts

πŸ“„ CodeRabbit inference engine (CLAUDE.md)

Write unit tests using Vitest alongside source files with .test.ts naming convention

Files:

  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
**/*.{ts,tsx,js,jsx}

πŸ“„ CodeRabbit inference engine (CLAUDE.md)

Use camelCase for function and variable names throughout the codebase

Files:

  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
🧠 Learnings (5)
πŸ“š Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking

Applied to files:

  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
πŸ“š Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety

Applied to files:

  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
πŸ“š Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size

Applied to files:

  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
πŸ“š Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters

Applied to files:

  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
πŸ“š Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety

Applied to files:

  • packages/typescript/ai-gemini/tests/gemini-adapter.test.ts

@AlemTuzlak AlemTuzlak merged commit 7573619 into TanStack:main Jan 8, 2026
6 checks passed
@github-actions github-actions bot mentioned this pull request Jan 8, 2026


Development

Successfully merging this pull request may close these issues.

Gemini Thinking output is added to content instead of thinking chunks.

2 participants