Correctly output thinking for Gemini Text adapter #210
Conversation
Walkthrough

Gemini text adapter streaming now emits explicit "thinking" chunks for thought parts, and model/provider option shapes were flattened/expanded (GeminiGenerationConfigOptions → GeminiCommonConfigOptions; added advanced thinking options). Examples and model metadata updated; minor stream processor guard tightened and changesets added.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Adapter as Gemini Adapter
    participant Provider as Gemini API/Stream
    Client->>Adapter: send chat request (with modelOptions)
    Adapter->>Provider: open streaming request (config includes modelOptions.thinkingConfig)
    Provider-->>Adapter: stream part (may include part.text, part.thought)
    alt part.thought present
        Adapter-->>Client: emit { type: "thinking", text: part.thought }
    else content only
        Adapter-->>Client: accumulate and emit { type: "content", delta, content }
    end
    Provider-->>Adapter: stream end
    Adapter-->>Client: emit final content/end
```
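The chunk flow above can be consumed roughly as sketched below. This is a hedged illustration only: the chunk type names follow the diagram and review comments, but the helper and exact field shapes are assumptions, not the actual @tanstack/ai client API.

```ts
// Illustrative chunk shapes; field names follow this review, not a published API.
type StreamChunk =
  | { type: 'thinking'; content: string; delta: string }
  | { type: 'content'; content: string; delta: string }

// Hypothetical consumer: separates streamed "thinking" text from answer text.
async function collectStream(chunks: AsyncIterable<StreamChunk>) {
  let thinking = ''
  let answer = ''
  for await (const chunk of chunks) {
    if (chunk.type === 'thinking') {
      // Thought parts arrive as complete units, so delta and content match.
      thinking += chunk.delta
    } else {
      // Regular content accumulates; `content` carries the running total.
      answer = chunk.content
    }
  }
  return { thinking, answer }
}
```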
Estimated code review effort: 4 (Complex) | ~45 minutes

Possibly related PRs
Suggested reviewers
Poem
Pre-merge checks | Passed checks (3 passed)
Tip: You can configure your own custom pre-merge checks in the settings.
Finishing touches
Recent review details: Configuration used: defaults | Review profile: CHILL | Plan: Pro
Files selected for processing (2)
Files with no reviewable changes (1)
Additional context used
Path-based instructions (3): **/*.{ts,tsx}, **/*.test.ts, **/*.{ts,tsx,js,jsx} (CodeRabbit inference engine, CLAUDE.md)
Learnings (5): 2025-12-13T17:09:09.794Z
View your CI Pipeline Execution for commit 1247138
Nx Cloud last updated this comment at
@tanstack/ai
@tanstack/ai-anthropic
@tanstack/ai-client
@tanstack/ai-devtools-core
@tanstack/ai-gemini
@tanstack/ai-grok
@tanstack/ai-ollama
@tanstack/ai-openai
@tanstack/ai-preact
@tanstack/ai-react
@tanstack/ai-react-ui
@tanstack/ai-solid
@tanstack/ai-solid-ui
@tanstack/ai-svelte
@tanstack/ai-vue
@tanstack/ai-vue-ui
@tanstack/react-ai-devtools
@tanstack/solid-ai-devtools
commit:
Actionable comments posted: 0
Nitpick comments (2)
.changeset/goofy-cities-push.md (1)
1-6: Changeset structure looks good; minor wording suggestion.

The changeset correctly declares patch updates for both affected packages. The description could use slightly more formal wording for consistency with professional changelog entries.

Optional wording improvement:

```diff
-fixed an issue with gemini and thought chunks processing
+Resolves an issue with Gemini and thought chunks processing
```

examples/ts-react-chat/src/routes/api.tanchat.ts (1)

100-104: Empty `modelOptions` is unnecessary for the Grok adapter.

The `modelOptions: {}` on line 103 has no effect. Consider removing it to reduce noise, or this may be intentional to maintain a consistent structure across adapters.

Optional: Remove empty modelOptions

```diff
 grok: () =>
   createChatOptions({
     adapter: grokText((model || 'grok-3') as 'grok-3'),
-    modelOptions: {},
   }),
```
Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Files selected for processing (7)

- .changeset/goofy-cities-push.md
- examples/ts-react-chat/src/lib/model-selection.ts
- examples/ts-react-chat/src/routes/api.tanchat.ts
- packages/typescript/ai-gemini/src/adapters/text.ts
- packages/typescript/ai-gemini/src/model-meta.ts
- packages/typescript/ai-gemini/src/text/text-provider-options.ts
- packages/typescript/ai/src/activities/chat/stream/processor.ts
Additional context used

Path-based instructions (5)
**/*.{ts,tsx}
CodeRabbit inference engine (CLAUDE.md)
**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from the `/adapters` subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with `toolDefinition()` and Zod schema inference
Implement isomorphic tool system using `toolDefinition()` with `.server()` and `.client()` implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses
Files:
- packages/typescript/ai/src/activities/chat/stream/processor.ts
- examples/ts-react-chat/src/routes/api.tanchat.ts
- examples/ts-react-chat/src/lib/model-selection.ts
- packages/typescript/ai-gemini/src/adapters/text.ts
- packages/typescript/ai-gemini/src/text/text-provider-options.ts
- packages/typescript/ai-gemini/src/model-meta.ts
**/*.{ts,tsx,js,jsx}
CodeRabbit inference engine (CLAUDE.md)
Use camelCase for function and variable names throughout the codebase
Files:
- packages/typescript/ai/src/activities/chat/stream/processor.ts
- examples/ts-react-chat/src/routes/api.tanchat.ts
- examples/ts-react-chat/src/lib/model-selection.ts
- packages/typescript/ai-gemini/src/adapters/text.ts
- packages/typescript/ai-gemini/src/text/text-provider-options.ts
- packages/typescript/ai-gemini/src/model-meta.ts
examples/**
CodeRabbit inference engine (CLAUDE.md)

Examples are not built by Nx and should be run independently from their directories with `pnpm dev` or `pnpm install && pnpm dev`
Files:
- examples/ts-react-chat/src/routes/api.tanchat.ts
- examples/ts-react-chat/src/lib/model-selection.ts
packages/typescript/*/src/adapters/*.ts
CodeRabbit inference engine (CLAUDE.md)
Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking
Files:
packages/typescript/ai-gemini/src/adapters/text.ts
packages/typescript/*/src/model-meta.ts
CodeRabbit inference engine (CLAUDE.md)
Maintain model metadata files that define provider options and capabilities per model for per-model type safety
Files:
packages/typescript/ai-gemini/src/model-meta.ts
Learnings (7)

Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses
Applied to files:
- packages/typescript/ai/src/activities/chat/stream/processor.ts
- packages/typescript/ai-gemini/src/adapters/text.ts
Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety
Applied to files:
- examples/ts-react-chat/src/routes/api.tanchat.ts
- examples/ts-react-chat/src/lib/model-selection.ts
- packages/typescript/ai-gemini/src/adapters/text.ts
- packages/typescript/ai-gemini/src/text/text-provider-options.ts
- packages/typescript/ai-gemini/src/model-meta.ts
Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking
Applied to files:
- examples/ts-react-chat/src/routes/api.tanchat.ts
- packages/typescript/ai-gemini/src/adapters/text.ts
- packages/typescript/ai-gemini/src/text/text-provider-options.ts
- packages/typescript/ai-gemini/src/model-meta.ts
Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size
Applied to files:
- examples/ts-react-chat/src/routes/api.tanchat.ts
- packages/typescript/ai-gemini/src/adapters/text.ts
- packages/typescript/ai-gemini/src/text/text-provider-options.ts
Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters
Applied to files:
- packages/typescript/ai-gemini/src/adapters/text.ts
- packages/typescript/ai-gemini/src/text/text-provider-options.ts
Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Applied to files:
- packages/typescript/ai-gemini/src/adapters/text.ts
- packages/typescript/ai-gemini/src/text/text-provider-options.ts
- packages/typescript/ai-gemini/src/model-meta.ts
Learning: 2025-12-27T20:22:51.232Z
Learnt from: harry-whorlow
Repo: TanStack/ai PR: 117
File: packages/typescript/ai-ollama/src/meta/model-meta-gpt-oss.ts:92-97
Timestamp: 2025-12-27T20:22:51.232Z
Learning: In the ai-ollama package's model-meta files (packages/typescript/ai-ollama/src/meta/model-meta-*.ts), capability-related comments follow a standard template format across all files for consistency, even if the comment text doesn't precisely match individual model capabilities. This is an intentional design choice to maintain uniformity across the codebase.
Applied to files:
packages/typescript/ai-gemini/src/model-meta.ts
Code graph analysis (4)
examples/ts-react-chat/src/routes/api.tanchat.ts (1)
packages/typescript/ai-grok/src/adapters/text.ts (1)
grokText (500-506)
packages/typescript/ai-gemini/src/adapters/text.ts (1)
packages/typescript/ai/src/types.ts (1)
TextOptions (565-650)
packages/typescript/ai-gemini/src/text/text-provider-options.ts (3)
packages/typescript/ai-anthropic/src/text/text-provider-options.ts (1)
ExternalTextProviderOptions (111-118)
packages/typescript/ai-openai/src/text/text-provider-options.ts (1)
ExternalTextProviderOptions (237-242)
packages/typescript/ai-gemini/src/index.ts (2)
GeminiThinkingOptions (73-73)
GeminiStructuredOutputOptions (72-72)
packages/typescript/ai-gemini/src/model-meta.ts (1)
packages/typescript/ai-gemini/src/text/text-provider-options.ts (5)
GeminiCommonConfigOptions (25-165)
GeminiCachedContentOptions (167-172)
GeminiStructuredOutputOptions (174-217)
GeminiThinkingOptions (219-234)
GeminiThinkingAdvancedOptions (236-246)
LanguageTool
.changeset/goofy-cities-push.md
[style] ~5-~5: Consider using a different verb for a more formal wording.
Context: ...mini': patch '@tanstack/ai': patch --- fixed an issue with gemini and thought chunks...
(FIX_RESOLVE)
Additional comments (11)
packages/typescript/ai/src/activities/chat/stream/processor.ts (1)
735-745: LGTM! Correct fix for undefined delta handling.

The added truthy check prevents an `undefined` or `null` delta from being incorrectly appended to thinking content. Without this guard, `chunk.delta !== ''` would evaluate to `true` for `undefined`, causing the fallback logic on lines 737-744 to be bypassed incorrectly.
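A minimal sketch of the guard being described, with hypothetical processor state and field names; only the truthiness check mirrors the actual change.

```ts
// Hypothetical, simplified stand-in for the stream processor's thinking handling.
interface ThinkingChunk {
  type: 'thinking'
  delta?: string | null
  content?: string
}

function appendThinking(state: { thinking: string }, chunk: ThinkingChunk) {
  // `undefined !== ''` is true, so checking only `chunk.delta !== ''` would take
  // this branch for an undefined delta and append "undefined" to the text.
  // The added truthy check keeps undefined/null deltas out of the accumulation.
  if (chunk.delta && chunk.delta !== '') {
    state.thinking += chunk.delta
  } else if (chunk.content) {
    // Fallback path: use the full content when no usable delta arrived.
    state.thinking = chunk.content
  }
}
```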
examples/ts-react-chat/src/lib/model-selection.ts (1)

38-42: LGTM! Model updated to support thinking features.

The switch to `gemini-2.5-flash` aligns with the PR's thinking support enhancements. This model is properly defined in `model-meta.ts` with thinking capabilities.

packages/typescript/ai-gemini/src/text/text-provider-options.ts (2)
219-246: Well-structured composition of thinking options.

The separation of `GeminiThinkingOptions` (for `includeThoughts` and `thinkingBudget`) and `GeminiThinkingAdvancedOptions` (for `thinkingLevel`) enables clean per-model type safety. The intersection in `ExternalTextProviderOptions` correctly merges both `thinkingConfig` shapes into a unified interface.
248-254: ExternalTextProviderOptions composition looks correct.

The composition properly includes `GeminiCommonConfigOptions`, `GeminiThinkingOptions`, `GeminiThinkingAdvancedOptions`, and `GeminiStructuredOutputOptions`, aligning with the pattern used by other providers (OpenAI, Anthropic) per the relevant code snippets.
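The composition described above might look roughly like the sketch below; the type names match the review, but the field lists are abbreviated and the exact declarations in text-provider-options.ts may differ.

```ts
// Abbreviated shapes; only the composition pattern is the point here.
interface GeminiCommonConfigOptions {
  stopSequences?: Array<string>
  topK?: number
  // ...other flattened generation-config fields
}

interface GeminiStructuredOutputOptions {
  responseMimeType?: string
  responseSchema?: unknown
}

interface GeminiThinkingOptions {
  thinkingConfig?: { includeThoughts?: boolean; thinkingBudget?: number }
}

interface GeminiThinkingAdvancedOptions {
  thinkingConfig?: { thinkingLevel?: string } // value type assumed
}

// Intersecting the pieces merges both thinkingConfig shapes into one surface.
type ExternalTextProviderOptions = GeminiCommonConfigOptions &
  GeminiThinkingOptions &
  GeminiThinkingAdvancedOptions &
  GeminiStructuredOutputOptions
```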
packages/typescript/ai-gemini/src/model-meta.ts (4)

1-9: LGTM! Import updates align with renamed types.

The imports correctly reflect the renaming of `GeminiGenerationConfigOptions` to `GeminiCommonConfigOptions` and the addition of `GeminiThinkingAdvancedOptions`.
78-86: Correct per-model type composition for Gemini 3 Pro.

The provider options correctly include both `GeminiThinkingOptions` and `GeminiThinkingAdvancedOptions`, enabling both basic thinking configuration and advanced thinking level settings for this model.
912-969: `GeminiChatModelProviderOptionsByName` correctly mirrors model definitions.

The type map properly reflects which models support `GeminiThinkingAdvancedOptions` (Gemini 3 models) versus those with only `GeminiThinkingOptions` (Gemini 2.5 models). This maintains compile-time type safety for per-model configurations. Based on learnings, this aligns with the pattern for per-model type safety.
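A hedged illustration of the kind of per-model map the comment refers to; the model names and entries here are assumed for illustration and may not match model-meta.ts exactly.

```ts
// Placeholder option shapes; the real ones live in text-provider-options.ts.
type GeminiCommonConfigOptions = { topK?: number; stopSequences?: Array<string> }
type GeminiThinkingOptions = {
  thinkingConfig?: { includeThoughts?: boolean; thinkingBudget?: number }
}
type GeminiThinkingAdvancedOptions = {
  thinkingConfig?: { thinkingLevel?: string }
}

// Hypothetical excerpt: each model name maps to the options it accepts,
// which is what makes `modelOptions` type-checked per selected model.
interface GeminiChatModelProviderOptionsByName {
  'gemini-3-pro-preview': GeminiCommonConfigOptions &
    GeminiThinkingOptions &
    GeminiThinkingAdvancedOptions
  'gemini-2.5-pro': GeminiCommonConfigOptions & GeminiThinkingOptions
  'gemini-2.5-flash': GeminiCommonConfigOptions & GeminiThinkingOptions
}

// Options for a given model can then be looked up by name at compile time.
type OptionsFor<TModel extends keyof GeminiChatModelProviderOptionsByName> =
  GeminiChatModelProviderOptionsByName[TModel]
```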
189-196: Exclusion of `GeminiThinkingAdvancedOptions` for Gemini 2.5 Pro is correct.

Gemini 2.5 Pro does not support the `thinkingLevel` parameter defined in `GeminiThinkingAdvancedOptions`. It uses `thinkingBudget` instead (supported by `GeminiThinkingOptions`), with a range of 128–32,768 tokens. Gemini 3 models support both parameters, hence the inclusion of both option types. The current type configuration is intentional and accurate.

examples/ts-react-chat/src/routes/api.tanchat.ts (1)
93-98: Thinking configuration correctly enabled for Gemini adapter.

The `modelOptions` with `thinkingConfig: { includeThoughts: true, thinkingBudget: 100 }` properly demonstrates the new thinking support. Note that `thinkingBudget: 100` is relatively low; this may be intentional for demo purposes to keep thinking concise.
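For context, wiring this up in the example route could look roughly like the sketch below; the import paths and the exact `createChatOptions`/`geminiText` signatures are assumptions based on the snippets quoted in this review.

```ts
// Sketch only: mirrors the reviewed example; names and paths are assumed.
import { createChatOptions } from '@tanstack/ai'
import { geminiText } from '@tanstack/ai-gemini/adapters'

const gemini = () =>
  createChatOptions({
    adapter: geminiText('gemini-2.5-flash'),
    modelOptions: {
      thinkingConfig: {
        includeThoughts: true, // stream thought parts as "thinking" chunks
        thinkingBudget: 100, // small budget keeps the demo's thinking concise
      },
    },
  })
```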
packages/typescript/ai-gemini/src/adapters/text.ts (2)

216-236: Core fix: correctly emits thinking chunks for "thought" parts.

This is the main fix for issue #209. Parts with `part.thought === true` now emit `type: 'thinking'` chunks instead of being accumulated into content. The chunk structure includes both `content` and `delta` set to the same value (`part.text`), which is appropriate since thinking parts arrive as complete units rather than accumulated.
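A hedged sketch of the part handling described above; the helper and state shape are illustrative rather than the adapter's literal code, and the part shape follows the `text`/`thought` fields mentioned in this review.

```ts
// Illustrative mapping of a Gemini stream part to a chat chunk.
type ChatChunk =
  | { type: 'thinking'; content: string; delta: string }
  | { type: 'content'; content: string; delta: string }

function handlePart(
  part: { text?: string; thought?: boolean },
  state: { content: string },
): ChatChunk | undefined {
  if (!part.text) return undefined
  if (part.thought) {
    // Thought parts are emitted as-is: content and delta carry the same text,
    // since thoughts arrive as complete units rather than accumulated deltas.
    return { type: 'thinking', content: part.text, delta: part.text }
  }
  // Regular text accumulates into the running content before emitting.
  state.content += part.text
  return { type: 'content', content: state.content, delta: part.text }
}
```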
491-519: Verify `modelOpts` spreading doesn't leak unintended properties into config.

Line 500 spreads `...modelOpts` directly into the `config` object. If `modelOptions` contains properties that aren't valid Gemini API config fields (e.g., nested `thinkingConfig` before it's processed), this could cause issues.

The code extracts `thinkingConfig` on line 495, but then spreads the entire `modelOpts` (including the original `thinkingConfig`) on line 500, only to override it with the processed version on lines 504-512. This results in redundant/overwritten properties.

Consider extracting only the relevant properties or omitting `thinkingConfig` from the spread:

Suggested improvement

```diff
 private mapCommonOptionsToGemini(
   options: TextOptions<GeminiTextProviderOptions>,
 ) {
   const modelOpts = options.modelOptions
   const thinkingConfig = modelOpts?.thinkingConfig
+  const { thinkingConfig: _, ...restModelOpts } = modelOpts ?? {}
   const requestOptions: GenerateContentParameters = {
     model: options.model,
     contents: this.formatMessages(options.messages),
     config: {
-      ...modelOpts,
+      ...restModelOpts,
       temperature: options.temperature,
       topP: options.topP,
       maxOutputTokens: options.maxTokens,
       thinkingConfig: thinkingConfig
         ? {
             ...thinkingConfig,
             thinkingLevel: thinkingConfig.thinkingLevel
               ? (thinkingConfig.thinkingLevel as ThinkingLevel)
               : undefined,
           }
         : undefined,
       systemInstruction: options.systemPrompts?.join('\n'),
       tools: convertToolsToProviderFormat(options.tools),
     },
   }
   return requestOptions
 }
```
Actionable comments posted: 0
Caution
Some comments are outside the diff and can't be posted inline due to platform limitations.
Outside diff range comments (1)
packages/typescript/ai-gemini/tests/gemini-adapter.test.ts (1)
172-251: Critical: Structural mismatch between providerOptions definition and test expectations.

The `providerOptions` object is defined with a nested `generationConfig` structure (lines 174-202), but the test expectations (lines 230-251) read fields directly from the top level of `providerOptions`. This mismatch will cause the test to fail because fields like `providerOptions?.stopSequences`, `providerOptions?.topK`, etc., are undefined; they're actually nested at `providerOptions.generationConfig.stopSequences`, `providerOptions.generationConfig.topK`, etc.

Based on the PR's flattening of options (GeminiGenerationConfigOptions → GeminiCommonConfigOptions), the `providerOptions` object definition should be flattened to match the expectations.

Proposed fix to flatten the providerOptions definition

```diff
 const providerOptions: GeminiTextProviderOptions = {
   safetySettings,
-  generationConfig: {
-    stopSequences: ['<done>', '###'],
-    responseMimeType: 'application/json',
-    responseSchema,
-    responseJsonSchema,
-    responseModalities: ['TEXT'],
-    candidateCount: 2,
-    topK: 6,
-    seed: 7,
-    presencePenalty: 0.2,
-    frequencyPenalty: 0.4,
-    responseLogprobs: true,
-    logprobs: 3,
-    enableEnhancedCivicAnswers: true,
-    speechConfig: {
-      voiceConfig: {
-        prebuiltVoiceConfig: {
-          voiceName: 'Studio',
-        },
+  stopSequences: ['<done>', '###'],
+  responseMimeType: 'application/json',
+  responseSchema,
+  responseJsonSchema,
+  responseModalities: ['TEXT'],
+  candidateCount: 2,
+  topK: 6,
+  seed: 7,
+  presencePenalty: 0.2,
+  frequencyPenalty: 0.4,
+  responseLogprobs: true,
+  logprobs: 3,
+  enableEnhancedCivicAnswers: true,
+  speechConfig: {
+    voiceConfig: {
+      prebuiltVoiceConfig: {
+        voiceName: 'Studio',
       },
     },
-    thinkingConfig: {
-      includeThoughts: true,
-      thinkingBudget: 128,
-    },
-    imageConfig: {
-      aspectRatio: '1:1',
-    },
   },
+  thinkingConfig: {
+    includeThoughts: true,
+    thinkingBudget: 128,
+  },
+  imageConfig: {
+    aspectRatio: '1:1',
+  },
   cachedContent: 'cachedContents/weather-context',
 } as const
```
Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Files selected for processing (2)

- packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
- packages/typescript/ai-openai/live-tests/tool-test-empty-object.ts
Files with no reviewable changes (1)
- packages/typescript/ai-openai/live-tests/tool-test-empty-object.ts
Additional context used

Path-based instructions (3)
**/*.{ts,tsx}
CodeRabbit inference engine (CLAUDE.md)

**/*.{ts,tsx}: Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from the `/adapters` subpath rather than monolithic adapters
Use Zod for runtime schema validation and type inference, particularly for tool input/output definitions with `toolDefinition()` and Zod schema inference
Implement isomorphic tool system using `toolDefinition()` with `.server()` and `.client()` implementations for dual-environment execution
Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Implement stream processing with StreamProcessor for handling chunked responses and support partial JSON parsing for streaming AI responses
Files:
packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
**/*.test.ts
CodeRabbit inference engine (CLAUDE.md)

Write unit tests using Vitest alongside source files with the `.test.ts` naming convention
Files:
packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
**/*.{ts,tsx,js,jsx}
CodeRabbit inference engine (CLAUDE.md)
Use camelCase for function and variable names throughout the codebase
Files:
packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
Learnings (5)

Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/adapters/*.ts : Create individual adapter implementations for each provider capability (text, embed, summarize, image) with separate exports to enable tree-shaking
Applied to files:
packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/model-meta.ts : Maintain model metadata files that define provider options and capabilities per model for per-model type safety
Applied to files:
packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to packages/typescript/*/src/index.ts : Export tree-shakeable adapters with clear subpath exports in package.json (e.g., `tanstack/ai/adapters`, `tanstack/ai-openai/adapters`) to minimize bundle size
Applied to files:
packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use tree-shakeable adapter architecture for provider implementations - export specialized adapters (text, embedding, summarize, image) as separate imports from `/adapters` subpath rather than monolithic adapters
Applied to files:
packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
Learning: 2025-12-13T17:09:09.794Z
Learnt from: CR
Repo: TanStack/ai PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-12-13T17:09:09.794Z
Learning: Applies to **/*.{ts,tsx} : Use type-safe per-model configuration with provider options typed based on selected model to ensure compile-time safety
Applied to files:
packages/typescript/ai-gemini/tests/gemini-adapter.test.ts
Changes
Fix so that a part with the thought flag set is output as a thinking chunk instead of content. Fixes: #209
Checklist

pnpm run test:pr.

Release Impact
Summary by CodeRabbit
Bug Fixes
New Features
Examples / UI
Chores
Tip: You can customize this high-level summary in your review settings.