fix(core): allow disabling title temperature #1249
Conversation
🦋 Changeset detected. Latest commit: 998167d. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
📝 Walkthrough

Conversation title generation accepts an optional temperature override.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Agent as Agent
    participant Trace as TraceContext
    participant LLM as AI_SDK
    participant MemoryMgr as MemoryManager
    participant Logger as Logger
    Agent->>Trace: start LLM span (callOptions include temperature if defined)
    Trace->>LLM: generateText(model, messages, maxOutputTokens, temperature?)
    LLM-->>Trace: result (text, warnings?)
    Trace-->>Agent: result
    alt warnings include temperature-unsupported
        Agent->>Logger: warn "[Memory]" + serialized warning + hint (generateTitle.temperature: null)
    end
    alt generateText throws
        Agent->>Logger: warn "[Memory] Failed to generate conversation title" + error.message + hint
        Agent->>Logger: debug: safeStringify(full error)
        Agent->>MemoryMgr: return fallback title ("Conversation")
    else success
        Agent->>MemoryMgr: return generated title
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/core/src/agent/agent.ts`:
- Around line 261-266: The isTemperatureWarning function uses incorrect
discriminant and property names causing TypeScript type errors: change checks to
use the correct variant strings ("unsupported-setting" and "compatibility") and
narrow the union before accessing variant-specific properties; for the
"unsupported-setting" variant inspect warning.setting, for "compatibility"
inspect warning.feature, and otherwise inspect warning.details — all with
toLowerCase() after confirming the field exists to safely detect the substring
"temperature" while preserving type safety for the Warning union used by the AI
SDK v6.0.0.
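A minimal sketch of the narrowing this comment asks for, against a simplified stand-in for the SDK's Warning union. The variant and field names below follow the review text but are not verified against the real ai v6 types.

```typescript
// Simplified stand-in for the AI SDK Warning union; the real types carry more fields.
type Warning =
  | { type: "unsupported-setting"; setting: string }
  | { type: "compatibility"; feature: string }
  | { type: "other"; details?: string };

function isTemperatureWarning(warning: Warning): boolean {
  // Narrow on the discriminant first, then read the variant-specific field.
  if (warning.type === "unsupported-setting") {
    return warning.setting.toLowerCase().includes("temperature");
  }
  if (warning.type === "compatibility") {
    return warning.feature.toLowerCase().includes("temperature");
  }
  return (warning.details ?? "").toLowerCase().includes("temperature");
}
```

Narrowing before the property access is what keeps the check type-safe: TypeScript only lets you touch `setting` or `feature` once the discriminant has been checked.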
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a53a9ccc-ec61-45ee-9bbf-3e15ad7e1d65
📒 Files selected for processing (7)
- .changeset/generate-title-temperature-override.md
- packages/core/src/agent/agent.spec.ts
- packages/core/src/agent/agent.ts
- packages/core/src/memory/manager/memory-manager.spec.ts
- packages/core/src/memory/manager/memory-manager.ts
- packages/core/src/memory/types.ts
- website/docs/agents/memory/overview.md
Deploying voltagent with Cloudflare Pages

- Latest commit: 998167d
- Status: ✅ Deploy successful!
- Preview URL: https://195b80f7.voltagent.pages.dev
- Branch Preview URL: https://fix-generate-title-temperatu.voltagent.pages.dev
ef44250 to aa8c70a (Compare)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/core/src/agent/agent.ts (1)
4819-4829: ⚠️ Potential issue | 🟠 Major: Warn when title generation produces no usable title.
This still returns `null` silently when `generateText` succeeds but the sanitized title is empty, so the `"Conversation"` fallback remains opaque for the "no usable output" case this PR is trying to surface. Please log a warn before returning `null`.

Suggested change
```diff
   const resolvedUsage = result.usage ? await Promise.resolve(result.usage) : undefined;
   const title = sanitizeConversationTitle(result.text ?? "", maxLength);
+  if (!title) {
+    context.logger.warn("[Memory] Conversation title generation returned no usable title", {
+      finishReason: result.finishReason,
+      warnings: result.warnings ? safeStringify(result.warnings) : undefined,
+      hint:
+        temperature !== undefined
+          ? "If your title generation model does not support temperature, set generateTitle.temperature to null."
+          : undefined,
+    });
+  }
   if (title) {
     llmSpan.setAttribute("output", title);
   }
   finalizeLLMSpan(SpanStatusCode.OK, {
     usage: resolvedUsage,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/agent/agent.ts` around lines 4819 - 4829, When sanitizeConversationTitle(result.text ?? "", maxLength) yields no usable title, add a warning log before returning null: log a concise warning that title generation produced an empty/sanitized result (include context like the raw result.text, result.finishReason or providerMetadata if available) and then proceed to finalizeLLMSpan and return null; locate the block using sanitizeConversationTitle, llmSpan, finalizeLLMSpan and add the processLogger.warn (or the module's existing logger.warn) call just before the existing return title || null.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/core/src/agent/agent.ts`:
- Around line 4835-4839: The warn log in agent.ts currently includes the full
serialized error via safeStringify(error); remove safeStringify(error) from the
context.logger.warn call and keep only sanitized fields (e.g., message and the
existing hint) to avoid dumping provider payloads, then log the full serialized
error at debug level using context.logger.debug or similar (e.g.,
context.logger.debug("[Memory] Full error for title generation", { error:
safeStringify(error) })) so detailed diagnostics remain available but not in
warn logs; target the existing call site that uses context.logger.warn("[Memory]
Failed to generate conversation title", ...) to make this change.
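The warn/debug split this prompt describes can be sketched like this. The logger shape and the `safeStringify` helper are assumptions based on the review text, not the actual @voltagent/core implementations.

```typescript
// Minimal stand-in for the project's safeStringify helper.
function safeStringify(value: unknown): string {
  try {
    return JSON.stringify(value) ?? String(value);
  } catch {
    return String(value);
  }
}

type MinimalLogger = {
  warn: (msg: string, ctx?: Record<string, unknown>) => void;
  debug: (msg: string, ctx?: Record<string, unknown>) => void;
};

function logTitleFailure(logger: MinimalLogger, error: unknown): void {
  // Warn stays concise: message + hint only, no raw provider payload.
  logger.warn("[Memory] Failed to generate conversation title", {
    message: error instanceof Error ? error.message : undefined,
    hint: "Set generateTitle.temperature to null to omit temperature for title generation.",
  });
  // The full serialized error moves to debug so diagnostics remain available.
  logger.debug("[Memory] Full error for title generation", {
    error: safeStringify(error),
  });
}
```

The design choice here is that warn logs are often shipped to alerting pipelines, where large serialized provider payloads add noise; debug keeps the full detail for local troubleshooting.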
---
Outside diff comments:
In `@packages/core/src/agent/agent.ts`:
- Around line 4819-4829: When sanitizeConversationTitle(result.text ?? "",
maxLength) yields no usable title, add a warning log before returning null: log
a concise warning that title generation produced an empty/sanitized result
(include context like the raw result.text, result.finishReason or
providerMetadata if available) and then proceed to finalizeLLMSpan and return
null; locate the block using sanitizeConversationTitle, llmSpan, finalizeLLMSpan
and add the processLogger.warn (or the module's existing logger.warn) call just
before the existing return title || null.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: cacd2b7d-35d0-48fd-aa4b-8bbb9c03e11d
📒 Files selected for processing (7)
- .changeset/generate-title-temperature-override.md
- packages/core/src/agent/agent.spec.ts
- packages/core/src/agent/agent.ts
- packages/core/src/memory/manager/memory-manager.spec.ts
- packages/core/src/memory/manager/memory-manager.ts
- packages/core/src/memory/types.ts
- website/docs/agents/memory/overview.md
✅ Files skipped from review due to trivial changes (2)
- packages/core/src/memory/types.ts
- .changeset/generate-title-temperature-override.md
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/core/src/memory/manager/memory-manager.spec.ts
aa8c70a to 998167d (Compare)
🧹 Nitpick comments (4)
packages/core/src/memory/manager/memory-manager.ts (1)
613-618: Align warn-log shape and hint wording with `agent.ts`.

The sibling site in `packages/core/src/agent/agent.ts` (lines 4840-4846) intentionally keeps the `warn` payload concise (`message` + `hint` only) and emits a separate `debug` log with the full serialized error. Here the full `safeStringify(error)` is folded into the `warn` payload itself, and the hint string is also worded differently:

- memory-manager: "If your title generation model does not support temperature, set generateTitle.temperature to null."
- agent: "Set generateTitle.temperature to null to omit temperature for title generation."

Two divergent shapes/wordings for the same failure mode make log filtering and docs harder. Suggest mirroring the agent.ts pattern.
♻️ Proposed alignment
```diff
 } catch (error) {
-  context.logger.warn("[Memory] Failed to generate conversation title", {
-    error: safeStringify(error),
-    message: error instanceof Error ? error.message : undefined,
-    hint: "If your title generation model does not support temperature, set generateTitle.temperature to null.",
-  });
+  context.logger.warn("[Memory] Failed to generate conversation title", {
+    message: error instanceof Error ? error.message : undefined,
+    hint: "Set generateTitle.temperature to null to omit temperature for title generation.",
+  });
+  context.logger.debug("[Memory] Full error for title generation", {
+    error: safeStringify(error),
+  });
 }
```

Note: the `memory-manager.spec.ts` assertions added in this PR likely check the current payload shape — update them accordingly if you adopt this change.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/memory/manager/memory-manager.ts` around lines 613 - 618, The warn log in memory-manager.ts currently embeds safeStringify(error) and a differently worded hint; change the context.logger.warn call (in the generate title error handler) to only include message and the exact hint string used in agent.ts ("Set generateTitle.temperature to null to omit temperature for title generation."), and add a separate context.logger.debug (or context.logger.error if preferred) that logs the full serialized error via safeStringify(error); update references to context.logger.warn in the generate-title error handling code path (and adjust memory-manager.spec.ts expectations) to match this new shape and wording.

packages/core/src/agent/agent.spec.ts (3)
1910-2002: Optional: split this test to isolate the two behaviors being asserted.

The test name is "should use temperature 0 by default when generating conversation titles", but the body also asserts the unsupported-temperature warning path (lines 1995-2001) by injecting a `warnings` array into the mock response. These are two distinct concerns (default temperature value + warning surfacing), and a failure in the warning assertion would obscure the default-temperature regression you actually care about.

Consider splitting into two tests, or removing the `warnings` from this mock and covering the unsupported-temperature warning in a separate, dedicated test. Not blocking.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/agent/agent.spec.ts` around lines 1910 - 2002, The test mixes two concerns (default temperature and warning surfacing); split it into two tests: (1) keep the "should use temperature 0 by default when generating conversation titles" test and remove the mocked response's warnings array so it only asserts that ai.generateText was called with temperature 0, maxOutputTokens 32 and the span attribute llm.temperature is 0 (locate the title generator via (agent as any).createConversationTitleGenerator and the ai.generateText mock calls), and (2) add a new test that mocks ai.generateText to return a response containing the warnings array and asserts context.logger.warn was called with the "[Memory] Conversation title generation model does not support temperature" message and appropriate hint/warning contents; ensure both tests reuse the same context/span setup and clearly reference createConversationTitleGenerator, ai.generateText mock, and context.traceContext.createChildSpan in their assertions.
2213-2225: Both `expect(...).toHaveBeenCalledWith(...)` assertions match the same single warn call — confirm intent.

`vi.fn` `toHaveBeenCalledWith` succeeds if any recorded call matches. The two assertions here together require that there is at least one `[Memory] Failed to generate conversation title` warn call whose payload (a) does not have an `error` key, and (b) does have `message` and `hint`. Since the implementation only emits this warn once, both expectations are satisfied by the same call — which is presumably the intent (the full error must live in `debug`, not `warn`).

If you want to make that contract explicit and harder to accidentally regress (e.g., if a future change adds a second matching warn call carrying `error`), assert against `warnSpy.mock.calls` directly so you check every call rather than matching any:

♻️ Stricter assertion alternative

```diff
-      expect(context.logger.warn).toHaveBeenCalledWith(
-        "[Memory] Failed to generate conversation title",
-        expect.not.objectContaining({
-          error: expect.anything(),
-        }),
-      );
-      expect(context.logger.warn).toHaveBeenCalledWith(
-        "[Memory] Failed to generate conversation title",
-        expect.objectContaining({
-          message: "Unsupported temperature",
-          hint: expect.stringContaining("generateTitle.temperature"),
-        }),
-      );
+      const failureWarnCalls = (context.logger.warn as any).mock.calls.filter(
+        ([msg]: [string]) => msg === "[Memory] Failed to generate conversation title",
+      );
+      expect(failureWarnCalls).toHaveLength(1);
+      const [, payload] = failureWarnCalls[0];
+      expect(payload).toEqual(
+        expect.objectContaining({
+          message: "Unsupported temperature",
+          hint: expect.stringContaining("generateTitle.temperature"),
+        }),
+      );
+      expect(payload).not.toHaveProperty("error");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/agent/agent.spec.ts` around lines 2213 - 2225, The two toHaveBeenCalledWith assertions both match the same warn call, so make the intent explicit by inspecting the mock's recorded calls directly: grab the calls from (context.logger.warn as jest.Mock).mock.calls, assert there is exactly one call (or the expected number), then assert the first argument equals "[Memory] Failed to generate conversation title" and the second argument does not contain an error key and does contain message: "Unsupported temperature" and a hint string containing "generateTitle.temperature"; this replaces the two loose expect(...).toHaveBeenCalledWith(...) assertions with deterministic checks against context.logger.warn's mock.calls.
2163: Test name doesn't match what it asserts.

The `it(...)` description says "should keep full conversation title generation errors at debug level", but the assertions verify both the new warn-level summary (lines 2213-2225) and the debug-level full error (lines 2226-2231). A more accurate name would be e.g. "should warn with summary and log full error at debug level when title generation fails" so the intent is obvious from the test report.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/core/src/agent/agent.spec.ts` at line 2163, The test description for the it block in agent.spec.ts is misleading: rename the test's title string from "should keep full conversation title generation errors at debug level" to something that reflects both assertions (e.g. "should warn with summary and log full error at debug level when title generation fails") so the test intent matches the assertions that check a warn-level summary and a debug-level full error; update the it(...) title in the spec containing the assertions around warn (lines verifying summary) and debug (lines verifying full error) accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@packages/core/src/agent/agent.spec.ts`:
- Around line 1910-2002: The test mixes two concerns (default temperature and
warning surfacing); split it into two tests: (1) keep the "should use
temperature 0 by default when generating conversation titles" test and remove
the mocked response's warnings array so it only asserts that ai.generateText was
called with temperature 0, maxOutputTokens 32 and the span attribute
llm.temperature is 0 (locate the title generator via (agent as
any).createConversationTitleGenerator and the ai.generateText mock calls), and
(2) add a new test that mocks ai.generateText to return a response containing
the warnings array and asserts context.logger.warn was called with the "[Memory]
Conversation title generation model does not support temperature" message and
appropriate hint/warning contents; ensure both tests reuse the same context/span
setup and clearly reference createConversationTitleGenerator, ai.generateText
mock, and context.traceContext.createChildSpan in their assertions.
- Around line 2213-2225: The two toHaveBeenCalledWith assertions both match the
same warn call, so make the intent explicit by inspecting the mock's recorded
calls directly: grab the calls from (context.logger.warn as
jest.Mock).mock.calls, assert there is exactly one call (or the expected
number), then assert the first argument equals "[Memory] Failed to generate
conversation title" and the second argument does not contain an error key and
does contain message: "Unsupported temperature" and a hint string containing
"generateTitle.temperature"; this replaces the two loose
expect(...).toHaveBeenCalledWith(...) assertions with deterministic checks
against context.logger.warn's mock.calls.
- Line 2163: The test description for the it block in agent.spec.ts is
misleading: rename the test's title string from "should keep full conversation
title generation errors at debug level" to something that reflects both
assertions (e.g. "should warn with summary and log full error at debug level
when title generation fails") so the test intent matches the assertions that
check a warn-level summary and a debug-level full error; update the it(...)
title in the spec containing the assertions around warn (lines verifying
summary) and debug (lines verifying full error) accordingly.
In `@packages/core/src/memory/manager/memory-manager.ts`:
- Around line 613-618: The warn log in memory-manager.ts currently embeds
safeStringify(error) and a differently worded hint; change the
context.logger.warn call (in the generate title error handler) to only include
message and the exact hint string used in agent.ts ("Set
generateTitle.temperature to null to omit temperature for title generation."),
and add a separate context.logger.debug (or context.logger.error if preferred)
that logs the full serialized error via safeStringify(error); update references
to context.logger.warn in the generate-title error handling code path (and
adjust memory-manager.spec.ts expectations) to match this new shape and wording.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f9bc782c-07eb-4726-b903-e5620d34c42e
📒 Files selected for processing (7)
- .changeset/generate-title-temperature-override.md
- packages/core/src/agent/agent.spec.ts
- packages/core/src/agent/agent.ts
- packages/core/src/memory/manager/memory-manager.spec.ts
- packages/core/src/memory/manager/memory-manager.ts
- packages/core/src/memory/types.ts
- website/docs/agents/memory/overview.md
✅ Files skipped from review due to trivial changes (2)
- packages/core/src/memory/types.ts
- .changeset/generate-title-temperature-override.md
PR Checklist
Please check if your PR fulfills the following requirements:
Bugs / Features
What is the current behavior?
Conversation title generation always sends `temperature: 0`. Reasoning models such as `gpt-5-mini` can warn or fail because they do not support temperature, and title generation failures are only logged at debug level.

What is the new behavior?

`generateTitle.temperature` can now be configured. It still defaults to `0` for backwards compatibility, and users can set `temperature: null` to omit the parameter for reasoning models. Unsupported temperature warnings and title generation failures are surfaced at warn level with guidance.

fixes #1233
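For reference, the behavior described above can be expressed as a config sketch. The `generateTitle.temperature` field follows the PR description; the surrounding object shape is a placeholder, not the exact @voltagent/core memory API.

```typescript
// Hypothetical memory-config fragment illustrating the three modes the PR
// describes for generateTitle.temperature.
type GenerateTitleOptions = {
  temperature?: number | null;
};

// Default: omit the field and the library keeps sending temperature: 0.
const defaultConfig: GenerateTitleOptions = {};

// Explicit value: forwarded to generateText as-is.
const creativeTitles: GenerateTitleOptions = { temperature: 0.7 };

// Reasoning models (e.g. gpt-5-mini) reject the parameter; null tells the
// library to omit it from the call entirely.
const reasoningModel: GenerateTitleOptions = { temperature: null };
```

Distinguishing `null` (omit) from `undefined` (use the default) is what keeps the change backwards compatible for existing configs.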
Notes for reviewers
Verified with:
- `pnpm --filter @voltagent/core test -- src/agent/agent.spec.ts src/memory/manager/memory-manager.spec.ts`
- `pnpm --filter @voltagent/core build`
- `examples/base` call using `openai/gpt-5-mini` with default `temperature: 0` and with `temperature: null`.

Summary by cubic
Adds an option to disable temperature for conversation title generation in `@voltagent/core`. This avoids errors on reasoning models and surfaces clearer warnings; default remains `0`, set `generateTitle.temperature: null` to omit the param.

Bug Fixes

- `generateTitle.temperature` (number or `null`); omits the parameter when `null`.

Migration

- Set `generateTitle.temperature: null` in memory config.

Written for commit 998167d. Summary will update on new commits.
Summary by CodeRabbit

New Features

- `null` omits temperature for reasoning models.

Improvements

Documentation