Description
I'm working with the following `chatStream` method, which calls the AI SDK's `streamText` with a streaming text generation model:
```js
async chatStream(messages, options) {
  const lastMessage = messages[messages.length - 1];
  const prompt = lastMessage.content;
  const model = this._getModel();
  const stream = streamText({
    model,
    messages: this._convertJson(messages),
    temperature: options.temperature || this.modelConfig.temperature,
    topP: options.top_p || this.modelConfig.top_p,
    maxTokens: options.max_tokens || this.modelConfig.max_tokens,
  });
  return stream.toTextStreamResponse();
}
```
Expected:
Response should include both reasoning steps and the final answer (chain-of-thought).

Actual:
Only the final answer is returned.
Possible Issues:
- Missing parameters in `streamText` to enable reasoning generation.
- `toTextStreamResponse()` filters out reasoning content.
- `_convertJson()` modifies messages in a way that suppresses reasoning.
Ask:
What parameter/config change is needed to include the reasoning chain?
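For reference, here is a sketch of what I think is happening, using a mocked stream. I'm assuming the AI SDK 4.x convention that `fullStream` yields parts tagged `type: 'reasoning'` or `type: 'text-delta'` (names taken from my reading of the docs, please correct me if they differ in this version); a text-only consumer like `toTextStreamResponse()` would then never emit the reasoning parts:

```javascript
// Mocked stream parts; the real ones would come from streamText().fullStream.
// The 'reasoning' / 'text-delta' type names are my assumption from the
// AI SDK 4.x docs -- verify against the installed version.
async function* mockFullStream() {
  yield { type: 'reasoning', textDelta: 'First, consider the units... ' };
  yield { type: 'text-delta', textDelta: 'The answer is 42.' };
}

// Collects a stream into a string, optionally keeping reasoning parts.
async function collect(stream, { includeReasoning }) {
  let out = '';
  for await (const part of stream) {
    if (part.type === 'text-delta') out += part.textDelta;
    else if (includeReasoning && part.type === 'reasoning') out += part.textDelta;
  }
  return out;
}

// A text-only consumer drops the reasoning parts entirely:
collect(mockFullStream(), { includeReasoning: false }).then(console.log);
// -> "The answer is 42."
```

If this model is right, the fix would be on the response side (consuming `fullStream`, or a response helper that forwards reasoning) rather than in `_convertJson()`.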
AI SDK Version
- "ai": "^4.3.4",
- "@ai-sdk/openai": "^1.3.9",