
Conversation


@DeJeune DeJeune commented Oct 22, 2025

Background

More and more providers support the v1/responses endpoint (a minimal request sketch follows the links below):

https://openrouter.ai/docs/api-reference/responses-api/overview
https://docs.x.ai/docs/api-reference#create-new-response
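
For reference, a minimal sketch of the request shape these providers share: the host, API key, and model id below are placeholders, and only the POST v1/responses body with model and input, plus the output array in the result, reflects the common contract.

const res = await fetch('https://api.example.com/v1/responses', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: `Bearer ${process.env.EXAMPLE_API_KEY}`, // placeholder key
  },
  body: JSON.stringify({
    model: 'example-model', // placeholder model id
    input: 'Hello, world!',
  }),
});

const data = await res.json();
console.log(data.output); // array of output items (message, reasoning, tool call, ...)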

ToDo List:

  • getArgs()
  • doGenerate()
  • doStream()
  • unitTest
  • examples
  • docs

Summary

Manual Verification

Checklist

  • Tests have been added / updated (for bug fixes / features)
  • Documentation has been added / updated (for bug fixes / features)
  • A patch changeset for relevant packages has been added (for bug fixes / features - run pnpm changeset in the project root)
  • I have reviewed this pull request (self-review)

Future Work

Related Issues

@DeJeune DeJeune marked this pull request as draft October 22, 2025 17:38
@DeJeune
Copy link
Contributor Author

DeJeune commented Oct 22, 2025

I'm not sure whether some parts of the OpenAIResponseSchema should be dropped or made more flexible for the "compatible" case; perhaps @lgrammel can confirm this.

@lgrammel
Collaborator

@DeJeune ideally it should be minimal and support the core set of what is available across Responses-API-compatible providers. I think tests will be extremely critical. Please adopt the fixture/snapshot model from the OpenAI responses implementation and add fixtures for different providers.

fixtures can be created as follows (a sketch follows below):

  • for generate: take result.request.body from an example run and make it the fixture
  • for stream: enable includeRawChunks: true on the streamText call, then use saveRawChunks to save the chunks (they will land in the output dir); all of this lives in examples

ideally we have fully realistic test input fixtures for the various compatible providers; otherwise it will be hard to make this model stable and solid
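
To make this concrete, here is a minimal sketch of both fixture workflows, assuming an OpenAI-compatible provider instance. The provider name, baseURL, model id, and output paths are placeholders, and the saveRawChunks helper mentioned above is replaced here by a plain file write.

import fs from 'node:fs';
import { generateText, streamText } from 'ai';
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';

const provider = createOpenAICompatible({
  name: 'example-provider', // placeholder
  baseURL: 'https://api.example.com/v1', // placeholder
  apiKey: process.env.EXAMPLE_API_KEY,
});

// generate fixture: the raw request body from an example run becomes the fixture
const generated = await generateText({
  model: provider('example-model'), // placeholder; the responses model would go here
  prompt: 'Hello!',
});
const body = generated.request.body;
fs.writeFileSync(
  'output/example-provider-request.json',
  typeof body === 'string' ? body : JSON.stringify(body, null, 2),
);

// stream fixture: raw provider chunks, one JSON object per line
const streamed = streamText({
  model: provider('example-model'),
  prompt: 'Hello!',
  includeRawChunks: true, // surfaces untransformed provider chunks in fullStream
});

const rawChunks: unknown[] = [];
for await (const part of streamed.fullStream) {
  if (part.type === 'raw') rawChunks.push(part.rawValue);
}
fs.writeFileSync(
  'output/example-provider-chunks.txt',
  rawChunks.map(chunk => JSON.stringify(chunk)).join('\n'),
);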

- Add test fixtures for basic text, reasoning, and tool call responses
- Include both JSON and streaming chunk formats for comprehensive testing
- Add snapshot tests for response parsing and streaming behavior
- Fix provider metadata key in tool call responses
@DeJeune DeJeune marked this pull request as ready for review October 28, 2025 20:37
@gr2m gr2m self-assigned this Oct 28, 2025
id: value.item.id,
});
} else if (value.item.type === 'reasoning') {
const activeReasoningPart = activeReasoning[value.item.id];

Suggested change:
- const activeReasoningPart = activeReasoning[value.item.id];
+ const activeReasoningPart = activeReasoning[value.item.id]!;

Missing null handling when indexing the activeReasoning record: the lookup is typed as possibly undefined, so accessing .summaryParts on it fails strict type checking. If a response.output_item.done chunk for reasoning ever arrived before the corresponding response.output_item.added chunk, the code would crash on the undefined value.


Analysis

Missing non-null assertion in OpenAI-compatible responses language model

What fails: TypeScript type checking fails in openai-compatible-responses-language-model.ts at line 565 because activeReasoning[value.item.id] returns a potentially undefined value when accessing a Record, then the code immediately tries to access .summaryParts on line 570 without a null check or non-null assertion.

How to reproduce:

// Caught by TypeScript strict null checking: the indexed access has type
// T | undefined, and .summaryParts is read on it immediately afterwards.
const activeReasoningPart = activeReasoning[value.item.id]; // T | undefined
const summaryPartIndices = Object.entries(activeReasoningPart.summaryParts); // ERROR: possibly undefined

Result: TypeScript compilation would fail in strict null-checking mode because accessing .summaryParts on a potentially undefined value violates type safety.

Expected: The code should use a non-null assertion consistent with similar accesses in the same file at lines 628, 690, and 696, which all use the non-null assertion operator (!) because the OpenAI Responses API protocol guarantees that response.output_item.added chunks always arrive before corresponding response.output_item.done chunks (evidenced by sequence_number ordering in official test fixtures).

Reference: The fix mirrors the pattern used elsewhere in the same file where activeReasoning[value.item_id]! is used at lines 628, 690, 696 for similar protocol-guaranteed orderings.
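
For illustration, a stripped-down sketch of the added-before-done bookkeeping pattern with hypothetical names (not the SDK's actual code):

type ActiveReasoningPart = { summaryParts: Record<number, string> };

const activeReasoning: Record<string, ActiveReasoningPart> = {};

// response.output_item.added: creates the entry for this item id
function onOutputItemAdded(itemId: string) {
  activeReasoning[itemId] = { summaryParts: {} };
}

// response.output_item.done: the protocol guarantees `added` already ran,
// so the non-null assertion is safe under strict null checks
function onOutputItemDone(itemId: string) {
  const part = activeReasoning[itemId]!;
  return Object.keys(part.summaryParts);
}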

- Implement tests for handling empty tools array and undefined tools.
- Test preparation of basic function tools with and without strict JSON schema.
- Validate multiple function tools preparation.
- Include tests for different tool choice scenarios: auto, none, required, specific tool, and undefined choice.
… from convertToOpenAICompatibleResponsesInput