
fix openrouter function calling#274

Merged
maxkahan merged 3 commits into main from fix-openrouter-function-calling on Dec 30, 2025

Conversation

@maxkahan (Contributor) commented Dec 29, 2025

  • Always use the Chat Completions API (removed the Responses API code path, which was causing issues)
  • Add conditional `strict: true` to tool schemas:
    • Enabled for non-OpenAI models (Gemini, Claude, etc.) to help them follow tool schemas
    • Disabled for OpenAI models (they require all properties to be listed in `required` when strict is set, which breaks MCP tools with optional params)
  • Added unit tests for the strict-mode conditional logic
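The conditional strict-mode behavior described above can be sketched as follows. This is an illustrative reconstruction, not the plugin's actual code: the helper names (`is_openai_model`, `convert_tool_schema`) and the model-prefix check are assumptions based on the PR description.

```python
from typing import Any

# Assumption: OpenRouter model ids for OpenAI models are namespaced like
# "openai/gpt-4o"; other vendors use prefixes such as "google/" or "anthropic/".
OPENAI_PREFIXES = ("openai/", "gpt-")


def is_openai_model(model: str) -> bool:
    """Return True when the OpenRouter model id refers to an OpenAI model."""
    return model.startswith(OPENAI_PREFIXES)


def convert_tool_schema(tool: dict[str, Any], model: str) -> dict[str, Any]:
    """Convert a tool definition to Chat Completions format.

    Adds strict mode only for non-OpenAI models: OpenAI requires every
    property to appear in `required` when strict is set, which breaks MCP
    tools that have optional parameters.
    """
    params = tool.get("parameters", {"type": "object", "properties": {}})
    function: dict[str, Any] = {
        "name": tool["name"],
        "description": tool.get("description", ""),
        "parameters": params,
    }
    if not is_openai_model(model) and params.get("required"):
        function["strict"] = True
        params["additionalProperties"] = False
    return {"type": "function", "function": function}
```

Note the `params.get("required")` guard: strict mode is only worth enabling when the schema actually constrains something, which matches the edge case the review comments discuss below.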

Summary by CodeRabbit

  • New Features

    • Enhanced sanitization of tool outputs to render exceptions as readable error messages.
  • Improvements

    • Unified OpenRouter to use the Chat Completions API for consistent behavior across models.
    • Tool-schema translation now enables strict-mode for non-OpenAI models and omits it for OpenAI models.
  • Tests

    • Added async tests verifying strict-mode behavior for OpenAI vs non-OpenAI models.
  • Documentation

    • Updated example guidance and test docstrings to reflect these changes.


coderabbitai bot commented Dec 29, 2025

📝 Walkthrough

Unifies OpenRouter LLM to use the Chat Completions API for all models, removes Responses API branches and helpers, adds strict-mode tool-schema conversion for non-OpenAI models, refines sanitize logic to format Exceptions, and updates example and tests accordingly.

Changes

  • Core LLM sanitize — agents-core/vision_agents/core/llm/llm.py
    `_sanitize_tool_output` now keeps strings, formats Exception/BaseException as `Error: <ExceptionName>: <message>`, and JSON-dumps everything else; docstring updated.
  • OpenRouter LLM (single-path) — plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py
    Removed the Responses API branches and helpers (`add_conversation_history`, `_handle_tool_calls`); consolidated to a Chat Completions-only flow (`_create_response_chat_completions`, `_build_chat_messages`, `_chat_completions_internal`, etc.); tool-schema conversion enables strict mode for non-OpenAI models.
  • OpenRouter tests — plugins/openrouter/tests/test_openrouter_llm.py
    Added async tests `test_strict_mode_for_non_openai` and `test_no_strict_mode_for_openai`; updated two test docstrings.
  • OpenRouter example — plugins/openrouter/example/openrouter_example.py
    Fixed the model string to "openrouter/auto" and revised the example's tool-use guidance and chaining wording.
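The sanitize change can be illustrated with a minimal standalone sketch. The real code lives in `LLM._sanitize_tool_output`; this version only mirrors the behavior summarized above and is not the actual implementation.

```python
import json
from typing import Any


def sanitize_tool_output(output: Any) -> str:
    """Render a tool result as a string for the follow-up chat message.

    Mirrors the described behavior: strings pass through unchanged,
    exceptions become readable "Error: <Name>: <message>" strings, and
    everything else is JSON-serialized.
    """
    if isinstance(output, str):
        return output
    if isinstance(output, BaseException):
        return f"Error: {type(output).__name__}: {output}"
    return json.dumps(output, default=str)
```

Formatting exceptions this way lets the model read the failure in plain text instead of choking on a non-serializable Exception object.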

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant OpenRouterLLM
    participant ChatCompletionsAPI
    participant ToolExecutor
    rect rgba(173,216,230,0.12)
        Note over OpenRouterLLM,ChatCompletionsAPI: Unified Chat Completions path (strict-mode for non-OpenAI)
    end
    Client->>OpenRouterLLM: create_response(input, tools...)
    OpenRouterLLM->>ChatCompletionsAPI: build messages + tool schemas (strict if non-OpenAI)
    ChatCompletionsAPI-->>OpenRouterLLM: response (streamed or final; may request tool)
    alt tool call requested
        OpenRouterLLM->>ToolExecutor: execute tool call
        ToolExecutor-->>OpenRouterLLM: tool output (value or Exception)
        OpenRouterLLM->>OpenRouterLLM: _sanitize_tool_output (format exceptions)
        OpenRouterLLM->>ChatCompletionsAPI: send tool result as new message
        ChatCompletionsAPI-->>OpenRouterLLM: continued/final response
    end
    OpenRouterLLM-->>Client: final composed response
```
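The "send tool result as new message" step in the diagram follows the standard Chat Completions convention: an assistant message carrying `tool_calls` is paired with a `tool`-role message holding the sanitized result. A minimal illustration, where `append_tool_round` is a hypothetical helper rather than code from this PR:

```python
from typing import Any


def append_tool_round(
    messages: list[dict[str, Any]],
    tool_call_id: str,
    tool_name: str,
    arguments_json: str,
    sanitized_output: str,
) -> list[dict[str, Any]]:
    """Append one tool round: the assistant's tool_calls message followed by
    the matching tool-result message (linked via tool_call_id)."""
    messages.append({
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": tool_call_id,
            "type": "function",
            "function": {"name": tool_name, "arguments": arguments_json},
        }],
    })
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call_id,
        "content": sanitized_output,
    })
    return messages
```

Both messages must be appended before the next completion request, or the API rejects the orphaned tool result.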

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

Suggested labels

examples

Suggested reviewers

  • tschellenbach
  • d3xvn

Poem

I pared the twin roads down to neutral gray,
The old replies went under—no more two.
An error's name blooms cold and plain as day,
Laid flat and read, a fossil in the blue.
One current carries question back to you.

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)

  • Description Check — ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — ✅ Passed: The title "fix openrouter function calling" directly aligns with the PR's main objective of fixing OpenRouter function calling by consolidating to the Chat Completions API and adding conditional strict-mode behavior.
  • Docstring Coverage — ✅ Passed: Docstring coverage is 100.00%, which meets the required threshold of 80.00%.

📜 Recent review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 651fe93 and 83df11b.

📒 Files selected for processing (1)
  • agents-core/vision_agents/core/llm/llm.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (.cursor/rules/python.mdc)

**/*.py: Never adjust sys.path in Python code
Never write except Exception as e - use specific exception handling
Avoid using getattr, hasattr, delattr and setattr; prefer normal attribute access in Python
Docstrings should follow the Google style guide for docstrings

Files:

  • agents-core/vision_agents/core/llm/llm.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: unit / Test "not integration"
  • GitHub Check: unit / Mypy
  • GitHub Check: unit / Mypy
  • GitHub Check: unit / Test "not integration"
  • GitHub Check: unit / Ruff
  • GitHub Check: unit / Validate extra dependencies in "agents-core/pyproject.toml"
🔇 Additional comments (1)
agents-core/vision_agents/core/llm/llm.py (1)

400-405: LGTM! Exception handling is correctly implemented.

The sanitization logic properly handles strings as-is, formats Exception instances with their type name and message, and JSON-serializes other values. The approach is appropriate for sanitizing tool outputs.



coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
plugins/openrouter/tests/test_openrouter_llm.py (1)

55-76: Consider adding a test case for tools without required parameters.

The implementation only applies strict mode when params.get("required") is truthy (line 225 in openrouter_llm.py). Testing this edge case would provide more comprehensive coverage.

🔎 Suggested additional test

```python
async def test_no_strict_mode_without_required_params(self):
    """Tools without required params should not have strict mode even for non-OpenAI models."""
    llm = LLM(model="google/gemini-2.0-flash-001")
    tools = [
        {
            "name": "test_tool",
            "description": "A test",
            "parameters": {
                "type": "object",
                "properties": {"foo": {"type": "string"}},
            },
        }
    ]
    converted = llm._convert_tools_to_chat_completions_format(tools)
    func = converted[0]["function"]
    assert func.get("strict") is None
    assert func["parameters"].get("additionalProperties") is None
```
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 515fd64 and b2b3484.

📒 Files selected for processing (4)
  • agents-core/vision_agents/core/llm/llm.py
  • plugins/openrouter/example/openrouter_example.py
  • plugins/openrouter/tests/test_openrouter_llm.py
  • plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (.cursor/rules/python.mdc)

**/*.py: Never adjust sys.path in Python code
Never write except Exception as e - use specific exception handling
Avoid using getattr, hasattr, delattr and setattr; prefer normal attribute access in Python
Docstrings should follow the Google style guide for docstrings

Files:

  • agents-core/vision_agents/core/llm/llm.py
  • plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py
  • plugins/openrouter/tests/test_openrouter_llm.py
  • plugins/openrouter/example/openrouter_example.py
**/*test*.py

📄 CodeRabbit inference engine (.cursor/rules/python.mdc)

**/*test*.py: Never mock in tests; use pytest for testing
Mark integration tests with @pytest.mark.integration decorator
@pytest.mark.asyncio is not needed - it is automatic

Files:

  • plugins/openrouter/tests/test_openrouter_llm.py
🧬 Code graph analysis (1)
plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (3)
plugins/openai/vision_agents/plugins/openai/openai_llm.py (2)
  • create_conversation (110-112)
  • create_response (118-207)
plugins/openai/vision_agents/plugins/openai/chat_completions/chat_completions_llm.py (1)
  • create_response (107-145)
agents-core/vision_agents/core/llm/llm.py (1)
  • LLMResponseEvent (39-43)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
  • GitHub Check: unit / Mypy
  • GitHub Check: unit / Test "not integration"
  • GitHub Check: unit / Validate extra dependencies in "agents-core/pyproject.toml"
  • GitHub Check: unit / Ruff
  • GitHub Check: unit / Test "not integration"
  • GitHub Check: unit / Validate extra dependencies in "agents-core/pyproject.toml"
  • GitHub Check: unit / Ruff
  • GitHub Check: unit / Mypy
🔇 Additional comments (12)
plugins/openrouter/example/openrouter_example.py (2)

36-40: LGTM!

The updated comment accurately reflects the new Chat Completions API approach, and the fixed model string simplifies the example.


82-86: LGTM!

The updated tool use rules provide clearer guidance with concrete examples, which will help users understand the expected chaining behavior.

plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (7)

1-6: LGTM!

The updated module docstring clearly communicates the unified Chat Completions API approach.


78-86: LGTM!

The _is_openai_model check with clear documentation of the OpenAI strict mode constraint is helpful for understanding the conditional behavior.


91-97: LGTM!

Consolidating to a single Chat Completions API path simplifies the implementation and improves maintainability.


195-230: LGTM!

The conditional strict mode logic is well-designed:

  • Enables strict mode for non-OpenAI models to improve schema adherence
  • Disables it for OpenAI models to support optional parameters in MCP tools
  • Only applies strict mode when required parameters exist, which is appropriate

The docstring clearly explains the rationale.


266-338: LGTM!

The streaming implementation correctly handles:

  • Real-time chunk emission for TTS
  • Tool call accumulation across chunks
  • Narration suppression when tool calls are present (preventing "Let me check..." from being spoken)
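Tool-call accumulation across streamed chunks typically works by merging per-index deltas, since Chat Completions streams each tool call as fragments (the arguments JSON arrives in pieces). A rough sketch under those conventions, with chunk shapes simplified to plain dicts; this is not the plugin's actual implementation:

```python
from typing import Any


def accumulate_tool_call_deltas(chunks: list[dict[str, Any]]) -> list[dict[str, str]]:
    """Merge streamed tool-call deltas into complete calls.

    Each delta is keyed by `index`; the id and function name arrive once,
    while the arguments string is concatenated across fragments.
    """
    calls: dict[int, dict[str, str]] = {}
    for chunk in chunks:
        for delta in chunk.get("tool_calls", []):
            idx = delta["index"]
            call = calls.setdefault(idx, {"id": "", "name": "", "arguments": ""})
            if delta.get("id"):
                call["id"] = delta["id"]
            fn = delta.get("function", {})
            if fn.get("name"):
                call["name"] = fn["name"]
            call["arguments"] += fn.get("arguments", "")
    # Preserve the order the model emitted the calls in
    return [calls[i] for i in sorted(calls)]
```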

443-592: LGTM!

The tool call handling logic is robust:

  • Multi-round execution with proper deduplication
  • Correct message formatting for assistant tool_calls and tool results
  • Proper use of _sanitize_tool_output for error and result handling
  • Buffering of intermediate text to prevent narration between tool calls
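Deduplication across rounds is usually keyed on the tool name plus canonicalized arguments, so the same call with differently ordered JSON keys is still caught. A hypothetical sketch of such a key (not taken from this PR):

```python
import json


def dedupe_key(name: str, arguments_json: str) -> tuple[str, str]:
    """Stable identity for a tool call across execution rounds.

    Canonicalizes the arguments JSON (sorted keys) so key order does not
    defeat the duplicate check; malformed JSON falls back to the raw string.
    """
    try:
        canonical = json.dumps(json.loads(arguments_json), sort_keys=True)
    except json.JSONDecodeError:
        canonical = arguments_json
    return (name, canonical)
```

A set of these keys, checked before each execution, prevents the model from re-running an identical call in a later round.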

103-149: LGTM!

The response creation flow properly:

  • Builds messages with conversation history
  • Adds tools when available
  • Updates conversation after the exchange completes
plugins/openrouter/tests/test_openrouter_llm.py (3)

55-65: LGTM!

The test correctly validates that non-OpenAI models (Gemini) enable strict mode with the expected schema constraints.


66-76: LGTM!

The test correctly validates that OpenAI models do not enable strict mode, allowing for optional parameters in MCP tools.


159-160: LGTM!

The docstring updates correctly reflect that there's now only one API path (Chat Completions), removing the now-redundant parenthetical clarifications.

Also applies to: 183-184

@d3xvn (Contributor) left a comment

LGTM

@maxkahan maxkahan merged commit 79778a1 into main Dec 30, 2025
10 checks passed
@maxkahan maxkahan deleted the fix-openrouter-function-calling branch December 30, 2025 13:35