📝 Walkthrough

Unifies the OpenRouter LLM to use the Chat Completions API for all models, removes Responses API branches and helpers, adds strict-mode tool-schema conversion for non-OpenAI models, refines sanitize logic to format Exceptions, and updates the example and tests accordingly.
Sequence Diagram(s)

(mermaid sequence diagram omitted)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes
Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
📜 Recent review details

Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

Disabled knowledge base sources:

- Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
🧰 Additional context used

📓 Path-based instructions (1)

**/*.py

📄 CodeRabbit inference engine (.cursor/rules/python.mdc)
Files:
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
🔇 Additional comments (1)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 1
🧹 Nitpick comments (1)
plugins/openrouter/tests/test_openrouter_llm.py (1)
55-76: Consider adding a test case for tools without required parameters.

The implementation only applies strict mode when `params.get("required")` is truthy (line 225 in openrouter_llm.py). Testing this edge case would provide more comprehensive coverage.

🔎 Suggested additional test
```python
async def test_no_strict_mode_without_required_params(self):
    """Tools without required params should not have strict mode even for non-OpenAI models."""
    llm = LLM(model="google/gemini-2.0-flash-001")
    tools = [
        {
            "name": "test_tool",
            "description": "A test",
            "parameters": {
                "type": "object",
                "properties": {"foo": {"type": "string"}},
            },
        }
    ]
    converted = llm._convert_tools_to_chat_completions_format(tools)
    func = converted[0]["function"]
    assert func.get("strict") is None
    assert func["parameters"].get("additionalProperties") is None
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (4)
- agents-core/vision_agents/core/llm/llm.py
- plugins/openrouter/example/openrouter_example.py
- plugins/openrouter/tests/test_openrouter_llm.py
- plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py
📄 CodeRabbit inference engine (.cursor/rules/python.mdc)
**/*.py: Never adjust sys.path in Python code
Never write `except Exception as e` - use specific exception handling
Avoid using getattr, hasattr, delattr and setattr; prefer normal attribute access in Python
Docstrings should follow the Google style guide for docstrings
Files:
- agents-core/vision_agents/core/llm/llm.py
- plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py
- plugins/openrouter/tests/test_openrouter_llm.py
- plugins/openrouter/example/openrouter_example.py
**/*test*.py
📄 CodeRabbit inference engine (.cursor/rules/python.mdc)
**/*test*.py: Never mock in tests; use pytest for testing
Mark integration tests with @pytest.mark.integration decorator
@pytest.mark.asyncio is not needed - it is automatic
Files:
plugins/openrouter/tests/test_openrouter_llm.py
🧬 Code graph analysis (1)
plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (3)
plugins/openai/vision_agents/plugins/openai/openai_llm.py (2)
- create_conversation (110-112)
- create_response (118-207)

plugins/openai/vision_agents/plugins/openai/chat_completions/chat_completions_llm.py (1)

- create_response (107-145)

agents-core/vision_agents/core/llm/llm.py (1)

- LLMResponseEvent (39-43)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (8)
- GitHub Check: unit / Mypy
- GitHub Check: unit / Test "not integration"
- GitHub Check: unit / Validate extra dependencies in "agents-core/pyproject.toml"
- GitHub Check: unit / Ruff
- GitHub Check: unit / Test "not integration"
- GitHub Check: unit / Validate extra dependencies in "agents-core/pyproject.toml"
- GitHub Check: unit / Ruff
- GitHub Check: unit / Mypy
🔇 Additional comments (12)
plugins/openrouter/example/openrouter_example.py (2)
36-40: LGTM! The updated comment accurately reflects the new Chat Completions API approach, and the fixed model string simplifies the example.
82-86: LGTM! The updated tool use rules provide clearer guidance with concrete examples, which will help users understand the expected chaining behavior.
plugins/openrouter/vision_agents/plugins/openrouter/openrouter_llm.py (7)
1-6: LGTM! The updated module docstring clearly communicates the unified Chat Completions API approach.
78-86: LGTM! The `_is_openai_model` check, with clear documentation of the OpenAI strict mode constraint, is helpful for understanding the conditional behavior.
91-97: LGTM! Consolidating to a single Chat Completions API path simplifies the implementation and improves maintainability.
195-230: LGTM! The conditional strict mode logic is well-designed:
- Enables strict mode for non-OpenAI models to improve schema adherence
- Disables it for OpenAI models to support optional parameters in MCP tools
- Only applies strict mode when `required` parameters exist, which is appropriate

The docstring clearly explains the rationale.
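For readers following along, a minimal sketch of what that conversion can look like is below. The names `_convert_tools_to_chat_completions_format` and `_is_openai_model` come from this PR, but the body is an illustrative assumption checked only against the tests described in this review, not the plugin's actual code.

```python
def _convert_tools_to_chat_completions_format(self, tools: list[dict]) -> list[dict]:
    """Sketch: wrap tool specs in Chat Completions format, toggling strict mode."""
    converted = []
    for tool in tools:
        params = dict(tool.get("parameters") or {})
        func: dict = {
            "name": tool["name"],
            "description": tool.get("description", ""),
            "parameters": params,
        }
        # Assumed gate mirroring the review note: strict mode only for
        # non-OpenAI models, and only when the schema declares required params.
        if not self._is_openai_model() and params.get("required"):
            func["strict"] = True
            params["additionalProperties"] = False
        converted.append({"type": "function", "function": func})
    return converted
```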
266-338: LGTM! The streaming implementation correctly handles:
- Real-time chunk emission for TTS
- Tool call accumulation across chunks
- Narration suppression when tool calls are present (preventing "Let me check..." from being spoken)
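As an illustration of the accumulation pattern described above, here is a hedged sketch of gathering streamed tool-call deltas from an OpenAI-style Chat Completions stream while holding back narration. The variable names (`stream`, `tool_calls`, `text_parts`) are illustrative and not taken from the plugin.

```python
tool_calls: dict[int, dict] = {}  # keyed by the tool_call index within the response
text_parts: list[str] = []

async for chunk in stream:  # stream: an OpenAI-style chat.completions stream
    delta = chunk.choices[0].delta
    if delta.tool_calls:
        for tc in delta.tool_calls:
            slot = tool_calls.setdefault(
                tc.index, {"id": None, "name": "", "arguments": ""}
            )
            if tc.id:
                slot["id"] = tc.id
            if tc.function and tc.function.name:
                slot["name"] = tc.function.name
            if tc.function and tc.function.arguments:
                slot["arguments"] += tc.function.arguments
    elif delta.content and not tool_calls:
        # Forward text chunks for TTS only while no tool call has appeared,
        # so "Let me check..." style narration is not spoken aloud.
        text_parts.append(delta.content)
```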
443-592: LGTM! The tool call handling logic is robust:
- Multi-round execution with proper deduplication
- Correct message formatting for assistant tool_calls and tool results
- Proper use of `_sanitize_tool_output` for error and result handling
- Buffering of intermediate text to prevent narration between tool calls
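The message shapes involved are the standard Chat Completions ones. The fragment below sketches one assumed round of that loop, as it might appear inside the method: `execute_tool` is a hypothetical helper, only `_sanitize_tool_output` is a name taken from this PR, and the exact control flow is an assumption.

```python
import json

# One assumed round: echo the model's tool calls back as an assistant message,
# then append a tool-role message per call with the sanitized result.
messages.append({
    "role": "assistant",
    "content": None,
    "tool_calls": [
        {
            "id": call["id"],
            "type": "function",
            "function": {"name": call["name"], "arguments": call["arguments"]},
        }
        for call in tool_calls.values()
    ],
})
for call in tool_calls.values():
    try:
        result = await execute_tool(call["name"], json.loads(call["arguments"] or "{}"))
    except ValueError as exc:  # specific exception, per the repo's Python rules
        result = exc  # _sanitize_tool_output is said to format Exceptions
    messages.append({
        "role": "tool",
        "tool_call_id": call["id"],
        "content": self._sanitize_tool_output(result),
    })
```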
103-149: LGTM! The response creation flow properly:
- Builds messages with conversation history
- Adds tools when available
- Updates conversation after the exchange completes
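A rough outline of that flow, under the assumption that the plugin drives OpenRouter through an OpenAI-compatible client; the attribute names (`self.client`, `self.model`) and the `conversation_history` variable are illustrative, not confirmed by the diff.

```python
# Assumed shape of a single request: prior conversation turns plus the new
# user message, with converted tools attached only when any are registered.
messages = [*conversation_history, {"role": "user", "content": text}]
kwargs: dict = {}
if tools:
    kwargs["tools"] = self._convert_tools_to_chat_completions_format(tools)

response_stream = await self.client.chat.completions.create(
    model=self.model,
    messages=messages,
    stream=True,
    **kwargs,
)
# After the exchange completes, the new user/assistant turns are written back
# into the conversation so later requests see the full history.
```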
plugins/openrouter/tests/test_openrouter_llm.py (3)
55-65: LGTM! The test correctly validates that non-OpenAI models (Gemini) enable strict mode with the expected schema constraints.
66-76: LGTM! The test correctly validates that OpenAI models do not enable strict mode, allowing for optional parameters in MCP tools.
159-160: LGTM! The docstring updates correctly reflect that there's now only one API path (Chat Completions), removing the now-redundant parenthetical clarifications.
Also applies to: 183-184
- Enabled for non-OpenAI models (Gemini, Claude, etc.) to help them follow tool schemas
- Disabled for OpenAI models (they require all properties in `required` when strict is set, which breaks MCP tools with optional params)
Summary by CodeRabbit
New Features
Improvements
Tests
Documentation
✏️ Tip: You can customize this high-level summary in your review settings.