feat: add support for using MCPs on /chat/completions #17747
Conversation
ishaan-jaff left a comment
reviewed
litellm/main.py (Outdated)
```python
raise Exception("Mock completion response failed - {}".format(e))

# ...

async def _call_acompletion_internal(
```
please don't add this to main.py, we don't need any more code here
Good point — I’ve updated the PR.
litellm/main.py (Outdated)
```python
tool_choice = validate_chat_completion_tool_choice(tool_choice=tool_choice)

# ...

skip_mcp_handler = kwargs.pop("_skip_mcp_handler", False)
if not skip_mcp_handler:
```
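For context, the guard in this snippet follows a common sentinel-kwarg pattern: pop a private flag off `kwargs` before recursing, so the re-entrant call skips the handler and the flag never leaks to the underlying provider call. A minimal sketch of that pattern, with hypothetical stand-in names (`handle_mcp_tools` and this toy `completion` are illustrations, not the PR's actual implementation):

```python
def handle_mcp_tools(**kwargs):
    """Hypothetical stand-in for the MCP pre-processing step."""
    kwargs.setdefault("calls", []).append("mcp")
    return kwargs

def completion(**kwargs):
    # Pop the private flag so it never reaches the provider call below.
    skip_mcp_handler = kwargs.pop("_skip_mcp_handler", False)
    if not skip_mcp_handler:
        kwargs = handle_mcp_tools(**kwargs)
        # Re-enter with the flag set so the handler runs exactly once.
        return completion(_skip_mcp_handler=True, **kwargs)
    return kwargs

result = completion(model="gpt-4o")
# The handler ran once and the sentinel flag was stripped.
assert result == {"model": "gpt-4o", "calls": ["mcp"]}
```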
if possible please can we avoid adding more code in main.py; new code can be placed in a different file
docs/my-website/docs/mcp.md (Outdated)
```markdown
</TabItem>

#### Use MCP tools with `/chat/completions`
```
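As context for this docs section, a hedged sketch of what an MCP tool entry in a `/chat/completions` request body could look like. The field names here (`"type": "mcp"`, `server_label`, `server_url`) are assumptions for illustration only, not the schema this PR actually ships; check the merged docs for the real shape.

```python
import json

# Hypothetical request body -- the MCP tool fields below are assumed,
# not confirmed by this PR.
payload = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "List my open issues"}],
    "tools": [
        {
            "type": "mcp",                          # assumed tool type
            "server_label": "github",               # assumed field
            "server_url": "https://example.com/mcp" # assumed field
        }
    ],
}

# Serialize as it would be sent to the /chat/completions endpoint.
body = json.dumps(payload)
```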
can you make it clear this works across ALL LLM providers
I’ve updated the PR — could you take a look and let me know if it matches what you had in mind?
```diff
      tool_server_map[tool_name] = allowed_mcp_servers[0]
  else:
-     tool_server_map[tool_name], _ = split_server_prefix_from_name(
+     _, tool_server_map[tool_name] = split_server_prefix_from_name(
```
The tuple unpacking order here is reversed. Because of that, `tool_server_map` ends up using the tool name as both the key and the value, instead of mapping `tool_name -> server_name`. This change fixes that issue.
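The swap can be seen with a toy stand-in for `split_server_prefix_from_name`. The real helper's signature and return order are not shown in this PR, so the version below (returning `(bare_tool_name, server_name)`, consistent with the fixed unpacking) is an assumption for illustration:

```python
# Hypothetical stand-in: assumes prefixed names look like "server-tool"
# and the helper returns (bare_tool_name, server_name).
def split_server_prefix_from_name(prefixed: str) -> tuple[str, str]:
    server, _, tool = prefixed.partition("-")
    return tool, server

tool_name = "github-list_issues"
tool_server_map = {}

# Buggy unpacking: the first element (the bare tool name) is stored
# as the "server", so the map holds a tool name as its value.
tool_server_map[tool_name], _ = split_server_prefix_from_name(tool_name)
assert tool_server_map[tool_name] == "list_issues"  # wrong: not a server

# Fixed unpacking: keep the second element, the server name.
_, tool_server_map[tool_name] = split_server_prefix_from_name(tool_name)
assert tool_server_map[tool_name] == "github"  # correct mapping
```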
ishaan-jaff left a comment
```python
tools = validate_and_fix_openai_tools(tools=tools)
# validate tool_choice
tool_choice = validate_chat_completion_tool_choice(tool_choice=tool_choice)
```
does this need to be in main.py?
Fixed
Title
feat: add support for using MCPs on /chat/completions
Relevant issues
LIT-1537
Pre-Submission checklist
Please complete all items before asking a LiteLLM maintainer to review your PR
- Add at least 1 test in the tests/litellm/ directory (a hard requirement - see details)
- Run make test-unit

Type
🆕 New Feature
🐛 Bug Fix
📖 Documentation
✅ Test
Changes