
Conversation

@uc4w6c (Collaborator) commented Dec 10, 2025

Title

feat: add support for using MCPs on /chat/completions

Relevant issues

LIT-1537

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have Added testing in the tests/litellm/ directory, Adding at least 1 test is a hard requirement - see details
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🆕 New Feature
🐛 Bug Fix
📖 Documentation
✅ Test

Changes

Example request attaching an MCP tool to a /chat/completions call:

curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-xxxxx" \
  -d '{
    "model": "gpt-4o",
    "messages": [
        {
            "role": "user",
            "content": "add 1 + 1"
        }
    ],
    "tools": [
      {
        "type": "mcp",
        "server_label": "litellm",
        "server_url": "litellm_proxy/mcp/everything",
        "require_approval": "never",
        "headers": {
          "x-litellm-api-key": "Bearer sk-xxxxx"
        }
      }
    ]
  }'

{"id":"chatcmpl-Cl3kfjQkTB3lWV4GvWaUSkbYbBnMK","created":1765331873,"model":"gpt-4o-2024-08-06","object":"chat.completion","system_fingerprint":"fp_83554c687e","choices":[{"finish_reason":"stop","index":0,"message":{"content":"The sum of 1 and 1 is 2.","role":"assistant"}}],"usage":{"completion_tokens":13,"prompt_tokens":482,"total_tokens":495,"completion_tokens_details":{"accepted_prediction_tokens":0,"audio_tokens":0,"reasoning_tokens":0,"rejected_prediction_tokens":0},"prompt_tokens_details":{"audio_tokens":0,"cached_tokens":0}},"service_tier":"default"}


@ishaan-jaff (Contributor) reviewed
litellm/main.py Outdated
raise Exception("Mock completion response failed - {}".format(e))


async def _call_acompletion_internal(
Contributor

please don't add this to main.py, we don't need any more code here

Collaborator Author

Good point — I’ve updated the PR.

litellm/main.py Outdated
tool_choice = validate_chat_completion_tool_choice(tool_choice=tool_choice)

skip_mcp_handler = kwargs.pop("_skip_mcp_handler", False)
if not skip_mcp_handler:
Contributor

if possible, please can we avoid adding more code in main.py; new code can be placed in a different file
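
One possible shape for that split, with hypothetical module and function names (illustrative only, not the merged implementation): main.py keeps a single call site and the MCP checks live in a separate helper module.

# Hypothetical sketch of a separate helper module (names are illustrative only);
# main.py would then only need a one-line call instead of inline handler logic.
from typing import Any, Dict, List, Optional


def has_mcp_tools(tools: Optional[List[Dict[str, Any]]]) -> bool:
    """Return True if any tool entry is an MCP tool spec (type == "mcp")."""
    return bool(tools) and any(t.get("type") == "mcp" for t in tools)


def should_run_mcp_handler(
    tools: Optional[List[Dict[str, Any]]],
    kwargs: Dict[str, Any],
) -> bool:
    """Decide whether the MCP pre-processing path should run for this call.

    Respects the internal _skip_mcp_handler flag so the recursive call made by
    the handler itself does not loop back into MCP processing.
    """
    skip = kwargs.pop("_skip_mcp_handler", False)
    return not skip and has_mcp_tools(tools)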

</TabItem>

#### Use MCP tools with `/chat/completions`

Contributor

can you make it clear this works across ALL LLM providers?

Collaborator Author

I’ve updated the PR — could you take a look and let me know if it matches what you had in mind?
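
One way the docs could show the provider-agnostic behavior: reuse the exact same MCP tool spec against different models routed by the proxy. A sketch, assuming gpt-4o and claude-3-5-sonnet are model aliases configured on the proxy:

# Sketch: the same MCP tool spec works regardless of which provider the proxy
# routes the model to. Model names below are assumed proxy aliases.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000/v1", api_key="sk-xxxxx")

mcp_tool = {
    "type": "mcp",
    "server_label": "litellm",
    "server_url": "litellm_proxy/mcp/everything",
    "require_approval": "never",
    "headers": {"x-litellm-api-key": "Bearer sk-xxxxx"},
}

for model in ["gpt-4o", "claude-3-5-sonnet"]:  # any models configured on the proxy
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "add 1 + 1"}],
        tools=[mcp_tool],
    )
    print(model, "->", resp.choices[0].message.content)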

tool_server_map[tool_name] = allowed_mcp_servers[0]
else:
tool_server_map[tool_name], _ = split_server_prefix_from_name(
_, tool_server_map[tool_name] = split_server_prefix_from_name(
Collaborator Author

The tuple unpacking order here is reversed. Because of that, tool_server_map ends up using the tool name as both the key and the value, instead of mapping tool_name -> server_name. This change fixes that issue.
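
For illustration, a small sketch of the before/after behavior, using a stand-in for split_server_prefix_from_name and assuming it returns a (tool_name, server_name) tuple, which is what the corrected unpacking implies:

# Illustration only: stand-in for split_server_prefix_from_name, assuming it
# returns (tool_name, server_name); the delimiter and signature are made up here.
def split_server_prefix_from_name(prefixed_name: str, delimiter: str = "-"):
    server_name, _, tool_name = prefixed_name.partition(delimiter)
    return tool_name, server_name


tool_server_map = {}
tool_name = "add"

# Before the fix: the server name was discarded and the tool name stored as the value.
tool_server_map[tool_name], _ = split_server_prefix_from_name("everything-add")
assert tool_server_map[tool_name] == "add"  # wrong: key and value are both the tool name

# After the fix: the server name is kept, giving tool_name -> server_name.
_, tool_server_map[tool_name] = split_server_prefix_from_name("everything-add")
assert tool_server_map[tool_name] == "everything"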

@ishaan-jaff (Contributor) left a comment

tools = validate_and_fix_openai_tools(tools=tools)
# validate tool_choice
tool_choice = validate_chat_completion_tool_choice(tool_choice=tool_choice)

Contributor

does this need to be in main.py?

Collaborator Author

Fixed

@uc4w6c uc4w6c merged commit 8899b63 into main Dec 12, 2025
47 of 59 checks passed