
@codeflash-ai codeflash-ai bot commented Oct 24, 2025

📄 9% (0.09x) speedup for MCPClientBase.get_system_prompt in src/mistralai/extra/mcp/base.py

⏱️ Runtime: 5.69 milliseconds → 5.22 milliseconds (best of 169 runs)

📝 Explanation and details

Two micro-optimizations deliver a 9% runtime improvement and 3% throughput boost:

1. Simplified comparison in _convert_content:
Changed not mcp_content.type == "text" to mcp_content.type != "text". This eliminates the overhead of the not operator and chained comparison, reducing execution time from 354.9ns to 337.8ns per hit (5% faster per call).

2. Removed unnecessary typing.cast wrapper:
Eliminated the typing.cast() call in the list comprehension within get_system_prompt. The cast provided no runtime value since the dictionary already matches the expected type structure. This reduces the overhead in the message processing loop, improving from 7.01ms to 5.20ms total time (26% faster for the list comprehension).
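A sketch of why dropping the cast is safe: `typing.cast` is a plain function that returns its argument unchanged at runtime, so removing it from a comprehension changes nothing but the per-element call overhead (the message dicts below are illustrative, not the library's actual shape):

```python
import typing

# Hypothetical message dicts standing in for converted prompt messages.
messages = [{"role": "system", "content": "hello"} for _ in range(3)]

# Before: each element passes through typing.cast, a real function call per item.
with_cast = [typing.cast(dict, m) for m in messages]

# After: cast dropped. typing.cast returns its argument unchanged at runtime,
# so the result is identical, minus the per-element call.
without_cast = [m for m in messages]

print(with_cast == without_cast)  # True
```

Static type checkers still see the annotated return type of `get_system_prompt`, so the cast was only ever a hint, not a conversion.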

The optimizations are particularly effective for workloads with:

  • High message volumes: The removed typing.cast scales linearly with message count
  • Frequent content validation: The simplified comparison benefits repeated _convert_content calls
  • Batch processing scenarios: Both optimizations compound when processing multiple prompts

These changes preserve all functionality while eliminating unnecessary Python overhead in hot code paths.

Correctness verification report:

| Test                          | Status        |
|-------------------------------|---------------|
| ⚙️ Existing Unit Tests        | 🔘 None Found |
| 🌀 Generated Regression Tests | 6 Passed      |
| ⏪ Replay Tests               | 🔘 None Found |
| 🔎 Concolic Coverage Tests    | 🔘 None Found |
| 📊 Tests Coverage             | 100.0%        |
🌀 Generated Regression Tests and Runtime
import asyncio  # used to run async functions
# --- Function to test (copied exactly as provided) ---
import typing
from contextlib import AsyncExitStack
from typing import Any, Optional, Union
from unittest.mock import AsyncMock, MagicMock, patch

import pytest  # used for our unit tests
from mcp import ClientSession
from mcp.types import EmbeddedResource, ImageContent, TextContent
from mistralai.extra.exceptions import MCPException
from mistralai.extra.mcp.base import MCPClientBase
from mistralai.models import (AssistantMessageTypedDict,
                              SystemMessageTypedDict, TextChunkTypedDict)

# --- Helper Classes for Mocking ---

class MockTextContent:
    def __init__(self, text):
        self.type = "text"
        self.text = text

class MockImageContent:
    def __init__(self, url):
        self.type = "image"
        self.url = url

class MockMessage:
    def __init__(self, role, content):
        self.role = role
        self.content = content

class MockPromptResult:
    def __init__(self, description, messages):
        self.description = description
        self.messages = messages

# --- Fixtures ---

@pytest.fixture
def mcp_client():
    client = MCPClientBase()
    client._session = MagicMock()
    return client

# --- Basic Test Cases ---

@pytest.mark.asyncio
async def test_get_system_prompt_non_text_message_raises(mcp_client):
    """Test that non-text message content raises MCPException."""
    mock_message = MockMessage("system", MockImageContent("http://image.url"))
    mock_prompt_result = MockPromptResult("Has image", [mock_message])
    mcp_client._session.get_prompt = AsyncMock(return_value=mock_prompt_result)

    with pytest.raises(MCPException):
        await mcp_client.get_system_prompt("has_image", {})

@pytest.mark.asyncio
async def test_get_system_prompt_exception_from_get_prompt(mcp_client):
    """Test that exceptions from get_prompt are propagated."""
    mcp_client._session.get_prompt = AsyncMock(side_effect=RuntimeError("boom"))
    with pytest.raises(RuntimeError, match="boom"):
        await mcp_client.get_system_prompt("err", {})

# --- Large Scale Test Cases ---

#------------------------------------------------
import asyncio  # used to run async functions
# --- Function to test (EXACT COPY, DO NOT MODIFY) ---
import typing
from contextlib import AsyncExitStack
from typing import Any, Optional, Union
from unittest.mock import AsyncMock, MagicMock

import pytest  # used for our unit tests
from mcp import ClientSession
from mcp.types import EmbeddedResource, ImageContent, TextContent
from mistralai.extra.exceptions import MCPException
from mistralai.extra.mcp.base import MCPClientBase
from mistralai.models import (AssistantMessageTypedDict,
                              SystemMessageTypedDict, TextChunkTypedDict)


# --- Mock classes for testing ---
class MockTextContent:
    def __init__(self, text):
        self.type = "text"
        self.text = text

class MockImageContent:
    def __init__(self, data):
        self.type = "image"
        self.data = data

class MockEmbeddedResource:
    def __init__(self, data):
        self.type = "embedded"
        self.data = data

class MockMessage:
    def __init__(self, role, content):
        self.role = role
        self.content = content

class MockPromptResult:
    def __init__(self, description, messages):
        self.description = description
        self.messages = messages

# --- Fixtures ---
@pytest.fixture
def client():
    # Create an MCPClientBase with a mocked session
    client = MCPClientBase()
    client._session = MagicMock()
    return client

@pytest.fixture
def basic_prompt_result():
    # A simple prompt result with one system message
    msg = MockMessage("system", MockTextContent("Hello world"))
    return MockPromptResult("Basic description", [msg])

@pytest.fixture
def multi_message_prompt_result():
    # Prompt result with multiple messages of different roles
    msgs = [
        MockMessage("system", MockTextContent("System message")),
        MockMessage("assistant", MockTextContent("Assistant message")),
        MockMessage("system", MockTextContent("Another system message")),
    ]
    return MockPromptResult("Multi-message description", msgs)

@pytest.fixture
def edge_prompt_result():
    # Prompt result with empty messages list
    return MockPromptResult("Empty messages", [])

@pytest.fixture
def non_text_content_prompt_result():
    # Prompt result with non-text content (should raise MCPException)
    msgs = [
        MockMessage("system", MockImageContent(b"imagebytes")),
        MockMessage("assistant", MockEmbeddedResource({"key": "value"})),
    ]
    return MockPromptResult("Non-text content", msgs)

# --- Basic Test Cases ---

@pytest.mark.asyncio
async def test_get_system_prompt_non_text_content_raises(client, non_text_content_prompt_result):
    """Test that non-text content raises MCPException."""
    client._session.get_prompt = AsyncMock(return_value=non_text_content_prompt_result)
    with pytest.raises(MCPException):
        await client.get_system_prompt("non_text", {})

#------------------------------------------------
from mistralai.extra.mcp.base import MCPClientBase

To edit these changes, check out the `codeflash/optimize-MCPClientBase.get_system_prompt-mh4gofov` branch and push.

@codeflash-ai codeflash-ai bot requested a review from mashraf-222 October 24, 2025 06:19
@codeflash-ai codeflash-ai bot added the ⚡️ codeflash Optimization PR opened by Codeflash AI label Oct 24, 2025