@codeflash-ai codeflash-ai bot commented Oct 23, 2025

📄 8% (0.08x) speedup for AsyncRawInvitesClient.delete in src/deepgram/manage/v1/projects/members/invites/raw_client.py

⏱️ Runtime : 22.8 milliseconds → 21.2 milliseconds (best of 110 runs)

📝 Explanation and details

The optimized code achieves a **7% runtime improvement** and **1.9% throughput increase** through strategic optimizations in the `jsonable_encoder` function, a critical serialization component called frequently throughout the codebase.

**Key Optimizations** (a minimal sketch of the resulting control flow follows the list):

1. **Early short-circuit for primitive types**: Moved the `isinstance(obj, (str, int, float, type(None)))` check to the very beginning of the function, allowing the most common data types to return immediately without going through multiple expensive `isinstance` checks. This eliminates overhead for ~47.7% of calls based on the profiler data.

2. **Eliminated redundant dictionary operations**: Removed the unnecessary `allowed_keys = set(obj.keys())` computation and the explicit loop with `append` operations for dictionary encoding, replacing them with a direct, more efficient dictionary comprehension.

3. **Type-specific collection handling**: Replaced the generic list-building approach for collections with type-aware comprehensions that preserve the original container type (tuple, list, set, frozenset), reducing object-creation overhead.

4. **Reordered custom encoder checks**: Moved custom encoder logic after the primitive type checks to avoid unnecessary dictionary lookups for common cases.

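To make the restructuring concrete, here is a minimal sketch of the check ordering described above. This is not the actual `deepgram.core.jsonable_encoder` implementation; the function name and the fallback branches at the end are illustrative assumptions.

```python
# A minimal sketch of the restructured encoder described above. This is NOT
# the actual deepgram.core.jsonable_encoder; the fallback branches near the
# end are illustrative assumptions.
import dataclasses
import datetime as dt
from enum import Enum
from typing import Any, Callable, Dict, Optional, Type


def sketch_jsonable_encoder(
    obj: Any,
    custom_encoder: Optional[Dict[Type[Any], Callable[[Any], Any]]] = None,
) -> Any:
    # (1) Early short-circuit: primitives return immediately.
    if isinstance(obj, (str, int, float, type(None))):
        return obj

    # (4) Custom encoders are consulted only after the primitive fast path.
    if custom_encoder:
        for cls, encoder in custom_encoder.items():
            if isinstance(obj, cls):
                return encoder(obj)

    # (2) Dicts: a single comprehension, no intermediate key set or append loop.
    if isinstance(obj, dict):
        return {
            sketch_jsonable_encoder(k, custom_encoder): sketch_jsonable_encoder(v, custom_encoder)
            for k, v in obj.items()
        }

    # (3) Collections: rebuild while preserving the original container type.
    if isinstance(obj, (list, tuple, set, frozenset)):
        return type(obj)(sketch_jsonable_encoder(item, custom_encoder) for item in obj)

    # Fallbacks for a few common non-primitive types (assumed for this sketch).
    if isinstance(obj, Enum):
        return obj.value
    if isinstance(obj, (dt.datetime, dt.date)):
        return obj.isoformat()
    if dataclasses.is_dataclass(obj) and not isinstance(obj, type):
        return sketch_jsonable_encoder(dataclasses.asdict(obj), custom_encoder)
    return str(obj)
```
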
**Performance Impact:**

- The line profiler shows `jsonable_encoder` time reduced from 4.38ms to 0.95ms (**~78% improvement**)
- Total function calls remain the same (1350), but per-call overhead drops significantly
- The optimization is most effective for workloads with many simple data types (strings, numbers) and nested data structures

**Test Case Benefits:**
The improvements are particularly beneficial for the high-volume concurrent test cases (`test_delete_throughput_high_volume` with 100-200 operations), where `jsonable_encoder` is called repeatedly during request URL construction and parameter serialization, leading to cumulative performance gains across the entire async operation pipeline.

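For context, path parameters such as `project_id` and `email` pass through `jsonable_encoder` before being interpolated into the request URL. A rough usage illustration follows; the path template is hypothetical and the exact call site in `raw_client.py` may differ.

```python
from deepgram.core.jsonable_encoder import jsonable_encoder

# Hypothetical URL construction; the real path template in raw_client.py may differ.
project_id = "proj_123"
email = "user@example.com"
path = f"v1/projects/{jsonable_encoder(project_id)}/members/invites/{jsonable_encoder(email)}"
# Both parameters are plain strings, so the early isinstance short-circuit
# returns them unchanged -- the common case the optimization targets.
```
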
Correctness verification report:

| Test | Status |
| --- | --- |
| ⚙️ Existing Unit Tests | 🔘 None Found |
| 🌀 Generated Regression Tests | 938 Passed |
| ⏪ Replay Tests | 🔘 None Found |
| 🔎 Concolic Coverage Tests | 🔘 None Found |
| 📊 Tests Coverage | 90.9% |
🌀 Generated Regression Tests and Runtime
import asyncio  # used to run async functions

import pytest  # used for our unit tests
from deepgram.core.api_error import ApiError
from deepgram.core.http_response import AsyncHttpResponse
from deepgram.errors.bad_request_error import BadRequestError
from deepgram.manage.v1.projects.members.invites.raw_client import \
    AsyncRawInvitesClient
from deepgram.types.delete_project_invite_v1response import \
    DeleteProjectInviteV1Response


# Mocks for dependencies
class DummyResponse:
    def __init__(self, status_code, json_data=None, text="", headers=None):
        self.status_code = status_code
        self._json_data = json_data
        self.text = text
        self.headers = headers or {}

    def json(self):
        if self._json_data is not None:
            return self._json_data
        raise Exception("No JSON data")

class DummyHttpxClient:
    def __init__(self, responses):
        self.responses = responses
        self.call_count = 0

    async def request(self, *args, **kwargs):
        # Return responses in order, or last one if out of range
        if self.call_count < len(self.responses):
            resp = self.responses[self.call_count]
        else:
            resp = self.responses[-1]
        self.call_count += 1
        return resp

class DummyEnvironment:
    @property
    def base(self):
        return "https://api.test.deepgram.com"

class DummyClientWrapper:
    def __init__(self, httpx_client):
        self.httpx_client = httpx_client
        self._environment = DummyEnvironment()

    def get_environment(self):
        return self._environment

# Helper to create the client
def make_client(responses):
    httpx_client = DummyHttpxClient(responses)
    client_wrapper = DummyClientWrapper(httpx_client)
    return AsyncRawInvitesClient(client_wrapper=client_wrapper)

# -------------------------
# 1. Basic Test Cases
# -------------------------

@pytest.mark.asyncio
async def test_delete_successful_invite_deletion():
    """Test that delete returns AsyncHttpResponse with correct data on success (status 200)."""
    expected_data = {"deleted": True}
    response_obj = DummyResponse(200, json_data=expected_data)
    client = make_client([response_obj])

    result = await client.delete("proj_123", "user@example.com")

@pytest.mark.asyncio
async def test_delete_successful_invite_deletion_status_204():
    """Test that delete returns AsyncHttpResponse with correct data on status 204 (no content)."""
    expected_data = {"deleted": True}
    response_obj = DummyResponse(204, json_data=expected_data)
    client = make_client([response_obj])

    result = await client.delete("proj_123", "user@example.com")

@pytest.mark.asyncio
async def test_delete_bad_request_error():
    """Test that delete raises BadRequestError on status 400."""
    error_data = {"error": "Bad request"}
    response_obj = DummyResponse(400, json_data=error_data)
    client = make_client([response_obj])

    with pytest.raises(BadRequestError) as exc_info:
        await client.delete("proj_123", "bad_email")

@pytest.mark.asyncio
async def test_delete_api_error_for_non_200_400():
    """Test that delete raises ApiError for status 404 (not found)."""
    error_data = {"error": "Not found"}
    response_obj = DummyResponse(404, json_data=error_data)
    client = make_client([response_obj])

    with pytest.raises(ApiError) as exc_info:
        await client.delete("proj_123", "missing@example.com")

# -------------------------
# 2. Edge Test Cases
# -------------------------

@pytest.mark.asyncio
async def test_delete_concurrent_deletions():
    """Test concurrent execution of delete for different invites."""
    expected_data1 = {"deleted": True}
    expected_data2 = {"deleted": True}
    response_obj1 = DummyResponse(200, json_data=expected_data1)
    response_obj2 = DummyResponse(200, json_data=expected_data2)
    # Each client gets its own response list
    client1 = make_client([response_obj1])
    client2 = make_client([response_obj2])

    results = await asyncio.gather(
        client1.delete("proj_1", "user1@example.com"),
        client2.delete("proj_2", "user2@example.com"),
    )

@pytest.mark.asyncio
async def test_delete_multiple_errors_concurrent():
    """Test concurrent execution where both deletions fail."""
    error_data = {"error": "Not found"}
    response_obj1 = DummyResponse(404, json_data=error_data)
    response_obj2 = DummyResponse(404, json_data=error_data)
    client1 = make_client([response_obj1])
    client2 = make_client([response_obj2])

    with pytest.raises(ApiError):
        await asyncio.gather(
            client1.delete("proj_1", "user1@example.com"),
            client2.delete("proj_2", "user2@example.com"),
        )

@pytest.mark.asyncio
async def test_delete_with_custom_request_options():
    """Test that request_options are passed and handled."""
    expected_data = {"deleted": True}
    response_obj = DummyResponse(200, json_data=expected_data)
    client = make_client([response_obj])

    # Custom request_options
    request_options = {"timeout_in_seconds": 5, "additional_headers": {"X-Test": "yes"}}
    result = await client.delete("proj_123", "user@example.com", request_options=request_options)

# -------------------------
# 3. Large Scale Test Cases
# -------------------------

@pytest.mark.asyncio
async def test_delete_large_scale_concurrent():
    """Test delete under large scale concurrent execution (100 invites)."""
    num_invites = 100
    responses = [DummyResponse(200, json_data={"deleted": True}) for _ in range(num_invites)]
    clients = [make_client([resp]) for resp in responses]
    tasks = [
        client.delete(f"proj_{i}", f"user{i}@example.com")
        for i, client in enumerate(clients)
    ]
    results = await asyncio.gather(*tasks)

@pytest.mark.asyncio
async def test_delete_large_scale_concurrent_errors():
    """Test delete under large scale concurrent execution with errors (50 errors, 50 successes)."""
    num_invites = 100
    responses = [
        DummyResponse(200, json_data={"deleted": True}) if i % 2 == 0
        else DummyResponse(404, json_data={"error": "Not found"})
        for i in range(num_invites)
    ]
    clients = [make_client([resp]) for resp in responses]
    tasks = [
        client.delete(f"proj_{i}", f"user{i}@example.com")
        for i, client in enumerate(clients)
    ]
    # Await each task in turn, collecting successes and ApiError exceptions
    results = []
    for task in tasks:
        try:
            res = await task
            results.append(res)
        except ApiError as e:
            results.append(e)

# -------------------------
# 4. Throughput Test Cases
# -------------------------

@pytest.mark.asyncio
async def test_delete_throughput_small_load():
    """Throughput test: small load of 10 deletions."""
    num_invites = 10
    responses = [DummyResponse(200, json_data={"deleted": True}) for _ in range(num_invites)]
    clients = [make_client([resp]) for resp in responses]
    tasks = [
        client.delete(f"proj_{i}", f"user{i}@example.com")
        for i, client in enumerate(clients)
    ]
    results = await asyncio.gather(*tasks)

@pytest.mark.asyncio
async def test_delete_throughput_medium_load():
    """Throughput test: medium load of 50 deletions."""
    num_invites = 50
    responses = [DummyResponse(200, json_data={"deleted": True}) for _ in range(num_invites)]
    clients = [make_client([resp]) for resp in responses]
    tasks = [
        client.delete(f"proj_{i}", f"user{i}@example.com")
        for i, client in enumerate(clients)
    ]
    results = await asyncio.gather(*tasks)

@pytest.mark.asyncio
async def test_delete_throughput_high_volume():
    """Throughput test: high volume load of 200 deletions."""
    num_invites = 200
    responses = [DummyResponse(200, json_data={"deleted": True}) for _ in range(num_invites)]
    clients = [make_client([resp]) for resp in responses]
    tasks = [
        client.delete(f"proj_{i}", f"user{i}@example.com")
        for i, client in enumerate(clients)
    ]
    results = await asyncio.gather(*tasks)
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.
#------------------------------------------------
import asyncio  # used to run async functions
# Function under test (EXACT COPY, DO NOT MODIFY)
import typing
from json.decoder import JSONDecodeError
from unittest.mock import AsyncMock, MagicMock, patch

import pytest  # used for our unit tests
from deepgram.core.api_error import ApiError
from deepgram.core.client_wrapper import AsyncClientWrapper
from deepgram.core.http_response import AsyncHttpResponse
from deepgram.core.jsonable_encoder import jsonable_encoder
from deepgram.core.pydantic_utilities import parse_obj_as
from deepgram.core.request_options import RequestOptions
from deepgram.errors.bad_request_error import BadRequestError
from deepgram.manage.v1.projects.members.invites.raw_client import \
    AsyncRawInvitesClient
from deepgram.types.delete_project_invite_v1response import \
    DeleteProjectInviteV1Response

# ---- Begin Unit Tests ----

# Fixtures for mocks
@pytest.fixture
def mock_httpx_client():
    return AsyncMock()

@pytest.fixture
def mock_environment():
    env = MagicMock()
    env.base = "https://api.example.com"
    return env

@pytest.fixture
def mock_client_wrapper(mock_httpx_client, mock_environment):
    wrapper = MagicMock()
    wrapper.httpx_client = mock_httpx_client
    wrapper.get_environment.return_value = mock_environment
    return wrapper

@pytest.fixture
def async_raw_invites_client(mock_client_wrapper):
    return AsyncRawInvitesClient(client_wrapper=mock_client_wrapper)

# Helper: minimal valid response object for success
class DummyResponse:
    def __init__(self, status_code=204, json_data=None, headers=None, text=""):
        self.status_code = status_code
        self._json_data = json_data
        self.headers = headers or {}
        self.text = text

    def json(self):
        if isinstance(self._json_data, Exception):
            raise self._json_data
        return self._json_data

# Helper: minimal valid DeleteProjectInviteV1Response for parse_obj_as
class DummyDeleteProjectInviteV1Response:
    pass

# --------- 1. BASIC TEST CASES ---------
@pytest.mark.asyncio
async def test_delete_success(async_raw_invites_client, mock_client_wrapper):
    """Test successful delete returns expected AsyncHttpResponse."""
    # Arrange
    dummy_json = {"deleted": True}
    dummy_response = DummyResponse(status_code=204, json_data=dummy_json)
    mock_client_wrapper.httpx_client.request.return_value = dummy_response

    # Act
    resp = await async_raw_invites_client.delete("proj123", "user@example.com")

@pytest.mark.asyncio
async def test_delete_success_status_200(async_raw_invites_client, mock_client_wrapper):
    """Test delete with status 200 returns expected AsyncHttpResponse."""
    dummy_json = {"deleted": True}
    dummy_response = DummyResponse(status_code=200, json_data=dummy_json)
    mock_client_wrapper.httpx_client.request.return_value = dummy_response

    resp = await async_raw_invites_client.delete("proj123", "user@example.com")

@pytest.mark.asyncio
async def test_delete_with_request_options(async_raw_invites_client, mock_client_wrapper):
    """Test delete with request_options parameter."""
    dummy_json = {"deleted": True}
    dummy_response = DummyResponse(status_code=204, json_data=dummy_json)
    mock_client_wrapper.httpx_client.request.return_value = dummy_response

    options = {"timeout_in_seconds": 5}
    resp = await async_raw_invites_client.delete("proj123", "user@example.com", request_options=options)

# --------- 2. EDGE TEST CASES ---------
@pytest.mark.asyncio
async def test_delete_raises_bad_request_error(async_raw_invites_client, mock_client_wrapper):
    """Test delete raises BadRequestError on 400 status."""
    dummy_json = {"error": "bad request"}
    dummy_response = DummyResponse(status_code=400, json_data=dummy_json)
    mock_client_wrapper.httpx_client.request.return_value = dummy_response

    with pytest.raises(Exception) as excinfo:
        await async_raw_invites_client.delete("proj123", "bad@email.com")

@pytest.mark.asyncio
async def test_delete_raises_api_error_on_non_2xx(async_raw_invites_client, mock_client_wrapper):
    """Test delete raises ApiError on non-2xx/400 status."""
    dummy_json = {"error": "forbidden"}
    dummy_response = DummyResponse(status_code=403, json_data=dummy_json)
    mock_client_wrapper.httpx_client.request.return_value = dummy_response

    with pytest.raises(Exception) as excinfo:
        await async_raw_invites_client.delete("proj123", "forbidden@email.com")

@pytest.mark.asyncio
async def test_delete_raises_api_error_on_jsondecodeerror(async_raw_invites_client, mock_client_wrapper):
    """Test delete raises ApiError if response.json() raises JSONDecodeError."""
    dummy_response = DummyResponse(status_code=500, json_data=JSONDecodeError("Expecting value", "doc", 0))
    dummy_response.text = "not json"
    mock_client_wrapper.httpx_client.request.return_value = dummy_response

    with pytest.raises(Exception) as excinfo:
        await async_raw_invites_client.delete("proj123", "broken@email.com")

@pytest.mark.asyncio
async def test_delete_concurrent(async_raw_invites_client, mock_client_wrapper):
    """Test concurrent delete calls do not interfere."""
    dummy_json = {"deleted": True}
    dummy_response = DummyResponse(status_code=204, json_data=dummy_json)
    mock_client_wrapper.httpx_client.request.return_value = dummy_response

    # Run multiple deletes concurrently
    coros = [
        async_raw_invites_client.delete(f"proj{i}", f"user{i}@example.com")
        for i in range(5)
    ]
    results = await asyncio.gather(*coros)
    for resp in results:
        pass

@pytest.mark.asyncio
async def test_delete_handles_various_status_codes(async_raw_invites_client, mock_client_wrapper):
    """Test delete with various 2xx status codes."""
    for status in [200, 201, 202, 204, 299]:
        dummy_json = {"deleted": True}
        dummy_response = DummyResponse(status_code=status, json_data=dummy_json)
        mock_client_wrapper.httpx_client.request.return_value = dummy_response
        resp = await async_raw_invites_client.delete("projX", "userX@example.com")

# --------- 3. LARGE SCALE TEST CASES ---------
@pytest.mark.asyncio
async def test_delete_many_concurrent(async_raw_invites_client, mock_client_wrapper):
    """Test delete with many concurrent calls (scalability)."""
    dummy_json = {"deleted": True}
    dummy_response = DummyResponse(status_code=204, json_data=dummy_json)
    mock_client_wrapper.httpx_client.request.return_value = dummy_response

    # 50 concurrent deletes (reasonable for test)
    coros = [
        async_raw_invites_client.delete(f"proj{i}", f"user{i}@example.com")
        for i in range(50)
    ]
    results = await asyncio.gather(*coros)
    for resp in results:
        pass

@pytest.mark.asyncio
async def test_delete_many_varied_inputs(async_raw_invites_client, mock_client_wrapper):
    """Test delete with varied project_id and email values."""
    dummy_json = {"deleted": True}
    dummy_response = DummyResponse(status_code=204, json_data=dummy_json)
    mock_client_wrapper.httpx_client.request.return_value = dummy_response

    test_cases = [
        ("projA", "userA@example.com"),
        ("", ""),  # empty strings
        ("123", "weird!#$%&'*+/=?^_`{|}~@example.com"),  # special chars
        ("long" * 50, "longemail" * 30 + "@example.com"),  # long strings
        ("projB", "userB@example.com"),
    ]
    coros = [async_raw_invites_client.delete(pid, email) for pid, email in test_cases]
    results = await asyncio.gather(*coros)
    for resp in results:
        pass

# --------- 4. THROUGHPUT TEST CASES ---------
@pytest.mark.asyncio
async def test_delete_throughput_small_load(async_raw_invites_client, mock_client_wrapper):
    """Test throughput with a small number of sequential deletes."""
    dummy_json = {"deleted": True}
    dummy_response = DummyResponse(status_code=204, json_data=dummy_json)
    mock_client_wrapper.httpx_client.request.return_value = dummy_response

    for i in range(5):
        resp = await async_raw_invites_client.delete(f"proj{i}", f"user{i}@example.com")

@pytest.mark.asyncio
async def test_delete_throughput_medium_load(async_raw_invites_client, mock_client_wrapper):
    """Test throughput with a medium number of concurrent deletes."""
    dummy_json = {"deleted": True}
    dummy_response = DummyResponse(status_code=204, json_data=dummy_json)
    mock_client_wrapper.httpx_client.request.return_value = dummy_response

    coros = [
        async_raw_invites_client.delete(f"proj{i}", f"user{i}@example.com")
        for i in range(20)
    ]
    results = await asyncio.gather(*coros)
    for resp in results:
        pass

@pytest.mark.asyncio
async def test_delete_throughput_high_volume(async_raw_invites_client, mock_client_wrapper):
    """Test throughput with a high volume of concurrent deletes (limit to 100 for test speed)."""
    dummy_json = {"deleted": True}
    dummy_response = DummyResponse(status_code=204, json_data=dummy_json)
    mock_client_wrapper.httpx_client.request.return_value = dummy_response

    coros = [
        async_raw_invites_client.delete(f"proj{i}", f"user{i}@example.com")
        for i in range(100)
    ]
    results = await asyncio.gather(*coros)
    for resp in results:
        pass

@pytest.mark.asyncio
async def test_delete_throughput_mixed_status(async_raw_invites_client, mock_client_wrapper):
    """Test throughput with mixed status codes and error handling."""
    # Alternate between 204 (success), 400 (bad request), 500 (api error)
    dummy_json = {"deleted": True}
    bad_json = {"error": "bad request"}
    error_json = {"error": "server error"}

    responses = [
        DummyResponse(status_code=204, json_data=dummy_json),
        DummyResponse(status_code=400, json_data=bad_json),
        DummyResponse(status_code=500, json_data=error_json),
    ]

    # Cycle through responses for each call
    def side_effect(*args, **kwargs):
        idx = side_effect.counter % 3
        side_effect.counter += 1
        return responses[idx]
    side_effect.counter = 0

    mock_client_wrapper.httpx_client.request.side_effect = side_effect

    coros = [
        async_raw_invites_client.delete(f"proj{i}", f"user{i}@example.com")
        for i in range(9)
    ]
    # Expect 3 successes, 3 BadRequestError, 3 ApiError
    results = []
    for coro in coros:
        try:
            res = await coro
            results.append(("success", res))
        except Exception as e:
            results.append((e.__class__.__name__, None))
# codeflash_output is used to check that the output of the original code is the same as that of the optimized code.

To edit these changes, run `git checkout codeflash/optimize-AsyncRawInvitesClient.delete-mh2x8iwj` and push.

@codeflash-ai codeflash-ai bot requested a review from mashraf-222 October 23, 2025 04:27
@codeflash-ai codeflash-ai bot added ⚡️ codeflash Optimization PR opened by Codeflash AI 🎯 Quality: Medium Optimization Quality according to Codeflash labels Oct 23, 2025