Fix database session handling to reduce connection pool exhaustion #333
Conversation
Walkthrough

Refactors response handling to use ephemeral engine-backed DB sessions within the worker. Updates process_response's signature to accept ancestor_id and latest_conversation instead of a Session. The endpoint precomputes ancestry context and passes it to the background task. Conversation creation and ancestry resolution now occur inside local session contexts.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Endpoint
    participant BackgroundWorker as process_response
    participant DB as Session(engine)
    participant OpenAI
    participant Tracer as LangfuseTracer
    Client->>Endpoint: POST /responses
    Endpoint->>DB: Use _session to fetch latest_conversation/ancestor_id
    Endpoint->>BackgroundWorker: enqueue(request, ancestor_id, latest_conversation)
    BackgroundWorker->>DB: with Session(engine)
    alt latest_conversation is None
        BackgroundWorker->>DB: Resolve ancestor_id
    end
    BackgroundWorker->>DB: Create conversation (if needed)
    BackgroundWorker->>OpenAI: Generate response
    BackgroundWorker->>Tracer: Trace events
    BackgroundWorker->>DB: Persist conversation/response
    BackgroundWorker-->>Endpoint: Completion (async)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Actionable comments posted: 4
🔭 Outside diff range comments (2)
backend/app/api/routes/responses.py (2)
156-163: Use the renamed previous_response_id when calling OpenAI

Follow-through on the rename so the code reads correctly and avoids confusion.

```diff
-            "previous_response_id": ancestor_id,
+            "previous_response_id": previous_response_id,
```
94-101: Bug: file search results extraction loops over the accumulator, always returning an empty list

The code extends results by iterating over results itself, which is initially empty, so it never adds anything. It should iterate over the tool call's results.
Apply this fix:
```diff
-        results.extend(
-            [FileResultChunk(score=hit.score, text=hit.text) for hit in results]
-        )
+        results.extend(
+            FileResultChunk(score=hit.score, text=hit.text)
+            for hit in (getattr(tool_call, "results", []) or [])
+        )
```
🧹 Nitpick comments (2)
backend/app/api/routes/responses.py (2)
10-10: Engine-scoped session in worker is the right move; consider centralizing session creation

Good shift away from passing request-scoped sessions. To keep session options consistent (expire_on_commit, autoflush, etc.) with the rest of the app, consider exposing a SessionLocal (sessionmaker) in app.core.db and using that here instead of constructing Session(engine) directly.
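A minimal sketch of what such a centralized factory might look like. The name `SessionLocal` and the option values are illustrative; the real factory and its settings would live in app.core.db.

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

# Stand-in for the application's shared engine.
engine = create_engine("sqlite://")

# One factory carrying the app-wide session options; call sites use
# SessionLocal() instead of constructing Session(engine) directly, so
# options like expire_on_commit are set in exactly one place.
SessionLocal = sessionmaker(bind=engine, expire_on_commit=False, autoflush=False)

with SessionLocal() as session:
    # Every session created this way carries the same configuration.
    assert session.get_bind() is engine
```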
263-281: Harden callback dispatch: guard with try/except and consider SSRF protections

If send_callback raises (network issue, timeout), the background task may fail mid-flight with no clear signal. Wrap the call in try/except and log errors. Also consider enforcing an allowlist or validating callback_url to mitigate SSRF risk.
Proposed change:
```diff
-        send_callback(
-            request.callback_url,
-            {
-                "success": callback_data.get("success", False),
-                "data": {
-                    **(callback_data.get("data") or {}),
-                    **get_additional_data(request_dict),
-                },
-                "error": callback_data.get("error"),
-                "metadata": None,
-            },
-        )
-        logger.info(
-            f"[process_response] Callback sent successfully, assistant={mask_string(request.assistant_id)}, project_id={project_id}"
-        )
+        try:
+            send_callback(
+                request.callback_url,
+                {
+                    "success": callback_data.get("success", False),
+                    "data": {
+                        **(callback_data.get("data") or {}),
+                        **get_additional_data(request_dict),
+                    },
+                    "error": callback_data.get("error"),
+                    "metadata": None,
+                },
+            )
+            logger.info(
+                f"[process_response] Callback sent successfully, assistant={mask_string(request.assistant_id)}, project_id={project_id}"
+            )
+        except Exception as cb_err:
+            logger.error(
+                f"[process_response] Callback dispatch failed for URL={request.callback_url}: {cb_err}",
+                exc_info=True,
+            )
```
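For the SSRF concern, a simple allowlist check could look like the sketch below. The host names and the `is_allowed_callback` function are hypothetical, shown only to illustrate the shape of the validation; a real deployment would load the allowlist from settings.

```python
from urllib.parse import urlparse

# Hypothetical allowlist; a real deployment would load this from configuration.
ALLOWED_CALLBACK_HOSTS = {"hooks.example.com", "callbacks.example.org"}


def is_allowed_callback(url: str) -> bool:
    """Accept only https URLs whose host is explicitly allowlisted."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_CALLBACK_HOSTS
```

The endpoint would then reject any request whose callback_url fails this check before enqueueing the background task.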
📒 Files selected for processing (1)
backend/app/api/routes/responses.py (6 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
backend/app/api/routes/responses.py (3)
backend/app/tests/conftest.py (1)
- db (18-35)

backend/app/models/openai_conversation.py (3)
- OpenAIConversationCreate (72-98)
- OpenAIConversation (58-69)
- OpenAIConversationBase (22-55)

backend/app/crud/openai_conversation.py (3)
- get_ancestor_id_from_response (61-99)
- create_conversation (140-163)
- get_conversation_by_ancestor_id (41-58)
🔇 Additional comments (1)
backend/app/api/routes/responses.py (1)
20-20: Type import addition looks good

Bringing in OpenAIConversation for typing improves clarity.
```python
    ancestor_id: str,
    latest_conversation: OpenAIConversation | None,
):
```
💡 Verification agent
🧩 Analysis chain
Parameter naming/typing mismatch: this is a previous_response_id and may be None
The parameter named ancestor_id is actually used as the OpenAI previous_response_id (Line 158) and can be None when the request has no response_id (Lines 340-350). Rename for clarity and make it Optional to avoid type mismatch and potential runtime confusion.
Apply this diff to the signature:
```diff
-    ancestor_id: str,
-    latest_conversation: OpenAIConversation | None,
+    previous_response_id: str | None,
+    latest_conversation: OpenAIConversation | None,
```

Also update internal references (see next comment).
Run this to find any other usages that need updating:
🏁 Script executed:

```shell
#!/bin/bash
# Verify no other references to "ancestor_id" exist in process_response context.
rg -n "process_response\(|ancestor_id" -A 3 -B 2
```

Length of output: 34390
Rename ancestor_id → previous_response_id (make Optional) in process_response
Verified: ancestor_id is forwarded to OpenAI as previous_response_id (responses.py:158) and is assigned from request.response_id which can be None (responses.py:340–350). Rename the parameter and all internal references to avoid the typing/misnaming mismatch.
Files/locations to update
- backend/app/api/routes/responses.py
- function signature (lines ~135–137)
- params dict where "previous_response_id" is set (line ~158)
- the assignment / lookup block that reads request.response_id and calls get_conversation_by_ancestor_id (lines ~340–350)
- the background_tasks.add_task call that passes the variable (lines ~351–361)
Apply these diffs:
Signature change:

```diff
-    ancestor_id: str,
-    latest_conversation: OpenAIConversation | None,
+    previous_response_id: str | None,
+    latest_conversation: OpenAIConversation | None,
```

Use previous_response_id in params:

```diff
-        "previous_response_id": ancestor_id,
+        "previous_response_id": previous_response_id,
```

Rename the variable and the DB lookup:

```diff
-    ancestor_id = request.response_id
-    latest_conversation = None
-    if ancestor_id:
-        latest_conversation = get_conversation_by_ancestor_id(
-            session=_session,
-            ancestor_response_id=ancestor_id,
-            project_id=project_id,
-        )
-        if latest_conversation:
-            ancestor_id = latest_conversation.response_id
+    previous_response_id = request.response_id
+    latest_conversation = None
+    if previous_response_id:
+        latest_conversation = get_conversation_by_ancestor_id(
+            session=_session,
+            ancestor_response_id=previous_response_id,
+            project_id=project_id,
+        )
+        if latest_conversation:
+            previous_response_id = latest_conversation.response_id
```

Pass the renamed variable to the background task:

```diff
-        process_response,
-        project_id,
-        organization_id,
-        ancestor_id,
-        latest_conversation,
+        process_response,
+        project_id,
+        organization_id,
+        previous_response_id,
+        latest_conversation,
```

Reasoning: this keeps runtime behavior identical (None is allowed), clarifies that the value is the OpenAI previous_response_id, and prevents a typing mismatch. After these edits, run tests targeting responses and conversation CRUD to confirm nothing else references the old name.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
with Session(engine) as session:
    ancestor_response_id = (
        latest_conversation.ancestor_response_id
        if latest_conversation
        else get_ancestor_id_from_response(
            session=session,
            current_response_id=response.id,
            previous_response_id=response.previous_response_id,
            project_id=project_id,
        )
    )

    # Create conversation record in database
    conversation_data = OpenAIConversationCreate(
        response_id=response.id,
        previous_response_id=response.previous_response_id,
        ancestor_response_id=ancestor_response_id,
        user_question=request.question,
        response=response.output_text,
        model=response.model,
        assistant_id=request.assistant_id,
    )

    create_conversation(
        session=session,
        conversation=conversation_data,
        project_id=project_id,
        organization_id=organization_id,
    )
```
🛠️ Refactor suggestion
Persisting conversations: add a safety net for DB failures
Right now, any DB error during conversation creation will bubble up and can prevent callback dispatch. Wrap the DB block in try/except to log and continue. This preserves user experience even if persistence fails transiently.
```diff
-    with Session(engine) as session:
-        ancestor_response_id = (
-            latest_conversation.ancestor_response_id
-            if latest_conversation
-            else get_ancestor_id_from_response(
-                session=session,
-                current_response_id=response.id,
-                previous_response_id=response.previous_response_id,
-                project_id=project_id,
-            )
-        )
-
-        # Create conversation record in database
-        conversation_data = OpenAIConversationCreate(
-            response_id=response.id,
-            previous_response_id=response.previous_response_id,
-            ancestor_response_id=ancestor_response_id,
-            user_question=request.question,
-            response=response.output_text,
-            model=response.model,
-            assistant_id=request.assistant_id,
-        )
-
-        create_conversation(
-            session=session,
-            conversation=conversation_data,
-            project_id=project_id,
-            organization_id=organization_id,
-        )
+    with Session(engine) as session:
+        try:
+            ancestor_response_id = (
+                latest_conversation.ancestor_response_id
+                if latest_conversation
+                else get_ancestor_id_from_response(
+                    session=session,
+                    current_response_id=response.id,
+                    previous_response_id=response.previous_response_id,
+                    project_id=project_id,
+                )
+            )
+
+            # Create conversation record in database
+            conversation_data = OpenAIConversationCreate(
+                response_id=response.id,
+                previous_response_id=response.previous_response_id,
+                ancestor_response_id=ancestor_response_id,
+                user_question=request.question,
+                response=response.output_text,
+                model=response.model,
+                assistant_id=request.assistant_id,
+            )
+
+            create_conversation(
+                session=session,
+                conversation=conversation_data,
+                project_id=project_id,
+                organization_id=organization_id,
+            )
+        except Exception as db_err:
+            logger.error(
+                "[process_response] Failed to persist conversation (response_id=%s, project_id=%s): %s",
+                response.id,
+                project_id,
+                db_err,
+                exc_info=True,
+            )
+            tracer.log_error(
+                "DB persistence error during conversation create",
+                response_id=response.id,
+            )
```
🤖 Prompt for AI Agents
In backend/app/api/routes/responses.py around lines 205 to 234, the block that
opens a DB Session and calls create_conversation can raise exceptions that will
bubble up and prevent callback dispatch; wrap the database interaction in a
try/except that catches Exception, logs the error with contextual details
(response.id, project_id, organization_id) and a clear message, and then
suppresses the exception so processing continues (do not re-raise); keep the
session handling intact (use the existing with Session(...) as session) and
ensure the except block only handles persistence failures so callbacks still
run.
```python
ancestor_id = request.response_id
latest_conversation = None
if ancestor_id:
    latest_conversation = get_conversation_by_ancestor_id(
        session=_session,
        ancestor_response_id=ancestor_id,
        project_id=project_id,
    )
    if latest_conversation:
        ancestor_id = latest_conversation.response_id
```
🛠️ Refactor suggestion
Precompute both IDs explicitly and avoid overloading the name “ancestor_id”
The local variable ancestor_id is first used as an ancestor_response_id (root) and then reassigned to the latest response_id to be used as previous_response_id (Line 349). This is confusing and error-prone. Split into clear variables.
Additionally, consider not passing ORM instances to background tasks (see next comment).
```diff
-    ancestor_id = request.response_id
-    latest_conversation = None
-    if ancestor_id:
-        latest_conversation = get_conversation_by_ancestor_id(
-            session=_session,
-            ancestor_response_id=ancestor_id,
-            project_id=project_id,
-        )
-        if latest_conversation:
-            ancestor_id = latest_conversation.response_id
+    ancestor_response_id = request.response_id
+    previous_response_id = None
+    latest_conversation = None
+    if ancestor_response_id:
+        latest_conversation = get_conversation_by_ancestor_id(
+            session=_session,
+            ancestor_response_id=ancestor_response_id,
+            project_id=project_id,
+        )
+    previous_response_id = (
+        latest_conversation.response_id
+        if latest_conversation
+        else ancestor_response_id
+    )
```
🤖 Prompt for AI Agents
In backend/app/api/routes/responses.py around lines 340–350, avoid reusing
ancestor_id for two meanings: keep ancestor_response_id = request.response_id,
then fetch latest_conversation and set previous_response_id explicitly
(previous_response_id = latest_conversation.response_id if found, else
previous_response_id = ancestor_response_id); update downstream calls to use
these two distinct variables and ensure only primitive IDs (not ORM instances)
are passed to any background tasks.
```python
    ancestor_id,
    latest_conversation,
)
```
💡 Verification agent
🧩 Analysis chain
Avoid passing ORM instances across the background task boundary
Passing a detached SQLModel instance (latest_conversation) into BackgroundTasks is brittle and can lead to DetachedInstanceError if any lazy attributes are accessed. Since the worker already opens its own Session and can compute ancestry, just pass the primitive previous_response_id and None for latest_conversation.
```diff
-        ancestor_id,
-        latest_conversation,
+        previous_response_id,
+        None,  # avoid passing ORM instances; worker computes ancestry with its own session
```
+ None, # avoid passing ORM instances; worker computes ancestry with its own sessionIf you adopt the earlier signature rename, ensure both the def and add_task call are updated consistently. You can validate with:
🏁 Script executed:

```shell
#!/bin/bash
rg -n "process_response\(" -A 2 -B 2
```

Length of output: 316
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "=== Full file: backend/app/api/routes/responses.py ==="
nl -ba backend/app/api/routes/responses.py || true
echo
echo "=== Lines that call add_task( ==="
rg -n "add_task\(" -S || true
echo
echo "=== References to process_response ==="
rg -n "process_response" -S || true
echo
echo "=== References to latest_conversation ==="
rg -n "latest_conversation" -S || true
echo
echo "=== References to previous_response_id ==="
rg -n "previous_response_id" -S || true
```

Length of output: 13227
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "=== process_response signature & start (lines 120-160) ==="
sed -n '120,160p' backend/app/api/routes/responses.py || true
echo
echo "=== process_response body (lines 160-220) ==="
sed -n '160,220p' backend/app/api/routes/responses.py || true
echo
echo "=== surrounding add_task call and latest_conversation (lines 320-370) ==="
sed -n '320,370p' backend/app/api/routes/responses.py || true
```

Length of output: 5533
Avoid passing ORM instances into BackgroundTasks — pass the primitive previous_response_id and None
Passing latest_conversation (an SQLModel instance) into BackgroundTasks is brittle and can cause DetachedInstanceError. Let the worker open its own Session and compute ancestry. Rename the parameter for clarity and stop passing the ORM instance.
Files to change
- backend/app/api/routes/responses.py
- process_response signature (around def at ~line 128)
- the local ancestor_id variable and its assignment (around ~line 340)
- the background_tasks.add_task(...) call (around ~lines 350–361)
- update internal uses of ancestor_id in process_response to previous_response_id
Suggested diff
```diff
--- a/backend/app/api/routes/responses.py
+++ b/backend/app/api/routes/responses.py
@@
 def process_response(
     request: ResponsesAPIRequest,
     client: OpenAI,
     assistant,
     tracer: LangfuseTracer,
     project_id: int,
     organization_id: int,
-    ancestor_id: str,
-    latest_conversation: OpenAIConversation | None,
+    previous_response_id: str | None,
+    latest_conversation: OpenAIConversation | None,
 ):
@@
-    params = {
-        "model": assistant.model,
-        "previous_response_id": ancestor_id,
+    params = {
+        "model": assistant.model,
+        "previous_response_id": previous_response_id,
         "instructions": assistant.instructions,
         "temperature": assistant.temperature,
         "input": [{"role": "user", "content": request.question}],
     }
@@
-    ancestor_id = request.response_id
+    previous_response_id = request.response_id
     latest_conversation = None
-    if ancestor_id:
-        latest_conversation = get_conversation_by_ancestor_id(
-            session=_session,
-            ancestor_response_id=ancestor_id,
-            project_id=project_id,
-        )
-        if latest_conversation:
-            ancestor_id = latest_conversation.response_id
+    if previous_response_id:
+        latest_conversation = get_conversation_by_ancestor_id(
+            session=_session,
+            ancestor_response_id=previous_response_id,
+            project_id=project_id,
+        )
+        if latest_conversation:
+            previous_response_id = latest_conversation.response_id
@@
-    background_tasks.add_task(
-        process_response,
-        request,
-        client,
-        assistant,
-        tracer,
-        project_id,
-        organization_id,
-        ancestor_id,
-        latest_conversation,
-    )
+    background_tasks.add_task(
+        process_response,
+        request,
+        client,
+        assistant,
+        tracer,
+        project_id,
+        organization_id,
+        previous_response_id,
+        None,  # avoid passing ORM instances; worker computes ancestry with its own session
+    )
```

Quick checks
- Update any remaining references to ancestor_id to previous_response_id in this file.
- Run `rg -n "process_response\(" -A 2 -B 2` to ensure all call sites and signatures are consistent.
🤖 Prompt for AI Agents
In backend/app/api/routes/responses.py around lines 128 and 340–361, avoid
passing the SQLModel instance latest_conversation into BackgroundTasks: change
process_response signature to accept previous_response_id (primitive int or
None) instead of ancestor_id/ORM, rename local ancestor_id variable to
previous_response_id where assigned, and update the
background_tasks.add_task(...) call to pass previous_response_id (or None)
instead of latest_conversation; inside process_response replace all internal
uses of ancestor_id with previous_response_id and ensure the worker opens its
own DB Session to load any ORM objects it needs. Also run a quick project-wide
search for process_response calls and update their signatures/usages accordingly
(rg -n "process_response\(" -A 2 -B 2) to keep call sites consistent.
Codecov Report

✅ All modified and coverable lines are covered by tests.
Summary
Summary by CodeRabbit
Refactor
Chores
Note: No user-facing features or behavior changes.