
Conversation


@avirajsingh7 avirajsingh7 commented Aug 14, 2025

Summary

Summary by CodeRabbit

  • Refactor

    • Streamlined background processing for response handling.
    • Simplified conversation tracking and metadata handling during processing.
    • Adjusted internal interfaces between the API endpoint and background worker.
  • Chores

    • Updated internal dependencies and types to align with the new processing flow.

Note: No user-facing features or behavior changes.


coderabbitai bot commented Aug 14, 2025

Walkthrough

Refactors response handling to use ephemeral engine-backed DB sessions within the worker. Updates process_response signature to accept ancestor_id and latest_conversation instead of a Session. Endpoint precomputes ancestry context and passes it to the background task. Conversation creation and ancestry resolution now occur inside local session contexts.
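As a rough sketch of the new flow (function and variable names here are illustrative, not the exact app code): the endpoint resolves ancestry once, passes only precomputed values to the background task, and the worker opens and closes its own short-lived session.

```python
from contextlib import contextmanager

@contextmanager
def session_scope():
    # Stand-in for `with Session(engine) as session:` — a fresh,
    # worker-local session that is always closed when the task ends.
    session = {"open": True}
    try:
        yield session
    finally:
        session["open"] = False

def process_response(previous_response_id, latest_conversation):
    # The worker no longer receives a request-scoped Session; it opens
    # its own, so no pooled connection is held across the task's lifetime.
    with session_scope() as session:
        if latest_conversation is None:
            pass  # resolve ancestry here, inside the local session
        return {
            "previous_response_id": previous_response_id,
            "session_open_during_work": session["open"],
        }

result = process_response("resp_123", None)
```

The key property is that the session's lifetime is scoped to the worker body, not to the HTTP request that enqueued it.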

Changes

Cohort / File(s) Summary of Changes
Responses flow and session management
backend/app/api/routes/responses.py
Switched to local engine-backed Session usage; updated process_response signature to accept ancestor_id and latest_conversation; moved ancestry resolution and conversation creation into local session context; endpoint now precomputes ancestor_id/latest_conversation and passes them to background task; added OpenAIConversation typing/imports.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant Endpoint
    participant BackgroundWorker as process_response
    participant DB as Session(engine)
    participant OpenAI
    participant Tracer as LangfuseTracer

    Client->>Endpoint: POST /responses
    Endpoint->>DB: Use _session to fetch latest_conversation/ancestor_id
    Endpoint->>BackgroundWorker: enqueue(request, ancestor_id, latest_conversation)
    BackgroundWorker->>DB: with Session(engine)
    alt latest_conversation is None
        BackgroundWorker->>DB: Resolve ancestor_id
    end
    BackgroundWorker->>DB: Create conversation (if needed)
    BackgroundWorker->>OpenAI: Generate response
    BackgroundWorker->>Tracer: Trace events
    BackgroundWorker->>DB: Persist conversation/response
    BackgroundWorker-->>Endpoint: Completion (async)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

  • OpenAI Conversation: Lookup #297: Also modifies process_response and conversation ancestry lookup/persistence, but uses injected sessions instead of local engine-backed sessions.

Suggested labels

enhancement

Poem

I nibble code like clover leaves,
Threading sessions through the eaves—
Ancestors found, a lineage neat,
Ephemeral hops with tidy feet.
A whisker-twitch, then logs align,
Conversations blossom—how sublime!
(._.)/ 🌿


@avirajsingh7 avirajsingh7 changed the title from "remove session passing to background job" to "Fix database session handling to reduce connection pool exhaustion" Aug 14, 2025

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

🔭 Outside diff range comments (2)
backend/app/api/routes/responses.py (2)

156-163: Use the renamed previous_response_id when calling OpenAI

Follow-through on the rename so the code reads correctly and avoids confusion.

-            "previous_response_id": ancestor_id,
+            "previous_response_id": previous_response_id,

94-101: Bug: file search results extraction loops over the accumulator, always returns empty list

The code extends results by iterating over results itself, which is initially empty, so it never adds anything. It should iterate the tool call’s results.

Apply this fix:

-            results.extend(
-                [FileResultChunk(score=hit.score, text=hit.text) for hit in results]
-            )
+            results.extend(
+                FileResultChunk(score=hit.score, text=hit.text)
+                for hit in (getattr(tool_call, "results", []) or [])
+            )
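A minimal standalone illustration (plain floats stand in for the app's result-chunk objects) of why iterating the accumulator yields nothing:

```python
results = []
hits = [0.9, 0.7]  # stand-ins for the tool call's result chunks

# Buggy pattern: the comprehension iterates `results`, which is still
# empty, so extend() receives zero items and the list stays empty.
results.extend(x for x in results)
assert results == []

# Fixed pattern: iterate the source collection instead.
results.extend(hits)
```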
🧹 Nitpick comments (2)
backend/app/api/routes/responses.py (2)

10-10: Engine-scoped session in worker is the right move; consider centralizing session creation

Good shift away from passing request-scoped sessions. To keep session options consistent (expire_on_commit, autoflush, etc.) with the rest of the app, consider exposing a SessionLocal (sessionmaker) in app.core.db and using that here instead of constructing Session(engine) directly.
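Assuming the app uses SQLAlchemy/SQLModel, a centralized factory could be sketched like this (`SessionLocal`, the option values, and the in-memory engine are illustrative, not the app's actual configuration):

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import sessionmaker

# In app.core.db (hypothetical location): one place to pin session options.
engine = create_engine("sqlite://")  # stand-in for the app's real engine
SessionLocal = sessionmaker(bind=engine, expire_on_commit=False, autoflush=False)

# Workers then call SessionLocal() instead of constructing Session(engine)
# directly, so options stay consistent everywhere.
with SessionLocal() as session:
    value = session.execute(text("SELECT 1")).scalar_one()
```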


263-281: Harden callback dispatch: guard with try/except and consider SSRF protections

If send_callback raises (network issue, timeout), the background task may fail mid-flight with no clear signal. Wrap the call in try/except and log errors. Also consider enforcing an allowlist or validating callback_url to mitigate SSRF risk.

Proposed change:

-        send_callback(
-            request.callback_url,
-            {
-                "success": callback_data.get("success", False),
-                "data": {
-                    **(callback_data.get("data") or {}),
-                    **get_additional_data(request_dict),
-                },
-                "error": callback_data.get("error"),
-                "metadata": None,
-            },
-        )
-        logger.info(
-            f"[process_response] Callback sent successfully, assistant={mask_string(request.assistant_id)}, project_id={project_id}"
-        )
+        try:
+            send_callback(
+                request.callback_url,
+                {
+                    "success": callback_data.get("success", False),
+                    "data": {
+                        **(callback_data.get("data") or {}),
+                        **get_additional_data(request_dict),
+                    },
+                    "error": callback_data.get("error"),
+                    "metadata": None,
+                },
+            )
+            logger.info(
+                f"[process_response] Callback sent successfully, assistant={mask_string(request.assistant_id)}, project_id={project_id}"
+            )
+        except Exception as cb_err:
+            logger.error(
+                f"[process_response] Callback dispatch failed for URL={request.callback_url}: {cb_err}",
+                exc_info=True,
+            )
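For the SSRF point, which the diff above does not cover, a minimal allowlist check could be sketched like this (the host set and helper name are hypothetical, not part of the codebase):

```python
from urllib.parse import urlparse

ALLOWED_CALLBACK_HOSTS = {"hooks.example.com"}  # hypothetical allowlist

def is_allowed_callback(url: str) -> bool:
    # Require HTTPS and a known host before dispatching any callback.
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_CALLBACK_HOSTS

ok = is_allowed_callback("https://hooks.example.com/cb")
blocked = is_allowed_callback("http://169.254.169.254/latest/meta-data/")
```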
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these settings in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between d9a8545 and 17d42fe.

📒 Files selected for processing (1)
  • backend/app/api/routes/responses.py (6 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
backend/app/api/routes/responses.py (3)
backend/app/tests/conftest.py (1)
  • db (18-35)
backend/app/models/openai_conversation.py (3)
  • OpenAIConversationCreate (72-98)
  • OpenAIConversation (58-69)
  • OpenAIConversationBase (22-55)
backend/app/crud/openai_conversation.py (3)
  • get_ancestor_id_from_response (61-99)
  • create_conversation (140-163)
  • get_conversation_by_ancestor_id (41-58)
🔇 Additional comments (1)
backend/app/api/routes/responses.py (1)

20-20: Type import addition looks good

Bringing in OpenAIConversation for typing improves clarity.

Comment on lines +135 to 137
    ancestor_id: str,
    latest_conversation: OpenAIConversation | None,
):

💡 Verification agent

🧩 Analysis chain

Parameter naming/typing mismatch: this is a previous_response_id and may be None

The parameter named ancestor_id is actually used as the OpenAI previous_response_id (Line 158) and can be None when the request has no response_id (Lines 340-350). Rename for clarity and make it Optional to avoid type mismatch and potential runtime confusion.

Apply this diff to the signature:

-    ancestor_id: str,
-    latest_conversation: OpenAIConversation | None,
+    previous_response_id: str | None,
+    latest_conversation: OpenAIConversation | None,

Also update internal references (see next comment).

Run this to find any other usages that need updating:


🏁 Script executed:

#!/bin/bash
# Verify no other references to "ancestor_id" exist in process_response context.
rg -n "process_response\(|ancestor_id" -A 3 -B 2

Length of output: 34390


Rename ancestor_id → previous_response_id (make Optional) in process_response

Verified: ancestor_id is forwarded to OpenAI as previous_response_id (responses.py:158) and is assigned from request.response_id which can be None (responses.py:340–350). Rename the parameter and all internal references to avoid the typing/misnaming mismatch.

Files/locations to update

  • backend/app/api/routes/responses.py
    • function signature (lines ~135–137)
    • params dict where "previous_response_id" is set (line ~158)
    • the assignment / lookup block that reads request.response_id and calls get_conversation_by_ancestor_id (lines ~340–350)
    • the background_tasks.add_task call that passes the variable (lines ~351–361)

Apply these diffs:

Signature change:

-    ancestor_id: str,
-    latest_conversation: OpenAIConversation | None,
+    previous_response_id: str | None,
+    latest_conversation: OpenAIConversation | None,

Use previous_response_id in params:

-            "previous_response_id": ancestor_id,
+            "previous_response_id": previous_response_id,

Rename variable and DB lookup:

-    ancestor_id = request.response_id
-    latest_conversation = None
-    if ancestor_id:
-        latest_conversation = get_conversation_by_ancestor_id(
-            session=_session,
-            ancestor_response_id=ancestor_id,
-            project_id=project_id,
-        )
-        if latest_conversation:
-            ancestor_id = latest_conversation.response_id
+    previous_response_id = request.response_id
+    latest_conversation = None
+    if previous_response_id:
+        latest_conversation = get_conversation_by_ancestor_id(
+            session=_session,
+            ancestor_response_id=previous_response_id,
+            project_id=project_id,
+        )
+        if latest_conversation:
+            previous_response_id = latest_conversation.response_id

Pass the renamed variable to the background task:

-        process_response,
-        project_id,
-        organization_id,
-        ancestor_id,
-        latest_conversation,
+        process_response,
+        project_id,
+        organization_id,
+        previous_response_id,
+        latest_conversation,

Reasoning: this keeps runtime behavior identical (None is allowed), clarifies that the value is the OpenAI previous_response_id, and prevents a typing mismatch. After these edits, run tests targeting responses and conversation CRUD to confirm nothing else references the old name.


Comment on lines +205 to 234
        with Session(engine) as session:
            ancestor_response_id = (
                latest_conversation.ancestor_response_id
                if latest_conversation
                else get_ancestor_id_from_response(
                    session=session,
                    current_response_id=response.id,
                    previous_response_id=response.previous_response_id,
                    project_id=project_id,
                )
            )

            # Create conversation record in database
            conversation_data = OpenAIConversationCreate(
                response_id=response.id,
                previous_response_id=response.previous_response_id,
                ancestor_response_id=ancestor_response_id,
                user_question=request.question,
                response=response.output_text,
                model=response.model,
                assistant_id=request.assistant_id,
            )

            create_conversation(
                session=session,
                conversation=conversation_data,
                project_id=project_id,
                organization_id=organization_id,
            )


🛠️ Refactor suggestion

Persisting conversations: add a safety net for DB failures

Right now, any DB error during conversation creation will bubble up and can prevent callback dispatch. Wrap the DB block in try/except to log and continue. This preserves user experience even if persistence fails transiently.

-        with Session(engine) as session:
-            ancestor_response_id = (
-                latest_conversation.ancestor_response_id
-                if latest_conversation
-                else get_ancestor_id_from_response(
-                    session=session,
-                    current_response_id=response.id,
-                    previous_response_id=response.previous_response_id,
-                    project_id=project_id,
-                )
-            )
-
-            # Create conversation record in database
-            conversation_data = OpenAIConversationCreate(
-                response_id=response.id,
-                previous_response_id=response.previous_response_id,
-                ancestor_response_id=ancestor_response_id,
-                user_question=request.question,
-                response=response.output_text,
-                model=response.model,
-                assistant_id=request.assistant_id,
-            )
-
-            create_conversation(
-                session=session,
-                conversation=conversation_data,
-                project_id=project_id,
-                organization_id=organization_id,
-            )
+        with Session(engine) as session:
+            try:
+                ancestor_response_id = (
+                    latest_conversation.ancestor_response_id
+                    if latest_conversation
+                    else get_ancestor_id_from_response(
+                        session=session,
+                        current_response_id=response.id,
+                        previous_response_id=response.previous_response_id,
+                        project_id=project_id,
+                    )
+                )
+
+                # Create conversation record in database
+                conversation_data = OpenAIConversationCreate(
+                    response_id=response.id,
+                    previous_response_id=response.previous_response_id,
+                    ancestor_response_id=ancestor_response_id,
+                    user_question=request.question,
+                    response=response.output_text,
+                    model=response.model,
+                    assistant_id=request.assistant_id,
+                )
+
+                create_conversation(
+                    session=session,
+                    conversation=conversation_data,
+                    project_id=project_id,
+                    organization_id=organization_id,
+                )
+            except Exception as db_err:
+                logger.error(
+                    "[process_response] Failed to persist conversation (response_id=%s, project_id=%s): %s",
+                    response.id,
+                    project_id,
+                    db_err,
+                    exc_info=True,
+                )
+                tracer.log_error(
+                    "DB persistence error during conversation create",
+                    response_id=response.id,
+                )
🤖 Prompt for AI Agents
In backend/app/api/routes/responses.py around lines 205 to 234, the block that
opens a DB Session and calls create_conversation can raise exceptions that will
bubble up and prevent callback dispatch; wrap the database interaction in a
try/except that catches Exception, logs the error with contextual details
(response.id, project_id, organization_id) and a clear message, and then
suppresses the exception so processing continues (do not re-raise); keep the
session handling intact (use the existing with Session(...) as session) and
ensure the except block only handles persistence failures so callbacks still
run.

Comment on lines +340 to +350
    ancestor_id = request.response_id
    latest_conversation = None
    if ancestor_id:
        latest_conversation = get_conversation_by_ancestor_id(
            session=_session,
            ancestor_response_id=ancestor_id,
            project_id=project_id,
        )
        if latest_conversation:
            ancestor_id = latest_conversation.response_id


🛠️ Refactor suggestion

Precompute both IDs explicitly and avoid overloading the name “ancestor_id”

The local variable ancestor_id is first used as an ancestor_response_id (root) and then reassigned to the latest response_id to be used as previous_response_id (Line 349). This is confusing and error-prone. Split into clear variables.

Additionally, consider not passing ORM instances to background tasks (see next comment).

-    ancestor_id = request.response_id
-    latest_conversation = None
-    if ancestor_id:
-        latest_conversation = get_conversation_by_ancestor_id(
-            session=_session,
-            ancestor_response_id=ancestor_id,
-            project_id=project_id,
-        )
-        if latest_conversation:
-            ancestor_id = latest_conversation.response_id
+    ancestor_response_id = request.response_id
+    previous_response_id = None
+    latest_conversation = None
+    if ancestor_response_id:
+        latest_conversation = get_conversation_by_ancestor_id(
+            session=_session,
+            ancestor_response_id=ancestor_response_id,
+            project_id=project_id,
+        )
+        previous_response_id = (
+            latest_conversation.response_id
+            if latest_conversation
+            else ancestor_response_id
+        )
🤖 Prompt for AI Agents
In backend/app/api/routes/responses.py around lines 340–350, avoid reusing
ancestor_id for two meanings: keep ancestor_response_id = request.response_id,
then fetch latest_conversation and set previous_response_id explicitly
(previous_response_id = latest_conversation.response_id if found, else
previous_response_id = ancestor_response_id); update downstream calls to use
these two distinct variables and ensure only primitive IDs (not ORM instances)
are passed to any background tasks.

Comment on lines +359 to 361
        ancestor_id,
        latest_conversation,
    )

💡 Verification agent

🧩 Analysis chain

Avoid passing ORM instances across the background task boundary

Passing a detached SQLModel instance (latest_conversation) into BackgroundTasks is brittle and can lead to DetachedInstanceError if any lazy attributes are accessed. Since the worker already opens its own Session and can compute ancestry, just pass the primitive previous_response_id and None for latest_conversation.

-        ancestor_id,
-        latest_conversation,
+        previous_response_id,
+        None,  # avoid passing ORM instances; worker computes ancestry with its own session

If you adopt the earlier signature rename, ensure both the def and add_task call are updated consistently. You can validate with:


🏁 Script executed:

#!/bin/bash
rg -n "process_response\(" -A 2 -B 2

Length of output: 316


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== Full file: backend/app/api/routes/responses.py ==="
nl -ba backend/app/api/routes/responses.py || true
echo
echo "=== Lines that call add_task( ==="
rg -n "add_task\(" -S || true
echo
echo "=== References to process_response ==="
rg -n "process_response" -S || true
echo
echo "=== References to latest_conversation ==="
rg -n "latest_conversation" -S || true
echo
echo "=== References to previous_response_id ==="
rg -n "previous_response_id" -S || true

Length of output: 13227


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== process_response signature & start (lines 120-160) ==="
sed -n '120,160p' backend/app/api/routes/responses.py || true
echo
echo "=== process_response body (lines 160-220) ==="
sed -n '160,220p' backend/app/api/routes/responses.py || true
echo
echo "=== surrounding add_task call and latest_conversation (lines 320-370) ==="
sed -n '320,370p' backend/app/api/routes/responses.py || true

Length of output: 5533


Avoid passing ORM instances into BackgroundTasks — pass the primitive previous_response_id and None

Passing latest_conversation (an SQLModel instance) into BackgroundTasks is brittle and can cause DetachedInstanceError. Let the worker open its own Session and compute ancestry. Rename the parameter for clarity and stop passing the ORM instance.

Files to change

  • backend/app/api/routes/responses.py
    • process_response signature (around def at ~line 128)
    • the local ancestor_id variable and its assignment (around ~line 340)
    • the background_tasks.add_task(...) call (around ~lines 350–361)
    • update internal uses of ancestor_id in process_response to previous_response_id

Suggested diff

--- a/backend/app/api/routes/responses.py
+++ b/backend/app/api/routes/responses.py
@@
 def process_response(
     request: ResponsesAPIRequest,
     client: OpenAI,
     assistant,
     tracer: LangfuseTracer,
     project_id: int,
     organization_id: int,
-    ancestor_id: str,
-    latest_conversation: OpenAIConversation | None,
+    previous_response_id: str | None,
+    latest_conversation: OpenAIConversation | None,
 ):
@@
-        params = {
-            "model": assistant.model,
-            "previous_response_id": ancestor_id,
+        params = {
+            "model": assistant.model,
+            "previous_response_id": previous_response_id,
             "instructions": assistant.instructions,
             "temperature": assistant.temperature,
             "input": [{"role": "user", "content": request.question}],
         }
@@
-    ancestor_id = request.response_id
+    previous_response_id = request.response_id
     latest_conversation = None
-    if ancestor_id:
+    if previous_response_id:
-        latest_conversation = get_conversation_by_ancestor_id(
-            session=_session,
-            ancestor_response_id=ancestor_id,
-            project_id=project_id,
-        )
-        if latest_conversation:
-            ancestor_id = latest_conversation.response_id
+        latest_conversation = get_conversation_by_ancestor_id(
+            session=_session,
+            ancestor_response_id=previous_response_id,
+            project_id=project_id,
+        )
+        if latest_conversation:
+            previous_response_id = latest_conversation.response_id
@@
-    background_tasks.add_task(
-        process_response,
-        request,
-        client,
-        assistant,
-        tracer,
-        project_id,
-        organization_id,
-        ancestor_id,
-        latest_conversation,
-    )
+    background_tasks.add_task(
+        process_response,
+        request,
+        client,
+        assistant,
+        tracer,
+        project_id,
+        organization_id,
+        previous_response_id,
+        None,  # avoid passing ORM instances; worker computes ancestry with its own session
+    )

Quick checks

  • Update any remaining references to ancestor_id to previous_response_id in this file.
  • Run: rg -n "process_response\(" -A 2 -B 2 to ensure all call sites/signatures are consistent.
🤖 Prompt for AI Agents
In backend/app/api/routes/responses.py around lines 128 and 340–361, avoid
passing the SQLModel instance latest_conversation into BackgroundTasks: change
process_response signature to accept previous_response_id (primitive int or
None) instead of ancestor_id/ORM, rename local ancestor_id variable to
previous_response_id where assigned, and update the
background_tasks.add_task(...) call to pass previous_response_id (or None)
instead of latest_conversation; inside process_response replace all internal
uses of ancestor_id with previous_response_id and ensure the worker opens its
own DB Session to load any ORM objects it needs. Also run a quick project-wide
search for process_response calls and update their signatures/usages accordingly
(rg -n "process_response\(" -A 2 -B 2) to keep call sites consistent.


codecov bot commented Aug 14, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.

📢 Thoughts on this report? Let us know!

@AkhileshNegi AkhileshNegi merged commit 9483957 into main Aug 14, 2025
3 checks passed
@AkhileshNegi AkhileshNegi deleted the hotfix/remove_session_from_background_job branch August 14, 2025 14:13
