Conversation

@AkhileshNegi
Collaborator

@AkhileshNegi AkhileshNegi commented Aug 12, 2025

Summary

Target issue is #317
Explain the motivation for making this change. What existing problem does the pull request solve?

Checklist

Before submitting a pull request, please ensure that you have completed these tasks:

  • Ran fastapi run --reload app/main.py or docker compose up in the repository root and tested the change.
  • If you've fixed a bug or added code, ensured it is covered by test cases.

Notes

Summary by CodeRabbit

  • Refactor
    • Standardized API and webhook callback payloads to plain dictionaries with merged request-specific additional data.
    • Callbacks now carry: success, data (merged additional data), error, and metadata=None.
    • Authentication/config error responses return the request-derived additional data directly.
    • “Processing started” responses explicitly include additional_data in the data payload.
    • No public API signatures changed; clients parsing prior payload shapes should update accordingly.
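
For clients updating their parsers, the new callback shape can be illustrated as follows (the four top-level keys come from the summary above; the field values are hypothetical examples, not taken from the PR):

```python
# Illustrative shape of the standardized webhook callback payloads.
# Keys (success, data, error, metadata) are from the summary above;
# the values inside data are made-up examples.
example_success_callback = {
    "success": True,
    "data": {
        "status": "completed",   # original callback data...
        "user_ref": "abc-123",   # ...merged with request-specific additional data
    },
    "error": None,
    "metadata": None,
}

example_error_callback = {
    "success": False,
    "data": {"user_ref": "abc-123"},  # request-derived additional data only
    "error": "OpenAI API error",
    "metadata": None,
}
```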

@AkhileshNegi AkhileshNegi self-assigned this Aug 12, 2025
@coderabbitai

coderabbitai bot commented Aug 12, 2025

Walkthrough

Refactors payload construction in backend/app/api/routes/responses.py: callbacks (success/error) now send plain dicts with merged additional data; internal error responses use ResponsesAPIResponse.failure_response; /responses auth/config error path returns get_additional_data directly. No public signatures changed.

Changes

Cohort / File(s) — Summary of Changes

  • process_response & webhook callback (backend/app/api/routes/responses.py)
    Success path: removed injecting top-level extra data; builds the webhook callback as a plain dict merging callback data with get_additional_data(request_dict). Error path: uses ResponsesAPIResponse.failure_response(...) for internal error responses; webhook error payloads are emitted as plain dicts (success, data, error, metadata).
  • /responses endpoint (async) adjustments (backend/app/api/routes/responses.py)
    Auth/config error path now returns get_additional_data(request_dict) in the data field directly. The initial "processing started" accepted response remains structured and explicitly includes additional_data inside data.
  • Unchanged / no-op (backend/app/api/routes/responses.py)
    get_file_search_results signature and logic unchanged; public method signatures preserved.
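
The merge described above can be sketched as follows. send_callback and get_additional_data are the helpers named in the diff; the stub get_additional_data here, and the excluded key names, are assumptions for illustration only (the real helper lives in backend/app/api/routes/threads.py):

```python
# Sketch of the plain-dict webhook payload construction described above.
# This stub only approximates get_additional_data; the excluded keys are
# the async-request fields mentioned later in the review.
EXCLUDED_KEYS = {"assistant_id", "callback_url", "response_id", "question"}

def get_additional_data(request_dict: dict) -> dict:
    """Return request fields that are not consumed by the endpoint itself."""
    return {k: v for k, v in request_dict.items() if k not in EXCLUDED_KEYS}

def build_webhook_payload(callback_data: dict, request_dict: dict) -> dict:
    """Merge the callback response's data with request-derived additional data."""
    return {
        "success": callback_data.get("success", False),
        "data": {
            **(callback_data.get("data") or {}),
            **get_additional_data(request_dict),  # later entries win on key clashes
        },
        "error": callback_data.get("error"),
        "metadata": None,
    }
```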

Sequence Diagram(s)

sequenceDiagram
  participant Client
  participant Responses as /responses (async)
  participant Processor
  participant Webhook

  Client->>Responses: POST /responses
  alt Auth/Config error
    Responses-->>Client: {success:false, data:get_additional_data(...), error:<msg>, metadata:null}
  else Accepted
    Responses-->>Client: {success:true, data:{status,message,additional_data:...}}
    Responses->>Processor: enqueue/process
    alt Processing error
      Processor-->>Webhook: {success:false, data:{... merged with get_additional_data(...)}, error:<msg>, metadata:null}
    else Processing success
      Processor-->>Webhook: {success:true, data:{... merged with get_additional_data(...)}, error:null, metadata:null}
    end
  end

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Suggested labels

bug

Suggested reviewers

  • avirajsingh7

Poem

I hop through payloads, quick and slick,
Turning typed shapes into plain dict.
Extra fields find a cozy nest,
Webhooks hum and do their best.
Thump-thump — the rabbit hops, impressed. 🐇

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🔭 Outside diff range comments (1)
backend/app/api/routes/responses.py (1)

257-257: Inconsistent data payload structure between sync and async responses.

Line 257 adds additional data fields directly to the _APIResponse object using **get_additional_data(request_dict), but the sync endpoint (line 477) no longer includes this additional data. This creates inconsistency between the two endpoints.

For consistency, either both endpoints should include additional data or neither should. Apply this diff to remove the additional data spread from the async endpoint to match the sync endpoint:

-                **get_additional_data(request_dict),
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3a3247e and b49bf8b.

📒 Files selected for processing (1)
  • backend/app/api/routes/responses.py (4 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
backend/app/api/routes/responses.py (3)
backend/app/api/routes/threads.py (2)
  • send_callback (37-49)
  • get_additional_data (128-134)
backend/app/utils.py (2)
  • APIResponse (27-48)
  • failure_response (40-48)
backend/app/api/routes/collections.py (1)
  • WebHookCallback (147-170)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: checks (3.11.7, 6)
🔇 Additional comments (3)
backend/app/api/routes/responses.py (3)

326-326: LGTM: Consistent metadata usage for additional data.

The refactoring correctly moves additional data to the data field in error responses, maintaining consistency with the overall error response structure.


392-395: LGTM: Proper use of metadata field for additional data.

The refactoring correctly moves additional data from the data field to the metadata field using APIResponse.failure_response, which is more semantically appropriate for auxiliary request information.


489-491: LGTM: Consistent error response structure.

The error handling properly uses metadata for additional data while keeping the main error message in the error field, maintaining consistency with the APIResponse pattern.

Comment on lines 275 to 285

# Send callback with webhook-specific response format
send_callback(
request.callback_url,
{
"success": False,
"data": get_additional_data(request_dict),
"error": error_message,
"metadata": None,
},
)

⚠️ Potential issue

Critical logic error: Success case callback is replaced with error case callback.

The successful response callback (lines 244-259) is being overwritten by the error case callback. This means that successful API responses will never be sent to the callback URL, only error responses will be sent.

The success callback code appears to be missing. Apply this diff to restore the missing success callback:

+        # Send success callback
+        if request.callback_url:
+            logger.info(
+                f"[process_response] Sending success callback to URL: {request.callback_url}, assistant={mask_string(request.assistant_id)}, project_id={project_id}"
+            )
+            send_callback(request.callback_url, callback_response.model_dump())
+            logger.info(
+                f"[process_response] Success callback sent, assistant={mask_string(request.assistant_id)}, project_id={project_id}"
+            )
+        
+        return
+        
     except openai.OpenAIError as e:
         error_message = handle_openai_error(e)
         logger.error(
             f"[process_response] OpenAI API error during response processing: {error_message}, project_id={project_id}",
             exc_info=True,
         )
         tracer.log_error(error_message, response_id=request.response_id)
 
         request_dict = request.model_dump()
-    tracer.flush()
+        tracer.flush()
 
-    if request.callback_url:
-        logger.info(
-            f"[process_response] Sending callback to URL: {request.callback_url}, assistant={mask_string(request.assistant_id)}, project_id={project_id}"
-        )
+        if request.callback_url:
+            logger.info(
+                f"[process_response] Sending error callback to URL: {request.callback_url}, assistant={mask_string(request.assistant_id)}, project_id={project_id}"
+            )
 
-        # Send callback with webhook-specific response format
-        send_callback(
-            request.callback_url,
-            {
-                "success": False,
-                "data": get_additional_data(request_dict),
-                "error": error_message,
-                "metadata": None,
-            },
-        )
-        logger.info(
-            f"[process_response] Callback sent successfully, assistant={mask_string(request.assistant_id)}, project_id={project_id}"
-        )
+            # Send callback with webhook-specific response format
+            send_callback(
+                request.callback_url,
+                {
+                    "success": False,
+                    "data": get_additional_data(request_dict),
+                    "error": error_message,
+                    "metadata": None,
+                },
+            )
+            logger.info(
+                f"[process_response] Error callback sent, assistant={mask_string(request.assistant_id)}, project_id={project_id}"
+            )
🤖 Prompt for AI Agents
In backend/app/api/routes/responses.py around lines 275 to 285, the error-case
send_callback is overwriting the earlier success-case callback so successful
responses never get sent; restore the missing success callback by re-inserting
the original send_callback call used for successful responses (the one that
sends success=True, the actual response data, and metadata) before the error
handling path, and ensure the error-path only runs on exceptions so it sends the
failure payload (success=False) without replacing the success callback.

@codecov

codecov bot commented Aug 12, 2025

Codecov Report

❌ Patch coverage is 66.66667% with 1 line in your changes missing coverage. Please review.

Files with missing lines | Patch % | Lines
backend/app/api/routes/responses.py | 66.66% | 1 Missing ⚠️


@AkhileshNegi AkhileshNegi added the enhancement New feature or request label Aug 12, 2025

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🔭 Outside diff range comments (2)
backend/app/api/routes/responses.py (2)

89-91: Add the missing failure_response classmethod to ResponsesAPIResponse.

The code on line 268 calls ResponsesAPIResponse.failure_response(error=error_message), but this method is not defined in the ResponsesAPIResponse class. According to the AI summary, this method was added but it's missing from the implementation.

Apply this diff to add the missing method:

 class ResponsesAPIResponse(APIResponse[_APIResponse]):
-    pass
+    @classmethod
+    def failure_response(cls, error: str) -> "ResponsesAPIResponse":
+        """Create a failure response with error message."""
+        return cls(success=False, data=None, error=error, metadata=None)

The base APIResponse class in backend/app/utils.py has a similar failure_response method that you can reference for the implementation pattern.


98-99: Fix variable reference error in get_file_search_results.

The function incorrectly references results instead of tool_call.results when creating FileResultChunk objects. This creates an infinite loop that will cause a RecursionError.

Apply this diff to fix the variable reference:

 def get_file_search_results(response):
     results: list[FileResultChunk] = []
     for tool_call in response.output:
         if tool_call.type == "file_search_call":
             results.extend(
-                [FileResultChunk(score=hit.score, text=hit.text) for hit in results]
+                [FileResultChunk(score=hit.score, text=hit.text) for hit in tool_call.results]
             )
     return results
♻️ Duplicate comments (1)
backend/app/api/routes/responses.py (1)

259-270: Success callback is still missing from the success path.

The past review comment correctly identified that the success callback is missing. After the success case (lines 244-258), the code should send a success callback before reaching the except block. Currently, only the error callback (lines 272-293) will be sent regardless of success or failure.

Apply this diff to add the missing success callback:

             )
         )
+        
+        # Send success callback
+        if request.callback_url:
+            logger.info(
+                f"[process_response] Sending success callback to URL: {request.callback_url}, assistant={mask_string(request.assistant_id)}, project_id={project_id}"
+            )
+            callback_data = callback_response.model_dump()
+            send_callback(
+                request.callback_url,
+                {
+                    "success": callback_data.get("success", False),
+                    "data": {
+                        **(callback_data.get("data") or {}),
+                        **get_additional_data(request_dict),
+                    },
+                    "error": callback_data.get("error"),
+                    "metadata": None,
+                },
+            )
+            logger.info(
+                f"[process_response] Success callback sent, assistant={mask_string(request.assistant_id)}, project_id={project_id}"
+            )
+        
+        tracer.flush()
+        return
+        
     except openai.OpenAIError as e:
         error_message = handle_openai_error(e)
         logger.error(
             f"[process_response] OpenAI API error during response processing: {error_message}, project_id={project_id}",
             exc_info=True,
         )
         tracer.log_error(error_message, response_id=request.response_id)
 
         request_dict = request.model_dump()
         callback_response = ResponsesAPIResponse.failure_response(error=error_message)
-
-    tracer.flush()
-
-    if request.callback_url:
-        logger.info(
-            f"[process_response] Sending callback to URL: {request.callback_url}, assistant={mask_string(request.assistant_id)}, project_id={project_id}"
-        )
-
-        # Send callback with webhook-specific response format
-        callback_data = callback_response.model_dump()
-        send_callback(
-            request.callback_url,
-            {
-                "success": callback_data.get("success", False),
-                "data": {
-                    **(callback_data.get("data") or {}),
-                    **get_additional_data(request_dict),
-                },
-                "error": callback_data.get("error"),
-                "metadata": None,
-            },
-        )
-        logger.info(
-            f"[process_response] Callback sent successfully, assistant={mask_string(request.assistant_id)}, project_id={project_id}"
-        )
+        
+        # Send error callback
+        if request.callback_url:
+            logger.info(
+                f"[process_response] Sending error callback to URL: {request.callback_url}, assistant={mask_string(request.assistant_id)}, project_id={project_id}"
+            )
+            callback_data = callback_response.model_dump()
+            send_callback(
+                request.callback_url,
+                {
+                    "success": callback_data.get("success", False),
+                    "data": {
+                        **(callback_data.get("data") or {}),
+                        **get_additional_data(request_dict),
+                    },
+                    "error": callback_data.get("error"),
+                    "metadata": None,
+                },
+            )
+            logger.info(
+                f"[process_response] Error callback sent, assistant={mask_string(request.assistant_id)}, project_id={project_id}"
+            )
+        
+        tracer.flush()
🧹 Nitpick comments (1)
backend/app/api/routes/responses.py (1)

329-334: Consider using consistent error response structure.

The error response structure here differs from the one used in the synchronous endpoint (lines 404-409) which uses APIResponse class. Consider using the same response class for consistency.

Apply this diff for consistency:

-        return {
-            "success": False,
-            "error": "OpenAI API key not configured for this organization.",
-            "data": additional_data if additional_data else None,
-            "metadata": None,
-        }
+        return APIResponse(
+            success=False,
+            data=additional_data if additional_data else None,
+            error="OpenAI API key not configured for this organization.",
+            metadata=None,
+        ).model_dump()
📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 2396bd9 and f73290c.

📒 Files selected for processing (1)
  • backend/app/api/routes/responses.py (1 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
backend/app/api/routes/responses.py (2)
backend/app/utils.py (1)
  • failure_response (40-48)
backend/app/api/routes/threads.py (2)
  • send_callback (37-49)
  • get_additional_data (128-134)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: checks (3.11.7, 6)
🔇 Additional comments (1)
backend/app/api/routes/responses.py (1)

278-290: Verify webhook payload backward compatibility in responses.py

The async /responses handler now builds the callback payload as:

payload["data"] = {
  **(callback_data.get("data") or {}),
  **get_additional_data(request_dict),
}

Any key in get_additional_data(request_dict) that matches a key in callback_data["data"] will overwrite the original value.

• File: backend/app/api/routes/responses.py (lines 278–290)
• Async exclusion: assistant_id, callback_url, response_id, question
• Sync exclusion: model, instructions, vector_store_ids, max_num_results, temperature

I did not find any tests in backend/app/tests/api/routes that exercise send_callback for the responses endpoints (there are tests for threads but none for responses).

Please:

  1. Confirm that your external webhook consumers handle the merged payload structure.
  2. Verify there are no overlapping field names between callback_data["data"] and the remaining request fields.
  3. Add unit tests (e.g. a test_responses_*.py) to cover send_callback()’s payload shape and merge behavior.

@AkhileshNegi AkhileshNegi merged commit db872b9 into main Aug 12, 2025
2 of 3 checks passed
@AkhileshNegi AkhileshNegi deleted the hotfix/passing-data branch August 12, 2025 10:10