
Fix Lumir Ghost Messages on Disconnect#34

Merged
DivyanshuChipa merged 3 commits into master from fix-lumir-ghost-messages-17043812329206748974
Feb 27, 2026

Conversation

@DivyanshuChipa
Owner

Implemented persistent message queueing for Lumir to prevent "Ghost Messages" when the WebSocket disconnects during processing.

Changes:

  • Modified backend/chat.py:
    • In websocket_endpoint, updated the handling of messages from Lumir.
    • Added a call to create_delivery_entries to mark the message as pending (delivered=0) immediately after saving it to the DB.
    • Added a conditional check on the result of send_to_user.
    • Only marks the message as delivered (delivered=1) if the WebSocket send is successful.

This ensures reliability for long-running AI tasks like video compression or PDF merging.


PR created automatically by Jules for task 17043812329206748974 started by @DivyanshuChipa

This commit updates `backend/chat.py` to ensure that messages generated by the 'Lumir' AI assistant are saved with a pending delivery status (`delivered=0`) in the database *before* attempting to send them via WebSocket.

Previously, if the WebSocket connection dropped during a long-running Lumir task (e.g., video compression), the generated message was lost because it was only marked delivered upon successful transmission.

With this change:
1.  `create_delivery_entries` is called immediately after `save_message`, setting the status to pending.
2.  `send_to_user` is called, and its return value is checked.
3.  `mark_message_delivered_for_user` is only called if `send_to_user` returns `True`.

This ensures that failed deliveries remain in the `delivery_status` table as pending and are automatically fetched by the client upon reconnection via the existing offline message sync mechanism.
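The pending-first delivery flow described above can be sketched as follows. This is a minimal, self-contained illustration, not the actual `backend/chat.py` code: the helper names (`save_message`, `create_delivery_entries`, `send_to_user`, `mark_message_delivered_for_user`) are taken from the PR text, but their signatures and the in-memory `db` dict standing in for the real tables are assumptions.

```python
import asyncio

# In-memory stand-in for the messages and delivery_status tables.
db = {"messages": {}, "delivery_status": {}}

def save_message(text):
    msg_id = len(db["messages"]) + 1
    db["messages"][msg_id] = text
    return msg_id

def create_delivery_entries(msg_id, user_id):
    # Pending row: delivered=0.
    db["delivery_status"][(msg_id, user_id)] = 0

def mark_message_delivered_for_user(msg_id, user_id):
    db["delivery_status"][(msg_id, user_id)] = 1

async def send_to_user(user_id, payload, connected):
    # Stand-in for the WebSocket send; returns False when the socket is gone.
    return connected

async def deliver_lumir_message(user_id, text, connected):
    msg_id = save_message(text)
    create_delivery_entries(msg_id, user_id)  # mark pending BEFORE sending
    sent = await send_to_user(user_id, {"id": msg_id, "text": text}, connected)
    if sent:
        # Flip to delivered=1 only on a successful send; otherwise the
        # pending row survives for the offline sync to pick up on reconnect.
        mark_message_delivered_for_user(msg_id, user_id)
    return msg_id

# Client disconnected mid-task: the message stays queued as pending.
mid = asyncio.run(deliver_lumir_message("alice", "video ready", connected=False))
```

Because the pending row is written before any send attempt, a dropped connection can no longer lose the message.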

Co-authored-by: DivyanshuChipa <211708943+DivyanshuChipa@users.noreply.github.com>
@google-labs-jules
Contributor

👋 Jules, reporting for duty! I'm here to lend a hand with this pull request.

When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down.

I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job!

For more direct control, you can switch me to Reactive Mode. When this mode is on, I will only act on comments where you specifically mention me with @jules. You can find this option in the Pull Request section of your global Jules UI settings. You can always switch back!

New to Jules? Learn more at jules.google/docs.


For security, I will only act on instructions from the user who triggered this task.

@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.

google-labs-jules bot and others added 2 commits February 27, 2026 13:41
This commit updates `backend/chat.py` to use `asyncio.to_thread` when calling `lumir_engine.process`.

Previously, heavy operations in `lumir_engine.process` (such as video compression via FFmpeg or PDF merging) were executed synchronously. This blocked the FastAPI event loop, causing all WebSocket connections (including the sender's) to time out and disconnect during the operation.

By offloading the engine process to a separate thread, the main event loop remains responsive, keeping WebSocket connections alive while the heavy task completes in the background. This works in conjunction with the previous fix for persistent message queuing to ensure reliable delivery.
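The offloading pattern this commit describes looks roughly like the sketch below. The real `lumir_engine.process` is not shown in this PR, so a `time.sleep` stands in for the blocking FFmpeg/PDF work; the function name and signature here are assumptions for illustration.

```python
import asyncio
import time

def process(request: str) -> str:
    # Stand-in for a blocking operation like video compression or PDF merging.
    time.sleep(0.1)
    return f"done: {request}"

async def handle_request(request: str) -> str:
    # asyncio.to_thread runs the blocking call in a worker thread, so the
    # event loop stays free to service other WebSocket traffic (pings,
    # new messages) while the heavy task runs.
    return await asyncio.to_thread(process, request)

result = asyncio.run(handle_request("compress video"))
# result == "done: compress video"
```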

Co-authored-by: DivyanshuChipa <211708943+DivyanshuChipa@users.noreply.github.com>
This commit refactors `backend/chat.py` to move the entire Lumir message processing pipeline into a separate background task using `asyncio.create_task`.

Previously, even with `asyncio.to_thread`, the `await` call kept the WebSocket handler tied to the processing task. If the connection dropped (e.g., client timeout) during a long operation like video compression, the function would abort, and the result would never be saved or sent.

With this "Fire and Forget" architecture:
1.  The WebSocket loop immediately delegates the request to `handle_lumir_processing` and continues listening for new messages/pings.
2.  The background task (`handle_lumir_processing`) runs independently:
    - Processes the request (in a thread).
    - Saves the result to the DB.
    - Creates a pending delivery entry (`delivered=0`).
    - Attempts to send the response.
    - Updates delivery status (`delivered=1`) only on success.

This ensures that even if the client disconnects during processing, the server completes the task, saves the result, and queues it for future delivery.
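The "Fire and Forget" shape can be sketched as below. This is a simplified stand-in, not the actual handler: `handle_lumir_processing` is named in the PR, but the `results` list and the simulated work are assumptions replacing the real process/save/queue/send pipeline.

```python
import asyncio

results = []

async def handle_lumir_processing(request: str) -> None:
    # Runs independently of the WebSocket handler that spawned it.
    await asyncio.sleep(0.05)            # stands in for the heavy work
    results.append(f"saved: {request}")  # persist result + queue delivery

async def websocket_loop():
    # The loop delegates via create_task and immediately goes back to
    # listening; it does NOT await the processing here, so a client
    # disconnect no longer aborts the task.
    task = asyncio.create_task(handle_lumir_processing("merge pdfs"))
    return task  # keep a reference so the task isn't garbage-collected

async def main():
    task = await websocket_loop()
    # The handler may have returned (client gone), but the server-side
    # task still runs to completion.
    await task

asyncio.run(main())
```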

Co-authored-by: DivyanshuChipa <211708943+DivyanshuChipa@users.noreply.github.com>
@DivyanshuChipa DivyanshuChipa merged commit b25905b into master Feb 27, 2026