
fix(channels): Telegram threading, live listeners, core restart, and webhook cleanup#485

Merged
graycyrus merged 16 commits into tinyhumansai:main from YellowSnnowmann:fix/telegram-replies
Apr 10, 2026

Conversation


@YellowSnnowmann YellowSnnowmann commented Apr 10, 2026

Summary

  • Channel listeners — Start Telegram (and other) channel integrations from JSON-RPC server startup when config has listening integrations; optional OPENHUMAN_DISABLE_CHANNEL_LISTENERS=1 for tests.
  • Desktop UX — TelegramConfig triggers restartCoreProcess() when the core returns restart_required: true, so connecting a bot does not require a manual app quit.
  • Telegram behavior — Threaded replies via reply_to_message_id, typing indicators, update deduplication, edited_message / message_reaction handling, case-insensitive allowlist, and conversation history keyed by chat where appropriate.
  • Reactions — Outbound [REACTION:emoji] markers and inbound reaction events on the event bus (with allowlist checks).
  • Long polling vs webhook — On Bot API 409 “webhook is active”, call deleteWebhook and retry getUpdates so long polling can run after a webhook was previously set.
  • Core update / supervision — Clearer logging around core updates; supervision/startup hooks aligned with channel lifecycle.
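As a sketch of the outbound reaction marker mentioned above, a minimal parser might look like the following. This is illustrative only: the function name, the leading-marker rule, and the return shape are assumptions, not the PR's actual code.

```rust
// Hypothetical parser for a leading `[REACTION:emoji]` marker in an
// outbound draft. Returns the emoji (if present) and the remaining text.
fn split_reaction_marker(text: &str) -> (Option<String>, String) {
    let trimmed = text.trim_start();
    if let Some(rest) = trimmed.strip_prefix("[REACTION:") {
        if let Some(end) = rest.find(']') {
            let emoji = rest[..end].to_string();
            if !emoji.is_empty() {
                // Drop the marker and any whitespace that followed it.
                let remainder = rest[end + 1..].trim_start().to_string();
                return (Some(emoji), remainder);
            }
        }
    }
    // No marker: pass the draft through unchanged.
    (None, text.to_string())
}

fn main() {
    let (emoji, body) = split_reaction_marker("[REACTION:👍] Sounds good!");
    println!("emoji={:?} body={:?}", emoji, body);
}
```

The channel would send the emoji via a reaction API call and the remainder as the normal message body.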

Problem

Telegram channels were easy to misconfigure in practice: listeners were not always started with the server, the UI told users to restart manually instead of orchestrating a core restart, replies did not attach to the triggering message, duplicate update_ids could double-fire the agent, and bots still using webhooks could not long-poll until the webhook was cleared. Several smaller gaps (typing visibility, reactions, history scoping, allowlist casing) made the channel feel unfinished.

Solution

  • run_server_inner (src/core/jsonrpc.rs) spawns start_channels(config) when ChannelsConfig::has_listening_integrations() is true, unless OPENHUMAN_DISABLE_CHANNEL_LISTENERS is set.
  • UI calls Tauri restartCoreProcess() after successful connect when restart_required is returned.
  • Telegram maps inbound message_id to thread_ts for outbound reply_to_message_id; adds TelegramTypingTask, TelegramUpdateWindow, reaction parsing/sending, and delete_webhook_for_long_polling on 409 webhook conflicts.
  • Config — has_listening_integrations() and related schema/helpers; connect path persists Telegram settings where applicable.
  • Traits — finalize_draft thread context and optional supports_reactions() default; core update logging tightened in core_update.rs.
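The update-deduplication piece can be pictured as a small sliding window over `update_id`s. This is a minimal sketch assuming a fixed-capacity FIFO window; the PR's actual TelegramUpdateWindow may use a different capacity or eviction policy.

```rust
use std::collections::VecDeque;

// Sketch of an update_id dedup window: remembers the last `capacity` ids
// and rejects repeats so a re-delivered update cannot double-fire the agent.
struct UpdateWindow {
    seen: VecDeque<i64>,
    capacity: usize,
}

impl UpdateWindow {
    fn new(capacity: usize) -> Self {
        Self { seen: VecDeque::with_capacity(capacity), capacity }
    }

    // Returns true the first time an update_id is observed, false for
    // repeats still inside the window.
    fn admit(&mut self, update_id: i64) -> bool {
        if self.seen.contains(&update_id) {
            return false;
        }
        if self.seen.len() == self.capacity {
            self.seen.pop_front(); // forget the oldest id
        }
        self.seen.push_back(update_id);
        true
    }
}

fn main() {
    let mut window = UpdateWindow::new(128);
    println!("{} {}", window.admit(42), window.admit(42)); // first true, repeat false
}
```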

Submission Checklist

  • Unit tests — Telegram parsing, reactions, typing, allowlist, integration-style coverage as landed on the branch.
  • E2E / integration — Manual Telegram smoke where applicable; no full automated Bot API E2E.
  • Doc comments / inline — Rationale for startup guard, dedup, reply mapping, webhook retry, and UI restart path.

Impact

  • Desktop — Core + Tauri + React channel config only; no mobile/web product surface.
  • Compatibility — Conversation history key shape may change for some flows; see inline notes / prior migration discussion for per-chat scoping.
  • Telegram — Bots need appropriate update subscriptions for reactions; webhook deletion runs only when polling hits the webhook conflict.
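The history-key compatibility note can be illustrated with a hypothetical reconstruction of the key format. The `telegram_alice_chat-1` shape is taken from the test rename mentioned in the commit list below; the real helper may compose keys differently.

```rust
// Hypothetical per-chat history key: the old key was scoped by channel and
// user; the new one also includes the chat id, so the same user in two
// chats gets separate conversation histories.
fn conversation_history_key(channel: &str, user: &str, chat_id: Option<&str>) -> String {
    match chat_id {
        Some(chat) => format!("{channel}_{user}_{chat}"),
        // Channels without chat scoping keep the old two-part key.
        None => format!("{channel}_{user}"),
    }
}

fn main() {
    println!("{}", conversation_history_key("telegram", "alice", Some("chat-1")));
}
```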

Related

Summary by CodeRabbit

  • New Features

    • Attempts to restart the core process automatically when saving Discord or Telegram credentials so connections activate without manual restart.
  • Improvements

    • Telegram will detect webhook conflicts, try to disable webhooks and recover.
    • Clearer core-version compatibility warnings, status events, and reuse-port warnings.
    • More detailed startup and listener diagnostics and supervision traces.
  • Bug Fixes

    • Dropped Telegram messages now log concise debug info for troubleshooting.

- Introduced a new static variable to track previously warned world-readable config files, preventing duplicate warnings.
- Updated the warning logic to only log a warning for each unique world-readable config file, improving log clarity and reducing noise.
- Added new `ChannelReactionReceived` and `ChannelReactionSent` events to the DomainEvent enum, expanding event handling capabilities in the event bus.
- Included tests for the new reaction events to ensure proper functionality and integration.
- Introduced functions to parse log file constraints from environment variables and filter log events based on these constraints.
- Enhanced the `init_for_cli_run` function to apply the new filtering logic, improving log management and clarity.
- Updated the `conversation_history_key` function to include thread context for Telegram, ensuring accurate message targeting.
- Added a new trait method `supports_reactions` to the `Channel` trait, indicating support for emoji reactions.
- Implemented integration tests for Telegram channel features, including reaction handling and thread message forwarding.
…dicators

- Added support for emoji reactions in Telegram responses, allowing for contextual acknowledgment of user messages.
- Implemented a decision heuristic for when to use reactions, improving user interaction quality.
- Introduced a typing indicator that activates immediately upon receiving a message, providing instant feedback to users.
- Updated the channel delivery instructions to include new reaction syntax and guidelines for usage.
- Enhanced tests to cover new reaction handling and message acknowledgment features, ensuring robust functionality.
- Changed the route key in tests from `telegram_alice` to `telegram_alice_chat-1` to match the updated `conversation_history_key` format for Telegram.
- This adjustment ensures accurate routing and consistency in message handling tests.
- Refactored the message handling tests to utilize a `ChannelMessage` struct for improved clarity and maintainability.
- Updated the route key generation to use the `conversation_history_key` function, ensuring consistency in message routing.
- Simplified the invocation of `process_channel_message` by directly passing the constructed message, enhancing readability.
- Updated the `finalize_draft` method in the `Channel` trait and its implementation for `TelegramChannel` to accept an optional `thread_ts` parameter, allowing for message threading.
- Adjusted related message handling functions to utilize the new parameter, ensuring proper message context during sending.
- Modified tests to reflect changes in the `finalize_draft` method signature, enhancing the robustness of message handling in threaded conversations.
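The `finalize_draft` signature change described above can be sketched as follows. Trait and type names follow the PR description, but the method bodies are illustrative stand-ins rather than the real implementation.

```rust
// Sketch of the widened trait surface: an optional thread_ts lets Telegram
// attach replies to the triggering message, and supports_reactions() is a
// new capability flag with a conservative default.
trait Channel {
    fn finalize_draft(&self, draft: &str, thread_ts: Option<&str>) -> String;

    fn supports_reactions(&self) -> bool {
        false
    }
}

struct TelegramChannel;

impl Channel for TelegramChannel {
    fn finalize_draft(&self, draft: &str, thread_ts: Option<&str>) -> String {
        // The real channel would set reply_to_message_id on the outbound
        // sendMessage payload; here we just make the threading visible.
        match thread_ts {
            Some(ts) => format!("reply_to={ts}: {draft}"),
            None => draft.to_string(),
        }
    }

    fn supports_reactions(&self) -> bool {
        true
    }
}

fn main() {
    let ch = TelegramChannel;
    println!("{}", ch.finalize_draft("On it!", Some("42")));
}
```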
…nnection

- Added functionality to restart the core process when a channel connection requires a restart, enhancing the user experience by automating the process.
- Implemented error handling to log any issues during the restart, ensuring users are informed to restart the app if necessary.
- Updated both Discord and Telegram configuration components to include this new behavior, improving consistency across channel integrations.
- Added warnings for outdated sidecar versions and potential mismatches in UI features, improving user awareness of version compatibility.
- Implemented detailed error logging for failed attempts to fetch the latest core release, providing users with clear instructions for manual updates if necessary.
- Enhanced logging for reusing existing core RPC endpoints, alerting users to potential issues with stale connections.
…ging

- Added support for real-time channel listeners for Telegram and Discord, ensuring inbound bot messages are polled during `openhuman run`.
- Introduced a method to check for configured listening integrations, preventing unnecessary listener spawning when not needed.
- Enhanced logging for channel connection events and message handling, providing better visibility into channel operations and user interactions.
- Updated the Telegram channel connection to log the count of allowed users and mention-only settings for improved debugging.
… imports in channel config components

- Bumped the openhuman package version from 0.49.17 to 0.51.18 in Cargo.toml and Cargo.lock.
- Refactored import statements in DiscordConfig.tsx and TelegramConfig.tsx to maintain consistency and ensure proper functionality.
- Added `delete_webhook_for_long_polling` method to clear the Bot API webhook, enabling `getUpdates` long polling.
- Updated error handling in `fetch_bot_username` to call the new method when a 409 conflict indicates an active webhook, allowing for retries after webhook deletion.
- Enhanced logging for better traceability of webhook deletion and polling conflicts.
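The 409 recovery flow above reduces to a small branch on the Bot API error description. This sketch isolates just that decision; the enum and function names are hypothetical, and the substring check mirrors the `webhook_blocks_polling` test quoted later in review.

```rust
// Sketch of the getUpdates 409 branching only.
#[derive(Debug, PartialEq)]
enum ConflictAction {
    DeleteWebhookAndRetry,
    WarnConcurrentPoller,
}

fn on_get_updates_conflict(description: &str) -> ConflictAction {
    if description.to_lowercase().contains("webhook") {
        // A webhook owns the token's updates: clear it, then retry getUpdates.
        ConflictAction::DeleteWebhookAndRetry
    } else {
        // Another long poller holds the token; deleting a webhook won't help.
        ConflictAction::WarnConcurrentPoller
    }
}

fn main() {
    let action = on_get_updates_conflict(
        "Conflict: can't use getUpdates method while webhook is active",
    );
    println!("{action:?}");
}
```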

coderabbitai bot commented Apr 10, 2026

📝 Walkthrough

Walkthrough

Adds guarded channel-listener startup on server launch, improves Tauri core RPC/upgrade logging and error handling (including compatibility check), requires UI-triggered core restart for certain channel config changes, enriches Telegram handling (webhook deletion, dropped-message logging), and adds a helper to detect configured listening integrations.

Changes

Cohort / File(s) Summary
Tauri core management
app/src-tauri/src/core_process.rs, app/src-tauri/src/core_update.rs
Warn when reusing an already-open RPC port. In check_and_update_core, compute below_app_minimum, log compatibility warnings, and change fetch-error handling: on fetch failure return error when force or below-minimum (emit core-update:status:"error" and remediation log); otherwise treat fetch failure as non-fatal and emit core-update:status:"up_to_date".
React channel config components
app/src/components/channels/DiscordConfig.tsx, app/src/components/channels/TelegramConfig.tsx
On credential-based success that sets restart_required, attempt restartCoreProcess() and only mark channel connected after successful restart; on restart failure log and set a fixed UI error instructing the user to restart the app.
Server startup / background listeners
src/core/jsonrpc.rs
Spawn a background Tokio task to run start_channels(config) during server startup unless OPENHUMAN_DISABLE_CHANNEL_LISTENERS disables it; handle config load failures and log relevant info.
Channel runtime & supervision
src/openhuman/channels/runtime/startup.rs, src/openhuman/channels/runtime/supervision.rs, src/openhuman/config/schema/channels.rs
Log Telegram startup config summary (counts/flags) or explicit absence; add ChannelsConfig::has_listening_integrations() helper; add supervision info/debug logs for listener task start and before each listen() call.
Telegram provider & controller
src/openhuman/channels/providers/telegram/channel.rs, src/openhuman/channels/controllers/ops.rs
Add delete_webhook_for_long_polling() helper to clear webhook when 409 indicates webhook-active and retry long-polling; log dropped messages with sender info and message length; log allowed_users_count and mention_only after persisting Telegram config.
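The startup guard summarized above reduces to a pure predicate. This sketch assumes the caller reads the environment variable and passes it in (e.g. `std::env::var("OPENHUMAN_DISABLE_CHANNEL_LISTENERS").ok().as_deref()`); treating any non-empty value as "disabled" is an assumption, since the PR only documents setting it to 1.

```rust
// Pure-function sketch of the channel-listener spawn guard: listeners start
// only when the config declares listening integrations and the disable
// override is unset.
fn should_spawn_listeners(has_listening_integrations: bool, disable_flag: Option<&str>) -> bool {
    let disabled = disable_flag.map_or(false, |v| !v.is_empty());
    has_listening_integrations && !disabled
}

fn main() {
    // In run_server_inner this result would gate a tokio::spawn of
    // start_channels(config).
    println!("{}", should_spawn_listeners(true, None));
}
```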

Sequence Diagram(s)

sequenceDiagram
    actor TauriApp as Tauri App
    participant Core as Core Process
    participant GitHub as GitHub API
    participant Listener as Channel Listener Task

    TauriApp->>Core: ensure_running()
    alt RPC port already open
        Core-->>TauriApp: log warn (reusing RPC port - possible stale core)
    end

    TauriApp->>Core: check_and_update_core(force?)
    Core->>Core: compute below_app_minimum
    alt force OR below_app_minimum
        Core->>GitHub: fetch_latest_release()
        alt fetch fails
            Core-->>TauriApp: emit core-update:status "error", log remediation
            Core-->>TauriApp: return Err
        else fetch succeeds
            Core->>Core: proceed with update flow
        end
    else not forced AND not below_app_minimum
        Core->>GitHub: fetch_latest_release()
        alt fetch fails
            Core-->>TauriApp: emit core-update:status "up_to_date", return Ok(())
        else fetch succeeds
            Core->>Core: proceed with update flow
        end
    end

    Note over Core,Listener: Server startup (unless disabled)
    Core->>Listener: spawn start_channels(config) (background task)
    Listener->>Listener: load config
    alt has_listening_integrations == true
        Listener->>Listener: spawn channel listeners (log details)
    else
        Listener-->>Listener: log that no listening integrations are configured
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • senamakel

"🐰
When ports reuse, the rabbit sighs,
It nudges webhooks, watches skies.
Listeners wake and logs will sing,
Restarts hop in to fix the thing.
Hooray — a carrot-coded spring!"

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The title concisely summarizes the main changes: Telegram threading, live listeners startup, core restart on config, and webhook cleanup for long-polling recovery.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/openhuman/channels/controllers/ops.rs (1)

164-218: ⚠️ Potential issue | 🟠 Major

Avoid partial success between credential storage and config.toml persistence.

This writes Telegram credentials first and only then updates channels_config.telegram. If persisted.save() fails, connect_channel() returns an error but the bot token is already stored, so later status checks/retries can observe a half-configured channel.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/channels/controllers/ops.rs` around lines 164 - 218, The
current flow calls credentials::ops::store_provider_credentials(...) before
updating and persisting channels_config.telegram (TelegramConfig) inside
connect_channel, which can leave a half-configured state if persisted.save()
fails; change the sequence to make the operation atomic by either (A) update
persisted.channels_config.telegram and call persisted.save() first and only call
credentials::ops::store_provider_credentials(...) after a successful save, or
(B) keep the current order but add a compensating rollback that calls
credentials::ops::delete_provider_credentials(...) (or the appropriate remove
function) if persisted.save() returns an error; ensure error handling in
connect_channel reports the combined failure and that the code references the
same provider_key/profile used in store_provider_credentials and the
TelegramConfig instance so credentials and config remain consistent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/src/components/channels/DiscordConfig.tsx`:
- Around line 133-142: The code sets the channel state to “connected” before
performing the mandatory restart, so if restartCoreProcess() fails the UI still
shows connected; move the store/state update that marks the channel as connected
so it only runs after await restartCoreProcess() succeeds, or if you must set it
earlier, add rollback logic in the catch to revert the store/state from
connected (and keep calling setError/log) — update the logic around
result.restart_required and restartCoreProcess() and adjust the code that marks
the channel connected to run after restartCoreProcess() or to be reverted in the
catch.

In `@app/src/components/channels/TelegramConfig.tsx`:
- Around line 254-263: The code marks Telegram as connected before attempting
restart (result.restart_required) but doesn't undo that persisted state if
restartCoreProcess() fails; in the catch block for restartErr, call the same
persistence/update routine used when saving the channel connection (the function
that writes the Telegram connection/connected flag to storage or state—locate
where the connection was originally saved) to set the stored connected flag back
to false, then call setError('Channel saved. Restart the app to activate it.')
as now; ensure this update happens inside the catch so the persisted state
mirrors the failed restart and the UI banner and persisted status stay
consistent.

In `@src/openhuman/channels/providers/telegram/channel.rs`:
- Around line 684-689: Remove the message-body preview from the unauthorized
logging path: in the tracing::debug call inside the unauthorized handler (the
call currently using username, sender_id, text_preview =
%truncate_with_ellipsis(text, 80), and the message "[telegram] dropped
message..."), stop including truncate_with_ellipsis(text, 80) and instead log
only sender identity and the message size (e.g., text.len() or similar)
alongside username and sender_id so no user content is persisted in logs.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 047cd959-c419-4a80-8f9b-5412931bda20

📥 Commits

Reviewing files that changed from the base of the PR and between 6410db1 and 844c368.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (11)
  • app/src-tauri/src/core_process.rs
  • app/src-tauri/src/core_update.rs
  • app/src/components/channels/DiscordConfig.tsx
  • app/src/components/channels/TelegramConfig.tsx
  • path=/Users/cardinal/.openhuman/users/69ccc8e95692bb0ddd56c10f/config.toml
  • src/core/jsonrpc.rs
  • src/openhuman/channels/controllers/ops.rs
  • src/openhuman/channels/providers/telegram/channel.rs
  • src/openhuman/channels/runtime/startup.rs
  • src/openhuman/channels/runtime/supervision.rs
  • src/openhuman/config/schema/channels.rs

Comment thread app/src/components/channels/TelegramConfig.tsx
Comment on lines 254 to +263
if (result.restart_required) {
setError(result.message ?? 'Restart the service to activate the channel.');
log('restart required after connect — restarting core process');
try {
await restartCoreProcess();
log('core process restarted successfully');
} catch (restartErr) {
const msg = restartErr instanceof Error ? restartErr.message : String(restartErr);
log('core restart failed: %s', msg);
setError('Channel saved. Restart the app to activate it.');
}

⚠️ Potential issue | 🟠 Major

Mirror restart failure into the stored Telegram connection state.

This marks Telegram as connected before the required restart. If the restart fails, the banner updates, but the persisted connection status still reads connected even though the listener was never activated.

Suggested change
         if (result.restart_required) {
           log('restart required after connect — restarting core process');
           try {
             await restartCoreProcess();
             log('core process restarted successfully');
           } catch (restartErr) {
             const msg = restartErr instanceof Error ? restartErr.message : String(restartErr);
             log('core restart failed: %s', msg);
+            dispatch(
+              upsertChannelConnection({
+                channel: 'telegram',
+                authMode: spec.mode,
+                patch: {
+                  status: 'error',
+                  lastError: 'Channel saved. Restart the app to activate it.',
+                },
+              })
+            );
             setError('Channel saved. Restart the app to activate it.');
           }
         }

Comment thread src/openhuman/channels/providers/telegram/channel.rs

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
src/core/jsonrpc.rs (1)

771-789: Consider storing the task handle for graceful shutdown coordination.

The spawned task runs start_channels, which contains an event loop that runs indefinitely. When axum::serve receives a shutdown signal (line 795), this background task isn't explicitly cancelled or awaited.

If graceful shutdown of channel listeners is desired (e.g., allowing in-flight messages to complete), you may want to store this JoinHandle and coordinate shutdown. However, if the current behavior (OS-level termination of the spawned task) is acceptable, this is fine as-is.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/core/jsonrpc.rs` around lines 771 - 789, The spawned background task
created with tokio::spawn that calls
crate::openhuman::channels::start_channels(config) is never stored or
coordinated with the server shutdown, so it runs indefinitely; change the code
to capture and store the JoinHandle returned by tokio::spawn (e.g., let
channels_handle = tokio::spawn(...)) and wire a graceful shutdown path: create a
cancellation mechanism (CancellationToken or a oneshot) that you pass into
start_channels or signal the task to stop, and then on axum::Server shutdown
await channels_handle.await (or abort if needed) to ensure listeners stop or
finish in-flight work; update start_channels signature if necessary to accept
the shutdown signal so shutdown is coordinated instead of leaving the task
unmanaged.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 9e95c505-b76f-4524-a396-4075238a5f7d

📥 Commits

Reviewing files that changed from the base of the PR and between 844c368 and 011b203.

📒 Files selected for processing (4)
  • app/src-tauri/src/core_process.rs
  • src/core/jsonrpc.rs
  • src/openhuman/channels/runtime/startup.rs
  • src/openhuman/channels/runtime/supervision.rs
✅ Files skipped from review due to trivial changes (1)
  • src/openhuman/channels/runtime/supervision.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • app/src-tauri/src/core_process.rs

…onnection updates are dispatched regardless of restart requirement. Improved error handling during core process restart and enhanced logging for connection status.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/src/components/channels/DiscordConfig.tsx`:
- Around line 141-145: The restart-failure catch currently only logs and calls
setError, leaving the Redux connection state stuck at "connecting"; update the
catch to also transition the channel's connection status to "error" by
dispatching the appropriate Redux action (e.g.,
dispatch(updateChannelConnectionStatus(channelId, 'error')) or
dispatch(setChannelStatus(channelId, 'error'))) and, if applicable, clear any
local connecting flag (e.g., setConnecting(false)) so UI badges and state
reflect the failure; keep the existing log(...) and setError(...) behavior.

In `@src/openhuman/channels/providers/telegram/channel.rs`:
- Around line 1858-1873: The code currently unconditionally calls
delete_webhook_for_long_polling() when a 409 webhook conflict is detected in the
getUpdates handling (the webhook_blocks_polling branch); change this to first
call Telegram's getWebhookInfo (or otherwise check webhook ownership) and
compare the returned webhook URL to our expected webhook URL, and only call
delete_webhook_for_long_polling() if the URL matches or if an explicit opt-in
flag (e.g. allow_delete_webhook or opt_out_of_shared_tokens = false) is set in
configuration; if the webhook does not match and no opt-in flag is set, log a
clear warning and back off without deleting. Ensure changes touch the
getUpdates/409 handling, the webhook_blocks_polling logic, and the
delete_webhook_for_long_polling() invocation so the decision flows from
getWebhookInfo or the opt-in flag.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 79c0a860-faf5-480a-add6-0b86ab23adb3

📥 Commits

Reviewing files that changed from the base of the PR and between 011b203 and 25a064d.

📒 Files selected for processing (3)
  • app/src/components/channels/DiscordConfig.tsx
  • app/src/components/channels/TelegramConfig.tsx
  • src/openhuman/channels/providers/telegram/channel.rs
✅ Files skipped from review due to trivial changes (1)
  • app/src/components/channels/TelegramConfig.tsx

Comment on lines +141 to +145
} catch (restartErr) {
const msg = restartErr instanceof Error ? restartErr.message : String(restartErr);
log('core restart failed: %s', msg);
setError('Channel saved. Restart the app to activate it.');
}


⚠️ Potential issue | 🟠 Major

Update channel state on restart failure (currently left as connecting).

After Line 64 sets connecting, the restart-failure catch only sets local UI error. The Redux connection status is not transitioned to error, so badge/state can stay stale.

Suggested fix
          } catch (restartErr) {
            const msg = restartErr instanceof Error ? restartErr.message : String(restartErr);
            log('core restart failed: %s', msg);
+            dispatch(
+              setChannelConnectionStatus({
+                channel: 'discord',
+                authMode: spec.mode,
+                status: 'error',
+                lastError: 'Channel saved. Restart the app to activate it.',
+              })
+            );
            setError('Channel saved. Restart the app to activate it.');
          }

Comment on lines +1858 to +1873
let webhook_blocks_polling = description.to_lowercase().contains("webhook");
if webhook_blocks_polling {
    tracing::warn!(
        "[telegram] getUpdates conflict (409): webhook is active; calling deleteWebhook"
    );
    if self.delete_webhook_for_long_polling().await {
        tracing::info!("[telegram] deleteWebhook ok; retrying getUpdates");
        continue;
    }
    tracing::warn!("[telegram] deleteWebhook did not succeed; backing off");
} else {
    tracing::warn!(
        "Telegram polling conflict (409): {description}. \
         Ensure only one `openhuman` process is using this bot token."
    );
}
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In the Telegram Bot API, when getUpdates returns HTTP 409 because a webhook is active, does calling deleteWebhook always remove the currently configured webhook for that bot token, even if it belongs to another deployment? What official guidance is given for safely switching between webhook mode and long polling?

💡 Result:

Yes. In Telegram’s Bot API, the webhook configuration is global per bot token (Telegram stores one “current webhook URL” for that bot). So if getUpdates fails with HTTP 409 Conflict because a webhook is set, calling deleteWebhook removes whatever webhook is currently configured for that token—even if it was set by a different deployment/environment using the same token. This follows from the API’s model that you “will not be able to receive updates using getUpdates for as long as an outgoing webhook is set up,” and deleteWebhook is the method to remove that webhook integration. [1]

Official guidance for safely switching modes (webhook ↔ long polling):

  • Webhook → long polling: Call deleteWebhook (optionally drop_pending_updates=true to discard queued updates), then start using getUpdates. Telegram explicitly notes getUpdates won’t work while a webhook is set, and advises using an offset strategy to avoid duplicate updates. [1][2]
  • Long polling → webhook: Call setWebhook to set the webhook URL (optionally drop_pending_updates=true if you want to drop backlog). Telegram explicitly notes that while a webhook is set, getUpdates is unavailable. [1][2]

Practical “safe switch” checks implied by the official methods:

  • Use getWebhookInfo to confirm the url is empty before relying on long polling (it returns an empty url when using getUpdates). [1]

Sources: Telegram Bot API docs + FAQ. [1][2]

References:
[1] (core.telegram.org) (Telegram Bot API: getUpdates/setWebhook notes; deleteWebhook, getWebhookInfo, drop_pending_updates)
[2] (core.telegram.org) (Telegram Bots FAQ: long polling vs webhook exclusivity; offset guidance)

Don't unconditionally delete an active webhook on 409 conflict.

When getUpdates returns HTTP 409 because a webhook is active, calling deleteWebhook will remove whatever webhook is currently configured for that bot token—even if it belongs to another deployment. Since Telegram's webhook configuration is global per token (one webhook URL per token), this code can silently take over the bot token and knock another deployment offline.

Check webhook ownership before deleting. Either gate this behind an explicit opt-in flag, or call getWebhookInfo to validate that the webhook URL matches the expected value before calling deleteWebhook.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/openhuman/channels/providers/telegram/channel.rs` around lines 1858 -
1873, The code currently unconditionally calls delete_webhook_for_long_polling()
when a 409 webhook conflict is detected in the getUpdates handling (the
webhook_blocks_polling branch); change this to first call Telegram's
getWebhookInfo (or otherwise check webhook ownership) and compare the returned
webhook URL to our expected webhook URL, and only call
delete_webhook_for_long_polling() if the URL matches or if an explicit opt-in
flag (e.g. allow_delete_webhook or opt_out_of_shared_tokens = false) is set in
configuration; if the webhook does not match and no opt-in flag is set, log a
clear warning and back off without deleting. Ensure changes touch the
getUpdates/409 handling, the webhook_blocks_polling logic, and the
delete_webhook_for_long_polling() invocation so the decision flows from
getWebhookInfo or the opt-in flag.
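The gating the prompt describes can be isolated into a small pure decision function. This is a minimal sketch, not code from the PR: `should_delete_webhook`, the `expected_url` parameter, and the `allow_takeover` opt-in flag are all hypothetical names, assuming the caller has already fetched the current webhook URL via getWebhookInfo.

```rust
/// Decide whether it is safe to call deleteWebhook after getUpdates
/// returns 409. `current_url` is the URL reported by getWebhookInfo;
/// `expected_url` is the webhook this deployment would have set itself
/// (None if it never uses webhook mode); `allow_takeover` is an
/// explicit opt-in config flag for shared-token setups.
fn should_delete_webhook(
    current_url: &str,
    expected_url: Option<&str>,
    allow_takeover: bool,
) -> bool {
    if allow_takeover {
        // Operator explicitly accepted that this process owns the token.
        return true;
    }
    match expected_url {
        // Only delete a webhook we recognize as our own.
        Some(expected) => current_url == expected,
        // No expected URL and no opt-in: back off instead of deleting.
        None => false,
    }
}

fn main() {
    // Webhook set by another deployment, no opt-in: do not delete.
    assert!(!should_delete_webhook("https://other.example/hook", None, false));
    // Webhook matches the one we configured ourselves: safe to delete.
    assert!(should_delete_webhook(
        "https://ours.example/hook",
        Some("https://ours.example/hook"),
        false,
    ));
    // Explicit opt-in overrides the ownership check.
    assert!(should_delete_webhook("https://other.example/hook", None, true));
}
```

With this shape, the 409 handler calls deleteWebhook only when the function returns true, and otherwise logs a warning and backs off, which keeps another deployment's webhook intact.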

@graycyrus graycyrus merged commit aee9c52 into tinyhumansai:main Apr 10, 2026
8 of 9 checks passed