fix(channels): Telegram threading, live listeners, core restart, and webhook cleanup #485
Conversation
- Introduced a new static variable to track previously warned world-readable config files, preventing duplicate warnings.
- Updated the warning logic to log only one warning per unique world-readable config file, improving log clarity and reducing noise.
- Added new `ChannelReactionReceived` and `ChannelReactionSent` events to the `DomainEvent` enum, expanding event handling capabilities in the event bus.
- Included tests for the new reaction events to ensure proper functionality and integration.

- Introduced functions to parse log file constraints from environment variables and filter log events based on these constraints.
- Enhanced the `init_for_cli_run` function to apply the new filtering logic, improving log management and clarity.
- Updated the `conversation_history_key` function to include thread context for Telegram, ensuring accurate message targeting.
- Added a new trait method `supports_reactions` to the `Channel` trait, indicating support for emoji reactions.
- Implemented integration tests for Telegram channel features, including reaction handling and thread message forwarding.

…dicators
- Added support for emoji reactions in Telegram responses, allowing contextual acknowledgment of user messages.
- Implemented a decision heuristic for when to use reactions, improving user interaction quality.
- Introduced a typing indicator that activates immediately upon receiving a message, providing instant feedback to users.
- Updated the channel delivery instructions to include the new reaction syntax and usage guidelines.
- Enhanced tests to cover new reaction handling and message acknowledgment features.
- Changed the route key in tests from `telegram_alice` to `telegram_alice_chat-1` to match the updated `conversation_history_key` format for Telegram.
- This adjustment ensures accurate routing and consistency in message handling tests.
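The key change above can be sketched as a pure function. This is an illustrative reconstruction only: the real `conversation_history_key` signature is not shown in this PR, and the format below merely reproduces the `telegram_alice` → `telegram_alice_chat-1` values from the tests.

```rust
/// Hypothetical sketch of the chat-scoped history key. Appending the chat id
/// means conversations with the same user in different Telegram chats no
/// longer share history. The real function may take different parameters.
fn conversation_history_key(channel: &str, username: &str, chat_id: Option<&str>) -> String {
    match chat_id {
        // New Telegram behavior: key includes the chat so routing is chat-scoped.
        Some(chat) => format!("{channel}_{username}_{chat}"),
        // Channels without chat scoping keep the old two-part key.
        None => format!("{channel}_{username}"),
    }
}

fn main() {
    assert_eq!(conversation_history_key("telegram", "alice", None), "telegram_alice");
    assert_eq!(
        conversation_history_key("telegram", "alice", Some("chat-1")),
        "telegram_alice_chat-1"
    );
}
```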
- Refactored the message handling tests to use a `ChannelMessage` struct for improved clarity and maintainability.
- Updated the route key generation to use the `conversation_history_key` function, ensuring consistency in message routing.
- Simplified the invocation of `process_channel_message` by directly passing the constructed message, enhancing readability.

- Updated the `finalize_draft` method in the `Channel` trait and its `TelegramChannel` implementation to accept an optional `thread_ts` parameter, allowing for message threading.
- Adjusted related message handling functions to use the new parameter, ensuring proper message context during sending.
- Modified tests to reflect the changed `finalize_draft` signature, improving the robustness of threaded message handling.
…nnection
- Added functionality to restart the core process when a channel connection requires a restart, automating a previously manual step.
- Implemented error handling to log any issues during the restart, informing users to restart the app if necessary.
- Updated both the Discord and Telegram configuration components to include this behavior, improving consistency across channel integrations.

- Added warnings for outdated sidecar versions and potential mismatches in UI features, improving user awareness of version compatibility.
- Implemented detailed error logging for failed attempts to fetch the latest core release, with clear instructions for manual updates if necessary.
- Enhanced logging when reusing an existing core RPC endpoint, alerting users to potentially stale connections.

…ging
- Added support for real-time channel listeners for Telegram and Discord, ensuring inbound bot messages are polled during `openhuman run`.
- Introduced a method to check for configured listening integrations, preventing unnecessary listener spawning when not needed.
- Enhanced logging for channel connection events and message handling, providing better visibility into channel operations and user interactions.
- Updated the Telegram channel connection to log the count of allowed users and the mention-only setting for easier debugging.

… imports in channel config components
- Bumped the openhuman package version from 0.49.17 to 0.51.18 in Cargo.toml and Cargo.lock.
- Refactored import statements in DiscordConfig.tsx and TelegramConfig.tsx for consistency.

- Added a `delete_webhook_for_long_polling` method to clear the Bot API webhook, enabling `getUpdates` long polling.
- Updated error handling in `fetch_bot_username` to call the new method when a 409 conflict indicates an active webhook, allowing retries after webhook deletion.
- Enhanced logging for better traceability of webhook deletion and polling conflicts.
📝 Walkthrough

Adds guarded channel-listener startup on server launch, improves Tauri core RPC/upgrade logging and error handling (including a compatibility check), requires a UI-triggered core restart for certain channel config changes, enriches Telegram handling (webhook deletion, dropped-message logging), and adds a helper to detect configured listening integrations.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor TauriApp as Tauri App
    participant Core as Core Process
    participant GitHub as GitHub API
    participant Listener as Channel Listener Task
    TauriApp->>Core: ensure_running()
    alt RPC port already open
        Core-->>TauriApp: log warn (reusing RPC port - possible stale core)
    end
    TauriApp->>Core: check_and_update_core(force?)
    Core->>Core: compute below_app_minimum
    alt force OR below_app_minimum
        Core->>GitHub: fetch_latest_release()
        alt fetch fails
            Core-->>TauriApp: emit core-update:status "error", log remediation
            Core-->>TauriApp: return Err
        else fetch succeeds
            Core->>Core: proceed with update flow
        end
    else not forced AND not below_app_minimum
        Core->>GitHub: fetch_latest_release()
        alt fetch fails
            Core-->>TauriApp: emit core-update:status "up_to_date", return Ok(())
        else fetch succeeds
            Core->>Core: proceed with update flow
        end
    end
    Note over Core,Listener: Server startup (unless disabled)
    Core->>Listener: spawn start_channels(config) (background task)
    Listener->>Listener: load config
    alt has_listening_integrations == true
        Listener->>Listener: spawn channel listeners (log details)
    else
        Listener-->>Listener: log that no listening integrations are configured
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/openhuman/channels/controllers/ops.rs (1)
164-218: ⚠️ Potential issue | 🟠 Major

Avoid partial success between credential storage and `config.toml` persistence.

This writes Telegram credentials first and only then updates `channels_config.telegram`. If `persisted.save()` fails, `connect_channel()` returns an error but the bot token is already stored, so later status checks/retries can observe a half-configured channel.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/channels/controllers/ops.rs` around lines 164 - 218, The current flow calls credentials::ops::store_provider_credentials(...) before updating and persisting channels_config.telegram (TelegramConfig) inside connect_channel, which can leave a half-configured state if persisted.save() fails; change the sequence to make the operation atomic by either (A) update persisted.channels_config.telegram and call persisted.save() first and only call credentials::ops::store_provider_credentials(...) after a successful save, or (B) keep the current order but add a compensating rollback that calls credentials::ops::delete_provider_credentials(...) (or the appropriate remove function) if persisted.save() returns an error; ensure error handling in connect_channel reports the combined failure and that the code references the same provider_key/profile used in store_provider_credentials and the TelegramConfig instance so credentials and config remain consistent.
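Option (B) from this comment, a compensating rollback, might look like the following sketch. The store types and function shapes here are stand-ins, not the actual `credentials::ops` API or `config.toml` persistence layer.

```rust
use std::collections::HashMap;

/// Stand-in for the credential store; the real API differs.
struct CredentialStore(HashMap<String, String>);

impl CredentialStore {
    fn store(&mut self, key: &str, token: &str) {
        self.0.insert(key.to_string(), token.to_string());
    }
    fn delete(&mut self, key: &str) {
        self.0.remove(key);
    }
}

/// Store the bot token, then persist config; if persisting fails, roll the
/// credential write back so no half-configured channel is left behind.
fn connect_channel(
    creds: &mut CredentialStore,
    save_config: impl Fn() -> Result<(), String>,
) -> Result<(), String> {
    creds.store("telegram", "bot-token");
    if let Err(e) = save_config() {
        creds.delete("telegram"); // compensating rollback
        return Err(format!("config save failed, credentials rolled back: {e}"));
    }
    Ok(())
}

fn main() {
    let mut creds = CredentialStore(HashMap::new());

    // Save fails: the error propagates and no orphaned token remains.
    assert!(connect_channel(&mut creds, || Err("disk full".into())).is_err());
    assert!(creds.0.is_empty());

    // Save succeeds: token and config are consistent.
    assert!(connect_channel(&mut creds, || Ok(())).is_ok());
    assert_eq!(creds.0.get("telegram").map(String::as_str), Some("bot-token"));
}
```

Option (A), persisting the config before storing credentials, avoids the rollback entirely at the cost of a possible config entry without credentials, which is the easier state to detect and repair.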
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/src/components/channels/DiscordConfig.tsx`:
- Around line 133-142: The code sets the channel state to “connected” before
performing the mandatory restart, so if restartCoreProcess() fails the UI still
shows connected; move the store/state update that marks the channel as connected
so it only runs after await restartCoreProcess() succeeds, or if you must set it
earlier, add rollback logic in the catch to revert the store/state from
connected (and keep calling setError/log) — update the logic around
result.restart_required and restartCoreProcess() and adjust the code that marks
the channel connected to run after restartCoreProcess() or to be reverted in the
catch.
In `@app/src/components/channels/TelegramConfig.tsx`:
- Around line 254-263: The code marks Telegram as connected before attempting
restart (result.restart_required) but doesn't undo that persisted state if
restartCoreProcess() fails; in the catch block for restartErr, call the same
persistence/update routine used when saving the channel connection (the function
that writes the Telegram connection/connected flag to storage or state—locate
where the connection was originally saved) to set the stored connected flag back
to false, then call setError('Channel saved. Restart the app to activate it.')
as now; ensure this update happens inside the catch so the persisted state
mirrors the failed restart and the UI banner and persisted status stay
consistent.
In `@src/openhuman/channels/providers/telegram/channel.rs`:
- Around line 684-689: Remove the message-body preview from the unauthorized
logging path: in the tracing::debug call inside the unauthorized handler (the
call currently using username, sender_id, text_preview =
%truncate_with_ellipsis(text, 80), and the message "[telegram] dropped
message..."), stop including truncate_with_ellipsis(text, 80) and instead log
only sender identity and the message size (e.g., text.len() or similar)
alongside username and sender_id so no user content is persisted in logs.
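A minimal sketch of that privacy-safe logging shape, with hypothetical names (the real code uses a `tracing::debug!` call with structured fields):

```rust
/// Builds the log line for a dropped unauthorized message: identity and size
/// only, never a body preview, so no user content lands in the logs.
fn dropped_message_log_fields(username: &str, sender_id: i64, text: &str) -> String {
    format!(
        "[telegram] dropped message from unauthorized sender \
         username={username} sender_id={sender_id} text_len={}",
        text.len()
    )
}

fn main() {
    let line = dropped_message_log_fields("mallory", 42, "secret plans");
    assert!(line.contains("text_len=12"));
    assert!(!line.contains("secret")); // message body never appears
}
```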
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 047cd959-c419-4a80-8f9b-5412931bda20
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (11)
- app/src-tauri/src/core_process.rs
- app/src-tauri/src/core_update.rs
- app/src/components/channels/DiscordConfig.tsx
- app/src/components/channels/TelegramConfig.tsx
- path=/Users/cardinal/.openhuman/users/69ccc8e95692bb0ddd56c10f/config.toml
- src/core/jsonrpc.rs
- src/openhuman/channels/controllers/ops.rs
- src/openhuman/channels/providers/telegram/channel.rs
- src/openhuman/channels/runtime/startup.rs
- src/openhuman/channels/runtime/supervision.rs
- src/openhuman/config/schema/channels.rs
```tsx
if (result.restart_required) {
  setError(result.message ?? 'Restart the service to activate the channel.');
  log('restart required after connect — restarting core process');
  try {
    await restartCoreProcess();
    log('core process restarted successfully');
  } catch (restartErr) {
    const msg = restartErr instanceof Error ? restartErr.message : String(restartErr);
    log('core restart failed: %s', msg);
    setError('Channel saved. Restart the app to activate it.');
  }
```
Mirror restart failure into the stored Telegram connection state.
This marks Telegram as connected before the required restart. If the restart fails, the banner updates, but the persisted connection status still reads connected even though the listener was never activated.
Suggested change

```diff
 if (result.restart_required) {
   log('restart required after connect — restarting core process');
   try {
     await restartCoreProcess();
     log('core process restarted successfully');
   } catch (restartErr) {
     const msg = restartErr instanceof Error ? restartErr.message : String(restartErr);
     log('core restart failed: %s', msg);
+    dispatch(
+      upsertChannelConnection({
+        channel: 'telegram',
+        authMode: spec.mode,
+        patch: {
+          status: 'error',
+          lastError: 'Channel saved. Restart the app to activate it.',
+        },
+      })
+    );
     setError('Channel saved. Restart the app to activate it.');
   }
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🧹 Nitpick comments (1)
src/core/jsonrpc.rs (1)
771-789: Consider storing the task handle for graceful shutdown coordination.

The spawned task runs `start_channels`, which contains an event loop that runs indefinitely. When `axum::serve` receives a shutdown signal (line 795), this background task isn't explicitly cancelled or awaited.

If graceful shutdown of channel listeners is desired (e.g., allowing in-flight messages to complete), you may want to store this `JoinHandle` and coordinate shutdown. However, if the current behavior (OS-level termination of the spawned task) is acceptable, this is fine as-is.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/core/jsonrpc.rs` around lines 771 - 789, The spawned background task created with tokio::spawn that calls crate::openhuman::channels::start_channels(config) is never stored or coordinated with the server shutdown, so it runs indefinitely; change the code to capture and store the JoinHandle returned by tokio::spawn (e.g., let channels_handle = tokio::spawn(...)) and wire a graceful shutdown path: create a cancellation mechanism (CancellationToken or a oneshot) that you pass into start_channels or signal the task to stop, and then on axum::Server shutdown await channels_handle.await (or abort if needed) to ensure listeners stop or finish in-flight work; update start_channels signature if necessary to accept the shutdown signal so shutdown is coordinated instead of leaving the task unmanaged.
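The coordination pattern the prompt describes can be illustrated with std threads so the sketch stays runnable without a tokio runtime; the async version would swap in `tokio::spawn`, a cancellation token (e.g. tokio-util's `CancellationToken`), and `handle.await`.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

/// Stand-in for the spawned `start_channels` event loop: polls until the
/// shutdown flag is set, then finishes in-flight work and returns.
fn spawn_listener(shutdown: Arc<AtomicBool>) -> thread::JoinHandle<u32> {
    thread::spawn(move || {
        let mut polls = 0u32;
        while !shutdown.load(Ordering::Relaxed) {
            polls += 1; // poll channels, handle messages...
            thread::sleep(Duration::from_millis(1));
        }
        polls
    })
}

fn main() {
    let shutdown = Arc::new(AtomicBool::new(false));
    let handle = spawn_listener(Arc::clone(&shutdown));
    thread::sleep(Duration::from_millis(10));

    // On server shutdown: signal the task, then await it instead of leaving
    // it unmanaged for OS-level termination.
    shutdown.store(true, Ordering::Relaxed);
    let polls = handle.join().expect("listener panicked");
    assert!(polls > 0);
}
```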
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9e95c505-b76f-4524-a396-4075238a5f7d
📒 Files selected for processing (4)
- app/src-tauri/src/core_process.rs
- src/core/jsonrpc.rs
- src/openhuman/channels/runtime/startup.rs
- src/openhuman/channels/runtime/supervision.rs
✅ Files skipped from review due to trivial changes (1)
- src/openhuman/channels/runtime/supervision.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- app/src-tauri/src/core_process.rs
…onnection updates are dispatched regardless of restart requirement. Improved error handling during core process restart and enhanced logging for connection status.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/src/components/channels/DiscordConfig.tsx`:
- Around line 141-145: The restart-failure catch currently only logs and calls
setError, leaving the Redux connection state stuck at "connecting"; update the
catch to also transition the channel's connection status to "error" by
dispatching the appropriate Redux action (e.g.,
dispatch(updateChannelConnectionStatus(channelId, 'error')) or
dispatch(setChannelStatus(channelId, 'error'))) and, if applicable, clear any
local connecting flag (e.g., setConnecting(false)) so UI badges and state
reflect the failure; keep the existing log(...) and setError(...) behavior.
In `@src/openhuman/channels/providers/telegram/channel.rs`:
- Around line 1858-1873: The code currently unconditionally calls
delete_webhook_for_long_polling() when a 409 webhook conflict is detected in the
getUpdates handling (the webhook_blocks_polling branch); change this to first
call Telegram's getWebhookInfo (or otherwise check webhook ownership) and
compare the returned webhook URL to our expected webhook URL, and only call
delete_webhook_for_long_polling() if the URL matches or if an explicit opt-in
flag (e.g. allow_delete_webhook or opt_out_of_shared_tokens = false) is set in
configuration; if the webhook does not match and no opt-in flag is set, log a
clear warning and back off without deleting. Ensure changes touch the
getUpdates/409 handling, the webhook_blocks_polling logic, and the
delete_webhook_for_long_polling() invocation so the decision flows from
getWebhookInfo or the opt-in flag.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 79c0a860-faf5-480a-add6-0b86ab23adb3
📒 Files selected for processing (3)
- app/src/components/channels/DiscordConfig.tsx
- app/src/components/channels/TelegramConfig.tsx
- src/openhuman/channels/providers/telegram/channel.rs
✅ Files skipped from review due to trivial changes (1)
- app/src/components/channels/TelegramConfig.tsx
```tsx
} catch (restartErr) {
  const msg = restartErr instanceof Error ? restartErr.message : String(restartErr);
  log('core restart failed: %s', msg);
  setError('Channel saved. Restart the app to activate it.');
}
```
Update channel state on restart failure (currently left as `connecting`).

After Line 64 sets `connecting`, the restart-failure catch only sets a local UI error. The Redux connection status is not transitioned to `error`, so the badge/state can stay stale.
Suggested fix

```diff
 } catch (restartErr) {
   const msg = restartErr instanceof Error ? restartErr.message : String(restartErr);
   log('core restart failed: %s', msg);
+  dispatch(
+    setChannelConnectionStatus({
+      channel: 'discord',
+      authMode: spec.mode,
+      status: 'error',
+      lastError: 'Channel saved. Restart the app to activate it.',
+    })
+  );
   setError('Channel saved. Restart the app to activate it.');
 }
```

📝 Committable suggestion
```rust
let webhook_blocks_polling = description.to_lowercase().contains("webhook");
if webhook_blocks_polling {
    tracing::warn!(
        "[telegram] getUpdates conflict (409): webhook is active; calling deleteWebhook"
    );
    if self.delete_webhook_for_long_polling().await {
        tracing::info!("[telegram] deleteWebhook ok; retrying getUpdates");
        continue;
    }
    tracing::warn!("[telegram] deleteWebhook did not succeed; backing off");
} else {
    tracing::warn!(
        "Telegram polling conflict (409): {description}. \
         Ensure only one `openhuman` process is using this bot token."
    );
}
```
🧩 Analysis chain
🌐 Web query:
In the Telegram Bot API, when `getUpdates` returns HTTP 409 because a webhook is active, does calling `deleteWebhook` always remove the currently configured webhook for that bot token, even if it belongs to another deployment? What official guidance is given for safely switching between webhook mode and long polling?
💡 Result:
Yes. In Telegram’s Bot API, the webhook configuration is global per bot token (Telegram stores one “current webhook URL” for that bot). So if getUpdates fails with HTTP 409 Conflict because a webhook is set, calling deleteWebhook removes whatever webhook is currently configured for that token—even if it was set by a different deployment/environment using the same token. This follows from the API’s model that you “will not be able to receive updates using getUpdates for as long as an outgoing webhook is set up,” and deleteWebhook is the method to remove that webhook integration. [1]
Official guidance for safely switching modes (webhook ↔ long polling):
- Webhook → long polling: Call `deleteWebhook` (optionally `drop_pending_updates=true` to discard queued updates), then start using `getUpdates`. Telegram explicitly notes `getUpdates` won't work while a webhook is set, and advises using an `offset` strategy to avoid duplicate updates. [1][2]
- Long polling → webhook: Call `setWebhook` to set the webhook URL (optionally `drop_pending_updates=true` if you want to drop backlog). Telegram explicitly notes that while a webhook is set, `getUpdates` is unavailable. [1][2]
Practical “safe switch” checks implied by the official methods:
- Use `getWebhookInfo` to confirm the `url` is empty before relying on long polling (it returns an empty `url` when using `getUpdates`). [1]
Sources: Telegram Bot API docs + FAQ. [1][2]
References:
[1] (core.telegram.org) (Telegram Bot API: getUpdates/setWebhook notes; deleteWebhook, getWebhookInfo, drop_pending_updates)
[2] (core.telegram.org) (Telegram Bots FAQ: long polling vs webhook exclusivity; offset guidance)
Citations:
- 1: https://core.telegram.org/bots/api
- 2: https://core.telegram.org/bots/faq
Don't unconditionally delete an active webhook on 409 conflict.
When `getUpdates` returns HTTP 409 because a webhook is active, calling `deleteWebhook` will remove whatever webhook is currently configured for that bot token, even if it belongs to another deployment. Since Telegram's webhook configuration is global per token (one webhook URL per token), this code can silently take over the bot token and knock another deployment offline.

Check webhook ownership before deleting. Either gate this behind an explicit opt-in flag, or call `getWebhookInfo` to validate that the webhook URL matches the expected value before calling `deleteWebhook`.
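That gated decision might look like the following sketch. The inputs stand in for a `getWebhookInfo` response and local configuration, and the `allow_delete_webhook` flag name is hypothetical.

```rust
/// Decide whether it is safe to call deleteWebhook before long polling.
/// `current_url` is the URL reported by getWebhookInfo; `expected_url` is the
/// webhook we would have set ourselves (None if we never use webhooks).
fn should_delete_webhook(
    current_url: &str,
    expected_url: Option<&str>,
    allow_delete_webhook: bool,
) -> bool {
    if allow_delete_webhook {
        return true; // explicit opt-in: operator accepts taking over the token
    }
    // Otherwise only delete a webhook we set ourselves.
    matches!(expected_url, Some(ours) if ours == current_url)
}

fn main() {
    // Our own stale webhook: safe to clear before long polling.
    assert!(should_delete_webhook("https://us.example/hook", Some("https://us.example/hook"), false));
    // Someone else's deployment: back off and warn instead of deleting.
    assert!(!should_delete_webhook("https://them.example/hook", Some("https://us.example/hook"), false));
    // Opt-in flag overrides the ownership check.
    assert!(should_delete_webhook("https://them.example/hook", None, true));
}
```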
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/channels/providers/telegram/channel.rs` around lines 1858 -
1873, The code currently unconditionally calls delete_webhook_for_long_polling()
when a 409 webhook conflict is detected in the getUpdates handling (the
webhook_blocks_polling branch); change this to first call Telegram's
getWebhookInfo (or otherwise check webhook ownership) and compare the returned
webhook URL to our expected webhook URL, and only call
delete_webhook_for_long_polling() if the URL matches or if an explicit opt-in
flag (e.g. allow_delete_webhook or opt_out_of_shared_tokens = false) is set in
configuration; if the webhook does not match and no opt-in flag is set, log a
clear warning and back off without deleting. Ensure changes touch the
getUpdates/409 handling, the webhook_blocks_polling logic, and the
delete_webhook_for_long_polling() invocation so the decision flows from
getWebhookInfo or the opt-in flag.
Summary
- … `OPENHUMAN_DISABLE_CHANNEL_LISTENERS=1` for tests.
- … `TelegramConfig` trigger `restartCoreProcess()` when the core returns `restart_required: true`, so connecting a bot does not require a manual app quit.
- … `reply_to_message_id`, typing indicators, update deduplication, `edited_message`/`message_reaction` handling, case-insensitive allowlist, and conversation history keyed by chat where appropriate.
- … `[REACTION:emoji]` markers and inbound reaction events on the event bus (with allowlist checks).
- … `deleteWebhook` and retry `getUpdates` so long polling can run after a webhook was previously set.

Problem
Telegram channels were easy to misconfigure in practice: listeners were not always started with the server, the UI told users to restart manually instead of orchestrating a core restart, replies did not attach to the triggering message, duplicate `update_id`s could double-fire the agent, and bots still using webhooks could not long-poll until the webhook was cleared. Several smaller gaps (typing visibility, reactions, history scoping, allowlist casing) made the channel feel unfinished.

Solution
- `run_server_inner` (src/core/jsonrpc.rs) spawns `start_channels(config)` when `ChannelsConfig::has_listening_integrations()` is true, unless `OPENHUMAN_DISABLE_CHANNEL_LISTENERS` is set.
- … `restartCoreProcess()` after successful connect when `restart_required` is returned.
- … `message_id` to `thread_ts` for outbound `reply_to_message_id`; adds `TelegramTypingTask`, `TelegramUpdateWindow`, reaction parsing/sending, and `delete_webhook_for_long_polling` on 409 webhook conflicts.
- … `has_listening_integrations()` and related schema/helpers; connect path persists Telegram settings where applicable.
- … `finalize_draft` thread context and optional `supports_reactions()` default; core update logging tightened in core_update.rs.

Submission Checklist
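The update-deduplication idea behind `TelegramUpdateWindow` can be sketched as a bounded seen-set. The type name comes from this PR, but the internals and eviction policy below are assumptions for illustration only.

```rust
use std::collections::HashSet;

/// Remembers recently seen `update_id`s so a redelivered Telegram update
/// cannot double-fire the agent. Evicts oldest entries beyond `capacity`.
struct UpdateWindow {
    seen: HashSet<i64>,
    order: Vec<i64>,
    capacity: usize,
}

impl UpdateWindow {
    fn new(capacity: usize) -> Self {
        Self { seen: HashSet::new(), order: Vec::new(), capacity }
    }

    /// Returns true if this update is new and should be processed.
    fn accept(&mut self, update_id: i64) -> bool {
        if !self.seen.insert(update_id) {
            return false; // duplicate delivery: drop it
        }
        self.order.push(update_id);
        if self.order.len() > self.capacity {
            let oldest = self.order.remove(0);
            self.seen.remove(&oldest);
        }
        true
    }
}

fn main() {
    let mut window = UpdateWindow::new(128);
    assert!(window.accept(1001));
    assert!(window.accept(1002));
    assert!(!window.accept(1001)); // redelivered update is ignored
}
```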
Impact
Related
Summary by CodeRabbit
New Features
Improvements
Bug Fixes