Conversation
- Append iSparto collaboration mode to CLAUDE.md (roles, triggers, branching, guardrails)
- Create .claude/settings.json with agent teams env and tmux teammate mode
- Generate plan.md reflecting current project state (v1.5.0, clean backlog)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…n (v1.6.0)
- New realtime mode: forward tracked messages instantly on arrival
- 7-layer rate protection suite (rate_limiter.py): sliding window, jittered delay, media throttle, hourly/daily caps, exponential backoff, circuit breaker + Bark alert, startup warmup — all configurable with conservative defaults
- GUI support: mode toggle with experimental warning, confirmation dialog with risk disclosure, persistent warning banner, rate protection config fields
- Config: new [realtime] section with push_mode and 7 rate protection parameters
- Startup message to control chat with active rate limit summary
- 51 new unit tests for rate limiter and config validation (144 total)
- Documentation updated in 4 languages (EN, zh-Hans, zh-Hant, ja)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ence (v1.6.1)
- Enable PRAGMA journal_mode=WAL and busy_timeout=5000 on app DB (storage.py)
- Custom _WalSqliteSession subclass injects WAL + busy_timeout into Telethon session DB
- Add sqlite3.OperationalError retry (3 attempts, 1s delay) in daemon reconnect loop
- Doctor command warns when session/db files are in cloud-sync directories (Dropbox, iCloud, OneDrive, Google Drive)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
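A minimal standalone sketch of the two pragmas described above (plain sqlite3; the function name is illustrative, not the project's storage.py or Telethon session wiring):

```python
import os
import sqlite3
import tempfile


def open_wal_connection(path: str, busy_timeout_ms: int = 5000) -> sqlite3.Connection:
    """Open a SQLite connection with WAL journaling and a busy timeout.

    WAL lets a writer proceed alongside readers, and busy_timeout makes a
    locked database wait (up to the timeout) instead of raising immediately.
    """
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA journal_mode=WAL")
    conn.execute(f"PRAGMA busy_timeout={busy_timeout_ms}")
    return conn


path = os.path.join(tempfile.mkdtemp(), "app.db")
conn = open_wal_connection(path)
mode = conn.execute("PRAGMA journal_mode").fetchone()[0]
timeout = conn.execute("PRAGMA busy_timeout").fetchone()[0]
print(mode, timeout)  # wal 5000
```

Note that journal_mode=WAL requires a file-backed database; an in-memory database would report "memory" instead.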
- Detect Dropbox/iCloud/OneDrive/Google Drive paths on config load
- Show amber warning banner at top of GUI when data files are in sync directory
- Informational only, does not block usage

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add to Key Features bullet list with experimental label
- Add realtime config rows to configuration table
- Add "Realtime push mode" subsection under Configuration with GUI-first setup steps and 7-layer rate protection summary

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Translate all user-facing text in the web GUI to Simplified Chinese, including page titles, labels, buttons, tooltips, status messages, validation errors, and timezone/time-format presets. Cache-busting query strings added to static asset URLs. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add a client-side i18n system that automatically detects browser language via navigator.language and switches between English and Chinese (zh-CN). All 150+ user-facing strings are now translatable through a centralized translation table in JavaScript, keeping the Python backend language-neutral.
- English and Chinese translation tables with t()/tf() helper functions
- Timezone preset and time format unit label translation via tLabel()
- Dynamic HTML lang attribute and page title based on detected language
- All 20 GUI-related tests pass

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Replace Chinese quotes "" with brackets 「」 to avoid breaking the JavaScript string literal. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
A stray Unicode left/right double quotation mark (U+201C/U+201D) broke the JavaScript template literal at runtime. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Append ?v=2 to app.js and app.css references in HTML to ensure browsers load the updated i18n version instead of cached copies. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- After the first warning, register a filter to ignore subsequent identical UserWarning messages so GUI status polling doesn't flood the log. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
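The one-shot warning filter can be sketched with the standard warnings module; the function name and warning text here are illustrative, not the project's actual code:

```python
import warnings


def warn_once(message: str) -> None:
    """Emit a UserWarning, then suppress identical repeats.

    After the first emission, an "ignore" filter is registered so the
    same message warned again (e.g. by GUI status polling) is dropped.
    """
    warnings.warn(message, UserWarning)
    warnings.filterwarnings("ignore", message=message, category=UserWarning)


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")          # start from a clean filter list
    warn_once("data dir is cloud-synced")    # recorded
    warn_once("data dir is cloud-synced")    # matched by ignore filter, dropped
print(len(caught))  # 1
```

filterwarnings prepends its filter, so the ignore rule takes precedence over the earlier "always" rule; the message argument is treated as a regex matched against the start of the warning text.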
…ferences
- Merge 1.6.0 + 1.6.1 changelog entries into single 1.6.0 release (all 4 languages)
- Add cloud-sync warning fix and GUI warning suppression to 1.6.0 notes
- Update README version references from v1.5.0 to v1.6.0 (all 4 languages)
- Set pyproject.toml version to 1.6.0

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add GUI internationalization entry to 1.6.0 changelog (4 languages)
- Update README Key Features to mention auto language detection (4 languages)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add 34 new i18n keys (17 en + 17 zh) for realtime push section, cloud-sync banner, risk confirmation dialog, and rate limit fields
- Replace all hardcoded English strings with t()/tf() calls

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@codex review
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: b34c075e29
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```python
await _send_message_with_fallback(
    self.client,
    self._fallback_client,
    control.control_chat_id,
    text,
```
Surface FloodWaits to rate-protection handlers
Realtime push currently wraps outbound sends with _send_message_with_fallback/_send_media_for_message, which internally use _with_floodwait and sleep-retry on FloodWaitError instead of re-raising. That means the except errors.FloodWaitError branch in _push_message never executes, so record_flood_wait() is not called and the exponential backoff/circuit-breaker state is never updated when Telegram starts rate-limiting sends.
```python
    reply_to=reply_to,
    fallback_client=self._fallback_client,
)
self.rate_protection.record_send()
```
Apply rate accounting per outbound realtime message
acquire() is called once and record_send() is recorded once for the whole forwarding operation, but that operation can emit multiple Telegram messages (formatted text plus each media attachment). In media-heavy traffic this undercounts actual sends, so minute/hour/day windows and inter-send pacing are effectively bypassed and configured limits can be exceeded even when realtime protection is enabled.
…idually
- Add on_flood_wait callback to _with_floodwait so the realtime pusher's circuit breaker and exponential backoff are updated even when FloodWait is handled internally by sleep-retry.
- Count each media attachment as a separate send in rate accounting, with its own acquire() + record_send() cycle, so configured per-minute/hour/day limits are not silently exceeded by media-heavy messages.
- Thread on_flood_wait through _send_with_backoff, _send_file_with_backoff, and their _with_fallback wrappers (optional param, no change for existing non-realtime callers).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
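A rough sketch of the callback threading this commit describes, with a stand-in FloodWaitError and simplified names (the real code wraps Telethon sends and lives in _with_floodwait):

```python
import asyncio
from typing import Awaitable, Callable, Optional


class FloodWaitError(Exception):
    """Stand-in for telethon.errors.FloodWaitError."""

    def __init__(self, seconds: int):
        self.seconds = seconds


async def with_floodwait(
    send: Callable[[], Awaitable[object]],
    on_flood_wait: Optional[Callable[[int], None]] = None,
    max_attempts: int = 3,
) -> object:
    # Sleep-retry on FloodWait, but also notify the caller so rate
    # protection (backoff / circuit breaker) observes the event even
    # though it is handled internally rather than re-raised.
    for _ in range(max_attempts):
        try:
            return await send()
        except FloodWaitError as exc:
            if on_flood_wait is not None:
                on_flood_wait(exc.seconds)
            await asyncio.sleep(exc.seconds + 1)
    return await send()  # last attempt: let any error propagate


events: list[int] = []
calls = {"n": 0}


async def flaky_send():
    calls["n"] += 1
    if calls["n"] == 1:
        raise FloodWaitError(0)  # first attempt hits a FloodWait
    return "ok"


result = asyncio.run(with_floodwait(flaky_send, on_flood_wait=events.append))
print(result, events)  # ok [0]
```

The optional-parameter shape matches the commit's intent: existing callers that pass no callback see no behavior change.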
@codex review
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: cf29dc6056
```python
wait_for = exc.seconds + 1
logger.warning("FloodWait: sleeping for %ss", wait_for)
if on_flood_wait is not None:
    on_flood_wait(exc.seconds)
await asyncio.sleep(wait_for)
```
Honor computed backoff in FloodWait retry loop
The FloodWait retry path always sleeps exc.seconds + 1 and immediately retries, even though on_flood_wait updates exponential backoff and can trip the circuit breaker; because the adjusted wait is never used here, repeated FloodWaits still run at Telegram’s minimum delay and the breaker does not actually stop in-flight retries. In high-volume realtime mode this defeats the new rate-protection guarantees and can continue hammering send APIs after multiple FloodWait events.
```python
except Exception:
    logger.exception("Realtime pusher error; will continue.")
```
Requeue failed realtime pushes instead of dropping them
When _push_message raises any non-FloodWaitError exception (for example transient network/RPC failures), the outer loop only logs and continues, but the dequeued message is not reinserted. That makes realtime forwarding lossy under temporary send failures, and with skip_html_report=true those messages may never reach the control chat at all.
…me pushes
- _with_floodwait now uses the backoff-adjusted wait time (returned by on_flood_wait callback) when it exceeds Telegram's minimum, so repeated FloodWaits trigger progressively longer sleeps in realtime mode.
- record_flood_wait() now returns the adjusted wait value from compute_wait().
- Realtime pusher retries non-FloodWait failures up to 3 times before dropping a message, preventing silent message loss on transient errors.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
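The backoff-adjusted wait could look roughly like this; the class and numeric defaults are illustrative, and only the record_flood_wait()/compute_wait() naming follows the commit:

```python
class ExponentialBackoff:
    """Doubles the suggested wait on each FloodWait, capped at max_wait."""

    def __init__(self, base: float = 2.0, max_wait: float = 300.0):
        self.base = base
        self.max_wait = max_wait
        self.failures = 0

    def record_flood_wait(self, telegram_seconds: int) -> float:
        """Register a FloodWait and return the adjusted wait to use."""
        self.failures += 1
        return self.compute_wait(telegram_seconds)

    def compute_wait(self, telegram_seconds: int) -> float:
        backoff = min(self.base * (2 ** (self.failures - 1)), self.max_wait)
        # Honor whichever is longer: Telegram's minimum or our backoff,
        # so repeated FloodWaits sleep progressively longer.
        return max(telegram_seconds + 1.0, backoff)


bo = ExponentialBackoff()
print(bo.record_flood_wait(5))  # 6.0 (backoff 2.0 < Telegram minimum 6)
print(bo.record_flood_wait(5))  # 6.0 (backoff 4.0 still below minimum)
print(bo.record_flood_wait(5))  # 8.0 (backoff 8.0 now exceeds minimum)
```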
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 438bb1d167
```python
attempts = retry_counts.get(db_msg.message_id, 0) + 1
if attempts < self._MAX_RETRIES:
    retry_counts[db_msg.message_id] = attempts
```
Key retries by chat+message to avoid cross-chat collisions
The retry bookkeeping is keyed only by db_msg.message_id, but Telegram message IDs are scoped per chat, not globally. In realtime mode with multiple targets, two different chats can both have message_id=123; a failure in one chat will increment/pop the same retry counter used by the other chat, causing premature drops or skipped retries. Use a composite key like (db_msg.chat_id, db_msg.message_id) for retry_counts.
```python
sqlite_retries = 0
while True:
    try:
        await client.run_until_disconnected()
        return  # graceful disconnect
```
Reset SQLite retry counter after recovered failures
sqlite_retries is initialized once outside the reconnect loop and is only reset on network exceptions, so transient sqlite3.OperationalError events accumulate across the whole daemon lifetime. This means the 4th sporadic lock error (even hours/days apart) will hit the fatal > _SQLITE_MAX_RETRIES path despite not being a persistent outage. The counter should be reset after a successful recovery cycle instead of accumulating indefinitely.
…poradic errors
- Key realtime retry_counts by (chat_id, message_id) instead of bare message_id, preventing cross-chat collision when multiple targets share the same message_id.
- Reset SQLite OperationalError retry counter when errors are spaced more than 30s apart, so sporadic lock errors spread over hours/days no longer accumulate to a fatal exit.
- Add regression tests for both SQLite burst/sporadic retry behavior.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
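The sporadic-error reset can be sketched as a small counter; the 30-second window matches the commit, everything else (class, method names, injectable clock) is illustrative. The same commit's other fix is simply keying retry_counts by the composite (chat_id, message_id) tuple:

```python
import time


class SporadicRetryCounter:
    """Counts consecutive errors, resetting when they are spaced out.

    Errors more than `window` seconds apart are treated as unrelated,
    so sporadic failures never accumulate toward a fatal retry limit.
    """

    def __init__(self, window: float = 30.0, clock=time.monotonic):
        self.window = window
        self.clock = clock  # injectable for testing
        self.count = 0
        self.last_error_at = None

    def record_error(self) -> int:
        now = self.clock()
        if self.last_error_at is not None and now - self.last_error_at > self.window:
            self.count = 0  # sporadic: the previous burst is over
        self.count += 1
        self.last_error_at = now
        return self.count


fake_now = [0.0]
c = SporadicRetryCounter(window=30.0, clock=lambda: fake_now[0])
print(c.record_error())  # 1
fake_now[0] = 10.0
print(c.record_error())  # 2 (same burst: 10s apart)
fake_now[0] = 100.0
print(c.record_error())  # 1 (sporadic: 90s apart, counter reset)
```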
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d352c33900
```python
    return
now = time.monotonic()
jitter = random.uniform(-1.0, 1.0)
required_gap = max(0.0, self._min_interval + jitter)
```
Enforce min_interval even when jitter is negative
The jitter calculation can reduce the effective gap below min_interval_sec (e.g., min_interval_sec=3 and jitter -1 yields a 2-second gap), which contradicts the config contract and weakens the realtime flood-protection guarantees under bursty traffic. This makes sends happen faster than the operator explicitly configured, so the delay should be clamped to never go below self._min_interval.
```python
def _purge(self, now: float) -> None:
    cutoff = now - self.span_seconds
    while self.timestamps and self.timestamps[0] < cutoff:
```
Purge window entries at the exact cutoff boundary
Using < cutoff keeps timestamps that are exactly span_seconds old, but seconds_until_free() simultaneously returns 0, so acquire() can proceed and record another send while the old event is still counted in-window. At rate-limit boundaries this allows a one-message overshoot and inconsistent counter state; boundary entries should be removed when they are exactly at cutoff.
…utoff
- JitteredDelay: clamp required_gap lower bound to min_interval so negative jitter never reduces the inter-send gap below the configured minimum.
- _Window._purge: use <= instead of < so timestamps exactly at the cutoff boundary are removed, preventing a one-message overshoot at rate-limit edges.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
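Both fixes are small clamping changes; an illustrative sketch (names loosely follow the diff excerpts quoted in the review, but this is not the project's rate_limiter.py):

```python
import random
from collections import deque


def required_gap(min_interval: float, jitter_range: float = 1.0) -> float:
    """Jitter only ever extends the gap; it never drops below min_interval."""
    jitter = random.uniform(-jitter_range, jitter_range)
    return max(min_interval, min_interval + jitter)


class Window:
    """Sliding-window send counter over `span_seconds`."""

    def __init__(self, span_seconds: float):
        self.span_seconds = span_seconds
        self.timestamps: deque = deque()

    def _purge(self, now: float) -> None:
        cutoff = now - self.span_seconds
        # <= so an entry exactly span_seconds old is dropped, consistent
        # with seconds_until_free() reporting the slot as free.
        while self.timestamps and self.timestamps[0] <= cutoff:
            self.timestamps.popleft()


w = Window(span_seconds=60.0)
w.timestamps.extend([0.0, 30.0])
w._purge(now=60.0)  # entry at t=0 is exactly 60s old -> purged
print(list(w.timestamps))        # [30.0]
print(required_gap(3.0) >= 3.0)  # True
```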
…t paths
- Align on_flood_wait parameter type to Callable[[int], float] across _send_with_backoff, _send_file_with_backoff, and fallback wrappers.
- Use backoff-adjusted wait time in _push_message FloodWait except branch, consistent with _with_floodwait behavior.
- Add retry_counts size cap (1000) to prevent unbounded dict growth in long-running daemon.
- Fix GUI cloud sync banner version reference: v1.6.1 → v1.6.0.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
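The retry_counts size cap might be sketched as an insertion-ordered dict that evicts its oldest entry; the cap value matches the commit, the class itself is illustrative:

```python
from collections import OrderedDict


class CappedCounts(OrderedDict):
    """Retry-count map that evicts its oldest entry past `max_size`.

    Keeps memory bounded in a long-running daemon, at the cost of
    forgetting retry state for the least-recently-added messages.
    """

    def __init__(self, max_size: int = 1000):
        super().__init__()
        self.max_size = max_size

    def __setitem__(self, key, value):
        super().__setitem__(key, value)
        if len(self) > self.max_size:
            self.popitem(last=False)  # drop the oldest insertion


counts = CappedCounts(max_size=2)
counts[("chat_a", 1)] = 1      # composite (chat_id, message_id) keys
counts[("chat_b", 1)] = 1
counts[("chat_c", 7)] = 2      # evicts ("chat_a", 1)
print(list(counts.keys()))  # [('chat_b', 1), ('chat_c', 7)]
```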
Summary

- doctor and GUI warn when data resides in cloud-synced directories

Changelog

See CHANGELOG.md — v1.6.0 (2026-03-26)

Test plan

- pytest tests/ — all tests pass (including rate limiter suite)

🤖 Generated with Claude Code