perf(daemon): bridge tracing events to /ws/logs (partial #66) #111
Adds `BroadcastLogLayer`, a `tracing-subscriber` layer that forwards events to the existing `BroadcastHub::log_tx` channel so WebSocket clients subscribed to `/ws/logs` see the same events the daemon writes to stderr. The native ESP32 `write-flash` path in `fbuild-deploy::esp32_native` already emits per-region and 10%-throttled progress via `tracing::info!()`; this layer surfaces that progress on the WebSocket stream without any new API surface, closing the follow-up noted in the #66 comments. A hedged sketch of the layer follows the design notes.

Design notes:

- The layer early-outs on `receiver_count() == 0`, so the common "no `/ws/logs` clients attached" case pays a single atomic load per event — no JSON serialization, no allocation.
- Events from `handlers::websockets` are dropped by a module filter to avoid feeding client connect/disconnect notices back onto the channel they announce.
- `BroadcastHub` is now built in `main.rs` before tracing init and passed into `DaemonContext::with_hub(...)` so the layer can be registered against `log_tx` before the first `tracing::info!`. The original `DaemonContext::new(...)` is retained as a thin wrapper for tests.
- The payload shape matches the welcome frame `/ws/logs` already emits (`{"type":"log","level":..,"message":..,"timestamp":..,"module":..}`), so clients parse every line with one schema.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
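For concreteness, here is a minimal sketch of such a layer. The `receiver_count()` early-out, the `handlers::websockets` filter, and the payload fields come from the notes above; the visitor, the `String`-typed broadcast channel, and the level/timestamp rendering are assumptions, not the PR's exact code:

```rust
use tracing::{Event, Subscriber};
use tracing_subscriber::layer::{Context, Layer};

pub struct BroadcastLogLayer {
    pub log_tx: tokio::sync::broadcast::Sender<String>,
}

impl<S: Subscriber> Layer<S> for BroadcastLogLayer {
    fn on_event(&self, event: &Event<'_>, _ctx: Context<'_, S>) {
        // Early-out: a single atomic load when no /ws/logs client is attached.
        if self.log_tx.receiver_count() == 0 {
            return;
        }
        let meta = event.metadata();
        // Drop the WebSocket handler's own events so connect/disconnect
        // notices don't feed back onto the channel they announce.
        if meta.target().starts_with("fbuild_daemon::handlers::websockets") {
            return;
        }
        // Render the event's `message` field into a string.
        let mut message = String::new();
        event.record(&mut MessageVisitor(&mut message));
        let frame = serde_json::json!({
            "type": "log",
            "level": meta.level().to_string(),
            "message": message,
            "timestamp": chrono::Utc::now().to_rfc3339(),
            "module": meta.target(),
        });
        // A send error only means every receiver vanished since the count
        // check above; dropping the frame is fine.
        let _ = self.log_tx.send(frame.to_string());
    }
}

// Minimal field visitor that keeps only the `message` field.
struct MessageVisitor<'a>(&'a mut String);

impl tracing::field::Visit for MessageVisitor<'_> {
    fn record_debug(&mut self, field: &tracing::field::Field, value: &dyn std::fmt::Debug) {
        if field.name() == "message" {
            use std::fmt::Write;
            let _ = write!(self.0, "{value:?}");
        }
    }
}
```

Note that the early-out runs before any metadata inspection or allocation, which is what makes the "single atomic load per event" claim hold in the no-subscriber case.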
## Walkthrough

The changes introduce a broadcast logging layer that intercepts tracing events and publishes them to a shared channel, enabling external components to subscribe to log messages.
## Sequence Diagram

```mermaid
sequenceDiagram
    participant App as Application
    participant TS as Tracing System
    participant BLL as BroadcastLogLayer
    participant Hub as BroadcastHub
    participant WS as WebSocket Handler
    App->>TS: Emit tracing event
    TS->>BLL: Invoke on_event()
    alt Receivers exist
        BLL->>BLL: Render message + fields
        BLL->>BLL: Serialize to JSON
        BLL->>Hub: Broadcast message
        Hub->>WS: Send to subscribers
    else No receivers
        BLL->>BLL: Skip (early return)
    end
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Two orthogonal micro-optimizations that shave the remaining server-side cost off the session-trusted verify-skip path (#116). Targets the best-case warm-deploy budget of < 1 s end-to-end (see the #114 comment):

## `ImageHashMemo` on `DaemonContext`

Warm redeploys re-read bootloader + partitions + firmware (~2–4 MB) and re-hash them on every call, costing ~5–15 ms. The memo keys on the firmware path + the three files' `mtime` tuple — when all three match, it reuses the stored hash instead of touching disk. It self-invalidates when any file's `mtime` advances (i.e. the next build produced new output). A sketch follows after this section.

## `DeviceManager::refresh_devices_if_stale`

`refresh_devices()` costs ~20–30 ms on Windows (OS port enumeration). The trust-hash path called it on every deploy to keep `last_disconnect_at` up to date, but a 2 s freshness window is plenty to catch a physical unplug/replug between two warm deploys. The new `refresh_devices_if_stale(Duration)` short-circuits inside the window (also sketched below).

## Impact on the best-case arithmetic

Server-side cost on a warm trust-skip deploy drops from ~50 ms (refresh + SHA-256 + lookup + early return) to ~1–2 ms (DashMap probe + a metadata stat for the 3 files). Combined with the #116 trust-skip early return and the #111 `/ws/logs` stream, that puts the < 1 s best-case target (#114 comment) squarely in reach once the in-memory build fingerprint (#91 follow-up) lands.

## Tests

5 new unit tests:

- `device_manager::refresh_devices_if_stale_skips_inside_window`
- `device_manager::refresh_devices_if_stale_reruns_when_expired`
- `image_hash_memo_tests::memo_hit_reuses_hash`
- `image_hash_memo_tests::memo_miss_on_firmware_mtime_change`
- `image_hash_memo_tests::memo_skipped_when_inputs_missing`

All 121 `fbuild-daemon` lib tests pass; clippy `-D warnings` clean.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
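A minimal sketch of what such a memo could look like, assuming a `DashMap` keyed by firmware path (the impact note above mentions a DashMap probe); the method shape and the `compute` closure are illustrative, not the PR's exact API:

```rust
use std::path::{Path, PathBuf};
use std::time::SystemTime;

use dashmap::DashMap;

#[derive(PartialEq, Eq)]
struct MtimeKey([SystemTime; 3]);

#[derive(Default)]
pub struct ImageHashMemo {
    // firmware path -> (bootloader/partitions/firmware mtimes, cached hash)
    entries: DashMap<PathBuf, (MtimeKey, String)>,
}

impl ImageHashMemo {
    /// Returns the cached hash when all three mtimes still match; otherwise
    /// recomputes via `compute` and stores the result. A missing file
    /// (metadata error) skips the memo entirely.
    pub fn get_or_compute(
        &self,
        firmware: &Path,
        bootloader: &Path,
        partitions: &Path,
        compute: impl FnOnce() -> std::io::Result<String>,
    ) -> std::io::Result<String> {
        let key = match (mtime(bootloader), mtime(partitions), mtime(firmware)) {
            (Ok(a), Ok(b), Ok(c)) => MtimeKey([a, b, c]),
            _ => return compute(), // inputs missing: don't cache
        };
        if let Some(entry) = self.entries.get(firmware) {
            let (cached_key, hash) = entry.value();
            if *cached_key == key {
                return Ok(hash.clone()); // memo hit: no reads, no hashing
            }
        }
        let hash = compute()?;
        self.entries.insert(firmware.to_path_buf(), (key, hash.clone()));
        Ok(hash)
    }
}

fn mtime(p: &Path) -> std::io::Result<SystemTime> {
    std::fs::metadata(p)?.modified()
}
```

The warm-path cost is exactly the three `metadata` stats plus one map probe, which matches the ~1–2 ms figure above.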
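And a sketch of the staleness gate, assuming the manager records the `Instant` of its last successful enumeration; everything except the `refresh_devices_if_stale(Duration)` signature is illustrative:

```rust
use std::sync::Mutex;
use std::time::{Duration, Instant};

pub struct DeviceManager {
    last_refresh: Mutex<Option<Instant>>,
    // ... port table, last_disconnect_at bookkeeping, etc.
}

impl DeviceManager {
    /// Re-enumerates serial ports only when the previous enumeration is
    /// older than `max_age`; inside the window this is a lock + clock read.
    pub fn refresh_devices_if_stale(&self, max_age: Duration) {
        let mut last = self.last_refresh.lock().unwrap();
        if matches!(*last, Some(t) if t.elapsed() < max_age) {
            return; // fresh enough: skip the ~20-30 ms OS enumeration
        }
        self.refresh_devices();
        *last = Some(Instant::now());
    }

    fn refresh_devices(&self) {
        // Real implementation enumerates OS serial ports and updates
        // `last_disconnect_at`; elided here.
    }
}
```

On the trust-hash deploy path this would presumably be called as `manager.refresh_devices_if_stale(Duration::from_secs(2))`, matching the 2 s freshness window described above.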
## Summary
- Adds `BroadcastLogLayer` (`crates/fbuild-daemon/src/log_layer.rs`), a `tracing-subscriber` `Layer` that forwards events to the existing `BroadcastHub::log_tx` channel so `/ws/logs` subscribers see the same output the daemon writes to stderr.
- `write-flash` progress already emits via `tracing::info!()` in `fbuild-deploy::esp32_native`; this layer makes those per-region and 10%-throttled progress lines visible on the WebSocket stream — closing the follow-up from the #66 ("perf(deploy): replace esptool subprocess with espflash native crate") code comments / issue comment without adding a separate progress API.
- `BroadcastHub` is constructed in `main.rs` before tracing init and threaded into `DaemonContext::with_hub(...)` so the layer can be registered against `log_tx` before the first `tracing::info!` (see the startup sketch below).
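A hedged sketch of that startup ordering, reusing the `BroadcastLogLayer` from the sketch in the PR description above; the hub is stubbed down to the one channel the layer needs, and `DaemonContext::with_hub(...)` is left as a comment since its shape isn't shown here:

```rust
use tracing_subscriber::prelude::*;

// Stub: the real BroadcastHub carries more channels than log_tx.
pub struct BroadcastHub {
    pub log_tx: tokio::sync::broadcast::Sender<String>,
}

fn main() {
    // 1. Build the hub first so log_tx exists before tracing init.
    let (log_tx, _rx) = tokio::sync::broadcast::channel::<String>(256);
    let hub = BroadcastHub { log_tx };

    // 2. Register the broadcast layer next to the stderr layer, so the very
    //    first tracing::info! already reaches /ws/logs subscribers.
    tracing_subscriber::registry()
        .with(tracing_subscriber::fmt::layer().with_writer(std::io::stderr))
        .with(BroadcastLogLayer { log_tx: hub.log_tx.clone() })
        .init();

    tracing::info!("daemon starting"); // bridged to /ws/logs from the start

    // 3. Thread the same hub into the daemon context, per the PR:
    // let ctx = DaemonContext::with_hub(hub);
}
```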
## Behavior notes

- Events from `fbuild_daemon::handlers::websockets` are filtered out so the `/ws/logs` handler's own connect/disconnect notices don't feed themselves back onto the channel they announce.
- The payload shape matches the welcome frame `/ws/logs` already emits: `{"type":"log","level":..,"message":..,"timestamp":..,"module":..}` — clients parse every line with one schema (a client-side sketch follows).
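Since every frame, welcome included, shares that shape, a client can deserialize the whole stream into one struct. A sketch assuming `serde`; the struct itself is illustrative, not part of the PR:

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct LogFrame {
    r#type: String,    // always "log"
    level: String,     // e.g. "INFO"
    message: String,
    timestamp: String, // RFC 3339 per the sketch above
    module: String,    // tracing target, e.g. "fbuild_deploy::esp32_native"
}

fn parse(frame: &str) -> serde_json::Result<LogFrame> {
    serde_json::from_str(frame)
}
```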
## Test plan

- `uv run cargo check -p fbuild-daemon --all-targets`
- `uv run cargo clippy -p fbuild-daemon --all-targets -- -D warnings`
- `uv run cargo test -p fbuild-daemon --lib` (111 passed; 4 new log_layer tests)
- Manual: subscribe to `/ws/logs` against a local daemon during an ESP32 deploy with `FBUILD_USE_ESPFLASH_WRITE=1` and confirm progress frames stream live.

🤖 Generated with Claude Code