better packet queuing & pacing for custom palette live preview #5515
Conversation
Far fewer packets get dropped on larger setups.
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
Walkthrough
Adds a monotonically increasing apply sequence counter to cancel stale in-flight previews, paces WebSocket sends, and bounds the HTTP fallback queue.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant UI as Client UI
    participant WS as WebSocket
    participant HTTP as HTTP Queue
    participant LED as /json/state
    User->>UI: Start preview apply
    UI->>UI: _applySeq++ (seq = N)
    UI->>HTTP: flush pending previews (mark dropped)
    alt ws exists && ws.bufferedAmount < 32768
        UI->>WS: ws.send(preview)
        WS-->>UI: sent
        UI->>UI: wait ~15ms
    else
        alt HTTP queue length < 5
            UI->>HTTP: enqueue(preview)
            HTTP-->>UI: queued
            UI->>HTTP: enforce ~120ms spacing between sends
        else
            UI->>UI: return -1 (queue full)
        end
    end
    UI->>LED: GET /json/state
    LED-->>UI: state
    alt new preview started before completion
        User->>UI: Start preview apply
        UI->>UI: _applySeq++ (seq = N+1)
        UI->>HTTP: flush pending previews (mark dropped)
        UI->>UI: abort in-flight operations for seq N
    end
```
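The supersede mechanism in the diagram above can be sketched as follows. This is a minimal illustration, not the shipped cpal.htm code: `_applySeq` is the counter named in the PR, while `startPreview` and `isCurrent` are hypothetical names used here for clarity.

```javascript
// Minimal sketch of the sequence-counter supersede pattern from the diagram.
let _applySeq = 0;

// Each preview run captures its own sequence number on entry; bumping the
// shared counter instantly invalidates every older run.
function startPreview() {
  const seq = ++_applySeq;
  return {
    seq,
    // A chunk loop would check this before each send and bail out on false.
    isCurrent: () => seq === _applySeq,
  };
}
```

The key property is that cancellation costs one integer comparison per chunk, with no need to track or abort individual in-flight requests.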
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Suggested reviewers
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
wled00/data/cpal/cpal.htm (1)
634-670: ⚠️ Potential issue | 🟠 Major

Clearing `_httpQueue` leaks unresolved promises and strands the previous `applyLED`.

At line 667 you do `_httpQueue.length = 0`, but the items dropped from the queue each hold a `resolve` callback (pushed at line 640). Those promises are never settled, so the previous `applyLED` invocation — which is awaiting `requestJson` at line 689 — remains suspended forever and can never reach the `seq !== _applySeq` guard at line 687 to exit. Each superseded preview leaks a pending async frame, and in the (narrow) case where the loop at line 686 is awaiting a queued item, the old run keeps a reference to its `arr`/`palCache` data indefinitely. Resolve them before clearing so the old awaiters unblock and promptly hit the seq check:
🛠️ Proposed fix
```diff
- // discard pending HTTP chunks from any previous preview so stale data doesn't drain slowly
- if (window._httpQueue) _httpQueue.length = 0;
+ // discard pending HTTP chunks from any previous preview so stale data doesn't drain slowly;
+ // resolve dropped entries so their awaiters can observe the seq change and exit
+ if (window._httpQueue) {
+   while (_httpQueue.length) _httpQueue.shift().resolve(-1);
+ }
```

Relatedly, when the queue cap is hit at line 638 the caller receives `-1` but `applyLED` at line 689 never inspects the return value, so that chunk is silently dropped. Worth a comment there noting the supersede logic is what makes this acceptable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@wled00/data/cpal/cpal.htm` around lines 634 - 670, The issue: clearing window._httpQueue with "_httpQueue.length = 0" drops queued items that each contain a resolve callback and leaves their promises unresolved, leaking suspended applyLED calls; fix by draining and resolving each queued entry before clearing. Locate the queue producer (the push of "{ cmd, resolve }" and the run() loop that shifts _httpQueue) and modify the applyLED path that currently does "if (window._httpQueue) _httpQueue.length = 0" to instead iterate the array, call each entry.resolve(...) with a sensible sentinel (e.g., a canceled code like -1 or an Error) so awaiting callers unblock, then clear the array and proceed; also add a brief comment near the queue-cap check (the "if (_httpQueue.length >= 5) return -1" branch) noting that callers may receive -1 when a chunk is dropped by queue-cap and that applyLED relies on supersede logic.
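The drain-and-resolve fix can be demonstrated in isolation. This sketch assumes the queue-entry shape `{ cmd, resolve }` described in the review; `enqueue` and `flushQueue` are hypothetical names, not the functions in cpal.htm.

```javascript
// Bounded FIFO of pending HTTP sends; each entry holds the resolve callback
// of the promise its caller is awaiting.
const _httpQueue = [];

function enqueue(cmd) {
  if (_httpQueue.length >= 5) return Promise.resolve(-1); // queue full
  return new Promise((resolve) => _httpQueue.push({ cmd, resolve }));
}

// Instead of `_httpQueue.length = 0`, settle every dropped entry with a
// sentinel so suspended awaiters resume and hit their seq guard.
function flushQueue() {
  while (_httpQueue.length) _httpQueue.shift().resolve(-1);
}
```

The sentinel value matters less than the settling itself: any awaiter that resumes will immediately re-check the sequence counter and exit if it has been superseded.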
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: d364f7b7-da46-42dc-a07d-94551b51a770
📒 Files selected for processing (1)
wled00/data/cpal/cpal.htm
🧹 Nitpick comments (3)
wled00/data/cpal/cpal.htm (3)
634-638: Inconsistent `window.` prefixing on the HTTP queue globals.

`_httpQueue`/`_httpRun` are initialized as explicit `window` properties (lines 635-636), but every subsequent reference (lines 638, 640, 642-644, 656, and 668 in `applyLED`) drops the prefix. This only works because the `<script>` tag runs in sloppy mode, where bare identifiers resolve through the global object. If this file is ever moved to a module / strict mode, these become `ReferenceError`s. Either declare them with `let` at module scope (preferred) or use `window.` consistently.

♻️ Suggested refactor

```diff
- let _applySeq = 0; // incremented each time applyLED fires; used to cancel stale in-flight previews
+ let _applySeq = 0; // incremented each time applyLED fires; used to cancel stale in-flight previews
+ let _httpQueue = [], _httpRun = 0;
```

```diff
- // HTTP fallback
- if (!window._httpQueue) {
-   window._httpQueue = [];
-   window._httpRun = 0;
- }
- if (_httpQueue.length >= 5) return -1; // queue full; applyLED cancels stale queues before sending
+ // HTTP fallback
+ if (_httpQueue.length >= 5) return -1; // queue full; applyLED cancels stale queues before sending
```

…and update the `applyLED` clear-queue block accordingly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@wled00/data/cpal/cpal.htm` around lines 634 - 638, The globals _httpQueue and _httpRun are initialized with window._httpQueue/window._httpRun but referenced without the window prefix inside applyLED, which will break in strict/module mode; fix by choosing one approach and applying it consistently: preferably declare them at module scope with let _httpQueue = _httpQueue || []; let _httpRun = _httpRun || 0 (or initialize them once at top-level) and then update all references in applyLED to use the plain identifiers (_httpQueue, _httpRun); alternatively, if you prefer keeping them on the window object, change all uses in applyLED (and the initialization) to window._httpQueue and window._httpRun so every access is consistent. Ensure the clear-queue block and capacity check in applyLED reference the same identifiers you choose.
623-631: 15 ms post-send wait is paid by every `requestJson` caller, not just preview chunks.

`requestJson` is also invoked for non-chunked control messages (e.g. `{seg:{frz:false}}` from the preview checkbox handler at line 121 and the `beforeunload` handler at line 701). With this change those calls now also block for 15 ms after a successful WS send. It's harmless functionally, but if the pacing is meant to throttle only the preview-chunk pipeline, consider gating the wait on a parameter (e.g. `requestJson(cmd, {pace:true})`) and only opting in from the chunk loop in `applyLED`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@wled00/data/cpal/cpal.htm` around lines 623 - 631, The post-send 15ms delay in requestJson causes all callers (not just the preview chunk pipeline) to wait; change requestJson(cmd) to accept an optional flag (e.g., requestJson(cmd, {pace:false}) or requestJson(cmd, pace)) and only perform the await new Promise(...) when that flag is true; update the chunked-send path inside applyLED (the loop that sends preview chunks) to call requestJson with pacing enabled, and leave other callers such as the preview checkbox handler ({seg:{frz:false}}) and the beforeunload handler to call requestJson without pacing so they do not incur the delay.
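One way the opt-in pacing suggested above could look. This is a sketch under the review's assumptions, not the real `requestJson` in cpal.htm, whose signature and transport handling differ; the send itself is elided.

```javascript
// Opt-in pacing: only callers that request it pay the post-send delay, so
// one-off control messages stay fast while preview chunks are throttled.
const PACE_MS = 15;

async function requestJson(cmd, { pace = false } = {}) {
  // ...actual WS/HTTP send of `cmd` omitted in this sketch...
  if (pace) await new Promise((r) => setTimeout(r, PACE_MS));
  return 0; // success sentinel
}
```

The chunk loop would then call `requestJson(chunk, { pace: true })`, while handlers like the preview checkbox and `beforeunload` would keep the un-paced default.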
662-695: Seq-invalidation pattern looks correct — small note on the ignored return value.

The `_applySeq` bump + queue flush + per-chunk seq check cleanly cancel stale previews; concurrent `applyLED` invocations resolve safely (the older loop sees the seq mismatch on its next iteration after its in-flight `requestJson` resumes, even when resolved with `-1`).

Minor: line 691 doesn't inspect the return of `requestJson`, so a `-1` (queue full / cancelled chunk) is indistinguishable from success. Today the queue cap is effectively unreachable because the chunk loop is sequential (max 1 in-flight), so this is benign — flagging only in case the pacing model changes later. Could short-circuit early:

```diff
- if (seq !== _applySeq) return; // superseded mid-send
- let chunk = [s.start + j, ...arr.slice(j, j + maxCol)];
- await requestJson({ seg: { id: s.id, i: chunk } });
+ if (seq !== _applySeq) return; // superseded mid-send
+ let chunk = [s.start + j, ...arr.slice(j, j + maxCol)];
+ if (await requestJson({ seg: { id: s.id, i: chunk } }) === -1) return;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@wled00/data/cpal/cpal.htm` around lines 662 - 695, In applyLED(), the per-chunk await of requestJson doesn't inspect its return value so a cancelled queue entry (returns -1) is treated the same as success; capture the result of the await inside the chunk loop (await requestJson(...)), and if it returns -1 (or another sentinel for cancelled/full) then short-circuit (break or return) to avoid continuing to send further chunks for that stale preview; update the chunk loop in applyLED to check the requestJson result and exit the function or segment loop when it signals cancellation.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: a48de8e9-84c1-4c89-b947-931b6e7dd010
📒 Files selected for processing (1)
wled00/data/cpal/cpal.htm
* better packet queuing / pacing for custom palette live preview
* fix leak
* remove window prefix from variables
Far fewer packets get dropped on larger setups, at the cost of a slightly slower update rate.