
better packet queuing & pacing for custom palette live preview #5515

Merged
DedeHai merged 3 commits into wled:main from DedeHai:improved_customPal_preview
Apr 20, 2026

Conversation

@DedeHai
Collaborator

@DedeHai DedeHai commented Apr 18, 2026

Far fewer packets get dropped on larger setups, at the cost of a slightly slower update rate.

Summary by CodeRabbit

  • Bug Fixes
    • Prevented stale or in-flight LED preview updates from completing, improving responsiveness and visual correctness during rapid changes.
    • Improved connection handling and pacing by choosing the best transport and limiting queued preview requests to avoid overload and dropped operations.

@coderabbitai
Contributor

coderabbitai Bot commented Apr 18, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 9dbf42ed-5451-42b3-890e-0c2c683e32e9

📥 Commits

Reviewing files that changed from the base of the PR and between 1cacd13 and c32858e.

📒 Files selected for processing (1)
  • wled00/data/cpal/cpal.htm
🚧 Files skipped from review as they are similar to previous changes (1)
  • wled00/data/cpal/cpal.htm

Walkthrough

Adds a monotonically increasing _applySeq to cancel stale preview operations, clears pending HTTP preview queue when a new preview starts, gates WebSocket usage by ws.bufferedAmount < 32768 with an HTTP-queue fallback (queue capped), and inserts a short ~15ms pause after ws.send().
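The transport gating described above can be sketched in isolation (a hedged sketch: `sendPreview`, `WS_BUFFER_LIMIT`, and the queue shape are illustrative names, not the actual identifiers in cpal.htm):

```javascript
// Illustrative sketch of the transport-selection logic: prefer the
// WebSocket while its send buffer is healthy, otherwise fall back to a
// capped HTTP queue. Names here are stand-ins, not the real cpal.htm code.
const WS_BUFFER_LIMIT = 32768; // skip WS when its send buffer backs up
const HTTP_QUEUE_CAP = 5;      // cap queued HTTP fallback requests

function sendPreview(ws, httpQueue, chunk) {
  if (ws && ws.readyState === 1 /* OPEN */ && ws.bufferedAmount < WS_BUFFER_LIMIT) {
    ws.send(JSON.stringify(chunk));
    return 'ws'; // caller then waits ~15 ms before the next send
  }
  if (httpQueue.length >= HTTP_QUEUE_CAP) return -1; // queue full: drop chunk
  httpQueue.push(chunk); // drained elsewhere with ~120 ms inter-request pacing
  return 'http';
}
```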

Changes

Cohort / File(s) Summary
LED Preview & Transport Logic
wled00/data/cpal/cpal.htm
Introduced _applySeq counter and checks to abort stale/in-flight preview dispatches; flushes/resolves pending HTTP preview requests as dropped when a new preview begins; WebSocket used only when connected and ws.bufferedAmount < 32768, otherwise HTTP queue used (returns -1 when queue length ≥5); added ~15ms wait after ws.send(); HTTP fallback maintains ~120ms inter-request pacing.

Sequence Diagram(s)

sequenceDiagram
    actor User
    participant UI as Client UI
    participant WS as WebSocket
    participant HTTP as HTTP Queue
    participant LED as /json/state

    User->>UI: Start preview apply
    UI->>UI: _applySeq++ (seq = N)
    UI->>HTTP: flush pending previews (mark dropped)

    alt ws exists && ws.bufferedAmount < 32768
        UI->>WS: ws.send(preview)
        WS-->>UI: sent
        UI->>UI: wait ~15ms
    else
        alt HTTP queue length < 5
            UI->>HTTP: enqueue(preview)
            HTTP-->>UI: queued
            UI->>HTTP: enforce ~120ms spacing between sends
        else
            UI->>UI: return -1 (queue full)
        end
    end

    UI->>LED: GET /json/state
    LED-->>UI: state

    alt new preview started before completion
        User->>UI: Start preview apply
        UI->>UI: _applySeq++ (seq = N+1)
        UI->>HTTP: flush pending previews (mark dropped)
        UI->>UI: abort in-flight operations for seq N
    end

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Suggested reviewers

  • netmindz
  • softhack007
🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'better packet queuing & pacing for custom palette live preview' directly reflects the main changes: improved packet queuing/pacing logic and WebSocket fallback behavior for the custom palette preview feature.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping docstring coverage check.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Contributor

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
wled00/data/cpal/cpal.htm (1)

634-670: ⚠️ Potential issue | 🟠 Major

Clearing _httpQueue leaks unresolved promises and strands the previous applyLED.

At line 667 you do _httpQueue.length = 0, but the items dropped from the queue each hold a resolve callback (pushed at line 640). Those promises are never settled, so the previous applyLED invocation — which is awaiting requestJson at line 689 — remains suspended forever and can never reach the seq !== _applySeq guard at line 687 to exit. Each superseded preview leaks a pending async frame, and in the (narrow) case where the loop at line 686 is awaiting a queued item, the old run keeps a reference to its arr/palCache data indefinitely.

Resolve them before clearing so the old awaiters unblock and promptly hit the seq check:

🛠️ Proposed fix
-			// discard pending HTTP chunks from any previous preview so stale data doesn't drain slowly
-			if (window._httpQueue) _httpQueue.length = 0;
+			// discard pending HTTP chunks from any previous preview so stale data doesn't drain slowly;
+			// resolve dropped entries so their awaiters can observe the seq change and exit
+			if (window._httpQueue) {
+				while (_httpQueue.length) _httpQueue.shift().resolve(-1);
+			}

Relatedly, when the queue cap is hit at line 638 the caller receives -1 but applyLED at line 689 never inspects the return value, so that chunk is silently dropped. Worth a comment there noting the supersede logic is what makes this acceptable.
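The leak and the fix can be reproduced in a standalone sketch (the `{ cmd, resolve }` entry shape and the function names are assumptions drawn from this comment's description, not the actual cpal.htm code):

```javascript
// Each queued entry holds the resolve callback of a promise some applyLED
// invocation is awaiting. Truncating the array strands those promises;
// draining settles each one with the -1 "dropped" sentinel first.
function enqueue(queue, cmd) {
  return new Promise((resolve) => queue.push({ cmd, resolve }));
}

// Buggy supersede: awaiters of the dropped entries never settle (leak).
function clearByTruncate(queue) {
  queue.length = 0;
}

// Fixed supersede: dropped awaiters resolve with -1, unblock, and can then
// observe the seq change and exit.
function clearByDrain(queue) {
  while (queue.length) queue.shift().resolve(-1);
}
```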

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@wled00/data/cpal/cpal.htm` around lines 634 - 670, The issue: clearing
window._httpQueue with "_httpQueue.length = 0" drops queued items that each
contain a resolve callback and leaves their promises unresolved, leaking
suspended applyLED calls; fix by draining and resolving each queued entry before
clearing. Locate the queue producer (the push of "{ cmd, resolve }" and the
run() loop that shifts _httpQueue) and modify the applyLED path that currently
does "if (window._httpQueue) _httpQueue.length = 0" to instead iterate the
array, call each entry.resolve(...) with a sensible sentinel (e.g., a canceled
code like -1 or an Error) so awaiting callers unblock, then clear the array and
proceed; also add a brief comment near the queue-cap check (the "if
(_httpQueue.length >= 5) return -1" branch) noting that callers may receive -1
when a chunk is dropped by queue-cap and that applyLED relies on supersede
logic.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: d364f7b7-da46-42dc-a07d-94551b51a770

📥 Commits

Reviewing files that changed from the base of the PR and between 5e49a1c and 67904df.

📒 Files selected for processing (1)
  • wled00/data/cpal/cpal.htm

Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (3)
wled00/data/cpal/cpal.htm (3)

634-638: Inconsistent window. prefixing on the HTTP queue globals.

_httpQueue / _httpRun are initialized as explicit window properties (lines 635-636), but every subsequent reference (lines 638, 640, 642-644, 656, and 668 in applyLED) drops the prefix. This only works because the <script> tag runs in sloppy mode where bare identifiers resolve through the global object. If this file is ever moved to a module / strict mode, these become ReferenceErrors.

Either declare them with let at module scope (preferred) or use window. consistently.

♻️ Suggested refactor
-		let _applySeq = 0; // incremented each time applyLED fires; used to cancel stale in-flight previews
+		let _applySeq = 0; // incremented each time applyLED fires; used to cancel stale in-flight previews
+		let _httpQueue = [], _httpRun = 0;
-			// HTTP fallback
-			if (!window._httpQueue) {
-				window._httpQueue = [];
-				window._httpRun = 0;
-			}
-			if (_httpQueue.length >= 5) return -1; // queue full; applyLED cancels stale queues before sending
+			// HTTP fallback
+			if (_httpQueue.length >= 5) return -1; // queue full; applyLED cancels stale queues before sending

…and update the applyLED clear-queue block accordingly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@wled00/data/cpal/cpal.htm` around lines 634 - 638, The globals _httpQueue and
_httpRun are initialized with window._httpQueue/window._httpRun but referenced
without the window prefix inside applyLED, which will break in strict/module
mode; fix by choosing one approach and applying it consistently: preferably
declare them at module scope with let _httpQueue = _httpQueue || []; let
_httpRun = _httpRun || 0 (or initialize them once at top-level) and then update
all references in applyLED to use the plain identifiers (_httpQueue, _httpRun);
alternatively, if you prefer keeping them on the window object, change all uses
in applyLED (and the initialization) to window._httpQueue and window._httpRun so
every access is consistent. Ensure the clear-queue block and capacity check in
applyLED reference the same identifiers you choose.

623-631: 15 ms post-send wait is paid by every requestJson caller, not just preview chunks.

requestJson is also invoked for non-chunked control messages (e.g. {seg:{frz:false}} from the preview checkbox handler at line 121 and the beforeunload handler at line 701). With this change those calls now also block for 15 ms after a successful WS send. It's harmless functionally, but if the pacing is meant to throttle only the preview-chunk pipeline, consider gating the wait on a parameter (e.g. requestJson(cmd, {pace:true})) and only opting in from the chunk loop in applyLED.
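One way the suggested opt-in pacing could look (a minimal sketch only; the `transport` callback is a hypothetical stand-in for the real WebSocket/HTTP dispatch inside cpal.htm's `requestJson`):

```javascript
// Sketch of the suggested opt-in pacing: only callers that pass pace:true
// pay the ~15 ms post-send wait; all other calls return immediately.
const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

async function requestJson(cmd, transport, { pace = false } = {}) {
  const result = transport(cmd); // stand-in for the actual send path
  if (pace) await sleep(15);     // throttle only the preview-chunk pipeline
  return result;
}
```

The chunk loop in applyLED would then opt in with `{ pace: true }`, while callers like the `{seg:{frz:false}}` checkbox handler and the beforeunload handler keep the default and skip the delay.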

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@wled00/data/cpal/cpal.htm` around lines 623 - 631, The post-send 15ms delay
in requestJson causes all callers (not just the preview chunk pipeline) to wait;
change requestJson(cmd) to accept an optional flag (e.g., requestJson(cmd,
{pace:false}) or requestJson(cmd, pace)) and only perform the await new
Promise(...) when that flag is true; update the chunked-send path inside
applyLED (the loop that sends preview chunks) to call requestJson with pacing
enabled, and leave other callers such as the preview checkbox handler
({seg:{frz:false}}) and the beforeunload handler to call requestJson without
pacing so they do not incur the delay.

662-695: Seq-invalidation pattern looks correct — small note on the ignored return value.

The _applySeq bump + queue flush + per-chunk seq check cleanly cancel stale previews; concurrent applyLED invocations resolve safely (the older loop sees the seq mismatch on its next iteration after its in-flight requestJson resumes, even when resolved with -1).

Minor: line 691 doesn't inspect the return of requestJson, so a -1 (queue full / cancelled chunk) is indistinguishable from success. Today the queue cap is effectively unreachable because the chunk loop is sequential (max 1 in-flight), so this is benign — flagging only in case the pacing model changes later. Could short-circuit early:

-					if (seq !== _applySeq) return; // superseded mid-send
-					let chunk = [s.start + j, ...arr.slice(j, j + maxCol)];
-					await requestJson({ seg: { id: s.id, i: chunk } });
+					if (seq !== _applySeq) return; // superseded mid-send
+					let chunk = [s.start + j, ...arr.slice(j, j + maxCol)];
+					if (await requestJson({ seg: { id: s.id, i: chunk } }) === -1) return;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@wled00/data/cpal/cpal.htm` around lines 662 - 695, In applyLED(), the
per-chunk await of requestJson doesn't inspect its return value so a cancelled
queue entry (returns -1) is treated the same as success; capture the result of
the await inside the chunk loop (await requestJson(...)), and if it returns -1
(or another sentinel for cancelled/full) then short-circuit (break or return) to
avoid continuing to send further chunks for that stale preview; update the chunk
loop in applyLED to check the requestJson result and exit the function or
segment loop when it signals cancellation.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: a48de8e9-84c1-4c89-b947-931b6e7dd010

📥 Commits

Reviewing files that changed from the base of the PR and between 67904df and 1cacd13.

📒 Files selected for processing (1)
  • wled00/data/cpal/cpal.htm

@softhack007 softhack007 added the connectivity (Issue regarding protocols, WiFi connection or availability of interfaces) and optimization (re-working an existing feature to be faster, or use less memory) labels Apr 18, 2026
@DedeHai DedeHai merged commit a2d970e into wled:main Apr 20, 2026
25 checks passed
@DedeHai DedeHai deleted the improved_customPal_preview branch April 20, 2026 16:58
DedeHai added a commit that referenced this pull request Apr 20, 2026
* better packet queuing / pacing for custom palette live preview
* fix leak
* remove window prefix from variables
