Reduce conntrack GC allocation churn by reusing sweep scratch buffers#4
Closed
Copilot AI changed the title from [WIP] Improve garbage collection performance for sessions and state to Reduce conntrack GC allocation churn by reusing sweep scratch buffers on Feb 25, 2026
psaab added a commit that referenced this pull request on Feb 25, 2026
…il opts

Four fixes sourced from GitHub Copilot PR review (#2-#5):

1. NAT64 state cleanup (PR #5): compileNAT64() returned early when len(ruleSets)==0, skipping SetNAT64Count(0) and DeleteStaleNAT64(). Removing all NAT64 rules left stale prefixes in BPF maps.
2. DNAT wildcard port-0 (PR #3): skip redundant dnat_table wildcard lookup when meta->dst_port is already 0 — the lookup would be identical to the one that just failed. Applied to both v4 and v6.
3. GC scratch buffer reuse (PR #4): sweep() allocated fresh slices every 10s cycle. Reuse backing arrays via [:0] reset to reduce allocation churn under high session turnover.
4. DHCPv6 nil opts guard (PR #2): move opts==nil check before getDUID() so nil opts returns nil modifiers, not a DUID-only modifier list.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
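A minimal sketch of the fix-3 pattern. The commit describes Go-style `[:0]` slice resets; the equivalent idea in Rust is shown below with illustrative names (the real conntrack types are not reproduced here): keep the sweep scratch buffer on the collector and reset its length, not its capacity, each cycle.

```rust
/// Illustrative GC state: the scratch Vec lives across sweep() calls,
/// so it reaches its high-water capacity once and subsequent 10s
/// sweeps allocate nothing.
struct ConntrackGc {
    expired_scratch: Vec<u64>, // session IDs selected by this sweep
}

impl ConntrackGc {
    fn sweep(&mut self, now_ns: u64, sessions: &[(u64, u64)]) -> &[u64] {
        // Equivalent of Go's `buf = buf[:0]`: length -> 0, backing
        // allocation retained.
        self.expired_scratch.clear();
        for &(id, expires_ns) in sessions {
            if expires_ns <= now_ns {
                self.expired_scratch.push(id);
            }
        }
        &self.expired_scratch
    }
}
```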
psaab added a commit that referenced this pull request on Apr 20, 2026
Commit ef92b44 cites median CoV "39.5 %" but the fresh 5-run data block in docs/785-d3-validation.md lists median 38.8 % (third element of the sorted set {19.2, 37.2, 38.8, 48.4, 63.2}). Add an Errata section recording the correct number so the doc stands alone and the PR review reference is resolved without rewriting history. Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
psaab added a commit that referenced this pull request on Apr 20, 2026
Folds in round-2 findings from both adversarial reviewers (Codex plan angle + systems angle). No weakening of fairness non-regression gates; no review docs modified.

Codex round-2 findings:

- #4 (Step 0 per-item gate): Step 0 is now a per-item PASS/FAIL checklist. Each sub-step (0.1 IRQ, 0.2 NAPI+coalescence, 0.3 TCP CC, 0.4 rings, 0.5 C-states, 0.6 XSK bind) records observed vs expected with an explicit PASS/FAIL disposition, and the step emits a mandatory summary table. "X of N audit items PASS" replaces any whole-step PASS.
- #5 (CoV rollback floor): CoV regression gate now requires BOTH `> 2 x stddev(pre-CoV)` AND `> 3 percentage points` (MIN_COV_DELTA_PP = 3). Prevents tight-baseline noise rollbacks.
- #6 (latency gate spec): probe source = cluster-userspace-host, target = 172.16.80.200, CPU-isolated via taskset on a non-worker CPU, dual-size concurrent `ping -s 56` + `ping -s 1400` for the full test window, per-size p50/p99 captured. p99 rollback gate now requires `> 2 x stddev(pre-p99)` AND `> 20 us` absolute floor.
- #7 (ring-quadruple consolidation): Step 0.4 becomes the single authoritative ring-audit table naming authoritative + secondary counters per ring. Step 5 shrinks to deltas-only against the Step 0.4 table. Counters that don't yet exist in the code (fill_batch_starved, completion_reap_max_batch) are named as Phase C "Instrumentation pre-work" prerequisites, not silent holes.

Systems round-2 findings:

- R2-1 (dual ping size): covered by Codex #6 fold-in; Step 3 runs `-s 56` and `-s 1400` concurrently, each on its own isolated CPU.
- R2-2 (`ss -ti` cadence): specified as every 5 s for the full window, all flows (filtered by iperf3 port, not just a sample flow).
- R2-3 (`perf stat` scope): pinned to `perf stat --per-thread -p $WORKER_PIDS` where `WORKER_PIDS = pgrep -f xpf-userspace-dp`. Explicitly not system-wide.

Phase C pre-work section lists which Step 0.4 counters already exist in userspace-dp (dbg_tx_ring_full, dbg_sendto_enobufs, dbg_pending_overflow, pending_tx_local_overflow_drops, tx_submit_error_drops, outstanding_tx, rx_fill_ring_empty_descs via xsk_ffi) and which are named but do NOT yet exist (fill_batch_starved, completion_reap_max_batch) — these become proxy-or-instrument decisions at the start of Phase C.

Deferred findings: none from round 2; explicit fold-in map added at the end of the plan.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
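The #5 dual-condition gate is small enough to sketch directly. A hedged version, assuming CoV values are carried in percentage points (function and parameter names are hypothetical, not the plan's):

```rust
// Sketch of the #5 rollback floor: the gate fires only when the CoV
// regression clears BOTH the statistical bound and the absolute floor,
// so a tight baseline's noise cannot trigger a rollback on its own.
const MIN_COV_DELTA_PP: f64 = 3.0; // percentage points

fn cov_regression_fires(pre_cov_pp: f64, post_cov_pp: f64, pre_cov_stddev_pp: f64) -> bool {
    let delta = post_cov_pp - pre_cov_pp;
    delta > 2.0 * pre_cov_stddev_pp && delta > MIN_COV_DELTA_PP
}
```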
psaab added a commit that referenced this pull request on Apr 21, 2026
Closes the four Codex round-3 items on step1-plan.md:

- Round-3 #3 (PARTIAL): §4.2 now publishes the Monte Carlo TRUE-POSITIVE power under a 56 %-skew alternative (per-cell fire rate 0.6302 at max>=9), NOT the >99% the prior defense implied. The honest defense is the multi-cell aggregation in §4.6: P(>=2 of 8 cells fire | 56% skew) = 0.9949.
- Round-3 #2 HIGH: §4.7 adds a Threshold Y re-derivation policy. Bootstrap 95% CI for Y is [1.82, 2.88] with n=4, so Y=2.72 sits near the CI upper edge. When Step 1 produces new baseline cells, the script must be re-run with the expanded --cells list; if Y moves more than the CI half-width (0.53), update the plan.
- Round-3 #4 HIGH (0.77 expected false A across 12 cells): §4.6 commits the FP-discount / multi-cell aggregation policy. Single Verdict A firings are treated as noise; Verdict A triggers Step 2 only when k_A >= 2 cells fire. Same rule applies to Verdict B. Verdict C stays single-cell (per-cell FP < 0.01). §8 Step-2 triggers are rewritten to use k_ counts from §4.6.
- Round-3 #3 HIGH (CoS apply no-rollback): §6 step 2 documents the new atomic apply-cos-config.sh contract (commit check + atomic commit + post-commit verify + rollback 1 on failure). §6 step 7 (remove-with-cos) now requires the same atomic pattern. §10 risks table adds a dedicated row for the apply-atomicity guarantee.

No weakening of thresholds. The FP-discount policy is the tightening Codex asked for: single-cell A was previously load-bearing; it is now explicitly discounted. All derivations are reproducible via committed Python scripts (stdlib only).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
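The §4.6 aggregation number follows from a plain binomial tail over the quoted per-cell fire rate. A sketch that reproduces it (only the 0.6302 and the 8-cell count come from the commit; everything else is arithmetic):

```rust
// P(>= 2 of 8 cells fire) = 1 - P(0) - P(1) for Binomial(8, 0.6302).
fn main() {
    let (n, p) = (8u32, 0.6302f64);
    let p0 = (1.0 - p).powi(n as i32);
    let p1 = (n as f64) * p * (1.0 - p).powi(n as i32 - 1);
    println!("P(>=2 cells fire) = {:.4}", 1.0 - p0 - p1); // prints 0.9949
}
```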
psaab added a commit that referenced this pull request on Apr 21, 2026
HIGH #1 — submit-side amortization collapse on small batches. §3.1 revised: per-commit stamp (one monotonic_nanos() per writer.commit()) applied to the `inserted` descriptors only; retry-tail not stamped. Why `now_ns` reuse is rejected (up to ~1 ms staleness across the worker loop at worker.rs:619 + afxdp.rs:176-178).

HIGH #2 — relaxed-atomic cross-CPU visibility on ARM. §3.6.a decision: keep Relaxed on both writer and reader, document bounded-skew semantics (mirrors existing umem.rs:1322-1329 pattern), downgrade invariants 6/7 and §8 hard-stop #4 to "|sum - count| / count ≤ 0.01" rather than exact equality. Upgrade to Release/Acquire rejected on ~2 % ARM cost grounds.

HIGH #3 — sidecar false sharing across workers. §3.3 rewritten after reading the code: each binding has its own UMEM via `WorkerUmemPool::new` at worker.rs:445 with `shared_umem=false`, wrapped in `Rc<WorkerUmemInner>` at umem.rs:16-18 (single-owner thread). Cross-worker false-sharing is structurally impossible. UMEM-headroom in-frame approach (c) rejected on blast-radius grounds.

HIGH #4 (Codex numbering) — overhead arithmetic. §3.4 rewritten with three operating-point numbers (`inserted = 256 / 64 / 1`); honest worst-case 45 ns/pkt = 9.4 % of per-queue 481 ns budget. Corrects earlier 0.13 % figure that hid the partial-batch regime and divided by workers instead of queue.

Also closes MED #5 (sentinel vs clock-0 collision), MED #6 (sidecar size 192 KiB not 64 KiB; 3×ring_entries per bind.rs:37-44), MED #7 (wire-size growth ~8-10 KiB per status poll), MED #8 (Bonferroni family corrected to 3 composite tests per cell × 12 cells = 36), MED #9 (symbolic two-thread race test replaced by partial-batch, retry-unwind, and bounded-skew tests). LOW #10 (no ambient `now_ns` reuse), LOW #13 (named const asserts + boundary test).

Plan-ready: pending Codex round-2.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
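The HIGH #4 arithmetic is easy to reproduce. A sketch over the three operating points, where the 45 ns clock-read cost and 481 ns per-queue budget are the plan's quoted figures (only the worst case is quoted in the commit; the other two rows are just the same division):

```rust
// One monotonic clock read per commit, amortized across the accepted
// (`inserted`) descriptors. inserted == 1 is the partial-batch regime
// that the earlier 0.13 % figure hid.
fn main() {
    let clock_read_ns = 45.0;        // plan's quoted cost per stamp
    let per_queue_budget_ns = 481.0; // plan's per-queue packet budget
    for inserted in [256u32, 64, 1] {
        let per_pkt = clock_read_ns / inserted as f64;
        println!(
            "inserted={:>3}: {:.2} ns/pkt = {:.1}% of budget",
            inserted,
            per_pkt,
            100.0 * per_pkt / per_queue_budget_ns // 9.4% at inserted=1
        );
    }
}
```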
psaab pushed a commit that referenced this pull request on Apr 21, 2026
…wned-static-send, 5% stop, bucket-0)

Round-2 review left all three HIGH findings PARTIAL or OPEN, plus two MED findings around silent acceptance-criteria regression and bucket-0 deferral. Each is now closed with an in-place plan change:

HIGH #1 (VDSO fast path, PARTIAL -> CLOSED). Add §3.4a grounding the "NOT a syscall" claim on strace (host) + AT_SYSINFO_EHDR (bpfrx-fw0 VM) evidence committed in the prior commit. Chose option (a) + (c) from the findings — verify on the target AND document the deployment dependency with a named remediation path (graceful degrade via the existing sentinel rule, not panic). Invariant 1 in §4 now cites the evidence and names the fallback.

HIGH #2 (1% skew tolerance untested, Bonferroni tighter than tolerance, OPEN -> CLOSED). Derive K_skew = ceil(λ × W_read) = 1 completion per snapshot at 1 Mpps × 1 µs in §3.6 R2. Replace §11.3 Bonferroni with a cell-level block-permutation test (Fisher-Pitman style) whose null distribution is constructed from the data itself — within-block reshuffles absorb K_skew noise by construction, so the 0.0014 bound is neither needed nor applied. Add §6.1 test #7 that computes K_skew from the harness's own measured write rate and read-window, asserting |sum - count| <= K_skew + 2 (paranoia margin for TSO/ARM). The 1% integration hard-stop in §8 is now defended as the scheduler-preemption-robust bound, not a guess.

HIGH #3 (snapshot-thread crossing, PARTIAL -> CLOSED). Add §3.5a pinning BindingCountersSnapshot to owned values via a compile-time const assert that requires 'static + Send. The struct already derives Clone + Serialize + Deserialize + Default on u32/u64/i32 scalars; the #812 extension adds Vec<u64> + two u64s, all owned. Rust type-system trick: 'static on a struct with no lifetime parameter mechanically rejects any future &'a U field addition. Defense in depth: §6.1 test #4 (JSON round-trip) also fails for any accidental reference field via serde's DeserializeOwned default.

MED (overhead hard-stop silently widened from 1% to 5%/10%). Defended in §8.1 (option (b) from findings): the 1% bound was proposed before §3.4 re-derived actual costs. Typical case 3.1%, worst case 9.4% on the inserted==1 partial-batch regime documented at tx.rs:5953-5961 / tx.rs:6164-6174. A 1% gate would hard-stop on the instrumentation's own presence under partial-batch traffic; that is the wrong failure mode. 5% steady-state + 10% small-batch soft-gate is the cheapest correctness-preserving budget; alternatives (rdtsc, sampled stamping, bucket-midpoint sum) all lose measurement fidelity.

MED (bucket-0 coarseness deferred). Resolved as out-of-scope in §3.2 and §12 item 8. The §11 classifier's pre-registered statistics (D1/D2/D3) all read from buckets 3+ — no verdict depends on sub-µs resolution. Bucket 0 is intentionally coarse because the MQFQ-vs-shaper separation (verdict B) lives in tens-of-microseconds (buckets 4-7), not sub-µs.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
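The HIGH #3 type-system trick compiles down to a few lines. A self-contained sketch with a stand-in struct (the real BindingCountersSnapshot, its fields, and the const-assert name live in the #812 tree; only the pattern is shown here):

```rust
// Stand-in for the real snapshot struct: all fields owned, no
// lifetime parameters.
#[derive(Clone, Default)]
struct BindingCountersSnapshot {
    tx_submit_latency_hist: Vec<u64>,
    tx_submit_latency_count: u64,
}

// Only type-checks if T is owned ('static) and Send. Adding a borrowed
// field such as `&'a u64` to the struct breaks the build at this const
// item, pointing at this specific struct.
const fn assert_owned_static_send<T: 'static + Send>() {}
const _ASSERT_BINDING_COUNTERS_SNAPSHOT_IS_OWNED_STATIC_SEND: () =
    assert_owned_static_send::<BindingCountersSnapshot>();
```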
psaab added a commit that referenced this pull request on Apr 21, 2026
…ntics, block-permutation math)

Three focused round-4 fixes to the plan:

1. HIGH #2 PARTIAL: widen λ from 2 Mpps to 3 Mpps per worker to match the plan's own small-packet line-rate derivation (25 Gbps / 64 B / 4 workers ≈ 3.05 Mpps). K_skew = ceil(3e6 × 1e-6) = 3 completions, not 2. Downstream numbers updated throughout §3.6: pure memory-ordering bound `K_skew / C ≤ 0.03 %` at C ≥ 10 000; off-CPU pathology `4 ms × 3 Mpps = 12 000` completions, `12 000 / 200 000 = 6 %`.

2. NEW (1 % gate vs 4 ms preemption): adopt the "observability of the event is the point" stance explicitly. The 1 % gate IS expected to fire under ≥ ~1.5 ms CFS preemption; that fire is a measurement pathology the harness should surface, not a false positive. §3.6 now calibrates three regimes: pure memory-ordering (`≤ 0.03 %`, unit-test §6.1 #7), preemption-extended short-window (up to ~6 %, investigation-worthy not auto-block), integration (60-s run at C ≥ 10^7 where 4 ms preemption dilutes to 0.12 %, sustained > 1 % = merge block). §8 hard-stop #4 rewritten as a two-part rule: aggregate `|sum - count| / count > 0.01` on the full run AND per-snapshot fire rate > 5 % across N = 20 short-window snapshots (the "system not stable enough to measure" signal).

3. §11.3 NEW: replace the degenerate whole-window mass-ratio formulation (order-invariant under block permutation, trivial null — round-4 Codex finding) with a formally specified two-sample Fisher-Pitman block permutation test on PER-BLOCK statistics. Pre-registered per-block `T_D1,b`, `T_D2,b`, `T_D3,b` (order-sensitive because they vary across blocks), cell-level reduction `T_v = max_b T_v,b`, two-sample `Δ = mean(cell) − mean(baseline)` statistic, N_perm = 10 000, one-sided empirical p-value `p_v ≤ 0.05`. Cites Pesarin & Salmaso (2010) §3.2 and provides concrete `scipy.stats.permutation_test` invocation with `permutation_type='independent'`, `n_resamples=10_000`, `alternative='greater'`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
psaab added a commit that referenced this pull request on Apr 21, 2026
Extend the snapshot chain:
BindingLiveState.owner_profile_owner.tx_submit_latency_{hist,count,sum_ns}
→ BindingLiveSnapshot (fixed-size [u64; N] + scalars)
→ BindingStatus.tx_submit_latency_* (Vec<u64> + u64 + u64, serde)
→ BindingCountersSnapshot.tx_submit_latency_* (projected for step1)
Wire-compat:
- All three new fields carry `#[serde(default)]`. Pre-#812 producers
  that omit the fields deserialize as empty Vec / zero u64 — no
  Go-side parser break (plan §3.5 / §7 "pre-#812 wire-contract break"
  row).
- Pre-existing DRAIN_HIST_BUCKETS = 16 layout is reused; wire format
is byte-for-byte the same shape the drain histogram uses at
protocol.rs:881, so the step1-capture consumer slurps both
through the same code path.
Coordinator refresh (Coordinator::update_binding_statuses):
- Copy the fixed-cap array into the BindingStatus Vec in-place via
`resize + copy_from_slice` — the Vec buffer is reused across
~1s polls, no per-poll alloc. Unregistered bindings get the
Vec cleared to match the zero-init contract the other
owner-profile fields already follow.
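The refresh copy described above is the standard capacity-reusing Vec pattern; a hedged sketch, with the function name invented and the bucket count taken from the DRAIN_HIST_BUCKETS = 16 layout mentioned earlier:

```rust
// Reuse the Vec's buffer across ~1s polls: resize is a no-op once the
// Vec has reached 16 entries (capacity retained), and copy_from_slice
// is a plain memcpy of the fixed-cap array.
fn refresh_hist(dst: &mut Vec<u64>, src: &[u64; 16]) {
    dst.resize(src.len(), 0);
    dst.copy_from_slice(src);
}

// Unregistered bindings: clear to match the zero-init contract
// (length -> 0, buffer kept for the next registration).
fn clear_hist(dst: &mut Vec<u64>) {
    dst.clear();
}
```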
Compile-time guard (plan §3.5a HIGH #3 resolution):
- Named const item `_ASSERT_BINDING_COUNTERS_SNAPSHOT_IS_OWNED_STATIC_SEND`
pins `BindingCountersSnapshot: 'static + Send`. A future field
addition that introduces a borrowed reference (&'a T, Cow<'a,T>,
Rc<T>, ...) fails the build with a message pointing at this
specific struct, not at some downstream generic call.
- Complementary to JSON round-trip test (§6.1 test #4), which
  mechanically requires DeserializeOwned on the encode path.
Test fix: main.rs binding_counters_snapshot_serializes_with_expected_wire_keys
populates the new fields + asserts the three new wire keys appear
on the serialized JSON object.
Plan: docs/pr/812-tx-latency-histogram/plan.md §3.5 / §3.5a.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
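The wire-compat rule above can be sketched with serde directly. The field names mirror the commit; the struct shape is illustrative, and the snippet assumes the serde and serde_json crates:

```rust
use serde::{Deserialize, Serialize};

// #[serde(default)] makes each new field optional on the wire: a
// pre-#812 payload that omits them decodes to empty Vec / zero u64.
#[derive(Serialize, Deserialize, Default)]
struct BindingStatus {
    #[serde(default)]
    tx_submit_latency_hist: Vec<u64>,
    #[serde(default)]
    tx_submit_latency_count: u64,
    #[serde(default)]
    tx_submit_latency_sum_ns: u64,
}

fn main() {
    // Old producer: none of the three fields present — no parse error.
    let old: BindingStatus = serde_json::from_str("{}").unwrap();
    assert!(old.tx_submit_latency_hist.is_empty());
    assert_eq!(old.tx_submit_latency_count, 0);
}
```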
psaab added a commit that referenced this pull request on Apr 21, 2026
Add 9 Rust unit tests covering the plan §6.1 / §5 test surface:

Rust-side (umem.rs test module):

1. tx_latency_hist_bucket_boundary_roundtrip — plan §6.1 #1 / §5.1. Drive record_tx_completions_with_stamp with deterministic T0 and T0+K for K ∈ {500, 1500, 10_000, 100_000, 10_000_000}; assert exactly one count in the bucket predicted by bucket_index_for_ns and zero in every other bucket. Pairs with the existing bucket boundary test so a bucket-layout drift fails BOTH pins.
2. tx_latency_hist_partial_batch_stamping_only_touches_accepted_prefix — plan §6.1 #2 / §3.1 R1. `inserted ∈ {1, 2, 32, 64, 256}`; assert only the first `inserted` sidecar slots hold the stamp, tail stays at TX_SIDECAR_UNSTAMPED. The Codex HIGH #1 small-batch regime contract.
3. tx_latency_hist_retry_unwind_leaves_no_stamps — plan §6.1 #3. The `inserted == 0` retry-unwind path hands an empty iterator to stamp_submits; every sidecar slot stays at the sentinel.
4. tx_latency_hist_sentinel_skip_for_unstamped_completion — plan §6.1 #5 / §5.4 / Codex MED #5. A completion against an unstamped slot (never stamped, or canonicalized 0 from a VDSO-failure stamp) drops the sample — no bucket increment, no count/sum increment.
5. tx_latency_hist_single_thread_sum_equals_count — plan §6.1 #6 / §5.2. Drive N = 10_000 synthetic stamps + completions in one thread; assert `sum(hist) == count` exactly AND `sum_ns == sum(deltas)` exactly (single-thread invariant 6 per plan §4).
6. tx_latency_hist_cross_thread_snapshot_skew_within_bound — plan §6.1 #7 / §3.6 R2. Writer thread fires 1M+ fetch_adds; reader thread snapshots ≥ 5000 times. For each snapshot compute K_skew_i = ceil(λ_obs × W_read_i) + 2 using Instant::now() bracketing, assert |sum - count| ≤ K_skew_i. The derivation-driven cross-thread bound, not a free parameter.
7. tx_submit_ns_sidecar_single_writer_ownership_is_rc_not_arc — plan §6.1 #6. Compile-time pin via fn-pointer probes that `WorkerUmem::shares_allocation_with` (calls Rc::ptr_eq) and `WorkerUmem::allocation_ptr` (calls Rc::as_ptr) retain the Rc shape. A future Rc→Arc migration breaks both bodies — silent drift becomes a loud compile failure.
8. (extension of binding_live_snapshot_propagates_709_owner_profile_counters) — pin that the snapshot() path copies all three new atomics (hist + count + sum_ns) into BindingLiveSnapshot.

main.rs tests (wire-contract side):

9. tx_latency_hist_serialization_roundtrip — plan §6.1 #4. JSON encode/decode round-trip on a non-trivial histogram; assert field-equality including Vec<u64> contents.
10. tx_latency_hist_backward_compat_old_payload_deserializes — plan §6.1 #4 (second half) / §7 PR #804 wire-contract break row. Pre-#812 JSON payload (fields absent) deserializes with empty Vec and zero u64 via `#[serde(default)]`. THE backward-compat contract for step1-capture.
11. tx_latency_hist_binding_counters_snapshot_is_static_send — plan §6.1 #8. Runtime corollary of the named compile-time const-assert added in commit 5 — exercising the `'static + Send` bound at test time too. Defence-in-depth if the const-assert were ever silently removed.

Also extends existing binding_counters_snapshot_serializes_with_expected_wire_keys and binding_counters_snapshot_projects_ring_pressure_fields to include the three new wire keys so a rename or misattribution is caught.

Refactor: factored the per-offset reap fold out of reap_tx_completions into a new shared helper `record_tx_completions_with_stamp` so the unit pins exercise the PRODUCTION algorithm, not a test-only fake. The live reap_tx_completions now calls the helper; same semantics, same atomics, same order.

Go-side (pkg/dataplane/userspace/protocol.go): Add tx_submit_latency_hist / _count / _sum_ns fields on BindingStatus and BindingCountersSnapshot. omitempty keeps forward-compat — a pre-#812 helper that lacks these fields decodes into empty slice / zero u64 without Unmarshal erroring. Needed by the Go decoder that the daemon's status-poll path uses for step1-capture.

Plan: docs/pr/812-tx-latency-histogram/plan.md §6.1.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
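Test 4's contract in miniature: a completion against an unstamped slot must touch nothing. All names below are stand-ins for the real umem.rs items, and the bucketing shown is illustrative, not the real bucket_index_for_ns:

```rust
const TX_SIDECAR_UNSTAMPED: u64 = 0; // sentinel: slot never stamped

struct LatencyHist {
    buckets: [u64; 16],
    count: u64,
    sum_ns: u64,
}

fn record_completion(hist: &mut LatencyHist, stamp_ns: u64, now_ns: u64) {
    if stamp_ns == TX_SIDECAR_UNSTAMPED {
        return; // drop the sample: no bucket, no count, no sum
    }
    let delta = now_ns.saturating_sub(stamp_ns);
    // Illustrative log2-style bucketing; the real layout is the
    // DRAIN_HIST_BUCKETS = 16 shape from the plan.
    let idx = (64 - delta.leading_zeros() as usize).min(15);
    hist.buckets[idx] += 1;
    hist.count += 1;
    hist.sum_ns += delta;
}
```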
psaab added a commit that referenced this pull request on Apr 25, 2026
* #805 D3 RSS refresh on workers↔queues transition — plan

  Bug: when `system dataplane workers` is bumped from <queue_count to >=queue_count (e.g. 4→6 on 6-queue mlx5), applyRSSIndirectionOne early-returns because computeWeightVector gives nil for workers>=queues. The previously-written [1,1,1,1,0,0] table stays live; queues 4 and 5 starve.

  Fix: when nil-because-workers>=queues, inspect the live table; if not default round-robin, run `ethtool -X iface default`.

  Plan covers: root cause, detection helper, implementation (single-file rss_indirection.go change), tests (5 cases — 2 behavioral + 2 parser + 1 regression), acceptance (test cluster live deploy + Codex review + Copilot + test-failover), risks (rebalance race already mitigated by rssWriteMu, manual table writes intentionally clobbered by daemon ownership).

* #805 plan — address Codex round 1 (2 HIGH, 6 MED, 1 LOW)

  R1 verdict was PLAN-READY NO. Critical correctness fixes:

  HIGH:
  - 1: claimed `rssWriteMu`, `applyRSSIndirectionLocked`, epoch bumps that came from #840 (REVERTED). Master state has no locking, void return. §3 added documenting actual master contract. Fix is purely additive within the current void-returning shape.
  - 2: claimed bool return on `applyRSSIndirectionOne`. Master has void return. Plan now uses void `maybeRestoreDefault` helper consistent with master shape.

  MED:
  - 3: "uses every queue at least once" check accepts false-positive customs. §4 captured empirically on loss:xpf-userspace-fw0 — mlx5 default is exact round-robin `entry[i] = i mod queue_count`. §7 tightened to exact-match.
  - 4: empirical default-table capture committed via §4 + fixture in test #8.
  - 5: skip-reason was string-parsing. §5 uses structured `workers > 1 && workers >= queues` condition directly.
  - 6: probe-error behavior unspecified. §6 mirrors existing apply-path's ErrNotFound + generic err handling.
  - 7: missing tests for queue_count=0, workers>queues (not just ==). §9 tests #2 + #6 added.

  LOW:
  - 8: workers∈{0,1} regression guard added (§9 tests #4 + #5).
  - 9: `make test-failover` demoted from merge blocker to optional defense-in-depth.

  §13 review-response table maps each finding to resolution.

* #805 plan — address Codex round 2 (1 MED real bug, 2 LOW spec, 1 LOW wording, 1 LOW test)

  R2 verdict was PLAN-READY NO. Fixes:

  MED:
  - 2: indirectionTableIsDefault could return true vacuously on empty/unparseable ethtool -x output. Added sawAnyRow guard (mirrors existing indirectionTableMatches shape) + §9 test #11 to pin the empty-output case.
  - 3: runtime queue-count-only changes (ethtool -L without config commit) explicitly listed out-of-scope in §12. Operators changing ringparam are expected to follow with a config commit. Netlink-watch loop for ringparam events is separate scope.

  LOW:
  - 1: §3 wrongly cited d.applySem as the startup-vs-reconcile serializer. Real mechanism is lifecycle ordering — startup runs from enumerateAndRenameInterfaces before API/CLI is wired up. Wording corrected so reviewers don't waste time chasing a non-existent semaphore.
  - 4: §9 test #8 fixture location pinned: inline string literal, not a testdata/ file. Output is small enough to embed cleanly.
  - 5: §9 test #12 added: BootSequence_4then6_RestoresDefault covers the operator workflow end-to-end (workers=4 writes constrained, workers=6 sees stale and restores default).

  §13 R2 review-response table added (5 findings).

* #805 plan — address Codex round 3 (1 LOW edge case)

  R3 LOW: sawAnyRow flag was set when a row-index parsed but BEFORE any queue token was parsed. Input like "0:\n" (row index with empty value list) would set sawAnyRow=true, the inner for-loop over bytes.Fields would not execute, and the function would return true — vacuously-default, exactly the failure mode R2#2 was supposed to prevent.

  Fix: rename sawAnyRow → sawAnyEntry, set inside the inner field loop only AFTER a queue value has been parsed AND matched the expected round-robin position. Now requires at least one verified queue entry before returning true.

  §9 test #11 expanded to enumerate three failure cases:
  1. empty []byte{}
  2. non-row text (header only)
  3. row index with no queue tokens (the R3 case)

  §13 R3 review-response row added.

* #805 D3 RSS indirection refresh on workers↔queues transition

  When `system dataplane workers` is bumped from a value below the NIC RX-queue count to a value at-or-above it (e.g. 4 → 6 on a 6-queue mlx5), the previously-written `[1,1,1,1,0,0]` indirection table stays live. Queues 4 and 5 now host worker-bound AF_XDP sockets but receive no RSS traffic, starving those workers.

  Root cause: `applyRSSIndirectionOne` calls `computeWeightVector`, which returns nil when `workers >= queues`. The caller treats nil as "skip" and never touches the live table. Correct on a fresh install (kernel default round-robin = what we want) but wrong on the workers<queues → workers>=queues transition.

  Fix: on the nil-weights skip path, when `workers > 1 && workers >= queues > 0`, inspect the live indirection table; if it isn't the kernel's default round-robin shape, run `ethtool -X iface default` to restore it.

  `indirectionTableIsDefault` is a strict round-robin parser: `entry[i] == i mod queueCount` exactly. Verified empirically against the live mlx5 default table on `loss:xpf-userspace-fw0/ge-0-0-2`. Stricter than `indirectionTableMatches` (which would accept any custom table that uses every queue at least once); rejects empty, unparseable, value-less, and non-round-robin inputs via `sawAnyEntry` guard.

  `maybeRestoreDefault` mirrors the existing apply-path's ethtool-probe failure handling: ErrNotFound → log Warn, return; generic err → log Warn with output, return; never attempts a write on probe failure. Other skip paths (workers <= 0, workers == 1) leave the table alone — those bring-up / single-worker cases have no prior workers<queues state to undo.

  12 new tests in pkg/daemon/rss_indirection_test.go:
  - 7 behavioral (workers transition cases, stale-table preserved on workers∈{0,1}, queueCount=0 short-circuit, probe-failure skip)
  - 4 parser (round-robin true, concentrated false, every-queue-but-non-round-robin false, empty/unparseable/value-less false)
  - 1 end-to-end (BootSequence_4then6 covers operator workflow: step 1 writes [1,1,1,1,0,0]; step 2 sees stale and restores)

  Plan + 4 Codex review rounds documented at docs/pr/805-rss-refresh/plan.md (PLAN-READY YES at R4). Closes #805.

* #805 address Codex code-review LOWs (3 fixes: doc + 4 tests)

  Codex code review verdict: MERGE YES with 3 LOW (cosmetic doc + missing test coverage). All applied:

  LOW 1: Stale §M2 invariant comment in applyRSSIndirection's docblock said "workers >= queue_count is skipped" but the new maybeRestoreDefault path probes and may write. Updated to reflect the new behavior.

  LOW 2: Generic-error branches in maybeRestoreDefault (non-ErrNotFound -x probe failure; non-ErrNotFound -X default write failure) had no regression coverage.
  - Added argvErr scripted-error mechanism to fakeRSSExecutor: argv-prefix-keyed (out, err) tuples that take precedence over the existing per-iface ethtoolX/ethtoolC paths.
  - Test #13 RestoreEthtoolXProbeGenericError_LogAndSkip: -x returns generic error → no -X default invocation.
  - Test #14 RestoreEthtoolXDefaultGenericError_LoggedAndSwallowed: -X default returns generic error → both probe and write recorded, function returns normally without propagating.

  LOW 3: New branch's interaction with non-mlx5 / empty-driver guard untested at applyRSSIndirectionOne entry-point level.
  - Test #15 NonMlxDriver_WorkersEqualsQueues_NotTouched: virtio_net driver with workers=6,queues=6 stale table → zero ethtool calls (driver guard short-circuits before restore path).
  - Test #16 EmptyDriver_WorkersEqualsQueues_NotTouched: empty driver string (sysfs unreadable) → same expectation.

  All 16 #805 tests pass plus 880+ existing tests clean.

* #805 address Copilot inline (queues>0 vs queues>1 mismatch)

  Comment in applyRSSIndirectionOne said "workers >= queues > 1" but the guard was `workers > 1 && workers >= queues && queues > 0`. On a single-queue NIC (queues=1) the guard would let maybeRestoreDefault run, even though there's no possible concentration to undo: queueCount=1 means entry[i] = i mod 1 = 0 for every i, which is both the default and the only possible layout.

  Tighten the guard to `queues > 1` to match the comment's intent and skip the unnecessary probe on single-queue NICs. Test #17 QueueCountOne_NoOp pins the new behavior with workers=6, queues=1, stale-looking table — expects zero ethtool calls.

* #805 address Copilot inline review (5 fixes — clarity + determinism)

  1. rss_indirection.go:242 — log "rss weight reshaping skipped" instead of "rss indirection skipped" so it's clear that the weight-vector write is what's skipped, not the whole RSS path. When workers>=queues>1 with a stale table, the next line(s) show the actual restore action.
  2. rss_indirection_test.go:47 — comment for argvErr said "checked AFTER" but code checks BEFORE. Aligned to actual precedence.
  3. rss_indirection_test.go:71 — Go map iteration is randomized, so two overlapping prefix keys would produce flaky results. Implemented longest-prefix-wins tie-breaking for determinism.
  4. rss_indirection_test.go:846 — test #17 appeared before #16. Reordered to be sequential.
  5. queues>1 vs queues>0 mismatch was already fixed in 744f8b2 (Copilot saw the change but flagged a stale comment-vs-code diff — addressed via the comment update accompanying #1).

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
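The actual parser is Go in pkg/daemon/rss_indirection.go; the same strict predicate, sketched here in Rust over already-parsed entries, shows both the exact-match rule and the guard against vacuous input (the real function also parses raw `ethtool -x` text, which is elided):

```rust
// Strict round-robin check: every entry must equal i mod queue_count,
// and at least one entry must exist (the sawAnyEntry idea), so empty
// or value-less input can never be "vacuously default".
fn indirection_table_is_default(entries: &[u32], queue_count: u32) -> bool {
    if queue_count == 0 || entries.is_empty() {
        return false;
    }
    entries
        .iter()
        .enumerate()
        .all(|(i, &q)| q == (i as u32) % queue_count)
}

fn main() {
    assert!(indirection_table_is_default(&[0, 1, 2, 3, 4, 5], 6));
    assert!(!indirection_table_is_default(&[1, 1, 1, 1, 0, 0], 6)); // stale 4-worker table
    assert!(!indirection_table_is_default(&[], 6)); // vacuous input rejected
}
```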
psaab added a commit that referenced this pull request on Apr 26, 2026
- mouse_latency_probe.py: drop unused `random` and `struct` imports (Copilot R2 #6).
- mouse_latency_probe.py: docstring now documents all three min-attempts floors (M=1: 500, 2≤M<10: 1000, M≥10: 5000) so the intermediate branch is no longer surprising (Copilot R2 #5).
- test-mouse-latency.sh: distinguish iperf3-settle pull failure from a real cwnd-not-settled, with `INVALID-iperf3-settle-pull-failed` attribution (Copilot R2 #1).
- test-mouse-latency.sh: explicit probe.json availability check — `INVALID-probe-pull-failed`/`probe-missing`/`probe-invalid-json` instead of letting the matrix wrapper silently lose attribution (Copilot R2 #2).
- test-mouse-latency.sh: journalctl stderr now lands in `${OUT_DIR}/jc-stderr-${FW}.txt` instead of a per-rep `/tmp/` file on the caller, so it's captured alongside the other rep artifacts (Copilot R2 #3).
- test-mouse-latency-matrix.sh: preflight JSON parsing wrapped in try/except + .get() with explicit `FAIL invalid-json=...` / `FAIL missing-field=...` lines, so a partial write or schema drift produces an actionable diagnosis instead of an aborting stack trace (Copilot R2 #4).

70 tests still green; both shell scripts pass `bash -n`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
psaab added a commit that referenced this pull request on Apr 26, 2026
- test-mouse-latency.sh stale-artifact cleanup list now includes jc-stderr-*.txt so a rerun into an existing rep dir doesn't inherit prior journalctl stderr (Copilot R2 #3 introduced this artifact path; Codex R9 noticed it was missing from the wipe).
- test-mouse-latency-matrix.sh preflight JSON parser now coerces `validity` to {} when it isn't a dict, so schema-drifted JSON produces a clean FAIL line instead of stack-tracing on `validity.get(...)` (Copilot R2 #4 partial residual).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
psaab added a commit that referenced this pull request on Apr 26, 2026
- mouse_latency_probe.py: per-attempt connect/recv timeouts now bounded by remaining time to the deadline, so a probe near the deadline can no longer overrun by up to 10s and consume the iperf3 slack budget. Probe runtime now consistently ≤ duration + small constant. (Copilot R3 #1)
- mouse_latency_probe.py: payload generated once per coroutine instead of per-attempt — uniqueness is irrelevant for the byte-stateless echo path; the per-attempt os.urandom was avoidable CPU on the source. (Copilot R3 #2)
- test-mouse-latency-matrix.sh: WALL_CAP_HIT flag so the outer cell loop also stops once the wall budget is exceeded — previously `return 0` only exited run_cell and the outer loop would iterate the remaining cells, repeatedly tripping the cap. (Copilot R3 #3)
- test-mouse-latency-matrix.sh: rep_is_valid wraps the JSON parse + key access in try/except so a malformed probe.json or schema drift produces a clean "invalid" verdict instead of a Python stack trace in the matrix log. (Copilot R3 #4)
- test-mouse-latency.sh: apply-cos-config now targets the current RG0 primary (resolved via current_primary) instead of hard-coded fw0. If the cluster is already failed over before the rep starts, hardcoding fw0 would attempt to apply the fixture on the secondary. (Copilot R3 #5)
- test-mouse-latency.sh: screen-pre snapshot now captured from the same node the post-snapshot will follow (current_primary at pre-time, same node at post-time). The pre/post diff is now consistent regardless of which node was primary at rep start. Records the captured node in screen-pre-fw.txt for analyst cross-reference. (Copilot R3 #6)

70 tests still green; both shell scripts pass `bash -n`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
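The R3 #1 fix is a one-liner in spirit: clamp each attempt's timeout to the time left before the run deadline. The probe itself is Python asyncio; a Rust sketch of the same clamp (names hypothetical):

```rust
use std::time::{Duration, Instant};

// Per-attempt timeout = min(per-attempt cap, time remaining to the
// run deadline). Returns None once the deadline has passed, so a late
// attempt cannot overrun the window by the full cap.
fn attempt_timeout(deadline: Instant, per_attempt_cap: Duration) -> Option<Duration> {
    let remaining = deadline.checked_duration_since(Instant::now())?;
    Some(remaining.min(per_attempt_cap))
}
```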
psaab added a commit that referenced this pull request on Apr 26, 2026
…w-up (#906)

* #905 mouse-latency tail: plan v4 (7 Codex rounds, PLAN-NEEDS-MINOR)

  Plan for measurement-only PR characterizing mouse-latency tail under elephant load using the operator-provisioned echo server on 172.16.80.200:7. 12-cell run matrix (N elephants × M mice), PASS gate at p99(N=128,M=10) ≤ 2× idle baseline.

  Seven Codex hostile review rounds; disposition tables in §11:
  - R1: 17 findings, PLAN-NEEDS-MAJOR
  - R2: 7 findings, PLAN-NEEDS-MAJOR
  - R3: 6 findings, PLAN-NEEDS-MAJOR
  - R4: 8 findings, PLAN-NEEDS-MAJOR
  - R5: 5 findings, PLAN-NEEDS-MINOR
  - R6: 3 findings, PLAN-NEEDS-MINOR
  - R7: 2 findings, PLAN-NEEDS-MINOR

  Stopping plan iteration at R7 per the #838 lesson: late rounds surfacing minor spec-clarity issues with no structural defects indicate it's time to let code review take over. Implementation follows.

* #905 mouse-latency tail: harness implementation

  Adds the measurement harness specified in docs/pr/905-mouse-latency/plan.md. 12 new files in test/incus/:

  Python parsers + tests (no third-party deps; stdlib statistics + asyncio only):
  - cluster_status_parse.py — `cli show chassis cluster status` → list of (rg_id, node_id, state) triples (incl. secondary-hold)
  - iperf3_sum_parse.py — iperf3 text-mode `[SUM]` row → bps
  - mouse_latency_probe.py — closed-loop M-coroutine TCP probe driver with histogram + statistics.quantiles percentiles + per-coroutine RPS distribution + validity gate
  - mouse_latency_aggregate.py — per-cell median-by-p99 + decision verdict; honours orchestrator INVALID-* markers
  - mouse_latency_orchestrate.py — cwnd-settle / collapse / RG-flap helper subcommands

  Shell orchestrator + matrix wrapper:
  - test-mouse-latency.sh — one rep with the full validity pipeline from plan §4.5 (CoS preflight, mpstat over the probe window only, dual-node journalctl HA-transition diff, 1Hz RG state polling, post-snapshot follows current primary)
  - test-mouse-latency-matrix.sh — 12-cell matrix with preflight gating + 10/15-rep accounting per plan §4.7

  68 unit tests pass via `python3 -m unittest discover -s test/incus/ -p '*_test.py'`. Five rounds of Codex hostile code review; each round's findings addressed inline. Smoke run on the loss cluster has not yet been executed.

* #905 mouse-latency: address Copilot R1 review (8 inline comments)

  - plan.md: filename references hyphenated → underscored to match the Python module names actually shipped (mouse_latency_probe.py etc.).
  - plan.md: §7.1 merge gate dropped "findings.md exists" (the harness PR merges first; findings come in a follow-up after the matrix runs).
  - plan.md: §4.5 step 2 now states the implemented mpstat lifecycle (`mpstat 1 <duration>` over the probe window only, not duration+30).
  - plan.md: §4.5 step 3 clarifies that polling the primary alone is sufficient because `Manager.FormatStatus` returns both nodes' rows in one query.
  - mouse_latency_probe.py: compute_validity uses the previously-unused `completed` parameter to surface count-bookkeeping inconsistencies. Two new unit tests.
  - mouse_latency_probe.py + mouse_latency_aggregate.py: per-coroutine RPS distribution renamed to attempts_per_second_per_coroutine_* (the previous achieved_rps_per_coroutine_* conflated workload-offered attempts with completion-rate, which uses different totals).
  - test-mouse-latency-matrix.sh: top-of-file comment now describes the actual rep-accounting behavior ("up to 15 reps as needed for 10 valid"), not a >30%-trigger conditional that was simplified out.
  - test-mouse-latency-matrix.sh: preflight comment explains the 60s duration choice (M=1 floor of 500 attempts).

  70 tests green via the discover command from §7.1.

* #905 mouse-latency: address Codex R6 (1 HIGH + 1 MED)

  R6 found a stale-artifact contamination bug. Two cells with the same rep index (e.g. cell_N0_M10/rep_00 and cell_N128_M10/rep_00) both wrote to /tmp/probe-rep_00.json on the source container; a failed pull on the second cell silently inherited the first cell's data. Includes the cell name in REP_TAG so remote temp paths are unique, plus an explicit `rm -f` of the temp files at rep start as defense in depth.

  R6 MED: a failover/failback hidden inside missed 1Hz RG poll samples could pass as stable. Added a final RG-state snapshot at end of rep and an explicit initial-vs-final triple-set comparison (in addition to the per-sample drift check on the poll log).

  Both shell scripts pass `bash -n`; 70 unit tests still green.

* #905 mouse-latency: address Codex R7 HIGH (local stale-artifact)

  R6 HIGH (remote temp-file collision) and R7 fix moved both REP_TAG and remote-side rm to be cell-aware. Codex R7 found the same class of bug on the LOCAL side: if OUT_DIR is reused (e.g. a rerun into an existing rep dir) and the probe run fails before overwriting probe.json, the previous run's stale data would silently survive and be picked up by `rep_is_valid` in the matrix wrapper. Add an explicit rm of all per-rep local artifacts (probe.json, iperf3.txt, mpstat.txt, RG state files, INVALID-* markers, etc.) at rep start.

* #905 mouse-latency: address Copilot R2 review (6 inline comments)

  - mouse_latency_probe.py: drop unused `random` and `struct` imports (Copilot R2 #6).
  - mouse_latency_probe.py: docstring now documents all three min-attempts floors (M=1: 500, 2≤M<10: 1000, M≥10: 5000) so the intermediate branch is no longer surprising (Copilot R2 #5).
  - test-mouse-latency.sh: distinguish iperf3-settle pull failure from a real cwnd-not-settled, with `INVALID-iperf3-settle-pull-failed` attribution (Copilot R2 #1).
  - test-mouse-latency.sh: explicit probe.json availability check — `INVALID-probe-pull-failed`/`probe-missing`/`probe-invalid-json` instead of letting the matrix wrapper silently lose attribution (Copilot R2 #2).
  - test-mouse-latency.sh: journalctl stderr now lands in `${OUT_DIR}/jc-stderr-${FW}.txt` instead of a per-rep `/tmp/` file on the caller, so it's captured alongside the other rep artifacts (Copilot R2 #3).
  - test-mouse-latency-matrix.sh: preflight JSON parsing wrapped in try/except + .get() with explicit `FAIL invalid-json=...` / `FAIL missing-field=...` lines, so a partial write or schema drift produces an actionable diagnosis instead of an aborting stack trace (Copilot R2 #4).

  70 tests still green; both shell scripts pass `bash -n`.

* #905 mouse-latency: address Codex R9 (2 small fixes)

  - test-mouse-latency.sh stale-artifact cleanup list now includes jc-stderr-*.txt so a rerun into an existing rep dir doesn't inherit prior journalctl stderr (Copilot R2 #3 introduced this artifact path; Codex R9 noticed it was missing from the wipe).
  - test-mouse-latency-matrix.sh preflight JSON parser now coerces `validity` to {} when it isn't a dict, so schema-drifted JSON produces a clean FAIL line instead of stack-tracing on `validity.get(...)` (Copilot R2 #4 partial residual).

* #905 mouse-latency: address Copilot R3 review (6 inline comments)

  - mouse_latency_probe.py: per-attempt connect/recv timeouts now bounded by remaining time to the deadline, so a probe near the deadline can no longer overrun by up to 10s and consume the iperf3 slack budget. Probe runtime now consistently ≤ duration + small constant. (Copilot R3 #1)
  - mouse_latency_probe.py: payload generated once per coroutine instead of per-attempt — uniqueness is irrelevant for the byte-stateless echo path; the per-attempt os.urandom was avoidable CPU on the source. (Copilot R3 #2)
  - test-mouse-latency-matrix.sh: WALL_CAP_HIT flag so the outer cell loop also stops once the wall budget is exceeded — previously `return 0` only exited run_cell and the outer loop would iterate the remaining cells, repeatedly tripping the cap. (Copilot R3 #3)
  - test-mouse-latency-matrix.sh: rep_is_valid wraps the JSON parse + key access in try/except so a malformed probe.json or schema drift produces a clean "invalid" verdict instead of a Python stack trace in the matrix log. (Copilot R3 #4)
  - test-mouse-latency.sh: apply-cos-config now targets the current RG0 primary (resolved via current_primary) instead of hard-coded fw0. If the cluster is already failed over before the rep starts, hardcoding fw0 would attempt to apply the fixture on the secondary. (Copilot R3 #5)
  - test-mouse-latency.sh: screen-pre snapshot now captured from the same node the post-snapshot will follow (current_primary at pre-time, same node at post-time). The pre/post diff is now consistent regardless of which node was primary at rep start. Records the captured node in screen-pre-fw.txt for analyst cross-reference. (Copilot R3 #6)

  70 tests still green; both shell scripts pass `bash -n`.

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
psaab added a commit that referenced this pull request on Apr 26, 2026
1. mouse_latency_orchestrate.py: rg-state-flapped sample sort now keys on int(ts) rather than the string ts to avoid lexicographic mis-ordering if timestamp digit width ever varies (Copilot R3 #1).
2. mouse_latency_aggregate.py: select_valid_reps now requires p99 to be numerically present, not just validity.ok=True. Median selection no longer coerces missing p99 to 0 (which could mis-pick a malformed rep as the median and skew the verdict). (Copilot R3 #2)
3. test-mouse-latency.sh: rename "SYN-cookie counter snapshot" to "Screen flood-counter snapshot" — the underlying CLI command `show security screen statistics zone wan` reports SCREEN flood-event counters, NOT SYN-cookie-specific counters. The manifest's `screen_engaged` field still measures the right signal; just the comments were misleading. (Copilot R3 #3, #4)

Edits are mid-flight-safe for the running iperf-c shared matrix (naming and parsing fixes; no behavior change to active reps). 72 tests still green.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
psaab added a commit that referenced this pull request on May 6, 2026
…1202)

* #1187 plan v1: extend BatchCounters to cover disposition + screen_drops + tx_errors

* #1187 plan v2: scope narrowed (drop tx_errors, defer 3 small counters), reframe primary value as DDoS resilience per Gemini Pro 3.1, fix forward_candidate_packets leak per Codex finding #5, route through TelemetryContext per Codex finding #4

* #1187 plan v3: screen_drops MANDATORY batching (no fallback) per both reviewers; preserve exception_packets current semantics; correct cache-line wording

* #1187 plan v4: round-3 fixes — TelemetryContext (not BatchCounters), delete stale open-questions, correct cache-line wording, reframe forward_candidate_packets at disposition.rs:161 as cold (not hot-path) leak per Codex finding

* #1187 plan v5: name record_forwarding_disposition_hot/cold explicitly (Codex round-4 #1) + scrub stale 'leak' / 'the leaked one' framing (Codex round-4 #2) + soften cache-line wording to acknowledge unspecified layout (Codex round-4 #3)

* #1187 plan v6: scrub final hot-path leak residue (§4.4 heading + framing); reword section 4.4 from Option A/B to 'out of scope'; update §10 question 3 to reflect softened cache-line wording

* #1187 plan v7: scrub stale Option A at lines 95+194; correct §4.4 framing — coordinator/inject.rs is RPC-driven cold path, not 1Hz status poll

* #1187 Phase 1: extend BatchCounters with disposition + screen_drops counters

  Per docs/pr/1187-telemetry-double-buffer/plan.md v7 (PLAN-READY). Codex rounds 1-7 + Gemini Pro 3.1 rounds 1-2.

  Adds 8 new u64 fields to BatchCounters (afxdp/mod.rs:308-336): screen_drops, policy_denied_packets, route_miss_packets, neighbor_miss_packets, discard_route_packets, next_table_packets, local_delivery_packets, exception_packets. Extends flush() with 8 new conditional flush blocks.

  Introduces DispositionCounters<'a> enum in afxdp/disposition.rs with Hot(&mut BatchCounters) / Cold(&BindingLiveState) variants and per-counter bump methods. Refactors record_disposition and record_forwarding_disposition to take DispositionCounters instead of `live: &BindingLiveState`. Hot callers in poll_descriptor.rs:2071 and 2096 pass DispositionCounters::Hot(telemetry.counters); cold callers in coordinator/inject.rs:43 and 59 pass DispositionCounters::Cold(live).

  stage_screen_check at poll_stages.rs:227 now takes counters: &mut BatchCounters in place of binding_live (only used for screen_drops). Drop verdict bumps counters.screen_drops += 1 through the batch — DDoS resilience: SYN flood is the primary trigger, unbatched atomics here would cause MESI ping-pong with the coordinator's status reads under volumetric attack.

  Three counters explicitly DEFERRED to a follow-up PR (plan §2): config_gen_mismatches, fib_gen_mismatches, unsupported_packets — they fire only during reconcile windows AND are gated by record_exception()'s mutex/timestamp/string work, so the per-atomic saving is dominated by other costs. They keep direct fetch_add to live.

  forward_candidate_packets at disposition.rs:161 (now in cold-path ForwardCandidate arm) is reachable only from the coordinator/inject.rs RPC path, not the worker per-packet hot path. Hot path already routes through telemetry.counters.forward_candidate_packets at poll_descriptor.rs:213,1706 (existing). PR keeps the cold-path direct write — bumping a counter on the coordinator-inject path that fires at RPC rate has no MESI-thrash concern.

  tx_errors batching DROPPED from this PR (Codex round-1 finding): real fan-out spans 6+ sites including umem/mod.rs, tx/drain.rs, tx/transmit.rs, cos/queue_service/mod.rs, worker/cos.rs. Also, BatchCounters is created AFTER the first drain_pending_tx() call in worker/lifecycle.rs:59, so a TX-only error during early drain would be silently lost. Needs separate design.

  cargo build --release: clean
  cargo test --release: 974/974 pass

* #1187 review: add #[inline] to DispositionCounters::bump_*, privatize 8 new BatchCounters fields, fix plan version comment

  Agent-Logs-Url: https://github.com/psaab/xpf/sessions/312e8185-0492-4934-ae75-9da522b56a35
  Co-authored-by: psaab <196946+psaab@users.noreply.github.com>

* #1187 review: fix stale BatchCounters comment, update plan §4.2+§4.5 to match impl, add 3 hot/cold regression tests

  Agent-Logs-Url: https://github.com/psaab/xpf/sessions/232c0153-2abb-447d-86ad-668df14a1d44
  Co-authored-by: psaab <196946+psaab@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: psaab <196946+psaab@users.noreply.github.com>
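A condensed sketch of the Hot/Cold split from the Phase 1 commit above. Types are stand-ins (the real enum lives in afxdp/disposition.rs with eight counters, not one):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

#[derive(Default)]
struct BatchCounters { screen_drops: u64 }          // flushed once per batch

#[derive(Default)]
struct BindingLiveState { screen_drops: AtomicU64 } // shared with coordinator

enum DispositionCounters<'a> {
    Hot(&'a mut BatchCounters),
    Cold(&'a BindingLiveState),
}

impl DispositionCounters<'_> {
    #[inline]
    fn bump_screen_drops(&mut self) {
        match self {
            // Per-packet hot path: plain add, no atomic, no MESI
            // ping-pong with the coordinator's status reads.
            DispositionCounters::Hot(c) => c.screen_drops += 1,
            // Coordinator-inject path: fires at RPC rate, so a
            // direct atomic write is fine.
            DispositionCounters::Cold(l) => {
                l.screen_drops.fetch_add(1, Ordering::Relaxed);
            }
        }
    }
}
```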
psaab added a commit that referenced this pull request on May 7, 2026
…-KILL

Codex (task-moupsqds) PLAN-NEEDS-MAJOR — not KILL on race-safety; v1 issues fixable. v2 applies all 5 fixes:

#1. AFD accounting moves from pop-time to settle-time. Pop-time fetch_add over-states served work because TX-failure restoration pushes items back. Same pitfall that killed #1215 v1. v2 hooks into the apply_cos_send_result family in tx_completion.rs and increments per-bucket served_bytes only for the inserted prefix. No fetch_sub anywhere; rebuild bucket hash from the settle-time item (cheap, batch-amortized).

#2. Published summary carries window DELTA, not cumulative. v1 compared cumulative aggregate_served to one-window fair_share -> would mark/drop active buckets forever. v2 adds window_served_delta + fair_share_window_bytes (matching units); the snapshot owner stores prior_aggregate in its own state and publishes the delta.

#3. Batch-hoist ArcSwap.load AND ECN-write costs. v1 budget claim 3-5ns for ArcSwap.load was wrong; real cost ~30ns per load (arc-swap v1.8.2 docs/source). v2 hoists ArcSwap.load to drain-batch entry (~30ns / TX_BATCH_SIZE=64 = ~0.5ns amortized per pop). Per-pop cost: 2 array reads + 1 conditional ECN write = ~10ns marked, ~4ns unmarked.

#4. ECN write fast-path helper. v1 said ~5ns; ecn.rs:98 actually parses IPv4 + updates the checksum. v2 adds cos_item_mark_ce_fast that uses cached parsed metadata from xdp_main / forward.rs instead of re-parsing. ~10ns marked write vs ~50ns full parse.

#5. Smoothed CSFQ drop probability (vs Gemini's drop-cliff critique). v1 binary 'if delta > 2*fair { drop }' creates a bursty-drop pathology. v2 uses the CSFQ formula: p_drop = max(0, (delta - fair) / delta) scaled to 0-255 for cheap random-byte comparison. Matches the classic CSFQ paper.

Gemini (task-mouptde4) PLAN-KILL: cache-line bouncing concern still on the table for ArcSwap.load (Gemini disputes the hazard-pointer fast path is contention-free under 12M loads/sec). v2 mitigates by batch-hoisting (#3) but Gemini may still PLAN-KILL on its position that 'PR #1217 should be the steady-state product.' That's acceptable: v2 keeps the research-only framing; #1211 does NOT block #1217 shipping.

ECN deployment reality (Gemini B): also acceptable — even if <50% of TCP receivers honor CE marks, the CSFQ probabilistic-drop fallback (#5) still applies pressure to ECN-stripped flows.

Submit-failure pitfall (Gemini E + Codex #1) FIXED by #1 above — not a residual concern in v2.
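Fix #5's formula in sketch form, with the 0-255 scaling that makes the per-packet decision one byte compare (function and parameter names are hypothetical):

```rust
// Smoothed CSFQ: p_drop = max(0, (delta - fair) / delta), scaled to
// 0..=255 so the hot path compares against one cheap random byte
// instead of evaluating a float per packet.
fn csfq_drop_threshold(window_served_delta: u64, fair_share_window_bytes: u64) -> u8 {
    if window_served_delta <= fair_share_window_bytes {
        return 0; // at or under fair share: never drop
    }
    let p = (window_served_delta - fair_share_window_bytes) as f64
        / window_served_delta as f64; // in (0, 1); no div-by-zero: delta > fair >= 0
    (p * 255.0) as u8
}

fn should_drop(threshold: u8, random_byte: u8) -> bool {
    random_byte < threshold
}
```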
psaab added a commit that referenced this pull request on May 7, 2026
…ILL #1)

Round-2 verdicts:
- Codex (task-mousvzy4): PLAN-NEEDS-MAJOR — 4 majors (canonical text contradicts v2 fixes; bpf_random_u32 won't compile; ECN metadata premise wrong; settle hook location wrong)
- Gemini (task-mousw4d4): PLAN-KILL — RFC 3168 violation (binary 100% ECN mark vs probabilistic drop) is a fairness cliff that starves ECN flows; document inconsistency between v2 fixes section and §3.2 hot path; complexity-vs-value (with #1217 as alternative)

v3 fixes:

#6 RFC 3168-compliant ECN marking (Gemini #1 FATAL): v2 had `if delta > fair && ECT { mark }` — unconditional 100% mark. RFC 3168 mandates the same probability for mark as for drop. v3 unifies under the smoothed CSFQ probability: the same p applies to ECT (marks) and non-ECT (drops).

#7 Per-worker non-atomic PRNG (Codex #2): bpf_random_u32() doesn't exist in Rust userspace. v3 uses splitmix64 with per-worker state seeded from the existing cos_flow_hash_seed_from_os path; no rand crate dep needed.

#8 ECN-mark moves to pre-submit (Codex #3 + design reorg): TxRequest doesn't carry L3 offsets, so the 'fast-path' claim was wrong. v3 reorganizes: per-pop reads delta/fair (batch-hoisted), pre-submit in tx/dispatch.rs decides mark/drop/pass with cached L3 offsets, settle increments served_bytes for the inserted prefix (Fix #1 unchanged).

#9 Correct settle hook reference (Codex #4): settle_exact_*_scratch_submission_flow_fair (queue_service/mod.rs ~740), not apply_cos_send_result.

§3.2 reconciliation (both reviewers' document inconsistency finding): Added an explicit NOTE pointing readers to the v2 fixes section as the canonical design, and listing each §3.2 element that's superseded. The original §3.2 text is retained for traceability of design evolution but marked clearly as not-the-implementation.

Gemini's PLAN-KILL on complexity-vs-value remains a legitimate position. v3 doesn't dispute that view; it ensures the technical flaws Gemini and Codex flagged are fixed so the value/complexity debate is the only remaining issue. Per user directive: '#1211 stays research-only; PR #1217 ships even if AFD dies'. PLAN-KILL on complexity-vs-value grounds is acceptable.
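Fix #7's PRNG is small enough to quote in full: splitmix64 with one u64 of per-worker state, no atomics, no rand dependency. The struct name below is hypothetical; the real seeding goes through the cos_flow_hash_seed_from_os path:

```rust
struct WorkerRng {
    state: u64, // per-worker, single-owner: no atomics needed
}

impl WorkerRng {
    fn new(seed: u64) -> Self {
        Self { state: seed }
    }

    // Standard splitmix64 step (constants from the reference design).
    fn next_u64(&mut self) -> u64 {
        self.state = self.state.wrapping_add(0x9E37_79B9_7F4A_7C15);
        let mut z = self.state;
        z = (z ^ (z >> 30)).wrapping_mul(0xBF58_476D_1CE4_E5B9);
        z = (z ^ (z >> 27)).wrapping_mul(0x94D0_49BB_1331_11EB);
        z ^ (z >> 31)
    }

    // One cheap random byte for the CSFQ threshold compare.
    fn next_byte(&mut self) -> u8 {
        (self.next_u64() >> 56) as u8
    }
}
```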
psaab added a commit that referenced this pull request on May 7, 2026
…tals

Round-1 verdicts both PLAN-NEEDS-MAJOR with convergent fatals:
- Codex: DistinctFlowTracker locality wrong (Arc + &mut self conflict); 6% CPU on hot path; semantic drift (per-binding vs per-queue); Go can't compute observed_cov from current status
- Gemini: HashMap on hot path = FATAL; 1024 cap = FATAL high-fan-in false-pass; Rust+Go formula drift = FAIL

v2 fundamental rescope:

DROPPED FROM v1:
- Production Prometheus exports (xpf_fairness_*) -> deferred to #1220
- DistinctFlowTracker HashMap entirely
- Per-packet record() on flow-cache lookup
- Rust+Go re-implementation
- Continuous starved-flow counter

KEPT FROM v1:
- Rust pure-fn module (Codex independently verified Cstruct math correct)
- gRPC per-binding active_flow_count field

NEW IN v2:
- Active flow count comes from EXISTING flow_cache via O(N) scan at the worker's existing 100ms tick (Gemini's remediation). Cost ~10us/sec/worker on a tick path that has spare time. NO per-packet write.
- Per-queue active flows + ≥1% throughput qualification: computed AT THE HARNESS from iperf3 JSON output, not in the data plane (addresses Codex blocker #3).
- Single source of truth = Rust binary 'fairness-eval' that reads iperf3-out.json + binding-flows.jsonl and emits {Cstruct, observed_cov, regime, verdict}. No Go side at all in v2. (Addresses Gemini D / Codex #4)

Net effect: v2 ships ~150 LOC pure-fns + ~30 LOC flow_cache scan + ~20 LOC gRPC plumbing + ~120 LOC eval binary + ~80 LOC bash harness = total ~400 LOC vs v1's ~600 LOC, with a simpler hot path (no new HashMap), no Prometheus changes, no Rust+Go drift. Production observability (Prometheus + rolling windows) becomes issue #1220 to file after v2 lands.

Operational goal preserved: answer 'is 47% iperf3 CoV at structural ceiling or scheduler bug?' This is the deliverable test in §7.
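The CoV half of the kept pure-fn module follows directly from its definition (standard deviation over mean). A sketch with the signature from the v1 plan, assuming population variance (whether the real module uses population or sample variance is not stated in the commit):

```rust
// observed CoV over per-flow byte totals: stddev / mean.
// Returns 0.0 for empty input or an all-zero distribution.
fn compute_observed_cov(per_flow: &[u64]) -> f64 {
    let n = per_flow.len() as f64;
    if n == 0.0 {
        return 0.0;
    }
    let mean = per_flow.iter().sum::<u64>() as f64 / n;
    if mean == 0.0 {
        return 0.0;
    }
    let var = per_flow
        .iter()
        .map(|&x| {
            let d = x as f64 - mean;
            d * d
        })
        .sum::<f64>()
        / n;
    var.sqrt() / mean
}
```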
psaab added a commit that referenced this pull request on May 7, 2026
Codex round-3 (task-mov1wpqo) PLAN-NEEDS-MAJOR with 4 findings:
1. v3 harness sketch ran iperf3 -P N without --cport, contradicting
the per-stream --cport claim earlier
2. RSS tuple wrong for -R workload (data direction reversed; on-wire
RX tuple at measured-direction interface, not control tuple;
plus possible NAT translation)
3. bound_rx_queue field doesn't exist in BindingStatus (real fields:
QueueID/WorkerID/Interface/Ifindex). No public proto for binding
status — must specify control-socket vs Manager.Status() vs new gRPC
4. Stale impl details on last_used_epoch
v4 architectural simplification: drop RSS-join entirely. After
verifying the actual codebase via pkg/dataplane/userspace/protocol.go:615:
- Data plane has flow_cache + epoch counter (Fix #v3-1 unchanged)
- 100ms-tick scan publishes per-binding active_flow_count to a
Prometheus metric: xpf_userspace_binding_active_flow_count{binding_slot=N}
- Harness scrapes /metrics every 1s during the iperf3 run
- {a_i} = per-binding active_flow_count read directly. No RSS hash
computation, no indirection-table read, no direction-reversal
complications for -R, no kernel hash key extraction.
This addresses ALL 4 Codex findings:
- #1: dropped per-stream --cport. Single iperf3 -P N -J. Per-stream
join via start.connected[].socket <-> intervals[].streams[].socket
(the canonical iperf3 JSON join Codex suggested)
- #2: irrelevant; we don't compute RSS hashes
- #3: harness reads via existing /metrics endpoint. ONE new metric
added (snapshot, not rolling-window — much narrower than v1's
production exports deferred to #1220)
- #4: last_used_epoch impl details cleaned up
Trade-offs explicitly acknowledged + documented:
- Per-binding count includes all flows, not just per-CoS-queue.
Fine for iperf3-only test workloads. Production observability with
per-queue qualification deferred to #1220.
- No ≥1% throughput qualification at data plane. Conservative effect
(slightly higher Cstruct than strict contract reading; gate slightly
more forgiving). Starved-flow gate (Gate 1) catches starved flows
exactly via iperf3 per-stream throughputs at harness layer.
Net architecture: data plane tracks epoch (1-2 ns/lookup), publishes
binding count on tick (10 us/sec/worker), exposes via /metrics.
Harness uses 1 new Rust binary, 1 new Prometheus metric, 1 new
flow_cache field. Total ~300 LOC vs v1's ~600 LOC.
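The "canonical iperf3 JSON join" in finding #1 is simple enough to sketch. Assuming serde_json and a parsed iperf3 -J document, something like the following maps per-interval bytes onto streams by socket id; the field paths are iperf3's JSON schema, while the function name and bucket layout are illustrative:

```rust
use std::collections::HashMap;

// Sketch of the start.connected[].socket <-> intervals[].streams[].socket
// join. Assumes an iperf3 -P N -J document parsed into serde_json::Value;
// the per-stream bucket layout (bytes per interval) is illustrative.
fn per_stream_buckets(out: &serde_json::Value) -> HashMap<i64, Vec<u64>> {
    let mut buckets: HashMap<i64, Vec<u64>> = HashMap::new();
    // Seed from start.connected[] so streams that contributed zero bytes
    // still appear (with empty bucket vecs) and can be counted as starved.
    for c in out["start"]["connected"].as_array().into_iter().flatten() {
        if let Some(sock) = c["socket"].as_i64() {
            buckets.entry(sock).or_default();
        }
    }
    for interval in out["intervals"].as_array().into_iter().flatten() {
        for s in interval["streams"].as_array().into_iter().flatten() {
            if let (Some(sock), Some(bytes)) = (s["socket"].as_i64(), s["bytes"].as_u64()) {
                buckets.entry(sock).or_default().push(bytes);
            }
        }
    }
    buckets
}
```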
psaab
added a commit
that referenced
this pull request
May 7, 2026
…ount + Prometheus + fairness-eval (#1220)

* #1219 plan v1: fairness harness — Cstruct + distinct-flow-count + Prometheus

Implementation plan for the harness work mandated by the fairness-regimes contract (PR #1217 e1ec6b9). Three deliverables:

1. Rust pure-fn module (userspace-dp/src/fairness/mod.rs):
   - compute_cstruct(distribution: &[u32]) -> f64
   - compute_observed_cov(per_flow: &[u64]) -> f64
   - starved_flow_count(per_flow_buckets: &[Vec<u64>]) -> u32
   - is_saturated(buckets: &[u64], cap: u64) -> bool
   Unit-tested against the contract's 5-row worked-example table (0, 0.47, 0.20, 0.58, 0, 0).

2. Per-binding distinct-flow-count signal (DistinctFlowTracker with a bounded LRU set, single-writer record() on the flow-cache hit path, periodic age_out() on the worker tick, atomic snapshot read). MAX_TRACKED_FLOWS=1024, FLOW_AGE_OUT_NS=1s.

3. Prometheus exports (4 metrics: cstruct, observed_cov, starved_flows, saturated). Re-implemented in Go from the same spec with a shared test-vector CI gate to prevent Rust/Go drift.

Test harness fairness-harness.sh wraps iperf3 -> per-flow buckets -> Go helper -> contract gates -> PASS/FAIL. The smoke fixture (deterministic RSS-skew per Codex Path 0) is a separate follow-up issue; v1 uses iperf3's natural RSS.

7 open questions for adversarial review, including the distinct-flow cap, hash collisions, Rust/Go drift strategy, HA failover timing of the tracker, steady-state window detection, Prometheus cadence, and harness self-test scope. PLAN-KILL is acceptable if the harness logic complexity outweighs the operational value (the immediate goal: answer 'is 47% iperf3 CoV at structural ceiling or scheduler bug').

* v2: massively reduced scope addressing 4 Codex blockers + 3 Gemini fatals

Round-1 verdicts both PLAN-NEEDS-MAJOR with convergent fatals:
- Codex: DistinctFlowTracker locality wrong (Arc + &mut self conflict); 6% CPU on hot path; semantic drift (per-binding vs per-queue); Go can't compute observed_cov from current status
- Gemini: HashMap on hot path = FATAL; 1024 cap = FATAL high-fan-in false-pass; Rust+Go formula drift = FAIL

v2 fundamental rescope:

DROPPED FROM v1:
- Production Prometheus exports (xpf_fairness_*) -> deferred to #1220
- DistinctFlowTracker HashMap entirely
- Per-packet record() on flow-cache lookup
- Rust+Go re-implementation
- Continuous starved-flow counter

KEPT FROM v1:
- Rust pure-fn module (Codex independently verified Cstruct math correct)
- gRPC per-binding active_flow_count field

NEW IN v2:
- Active flow count comes from the EXISTING flow_cache via an O(N) scan at the worker's existing 100ms tick (Gemini's remediation). Cost ~10us/sec/worker on a tick path that has spare time. NO per-packet write.
- Per-queue active flows + ≥1% throughput qualification: computed AT THE HARNESS from iperf3 JSON output, not in the data plane (addresses Codex blocker #3).
- Single source of truth = Rust binary 'fairness-eval' that reads iperf3-out.json + binding-flows.jsonl and emits {Cstruct, observed_cov, regime, verdict}. No Go side at all in v2. (Addresses Gemini D / Codex #4.)

Net effect: v2 ships ~150 LOC pure-fns + ~30 LOC flow_cache scan + ~20 LOC gRPC plumbing + ~120 LOC eval binary + ~80 LOC bash harness = ~400 LOC total vs v1's ~600 LOC, with a simpler hot path (no new HashMap), no Prometheus changes, and no Rust+Go drift. Production observability (Prometheus + rolling windows) becomes issue #1220, to be filed after v2 lands.

Operational goal preserved: answer 'is 47% iperf3 CoV at structural ceiling or scheduler bug?' This is the deliverable test in §7.
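Minimal sketches of the two threshold functions from deliverable 1, reconstructed from the definitions spelled out in the part 1/N commit further down (below 1% of mean for the entire window; >= 95% of cap for >= 80% of buckets). Bodies are illustrative, not the shipped module:

```rust
/// A flow is starved iff it is below 1% of the mean per-cell throughput
/// for the ENTIRE window (transient dips don't count). Sketch only.
fn starved_flow_count(per_flow_buckets: &[Vec<u64>]) -> u32 {
    let cells: Vec<u64> = per_flow_buckets.iter().flatten().copied().collect();
    if cells.is_empty() {
        // Degenerate: no data at all, so every seeded flow counts as starved.
        return per_flow_buckets.len() as u32;
    }
    let mean = cells.iter().sum::<u64>() as f64 / cells.len() as f64;
    let floor = 0.01 * mean;
    per_flow_buckets
        .iter()
        .filter(|b| b.iter().all(|&v| (v as f64) < floor)) // empty vec => starved
        .count() as u32
}

/// Saturated iff >= 95% of cap for >= 80% of buckets. Sketch only.
fn is_saturated(aggregate_buckets: &[u64], cap: u64) -> bool {
    if aggregate_buckets.is_empty() {
        return false;
    }
    let hot = aggregate_buckets
        .iter()
        .filter(|&&b| b as f64 >= 0.95 * cap as f64)
        .count();
    hot as f64 >= 0.80 * aggregate_buckets.len() as f64
}
```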
* v3: add flow_cache epoch + RSS-join harness mapping (Codex round-2 fixes)

Round-2 verdicts:
- Gemini (task-mov1moka): PLAN-READY
- Codex (task-mov1mk84): PLAN-NEEDS-MAJOR with 2 NEW blockers

Codex new blocker #1: FlowCacheEntry has no last_used_ns field. v2's '100ms scan flow_cache for fresh entries' was unimplementable.

v3 Fix #1: add last_used_epoch: u16 to FlowCacheEntry. Owner-only single u16 store on every lookup() hit (~1-2 ns/lookup). Per-binding current_epoch atomic incremented at the worker's existing 100ms tick. Active-flow count = entries with an epoch within the last 10 ticks (1s window). Tick-side O(N) scan over 8192 entries every 100ms = 80K loads/sec/worker = ~10us/sec. Wraparound-safe via wrapping_sub on u16 (256 epochs = 25.6s headroom vs 1s window).

Codex new blocker #2: per-queue + ≥1% throughput qualification not joinable. iperf3 -P N uses a single destination port; RSS decides binding; no way to map 'stream X had ≥1% throughput' to 'stream X landed on binding Y'.

v3 Fix #2: distinct source ports per iperf3 stream (--cport CPORT_BASE+i). Each stream has a unique 5-tuple. The harness reads kernel RSS config (ethtool -x for the indirection table; ethtool -u for n-tuple rules; RSS key from ethtool) and computes the Toeplitz hash for each stream's 5-tuple to deterministically map streams to RX queues. RX queue → binding via the existing BindingStatus.bound_rx_queue field. ≥1% qualification applied at the harness using mapped streams. Self-test on harness startup: a single-stream prediction must match observed binding TX bytes; fail-fast on mismatch.

Hot-path cost: a single u16 store per lookup. An order of magnitude smaller than v1's HashMap-insert proposal (60 ms/sec/core). Production observability still deferred to #1220.

* v4: drop RSS-join entirely; harness reads {a_i} via Prometheus scrape

Codex round-3 (task-mov1wpqo) PLAN-NEEDS-MAJOR with 4 findings:
1. v3 harness sketch ran iperf3 -P N without --cport, contradicting the per-stream --cport claim earlier
2. RSS tuple wrong for the -R workload (data direction reversed; on-wire RX tuple at the measured-direction interface, not the control tuple; plus possible NAT translation)
3. bound_rx_queue field doesn't exist in BindingStatus (real fields: QueueID/WorkerID/Interface/Ifindex). No public proto for binding status — must specify control-socket vs Manager.Status() vs new gRPC
4. Stale impl details on last_used_epoch

v4 architectural simplification: drop RSS-join entirely. After verifying the actual codebase via pkg/dataplane/userspace/protocol.go:615:
- Data plane has flow_cache + epoch counter (Fix #v3-1 unchanged)
- The 100ms-tick scan publishes per-binding active_flow_count to a Prometheus metric: xpf_userspace_binding_active_flow_count{binding_slot=N}
- Harness scrapes /metrics every 1s during the iperf3 run
- {a_i} = per-binding active_flow_count read directly. No RSS hash computation, no indirection-table read, no direction-reversal complications for -R, no kernel hash key extraction.

This addresses ALL 4 Codex findings:
- #1: dropped per-stream --cport. Single iperf3 -P N -J. Per-stream join via start.connected[].socket <-> intervals[].streams[].socket (the canonical iperf3 JSON join Codex suggested)
- #2: irrelevant; we don't compute RSS hashes
- #3: harness reads via the existing /metrics endpoint. ONE new metric added (snapshot, not rolling-window — much narrower than v1's production exports deferred to #1220)
- #4: last_used_epoch impl details cleaned up

Trade-offs explicitly acknowledged + documented:
- Per-binding count includes all flows, not just per-CoS-queue.
Fine for iperf3-only test workloads. Production observability with per-queue qualification deferred to #1220.
- No ≥1% throughput qualification at the data plane. Conservative effect (slightly higher Cstruct than a strict contract reading; the gate is slightly more forgiving). The starved-flow gate (Gate 1) catches starved flows exactly via iperf3 per-stream throughputs at the harness layer.

Net architecture: the data plane tracks the epoch (1-2 ns/lookup), publishes the binding count on tick (10 us/sec/worker), and exposes it via /metrics. The harness uses 1 new Rust binary, 1 new Prometheus metric, 1 new flow_cache field. Total ~300 LOC vs v1's ~600 LOC.

* v5: address Codex round-4 (4 findings; task-mov2afuw)

1. Stale v3 RSS section deleted entirely (lines 340-403 in v4 still had the 6-step ethtool/Toeplitz/RSS-join design that v4 was supposed to replace). v5 leaves only a one-line '§3.4.1 (DELETED) v3 RSS-join steps removed' marker.
2. Metric pipeline path made explicit. v4 was internally inconsistent between 'harness polls gRPC' (line 175), 'Prometheus unchanged' (line 487), and 'harness scrapes new Prometheus' (round-3 resolution). v5 spells out the full plumbing: Rust BindingLiveState.active_flow_count (AtomicU32, owner 100ms tick) -> Rust snapshot BindingStatus.active_flow_count -> helper-process control-socket JSON -> Go BindingStatus ActiveFlowCount uint32 (pkg/dataplane/userspace/protocol.go:615) -> Prometheus emitter in pkg/api/metrics.go:424 with a new metrics_test.go case.
3. Harness fail-fast guard for {a_i} correctness. Without ≥1% throughput qualification at the data plane, over-counting could move Cstruct in either direction. v5 mandates: during steady-state, sum(per_binding_active_flow_count) must match the non-starved iperf3 stream count within max(2, 0.10 * N) tolerance, else the harness exits with a diagnostic.
4. fairness-eval binary location corrected: was userspace-dp/bin (doesn't exist); v5 uses the canonical Cargo src/bin/ subdirectory: userspace-dp/src/bin/fairness-eval.rs (auto-discovered, no [[bin]] entry needed in Cargo.toml).

Section 4 (Public API preservation) updated: Prometheus is no longer 'unchanged' — adds 1 new metric xpf_userspace_binding_active_flow_count{binding_slot=N} (snapshot, not rolling window; rolling-window production exports remain deferred to #1220).

* v6: purge gRPC/proto references; pin to helper-process JSON only

Codex round-5 (task-mov2ix1o): PLAN-NEEDS-MINOR with a single residual issue — §3.3 still framed active_flow_count as 'published to gRPC' with a proto/xpf/v1/dataplane.proto example block, even though v5's intended path is JSON over the helper-process control socket. §4 'gRPC: 1 new field on per-binding status JSON' had the same ambiguity.

v6 fixes: §3.3 'Atomic gauge published to gRPC' renamed to 'Atomic gauge published via existing JSON status path'. Replaced the proto example with a Rust BindingStatus snapshot struct that's serialized into the helper-process control-socket JSON and decoded by Go BindingStatus (pkg/dataplane/userspace/protocol.go:615). Explicit note: 'No public gRPC / proto change'. §4 Public API: 'gRPC: 1 new field on per-binding status JSON' replaced with 'gRPC / public proto: no change' + a separate bullet clarifying the new field is on the internal helper-process status JSON, backward-compatible via json:omitempty.

All other Codex round-4 findings remain addressed. v6 is a single-edit cleanup; expecting PLAN-READY.

* v7: purge 6 remaining stale gRPC refs (Codex round-6 minor)

Codex round-6 (task-mov2qck7) PLAN-NEEDS-MINOR.
v6 fixed §3.3 + §4 correctly, but 6 stale 'gRPC' references remained scattered:
- line 175-178: 'BindingStatus.active_flow_count copied to gRPC status' -> 'copied to the helper-process status JSON'
- line 240: snapshot doc-comment 'reads via the gRPC status surface' -> 'reads via the helper-process status JSON surface'
- line 524: test plan 'verify gRPC field is populated' -> 'verify the new active_flow_count field is populated in the helper-process status JSON (and surfaced through Manager.Status())'
- line 563: open question 'gRPC poll cadence' -> 'Prometheus scrape cadence'
- line 578: HA-failover detection 'gRPC reports role_change_at' -> 'status JSON reports role_change_at'
- line 583: verdict request 'gRPC field' -> 'helper-process status JSON field'

Remaining 'gRPC' mentions are all legitimate: the header + the §3.3 + §4 'No public gRPC / proto change' negations + line 23/48 'not via a new gRPC RPC'. The plan should now be internally consistent on the helper-process JSON path. Expecting PLAN-READY.

* #1219 part 1/N: Rust fairness pure-fns module

Pure-fn module for the fairness contract gate computations per docs/fairness-regimes.md (merged via PR #1217 e1ec6b9). Functions:
- compute_cstruct(distribution: &[u32]) -> f64 — population CoV across the per-flow share multiset {1/a_i : repeated a_i times for active workers}. Idle workers excluded.
- compute_observed_cov(per_flow_throughputs: &[u64]) -> f64
- starved_flow_count(per_flow_buckets: &[Vec<u64>]) -> u32 — counts flows below 1% of mean per-cell throughput for the ENTIRE window (transient dips don't count).
- is_saturated(aggregate_buckets: &[u64], cap: u64) -> bool — >= 95% of cap for >= 80% of buckets.

Tests pin all 5 worked-example Cstruct values from the contract:
- {2,2,2,2,2,2}: 0.00
- {1,1,2,2,3,3}: 0.47
- {0,2,2,2,3,3}: 0.20
- {1,3,0,0,0,0}: 0.58
- {6,0,0,0,0,6}: 0.00

22/22 tests pass. cargo build clean. Zero hot-path impact (the module is invoked only by the harness binary, not by the data plane).

Next parts: flow_cache last_used_epoch counter (part 2); BindingLiveState + 100ms tick scan (part 3); status JSON extension + Go decoder (part 4); Prometheus metric (part 5); fairness-eval binary (part 6); harness script (part 7). Plan PLAN-READY through 7 review rounds; Codex final round-7 (task-mov2x81x) no findings.

* #1219 part 2/N: flow_cache last_used_epoch + tick_advance_epoch + count_active_flows

Per plan §3.2 (Fix #v3-1): add a per-binding epoch counter to FlowCache and a u16 last_used_epoch on FlowCacheEntry written on every successful lookup() hit. Per-hit cost: a single u16 store on a struct already in cache from the key check. ~1-2 ns/lookup vs the v1 HashMap-insert proposal at ~30 ns.

The periodic count_active_flows scan iterates entries in O(N) = 8192 cells, comparing wrapping_sub(current_epoch, last_used_epoch) < 10 (1s window at 100ms tick cadence). Wraparound-safe via u16 wrapping arithmetic; window << 25.6s wraparound period. The worker tick will call tick_advance_epoch() at 100ms cadence (part 3 will wire this into the per-binding live state + status snapshot path).

Tests added (5 new):
- count_active_flows_starts_at_zero
- count_active_flows_excludes_never_touched_entries
- count_active_flows_marks_recently_hit
- count_active_flows_ages_out_after_window
- count_active_flows_handles_epoch_wraparound

cargo test 999/999 + 32 flow_cache (was 27). cargo build clean. No existing tests break.
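Putting parts 2 and 3 together, the epoch mechanism fits in a few lines. A sketch with illustrative types: the real FlowCache is a fixed-size table with a per-binding atomic epoch, and the real code additionally excludes never-touched entries via an epoch-0 sentinel (per the later review fixes):

```rust
const ACTIVE_WINDOW_EPOCHS: u16 = 10; // ~650 ms at the ~65 ms tick

// Illustrative entry/cache shapes; field and function names follow the
// commit messages, everything else is a sketch.
struct FlowCacheEntry {
    key: u64,             // stand-in for the real flow key
    last_used_epoch: u16, // written on every successful lookup() hit
}

struct FlowCache {
    entries: Vec<FlowCacheEntry>,
    current_epoch: u16,
}

impl FlowCache {
    /// Hot path: a single u16 store on a struct already in cache
    /// from the key check (~1-2 ns per lookup hit).
    fn touch(&mut self, idx: usize) {
        self.entries[idx].last_used_epoch = self.current_epoch;
    }

    /// Called from the worker's periodic tick.
    fn tick_advance_epoch(&mut self) {
        self.current_epoch = self.current_epoch.wrapping_add(1);
    }

    /// O(N) scan; wraparound-safe because the window is far smaller than
    /// the u16 wrap period. (The shipped code also skips never-touched
    /// entries via an epoch-0 sentinel.)
    fn count_active_flows(&self) -> u32 {
        self.entries
            .iter()
            .filter(|e| self.current_epoch.wrapping_sub(e.last_used_epoch) < ACTIVE_WINDOW_EPOCHS)
            .count() as u32
    }
}
```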
* #1219 part 3/N: wire active_flow_count into BindingLiveState + ~65ms tick

Hooks the flow_cache epoch counter (part 2) into the worker's existing periodic tick at update_binding_debug_state (umem/mod.rs:986). BindingLiveState gains:
- active_flow_count: AtomicU32 (initialized to 0)

update_binding_debug_state now:
- calls binding.flow.flow_cache.tick_advance_epoch() per tick
- stores binding.flow.flow_cache.count_active_flows() into binding.live.active_flow_count via a Relaxed atomic store

Tick cadence note: the actual cadence is ~65ms (driven by the 0xFFFF debug-state counter at ~1M calls/sec / 65536 = ~15 Hz). The plan's 100ms target is approximate; with ACTIVE_WINDOW_EPOCHS=10 the active-flow window is ~650ms, comfortably under the 1s target and well clear of u16 wraparound (~16.6s).

cargo test 1004/1004 (was 999). cargo build clean. Hot path unaffected — no per-packet write change beyond part 2's u16 store.

* #1219 part 4/N: BindingStatus snapshot + JSON pipeline

Wires the BindingLiveState.active_flow_count atomic (part 3) through the snapshot path so the helper-process control-socket JSON carries the field for the Go decoder + Prometheus emitter (part 5). Changes:
- BindingDebugSnapshot (worker/mod.rs:1778): add active_flow_count: u32
- BindingStatus (protocol.rs:1149): add #[serde] active_flow_count: u32
- BindingCountersSnapshot (protocol.rs:1502): add active_flow_count: u32 + propagate via b.active_flow_count in the From impl (protocol.rs:1656)
- umem/mod.rs:672 snapshot construction: active_flow_count.load(Relaxed)
- main_tests.rs: 3 BindingCountersSnapshot test sites add the field

cargo test 1004/1004 + 8/8 drift CI clean.

* #1219 part 5/N: Go BindingStatus.ActiveFlowCount + Prometheus emitter

Plumbs the helper's active_flow_count snapshot through the Go manager and emits it as a new Prometheus gauge. Changes:
- pkg/dataplane/userspace/protocol.go: add ActiveFlowCount uint32 on BindingStatus + on the lean per-binding snapshot struct (`active_flow_count` JSON, omitempty for forward-compat)
- pkg/api/metrics.go:
  - new xpfCollector.bindingActiveFlowCount Desc
  - new prometheus.NewDesc("xpf_userspace_binding_active_flow_count", labels: binding_slot, queue_id, worker_id, iface)
  - emitBindingActiveFlowCount() walks status.Bindings, emits one GaugeValue per binding
  - hooked into collectUserspaceStatus()
  - registered in Describe()

go build ./... clean. go test ./pkg/api/ ./pkg/dataplane/ clean.

Net Rust → Go → Prometheus pipeline now complete: flow_cache.last_used_epoch (per-hit u16 store) → BindingLiveState.active_flow_count (~65ms tick scan) → BindingDebugSnapshot → helper-process status JSON → Go BindingStatus.ActiveFlowCount → Prometheus xpf_userspace_binding_active_flow_count gauge.

Per the plan: NO public gRPC change. The new field is on the internal helper-process control-socket JSON only.

* #1219 part 6/N: fairness-eval Rust binary

Test harness binary at userspace-dp/src/bin/fairness-eval.rs (Cargo's auto-discovered src/bin/ subdirectory; no [[bin]] entry needed).
Reads:
- iperf3 -P N -J --forceflush JSON output
- binding-flows.tsv (timestamp, binding_slot, count) — produced by the harness scraping /metrics for xpf_userspace_binding_active_flow_count

Computes per docs/fairness-regimes.md:
- {a_i}: median per-binding count over the steady-state window
- Cstruct: structural CoV ceiling from {a_i}
- observed_CoV: sample CoV across per-stream window-mean throughputs
- starved_flow_count: streams below 1% of mean for the entire window
- saturated: aggregate vs (Na/Nv) × shaper_rate
- gap = observed - Cstruct; PASS iff gap <= epsilon (0.05) AND starved_flow_count == 0 AND the harness fail-fast guard holds

Harness fail-fast guard (Codex round-4 finding #3): sum(a_i) ≈ non-starved iperf stream count within max(2, 10% × N). Disagreement flags background-flow pollution / unexpected RSS behavior; the harness should not report a verdict from inconsistent inputs.

Pure-fns shared with the main binary via a #[path = '../fairness.rs'] mod so the math is single-source-of-truth (Rust only; no Go re-impl).

Output: JSON verdict {distribution_a_i, n_active, cstruct, observed_cov, gap, epsilon, saturated, aggregate_mbps, starved_flow_count, a_i_sum_check_ok, verdict, failure_reasons}. Exit code 0 on PASS, 1 on FAIL, 2 on parse/IO error.

Smoke-tested with synthetic data: a 2-stream {1Gbps, 500Mbps} test correctly reports CoV=0.33, gap=0.33, FAIL with a Gate 2 diagnostic.

Build: cargo build --release --bin fairness-eval clean

* #1219 part 7/N: fairness-harness.sh test driver script

Bash wrapper at test/incus/fairness-harness.sh that:
1. Runs iperf3 -P N -J --forceflush against $TARGET on $PORT for $T s
2. Concurrently scrapes /metrics every 1s, extracting xpf_userspace_binding_active_flow_count{binding_slot=N} into a TSV
3. Calls the fairness-eval Rust binary on both inputs
4. Returns the binary's exit code (0 PASS, 1 FAIL, 2 parse error)

Defaults match the iperf-c P=12 -R workload that produced today's 47% per-flow CoV measurement — the immediate operational target. Configurable via positional args + env (METRICS_URL, N_WORKERS, SHAPER_RATE_BPS, WARMUP, FINAL_BURST, FAIRNESS_EVAL path).

awk extracts the metric using the standard Prometheus exposition format (single value at end of line; binding_slot label parsed via regex). Per-second cadence over the steady-state window gives 60+ samples per binding for a stable {a_i} median. bash -n syntax-clean.

Parts 1-7 SHIPPED. Implementation pipeline complete:
- Rust pure-fns (part 1) ✓
- flow_cache last_used_epoch (part 2) ✓
- BindingLiveState + ~65ms tick (part 3) ✓
- Snapshot → JSON pipeline (part 4) ✓
- Go BindingStatus + Prometheus emitter (part 5) ✓
- fairness-eval binary (part 6) ✓
- harness script (part 7) ✓

Next: smoke matrix on the loss userspace cluster + open PR + dispatch triple-review of code.

* #1219 part 4 fix: bridge active_flow_count through coordinator::refresh_bindings

Smoke-test discovery: even though parts 1-7 shipped, the metric xpf_userspace_binding_active_flow_count stayed at 0 during iperf3 load. Tracing the snapshot pipeline revealed that coordinator::refresh_bindings (afxdp/coordinator/mod.rs:1130) is where BindingLiveSnapshot fields are copied into BindingStatus (the wire-visible JSON struct). I missed adding active_flow_count to that copy block.
Without this bridge, the BindingLiveState.active_flow_count atomic was correctly being updated by the worker tick (visible via the snapshot fn at umem/mod.rs:669), but the value was never landing on the BindingStatus that gets serialized to the helper-process status JSON (and thus never reached the Go side / Prometheus emitter).

One-line fix: copy snap.active_flow_count -> binding.active_flow_count alongside the existing flow_cache_collision_evictions copy. cargo test 1004/1004 + 22 fairness + 8 drift CI clean.

Pipeline now end-to-end:
flow_cache.last_used_epoch (per-hit u16 store)
-> count_active_flows on 100ms tick
-> BindingLiveState.active_flow_count atomic
-> BindingLiveSnapshot.active_flow_count
-> coordinator::refresh_bindings copies it into BindingStatus ← THIS WAS MISSING
-> helper-process status JSON
-> Go BindingStatus.ActiveFlowCount
-> Prometheus xpf_userspace_binding_active_flow_count gauge

Re-deploying to verify the metric now reports non-zero during load.

* #1219 part 7 fix: bidirectional flow_cache entries + portable awk

End-to-end smoke discoveries:
1. mawk on the cluster doesn't support gawk's 3-arg match($0, re, arr). The harness scrape_metrics function fell back to no extraction, producing an empty binding-flows.tsv. Replaced with a portable sed -nE pattern that works on both gawk and mawk.
2. Each TCP flow creates flow_cache entries on BOTH ingress AND egress bindings (forward flow + reverse flow). With 12 streams, sum(a_i) is ~24, not ~12. Updated the harness fail-fast guard:
   expected_sum = 2 × n_non_starved
   tolerance = max(2, 0.10 × expected_sum)

OPERATIONAL ANSWER (the deliverable that motivated the harness):

End-to-end run on the loss userspace cluster, 12-stream iperf3 P=12 t=30 -R against 172.16.80.200:5201:
- distribution_a_i = [2,7,0,2,2,0,0,2,0,3,0,0,1,2,0,4,1,0]
- n_active = 10 / 18
- cstruct = 59.9% (structural CoV ceiling for this RSS distribution)
- observed_cov = 51.4%
- gap = -8.45pp (observed BELOW ceiling)
- saturated = true (22.3 Gbps)
- starved_flow_count = 0
- verdict = PASS

This is the answer to the user's mandate question: 'is 47% iperf3 CoV at structural ceiling or scheduler bug?'. The 47-51% per-flow CoV is 8.45pp BELOW the structural ceiling for the observed RSS distribution. The per-worker scheduler is doing better than the RSS placement allows. There is no scheduler bug. The remaining variance is pure RSS placement skew.

The fairness contract gates that PR #1217 codified are now empirically verifiable on the test bench. The user's 'flow evenness' work is either:
- Done (the gate passes; the current implementation is at or below the structural ceiling)
- Or requires moving to the killed architectural paths (#937 ingress XDP_REDIRECT — kernel-blocked; #1211 AFD ECN overlay — research-only)

Per the fairness-regimes contract, this measurement result LANDS the contract: gate satisfied for this run.
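The updated guard from item 2 in code form, a sketch of the arithmetic only (the round-4 review further down later makes the 2x multiplier conditional on iface filtering):

```rust
/// Harness fail-fast guard: sum(a_i) must match the iperf3 stream count
/// within tolerance, else the verdict is withheld. The 2x multiplier
/// reflects that each TCP flow creates flow_cache entries on BOTH the
/// ingress and egress bindings. Sketch of the guard math, not the
/// shipped helper.
fn sum_guard_ok(a_i: &[u32], n_non_starved: u32) -> bool {
    let sum: u32 = a_i.iter().sum();
    let expected = 2 * n_non_starved;
    let tolerance = std::cmp::max(2, (0.10 * expected as f64).round() as u32);
    sum.abs_diff(expected) <= tolerance
}
```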
* Remove fairness-eval binary committed by accident; built artifact, not source

* review fixes: epoch-0 sentinel, hot-path double-borrow, slow-path inline, parse_args panic

Agent-Logs-Url: https://github.com/psaab/xpf/sessions/0bcf2f3a-3a88-4a11-a42c-0ec3d48666b0
Co-authored-by: psaab <196946+psaab@users.noreply.github.com>

* review nits: clarify BUG message + assertion text

Agent-Logs-Url: https://github.com/psaab/xpf/sessions/0bcf2f3a-3a88-4a11-a42c-0ec3d48666b0
Co-authored-by: psaab <196946+psaab@users.noreply.github.com>

* test: add TestEmitBindingActiveFlowCount_LabelsAndValue (Codex round-1 finding #2)

Pins the Prometheus emitter's wire shape per the Codex round-1 review: 3-binding fixture → 3 metrics; slot=0 ActiveFlowCount=5 → gauge value 5 with correct labels {binding_slot=0, queue_id=0, worker_id=0, iface=ge-0-0-1}. Mirrors the existing emitWorkerRuntime test pattern.

Atop Copilot SWE Agent's autonomous review-fix commits (87973ff + 1526ff5), which addressed: epoch-0 sentinel, hot-path double-borrow, slow-path inline, parse_args panic, BUG message clarity. My local 161a3435 'round-1 review fixes' commit got a 2-way conflict on flow_cache.rs + fairness-eval.rs vs the agent's edits — the agent's versions were equivalent on the substantive fixes (epoch-0, panic exits, etc) plus added the double-borrow + slow-path-inline fixes I missed. Reset local to the agent's tip and re-applied only the Go metric test (which the agent didn't add).

* #1219 round-2 review fixes: re-apply start.connected[] seeding + plan.md cleanup

Codex round-2 (task-mov55tld) MERGE-NEEDS-MAJOR. Round-1 finding #1 (start.connected[] seeding) regressed when I reset to Copilot agent's 87973ff during the rebase conflict — the agent's version of fairness-eval.rs didn't include the seeding, and my reapplied seeding got dropped. Re-applied here:
1. Seed per_stream_buckets from start.connected[].socket so streams that contributed zero bytes for the entire steady-state window appear in the map with an empty bucket vec → starved_flow_count correctly increments.
2. n_iperf_streams derives from start.connected[].len() when non-empty, falling back to test_start.num_streams. This makes the harness fail-fast guard actually catch missing-from-intervals streams.
3. plan.md cleanup (Codex round-2 finding #2):
   - 8192 entries → 4096 entries (matches the FLOW_CACHE_SIZE constant)
   - 100ms tick → ~65ms gate (call-rate dependent)
   - 256 epochs → fixed already in code; the plan was already correct

Test verification:
- 2-stream JSON with socket 5 active + socket 7 silent → starved_flow_count: 1 (was 0 before the fix), verdict: FAIL with a Gate 1 diagnostic. Codex's round-2 reproducer scenario.
- All 22 fairness pure-fn tests + 22 flow_cache tests + 8 drift CI pass.
- cargo build --release clean (1 warning: dead-code on local_port, marked #[allow] earlier).

* #1219 round-3 review fixes: per-WORKER {a_i} aggregation + Copilot test additions

Codex round-3 (task-mov5ervt) MERGE-NEEDS-MAJOR + Gemini round-1 (task-mov4lrtd) MERGE-NEEDS-MAJOR convergent fatal: the harness was building {a_i} by binding_slot across ALL interfaces (3 ifaces × 6 queues = 18 bindings), producing a fictitious 18-element distribution instead of the contract's per-worker count for the test's data-direction interface. v3 fix:
1. fairness-harness.sh scrape_metrics now writes 6 columns (timestamp, binding_slot, queue_id, worker_id, iface, count) so fairness-eval can filter and aggregate properly.
2. fairness-eval --iface arg added; a backward-compat parser handles both the new 6-column TSV and the legacy 3-column.
3.
fairness-eval {a_i} computation rewritten:
   - filter rows to --iface (the test's data direction)
   - aggregate by (timestamp, worker_id), summing counts
   - take the median per worker over the steady-state window
   - distribution_a_i is now per-worker, not per-binding-slot
4. Default IFACE=ge-0-0-2 in the harness script (the loss cluster's data direction; configurable via env).

Plus Copilot's 5 comments addressed:
5. Test pinning the active_flow_count projection through BindingCountersSnapshot::From: assert snap.active_flow_count == 71 added to the existing projection test.
6. The wire-key assertion now includes 'active_flow_count' in the binding_counters_snapshot_serializes_with_expected_wire_keys test; fixture value changed from 0 → 31 so omitempty doesn't skip the key.
7. local_port: u32 marked #[allow(dead_code)] with rationale.

Test verification:
- 6-col TSV with a balanced 6-worker distribution → {a_i} = [2,2,2,2,2,2], Cstruct=0.00 (correct).
- cargo test 1006/1006 + 22 fairness + 8 drift CI clean.
- cargo build --release clean (1 expected dead-code warning on the dropped local_port path).

* review fixes (round 3): fix 100ms doc comments, u16 wraparound math, read_to_string panic, mod fairness cfg(test)

Agent-Logs-Url: https://github.com/psaab/xpf/sessions/a70b49a8-2bcc-408d-9839-19d6b7ba6ff2
Co-authored-by: psaab <196946+psaab@users.noreply.github.com>

* Codex round-4 (task-mov5x7j7) MERGE-NEEDS-MAJOR — caught a real bug in the round-3 fix:

1. **Major: fail-fast guard was still 2× n_streams.** With the --iface filter introduced in round-3, sum(a_i) is one direction only. P=12 should expect ~12, not ~24 — round-3's untouched guard would reject correct single-iface data. Fixed: select the multiplier based on whether iface filtering is in effect.
2. **Minor: silent zeros on the legacy-input + --iface combo.** A legacy 3-col TSV (iface="") combined with a fresh harness's --iface flag silently dropped every row → empty {a_i} → bogus PASS. Fixed: detect "filter set but no iface label in any row", warn loudly, and treat --iface as unset for that input.
3. **Minor: stale 100ms / 8192-entry prose.** The flow_cache.rs:359 doc-comment and plan.md:291 still claimed the old 100ms tick and ~8192-entry cap — now ~65ms (umem 0xFFFF gate) and 4096 throughout.
4. **Real test gap (Codex check #6):** added two unit tests in src/bin/fairness-eval.rs:
   - six_col_multi_iface_per_worker_aggregation: 2 ts × 3 ifaces × 6 workers; verify the --iface filter collapses to per-worker sums on the filtered iface only and noise on other ifaces is dropped.
   - three_col_legacy_parses_with_empty_iface: pin that the legacy parser produces iface="" and worker_id=binding_slot.

Test verification:
- cargo test --release --bin fairness-eval: 24/24 pass (was 22 fairness pure-fns; +2 new tsv_tests).
- Full cargo test --release: 1006/1006 + 24 + 8 drift CI clean.

* #1219 round-5 review fixes: extract aggregate_per_worker helper + clean up stale prose

Codex round-5 (task-mov69swk) MERGE-NEEDS-MINOR. Two findings:
1. **Tests didn't cover the production fix.** Round-4's added tests only exercised the parser/filter shape — not iface_filter_active, per_ts_worker grouping, or direction_multiplier. A reverted `direction_multiplier = 2` would not have failed any test. Round-5 fix: extract two helpers that ARE the production fix:
   - `aggregate_per_worker(rows, iface_arg, n_workers, warmup, final)` → returns AggregateResult { distribution_a_i, iface_filter_active }.
   - `direction_multiplier(iface_filter_active: bool) -> u32`.
   main() now calls both.
The new aggregation_tests module exercises:
- filter_iface_and_groups_by_worker (noise on another iface MUST NOT contaminate the filtered distribution)
- sums_multiple_queues_per_worker (the BTreeMap<(ts, worker_id), u32> accumulator must SUM across queues, not replace)
- legacy_3col_disables_filter (the legacy parser produces iface="" even with --iface set; the filter must collapse to inactive)
- missing_workers_default_to_zero
- median_smooths_jitter (a single outlier on either side is filtered)
- direction_multiplier_iface_filter_active_is_one (this would have caught the round-3 bug Codex round-4 found)
- direction_multiplier_no_iface_filter_is_two (legacy bidirectional fall-through preserved)

These tests directly fail under any of:
- per-binding grouping in place of per-worker;
- a reverted direction_multiplier;
- the filter applied to legacy iface="" rows;
- sum-then-median replaced by a raw count.

2. **Stale 100ms / 8192 prose** at plan.md:34, 169, 235, 270, 279, 364, umem/mod.rs:235, flow_cache.rs:497. All mechanically replaced with ~65 ms (umem 0xFFFF gate) and 4096 entries / ~650 ms window throughout. A final grep confirms zero residual matches.

Test verification:
- cargo test --release --bin fairness-eval: 31/31 pass (was 24; +7 new)
- Full suite: 1006/1006 + 31 + 8 drift CI clean.
- Final grep -rn '100 ms|100ms|~8192|8192 entries|256 epochs|25\.6 s|256 ticks' on docs/pr/1219-fairness-harness/, userspace-dp/src/afxdp/, and userspace-dp/src/bin/fairness-eval.rs returns zero matches.

* #1219 round-6 review fixes: metrics.go HELP text + plan.md "1s" → "~650 ms"

Codex round-6 (task-mov6lp4k) MERGE-NEEDS-MINOR. Two findings, both addressed:
1. **pkg/api/metrics.go:362 stale Prometheus HELP text.** The xpf_userspace_binding_active_flow_count gauge was documented as "active in the last ~1s", but ACTIVE_WINDOW_EPOCHS=10 × the ~65ms tick = ~650 ms. User-visible metric documentation; fixed to match the implementation.
2. **plan.md:95 / :281 / :366 still said "1 second window" / "1s" / "1 second"** — three more sites missed in the round-5 sweep. Now consistent with the rest of the doc at ~650 ms.

Note: plan.md:36 ("Harness scrapes /metrics every 1 second"), :466 ("last 1s" final-burst), and umem/mod.rs:384/:409 ("~1s debug-report tick") are about UNRELATED 1-second intervals (Prometheus scrape cadence, iperf3 final-burst window, and the existing debug-report tick) — they are correct as-is and were intentionally left untouched.

Gemini round-3 (task-mov6mk28) verdict: **MERGE-READY**. Helper extraction parity, test coverage of subtle bugs (worker_id != binding_slot, multi-queue summation), direction_multiplier code paths, Cstruct math regression, plan doc consistency, worker_id stability, hot-path impact, architectural completeness, JSON wire schema, and arg validation all PASS. "You are good to merge."

Test verification:
- go test ./pkg/api/...: pass.
- cargo test --release: 1006 + 31 + 8 unchanged.
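A sketch of the two extracted helpers, matching the behaviors the aggregation_tests pin. The row type is illustrative and the warmup/final-burst trimming is elided; the BTreeMap accumulator shape comes from the test description above:

```rust
use std::collections::BTreeMap;

/// 1 when rows are filtered to one data-direction iface, 2 for the legacy
/// bidirectional fall-through. (The helper whose silent reversion the
/// round-5 tests are designed to catch.)
fn direction_multiplier(iface_filter_active: bool) -> u32 {
    if iface_filter_active { 1 } else { 2 }
}

/// Illustrative row shape; the shipped parser also carries binding_slot.
struct Row { ts: u64, worker_id: u32, iface: String, count: u32 }

/// Sum counts across queues per (timestamp, worker_id), then take the
/// per-worker median over the window. Sketch only.
fn aggregate_per_worker(rows: &[Row], iface: &str, n_workers: u32) -> Vec<u32> {
    let mut per_ts_worker: BTreeMap<(u64, u32), u32> = BTreeMap::new();
    for r in rows.iter().filter(|r| r.iface == iface) {
        // SUM across queues, never replace.
        *per_ts_worker.entry((r.ts, r.worker_id)).or_insert(0) += r.count;
    }
    (0..n_workers)
        .map(|w| {
            let mut samples: Vec<u32> = per_ts_worker
                .iter()
                .filter(|((_, wid), _)| *wid == w)
                .map(|(_, &c)| c)
                .collect();
            if samples.is_empty() { return 0; } // missing workers default to zero
            samples.sort_unstable();
            samples[samples.len() / 2] // median smooths scrape jitter
        })
        .collect()
}
```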
* review fixes (round 5): min steady-state guard, worker_id validation, metrics 1s→650ms, omitempty comment, plan.md stale text, harness label-order comment

Agent-Logs-Url: https://github.com/psaab/xpf/sessions/93fd3daa-a3d2-472e-843c-8413351b77bd
Co-authored-by: psaab <196946+psaab@users.noreply.github.com>

* review fixes (round 5+): min steady-state guard, worker_id Result, omitempty comment, harness label-order comment, plan.md stale text

Agent-Logs-Url: https://github.com/psaab/xpf/sessions/93fd3daa-a3d2-472e-843c-8413351b77bd
Co-authored-by: psaab <196946+psaab@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: psaab <196946+psaab@users.noreply.github.com>
psaab
added a commit
that referenced
this pull request
May 7, 2026
Per the v6 plan (docs/pr/547-rss-skew-fixture/plan.md). Both reviewers
PLAN-READY (Gemini round-2 explicit, Codex's round-5+6 only flagged
text residue).
userspace-dp/tests/fairness_eval_blackbox.rs (~600 LOC):
- Hand-rolled TempGuard with Drop cleanup (no tempfile crate dep);
uses SystemTime::now().as_nanos() + process::id() + per-test prefix
for collision-resistant naming, matching fairness-eval.rs::tsv_tests
pattern.
- synth_iperf3_json: minimum schema fairness-eval consumes — connected
sockets + per-interval streams.
- synth_tsv_6col: 6-column TSV matching what fairness-harness.sh emits.
- run_eval: env!('CARGO_BIN_EXE_fairness-eval') subprocess invocation.
- Black-box discipline (v6 §3.5): NO compute_cstruct call, NO #[path]
shortcut, NO internal-helper imports. Asserts only exit code,
verdict string, failure_reasons class membership, distribution_a_i
values, and broad numeric relationships (gap = observed_cov - cstruct).
7 tests:
- pass_case_skew_with_iface_noise: 6-stream balanced PASS, iface
filter drops ge-0-0-3 noise, distribution_a_i = [1;6].
- gate1_starved_flow_fails: 1 starved flow (0 bps), Gate 1 in
failure_reasons, exit 1.
- gate2_cov_gap_exceeds_epsilon_fails: heavy per-stream skew, no
starved flow, gap > 0.05, Gate 2 in failure_reasons, exit 1.
- guard_sum_mismatch_fails: TSV reports 100 flows on worker 0, 0
elsewhere; sum guard fires, a_i_sum_check_ok=false, exit 1.
- guard_empty_tsv_fails_via_sum_guard: header-only TSV, sum=0,
Guard FAIL — observed_cov=cstruct=0 so Gate 2 must NOT fire
(per Codex round-3 finding #4).
- exit2_out_of_range_worker_id: worker_id=99 vs n_workers=6, exit 2,
no verdict JSON, stderr explains the error.
- verdict_emits_required_keys: schema test pinning the 10 required
JSON keys (distribution_a_i, n_active, cstruct, observed_cov, gap,
saturated, a_i_sum_check_ok, starved_flow_count, verdict,
failure_reasons). A rename of any required key fails this loudly;
additive changes to the diagnostic 6 don't break the fixture.
Each fixture uses ≥60 second steady-state windows (fairness-eval has
a hardcoded MIN_STEADY_STATE_SECS=60 guard).
Test verification:
- cargo test --release --test fairness_eval_blackbox: 7/7 pass.
- 5x flake check: 5/5 clean.
- Full cargo test --release: 1006 + 32 + 8 + 7 all pass; no
regressions.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
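The hand-rolled guard is small; here is a sketch of the shape described above (collision-resistant name from nanos + pid + a per-test prefix, best-effort cleanup on Drop, which also runs on panic). Illustrative, not the shipped test helper:

```rust
use std::path::PathBuf;
use std::time::{SystemTime, UNIX_EPOCH};

/// Temp-dir guard: no tempfile crate dep; the per-test prefix keeps
/// intra-process names distinct despite the shared PID/clock.
struct TempGuard { dir: PathBuf }

impl TempGuard {
    fn new(prefix: &str) -> std::io::Result<Self> {
        let nanos = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_nanos();
        let dir = std::env::temp_dir().join(format!("{prefix}-{}-{nanos}", std::process::id()));
        std::fs::create_dir_all(&dir)?;
        Ok(Self { dir })
    }
}

impl Drop for TempGuard {
    fn drop(&mut self) {
        let _ = std::fs::remove_dir_all(&self.dir); // best-effort cleanup, runs on panic too
    }
}
```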
psaab
added a commit
that referenced
this pull request
May 7, 2026
* #547 plan v1 (DRAFT) — deterministic RSS-skew fixture for the harness

Pending Codex hostile + Gemini adversarial plan review. Tests-only PR plan: parameterise the fairness-eval binary with a known per-worker {a_i} distribution + matched per-stream throughput specs, write synthetic iperf3.json + 6-col binding-flows.tsv, invoke fairness-eval, assert the verdict matches the hand-computed expectation. 5 worked-example fixtures pin the same distributions fairness.rs::tests pins, plus negative cases for Gate 1 / Gate 2 / saturation.

PLAN-KILL is an acceptable outcome if reviewers conclude the fixture's value is too small to justify the LOC.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* v2: rewrite scope per Codex round-1 PLAN-NEEDS-MAJOR (task-movo6xm1)

Codex caught 3 substantive flaws in v1:
1. The fixture matrix duplicated fairness.rs::tests + the fairness-eval bin tests. v2 reframes around binary-contract coverage (CLI args, file IO, exit codes, JSON shape) — the only surface the unit tests don't exercise.
2. 'Saturation negative' was architecturally wrong. saturated is a diagnostic bool, not in failure_reasons. v2 drops the saturation negative as a verdict-asserting test; it can live as a classification test if needed.
3. The subprocess path was fragile. v1 used Command::new('./target/...'); v2 uses env!('CARGO_BIN_EXE_fairness-eval') so cargo's bin dependency wires the path correctly. No feature gate.

Plus the value claim narrowed: v2 explicitly does NOT validate future fairness mechanisms — only the binary's external contract. Mechanism validation belongs at the cluster harness level.

5 v2 cases:
- PASS (skew + iface noise; verdict PASS, exit 0)
- Gate 1 FAIL (starved flow; failure_reasons contains 'starved')
- Gate 2 FAIL (CoV gap > epsilon; failure_reasons contains 'Gate 2')
- Guard FAIL (sum(a_i) mismatch isolated from Gate 1/2)
- Exit 2 (out-of-range worker_id; current Err-on-out-of-range path)

Plus a required-keys schema test that pins the 8 required JSON fields (so renames break loudly while additive changes don't).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* v3: address Codex round-2 PLAN-NEEDS-MINOR (task-movoly14)

5 cleanup items addressed:
1. Black-box boundary tightened. v2 said the fixture would compute the expected Cstruct via a direct call to compute_cstruct, but userspace-dp has bin targets only — integration tests can't reach into #[cfg(test)] mod fairness in main.rs without reintroducing internal-math coupling. v3 makes the fixture subprocess-only: exit code, verdict string, failure-class membership, distribution from input, broad numeric relationships. No internal-helper imports.
2. Required-keys set expanded from 8 to 10: added n_active (the PASS case asserts it) and starved_flow_count (structurally important). An inconsistency in v2 caught by Codex.
3. Test command corrected: cargo test --manifest-path userspace-dp/Cargo.toml --release. v2 said 'cargo test --release' from the repo root, which has no Cargo.toml.
4. Empty-input semantics fixed. v2 said an empty TSV is a Gate 2 FAIL; Codex pointed out it's actually the sum guard (Guard FAIL) because observed_cov == 0 and cstruct == 0 with equal iperf streams. Empty intervals not added — that would require a production-code fix, out of scope.
5. tempfile crate added as a dev-dep (not PID-based naming).

Test count: 6 cases (was 5). New §3.5 'Black-box discipline' spells out the no-internal-import rule with concrete examples.
Codex round-2 cost/benefit: the 200 LOC is worth it because the shell harness shells out and the unit tests don't cover that boundary. Gemini round-1 (task-movo7jfk) failed at the Pro 3 rate limit — single retry per memory rule (gemini infra outage merge policy).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* v4: address Codex round-3 + Gemini round-1-retry PLAN-NEEDS-MINOR

Codex round-3 (task-movp0i14):
- §5 'Hidden invariants' contradicted v3 §3.5 'Black-box discipline' by saying the fixture calls compute_cstruct. Reworded: that helper is the single source of truth INTERNALLY (called by fairness-eval itself); the integration test does NOT import it.
- Stale '5 cases' / '~150-200 LOC' / old open questions cleaned up to '6 cases' / '~250 LOC' / a consolidated 'resolved by review rounds' structure.

Gemini round-1-retry (task-movp199v):
- Drop the tempfile crate dev-dep; reuse the SystemTime::now().as_nanos() + process::id() pattern that fairness-eval.rs::tsv_tests already uses (lines 539-550 at HEAD). Wrap it in a 10-line TempGuard struct with a Drop impl. No new deps; workspace tree unchanged.

Both reviewers explicitly noted 'if Path 4 is dead, this is YAGNI; consider PLAN-KILL'. v4 keeps the implement-now path because the user's standing mandate is to drive per-5-tuple fairness end-to-end and fairness-eval is already the merge bar for any future mechanism PR. The PLAN-KILL escape hatch is preserved if circumstances change.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* v5: address Codex round-4 PLAN-NEEDS-MINOR (3 stale-residue text fixes)

Codex round-4 (task-movpbbpz) — 3 findings, all stale residue:
1. Line 482 'may not justify even 200 LOC of test code' → '~250 LOC'. v3 → v4 grew to 6 cases + the schema test = ~250 LOC; the LOC estimate in the methodology footnote wasn't updated.
2. Line 92 'v3 fix: add tempfile crate as dev-dep ... use tempfile::tempdir()' read like an instruction even though v4 §3.3 below supersedes it. Marked as 'v3 proposed; v4 superseded with the hand-rolled TempGuard approach — the tempfile dep is NOT actually added.'
3. fairness-eval.rs line reference 539-550 (parser code, wrong location) → 729+ (the actual write_tmp function in the tsv_tests module, commit 9d3faf0).

Gemini round-2 (task-movpbwzi): **PLAN-READY**. 'v4 has systematically eliminated the architectural coupling, scoped the verification correctly, and removed unnecessary dependencies. The implementation path is clear and well-defended.'

v5 awaits the Codex round-5 verify; expecting PLAN-READY consensus.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* v6: address Codex round-5 PLAN-NEEDS-MINOR (2 final stale-residue text fixes)

Codex round-5 (task-movpjn74) — 2 findings, both stale-residue text:
1. Line 101 'the 200 LOC is worth it' → '~250 LOC'. The same staleness class round-4 was meant to clean exhaustively; one occurrence slipped through.
2. Line 30 'v3 added the tempfile crate as a dev-dep' → 'v3 proposed adding...'. v4 superseded the proposal, so 'added' was misleading.

No design concerns; pure text. Gemini round-2 (task-movpbwzi) already returned PLAN-READY at v4. v6 just locks down the doc. Implementation begins in parallel.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* #547 implement: 7 black-box integration tests for fairness-eval

Per the v6 plan (docs/pr/547-rss-skew-fixture/plan.md). Both reviewers PLAN-READY (Gemini round-2 explicit; Codex's rounds 5+6 only flagged text residue).
userspace-dp/tests/fairness_eval_blackbox.rs (~600 LOC):
- Hand-rolled TempGuard with Drop cleanup (no tempfile crate dep); uses SystemTime::now().as_nanos() + process::id() + a per-test prefix for collision-resistant naming, matching the fairness-eval.rs::tsv_tests pattern.
- synth_iperf3_json: the minimum schema fairness-eval consumes — connected sockets + per-interval streams.
- synth_tsv_6col: 6-column TSV matching what fairness-harness.sh emits.
- run_eval: env!('CARGO_BIN_EXE_fairness-eval') subprocess invocation.
- Black-box discipline (v6 §3.5): NO compute_cstruct call, NO #[path] shortcut, NO internal-helper imports. Asserts only exit code, verdict string, failure_reasons class membership, distribution_a_i values, and broad numeric relationships (gap = observed_cov - cstruct).

7 tests:
- pass_case_skew_with_iface_noise: 6-stream balanced PASS; the iface filter drops ge-0-0-3 noise; distribution_a_i = [1;6].
- gate1_starved_flow_fails: 1 starved flow (0 bps), Gate 1 in failure_reasons, exit 1.
- gate2_cov_gap_exceeds_epsilon_fails: heavy per-stream skew, no starved flow, gap > 0.05, Gate 2 in failure_reasons, exit 1.
- guard_sum_mismatch_fails: TSV reports 100 flows on worker 0, 0 elsewhere; the sum guard fires, a_i_sum_check_ok=false, exit 1.
- guard_empty_tsv_fails_via_sum_guard: header-only TSV, sum=0, Guard FAIL — observed_cov=cstruct=0 so Gate 2 must NOT fire (per Codex round-3 finding #4).
- exit2_out_of_range_worker_id: worker_id=99 vs n_workers=6, exit 2, no verdict JSON, stderr explains the error.
- verdict_emits_required_keys: schema test pinning the 10 required JSON keys (distribution_a_i, n_active, cstruct, observed_cov, gap, saturated, a_i_sum_check_ok, starved_flow_count, verdict, failure_reasons). A rename of any required key fails this loudly; additive changes to the diagnostic 6 don't break the fixture.

Each fixture uses ≥60 second steady-state windows (fairness-eval has a hardcoded MIN_STEADY_STATE_SECS=60 guard).

Test verification:
- cargo test --release --test fairness_eval_blackbox: 7/7 pass.
- 5x flake check: 5/5 clean.
- Full cargo test --release: 1006 + 32 + 8 + 7 all pass; no regressions.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* address Codex code review MEDIUM + LOW + Copilot inline (round-1)

Codex code review (task-movpzjbq):
- MEDIUM (exit-2 stdout): the run_with_inputs helper sets verdict=None for any exit code other than 0/1, so 'verdict.is_none()' alone doesn't actually prove no JSON was emitted. Added a direct assertion that stdout contains no '{'.
- LOW (stale comments): sites updated:
  - line 263: '5 1-second intervals' → '60 1-second intervals' (we use 60 to clear MIN_STEADY_STATE_SECS=60).
  - line 380: '{a_i}=[2;6]' → '{a_i}=[1;6]' (count=1 per worker since we have 6 streams, not 12).
  - line 388: stale CoV arithmetic ('1.41/2.5 ≈ 0.56') replaced with the actual CoV ≈ 1.29 calculation for the one-flow-at-10Gbps + five-at-1Gbps fixture.
  - line 477: the empty-TSV expected_sum was wrong ('expected ~6'); with no iface labels, iface_filter_active=false → direction_multiplier=2 → expected_sum=12. The comment now explains the actual guard math.
  - make_balanced_tsv docstring (line 600+): old '[2,2,2,2,2,2] / 12 streams' → '[1; n_workers] / 6 streams'.
- LOW (schema test): contains_key alone wouldn't catch a type break (e.g. saturated changed from bool to string). Added type assertions on all 10 required keys: is_array, is_u64, is_f64, is_boolean, is_string.
Copilot inline (6 comments):
- 4 stale-comment sites — same as the Codex LOW (overlapping findings on lines 265, 382, 600, 477; all fixed above).
- plan.md updates:
  - line 173 ('6 black-box integration tests' vs 7 total) → reworded as '6 black-box integration test cases plus 1 required-keys schema test (7 tests total)'.
  - line 105 ('~250 LOC') → '~600 LOC' to reflect actual test code + helpers + comments.

Test verification:
- cargo test --release --test fairness_eval_blackbox: 7/7 pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* address Codex round-2 code review LOW (final stale ~250 LOC refs)

Codex round-2 (task-movqa4hj) MERGE-NEEDS-MINOR — 2 stale '~250 LOC' references at plan.md:231 and :489 that the round-1 fix missed (only line 101 was caught). Both replaced with '~640 LOC' (the actual wc -l of userspace-dp/tests/fairness_eval_blackbox.rs).

Gemini code review (task-movq0cvw) verdict: **MERGE-READY**. All 9 review dimensions PASSED:
- Black-box discipline (no internal-helper imports, verified by grep)
- TempGuard correctness (the per-test prefix neutralises intra-process collisions despite the shared PID/clock)
- Synthetic input fidelity matches the HEAD parser exactly
- Test mathematics traced and confirmed for all 7 cases
- MIN_STEADY_STATE_SECS=60 boundary handled correctly
- Sub-second test runtime
- Drop-on-panic guarantee
- Idiomatic Rust, no dead code
- Plan vs implementation match

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
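The required-keys schema check with type assertions, roughly as the review describes it. A sketch using serde_json; it assumes the float fields serialize as JSON floats, and the function name is illustrative:

```rust
/// Presence AND type for each of the 10 required verdict keys, so a
/// rename or a type change (e.g. saturated: bool -> string) fails
/// loudly while additive diagnostic keys don't.
fn assert_verdict_schema(v: &serde_json::Value) {
    assert!(v["distribution_a_i"].is_array());
    assert!(v["n_active"].is_u64());
    assert!(v["cstruct"].is_f64());
    assert!(v["observed_cov"].is_f64());
    assert!(v["gap"].is_f64());
    assert!(v["saturated"].is_boolean());
    assert!(v["a_i_sum_check_ok"].is_boolean());
    assert!(v["starved_flow_count"].is_u64());
    assert!(v["verdict"].is_string());
    assert!(v["failure_reasons"].is_array());
}
```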
psaab
added a commit
that referenced
this pull request
May 7, 2026
…_bytes-based rate
Both round-2 reviewers converged on the same fix; v3 adopts both:
Codex round-2 (task-mow38bom, PLAN-NEEDS-MAJOR) — 5 findings:
1. active_flow_count is binding-wide, not per (egress_ifindex, queue_id).
2. Naive cap-check on MQFQ front HOL-blocks; need eligible-bucket scan.
3. observed_bps NOT existing flow_cache state.
4. SharedCoSQueueLease doesn't auto-free tokens for nonempty queues.
5. class_rate concrete source needs definition (exact vs surplus phase).
Gemini round-2 (task-mow3904l, PLAN-KILL) — narrow but valid:
- C. observed_bps via TX completion path crosses worker boundaries
→ would re-introduce v1's contention problem. FATAL for v2.
- B. SharedCoSQueueLease behavior 'elegantly correct' for goal.
- F. PerClassFairnessState belongs in CoSQueueConfigState (Arc'd
cross-worker), NOT FlowFairState (per-worker Box).
- Concrete v3 alternative: track bytes natively via existing
flow_bucket_bytes [u64; COS_FLOW_FAIR_BUCKETS] in FlowFairState
(verified at types/cos.rs:563).
User calibration ('gemini can be wrong a lot') applied: this PLAN-KILL
is the substantive case — narrow point with code-cited grounds and
a concrete fix. Adopted, not capitulated.
v3 design:
1. PerClassFairnessState in CoSQueueConfigState (per egress_ifindex,
queue_id) — Codex #1, Gemini F.
2. Per-bucket rate via flow_bucket_bytes diff + 10ms local timestamp.
No flow_cache touch. No cross-worker write — Gemini C fix.
3. Cap-aware MQFQ selector scans active buckets, skips over-cap,
falls back to lowest-finish if all capped — Codex #2.
4. Per-queue active_flow_count via extension of count_active_flows scan
(Option A: single extra pass on owner-only path) — Codex #1.
5. class_rate concrete: exact-phase = transmit_rate / N_active_workers;
surplus-phase = root_shaping_rate × local_share — Codex #5.
6. Conditional shared_lease.release_local_tokens() when locally-capped
for N consecutive batches — Codex #4.
Acceptance unchanged: per-flow CoV ≤ Cstruct + 0.10 on the user's exact
command, no aggregate regression > 5%.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
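Item 3's cap-aware selector, sketched with an illustrative bucket type: among active buckets, prefer the lowest virtual finish time among under-cap buckets, and fall back to the lowest finish time overall when every active bucket is over cap, so the MQFQ front never HOL-blocks. A sketch under those assumptions, not the planned implementation:

```rust
/// Illustrative bucket state; the real per-bucket rate comes from a
/// flow_bucket_bytes diff over a ~10ms local timestamp window.
struct Bucket {
    active: bool,
    finish: u64,   // MQFQ virtual finish time
    rate_bps: u64, // measured per-bucket rate
    cap_bps: u64,  // per-bucket cap for this class
}

fn select_bucket(buckets: &[Bucket]) -> Option<usize> {
    let active = || buckets.iter().enumerate().filter(|(_, b)| b.active);
    active()
        .filter(|(_, b)| b.rate_bps < b.cap_bps)            // skip over-cap buckets
        .min_by_key(|(_, b)| b.finish)
        .or_else(|| active().min_by_key(|(_, b)| b.finish)) // all capped: lowest finish
        .map(|(i, _)| i)
}
```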
Session/state GC was doing avoidable per-sweep allocations in hot paths (toDelete*, snatExpired*), which adds overhead under high session churn. This change keeps GC behavior intact while reducing allocator pressure in periodic sweeps.

GC hot-path memory reuse
conntrack.GC's sweep() moves from fresh zero-capacity slices to [:0] reuse of preallocated buffers.

Constructor-level preallocation
NewGC(...) now initializes default capacities for the scratch buffers, so the first steady-state sweeps avoid repeated growth.

Focused regression coverage
Added TestGCScratchBuffersReused to assert that the sweep scratch buffers are reset and their backing arrays reused across GC cycles.
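The change itself is Go (the [:0] reset of the toDelete*/snatExpired* scratch slices). For consistency with the other sketches here, a Rust rendering of the same pattern, where Vec::clear() plays the role of the Go s = s[:0] reset (length to zero, backing capacity kept); names and capacities are hypothetical:

```rust
/// Sketch of the sweep scratch-buffer reuse pattern. In Go this is
/// s = s[:0]; in Rust, Vec::clear() keeps the backing allocation.
struct Gc {
    to_delete: Vec<u64>,    // scratch: keys scheduled for deletion
    snat_expired: Vec<u64>, // scratch: expired SNAT entries
}

impl Gc {
    fn new() -> Self {
        // Constructor-level preallocation: the first steady-state sweeps
        // avoid repeated growth.
        Self {
            to_delete: Vec::with_capacity(1024),
            snat_expired: Vec::with_capacity(1024),
        }
    }

    fn sweep(&mut self, expired: impl Iterator<Item = (u64, bool)>) {
        self.to_delete.clear(); // reuse the backing array, don't reallocate
        self.snat_expired.clear();
        for (key, is_snat) in expired {
            self.to_delete.push(key);
            if is_snat {
                self.snat_expired.push(key);
            }
        }
        // ... perform deletions using the scratch buffers ...
    }
}
```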