Context
Multiple analyze-module hot paths clone the full input Vec<TurnRecord> before they aggregate, doubling the working-set memory of the most expensive verbs. Every TurnRecord carries a Vec<ToolCall> plus optional file lists — these are not cheap clones.
Concrete sites:
crates/relayburn-sdk/src/analyze/overhead.rs:131-136 — clones every applicable TurnRecord per overhead file, 3×. With three default candidates and overlapping applies_to, every Claude turn ends up cloned ~2× and every Codex/OpenCode turn ~1×. Biggest unforced alloc in the analyzer.
crates/relayburn-sdk/src/analyze/hotspots.rs:208-216 — the entire turn stream cloned into IndexMap<String, Vec<TurnRecord>> via entry(t.session_id.clone()).or_default().push(t.clone()).
crates/relayburn-sdk/src/analyze/quality.rs:113-124 — parallel Vec<(String, Vec<TurnRecord>)> + HashMap<String, usize> index, cloning every turn.
crates/relayburn-sdk/src/analyze/compare.rs:127-152 — model id cloned 4× per turn inside the hot accumulation loop (lines 132, 140, 143, 147).
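The hotspots.rs site is representative of the pattern. A minimal sketch of the costly shape, with hypothetical simplified types standing in for the real TurnRecord/ToolCall and std HashMap standing in for IndexMap to keep it dependency-free:

```rust
use std::collections::HashMap;

// Hypothetical, simplified shapes; the real types live in relayburn-sdk.
#[derive(Clone)]
struct ToolCall { name: String }

#[derive(Clone)]
struct TurnRecord {
    session_id: String,
    tool_calls: Vec<ToolCall>,
}

// The pattern flagged at hotspots.rs:208-216: every turn is deep-cloned into
// the map, including its Vec<ToolCall> and the session_id key.
fn group_by_session_cloning(turns: &[TurnRecord]) -> HashMap<String, Vec<TurnRecord>> {
    let mut by_session: HashMap<String, Vec<TurnRecord>> = HashMap::new();
    for t in turns {
        by_session
            .entry(t.session_id.clone()) // key clone per turn
            .or_default()
            .push(t.clone()); // full deep clone per turn
    }
    by_session
}
```

Same working set allocated twice: once for the input slice, once inside the map.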
Proposed fix
Switch the per-session/per-file aggregations to Vec<&TurnRecord> (or IndexMap<String, Vec<&TurnRecord>>). Nothing downstream of these sites mutates the turns; they only read them, so the borrows can trivially take the lifetime of the input slice.
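A sketch of the borrowed version, again with hypothetical simplified types and std HashMap in place of IndexMap; borrowing the session_id key as &str as well removes even the per-turn key clone:

```rust
use std::collections::HashMap;

// Hypothetical, simplified shapes; the real types live in relayburn-sdk.
struct ToolCall { name: String }

struct TurnRecord {
    session_id: String,
    tool_calls: Vec<ToolCall>,
}

// Borrowed aggregation: the map holds &TurnRecord tied to the input slice's
// lifetime, so no turn (or its Vec<ToolCall>) is ever cloned.
fn group_by_session<'a>(turns: &'a [TurnRecord]) -> HashMap<&'a str, Vec<&'a TurnRecord>> {
    let mut by_session: HashMap<&str, Vec<&TurnRecord>> = HashMap::new();
    for t in turns {
        by_session.entry(t.session_id.as_str()).or_default().push(t);
    }
    by_session
}
```

The only allocations left are the map and its Vec spines; the per-turn payloads are shared with the input.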
For compare.rs:127-152 specifically: hoist the model-name clone above the entry/or_insert_with calls (one clone per new model, not per turn).
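The hoist can be sketched as follows; the (model, tokens) pair and ModelAcc shape are hypothetical stand-ins for compare.rs's real accumulator:

```rust
use std::collections::HashMap;

// Hypothetical per-model accumulator standing in for compare.rs's stats.
#[derive(Default)]
struct ModelAcc {
    turns: u64,
    tokens: u64,
}

// One clone per *new* model: the String key is allocated only on first sight;
// every later turn for that model hits the borrowed &str lookup path and
// allocates nothing.
fn accumulate(turns: &[(String, u64)]) -> HashMap<String, ModelAcc> {
    let mut by_model: HashMap<String, ModelAcc> = HashMap::new();
    for (model, tokens) in turns {
        if !by_model.contains_key(model.as_str()) {
            by_model.insert(model.clone(), ModelAcc::default());
        }
        let acc = by_model.get_mut(model.as_str()).unwrap();
        acc.turns += 1;
        acc.tokens += *tokens;
    }
    by_model
}
```

Contrast with entry(model.clone()), which pays the clone on every iteration whether or not the key already exists.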
Also fold in:
crates/relayburn-sdk/src/analyze/overhead.rs:170-177 describe_applies_to — three serde_json::to_value round-trips on every call to discover that SourceKind::ClaudeCode serializes to \"claude-code\". A 4-arm match returning &'static str removes 3 allocs per file render. (Belongs with the enum-string-conversion follow-up too.)
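The match replacement can be sketched as below. Only the ClaudeCode → "claude-code" mapping is confirmed above; the other variant names and strings are illustrative (and the real enum has a fourth arm):

```rust
// Hypothetical enum mirroring SourceKind's kebab-case serialization; only the
// variants named in the note are shown, and only "claude-code" is confirmed.
enum SourceKind {
    ClaudeCode,
    Codex,
    OpenCode,
}

// Zero-alloc replacement for the serde_json::to_value round-trips: each arm
// returns a &'static str, so rendering a file costs no allocations here.
fn source_kind_str(kind: &SourceKind) -> &'static str {
    match kind {
        SourceKind::ClaudeCode => "claude-code",
        SourceKind::Codex => "codex",
        SourceKind::OpenCode => "opencode",
    }
}
```

Exhaustive matching also means a new SourceKind variant becomes a compile error here instead of a silently wrong serialized string.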
Verification
The conformance gate (deep-equal results vs TS @relayburn/sdk) is the existing safety net; behavior must not change.
References
- Analyze review notes from the May 2026 Rust review.