feat(session_store): add session_store_flush option for eager mirroring #905
Adds ClaudeAgentOptions.session_store_flush ("batched" | "eager",
default "batched"). With "eager", build_mirror_batcher() zeroes the
TranscriptMirrorBatcher pending thresholds so every transcript_mirror
frame schedules a background flush, delivering entries to
SessionStore.append() in near real time instead of coalescing until the
end-of-turn result message. Appends remain serialized in enqueue order;
a slow adapter does not stall the read loop (frames coalesce while it
is busy).
Exports the SessionStoreFlushMode type alias.
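A minimal sketch of how the two flush modes could map to batcher thresholds. The constant and function names here (`thresholds_for`, `MAX_PENDING_ENTRIES`, `MAX_PENDING_BYTES`) are illustrative stand-ins, not the SDK's real identifiers; only the 500-entry / 1 MiB defaults and the zeroing behavior come from this PR:

```python
from dataclasses import dataclass

# Assumed stand-ins for the SDK's real overflow constants
# (the PR describes 500-entry / 1 MiB batched overflow limits).
MAX_PENDING_ENTRIES = 500
MAX_PENDING_BYTES = 1024 * 1024  # 1 MiB


@dataclass
class BatcherThresholds:
    pending_entries: int
    pending_bytes: int


def thresholds_for(flush_mode: str) -> BatcherThresholds:
    # "eager" zeroes both thresholds, so any pending frame immediately
    # trips the flush check and schedules a background drain.
    if flush_mode == "eager":
        return BatcherThresholds(pending_entries=0, pending_bytes=0)
    # "batched" (the default) keeps the overflow limits, so frames
    # coalesce until the end-of-turn result message or an overflow.
    return BatcherThresholds(
        pending_entries=MAX_PENDING_ENTRIES,
        pending_bytes=MAX_PENDING_BYTES,
    )
```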
Codecov Report ✅
All modified and coverable lines are covered by tests.

@@ Coverage Diff @@
##           main    #905   +/- ##
=======================================
Coverage      ?   88.24%
=======================================
Files         ?       23
Lines         ?     3904
Branches      ?        0
=======================================
Hits          ?     3445
Misses        ?      459
Partials      ?        0
=======================================
/sdk-e2e-proof — PR #905
Manual end-to-end proof against the real CLI.
Setup: in-memory
Case A: query() eager — PASS
Case B: query() batched — PASS
Case C: client eager multi-turn — PASS
Overall: PASS
Eager mode produced multiple …
Summary
Adds `ClaudeAgentOptions.session_store_flush: Literal["batched", "eager"]` (default `"batched"`) so callers can opt into near-real-time `SessionStore.append()` delivery instead of waiting for the end-of-turn flush.

Today the `TranscriptMirrorBatcher` buffers `transcript_mirror` frames and only flushes when the `result` message arrives (or on 500-entry / 1 MiB overflow). That keeps adapter latency off the streaming hot path, but it means an external store can't observe a turn until it's finished: a problem for live-tailing UIs, cross-process resume, or crash-durability use cases that want the mirror to track the on-disk JSONL closely.

With `"eager"`, `build_mirror_batcher()` zeroes both pending thresholds so every enqueued frame schedules a background drain. The drain still runs off the read loop via `asyncio.ensure_future`, so a slow adapter does not stall message streaming; it just sees frames coalesced while it's busy. Append ordering is preserved by the existing batcher lock.

Also exports the `SessionStoreFlushMode` type alias for callers that thread the value through their own config.

API
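The eager drain path can be sketched as follows. This is an illustrative reduction under stated assumptions, not the SDK's real `TranscriptMirrorBatcher`: the class shape, method names, and entry types here are invented for the sketch. It shows the three properties the PR relies on: enqueue never awaits the adapter, the lock serializes appends in enqueue order, and frames coalesce while the adapter is busy:

```python
import asyncio


class EagerMirrorBatcher:
    """Illustrative batcher with zeroed thresholds: every enqueue
    schedules a background drain (a sketch, not the real SDK class)."""

    def __init__(self, append, pending_threshold: int = 0) -> None:
        self._append = append            # e.g. a SessionStore.append coroutine
        self._pending: list[dict] = []
        self._threshold = pending_threshold
        self._lock = asyncio.Lock()      # serializes appends in enqueue order

    def enqueue(self, entry: dict) -> None:
        # Called on the read loop; never awaits the adapter directly.
        self._pending.append(entry)
        if len(self._pending) > self._threshold:
            # Background task keeps a slow adapter off the streaming hot path.
            asyncio.ensure_future(self._drain())

    async def _drain(self) -> None:
        async with self._lock:
            if not self._pending:
                return  # an earlier drain already took these entries
            batch, self._pending = self._pending, []
            # While append() awaits a slow adapter, new frames pile up in
            # _pending and are coalesced into the next drain.
            await self._append(batch)
```

With a threshold of 0, one drain is scheduled per frame; if the read loop never yields between two enqueues, the first drain picks up both entries and the second finds nothing to do, which is the coalescing behavior described above.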
Tests
6 new tests in `tests/test_transcript_mirror.py`:

- `TestBuildMirrorBatcherFlushMode::test_flush_mode_sets_thresholds[default|batched|eager]`: parametrized; omitted and `"batched"` keep the `MAX_PENDING_*` defaults, `"eager"` zeroes both
- `TestBuildMirrorBatcherFlushMode::test_eager_mode_flushes_per_frame`: two enqueues produce two separate `append()` calls without an explicit `flush()`
- `TestBuildMirrorBatcherFlushMode::test_options_default_is_batched`: `ClaudeAgentOptions()` defaults to `"batched"`
- `TestReceiveLoopFramePeeling::test_eager_flush_mode_appends_per_frame_before_result`: end-to-end through `query()`; with `session_store_flush="eager"` and a transport that yields between frames, the store sees one `append()` per frame before the `AssistantMessage` is yielded (vs. a single coalesced batch in the default mode)

`_make_mock_transport()` gains a `yield_between` kwarg so the integration test can model the await on real stdout I/O between frames.

Test plan
- `python -m ruff check src/ tests/`
- `python -m ruff format src/ tests/`
- `python -m mypy src/`
- `python -m pytest tests/` (734 passed, 4 skipped)
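The `yield_between` idea from the test plan can be sketched as a mock transport whose frame iterator optionally yields control to the event loop between frames. The helper name and frame type below are illustrative, not the SDK's real `_make_mock_transport()`:

```python
import asyncio
from collections.abc import AsyncIterator


def make_mock_transport(frames: list[dict], yield_between: bool = False):
    """Return an async iterator over frames; with yield_between=True it
    awaits between frames to model real stdout I/O, giving any scheduled
    background drain a chance to run before the next frame arrives."""

    async def iter_frames() -> AsyncIterator[dict]:
        for frame in frames:
            yield frame
            if yield_between:
                # Yield to the event loop so an eager batcher's per-frame
                # drain completes here, producing one append() per frame.
                await asyncio.sleep(0)

    return iter_frames()
```

Without `yield_between`, the consumer sees all frames back-to-back and an eager batcher coalesces them into one batch, which is why the integration test needs the kwarg to observe per-frame appends.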