Codspeed benchmark for backend state event processing #6290
Conversation
Greptile Summary

This PR adds a CodSpeed benchmark (`tests/benchmarks/test_event_processing.py`) for the backend state event processing pipeline.
Confidence Score: 4/5

- Benchmark results may silently reflect a broken pipeline until the missing delta-count assertion is added.
- A single P1 finding (missing assertion hiding silent processing failures) prevents a score of 5; all other issues are P2.
- `tests/benchmarks/test_event_processing.py` — specifically the `run_events` async helper and the module docstring.

Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant T as test_process_event
    participant B as @benchmark
    participant RE as run_events()
    participant P as process(app, event)
    participant SM as StateManagerMemory
    participant H as BenchmarkState.increment
    T->>B: wrap _() with benchmark timing
    loop 3 CodSpeed iterations
        B->>RE: await run_events(num_events=3, num_expected_deltas=3)
        loop num_events=3
            RE->>P: async-gen process(app, event, sid)
            P->>SM: get_state(token)
            SM-->>P: state instance
            P->>H: call increment()
            H-->>P: state.counter += 1
            P-->>RE: yield StateUpdate (delta)
            RE->>RE: delta_count += 1
        end
        Note over RE: No assertion: delta_count == expected_deltas
        RE-->>B: return
    end
```
Reviews (1): Last reviewed commit: "Codspeed benchmark for backend state eve..."
Merging this PR will not alter performance
Performance Changes
Comparing |
- Add assertion that delta_count matches expected_deltas (P1)
- Fix module and fixture docstrings to match actual implementation (P2)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
As a precursor to the backend event loop work, add a benchmark for the event processing pipeline.