## Description
ServerEventParser.parse() has O(n²) performance characteristics when receiving large SSE events delivered in small chunks (e.g., URLSession's ~8KB didReceiveData callbacks). This causes severe delays and can result in events being lost when combined with application-level timeouts.
## Root Cause
In the current implementation:
```swift
mutating func parse(_ data: Data) -> [EVEvent] {
    let (separatedMessages, remainingData) = splitBuffer(for: buffer + data)
    // ...
}
```
Two issues:

1. `buffer + data` creates a new `Data` allocation on every call, copying the entire buffer contents. As the buffer grows (e.g., 8KB → 16KB → 24KB → ... → 1.5MB), each call copies more data.
2. `splitBuffer` scans the entire buffer for the `\n\n` separator on every chunk, even when the separator is unlikely to be present in the new data.
For a 1.5MB SSE event arriving in ~180 chunks of 8KB:
- Total work: 8KB + 16KB + 24KB + ... + 1.5MB ≈ 135MB of data copied and scanned
- This is O(n²) where n = total event size
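Under the stated assumptions (roughly 180 chunks of 8KB each; both figures are approximations from above), the quadratic cost can be checked with a back-of-envelope calculation:

```swift
// Back-of-envelope check of the quadratic cost. Chunk size and count
// are the approximate figures quoted above.
let chunkSize = 8 * 1024            // ~8KB per didReceiveData callback
let chunkCount = 180                // ~1.5MB / 8KB

// On call i, `buffer + data` copies and `splitBuffer` scans a buffer
// holding i chunks, so total bytes touched is the sum 1...chunkCount.
let totalBytesTouched = chunkSize * chunkCount * (chunkCount + 1) / 2

print(totalBytesTouched)            // 133447680 bytes, i.e. roughly the
                                    // ~135MB figure quoted above
```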
## Impact
In our production app, SSE search responses contain 200-400 flight inventories per event (~1.5MB). The parser takes 15-20 seconds to consume all chunks, causing our 30-second search timeout to fire before the final event's `\n\n` delimiter is processed, resulting in lost events.
## Proposed Fix
- Use `buffer.append(data)` instead of `buffer + data`: an in-place append with amortized O(1) cost when capacity is sufficient.
- Scan only the newly added tail region (plus a small overlap, for separators that cross chunk boundaries) to detect whether a separator is present.
- Call `splitBuffer` only when a separator is actually detected in the new data.
This reduces the per-chunk work from O(buffer_size) to O(chunk_size), making the overall complexity O(n) instead of O(n²).
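The steps above can be sketched as follows. This is a self-contained illustration, not the library's code: `IncrementalParser` and its raw `Data` messages are stand-ins for `ServerEventParser` and `[EVEvent]`, and only the buffering strategy is the point.

```swift
import Foundation

// Sketch of the proposed fix: append in place, scan only the new tail,
// and split the buffer only when a separator is actually present.
struct IncrementalParser {
    private var buffer = Data()
    private let separator = Data("\n\n".utf8)

    mutating func parse(_ chunk: Data) -> [Data] {
        let oldCount = buffer.count
        buffer.append(chunk)   // in-place append: amortized O(1), no full copy

        // Scan only the new tail, plus a (separator.count - 1)-byte overlap
        // so a "\n\n" straddling the chunk boundary is still detected.
        let start = max(0, oldCount - (separator.count - 1))
        guard buffer.range(of: separator, in: start..<buffer.count) != nil else {
            return []          // no complete event yet; skip the full split
        }

        // A separator is present, so a full O(buffer) split is now worth it
        // (it happens once per complete event, not once per chunk).
        var messages: [Data] = []
        var remaining = buffer.startIndex..<buffer.endIndex
        while let found = buffer.range(of: separator, in: remaining) {
            messages.append(buffer.subdata(in: remaining.lowerBound..<found.lowerBound))
            remaining = found.upperBound..<buffer.endIndex
        }
        buffer = buffer.subdata(in: remaining)   // keep the incomplete tail
        return messages
    }
}
```

For example, feeding `"data: a\n"` and then `"\ndata: b"` returns nothing on the first call, then one complete message (`"data: a"`) on the second, leaving `"data: b"` buffered until its terminating separator arrives.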
## Environment
- EventSource version: 0.1.7
- iOS 16+
- Payload: ~1.5MB SSE events arriving in ~8KB URLSession chunks