Description
Based on some measurements, BrotliStream seems to compress more efficiently when the written data arrives in larger chunks. This is also described in issue #36245.
Hence, it seems reasonable to combine it with a BufferedStream that buffers the data and forwards it to the Brotli compression in larger chunks.
With a certain type of data, BrotliStream was most efficient with writes of ~300 KB chunks.
However, when such a value is passed as the buffer size to the BufferedStream constructor, the buffer is allocated on the LOH. As a result, after a dozen or so compressed streams have been written (each with a new BufferedStream), a Gen2 GC compaction is triggered. On a hot path this eventually causes performance issues, because the GC spends a lot of time collecting garbage.
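A minimal sketch of the write pattern described above (the type, method, and buffer size are illustrative, not the actual production code):

```csharp
// Each compressed stream wraps a BrotliStream in a BufferedStream with a ~300 KB buffer,
// so every BufferedStream allocates a fresh byte[] large enough to land on the LOH.
using System;
using System.IO;
using System.IO.Compression;

static class Compressor
{
    private const int BufferSize = 300 * 1024; // ~300 KB write chunks (empirical sweet spot)

    public static void CompressTo(Stream destination, ReadOnlySpan<byte> data)
    {
        using var brotli = new BrotliStream(destination, CompressionLevel.Optimal, leaveOpen: true);
        using var buffered = new BufferedStream(brotli, BufferSize); // allocates a new 300 KB byte[] per stream
        buffered.Write(data);
    }
}
```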
Configuration
.NET 9, x64, JIT
Regression?
No
Analysis
It seems each new BufferedStream allocates a byte[] as its backing buffer. Ideally this could be backed by ArrayPool<byte>.Shared instead of a new allocation, at least in the case where the buffer would be allocated on the LOH. BufferedStream already appears to have knowledge of the SOH/LOH boundary, as noted in some source code comments.
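A short sketch of the suggested direction (this is not actual BufferedStream code, just the rent/return pattern it could use for LOH-sized buffers):

```csharp
using System.Buffers;

// Rent the backing buffer instead of allocating a fresh array per stream;
// the rented array may be larger than requested.
byte[] buffer = ArrayPool<byte>.Shared.Rent(300 * 1024);
try
{
    // ... use 'buffer' as the backing store for buffered writes ...
}
finally
{
    ArrayPool<byte>.Shared.Return(buffer); // give it back on Dispose
}
```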
Workaround
- Pooling BufferedStream objects.
- Creating a custom stream object whose buffer is backed by ArrayPool<byte> (see the sketch below). However, a general implementation seems to be relatively complicated.
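A minimal sketch of the second workaround (the type name is hypothetical and the implementation is write-only; a general-purpose version would also need reads, seeking, async paths, and more careful argument validation):

```csharp
using System;
using System.Buffers;
using System.IO;

// Buffers writes into an ArrayPool<byte>.Shared array and forwards them to the inner
// stream (e.g. a BrotliStream) in large chunks, avoiding a per-stream LOH allocation.
public sealed class PooledWriteBufferStream : Stream
{
    private readonly Stream _inner;
    private byte[] _buffer;
    private int _count;

    public PooledWriteBufferStream(Stream inner, int bufferSize = 300 * 1024)
    {
        _inner = inner;
        _buffer = ArrayPool<byte>.Shared.Rent(bufferSize); // pooled instead of new byte[bufferSize]
    }

    public override void Write(byte[] buffer, int offset, int count)
        => Write(buffer.AsSpan(offset, count));

    public override void Write(ReadOnlySpan<byte> source)
    {
        while (!source.IsEmpty)
        {
            int free = _buffer.Length - _count;
            if (free == 0) { FlushBuffer(); free = _buffer.Length; }

            int toCopy = Math.Min(free, source.Length);
            source.Slice(0, toCopy).CopyTo(_buffer.AsSpan(_count));
            _count += toCopy;
            source = source.Slice(toCopy);
        }
    }

    public override void Flush()
    {
        FlushBuffer();
        _inner.Flush();
    }

    private void FlushBuffer()
    {
        if (_count > 0)
        {
            _inner.Write(_buffer, 0, _count); // hand the data to the inner stream in one large chunk
            _count = 0;
        }
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing && _buffer != null)
        {
            FlushBuffer();
            ArrayPool<byte>.Shared.Return(_buffer);
            _buffer = null!;
        }
        base.Dispose(disposing);
    }

    // Write-only sketch: remaining Stream members are not supported here.
    public override bool CanRead => false;
    public override bool CanSeek => false;
    public override bool CanWrite => true;
    public override long Length => throw new NotSupportedException();
    public override long Position
    {
        get => throw new NotSupportedException();
        set => throw new NotSupportedException();
    }
    public override int Read(byte[] buffer, int offset, int count) => throw new NotSupportedException();
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
}
```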