Description
Shoutout to David Hopkins for bringing this up in Slack. He was looking to efficiently create snapshots of a collection at regular intervals, and deduced that the effective way to do this would be with `.Throttle()` followed by `.ToCollection()`, except that there's no DD-native `.Throttle()`, and the Rx-native version would throw away changesets.
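For illustration, here's roughly what that naive pipeline would look like. This is just a sketch; the `Trade` type and `source` list are stand-ins, not code from the original report:

```csharp
using System;
using System.Collections.Generic;
using System.Reactive.Linq;
using DynamicData;

// `Trade` and `source` are illustrative stand-ins.
var source = new SourceList<Trade>();

// Rx's Throttle silently drops every changeset that arrives during the quiet
// window, so any state rebuilt downstream from the surviving changesets can
// diverge from the actual collection.
IObservable<IReadOnlyCollection<Trade>> snapshots = source
    .Connect()                          // IObservable<IChangeSet<Trade>>
    .Throttle(TimeSpan.FromSeconds(1))  // Rx-native: suppressed changesets are lost
    .ToCollection();                    // rebuilds its snapshot from the changesets it sees

record Trade(int Id, decimal Price);
```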
After looking around, we do have some support for doing this kinda thing, but it's rather disjointed. To summarize:
The cache side of the house has...

- `.Batch()`, accepting a `TimeSpan`
- `.BatchIf()`, accepting an `IObservable<bool>`
- `.BufferInitial()`, accepting a `TimeSpan`, which is equivalent to `.Batch()` but only batches once, during an initial startup window
- `.FlattenBufferResult()`, which can follow an Rx-native `.Buffer()` (usage for all four is sketched below)
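To put those in context, here's a rough cache-side usage sketch. The `SourceCache`, `Trade` type, and pause subject are illustrative assumptions, not code from the report:

```csharp
using System;
using System.Reactive.Linq;
using System.Reactive.Subjects;
using DynamicData;

var cache = new SourceCache<Trade, int>(trade => trade.Id);

// Batch: buffer changesets into fixed two-second windows.
var batched = cache.Connect()
    .Batch(TimeSpan.FromSeconds(2));

// BatchIf: buffer while the boolean stream reads true, flush when it flips to false.
var isPaused = new BehaviorSubject<bool>(false);
var pausable = cache.Connect()
    .BatchIf(isPaused);

// BufferInitial: batch only during a startup window, then emit changesets live.
var warmedUp = cache.Connect()
    .BufferInitial(TimeSpan.FromSeconds(5));

// FlattenBufferResult: recombine an Rx-native Buffer into a changeset stream.
var flattened = cache.Connect()
    .Buffer(TimeSpan.FromSeconds(2))
    .FlattenBufferResult();

record Trade(int Id, decimal Price);
```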
The list side of the house has...

- No `.Batch()` (though see the workaround sketched below)
- `.BufferIf()`, equivalent to `.BatchIf()` on the cache side
- `.BufferInitial()`, equivalent to the cache side
- `.FlattenBufferResult()`, equivalent to the cache side
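Notably, the missing list-side `.Batch()` can already be approximated by pairing the Rx-native `.Buffer()` with `.FlattenBufferResult()`, which also covers David's original snapshot scenario. Again a sketch with illustrative names:

```csharp
using System;
using System.Collections.Generic;
using System.Reactive.Linq;
using DynamicData;

var source = new SourceList<Trade>();

// List-side stand-in for a TimeSpan-based Batch: buffer changesets for one
// second, merge each buffer back into a single changeset, then project the
// whole list as a snapshot per window.
IObservable<IReadOnlyCollection<Trade>> snapshots = source
    .Connect()
    .Buffer(TimeSpan.FromSeconds(1))  // IObservable<IList<IChangeSet<Trade>>>
    .FlattenBufferResult()            // back to IObservable<IChangeSet<Trade>>
    .ToCollection();

record Trade(int Id, decimal Price);
```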
With `.FlattenBufferResult()` available, it looks like we don't really have any functional holes in our API, but I was curious whether a custom operator might be able to improve performance.
| Method | Mean | Ratio | Allocated | Alloc Ratio |
|---|---|---|---|---|
| NativeBufferFlattened | 2.208 ms | 1.00 | 4.26 MB | 1.00 |
| CustomBatch_UnitBufferBoundary | 2.266 ms | 1.03 | 4.04 MB | 0.95 |
| CustomBatch_AnyBufferBoundary | 2.277 ms | 1.03 | 4.04 MB | 0.95 |
| CustomBatch_Optimized | 2.057 ms | 0.93 | 4.04 MB | 0.95 |
Not terribly significant, but measurable.
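For reference, one possible shape for such an operator. This is purely illustrative, not the prototype benchmarked above, and it assumes the list-side `ChangeSet<T>` constructor that accepts an enumerable of changes:

```csharp
using System;
using System.Linq;
using System.Reactive.Concurrency;
using System.Reactive.Linq;
using DynamicData;

public static class BatchEx
{
    // A naive list-side Batch: buffer changesets per window and merge each
    // non-empty buffer into a single changeset. A hand-rolled operator could
    // instead pre-size or reuse the merged changeset, which is presumably
    // where the allocation savings measured above come from.
    public static IObservable<IChangeSet<T>> Batch<T>(
        this IObservable<IChangeSet<T>> source,
        TimeSpan timeSpan,
        IScheduler? scheduler = null)
        where T : notnull
        => source
            .Buffer(timeSpan, scheduler ?? DefaultScheduler.Instance)
            .Where(buffer => buffer.Count != 0)
            .Select(buffer => (IChangeSet<T>)new ChangeSet<T>(buffer.SelectMany(changes => changes)));
}
```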
So, outstanding questions:

- Do we want to add a `.Batch()` to the list side, to match the cache side?
- Do we want to try and clean up the naming discrepancies between the other list and cache operators?
- Do we want to add more variations on the `.Batch()` operator, or just stick with the functionality given by `.FlattenBufferResult()`?
  - Is it worth it for the performance boost?
  - Is it worth it for better discoverability in the API?