
Releases: bitfaster/BitFaster.Caching

v2.4.1

11 Dec 03:03
2d08830

What's changed

  • Fixed a race condition in ConcurrentLfu for add-remove-add of the same key.
  • MpscBoundedBuffer.Clear() is now thread safe, fixing a race in ConcurrentLfu clear.
  • Fixed ConcurrentLru Count and IEnumerable<KeyValuePair<K,V>> to filter out expired items when used with time-based expiry.
  • BitFaster.Caching is now compiled with <Nullable>enable</Nullable>, and APIs are annotated to support nullable reference type static analysis.
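
A minimal sketch of what this enables for callers compiling with nullable reference types. It assumes TryGet's out parameter is annotated for null-state analysis (e.g. with MaybeNullWhen); the key and value types are illustrative.

    #nullable enable
    using System;
    using BitFaster.Caching.Lru;

    var cache = new ConcurrentLru<int, string>(128);
    cache.AddOrUpdate(42, "forty-two");

    if (cache.TryGet(42, out var value))
    {
        // With the annotated API, the compiler knows value is non-null
        // here, so no null-forgiving operator (!) is needed.
        Console.WriteLine(value.Length);
    }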

Full changelog: v2.4.0...v2.4.1

v2.4.0

24 Nov 22:26
87aad5b

What's changed

  • Provide two new time-based expiry schemes for ConcurrentLru (see the expiry sketch below this list):
    • Expire after access: evict after a fixed duration since an entry's most recent read or write. This is equivalent to MemoryCache's sliding expiry, and is useful for data bound to a session that expires due to inactivity.
    • Per item expiry time: evict after a duration calculated for each item using the specified IExpiryCalculator. Expiry time may be set independently at creation, after a read, and after a write.
  • Align TryRemove overloads with ConcurrentDictionary for IAsyncCache and AsyncAtomicFactory, matching the implementation for ICache added in v2.3.0. This adds two new overloads (see the TryRemove sketch below this list):
    • bool TryRemove(K key, out V value) - enables getting the value that was removed.
    • bool TryRemove(KeyValuePair<K, V> item) - enables removing an item only when the key and value are the same.
  • Add extension methods to make it more convenient to use AsyncAtomicFactory with a plain ConcurrentDictionary (see the dictionary sketch below this list). This is similar to storing an AsyncLazy<T> instead of T, but with the same exception propagation semantics and API as ConcurrentDictionary.GetOrAdd.
  • BitFaster.Caching assembly marked as trim compatible to enable trimming when used in native AOT applications.
  • AtomicFactory value initialization logic modified to mitigate lock convoys, based on the approach given here.
  • Fixed ConcurrentLru.Clear to correctly handle removed items present in the internal bookkeeping data structures.
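
A sketch of both expiry schemes, assuming the ConcurrentLruBuilder methods WithExpireAfterAccess and WithExpireAfter described in the project README; Session, IsPremium and SessionExpiryCalculator are hypothetical, and the IExpiryCalculator member signatures are an approximation.

    using System;
    using BitFaster.Caching;
    using BitFaster.Caching.Lru;

    // Sliding expiry: evict 5 minutes after an entry's last read or write.
    var sessionCache = new ConcurrentLruBuilder<string, Session>()
        .WithCapacity(1024)
        .WithExpireAfterAccess(TimeSpan.FromMinutes(5))
        .Build();

    // Per item expiry: each entry's lifetime is computed by a calculator.
    var itemCache = new ConcurrentLruBuilder<string, Session>()
        .WithCapacity(1024)
        .WithExpireAfter(new SessionExpiryCalculator())
        .Build();

    // Hypothetical calculator: premium sessions live longer.
    public class SessionExpiryCalculator : IExpiryCalculator<string, Session>
    {
        public Duration GetExpireAfterCreate(string key, Session value)
            => value.IsPremium ? Duration.FromMinutes(60) : Duration.FromMinutes(5);

        // Preserve the remaining time to live after a read.
        public Duration GetExpireAfterRead(string key, Session value, Duration current)
            => current;

        // Reset the time to live after a write.
        public Duration GetExpireAfterUpdate(string key, Session value, Duration current)
            => GetExpireAfterCreate(key, value);
    }

    public record Session(bool IsPremium);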
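
A TryRemove sketch against an IAsyncCache built with the builder's AsAsyncCache conversion; keys and values are illustrative.

    using System;
    using System.Collections.Generic;
    using System.Threading.Tasks;
    using BitFaster.Caching.Lru;

    var cache = new ConcurrentLruBuilder<int, string>()
        .WithCapacity(128)
        .AsAsyncCache()
        .Build();

    await cache.GetOrAddAsync(1, k => Task.FromResult("one"));
    await cache.GetOrAddAsync(2, k => Task.FromResult("two"));

    // Remove and observe the value that was removed.
    if (cache.TryRemove(1, out var removed))
    {
        Console.WriteLine(removed); // one
    }

    // Remove only when both key and value match (compare-and-remove).
    bool wasRemoved = cache.TryRemove(new KeyValuePair<int, string>(2, "two"));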
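
A dictionary sketch, assuming a GetOrAddAsync extension method over ConcurrentDictionary<K, AsyncAtomicFactory<K, V>>; the exact method name may differ.

    using System.Collections.Concurrent;
    using System.Threading.Tasks;
    using BitFaster.Caching.Atomic;

    var dictionary = new ConcurrentDictionary<int, AsyncAtomicFactory<int, string>>();

    // The factory runs atomically per key, and unlike AsyncLazy<T> a thrown
    // exception is not cached: the next caller simply retries.
    string value = await dictionary.GetOrAddAsync(1, async key =>
    {
        await Task.Delay(10); // stand-in for an expensive async lookup
        return key.ToString();
    });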

Full changelog: v2.3.3...v2.4.0

v2.3.3

11 Nov 21:36
532db75

What's changed

  • Eliminated all races in the ConcurrentLru eviction logic, and in the transition between the cold cache and warm cache eviction routines. This prevents a variety of rare 'off by one item count' situations that could needlessly evict items when the cache is within bounds.
  • Fix ConcurrentLru.Clear() to always clear the cache when items in the warm queue are marked as accessed.
  • Optimize ConcurrentLfu drain buffers logic to give ~5% better throughput (measured by the eviction throughput test).
  • Cache the ConcurrentLfu drain buffers delegate to prevent allocating a closure when scheduling maintenance.
  • BackgroundThreadScheduler and ThreadPoolScheduler now use TaskScheduler.Default, instead of implicitly using TaskScheduler.Current (fixes CA2008).
  • ScopedAsyncCache now internally calls ConfigureAwait(false) when awaiting tasks (fixes CA2007).
  • Fix ConcurrentLru debugger display on .NET Standard.

Full changelog: v2.3.2...v2.3.3

v2.3.2

25 Oct 00:31
1352584

What's changed

  • Fix ConcurrentLru NullReferenceException when expiring and disposing null values (i.e. the cached value is a reference type, and the caller cached a null value).
  • Fix ConcurrentLfu handling of updates to detached nodes, caused by concurrent reads and writes. Detached nodes could be re-attached to the probation LRU, pushing out fresh items prematurely; the detached nodes would eventually expire, since they could no longer be accessed.

Full changelog: v2.3.1...v2.3.2

v2.3.1

22 Oct 23:50
60e78bf

What's changed

  • Introduce a simple heuristic to estimate the optimal ConcurrentDictionary bucket count for ConcurrentLru/ConcurrentLfu/ClassicLru based on the capacity constructor argument (illustrated below this list). When the cache is at capacity, the ConcurrentDictionary will have a prime number bucket count and a load factor of 0.75.
    • When capacity is less than 150 elements, start with a ConcurrentDictionary capacity that is a prime number 33% larger than the cache capacity. The initial size is large enough to avoid resizing.
    • For larger caches, pick the initial ConcurrentDictionary size using a lookup table. The initial size is approximately 10% of the cache capacity, such that 4 ConcurrentDictionary grow operations arrive at a hash table size that is a prime number approximately 33% larger than the cache capacity.
  • SingletonCache sets the internal ConcurrentDictionary capacity to the next prime number greater than the capacity constructor argument.
  • Fix ABA concurrency bug in Scoped by changing ReferenceCount to use reference equality (via object.ReferenceEquals).
  • .NET6 target now compiled with SkipLocalsInit. Minor performance gains.
  • Simplified AtomicFactory/AsyncAtomicFactory/ScopedAtomicFactory/ScopedAsyncAtomicFactory by removing redundant reads, reducing code size.
  • ConcurrentLfu.Count no longer locks the underlying ConcurrentDictionary, matching ConcurrentLru.Count.
  • Use CollectionsMarshal.AsSpan to enumerate candidates within ConcurrentLfu.Trim on .NET6.
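
A hypothetical re-creation of the small-cache branch of the heuristic; NextPrime and IsPrime are assumed helpers, and the library's actual rounding and lookup table may differ.

    using System;

    int cacheCapacity = 128; // less than 150, so size directly
    int initialSize = NextPrime((int)(cacheCapacity * 1.33));

    // 128 * 1.33 ~= 170, and the next prime is 173, giving a load
    // factor of 128 / 173 ~= 0.74 when the cache is full.
    Console.WriteLine(initialSize); // 173

    static int NextPrime(int n)
    {
        while (!IsPrime(n)) n++;
        return n;
    }

    static bool IsPrime(int n)
    {
        if (n < 2) return false;
        for (int i = 2; i * i <= n; i++)
            if (n % i == 0) return false;
        return true;
    }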

Full changelog: v2.3.0...v2.3.1

v2.3.0

06 Oct 01:22
4330c16

What's changed

  • Align TryRemove overloads with ConcurrentDictionary for ICache (including WithAtomicGetOrAdd). This adds two new overloads (see the TryRemove sketch below this list):
    • bool TryRemove(K key, out V value) - enables getting the value that was removed.
    • bool TryRemove(KeyValuePair<K, V> item) - enables removing an item only when the key and value are the same.
  • Fix ConcurrentLfu.Clear() to remove all values when using BackgroundThreadScheduler. Previously, values could be left behind after clear was called, because removed items still present in the window/protected/probation structures polluted the list of candidates to remove.
  • Fix ConcurrentLru.Clear() to reset the isWarm flag. Cache warmup now behaves the same for a new instance of ConcurrentLru as for an existing instance that was full and then cleared. Previously, ConcurrentLru could have reduced capacity during warmup after calling clear, depending on the access pattern.
  • Add extension methods to make it more convenient to use AtomicFactory with a plain ConcurrentDictionary (see the dictionary sketch below this list). This is similar to storing a Lazy<T> instead of T, but with the same exception propagation semantics and API as ConcurrentDictionary.GetOrAdd.
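
A TryRemove sketch on a ConcurrentLru (which implements ICache); keys and values are illustrative.

    using System;
    using System.Collections.Generic;
    using BitFaster.Caching.Lru;

    var cache = new ConcurrentLru<int, string>(128);
    cache.AddOrUpdate(1, "one");
    cache.AddOrUpdate(2, "two");

    // Remove and retrieve in one call.
    if (cache.TryRemove(1, out var value))
    {
        Console.WriteLine(value); // one
    }

    // Succeeds only if the stored value still equals "two".
    bool removed = cache.TryRemove(new KeyValuePair<int, string>(2, "two"));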
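
A dictionary sketch, assuming a GetOrAdd extension method over ConcurrentDictionary<K, AtomicFactory<K, V>>; the exact method name may differ.

    using System.Collections.Concurrent;
    using BitFaster.Caching.Atomic;

    var dictionary = new ConcurrentDictionary<int, AtomicFactory<int, string>>();

    // Like storing Lazy<string>, but a factory exception is not cached and
    // the call site mirrors ConcurrentDictionary.GetOrAdd.
    string value = dictionary.GetOrAdd(1, key => key.ToString());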

Full changelog: v2.2.1...v2.3.0

v2.2.1

22 Aug 02:00
929b2cf

What's changed

  • Fix a ConcurrentLru bug where a repeated pattern of sequential key access could lead to unbounded growth.
  • Use Span APIs within MpscBoundedBuffer/StripedMpscBuffer/ConcurrentLfu on .NET6/.NETCore3.1 build targets. Reduces ConcurrentLfu lookup latency by about 5-7% in the lookup benchmark.

Full changelog: v2.2.0...v2.2.1

v2.2.0

29 Apr 22:42
73c4e1b

What's changed

  • Provide a new overload for ICache.GetOrAdd enabling the value factory delegate to accept an input argument:

    TValue GetOrAdd<TArg>(TKey key, Func<TKey,TArg,TValue> valueFactory, TArg factoryArgument)

    If additional data is required to construct/fetch cached values, this provides a mechanism to pass data into the factory without allocating a new closure on the heap. Passing a CancellationToken into an async value factory delegate is a common use case (see the sketch below this list).
  • Implement equivalent factory arg functionality for IAsyncCache, IScopedCache and IAsyncScopedCache.
  • To support different factory signatures without downstream code duplication, provide IValueFactory and IAsyncValueFactory value types.
  • Implement build time package validation to prevent breaking changes going forward. Fixed all breaking changes introduced since v2.0.0. The v2.2.0 .NET6 and .NETCore3.1 build targets are fully compatible with v2.0.0 and v2.1.0 without recompilation. Intermediate point updates since v2.1.0 may require recompilation. The .NET Standard 2.0 target is fully compatible with v2.0.0, but the ItemUpdated event and Updated metric are no longer included, since they break compatibility.
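
A sketch of the overload above; the static lambda and the format-string argument are illustrative, chosen to show that nothing is captured.

    using System;
    using BitFaster.Caching.Lru;

    var cache = new ConcurrentLru<int, string>(128);

    // "X8" is passed as the factory argument, so the static lambda
    // captures nothing and no closure is allocated on the heap.
    string hex = cache.GetOrAdd(255, static (key, format) => key.ToString(format), "X8");

    Console.WriteLine(hex); // 000000FF

The IAsyncCache, IScopedCache and IAsyncScopedCache equivalents accept the same trailing argument, which is the natural place to thread a CancellationToken through to an async value factory.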

Full changelog: v2.1.3...v2.2.0

v2.1.3

17 Mar 01:05
8b62b7b

What's changed

  • Fix bug preventing ConcurrentTLru from expiring items if the host machine runs for longer than 49 days (on .NET Core 3/.NET6 only). This was a regression introduced in v2.1.2.
  • TLRU TimeToLive is now validated for each policy implementation. This is a behavior change: invalid TTL values now throw ArgumentOutOfRangeException rather than silently setting an incorrect and/or negative TTL.

Full changelog: v2.1.2...v2.1.3

v2.1.2

11 Mar 01:32
96c5345

What's changed

  • Added an ItemUpdated event for all LRU classes, including the scoped and atomic cache decorators (see the sketch below this list).
  • ConcurrentTLru/FastConcurrentTLru now use a clock based on Environment.TickCount64 for the .NET Core 3 and .NET6 build targets, instead of Stopwatch.GetTimestamp. The smallest reliable time to live increases from about 1us to about 16ms (so precision is now worse), but the overhead of the TLRU policy drops significantly, from about 170% to 20%. This seems like a good tradeoff, since expiring items faster than 16ms is not common. .NET Standard continues to use the previous high resolution clock, since TickCount is only 32-bit on .NET Framework.
  • On .NET Core 3 and .NET6, LruBuilder will automatically fall back to the previous higher resolution clock if the specified TTL is less than 32ms.
  • Fixed Atomic cache count and enumeration methods such that partially created items are not visible externally. Count, enumerate and TryGet methods now all return consistent results if a factory delegate throws during item creation.
  • Fixed Atomic cache debug view, all caches now have a consistent debugger experience.
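
A sketch of subscribing to the new event; the Events.Value access path and the ItemUpdatedEventArgs member names (Key, OldValue, NewValue) are assumptions and may differ by version.

    using System;
    using BitFaster.Caching.Lru;

    var cache = new ConcurrentLru<int, string>(128);

    cache.Events.Value.ItemUpdated += (sender, e) =>
        Console.WriteLine($"{e.Key}: {e.OldValue} -> {e.NewValue}");

    cache.AddOrUpdate(1, "one");
    cache.AddOrUpdate(1, "uno"); // raises ItemUpdated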

Full changelog: v2.1.1...v2.1.2