
ConcurrentTLru

Alex Peck edited this page Nov 24, 2023 · 17 revisions

ConcurrentTLru implements expire after write: items are evicted a fixed duration after creation or the most recent update. As such, ConcurrentTLru is a time-aware least recently used (TLRU) cache.

Quickstart

The ConcurrentTLru API is almost identical to ConcurrentLru, with a couple of small additions to facilitate time-based expiry: the constructor takes a time to live, and expired items can be trimmed on demand.

Constructor

int capacity = 128;
TimeSpan ttl = TimeSpan.FromMinutes(5); // items expire after 5 minutes
var lru = new ConcurrentTLru<int, SomeItem>(capacity, ttl);

Removing Items

lru.Policy.ExpireAfterWrite.Value.TrimExpired(); // remove all expired items
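Since `ExpireAfterWrite` is exposed through an optional policy wrapper (hence the `.Value` above), defensive code can check that the policy is present before trimming. A minimal sketch, assuming the `HasValue`/`Value` shape implied by the call above:

```csharp
// TrimExpired removes expired items eagerly; useful when the cache is
// below capacity and stale entries would otherwise linger.
if (lru.Policy.ExpireAfterWrite.HasValue)
{
    lru.Policy.ExpireAfterWrite.Value.TrimExpired();
}
```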

How it works

ConcurrentTLru uses the same core algorithm as ConcurrentLru, described here. In addition, the TLru item eviction policy evaluates the age of an item on each lookup (i.e. TryGet/GetOrAdd/GetOrAddAsync), and if the item has expired it will be discarded at lookup time. Expired items can also be discarded if the cache is at capacity and new items are added. When a new item is added, existing items may transition from hot to warm to cold, and expired items can be evicted at these transition points.
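The lazy expiry described above can be observed directly. A minimal sketch, assuming the BitFaster.Caching package and using a deliberately short TTL so expiry is visible quickly:

```csharp
using System;
using System.Threading;
using BitFaster.Caching.Lru;

class LazyExpiryDemo
{
    static void Main()
    {
        // Short TTL purely for demonstration; capacity is illustrative.
        var lru = new ConcurrentTLru<int, string>(128, TimeSpan.FromMilliseconds(50));

        lru.GetOrAdd(1, k => "first");   // creates the item
        Thread.Sleep(100);               // wait past the TTL

        // The expired item is still resident (there is no background cleanup),
        // but it is discarded when this lookup evaluates its age:
        bool found = lru.TryGet(1, out _);            // false: expired, discarded

        // GetOrAdd also discards the expired item, then invokes the
        // value factory to create a replacement:
        var value = lru.GetOrAdd(1, k => "second");
    }
}
```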

Note that expired items are not eagerly evicted while the cache is below capacity, since there is no background thread performing cleanup. TLru therefore provides eventual consistency: stale items are not forcefully evicted until capacity is reached, or TrimExpired is called.
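Because there is no background cleanup thread, a caller that needs bounded staleness can schedule TrimExpired itself. A sketch using System.Threading.Timer; the one-minute period is an arbitrary choice for illustration, not a library default:

```csharp
using System;
using System.Threading;
using BitFaster.Caching.Lru;

class PeriodicTrim
{
    static void Main()
    {
        var lru = new ConcurrentTLru<int, string>(128, TimeSpan.FromMinutes(5));

        // Trim expired items once a minute; without this, stale items are
        // only removed when looked up or when capacity forces eviction.
        using var timer = new Timer(
            _ => lru.Policy.ExpireAfterWrite.Value.TrimExpired(),
            null,
            TimeSpan.FromMinutes(1),
            TimeSpan.FromMinutes(1));

        // ... application runs; disposing the timer stops the cleanup.
    }
}
```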

Why is ConcurrentTLru slower than ConcurrentLru?

On every lookup, item age is calculated and compared to the time to live (TTL). If the item has expired it is discarded, and in the case of GetOrAdd the value factory is invoked to create a new item. This results in an API call to determine the current time.

To get the lowest lookup latency without sacrificing correctness, each build target uses a different underlying time API:

  • On the .NET Core 3.0 and .NET 6.0 build targets, Environment.TickCount64 is the default clock. This has significantly lower latency but is less precise than Stopwatch.GetTimestamp.
  • On the .NET Standard build target, Stopwatch.GetTimestamp is used by default since the 32-bit Environment.TickCount will become unreliable after about 49.7 days.

These defaults were chosen based on the characteristics of the four different time APIs in .NET:

  • DateTime.UtcNow/DateTimeOffset.UtcNow, which is based on the GetSystemTimeAsFileTime() WinAPI function and has a precision of about 15ms and a call latency of about 25ns. The system clock can be periodically adjusted by synchronizing with an external time source, so this value is subject to being reset.
  • Environment.TickCount, which is based on the GetTickCount() WinAPI function, has a precision of 15.6ms and about 1.5ns overhead per call, but is represented by a 32-bit integer (so it wraps after about 49.7 days).
  • Environment.TickCount64, the 64-bit version of Environment.TickCount added in .NET Core 3.0. It has the low latency of the 32-bit API, but takes 292,277,266 years to wrap around, so for practical purposes it will never wrap.
  • Stopwatch.GetTimestamp, which is based on the QueryPerformanceCounter() WinAPI function, has a precision of under 1us and a call latency of about 15ns. QPC is independent of, and isn't synchronized to, any external time reference.
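The two low-level clocks are straightforward to compare directly. A small sketch measuring the same elapsed interval both ways, converting Stopwatch ticks to milliseconds via Stopwatch.Frequency:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class ClockDemo
{
    static void Main()
    {
        long swStart = Stopwatch.GetTimestamp();   // QPC-based, sub-microsecond precision
        long tcStart = Environment.TickCount64;    // GetTickCount-based, ~15.6ms precision

        Thread.Sleep(100);

        // Stopwatch ticks are in units of Stopwatch.Frequency per second.
        double swElapsedMs = (Stopwatch.GetTimestamp() - swStart) * 1000.0 / Stopwatch.Frequency;

        // TickCount64 is already in milliseconds.
        long tcElapsedMs = Environment.TickCount64 - tcStart;

        Console.WriteLine($"Stopwatch: {swElapsedMs:F2}ms, TickCount64: {tcElapsedMs}ms");
    }
}
```

Both report roughly 100ms here, but the TickCount64 result is quantized to the ~15.6ms tick interval while the Stopwatch result is not.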

This benchmark result gives an overview of relative call latency:

[Chart: BitFaster.Caching TimeBenchmarks, comparing call latency of the four time APIs]

On .NET Standard, it is possible to switch to Environment.TickCount by creating a TLRU with TickCountLruItem and TLruTicksPolicy like this:

public sealed class CustomTLru<K, V> : ConcurrentLruCore<K, V, TickCountLruItem<K, V>, TLruTicksPolicy<K, V>, TelemetryPolicy<K, V>>
{
    public CustomTLru(int capacity, TimeSpan timeToLive)
        // Environment.ProcessorCount is a reasonable default concurrency level
        : base(Environment.ProcessorCount, capacity, EqualityComparer<K>.Default, new TLruTicksPolicy<K, V>(timeToLive), default)
    {
    }
}
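Assuming the class above compiles against the library's generic ConcurrentLruCore, it is used like any other cache:

```csharp
// CustomTLru is the wrapper class defined above.
var lru = new CustomTLru<int, string>(128, TimeSpan.FromMinutes(5));
var value = lru.GetOrAdd(1, k => "cached");
```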