Separate cache items by source? #101

Open
pprochnau-ipipeline opened this issue Mar 20, 2024 · 1 comment
Labels
question (Further information is requested)

Comments

@pprochnau-ipipeline

I just started using your library and like it a lot. However, I found a scenario that I'm not sure how to handle.

Environment: We have a number of APIs that service multiple databases, selected based on data in the request. We cache certain data in memory from each of the databases, and currently we include the database identifier as part of the cache key.
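
For reference, the keys look roughly like this (the type and method names below are made up for illustration; only the "DB id embedded in the key" part reflects what we actually do):

using System;
using FastCache;

// Illustrative only: the database identifier is embedded in the cache key.
string dbId = "db1";
int customerId = 42;
string key = $"{dbId}:customer:{customerId}";

if (!Cached<Customer>.TryGet(key, out var cached))
{
    // Cache miss: load from the DB indicated by the request and store it.
    cached.Save(LoadCustomer(dbId, customerId), TimeSpan.FromMinutes(10));
}

// Stand-in for the real database call.
static Customer LoadCustomer(string db, int id) => new(id, "example");

record Customer(int Id, string Name);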

The issue arises when we learn from the DB that we need to flush our cache. I can't think of a way to clear only the entries for that specific DB, other than iterating through the entire cache and checking each key to see whether it matches the DB I'm interested in.

Is there a way to isolate (or identify) entries from a specific source? E.g., have a cache for DB1, another for DB2, etc., so that I can easily flush one of them.

Thanks for your work on this library!

@neon-sunset
Owner

neon-sunset commented Apr 10, 2024

Hi Pete,

Because this is a static memory-side random access cache, where cache entries are stored in a "flat" way in the underlying store (NonBlocking.ConcurrentDictionary), there is no way to offer querying capabilities without them being a performance trap.

Overall, I think the static nature of the cache was a design mistake: it solved well the specific problem this cache was written for, but, as evidenced by your issue and others, it proves problematic in scenarios involving e.g. multi-tenancy. Were that not the case, you would have been able to simply separate cache instances by tenant, access domain, etc. At some point I want to go back and do a version 2 that incorporates the lessons learned from library adoption.

For the time being, there are two options with different tradeoffs that might work:

Use a separate per-DB ConcurrentQueue of entry keys, which are dequeued and used for lookup + remove on a per-DB cache flush (a fuller sketch is at the end of this comment)

using System.Collections.Concurrent;
using FastCache.Collections;

// ConcurrentQueue is the fastest concurrent list-like type; you can also drain it with a while (...TryDequeue(...)) loop
var db1Entries = new ConcurrentQueue<EntryKey>();
// ... enqueue the key of each entry cached for DB1 as it is written ...
CachedRange<EntryValue>.Remove(db1Entries);
db1Entries.Clear();

Enumerate all cache entries and delete the ones that match the criteria

using FastCache.Services;

foreach (var cached in CacheManager.EnumerateEntries<EntryKey, EntryValue>())
{
    if (cached.Key.TenantId == tenantId)
    {
        cached.Remove();
    }
}

The first option trades extra memory (the per-DB key queues) for not having to traverse the entire cache; the second keeps no extra state but scans every entry on each flush.
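
To make the first option more concrete, here is a rough sketch of where the keys would get enqueued at write time. The User type, LoadUser, and the key format are made up for illustration, and it assumes the usual Cached<V>.TryGet/Save pattern; adapt it to your actual entry types:

using System;
using System.Collections.Concurrent;
using FastCache;
using FastCache.Collections;

static class PerDbCache
{
    // One queue of cache keys per database identifier (illustrative).
    static readonly ConcurrentDictionary<string, ConcurrentQueue<string>> KeysByDb = new();

    public static User GetUser(string dbId, int userId)
    {
        var key = $"{dbId}:user:{userId}";
        if (Cached<User>.TryGet(key, out var cached))
        {
            return cached;
        }

        var user = LoadUser(dbId, userId); // hypothetical DB call

        // Record the key so a later per-DB flush can remove just this DB's entries.
        KeysByDb.GetOrAdd(dbId, _ => new()).Enqueue(key);

        return cached.Save(user, TimeSpan.FromMinutes(5));
    }

    public static void FlushDb(string dbId)
    {
        if (KeysByDb.TryRemove(dbId, out var keys))
        {
            // Removes only the entries whose keys were recorded for this DB.
            CachedRange<User>.Remove(keys);
        }
    }

    // Stand-in for the real database call.
    static User LoadUser(string dbId, int userId) => new(userId, $"user from {dbId}");
}

record User(int Id, string Name);

Note that the queue only grows between flushes, so if entries can also expire on their own you will accumulate some stale keys until the next flush; removing a key that has already been evicted is harmless.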

@neon-sunset added the question label on May 5, 2024