
[MemoryCache] Add possibility to disable linked cache entries #45592

Closed
adamsitnik opened this issue Dec 4, 2020 · 6 comments · Fixed by #57648
Labels: api-approved (API was approved in API review, it can be implemented) · area-Extensions-Caching · tenet-performance (Performance related issue)

@adamsitnik (Member)

Background and Motivation

MemoryCache by default tracks linked cache entries so that options such as expiration tokens propagate from child entries to their parents.

In the following example, the expirationToken added to the child entry is also applied to the parent:

using (var parent = cache.CreateEntry(key))
{
    parent.SetValue(obj);

    using (var child = cache.CreateEntry(key1))
    {
        child.SetValue(obj);
        child.AddExpirationToken(expirationToken);
    }
}

Tracking is implemented internally with AsyncLocal&lt;T&gt;, which is expensive and adds non-trivial overhead. Its cost has shown up in performance reports we have received from customers (#45436).

Currently, this feature cannot be disabled.
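The tracking cost comes from flowing a "current entry" scope across async calls so that nested CreateEntry calls can find their parent. A minimal sketch of the general pattern (illustrative only; the names and the actual CacheEntry implementation differ):

```csharp
using System;
using System.Threading;

// Illustrative sketch of AsyncLocal-based scope tracking. Every read and
// write of an AsyncLocal<T> touches the ExecutionContext, which is what
// shows up in the performance traces.
internal static class EntryScope
{
    private static readonly AsyncLocal<object?> _current = new AsyncLocal<object?>();

    public static object? Current => _current.Value;

    public static IDisposable Enter(object entry)
    {
        var parent = _current.Value;
        _current.Value = entry;      // push: nested entries see us as parent
        return new Scope(parent);
    }

    private sealed class Scope : IDisposable
    {
        private readonly object? _parent;
        public Scope(object? parent) => _parent = parent;
        public void Dispose() => _current.Value = _parent; // pop on dispose
    }
}
```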

Proposed API

As suggested by @Tratcher in #45436 (comment), we can extend the existing MemoryCacheOptions with a flag that disables this behavior.

Naming is hard and the best idea I currently have is TrackLinkedCacheEntries:

namespace Microsoft.Extensions.Caching.Memory
{
    public class MemoryCacheOptions
    {
+        public bool TrackLinkedCacheEntries { get; init; } = true;
    }
}

I am open to suggestions.

Usage Examples

var options = new MemoryCacheOptions
{
    TrackLinkedCacheEntries = false
};

var cache = new MemoryCache(options);
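With tracking disabled, an expiration token added to a child entry no longer propagates to its parent. A hedged sketch of the observable difference (assuming a CancellationChangeToken from Microsoft.Extensions.Primitives as the expiration token):

```csharp
using System.Threading;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Primitives;

var cts = new CancellationTokenSource();
var cache = new MemoryCache(new MemoryCacheOptions { TrackLinkedCacheEntries = false });

using (var parent = cache.CreateEntry("parent"))
{
    parent.SetValue("parent-value");

    using (var child = cache.CreateEntry("child"))
    {
        child.SetValue("child-value");
        child.AddExpirationToken(new CancellationChangeToken(cts.Token));
    }
}

// With tracking enabled (the default), cancelling the token would expire
// both entries; with TrackLinkedCacheEntries = false, only "child" expires.
cts.Cancel();
```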

Alternative Designs

As an alternative, we could add a new method to MemoryCache that would allow creating cache entries without tracking:

namespace Microsoft.Extensions.Caching.Memory
{
    public class MemoryCache
    {
        public ICacheEntry CreateEntry(object key); // existing method
+        public ICacheEntry CreateUnlinkedEntry(object key);
    }
}

But there are a lot of existing extension methods that allow adding new cache entries to the cache, and we would most probably need to add new overloads for them...

Risks

Introducing this API carries no risk as long as we don't change the default (tracking remains enabled by default).

cc @eerhardt @maryamariyan @davidfowl

@adamsitnik adamsitnik added api-suggestion Early API idea and discussion, it is NOT ready for implementation tenet-performance Performance related issue area-Extensions-Caching labels Dec 4, 2020
@adamsitnik adamsitnik added this to the 6.0.0 milestone Dec 4, 2020
@adamsitnik adamsitnik self-assigned this Dec 4, 2020
@ghost

ghost commented Dec 4, 2020

Tagging subscribers to this area: @eerhardt, @maryamariyan
See info in area-owners.md if you want to be subscribed.


@Dotnet-GitSync-Bot Dotnet-GitSync-Bot added the untriaged New issue has not been triaged by the area owner label Dec 4, 2020
@Tratcher (Member)

Tratcher commented Dec 7, 2020

Note: @davidfowl asked for this to be disabled by default, allowing the few people who actually need the feature to opt in via the new setting. Not sure if that meets the compat bar.

@NickCraver (Member)

NickCraver commented Dec 9, 2020

For what it's worth, at scale we found that a lot of the weight of our cache (somewhere in the 30-40% range) was overhead of the CacheEntry object itself. The current per-entry storage on CacheEntry is:

        // static
        private static readonly Action<object> ExpirationCallback = ExpirationTokensExpired;

        // per entry
        private readonly object _lock = new object();
        private readonly MemoryCache _cache;
        private IList<IDisposable> _expirationTokenRegistrations;
        private IList<PostEvictionCallbackRegistration> _postEvictionCallbacks;
        private IList<IChangeToken> _expirationTokens;
        private TimeSpan? _absoluteExpirationRelativeToNow;
        private TimeSpan? _slidingExpiration;
        private long? _size;
        private CacheEntry _previous;
        private object _value;
        private int _state;
        public DateTimeOffset? AbsoluteExpiration { get; set; }

In our slimmed-down version, which removes a lot of the callback weight we weren't using anywhere (in the vein of this issue), we narrowed it down to:

        // static
        private static long s_currentDateIshTicks = DateTime.UtcNow.Ticks;
        private static readonly Timer ExpirationTimeUpdater = new Timer(state => s_currentDateIshTicks = DateTime.UtcNow.Ticks, null, 1000, 1000);

        // per entry
        private long _absoluteExpirationTicks;
        private int _accessCount;
        private readonly uint _slidingSeconds;
        public object Value { get; }

Note: we actually added _accessCount here to evaluate unused cache entries, which was a valuable feature for seeing where a lot of unused waste was. Happy to share code on all those bits (cc @mgravell); having a dashboard for this in the running application, showing where we can easily slim down further, is something we found very valuable.

When you're storing a lot of small values like integers or booleans, the reference overhead is quite large relative to the data itself. What is the thinking on how to reduce that? Note that most of it is inherent to the ICacheEntry interface, so a break means a much larger change... which is the route we ended up taking. Happy to help however we can here, and we appreciate the kick on this issue!
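The unused-entry evaluation described above could be approximated on top of the public IMemoryCache API; a hypothetical sketch (this wrapper and its names are illustrative, not the code referenced in the comment):

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Extensions.Caching.Memory;

// Hypothetical wrapper that counts reads per key, so a dashboard can
// surface entries that are written but rarely (or never) read back.
public sealed class CountingCache
{
    private readonly IMemoryCache _cache;
    private readonly ConcurrentDictionary<object, long> _reads = new();

    public CountingCache(IMemoryCache cache) => _cache = cache;

    public bool TryGetValue<T>(object key, out T? value)
    {
        _reads.AddOrUpdate(key, 1, static (_, n) => n + 1);
        return _cache.TryGetValue(key, out value);
    }

    // Keys requested fewer than `threshold` times are candidates for
    // removal from the caching layer entirely.
    public IEnumerable<KeyValuePair<object, long>> ColdKeys(long threshold) =>
        _reads.Where(kv => kv.Value < threshold);
}
```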

@adamsitnik (Member, Author)

@NickCraver thanks for the very valuable feedback! I've already removed four fields from CacheEntry (#45410), but it looks like we still have work to do here. I'll try to come up with more ideas.

@adamsitnik (Member, Author)

For what it's worth, at scale we found a lot of the weight of our cache (somewhere in the 30-40% range) was overhead of the CacheEntry object itself

@NickCraver I've sent #45962 to minimize the overhead (280 -> 224 bytes per entry). I'll try to work out whether the nullable date-related fields could be replaced with a single struct. It would be much easier if I could introduce breaking changes ;)

@maryamariyan maryamariyan removed this from the 6.0.0 milestone Feb 3, 2021
@maryamariyan maryamariyan added api-ready-for-review API is ready for review, it is NOT ready for implementation and removed api-suggestion Early API idea and discussion, it is NOT ready for implementation untriaged New issue has not been triaged by the area owner labels Mar 4, 2021
@maryamariyan maryamariyan added this to the Future milestone Mar 4, 2021
@ghost ghost moved this from Untriaged to Future in ML, Extensions, Globalization, etc, POD. Mar 4, 2021
@adamsitnik adamsitnik modified the milestones: Future, 7.0.0 Jul 22, 2021
@ghost ghost moved this from Future to Untriaged in ML, Extensions, Globalization, etc, POD. Jul 22, 2021
@maryamariyan maryamariyan moved this from Untriaged to 7.0.0 in ML, Extensions, Globalization, etc, POD. Aug 4, 2021
@bartonjs (Member)

Since this is proposing a breaking change (making non-tracked the default), going with one centralized option seems the most sensible. An overload like MemoryCache.CreateEntry(object key, bool linked) can be added in the future if entry-level support feels warranted.

namespace Microsoft.Extensions.Caching.Memory
{
    partial class MemoryCacheOptions
    {
        public bool TrackLinkedCacheEntries { get; set; } = false;
    }
}
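Since the approved shape defaults to false, tracking becomes opt-in; code that relies on linked-entry option propagation has to enable it explicitly:

```csharp
using Microsoft.Extensions.Caching.Memory;

// Opt back in to linked-entry tracking; with the approved default of
// false, the AsyncLocal bookkeeping is skipped entirely.
var cache = new MemoryCache(new MemoryCacheOptions
{
    TrackLinkedCacheEntries = true
});
```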

@bartonjs bartonjs added api-approved API was approved in API review, it can be implemented and removed api-ready-for-review API is ready for review, it is NOT ready for implementation labels Aug 10, 2021
@ghost ghost added the in-pr There is an active PR which will close this issue when it is merged label Aug 18, 2021
ML, Extensions, Globalization, etc, POD. automation moved this from 7.0.0 to Done Sep 14, 2021
@ghost ghost removed the in-pr There is an active PR which will close this issue when it is merged label Sep 14, 2021
@dotnet dotnet locked as resolved and limited conversation to collaborators Nov 3, 2021