PROPOSAL: Cache policy #136

Closed
reisenberger opened this Issue Jul 11, 2016 · 76 comments

7 participants
@reisenberger
Member

reisenberger commented Jul 11, 2016

Proposal: Cache policy

Purpose

To provide the ability to serve results from a cache, instead of executing the governed func.

Scope

  • A non-exception handling policy.
  • All sync and async variants
  • Only result-returning (Func<TResult>), not void (Action) executions.

Configuration

The cache item expiry duration would be configured on the CachePolicy at configuration time, not passed at execution time:

  • This keeps CachePolicy in line with the Polly model, where the behavioural characteristics of policies are defined at configuration time, not call time. This makes for a cache policy with given behaviour which can be shared across calls.
  • Adding bespoke .Execute() overloads on CachePolicy would present problems for the integration of CachePolicy into PolicyWrap. (The PolicyWrap (was Pipeline) feature in its proposed form requires that all policies have common .Execute() etc overloads.)

Configuration syntax

For Policy<TResult> - default cache

CachePolicy<TResult> cachePolicy = Policy
  .Cache<TResult>(TimeSpan slidingExpirationTimespan);

For Policy<TResult> - advanced cache

Users may want more control over caching characteristics or to use an alternative cache provider (Http cache or third-party). The following overload is also proposed:

CachePolicy<TResult> cachePolicy = Policy
  .Cache<TResult>(IResultCacheProvider<TResult> cacheProvider);

where:

// namespace Polly.Cache
interface IResultCacheProvider<TResult>
{
   TResult Get(Context context);
   void    Put(Context context, TResult result);
}
  • The Context itself is not the key; it is an execution context that travels with each Execute invocation on a Polly policy. Implementations should derive a cache key to use from elements in the Context. The usual cache key would be Context.ExecutionKey. See #139
  • Basing the IResultCacheProvider Get/Put signatures around Context rather than a string cacheKey allows implementers power to develop a more complex caching strategy around other keys or user information on the Context.
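
To make the interface concrete, a minimal sketch of the MemoryCacheProvider<TResult> described under "Default implementation" below might look as follows. Illustrative only, not a definitive implementation; it assumes Context.ExecutionKey (keys proposal #139) supplies the cache key:

```csharp
// Sketch only - assumes the proposed IResultCacheProvider<TResult> interface
// and that Context.ExecutionKey supplies the cache key (see #139).
using System;
using System.Runtime.Caching;

namespace Polly.Cache
{
    public class MemoryCacheProvider<TResult> : IResultCacheProvider<TResult>
    {
        private readonly MemoryCache _cache;
        private readonly TimeSpan _slidingExpiration;

        public MemoryCacheProvider(MemoryCache cache, TimeSpan slidingExpiration)
        {
            _cache = cache;
            _slidingExpiration = slidingExpiration;
        }

        public TResult Get(Context context)
        {
            object value = _cache.Get(context.ExecutionKey);
            // A cache miss yields default(TResult); a value of the wrong type
            // throws InvalidCastException, per the Operation section.
            return value == null ? default(TResult) : (TResult)value;
        }

        public void Put(Context context, TResult result)
        {
            _cache.Set(context.ExecutionKey, result,
                new CacheItemPolicy { SlidingExpiration = _slidingExpiration });
        }
    }
}
```

(One design point visible in the sketch: with default(TResult) signalling a miss, a genuinely-cached default value is indistinguishable from a miss; a TryGet-style signature could avoid that ambiguity.)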

Example execution syntax

// TResult form
TResult result = cachePolicy
  .Execute(Func<TResult> executionFunc, new Context(executionKey)); // The executionKey is the cacheKey.  See keys proposal.

(and other similar existing overloads taking a Context parameter)
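
For illustration, configuration and execution might read together as below. serviceClient/GetUserProfile are hypothetical placeholders, and the Context(executionKey) constructor is assumed from the keys proposal:

```csharp
// Configure once; the policy can then be shared across call sites.
CachePolicy<string> cachePolicy = Policy
    .Cache<string>(TimeSpan.FromMinutes(5));

// First call with this key invokes the delegate and caches the result;
// later calls within the sliding expiration return the cached value.
string profile = cachePolicy.Execute(
    () => serviceClient.GetUserProfile(userId),  // hypothetical governed func
    new Context("GetUserProfile"));              // the executionKey is the cacheKey
```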

Default implementation

The proposed implementation for the simple cache configuration overload is to use System.Runtime.Caching.MemoryCache.Default and the configured TimeSpan slidingExpirationTimespan to create a Polly.Cache.MemoryCacheProvider<TResult> : Polly.Cache.IResultCacheProvider<TResult>.

Operation

  • Checks the cache for a value stored under the cache key; returns it if so.
    • Throws an InvalidCastException if the value in the cache cannot be cast to TResult
  • Invokes executionFunc if (and only if) a value could not be returned from cache.
  • Before returning a result from a non-faulting invocation of executionFunc, caches it under the given cache key for the given timespan.
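
The bullets above could be sketched as the policy's internal execution flow. Illustrative pseudocode only (the method name is invented, and it glosses over how a cache miss is distinguished from a cached default value):

```csharp
// Hypothetical internal flow of CachePolicy.Execute - not the actual implementation.
internal TResult CacheEngineExecute(Func<TResult> executionFunc, Context context)
{
    // 1. Check the cache; may throw InvalidCastException on a type mismatch.
    TResult cached = _cacheProvider.Get(context);
    if (!EqualityComparer<TResult>.Default.Equals(cached, default(TResult)))
        return cached;

    // 2. Cache miss: invoke the governed func.
    TResult result = executionFunc();

    // 3. Cache the result of a non-faulting invocation. A thrown exception
    //    propagates before this line, so faults are never cached.
    _cacheProvider.Put(context, result);
    return result;
}
```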

Comments?

Comments? Alternative suggestions? Extra considerations to bear in mind?

Have scenarios to share where you'd use this? (could inform development)

@SeanFarrow

SeanFarrow commented Jul 11, 2016

Are the ExecuteAndCapture variants going to be supported?

@reisenberger

reisenberger commented Jul 11, 2016

Are the ExecuteAndCapture variants going to be supported?

Yes - good catch. And since the implementation obviously checks for a cached value before executing the user delegate, it makes no difference to the operation of the CachePolicy whether any fault from the func is rethrown/passed to more outer layers, or captured.

@SeanFarrow

SeanFarrow commented Jul 13, 2016

I'm happy to work on this, as we will need this for a project I'm working on.

@reisenberger

reisenberger commented Jul 13, 2016

@SeanFarrow Sounds good! Many thanks for your involvement and offering to work up a PR on this! Please do come back to me with questions / comments etc as they arise.

@SeanFarrow

SeanFarrow commented Jul 13, 2016

Ok, I will. I’m able to start working on this at the end of the month; I’ve got a major project to finish first!
What caches do we want? I’m thinking Redis/Memcached and the .NET memory cache, as well as maybe a disc-based cache, with the memory cache being the default. Any other caches/thoughts?

@reisenberger

reisenberger commented Jul 14, 2016

I’m able to start working on this at the end of the month

Sure @SeanFarrow. Thanks for all your support and involvement!

What caches do we want? I’m thinking Redis/Memcached and the .NET memory cache,
as well as maybe a disc-based cache, with the memory cache being the default.
Any other caches/thoughts?

All sound like good options!

And with the proposed IResultCacheProvider<TResult> interface, people can of course easily implement others.

Given we'd likely want to avoid taking dependencies on all these in the main Polly package, the individual cache implementations (default memory cache excepted) would probably go out as separate NuGet packages Polly.Cache.Redis, Polly.Cache.Memcached etc, do you think? Unless they were each individually so small (in terms of code lines) that providing a wiki page for each was just as much an option - to decide later?

@reisenberger

reisenberger commented Jul 14, 2016

@community : other caches you'd like to see supported?

@SeanFarrow

SeanFarrow commented Jul 14, 2016

Definitely separate NuGet packages. That makes more sense in terms of dependencies, as each cache is likely to depend on other NuGet packages. Also, they can then be updated out of band of the main Polly package, assuming of course that the interface doesn’t change!

I’m also thinking about a Polly.Cache.Core package containing the interface and the memory cache; as per my understanding, the memory cache is baked in to the .NET Framework—correct me if I’m wrong!

In terms of other caches we may want to support, maybe Azure Cache / Amazon ElastiCache; I need to check whether the latter has an API specifically, or whether it’s just Memcached-compatible.

@reisenberger

reisenberger commented Jul 14, 2016

@SeanFarrow All sounds good.

Only thought: maybe the default MemoryCache option (and IResultCacheProvider<TResult> interface) can just be part of the main Polly package, so that the CachePolicy (based on the MemoryCache default) works out-of-the-box with just the main Polly package. Yes, MemoryCache is part of the runtime at System.Runtime.Caching.MemoryCache

@SeanFarrow

SeanFarrow commented Jul 14, 2016

Ok, fair point, we’ll go with that then!

@SeanFarrow

SeanFarrow commented Jul 17, 2016

What version of .NET are we supporting? I notice that Polly supports .NET 3.5, but the memory cache is 4.0+.
Do people see this as an issue? If so, what should we do for caching in .NET 3.5?

@reisenberger

reisenberger commented Jul 17, 2016

Good question. Per #142 we will discontinue .NET 3.5 support from Polly v5.0.0, as a number of the other new policies require facilities not in .NET 3.5 either.

@SeanFarrow

SeanFarrow commented Jul 17, 2016

Ok, cool, can you assign me to the cache policy then?

@reisenberger

reisenberger commented Jul 17, 2016

Hey @SeanFarrow. Hmm. Seems from the GitHub instructions I can't add you with the assignees button (same for Jerome and Bruno) because its scope is limited to AppvNext org members (and since AppvNext is also broader than Polly, it's not my position just to add you to AppvNext). No reflection on the importance of your contribution to Polly (great to have you involved!). Consider this assigned. I've added the in progress label to indicate that it is spoken for!

@SeanFarrow

SeanFarrow commented Jul 17, 2016

Ok, thanks!

@SeanFarrow

SeanFarrow commented Jul 17, 2016

Given we're supporting async variants of execute, should we have an async cache provider as well?
Also, if the cache doesn't support synchronous functionality, what should our default position be?

@reisenberger

reisenberger commented Jul 18, 2016

Hey @SeanFarrow Great questions. What were your thoughts?

Some thoughts:

Given we're supporting async variants of execute, should we have an async cache provider as well?

[1] Yes, good call. Polly having an async cache provider, so that async executions through Polly can take advantage of cases where 3rd-party caches have (mature/stable) async APIs –> we should do that. Feels like two separate interfaces in Polly for sync and async providers, ie IResultCacheProvider<TResult> and IResultCacheProviderAsync<TResult>? Is that what you were thinking? Async interface something like:

// namespace Polly.Cache
interface IResultCacheProviderAsync<TResult>
{
   Task<TResult> GetAsync(Context context);
   Task          PutAsync(Context context, TResult result);
}

NB If you think the design of the IResultCacheProvider/Async<TResult> interfaces can be refined, feel free to say.

[2] The config overloads .CacheAsync<TResult>(...) configuring Polly’s async cache policies should probably provide options to take either a sync or an async cache provider, though. Because there might be some cache providers which only have sync APIs but we still want to offer them to async cache policies? (MemoryCache.Default is in this category?)

So:

// Async policy taking async cache provider
CachePolicy<TResult> asyncCachePolicy = Policy
  .CacheAsync<TResult>(IResultCacheProviderAsync<TResult> cacheProviderAsync);
// Async policy taking sync cache provider
CachePolicy<TResult> asyncCachePolicy = Policy
  .CacheAsync<TResult>(IResultCacheProvider<TResult> cacheProvider);

Re:

if the cache doesn't support synchronous functionality, what should our default position be?

Again, really interested to hear your views. Thinking aloud from my side:

[3] The config overloads configuring Polly's sync cache policy should probably only take sync cache providers. IE just:

// Sync policy taking sync cache provider
CachePolicy<TResult> cachePolicy = Policy
  .Cache<TResult>(IResultCacheProvider<TResult> cacheProvider);

(The opposite – providing an overload for a sync cache policy taking an async provider – feels like creating a potentially confusing API. Particularly, there’s a risk people would mistake that syntax for giving them the benefits of async behaviour, when it’d not be: it’d have to be blocking on the calls to the async cache provider to bring it into a sync policy/sync call, no?)

[4] But we could allow the use of third-party caches with async-only interfaces, in Polly’s sync CachePolicys, if desired, by providing an implementation fulfilling Polly’s IResultCacheProvider<TResult> sync interface, just .Wait()-ing (or equiv) on the async calls. (And NB documenting that this is what it does!). Arguments for / against doing that? Doing it that way round, at least there’s no mistaking from the API we provide that you’re getting blocking/sync behaviour.

Hmm. The choice of C# clients for some of these caches has moved on since I was last involved, in some cases. Which of the 3rd-party caches currently offer an async-only (no sync) API?

Thoughts on all this?
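
Point [4] might be sketched as a thin blocking adapter. Illustrative only; the class name is hypothetical, and GetAsync/PutAsync are assumed to return Task<TResult>/Task as agreed later in this thread:

```csharp
// Hypothetical adapter: fulfils the sync IResultCacheProvider<TResult>
// interface over an async-only provider by blocking on the async calls.
// NB deliberately (and documentedly) blocking - callers get sync semantics only.
public class SyncOverAsyncCacheProvider<TResult> : IResultCacheProvider<TResult>
{
    private readonly IResultCacheProviderAsync<TResult> _asyncProvider;

    public SyncOverAsyncCacheProvider(IResultCacheProviderAsync<TResult> asyncProvider)
    {
        _asyncProvider = asyncProvider;
    }

    public TResult Get(Context context)
    {
        // GetAwaiter().GetResult() rather than .Result/.Wait(), to surface the
        // original exception rather than an AggregateException wrapper.
        return _asyncProvider.GetAsync(context).GetAwaiter().GetResult();
    }

    public void Put(Context context, TResult result)
    {
        _asyncProvider.PutAsync(context, result).GetAwaiter().GetResult();
    }
}
```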


@SeanFarrow

SeanFarrow commented Jul 18, 2016

Ok, thinking out loud:
In my mind the AsyncCachePolicy should return a Task<TResult> and a Task for the get/put methods respectively.
Agree with 2, 3 and 4.
I haven’t checked specifics as yet, but generally anything cloud-based will offer async and may offer sync, but they are moving towards the former only fairly rapidly.

Also, whilst I think about it, how do we want to handle the conversion from the cache to the TResult type?
Sometimes it may not be as straightforward as doing new T; should we offer the capability to define a delegate/lambda, or a conversion interface?

I’ve got a situation for example where I’m storing a base64-encoded compressed file (zip in this case), so I can’t just do new ZipArchive, or the equivalent; it needs an extra processing step!
Also, this may be valid if you are storing the content of a web response as an array of bytes.
Thoughts…?

@reisenberger

reisenberger commented Jul 18, 2016

In my mind the AsyncCachePolicy should
return a Task<TResult> and a Task for the get/put methods respectively.

Oops on my part: yes definitely!

(more on the other question later)

@reisenberger

reisenberger commented Jul 18, 2016

Hey @SeanFarrow . Great to have all this on the cache policy!

Re:

how do we want to handle the conversion from the cache to the TResult type?
[...] should we offer the capability to define a delegate/lambda, or a conversion interface?

Where were you thinking this would sit in the architecture? As part of the CachePolicy configuration overloads, or in the IResultCacheProvider implementations?

My instinct is to keep the main CachePolicy configuration overloads simple-as-possible, ie we have the TimeSpan varieties plus:

.Cache<TResult>(IResultCacheProvider<TResult> cacheProvider) [and]
.CacheAsync<TResult>(IResultCacheProviderAsync<TResult> cacheProviderAsync)

rather than extend those with additional:

.Cache<TResult, TCachedFormat>(IResultCacheProvider<TResult> cacheProvider, ICacheValueFormatter<TResult, TCachedFormat> cacheValueFormatter) [etc]

(The formatter probably makes sense for some kinds of cache but not others. And: IResultCacheProvider<TResult> cacheProvider feels like the correct scope of interface to configure a CachePolicy ... for the policy to use the cache, all you need to know is that you can get and put in and out of it ... if some cache implementations prefer to compress or map to a more cloud-friendly format, that feels like a cache implementation concern. )

So thinking of it structurally as a cache implementation concern, my instinct is for the transform-for-caching functionality being part of IResultCacheProvider/Async implementations where needed, config'd on them where needed.

Sound sensible? / can you see disadvantages? / or just stating the obvious??

👍 re conversion interface. If we went as above ... and if there were a group of cache implementations (cloud caches?) where this approach might be particularly useful, one could still eg structure that with an abstract base class taking a conversion interface like you say, and some cache implementations deriving from that ...

Further thoughts? (You're deep in and may see other angles!)


@SeanFarrow

SeanFarrow commented Jul 18, 2016

I agree with you re scoping.
It may be that certain keys are compressed and others are serialized in different ways, so we may not be able to use a base class here; we could put an ICacheOutputConverter interface as part of the get/put calls, defaulting to null. If the converter is null we just use the default, which does a new T. That way it’s up to the user to decide/implement converters. We could provide some converters out of the box, such as serializing to/from JSON. If no converter is passed to put, we just use the cache’s native put call.

Finally, bear in mind that converting a value might not be straightforward: take the case where you have cached some compressed data; to decompress this data might require more than just calling a class constructor, you may need to read from a memory stream for example.
Thoughts…?

@reisenberger

reisenberger commented Jul 19, 2016

Hey @SeanFarrow Great questions. Completely with you about needing conversion funcs rather than new-ing items out of cache. (Defined on an ICacheOutputConverter<TResult> interface or similar like you suggest sounds good!)

How do you see this:

we could put an ICacheOutputConverter interface as part of the get/put calls,
defaulting to null. If the converter is null we just use the default which does a new T.

looking in actual code? We'd need to avoid the various gotchas flowing from having optional parameters in interfaces (like the default values in the interface taking precedence over values in any implementations of the interface, if the call is being made against the interface rather than an implementation), but maybe that is not what you were thinking anyway?

@SeanFarrow

SeanFarrow commented Jul 19, 2016

Hadn’t thought of that!
OK, how about having a SetCacheOutputConverter on the cache interface?

@reisenberger

reisenberger commented Jul 19, 2016

how about having a SetCacheOutputConverter on the cache interface?

Mutable policies by property-injection/setter-method injection could be a trap for the unwary in highly concurrent / multi-threaded scenarios? (Setting the output converter then executing is not atomic; risk that some thread sets the cache output converter while another thread is mid-execution?) (Might not be the way we envisage it being used, but opens up the possibility.)

Constructor-injection somewhere (resulting in an immutable policy) safer? ICacheOutputConverter<TResult> could perhaps be constructor-injected into the class fulfilling IResultCacheProvider/Async? What do you think?
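
A rough sketch of that constructor-injection shape. All names here are hypothetical; the converter contract is assumed to have Encode/Decode methods along the lines discussed later in the thread, mapping TResult to/from a string cache format:

```csharp
// Hypothetical: a provider taking its converter at construction time, so the
// configured provider (and any policy using it) is immutable and safe to share.
public class ConvertingCacheProvider<TResult> : IResultCacheProvider<TResult>
{
    private readonly IResultCacheProvider<string> _innerCache;              // assumed string-storing cache
    private readonly ICacheFormatConverter<TResult, string> _converter;     // assumed converter contract

    public ConvertingCacheProvider(
        IResultCacheProvider<string> innerCache,
        ICacheFormatConverter<TResult, string> converter)
    {
        _innerCache = innerCache;
        _converter = converter;
    }

    public TResult Get(Context context)
        => _converter.Decode(_innerCache.Get(context));

    public void Put(Context context, TResult result)
        => _innerCache.Put(context, _converter.Encode(result));
}
```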

@SeanFarrow

SeanFarrow commented Jul 19, 2016

Possibly, yes, but what if I want a different converter per type?
We will need to support passing in an IEnumerable of converters.

reisenberger (Member) commented Jul 19, 2016

what if I want a different converter per type?

Good point!

We will need to support passing in an IEnumerable of converters.

Hmm. Were you thinking this maybe isn't ideal? Could we do better somehow?

I am wondering if one of us throwing together a quick class/interface diagram (as a 'straw man') would help at this point? (It is getting hard to envision sentence-wise.) A class diagram to pull about could help sort out the different components, the roles they play, and their multiplicity/scope.

Great to have all your deep input on these caching questions!

reisenberger (Member) commented Jul 20, 2016

@SeanFarrow Stepping back and thinking about converters-for-caches, I wonder if we are missing something simple: just a minimal adapter pattern?

Assuming the previous IResultCacheProvider<TResult> and (for discussion; say if you think differently) a format-conversion interface like below:

interface ICacheFormatConverter<TResult, TCacheFormat>
{
    TCacheFormat Encode(TResult result);
    TResult      Decode(TCacheFormat cachedValue);
}

Format converters could be provided via a lightweight adapter:

public class CacheWithConversion<TResult, TCacheFormat> : IResultCacheProvider<TResult>
{
    private readonly ICacheFormatConverter<TResult, TCacheFormat> converter;
    private readonly IResultCacheProvider<TCacheFormat> wrappedCache;

    public CacheWithConversion(
        ICacheFormatConverter<TResult, TCacheFormat> converter,
        IResultCacheProvider<TCacheFormat> wrappedCache)
    {
        this.converter = converter;
        this.wrappedCache = wrappedCache;
    }

    public TResult Get(Context context)
    {
        return converter.Decode(wrappedCache.Get(context));
    }

    public void Put(Context context, TResult result)
    {
        wrappedCache.Put(context, converter.Encode(result));
    }
}

If this still doesn't seem to join with your thoughts on cache providers so far, then possibly we have a different vision of how the contributing classes interact and their multiplicity: let's dig deeper if so.

(Not saying this is the only solution - there could be others - but it is one way to deal with the multiplicity problem ("different converter per type") without any enumerable.)
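For illustration, a hypothetical composition of the pieces above. RedisStringCacheProvider, JsonReportConverter and Report are invented names; this also assumes the Policy.Cache<TResult>(IResultCacheProvider<TResult>) overload from the original proposal:

```csharp
// Hypothetical sketch only: a Redis cache that stores strings, fronted by a
// JSON converter, plugged into the proposed CachePolicy configuration overload.
// The provider and converter class names are invented for illustration.
IResultCacheProvider<string> redisCache = new RedisStringCacheProvider();
ICacheFormatConverter<Report, string> jsonConverter = new JsonReportConverter();

CachePolicy<Report> cachePolicy = Policy.Cache<Report>(
    new CacheWithConversion<Report, string>(jsonConverter, redisCache));
```

The point of the adapter is that each CachePolicy instance pairs exactly one converter with one underlying cache, so no enumerable of converters or CanHandle dispatch is needed.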

SeanFarrow commented Jul 21, 2016

Hi,

That makes sense. I had visions of people being able to supply the cache to wrap the converter with at runtime.
With your proposal, wouldn’t we need converters for each cache type?

reisenberger (Member) commented Jul 21, 2016

Hi @SeanFarrow. Re:

I had visions of people being able to supply the cache to wrap the converter with at runtime.

(I haven’t quite followed here. Do you think the lightweight wrapper proposal is good in this respect, or is there an at-runtime capability we are missing?)

Re:

With your proposal, wouldn’t we need converters for each cache type?

I am wondering if some of the questions around multiplicity stem from the original proposal for the IResultCacheProvider<TResult> to be strongly typed.

Are you thinking that, rather than IResultCacheProvider<TResult>, the cache provider interface could be non-generic?

interface IResultCacheProvider
{
   object Get(Context);
   void   Put(Context, object);
}

and thus the converter interface only ICacheFormatConverter<TResult>?

(Just wanting to understand clearly at this point - can then think it through in the context of the rest of the design.)

Thanks!

SeanFarrow commented Jul 21, 2016

I think if we use a non-generic cache interface, this would make the converters easier. That way, people could have a converter if they want. What I’m thinking is that a converter would know what type it converts to, and would have a CanHandle method.

reisenberger (Member) commented Jul 22, 2016

Re:

I think if we use a non-generic cache interface, this would make the converters easier

Thanks @SeanFarrow, this is well worth thinking about. I now need to spend some time on the PolicyWrap, to explore mixing strongly-typed/generic and non-generic policies, which is related.

reisenberger (Member) commented Jul 22, 2016

@SeanFarrow, re:

What I’m thinking is that a converter would know what type it converts to, and would have a CanHandle method

How does the CanHandle method come into play? Does it imply a list of converters registered somewhere (for example on the CacheProvider), which some cache logic tries in turn until it finds one that 'can handle' in the context of the execution-in-process?

(If the converter was wrapping the cache provider supplied to CachePolicy, per my example a day or so ago, a CanHandle method wouldn't be necessary - it would apply simply because it is part of the cache policy for that call.)

SeanFarrow commented Jul 23, 2016

Yep, that’s what I was thinking.
Per your example a day or so ago, if the cache policy was wrapped by a converter, we could have only one converter per cache policy? Correct me if I’m wrong.
That’s the limitation I’m trying to avoid, as I see myself and others storing items needing more than one conversion strategy in the same cache.

reisenberger (Member) commented Jul 28, 2016

@SeanFarrow I'm feeling the need for some code samples / a 'straw man', to get a clearer shared understanding of any proposal being discussed.

I am putting something together in a repo entirely separate from Polly.

Feel free also to set any architectural sketch down in code! (It would give comparative perspectives to discuss.)

(All Polly activity has to fall outside work time for me, but pushing this along - it would be good to progress this feature!)

SeanFarrow commented Jul 28, 2016

Let me see what you have first, snowed under work-wise as you are, plus I’m disappearing for a month for another job!

reisenberger (Member) commented Sep 29, 2016

@SeanFarrow I hadn’t seen the existence of the generic variants as creating code duplication. They just offer strongly-typed variants for those who want to code that way. For instance, they allow users to code a MappingCacheProvider<TNative,TMapped> if they want, rather than the object<->object mappings (or having to code if (TResult is TypeOfInterest) { /* do mapping stuff */ }) implied by a non-typed ICacheProvider.

  • You’d still only code one ICacheProvider implementation for each cache implementation, eg RedisCacheProvider : ICacheProvider.
  • The .As<TResult> extension method would provide a lightweight wrapper returning an ICacheProvider<TResult>, for those wanting to work with a strongly-typed variant, eg with strongly-typed mappers.
  • Similarly, the Policy.Cache<TResult>(ICacheProvider provider) config overload will create a strongly-typed CachePolicy<TResult> if desired.

Some users will prefer to work with a CachePolicy<TResult> (gives Visual Studio type-binding/intellisense between the various type-bound usages of Polly they might be combining).

Other users will prefer to work with a non-generic CachePolicy that they can use across all types, but they don’t get the IDE-time type-sensitivity, or strongly-typed mappers.

In branch Cache-architectureTypingExperiment, I pushed this code which allows for both non-generic and generic variants.
[ EDIT: the branch shows changes only for the sync forms while we discuss; easy to add similar async. ]

What do you think? Where shall we go from here?

Very open to other suggestions, but I may need to see some more worked code sketches to understand what you may be thinking of.
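A minimal sketch of how the .As<TResult> wrapper described above might look. The adapter class name and exact interface shapes are assumptions for illustration; the actual code in the Cache-architectureTypingExperiment branch may differ:

```csharp
// Hypothetical sketch only: adapts the non-generic ICacheProvider to a
// strongly-typed ICacheProvider<TResult>. Names and signatures are assumptions,
// not the actual branch code.
public static class CacheProviderExtensions
{
    public static ICacheProvider<TResult> As<TResult>(this ICacheProvider nonGenericProvider)
    {
        return new TypedCacheProviderAdapter<TResult>(nonGenericProvider);
    }
}

internal class TypedCacheProviderAdapter<TResult> : ICacheProvider<TResult>
{
    private readonly ICacheProvider wrapped;

    public TypedCacheProviderAdapter(ICacheProvider wrapped)
    {
        this.wrapped = wrapped;
    }

    public TResult Get(Context context)
    {
        return (TResult)wrapped.Get(context); // cast from object to the strong type
    }

    public void Put(Context context, TResult value)
    {
        wrapped.Put(context, value); // stored as object in the non-generic provider
    }
}
```

One non-generic provider implementation per cache technology then serves both styles: used directly as an untyped provider, or via .As<TResult>() when a strongly-typed policy or mapper is wanted.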

reisenberger (Member) commented Sep 29, 2016

Re:

Would it be better to have different types for input/output mapping from a cache[...]?

An advantage of having both mappings (the mappings both ways) in the same class (as currently) is that - especially when strongly typed - the combined interface forces users to write both mappings as a reciprocal pair. If the mappings each way are in separate classes, users could write and unwittingly combine completely unrelated mappings. So for me, it feels less coherent to have them in separate classes (unless I've misunderstood something).

reisenberger (Member) commented Sep 29, 2016

@SeanFarrow Also meant to add: happy to do a Skype call if that's easier/quicker to work through some of this.

SeanFarrow commented Sep 29, 2016

Possibly. When are you available? Happy to be led by your time frames.
Cheers

@reisenberger reisenberger added the v5.x label Oct 16, 2016

reisenberger (Member) commented Oct 16, 2016

@SeanFarrow Thx for the very productive Skype call around caching. The CachePolicy architecture has now been rebased against the latest v5.0-alpha, as discussed, in this branch: https://github.com/App-vNext/Polly/tree/v5.x-cache-alpha

Community: We are initially planning to target the following (pluggable) cache providers for the CachePolicy:

  • in-memory (ie MemoryCache or similar)
  • on disk
  • Redis
  • MemCached

Other cache providers you would like to see supported? Please comment on this issue, or join the conversation on Slack: www.thepollyproject.org

SeanFarrow commented Oct 16, 2016

@reisenberger,

Thanks, I’ll start working on this once the current backlog has cleared.
Cheers
Sean.

mfjerome commented Dec 7, 2016

Other cache providers you would like to see supported?

@reisenberger, Gemfire (based on an Apache project, I think) could be nice. Maybe some folks from Steeltoe/Cloudfoundry could chime in? I am considering using that technology for distributed caching.

SeanFarrow commented Dec 7, 2016

SeanFarrow commented Dec 15, 2016

perfectsquircle commented Apr 29, 2017

Hello,

I'm curious whether you have a prediction of when this feature might land? It seems like there's been some promising work, but it's gone quiet recently. I have a strong interest in using the caching policy in combination with retry and circuit breaker for HTTP calls.

I'd also be happy to contribute if you need any help.

reisenberger (Member) commented May 1, 2017

@perfectsquircle Yes, among all the other features that got delivered at v5.0, this got left behind. I've wanted to take it forward, but it's been behind other things: a contribution would be very welcome!

We have quite a developed architecture (thanks also to @SeanFarrow!), so the main thing we need now is some first cache provider implementations to plug into it. I've just re-based the architecture against the latest Polly / stuff I'm about to release. Mini tour:

You construct a CachePolicy specifying:

  • ITtlStrategy: defines the TTL for the items being cached by the CachePolicy. Various implementations are already written.
  • ICacheKeyStrategy: defines what key to use to Get/Put in the cache. The default strategy is based on a value in the Context passed when .Execute(...)-ing on the policy. Users can write more elaborate strategies if they want (I will blog examples).
  • ICacheProvider: a simple Get/Put interface for any cache provider Polly could use. ICacheProviderAsync is the similar interface for async.

So a typical cache might be configured something like:

Policy.Cache(
    myCacheProvider,
    TimeSpan.FromHours(1) // or a more specific ITtlStrategy
    /*, custom cache key strategy if desired */);

This test shows the basic usage.

So we need to implement some ICacheProviders. @perfectsquircle Are you interested in in-process/local caching (eg MemoryCache, disk cache), or more in cloud caching (eg Redis), or ...? Any contribution in any of these would be welcome! Even just an initial ICacheProvider implementation based on System.Runtime.Caching.MemoryCache would be enough to launch the feature.

  • ICacheProvider implementations will often depend on third-party libraries, and we didn't want the main Polly package to take those dependencies, so each ICacheProvider would be delivered as a separate NuGet package, built out of a separate GitHub repo.
  • There are skeleton repos which you (/anyone interested in contributing!) can fork for MemoryCache, disk, Redis, etc. We can make new repos for any other cache provider people want to support.
  • We'd need a build script for each of those repos to run tests and make the NuGet package (I can help if needed/useful).

The architecture also envisages support for serializers like Protobuf etc: let me know if you have any interest in that, and we can discuss further. Otherwise let's leave it for now.

I am very available for further help/guidance if you want to work on this! Any of the above you'd be interested in tackling? (And: thank you!)
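To make the mini tour concrete, a hedged sketch of configuring and then executing such a policy. MemoryCacheProvider, ProductDto, productService and the exact Execute/Context overloads are illustrative assumptions, not confirmed API:

```csharp
// Hypothetical usage sketch; the provider name and the Execute/Context
// signatures are assumptions for illustration.
CachePolicy cachePolicy = Policy.Cache(
    new MemoryCacheProvider(),    // an ICacheProvider implementation
    TimeSpan.FromMinutes(5));     // TTL (or pass a more specific ITtlStrategy)

// The default ICacheKeyStrategy keys the cache entry on a value in the Context,
// so executions supplying the same key return the cached result until the TTL expires.
ProductDto product = cachePolicy.Execute(
    () => productService.GetProduct(productId),   // only runs on a cache miss
    new Context("GetProduct:" + productId));      // supplies the cache key
```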


SeanFarrow commented May 1, 2017

perfectsquircle commented May 2, 2017

@reisenberger

Thank you for the comprehensive update. Maybe I'll get my feet wet and try to implement the memory or disk ICacheProvider. I suppose it would be sufficient to target .NET Standard 1.0 for these plugins?

reisenberger (Member) commented May 2, 2017

@perfectsquircle Great!

I made a start on a skeleton Visual Studio solution, build file, NuGet packager etc for the MemoryCache repo early this morning. I can probably push that to GitHub in about an hour's time ...

I suppose it would be sufficient to target .NET Standard 1.0 for these plugins?

As low a .NET Standard version as we can get away with. It looks from package search as if the lowest .NET Standard for MemoryCache might be .NET Standard 1.3; fine if that's the case. Although core Polly targets .NET Standard 1.0 (soon to change to .NET Standard 1.1 when we release #231), it shouldn't be a problem to make the MemoryCache repo target .NET Standard 1.3 instead. The range of cache providers we're targeting will inevitably mean some have differing target support - delivering them through separate NuGet packages will let us deal with that.

reisenberger (Member) commented May 2, 2017

@perfectsquircle At https://github.com/App-vNext/Polly.Caching.MemoryCache, there is now a repo ready to fork and develop on.

TL;DR All we need to do now is start developing Polly.Caching.MemoryCache.MemoryCacheProvider : Polly.Caching.ICacheProvider within the Polly.Caching.MemoryCache.Shared area of this repo, and specs in SharedSpecs.

I put in a dummy class and test only to check that the build script (build.bat) was working: they can be deleted.

The repo intentionally keeps the three-target layout (.NET 4.0, .NET 4.5 and .NET Standard) that Polly has, for now. Theoretically we could drop .NET 4.5 as a separate target and have .NET 4.5 consumers reference .NET Standard, but targeting .NET Standard from .NET Framework is very noisy until Microsoft (hopefully) fix this in .NET Standard 2.0.

For MemoryCache, you may have to change the .NET Standard 1.0 package to target .NET Standard 1.3, if package search was accurate. (I left it at .NET Standard 1.0, so that this commit could be a useful master for other cache providers.)

Finally, to reference the interface Polly.Caching.ICacheProvider, you'd need to be able to reference a Polly NuGet which includes it - which obviously isn't public yet. So the procedure would be: clone https://github.com/reisenberger/Polly/tree/v5.1.x-cache-rebase locally, run its build script, and reference the Polly NuGets the build script places in the artifacts\nuget-package directory.

Phew - but that gets us a baseline to develop on!

Let me know if it makes sense / whatever questions - whether around tooling or MemoryCacheProvider intent.

Huge thank you for your contribution!
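For orientation, a hedged sketch of what such a MemoryCacheProvider might look like, assuming a string-keyed Get/Put shape for Polly.Caching.ICacheProvider (consistent with the ICacheKeyStrategy deriving the key); the actual interface in the branch, including how the TTL is passed, may differ:

```csharp
// Hypothetical sketch, not the actual branch code: the ICacheProvider shape
// (including whether/how a TTL parameter is passed) is an assumption here.
using System;
using System.Runtime.Caching;

public class MemoryCacheProvider : Polly.Caching.ICacheProvider
{
    private readonly MemoryCache cache;

    public MemoryCacheProvider(MemoryCache cache = null)
    {
        this.cache = cache ?? MemoryCache.Default;
    }

    public object Get(string key)
    {
        return cache.Get(key); // returns null on a cache miss
    }

    public void Put(string key, object value, TimeSpan ttl)
    {
        // Absolute expiration derived from the policy's TTL.
        cache.Set(key, value, DateTimeOffset.UtcNow.Add(ttl));
    }
}
```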


@perfectsquircle

This comment has been minimized.

Show comment
Hide comment
@perfectsquircle

perfectsquircle May 18, 2017

Hi @reisenberger,

I haven't gotten around to working on this. Things got crazy at work. I might try take another crack at it again soon.



JoeBrockhaus May 26, 2017

Hi @reisenberger
Is there any chance you could set up a beta/alpha MyGet/VSO feed based off the v5.1x-cache-rebase branch (if that's still the latest)?



SeanFarrow May 26, 2017



reisenberger May 27, 2017

Member

@joelhulen I have pulled the latest cache rebase down onto this branch on App-vNext/Polly. A build from this branch will publish an appropriately tagged pre-release Polly package which you can push to NuGet.

@JoeBrockhaus : @joelhulen plans to push the above to Nuget as a pre-release.

@JoeBrockhaus We would welcome contributions if you are able to contribute to Polly cache implementation - let us know what you would be interested in doing!

[ I can get back to CachePolicy myself likely in the second half of June. ]



joelhulen Jun 2, 2017

Member

@reisenberger @JoeBrockhaus Sorry, the notification for this thread got lost amongst my piles of emails. Sometimes it's faster getting ahold of me on the Polly slack channel ;-)

I'll work toward releasing the pre-release NuGet and notify everyone here once it's up.



joelhulen Jun 2, 2017

Member

@reisenberger @JoeBrockhaus I've published those pre-release NuGet packages. Please let me know if you have any issues finding or using them.



JoeBrockhaus Jun 8, 2017

@SeanFarrow Sorry for the super-delay on this feedback.

I was looking to incorporate a combination of Retry with a CircuitBreaker to proactively serve from Cache before failing on new requests whose dependencies would likely fail, but for which cached data would suffice.
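For context, the combination described here could be sketched with the PolicyWrap composition this proposal anticipates. This is an illustrative sketch only, assuming the Cache/CircuitBreaker/Retry configuration syntax discussed in this issue (the exact overloads in the released package may differ); `cacheProvider` and `Report` are placeholders:

```csharp
using System;
using System.Net.Http;
using Polly;

// Illustrative only: assumes the proposed Cache policy composes with the
// existing Retry and CircuitBreaker policies via PolicyWrap.
var retry = Policy
    .Handle<HttpRequestException>()
    .RetryAsync(3);

var breaker = Policy
    .Handle<HttpRequestException>()
    .CircuitBreakerAsync(exceptionsAllowedBeforeBreaking: 4,
                         durationOfBreak: TimeSpan.FromSeconds(30));

// cacheProvider is a placeholder for any async cache provider implementation.
var cache = Policy.CacheAsync<Report>(cacheProvider, TimeSpan.FromMinutes(5));

// Cache outermost: a fresh cached value is served without touching the
// (possibly broken) dependency; otherwise the breaker and retry govern the call.
var policy = cache.WrapAsync(breaker.WrapAsync(retry));
```

With the cache outermost, a broken circuit is never even consulted while a cached value remains fresh, which matches the "serve from cache before failing" intent.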



SeanFarrow Jun 8, 2017



JoeBrockhaus Jun 8, 2017

It would likely be async, though I'm not sure if it would be a blocker either way.
I have had to move onto other priorities in the meantime, unfortunately.
I'll try to find some time to poke it in the next couple days. 😀



SeanFarrow Jun 8, 2017



SeanFarrow Jun 22, 2017

All,

I've just been looking at the memory cache; we can't provide an async API, as one does not exist. Does anyone see a problem with this?



reisenberger Jun 24, 2017

Member

Hi @SeanFarrow . Re:

I've just been looking at the memory cache, we can't provide an async api, as one does not exist. Does anyone see a problem with this?

I don't think this is a significant problem. We can simply write an implementation for CacheAsync(...) that addresses a sync cache provider instead of an async one, at this line (and similar). It may mean a few extra configuration overloads, with the compiler selecting the right overload. We can add this when we next visit the cache architecture.
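The adapter approach suggested here can be sketched as follows. The interface shapes are assumed for illustration (based on the discussion in this thread), not the final Polly signatures: a sync provider's results are simply surfaced as already-completed tasks.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Assumed shapes for illustration only; the published Polly interfaces may differ.
public interface ISyncCacheProvider
{
    object Get(string key);
    void Put(string key, object value, TimeSpan ttl);
}

public interface IAsyncCacheProvider
{
    Task<object> GetAsync(string key, CancellationToken ct);
    Task PutAsync(string key, object value, TimeSpan ttl, CancellationToken ct);
}

// Adapter: lets the async execution path consume a sync-only cache
// (like MemoryCache) by returning already-completed tasks.
public class AsyncFromSyncCacheProvider : IAsyncCacheProvider
{
    private readonly ISyncCacheProvider _inner;

    public AsyncFromSyncCacheProvider(ISyncCacheProvider inner)
    {
        _inner = inner ?? throw new ArgumentNullException(nameof(inner));
    }

    public Task<object> GetAsync(string key, CancellationToken ct)
        => Task.FromResult(_inner.Get(key));

    public Task PutAsync(string key, object value, TimeSpan ttl, CancellationToken ct)
    {
        _inner.Put(key, value, ttl);
        return Task.CompletedTask;
    }
}
```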



dweggemans Aug 28, 2017

Is there an ETA on the caching feature?



reisenberger Aug 29, 2017

Member

@dweggemans The caching feature is expected to be released in September.

This branch https://github.com/App-vNext/Polly/tree/v5.3.x-cachebeta contains the latest caching architecture, i.e. the core classes within Polly to support CachePolicy. The build script will generate a local NuGet package for it.

This repo https://github.com/App-vNext/Polly.Caching.MemoryCache contains a beta release of ISyncCacheProvider and IAsyncCacheProvider implementations for MemoryCache. The build script will generate a local beta NuGet package for it. /cc @SeanFarrow

@dweggemans : Are there particular cache providers you are looking to support? Community contributions to support new cache providers will be welcome: The required interfaces to implement (ISyncCacheProvider and/or IAsyncCacheProvider) are relatively straightforward.

Polly contributors, e.g. @SeanFarrow, already have a range of distributed cache providers in mind.
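As a rough illustration of how straightforward such a provider can be, here is a sketch of a sync provider over .NET's built-in System.Runtime.Caching.MemoryCache. The provider shape shown is assumed from this thread, not the final published ISyncCacheProvider interface:

```csharp
using System;
using System.Runtime.Caching;

// Sketch only: the real Polly.Caching.ISyncCacheProvider signature may differ.
public class MemoryCacheProvider /* : ISyncCacheProvider (assumed shape) */
{
    private readonly MemoryCache _cache;

    public MemoryCacheProvider(MemoryCache cache)
    {
        _cache = cache ?? throw new ArgumentNullException(nameof(cache));
    }

    // MemoryCache.Get returns null when the key is absent or expired.
    public object Get(string key) => _cache.Get(key);

    public void Put(string key, object value, TimeSpan ttl)
        => _cache.Set(key, value, DateTimeOffset.UtcNow.Add(ttl));
}
```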



dweggemans Aug 30, 2017

@reisenberger thanks for your response. I might be able to wait a little, or else I'll build a package locally. No problem.

The MemoryCache suits my needs perfectly. I'm just looking for a simple way to reduce some traffic by caching results locally.


@reisenberger reisenberger modified the milestones: 5.0.0, v5.4.0 Oct 22, 2017


reisenberger Oct 22, 2017

Member

Closing via #332

CachePolicy has been merged into the master branch, for release shortly as part of Polly v5.4.0.

The first cache provider implementation to go with CachePolicy - based around .NET's in-built MemoryCache - is available at: https://github.com/App-vNext/Polly.Caching.MemoryCache.

The two will be released together to nuget, as soon as we hook up the build and nuget feed onto https://github.com/App-vNext/Polly.Caching.MemoryCache. /cc @joelhulen

Doco at: https://github.com/App-vNext/Polly/wiki/Cache
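For readers landing here, usage follows the configuration syntax proposed at the top of this issue. A minimal sketch; the exact released overloads may differ slightly, `memoryCacheProvider` is a placeholder for the provider from Polly.Caching.MemoryCache, and `FetchUserProfile` / `UserProfile` are hypothetical:

```csharp
using System;
using Polly;
using Polly.Caching;

// Configure once: expiry is set at configuration time, per the Polly model.
CachePolicy cachePolicy = Policy.Cache(memoryCacheProvider, TimeSpan.FromMinutes(5));

// The Context's key identifies the cache entry, so one policy instance
// can safely be shared across many different calls.
UserProfile profile = cachePolicy.Execute(
    ctx => FetchUserProfile(1234),          // only invoked on a cache miss
    new Context("GetUserProfile:1234"));
```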


@reisenberger reisenberger added ready and removed in progress labels Oct 22, 2017

@reisenberger reisenberger moved this from In Progress to Completed in Polly v5.0 Oct 22, 2017
