
External cache stores #50

Closed
jfeltesse-mdsol opened this issue Mar 31, 2022 · 10 comments · Fixed by #61

Comments

@jfeltesse-mdsol

Hi,

We're looking at the feature set this library provides and we noticed that currently only in memory and on disk caching stores are supported.
Are there any plans to support external stores, e.g. redis, to name one?
Our use case is we typically have services running on multiple containers behind an ELB and with local cache stores the hit/miss ratio is not optimal. For services that run on other stacks that support external caching we usually have a redis cluster sitting on the side that serves as a central, shared place for all the containers to read/write cache entries.

Thanks!

@mizosoft
Owner

mizosoft commented Apr 2, 2022

Hi @jfeltesse-mdsol, thanks for reporting.

Supporting custom cache stores has definitely been on my radar. However, I'm not yet sure what the right way to do this is. Ideally, we'd expose the store API so clients could pass custom implementations to HttpCache and rely on the existing HTTP logic. However, that API is internal for a reason: keeping it private leaves room for extending HttpCache, while exposing the underlying Store API would foreclose many such extensions (e.g. one feature I want to support in the future is storing multiple response variants per entry, to achieve higher hit counts for shared caches; that will require changes to the current Store API).
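For illustration, a pluggable store boundary could look roughly like the following. This is a hypothetical sketch (the `CacheStore` interface, its methods, and `InMemoryStore` are made up for this example and are not Methanol's internal Store API), assuming entries are opaque byte arrays keyed by request URI:

```java
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical minimal store SPI: the cache maps a request key to raw entry bytes.
interface CacheStore {
  Optional<byte[]> read(String key);

  void write(String key, byte[] entry);

  boolean remove(String key);
}

// In-memory reference implementation; a redis-backed store would talk to a
// redis client instead of a map.
final class InMemoryStore implements CacheStore {
  private final Map<String, byte[]> entries = new ConcurrentHashMap<>();

  @Override
  public Optional<byte[]> read(String key) {
    return Optional.ofNullable(entries.get(key));
  }

  @Override
  public void write(String key, byte[] entry) {
    entries.put(key, entry);
  }

  @Override
  public boolean remove(String key) {
    return entries.remove(key) != null;
  }
}

public class StoreSketch {
  public static void main(String[] args) {
    CacheStore store = new InMemoryStore();
    store.write("https://example.com", "cached body".getBytes());
    System.out.println(new String(store.read("https://example.com").orElseThrow()));
    System.out.println(store.remove("https://example.com")); // true
    System.out.println(store.read("https://example.com").isPresent()); // false
  }
}
```

The catch described above is that once such an interface is public, changing it (e.g. to store several variants under one key) becomes a breaking change.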

A less restrictive alternative is to provide black-box implementations for popular storage backends (e.g. redis, as you mentioned) in separate modules, without exposing the Store API. I think this could work, though it's somewhat restrictive for users as some choices have to be made on their behalf. In general, it'd be a good start.

Starting with a redis-backed store implementation sounds good. I notice there are multiple redis clients for Java; do you think one is better suited than the others for this use case?

@regbo

regbo commented Jun 27, 2022

IMO Redisson has the easiest-to-use API, but it might be overkill in this situation. It supports async operations and has great locking and expiration support. Lettuce is more lightweight and also pretty easy to use. I've had issues keeping up with netty changes with both clients, so perhaps there's something even more lightweight for this purpose. Fantastic project, BTW.

@jfeltesse-mdsol
Author

@mizosoft sorry for the time it took me to get back to this. I asked our Java and Scala engineers, and it looks like jedis got the most votes.

@mizosoft
Owner

@regbo Thanks for sharing your experience. Redisson indeed seems like overkill. I gave Lettuce a quick look, and I think it's neat.

Fantastic project BTW.

Thanks! Glad you like it :)

I asked our java and scala engineers and it looks like jedis has the more votes.

It seems lettuce & jedis are the options I will look into. They both support redis clusters, so that's a plus. However, I think I'll first consider allowing multiple response variants per entry. This'll make the Store API more stable and less risky to extend. I'll look more into this when I have time.

@jfeltesse-mdsol
Author

Anything we can do to help here?

@mizosoft
Owner

mizosoft commented Sep 26, 2022

Hi @jfeltesse-mdsol. Sorry for not getting up to speed with this.

I was eager to first re-specify the Store API to allow storing multiple responses per entry (each mapped to by unique request header values, as nominated by Vary). However, it seems this is not a feature that's widely implemented by browsers, not to mention it's not as straightforward as I'd thought (some things become awkward to handle, like entry locking and choosing which response to update). So I'll put a hold on this one.
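To illustrate why variant selection gets fiddly, here's a rough sketch of deriving a secondary cache key from the headers a response's Vary field nominates, loosely following RFC 9111. The helper name and data shapes are made up for this example; a real cache would work with proper header types rather than plain maps:

```java
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.stream.Collectors;

public class VaryKey {
  // Hypothetical helper: builds a secondary key from the request's values for
  // each header named in Vary. Two requests select the same stored variant
  // only if their secondary keys match.
  static String secondaryKey(List<String> varyFields, Map<String, List<String>> requestHeaders) {
    if (varyFields.contains("*")) {
      return null; // "Vary: *" means the response never matches a stored variant.
    }
    return varyFields.stream()
        .map(String::toLowerCase)
        .sorted() // Normalize field order so equivalent Vary lists compare equal.
        .map(f -> f + "=" + String.join(",", requestHeaders.getOrDefault(f, List.of())))
        .collect(Collectors.joining(";"));
  }

  public static void main(String[] args) {
    var vary = List.of("Accept-Encoding");
    var gzipRequest = Map.of("accept-encoding", List.of("gzip"));
    var plainRequest = Map.<String, List<String>>of();
    // Different Accept-Encoding values select different stored variants.
    System.out.println(secondaryKey(vary, gzipRequest)); // accept-encoding=gzip
    System.out.println(secondaryKey(vary, plainRequest)); // accept-encoding=
    System.out.println(
        Objects.equals(secondaryKey(vary, gzipRequest), secondaryKey(vary, plainRequest))); // false
  }
}
```

Even this toy version glosses over the hard parts the comment above mentions: locking the entry while one variant is rewritten, and deciding which variant a freshly validated response should replace.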

With regards to the redis store, I've been working out some implementation concerns. I'll be exercising an implementation here. I'll let you know when I need any feedback. Thanks for offering help!

@jfeltesse-mdsol
Author

Thanks a lot for your efforts, we'll make sure to review then! 👍

@mizosoft
Owner

Done. This will make the 1.8.0 release. I still want to add some tests to check the handling of awkward scenarios (e.g. expected keys dropped/expired).

In the meantime, you can try the store out from the snapshot repository:

repositories {
  mavenCentral()
  maven {
    url 'https://oss.sonatype.org/content/repositories/snapshots'
  }
}

dependencies {
  implementation 'com.github.mizosoft.methanol:methanol:1.8.0-SNAPSHOT'
  implementation 'com.github.mizosoft.methanol:methanol-redis:1.8.0-SNAPSHOT'
}

You can then set it up as follows:

var redisStorage =
    RedisStorageExtension.newBuilder()
        .standalone(RedisURI.create("redis://localhost:55555"))
        .build();
try (var cache = HttpCache.newBuilder().cacheOn(redisStorage).build()) {
  var client = Methanol.newBuilder().cache(cache).build();
  var request = MutableRequest.GET("https://i.imgur.com/AD3MbBi.jpeg");
  var response =
      (CacheAwareResponse<Path>)
          client.send(request, BodyHandlers.ofFile(Path.of("money_cat.jpeg")));
  System.out.println("CacheStatus: " + response.cacheStatus());

  // Wait for some time as data is written in background.
  Thread.sleep(100);
  System.out.println("Done!");
}

@jfeltesse-mdsol
Author

Awesome, thank you so much!!! 💯

@mizosoft
Owner

Welcome! Feel free to provide any kind of feedback!
