
Support caching POJO as Hash in Redis with Spring Cache [DATAREDIS-466] #1045

Closed
spring-projects-issues opened this issue Feb 23, 2016 · 2 comments


@spring-projects-issues spring-projects-issues commented Feb 23, 2016

Zheng Li opened DATAREDIS-466 and commented

I tried to use Redis as a cache layer between my application and a MySQL database, and currently I'm using RedisCacheManager with Spring's cache support. Each POJO instance is serialized as a JSON string for caching.

For better space utilization and faster updates of individual fields, I'd prefer to store the data as a Hash in Redis. So I'm wondering whether we can configure RedisCacheManager to save cache values as Hashes. Two things I'm not sure about so far:

  1. Is it recommended practice to manipulate cache data from code paths other than @CachePut?
  2. It seems to me that, due to the nature of Spring's Cache interface, org.springframework.data.redis.cache.RedisCache#put(java.lang.Object, java.lang.Object) will always be used to save cache entries. I'm not sure whether the CacheManager can be configured to use putAll instead of put.
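For reference, the JSON-string setup described here can be expressed in current Spring Data Redis versions roughly like the following config sketch (the RedisCacheConfiguration builder API is from Spring Data Redis 2.x, which postdates this issue; the TTL value is illustrative):

```java
import java.time.Duration;

import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.serializer.GenericJackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;

// Cache values serialized as JSON strings (plain Redis strings, not hashes).
RedisCacheConfiguration cacheConfig = RedisCacheConfiguration.defaultCacheConfig()
        .entryTtl(Duration.ofMinutes(10))
        .serializeValuesWith(RedisSerializationContext.SerializationPair
                .fromSerializer(new GenericJackson2JsonRedisSerializer()));
```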

No further details from DATAREDIS-466


@spring-projects-issues spring-projects-issues commented Apr 29, 2016

Christoph Strobl commented

Looking at RedisCache and the RedisCacheManager, switching to a HASH cannot be done without major breaking changes to the API. So, though the idea is a good one, we need to postpone this one for now.


@spring-projects-issues spring-projects-issues commented Oct 22, 2019

Christoph Strobl commented

After spending quite some time trying to flesh out the details of how this could potentially work, we decided that we're not going to add this feature.

While mapping a single complex object (like a Person) to a Redis hash is pretty straightforward, it gets tricky quickly for simple and collection types, which require some special treatment.

Collection / Simple Type as Hash
One way of dealing with collections and simple types would be to store them, along with a type hint, inside a Redis hash.
Simple types could be set via a synthetic payload field.
Collection values could use the hash key for index values and the actual hash value to capture the serialized data.

HMSET simple-type type_hint java.lang.Long payload 100
PEXPIRE simple-type 1000

HMSET collection-type type_hint java.util.List [1] "{ name : ..., }" [2] "{ name : ... }"
PEXPIRE collection-type 1000
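One way to keep the write and the expiry from being observed separately would be to wrap both commands in a MULTI/EXEC transaction (a sketch in plain Redis commands):

```
MULTI
HMSET simple-type type_hint java.lang.Long payload 100
PEXPIRE simple-type 1000
EXEC
```

MULTI/EXEC does not roll back on errors, but it does guarantee that the two commands execute back to back without other clients' commands interleaving.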

(+) still stored in a hash
(+) easy deserialization via the type hint
(-) Non-atomic operation (HMSET does not support a TTL) and requires a separate PEXPIRE call.
(-) Lists look very different from complex types and still require some sort of serialization of the content.
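The hash layout sketched above could be produced by a small helper along these lines (a hypothetical illustration, not Spring Data Redis API; the class and method names are made up):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class HashCacheLayout {

    // Wrap a simple value in a hash carrying a type hint plus a synthetic
    // payload field, mirroring the HMSET example above.
    static Map<String, String> simpleTypeAsHash(Object value) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("type_hint", value.getClass().getName());
        fields.put("payload", String.valueOf(value));
        return fields;
    }

    // Store a collection in a hash: the hash key carries the (1-based) index,
    // the hash value the already serialized element.
    static Map<String, String> collectionAsHash(List<String> serializedElements) {
        Map<String, String> fields = new LinkedHashMap<>();
        fields.put("type_hint", "java.util.List");
        for (int i = 0; i < serializedElements.size(); i++) {
            fields.put("[" + (i + 1) + "]", serializedElements.get(i));
        }
        return fields;
    }

    public static void main(String[] args) {
        System.out.println(simpleTypeAsHash(100L));
        System.out.println(collectionAsHash(List.of("{ name : ... }", "{ name : ... }")));
    }
}
```

The resulting field maps correspond directly to the HMSET argument lists shown above; the TTL would still need a separate PEXPIRE.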

Collection as List / Simple Type as String
Using Redis' native data structures for storage eases the pain of having to deal with the type hint and of using the hash key to preserve value ordering.

SET simple-type 100 PX 1000 

LPUSH collection-type "{ name : ..., }" "{ name : ... }"
PEXPIRE collection-type 1000

(+) idiomatic use of Redis data structures
(+) no explicit type hint field required
(-) Non-atomic operation (LPUSH does not support a TTL) and requires a separate PEXPIRE call.
(-) An additional TYPE command is required on read to determine the value type for deserialization.
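The read-side dispatch implied by that extra TYPE lookup could be sketched as follows (a hypothetical helper; the strategy descriptions are illustrative, not Spring Data Redis API):

```java
public class CacheReadDispatch {

    // Map the reply of a Redis TYPE command to the deserialization strategy
    // for the value stored under the cache key.
    static String strategyFor(String redisType) {
        switch (redisType) {
            case "string":
                return "GET, then deserialize the simple value";
            case "list":
                return "LRANGE, then deserialize each element";
            case "hash":
                return "HGETALL, then map the fields back onto the complex object";
            default:
                throw new IllegalArgumentException("Unexpected Redis type: " + redisType);
        }
    }

    public static void main(String[] args) {
        System.out.println(strategyFor("string"));
        System.out.println(strategyFor("list"));
    }
}
```

A plain GET against a list key would fail with WRONGTYPE, which is why the type has to be probed first; that extra round trip is exactly the cost noted above.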


In case we've been missing something, or there are convincing arguments to include this feature, please feel free to comment on the issue or, even better, provide a PR with a working solution.
