
Can Metadata be controlled? #273

Open
SteveHarveyUK opened this issue Apr 11, 2019 · 6 comments

Comments

@SteveHarveyUK

I've just started evaluating this library and I have a couple of questions:

  1. Is there a way to control the amount of metadata created in a particular store? For caches of small objects the default metadata is quite large; being able to minimise or remove the metadata when a feature isn't used would be useful. For a typed cache, is the fully qualified type name needed, or could it be inferred from the cache's generic type?

  2. It would be great to be able to register/inject a functor per type for key construction, so that the key can be inferred from the type instance rather than having to provide it explicitly.

@SteveHarveyUK SteveHarveyUK changed the title Control Metadata Can Metadata be controlled? Apr 11, 2019
@MichaCo
Owner

MichaCo commented Apr 11, 2019

Good question!
No, the metadata cannot be controlled right now.
Certain features require certain metadata, so I cannot really make those optional.

Yes, the full type name is stored in e.g. Redis along with the key. That might seem redundant, but it is necessary in some scenarios.
For example, I allow `ICacheManager<object>`, where the value can be anything; in that case, the type cannot be inferred.
The type could be an interface or a subclass, too. In any of those cases, deserialization might run into huge problems without knowing the actual type.
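The problem described here isn't specific to .NET. A minimal Python sketch of the same issue, using hypothetical `Animal`/`Dog` classes and plain JSON serialization, shows why an untyped cache needs the stored type name to round-trip a value:

```python
import json

class Animal:
    def __init__(self, name):
        self.name = name

class Dog(Animal):  # a subclass the cache consumer might store
    def speak(self):
        return f"{self.name} says woof"

# Serialize a Dog through an "untyped" cache (the value could be anything).
value = Dog("Rex")
blob = json.dumps(value.__dict__)  # only the fields survive, not the type

# On the way back out, the concrete type is gone:
restored = json.loads(blob)
print(type(restored).__name__)  # dict, not Dog

# Storing the fully qualified type name alongside the payload lets a
# deserializer pick the right class again:
type_name = f"{Dog.__module__}.{Dog.__qualname__}"
registry = {type_name: Dog}
typed_blob = json.dumps({"$type": type_name, "data": value.__dict__})

envelope = json.loads(typed_blob)
obj = registry[envelope["$type"]](**envelope["data"])
print(obj.speak())  # Rex says woof
```

The `$type` envelope and class registry here are illustrative only; CacheManager's actual wire format differs, but the reason for persisting the type name is the same.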

You are right that this could go through a resolver of some sort to hand over that responsibility to the user. But that would make the usage way more complicated and maybe even impractical.

I'll keep it in mind though, as an optional configuration hint somewhere, maybe? ;)

@SteveHarveyUK
Author

Thanks for the fast response!
I'm evaluating this for use as a credential cache, so having a fully qualified class name plus the other metadata is likely to multiply the memory requirements significantly. I'm presuming from what you've said that creating my own ICacheSerializer or store implementation wouldn't help either, i.e. there's no sneaky, hacky trick you can think of to work around this.
Personally, I'd be OK with losing some functionality by turning off portions of the metadata, but I get that even controlling that might be tricky. Here's another suggestion: how about being able to use a serializer for the metadata?
Any thoughts on point 2 of my original question?
Again, thanks for the fast response and writing this library in the first place!

@MichaCo
Owner

MichaCo commented Apr 11, 2019

Actually, you could write your own serializer.
Each serializer handles those fields by implementing a serializer-specific CacheItem, e.g. for Json: https://github.com/MichaCo/CacheManager/blob/dev/src/CacheManager.Serialization.Json/JsonCacheItem.cs

You could roll your own that serializes only the fields you want, and then handle the de-/serialization yourself.

@SteveHarveyUK
Author

Morning @MichaCo,
I pulled the repo this morning to have a root around. I'm not sure that rolling my own serializer will be enough. The Redis implementation appears to be tightly coupled to using a HashSet for the CacheItem<>. The best I'd likely be able to do would be to make some of the fields empty on serialization and 'magically' fill them based on the cache's generic type on deserialization.
Is that just a constraint of the Redis implementation? Should/could it theoretically support a configurable serializer at that level?
The reason this is important to me is that with my test dataset, using StackExchange.Redis natively, I was seeing a 5x reduction in memory usage when using a protobuf string value rather than a native HashSet.

@MichaCo
Owner

MichaCo commented Apr 12, 2019

Ok, no, for Redis it is right now fixed to a hash set, that's correct.

That's primarily for performance reasons.
If you cache a string, int, bytes..., it is much faster to store the metadata and the value as a hash than to de-/serialize the whole object together with all the metadata.

Also, the different operations run Lua scripts. Those scripts need access to some of the metadata, and that wouldn't work with just a serialized blob.

Sure, storing some metadata might use more memory, but it isn't really that much. That's the trade-off for all the other features right now.

@SteveHarveyUK
Author

Makes sense. Have you considered the hash-max-ziplist-value setting? Given that you're storing the value object as a single binary value, I imagine that in most cases the created HashSet won't be ziplist-compatible without increasing that setting.
It would be great to have the option to serialize the value object as additional HashEntries in the CacheValue<> HashSet. That would allow careful class design to keep the HashSet within the ziplist size limits. Just an idea!
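For anyone tuning this: hash-max-ziplist-value is a standard Redis memory-optimization setting (renamed hash-max-listpack-value in Redis 7, with the old name kept as an alias). A hash only keeps its compact encoding if every field value stays under the threshold, which defaults to 64 bytes. A sketch of checking and raising it (`some-cache-key` is a placeholder):

```shell
# Raise the per-field size threshold (default 64 bytes) so hashes holding
# larger serialized values can keep the compact ziplist/listpack encoding:
redis-cli CONFIG SET hash-max-ziplist-value 1024

# Verify whether an existing cache key actually uses the compact encoding
# ("ziplist"/"listpack") or fell back to the regular "hashtable" encoding:
redis-cli OBJECT ENCODING some-cache-key
```

Note the trade-off: the compact encoding saves memory but makes field access O(n), so very large thresholds can hurt latency.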
Thanks again for your time and answers.
