The default behaviour for in-memory cache should not include serialization #269

The basic use case for aiocache is just wrapping some async method with a decorator to get in-memory caching, like the sketch below.
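A minimal sketch of that use case (illustrative only; it assumes aiocache's `cached` decorator with its default in-memory backend):

```python
from aiocache import cached

@cached(ttl=60)  # default backend is the in-process SimpleMemoryCache
async def get_user(user_id):
    # imagine an expensive async lookup here
    return {"id": user_id, "name": "someone"}
```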
Obviously, the user doesn't expect any serialization in this case; it is redundant and adds a performance cost. So the default serializer should do nothing, like DefaultSerializer in previous versions. Currently JsonSerializer is used as the default.
For non-in-memory use cases, the user should explicitly specify what type of serialization they need.
Maybe different default serializers should be used for different cache types.

Comments
Hey @xdanmer, you have a point there and I've thought about this already. For the current version my aim was to keep it simple and focus on one guarantee: given an input, you get the same output no matter which backend you are using. Here are some points on why I didn't go for what you are proposing:
```python
await cache.set("key", {"a": 1, "b": 2})  # here we store the dict in the cache
# ... random stuff
value = await cache.get("key")            # the dict is returned
value["c"] = 3                            # the dict in the cache now also contains the key 'c'!!
```

I don't think this is expected behavior! In order for the above example to behave correctly in those cases, a copy of the mutable object would have to be produced, and that adds overhead. Because of this, I decided not to support it in the package. Anyway, in case you really want this, you can always create your own serializer and pass it to the decorator (or even use aliases). If you can think of a proposal which is simple and solves those edge cases cleanly, I'm happy to discuss it :)
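For reference, a sketch of that escape hatch (assuming the current decorator API; PickleSerializer round-trips values through pickle, so every get returns a fresh copy and the mutation issue above goes away, at the cost of the serialization overhead):

```python
from aiocache import cached
from aiocache.serializers import PickleSerializer

@cached(serializer=PickleSerializer())  # values are pickled on set, unpickled on get
async def get_config():
    # mutating the returned dict won't touch the cached copy
    return {"a": 1, "b": 2}
```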
@argaen, thanks for the detailed answer, but I disagree with some points, so let's discuss.
Overall, the current solution looks like a limitation of the current architecture, so maybe we should try to change it a bit. For example, make serialization optional, or assume that every backend has its own default serializer. Of course, that's only if you agree on the object-mutability point; otherwise everything is OK, but, in my opinion, rather unintuitive.
You raise a good point here about asyncio and serialization being a bottleneck for the reactor. I'm going to check how other libraries deal with in-memory caching. If the majority go for not using a deep copy, I'll do the same, which also ends up being more intuitive for the rest of the users. Anyway, if we end up doing this, this is how I imagine the end picture:
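Something along these lines (a sketch, not a final API; it assumes a do-nothing NullSerializer for the memory backend, while networked backends keep a real serializer):

```python
from aiocache import SimpleMemoryCache, RedisCache
from aiocache.serializers import NullSerializer, JsonSerializer

# In-process backend: objects stored by reference, no serialization cost.
memory_cache = SimpleMemoryCache(serializer=NullSerializer())

# Networked backend: data crosses a socket, so a real serializer stays the default.
redis_cache = RedisCache(serializer=JsonSerializer())
```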
Makes sense? Also, just to have one more opinion on that, @pfreixes, what do you think?
@argaen yes, I agree, let's check libraries. One example is CacheManager, a popular caching library for .NET: https://github.com/MichaCo/CacheManager. It's not Python, but still a useful reference.
Yup, and the Python standard library does the same:

```python
from functools import lru_cache

@lru_cache()
def what():
    return [2]

def call():
    res = what()
    print(res)  # output is [2]
    res.append(3)
    res.append(5)
    res.append(8)
    another_res = what()
    print(another_res)  # output is [2, 3, 5, 8] -- same list object, mutated in place

call()
```

If the Python standard library does that, I'm going for it too :)
So, TODO: make the in-memory cache skip serialization by default (a NullSerializer).
Hey @xdanmer, can you give master a try and see if the behavior is now the expected one?
@argaen, yes, the behavior is as expected in the latest master. NullSerializer works by default for the memory cache. Thank you!
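For anyone landing here later, a quick way to check that behavior (a sketch; it assumes SimpleMemoryCache on master now defaults to NullSerializer):

```python
import asyncio
from aiocache import SimpleMemoryCache

async def main():
    cache = SimpleMemoryCache()  # should default to NullSerializer on master
    original = {"a": 1}
    await cache.set("key", original)
    value = await cache.get("key")
    # With no serialization, the cache hands back the same object,
    # matching the functools.lru_cache semantics discussed above.
    assert value is original

asyncio.run(main())
```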