This repository has been archived by the owner on Feb 16, 2022. It is now read-only.
Hi. Thank you for this nice package. I had tried many 'repository' packages, but none of them had a flexible/granular caching system.
It's OK to have a default storage and lifetime, but that's not enough.
In an app I have different queries on the same repository that need different lifetimes. The least-accessed data can have a longer or infinite lifetime and can be stored in files. This cached data doesn't need to be deleted/regenerated every time the model changes.
Ex: Average product price data generated twice a month.
Ex: The product supplier list doesn't change often. This could be cached in memcached. The LRU algorithm in memcached doesn't cause performance issues if this data is evicted from the cache.
The most-accessed data should be cached in Redis with a shorter lifetime.
Ex: Prices change every week. Stock changes every day. Product descriptions are accessed all the time.
A good caching system would allow selecting the storage/lifetime for a specific query. If none is selected, the default is used.
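To illustrate the idea of per-query overrides with a default fallback, here is a minimal Python sketch. All names here (`CacheBackend`, `Repository`, `remember`) are hypothetical and not from this package's actual API; a `ttl` of `None` stands for "cache forever", and the `...` sentinel means "caller didn't override, use the default".

```python
import time

class CacheBackend:
    """Minimal in-memory cache backend with per-entry TTL.
    Stands in for a file, memcached, or Redis store."""
    def __init__(self):
        self._store = {}

    def put(self, key, value, ttl):
        # ttl=None means cache forever
        expires = None if ttl is None else time.monotonic() + ttl
        self._store[key] = (value, expires)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if expires is not None and time.monotonic() > expires:
            del self._store[key]  # expired: drop and report a miss
            return None
        return value

class Repository:
    """Repository with a default cache store/lifetime that
    individual queries can override."""
    def __init__(self, default_store, default_ttl):
        self.default_store = default_store
        self.default_ttl = default_ttl

    def remember(self, key, compute, store=None, ttl=...):
        store = store or self.default_store
        if ttl is ...:  # sentinel: caller didn't pass a lifetime
            ttl = self.default_ttl
        cached = store.get(key)
        if cached is not None:
            return cached
        value = compute()
        store.put(key, value, ttl)
        return value
```

A caller could then keep hot data in the default (Redis-like) store while pinning rare queries elsewhere, e.g. `repo.remember("avg_price", compute_avg, store=file_store, ttl=None)`.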
Keep up the good work.
Cheers
Hello Dear @rsdev000,
Thank you for this nice proposal, I can see the use case of more flexible and granular caching.
For now, I can guarantee adding lifetime support for every function call. That would be nice and won't add much overall complexity. Let me check the feasibility of supporting a different driver per method and get back to you soon 👍
Regards, Omran