This repository has been archived by the owner on Dec 4, 2023. It is now read-only.
At the moment this Prisma middleware isn't very flexible and I feel that the caching API isn't that nice to use.

I want to update the API to possibly look like this:

- `models`: Array of objects. The `key` is the `model` to cache and the `value` is the `time` to cache it.
- `excludeModels`: Array of strings. These models will NOT be cached.
- `redis`: Optional. Falls back to using an in-memory `LRU` cache.
- `cacheTime`: Still an integer.
- `excludeCacheMethods`: Still an array of methods to ignore when caching.

Example 1:
You can invoke the middleware while leaving the `models` out, which will cause all models to be cached based on `cacheTime`.

And if you leave the `redis` instance out, this should fall back to using an in-memory `LRU` cache. This should make it easier for someone to test this in development without needing to configure or install Redis.

Thoughts: In such a case it should print out a `warning` stating that an in-memory cache is being used and that you should pass a Redis instance to persist the cache.

Example 2:
You can specify the caching time per model.

The global `cacheTime` middleware option is then used for any other model that isn't explicitly specified in `models`.

You can then pass a value of `0` for `cacheTime`, or leave the `cacheTime` option out, to stop it from caching any models you didn't explicitly specify.

Some thoughts about the `models` API

I don't know if I want the models to have an API that looks like this.
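(The code snippet that originally followed here was lost when the page was scraped. A plausible reconstruction of the first shape — purely hypothetical, including the type names — is a plain object mapping model names to cache times:)

```typescript
// Hypothetical reconstruction of shape 1: a plain object keyed by model
// name, where each value is the cache time (in seconds) for that model.
type ModelCacheMap = Record<string, number>;

interface MiddlewareOptions {
  models?: ModelCacheMap;
  cacheTime?: number; // global fallback for models not listed above
}

const options: MiddlewareOptions = {
  models: {
    User: 60,  // cache User queries for 60 seconds
    Post: 120, // cache Post queries for 120 seconds
  },
  cacheTime: 30,
};
```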
Or like this:
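(This snippet was also lost in scraping. Given the "slightly verbose" comment below, the second shape was presumably the more explicit one — a hypothetical reconstruction as an array of objects, one entry per model:)

```typescript
// Hypothetical reconstruction of shape 2: an array of objects,
// one entry per model, with the model name spelled out explicitly.
interface ModelCacheEntry {
  model: string;     // name of the model to cache
  cacheTime: number; // seconds to cache results for this model
}

const modelsOption: ModelCacheEntry[] = [
  { model: "User", cacheTime: 60 },
  { model: "Post", cacheTime: 120 },
];
```

The array-of-objects shape is easier to extend later (e.g. an optional per-model cache key), but it costs a few extra characters per entry.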
I like the 2nd one, but it also feels slightly verbose... 🤔
Some other thoughts.
- Should I just ignore `cacheTime` if there is a `models` value (which will include a caching value per model)? Or should I use it for any model that isn't specified?
- Allowing someone to specify their own caching key per model...? yikes
- Allowing an option called `excludeModels` which could be an Array of strings which will NOT be cached?
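Pulling the proposal together, here is a hypothetical sketch of the full options surface and of the "global `cacheTime` as fallback" resolution strategy discussed above. All names are illustrative, not the middleware's actual API:

```typescript
// Hypothetical options surface for the proposed caching middleware.
interface CacheOptions {
  models?: Record<string, number>; // per-model cache time (seconds)
  excludeModels?: string[];        // models that will NOT be cached
  excludeCacheMethods?: string[];  // methods to ignore when caching
  cacheTime?: number;              // global fallback cache time
  redis?: unknown;                 // optional Redis client; when absent,
                                   // fall back to an in-memory LRU cache
                                   // and print a warning that the cache
                                   // will not persist
}

// One possible resolution strategy: per-model time wins, then the global
// cacheTime, then 0 (no caching) when neither is given.
function resolveCacheTime(model: string, opts: CacheOptions): number {
  if (opts.excludeModels?.includes(model)) return 0; // never cached
  return opts.models?.[model] ?? opts.cacheTime ?? 0;
}

const opts: CacheOptions = {
  models: { User: 60 },
  excludeModels: ["Session"],
  cacheTime: 30,
  // `redis` left out: the middleware would use the in-memory LRU fallback.
};
```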