Support multiple caching mechanisms #16
Thanks @aequasi! We tested Local Storage and IndexedDB when we started implementing this on the ZEIT dashboard, and we learned many things from it.
But it might be a good idea to support cache fallback (e.g. Memory -> IndexedDB), expiration, and other features. Anyway, supporting multiple caching mechanisms is definitely an important option. 👍
I’d love to see a layered cache architecture. For speed and such, I think the memory cache should always be the top level, and the actual network requests the lowest level. The ability to then add intermediate cache levels would be very powerful (for my use cases at least). In other words, the first cache that gets checked should remain the in-memory map, which is sync and ensures the above list works as expected, but adding something like IndexedDB or WebSQL as a layer before making a network request would be great.
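A minimal sketch of that layered idea, assuming a hypothetical `CacheLayer` interface; this is not SWR's API, just an illustration of memory-first lookup with slower fallbacks and back-filling on a hit:

```ts
// Hypothetical interface; not part of SWR.
interface CacheLayer {
  get(key: string): Promise<unknown | undefined>;
  set(key: string, value: unknown): Promise<void>;
}

// Checks layers in order (memory first), falls back to the fetcher,
// and back-fills every faster layer on the way out.
async function layeredGet(
  key: string,
  layers: CacheLayer[],
  fetcher: (key: string) => Promise<unknown>
): Promise<unknown> {
  for (let i = 0; i < layers.length; i++) {
    const hit = await layers[i].get(key);
    if (hit !== undefined) {
      // Promote the value into the faster layers above this one.
      await Promise.all(layers.slice(0, i).map((layer) => layer.set(key, hit)));
      return hit;
    }
  }
  const fresh = await fetcher(key);
  await Promise.all(layers.map((layer) => layer.set(key, fresh)));
  return fresh;
}
```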
Fair points @quietshu! Regarding IndexedDB, does it have to return synchronously? Presumably, if you want to use IndexedDB, you have to accept the fact that it's asynchronous, and you have to have the 'non-data' view.
It's okay to have an async cache, and we can also read from the memory cache because it's always a stream 👍 That's why a layered cache architecture (as @netspencer mentioned too 🙏) can be very helpful.
In case it's helpful, here's useSWR with localforage as a persistent cache fallback that I use: https://gist.github.com/derindutz/179990f266e25306601dd53b8fbd8c6a. You can swap in whichever caching mechanism you want. I'm using it in production, so if you have any suggestions for improvement I'm all ears 😊
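Roughly how such a fallback can be wired up; `usePersistedSWR` is a hypothetical name and the linked gist remains the authoritative version, but the idea is to seed SWR from localforage and persist fresh responses:

```tsx
import { useEffect } from "react";
import useSWR from "swr";
import localforage from "localforage";

// Hypothetical wrapper; see the gist above for the real implementation.
function usePersistedSWR<T>(key: string, fetcher: (key: string) => Promise<T>) {
  const swr = useSWR<T>(key, fetcher);

  // Seed the in-memory cache from the persistent store if nothing is cached yet.
  useEffect(() => {
    if (swr.data === undefined) {
      localforage.getItem<T>(key).then((stored) => {
        if (stored !== null && swr.data === undefined) {
          swr.mutate(stored, false); // show stale data; revalidation is already in flight
        }
      });
    }
  }, [key]);

  // Persist every fresh response so it survives a reload.
  useEffect(() => {
    if (swr.data !== undefined) {
      localforage.setItem(key, swr.data);
    }
  }, [key, swr.data]);

  return swr;
}
```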
As an anecdote, we did an extensive experiment with IndexedDB and it was awful from a product perspective. Disks are slow, and customers tend to want strong reads from the kinds of apps and dashboards you use SWR for. There might be some use cases for consumer apps where you're OK restoring old data (like an Instagram feed) and then clearly designating it as offline, where IndexedDB might make sense. But even then I'd carefully consider a "network-first" strategy with a timer and things like that, kind of like how Workbox approaches service workers.
Ideally we’d be able to specify a faster cache (like an LRU or Redis) for SSR though, so the user never sees a flash of no data.
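For reference, the kind of in-process LRU that could back SSR, sketched from scratch here rather than assuming any particular library:

```ts
// Tiny in-memory LRU. For production SSR you would more likely reach for
// an existing LRU package or Redis; this only illustrates the idea.
class LRUCache<K, V> {
  private map = new Map<K, V>();

  constructor(private maxSize: number) {}

  get(key: K): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Re-insert so this key becomes the most recently used.
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // Map iteration order is insertion order, so the first key is the least recently used.
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }
}
```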
We also experimented extensively with SSR for our dashboard, and the calculus is that it's just not worth it. With Suspense + fast hydration + JS budgets + fast APIs, in most cases we'll never render a skeleton anyway: we'll suspend until we have data, then render it all at once in the ideal case. The benefit? You get rid of all the server complexity, you only have a single error-tracking and observability surface (the client side), and your TTFB is always consistently fast globally thanks to the ZEIT CDN :chefkiss:
Hello all, I created this library to manage persistence for Relay & Apollo, and I also used it to manage offline mutations. Let me know if you are interested in integrating it.
Hi guys, premise: in order to manage all of these storages it is necessary to provide a concept of asynchronous restore / hydration (IndexedDB, AsyncStorage, etc.). In this issue I described how this is managed in react-relay-offline. In short, it would be necessary to provide two phases (sketched below).
Both wora/cache-persist and the library mentioned in issue #69 natively manage this behavior. It would be useful to have your input / collaboration on how to create and configure the cache externally, and how to best manage the two phases in useSWR.
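A minimal sketch of those two phases; `memoryCache`, `restoreCache`, and the choice of AsyncStorage are illustrative assumptions, not wora/cache-persist's actual API:

```ts
import AsyncStorage from "@react-native-async-storage/async-storage";

const memoryCache = new Map<string, unknown>();

// Phase 1: asynchronously restore persisted entries into the in-memory cache.
async function restoreCache(): Promise<void> {
  const raw = await AsyncStorage.getItem("app-cache");
  if (raw) {
    for (const [key, value] of Object.entries(JSON.parse(raw))) {
      memoryCache.set(key, value);
    }
  }
}

// Phase 2: only start rendering (and revalidating) once hydration is done.
async function bootstrap(renderApp: () => void): Promise<void> {
  await restoreCache();
  renderApp();
}
```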
Hi @morrys, Thank you for your input! I think what you have proposed consists of these features:
Instead of making all the cache options (IndexedDB, ...) and network detectors built-in, I think the third feature above will be the easiest to implement (just a boolean in the options / config provider). But the first two things still require a lot of changes.
Hi @quietshu, I agree with points 1 and 2, while for point 3 I would suggest thinking about a concept of refresh & fetch policies, as found in Relay & Apollo.
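Purely illustrative of the Relay/Apollo-style policies being referred to; SWR does not expose this API:

```ts
type FetchPolicy = "cache-first" | "cache-and-network" | "network-only";

// Decides whether to serve cached data and whether to hit the network.
function plan(policy: FetchPolicy, hasCachedData: boolean) {
  switch (policy) {
    case "cache-first":
      // Serve the cache if it can; hit the network only on a miss.
      return { useCache: hasCachedData, revalidate: !hasCachedData };
    case "cache-and-network":
      // Show stale data immediately and always revalidate (closest to SWR's default behavior).
      return { useCache: hasCachedData, revalidate: true };
    case "network-only":
      // Ignore the cache entirely.
      return { useCache: false, revalidate: true };
  }
}
```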
As for the integration of my library, only a few steps are needed (points 1 & 2 are managed by the library).
I think caching on the web & in React Native is a common problem across open source projects, and that's why I created this library. So any feedback from you will be very important to me. Thanks!
Thanks @morrys! First of all, Redux is very lightweight. It's just a 3 kB lib (like SWR), but most importantly it's just a simple and powerful concept (the reducer), and there are a lot of plugins and libs built around that basic concept. Same for this lib: "SWR" is the concept of stale-while-revalidate, in which data is served from the cache first (stale), the request is sent (revalidate), and the fresh data arrives last.
That's why I think we need to make each part customizable instead of extending the SWR concept with more features. And I believe the scenarios you provided can be implemented with those two APIs too. For sure we can also make it easier for plugins to extend this lib.
Exactly @quietshu :) The important thing is to consider the possibility of extending the concept. I think some things should have been handled natively in Redux (but that's another story). At this point I would say the main theme is making the cache customizable. Is it a work in progress?
I'm working on a React Native app that needs to be synced with the server all the time, but the frequency of data changes for some of it is low, so we can show the cached version while revalidating (even if the user is offline). SWR stores data in memory, and that data will be lost in some situations, like closing the app; it's not offline-capable either. I read the whole thread and there are some valid points. Unfortunately I had to switch to my own SWR-alternative solution, which initializes from storage (and puts it in memory), reads/writes from memory, syncs with storage after revalidating (write), and then exposes the same API as SWR. I think SWR can fix this by separating the cache layer, because the only thing I had to change was the cache location. By providing some APIs, we could keep the built-in memory cache and at the same time let others implement their own cache layer, like IndexedDB, as a separate package.
The cache layer is already separate; you can import the cache object from SWR directly. What I do to keep the SWR cache in sync with offline-available storage (on the web with localStorage; in RN you can use AsyncStorage, I think) is to subscribe to cache changes and update the storage when something changes there, and before rendering the app I read from that storage and fill the SWR cache with the data. I built this library https://github.com/sergiodxa/swr-sync-storage to sync with the WebStorage APIs; you can probably use something similar to sync with RN AsyncStorage or IndexedDB or another option. This is a great way to work, actually, because some of those storage options are async by nature but you want to be able to read from the cache in a sync way. That's why the SWR cache is completely sync for reads: it allows SWR to read immediately from the cache when data is already cached and revalidate in an async way; if SWR had to read from an async cache, it would always have to render without data first. What you want to add here is a second cache layer, used for more persistent cache (in your case offline), so your AsyncStorage could be this second layer, which you update from the first layer and which you use to fill the first layer before starting your app.
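A sketch of that two-layer sync, assuming a cache object with get/set/keys/subscribe; the exact method names in SWR may differ, and swr-sync-storage is the real implementation:

```ts
// Assumed cache shape for illustration only.
interface SyncCache {
  get(key: string): unknown;
  set(key: string, value: unknown): void;
  keys(): string[];
  subscribe(listener: () => void): () => void;
}

function syncWithLocalStorage(cache: SyncCache, prefix = "swr-") {
  // Fill the in-memory cache from localStorage before the app renders.
  for (let i = 0; i < localStorage.length; i++) {
    const storageKey = localStorage.key(i);
    if (storageKey?.startsWith(prefix)) {
      const raw = localStorage.getItem(storageKey);
      if (raw !== null) cache.set(storageKey.slice(prefix.length), JSON.parse(raw));
    }
  }

  // Write-through: whenever anything in the cache changes, persist every key.
  return cache.subscribe(() => {
    for (const key of cache.keys()) {
      localStorage.setItem(prefix + key, JSON.stringify(cache.get(key)));
    }
  });
}
```

Note that without knowing which key changed, the subscriber has to rewrite every entry, which is exactly the efficiency concern raised in the next comment.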
Interesting @sergiodxa, thanks for sharing. I checked your code out; my only concern is that it doesn't seem possible to keep the two layers in sync efficiently right now, because SWR's cache subscription doesn't tell you which key changed. As others mentioned, IO is a heavy task, and I'm not sure how performance will be for applications with a large cache or lots of data. What I noticed is that there is currently no way to react to a single updated entry.
There is a PR I opened (#365) to allow subscribers to know the updated key; that will improve the way to do this kind of second-cache-layer support.
Closing this issue as I believe most of our goals are covered by the 1.0 release: https://swr.vercel.app/blog/swr-v1. |
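For anyone landing here later, the 1.0 cache provider lets you swap in any Map-like cache; a localStorage-persisted provider along the lines of the official docs might look like this (check the linked release notes for the authoritative API):

```tsx
import * as React from "react";
import { SWRConfig } from "swr";

// Map-backed cache, restored from localStorage on startup and written back on unload.
function localStorageProvider(): Map<string, any> {
  const map = new Map<string, any>(JSON.parse(localStorage.getItem("app-cache") || "[]"));
  window.addEventListener("beforeunload", () => {
    localStorage.setItem("app-cache", JSON.stringify(Array.from(map.entries())));
  });
  return map;
}

export function App({ children }: { children: React.ReactNode }) {
  return <SWRConfig value={{ provider: localStorageProvider }}>{children}</SWRConfig>;
}
```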
Frontend-based:
Backend-based:
Suggestion
My suggestion would be to allow providing a class that implements a new interface with an API similar to Map (get, set, delete?), defaulting to a MemoryCache that just wraps a Map.
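One possible shape for that interface, with illustrative names only:

```ts
// Hypothetical interface; the maintainers' final design may differ.
interface Cache<V = unknown> {
  get(key: string): V | undefined;
  set(key: string, value: V): void;
  delete(key: string): void;
}

// Default implementation that just wraps a Map, as suggested above.
class MemoryCache<V = unknown> implements Cache<V> {
  private store = new Map<string, V>();

  get(key: string): V | undefined {
    return this.store.get(key);
  }

  set(key: string, value: V): void {
    this.store.set(key, value);
  }

  delete(key: string): void {
    this.store.delete(key);
  }
}
```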
Side Notes
Frontend based caches could help add support for persistence.
Backend based caches could help with adding support for SSR.
This could also open the door to supporting TTLs if that's desired, potentially solving #4.
It could also solve #7 by allowing users to track their own namespaces.
As suggested by @netspencer, a layered cache architecture could be awesome too
After reading #4 it looks like you might already be planning this, which is awesome!