RFC: Fragment-level caching #85
Comments
@mxstbr While you're involved in the list issue, this is of course tangential, but do you think this and the smart list would be sufficient solutions for Spectrum as a whole, just as an example?
Yeah probably! I'm not 100% aware of the tradeoffs between this and the current cache, but we use fragments all over the place, so it should be fine?
@mxstbr it's a concept to run alongside the current query-level cache that stores entire query results by a hash key. So we'd have a separate fragment-level cache with a separate component to just access the cached fragments.
Is this necessary for optimistic updates? Couldn't we provide an optimistic update resolver function to the mutation that resolves immediately, before the actual response comes back from the server?
@blorenz I'd consider that something that's quite simple to do with a component's local state, so it's probably not that useful on its own, considering how specific urql's cache is to individual queries.
@kitten Thank you for the guidance. Do you have any best practices for using urql to drive data from local state? I imagine this should be handled with componentDidUpdate() and an assortment of logic on the urql-injected props. I only ask because when I was initially using Apollo, I tried to direct the data through Redux, which in turn led to a mess. I then realised that I did not need to manage the data via Redux itself, and I had a much better experience with Apollo.
If no one is taking this up, I'd like to give it a try. It would be for next week, though, so if anyone else is picking it up, feel free to say so.
@JoviDeCroock awesome! @jevakallio has basically come up with the same / very similar idea independently, which is why I reopened this, but he's thought it through a lot more. So it's safer to wait, since he'll post an extended, new RFC here which we can discuss first 🙌
@kitten @JoviDeCroock I've written up my fragment proposal at #317. Any discussion regarding the proposal itself should live in that RFC. Any discussion regarding fragment-level caching more broadly should remain in this issue.
To fully inform the discussion, can I request a little more documentation about how the `cacheExchange` currently works? I went through the code but couldn't quite grok how the default is really designed to operate.
Closing in favour of #317 |
Instead of shipping a complex, normalising cache by default (it might still be implemented as a third-party exchange, I suppose), we can ship a simple fragment-level cache.
This caching strategy would live alongside the current query-level cache. It can also use the same store/cache entity and just use separate keys.
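To make the "same store, separate keys" idea concrete, here is a hedged sketch; all names below are invented for illustration and are not part of urql's API:

```ts
// Illustrative only: one store holding both query-level and
// fragment-level entries, distinguished purely by their key shape.
const store = new Map<string, unknown>();

// Query-level entries are keyed by a hash of the full query + variables
// (roughly what the existing query cache does).
const queryKey = (queryHash: string) => `query:${queryHash}`;

// Fragment-level entries are keyed by fragment name plus the entity's id,
// so they can be read without knowing which query produced them.
const fragmentKey = (fragmentName: string, id: string) =>
  `fragment:${fragmentName}:${id}`;

// Example keys living side by side in the same store:
store.set(queryKey('a1b2c3'), { todos: [{ id: '1', text: 'Buy milk' }] });
store.set(fragmentKey('TodoFields', '1'), { id: '1', text: 'Buy milk' });
```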
Strategy

A field with an explicit null argument is treated the same as the field without it, i.e. `todo(x: null) {}` is assumed to be the same as `todo {}`. The most recent version of a cached fragment will always be available.
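The full write strategy isn't spelled out here, but the "most recent version always available" behaviour suggests a simple last-write-wins approach. A guess at that write path, reusing the hypothetical `store` and `fragmentKey` from the sketch above:

```ts
// Guessed write path: on every query result, each fragment's data is
// written to the store, replacing whatever was there before, so reads
// always return the latest version of a fragment.
function writeFragment(fragmentName: string, id: string, data: unknown) {
  store.set(fragmentKey(fragmentName, id), data);
}

function readFragment(fragmentName: string, id: string) {
  return store.get(fragmentKey(fragmentName, id));
}
```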
Usage
Once this cache is in place, we're able to integrate primitive APIs to access these cached fragments. As a high-level API, we can introduce a `FragmentCache` component, used like in this example: https://gist.github.com/kitten/3a9136fa731678838013292a32f7daaa
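The gist above is the authoritative example; as a purely hypothetical sketch of the shape such a component could take (the `FragmentCache` props and render-prop API below are guesses, not shipped urql API):

```tsx
import React from 'react';
// `FragmentCache` does not exist yet; it is the component proposed above.
// Its props and render-prop signature here are guesses for illustration.

const TODO_FIELDS = `
  fragment TodoFields on Todo {
    id
    text
    completed
  }
`;

const TodoPreview = ({ id }: { id: string }) => (
  <FragmentCache fragment={TODO_FIELDS} id={id}>
    {(data?: { text: string }) =>
      data ? <span>{data.text}</span> : <span>Nothing cached yet</span>
    }
  </FragmentCache>
);
```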
Optimistic updates can also be introduced at this point. A mutation could specify `optimisticFragments`, which provides a list of optimistic responses from the mutation that only update the fragment-level cache.
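Purely as a sketch of what that option could look like on a mutation (the `optimisticFragments` shape below is a guess at the proposal, not a shipped API):

```ts
// Hypothetical: a mutation that also carries optimisticFragments, i.e.
// per-fragment optimistic data that is written to the fragment-level
// cache immediately, before the real response arrives from the server.
const addTodoMutation = {
  query: `
    mutation AddTodo($text: String!) {
      addTodo(text: $text) {
        id
        text
        completed
      }
    }
  `,
  optimisticFragments: {
    // Keyed by fragment name; each entry produces the optimistic data
    // for that fragment based on the mutation's variables.
    TodoFields: (variables: { text: string }) => ({
      id: 'optimistic-id',
      text: variables.text,
      completed: false,
    }),
  },
};
```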
Why?

How?
We can maybe implement this as a separate package first, since it's relatively contained, so that it's possible to try this concept out without changing urql's public API at all.
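As a rough, hedged skeleton of what such a standalone exchange could look like, assuming the wonka-based `Exchange` signature urql exposes today (which may differ from the exchange API at the time of this RFC), with the actual fragment extraction left as a stub:

```ts
import { pipe, map } from 'wonka';
import { Exchange } from '@urql/core';

// Sketch: forward every operation unchanged and, on each result, walk
// the data to pick out fragment selections and store them in a
// fragment-level cache (the extraction itself is omitted here).
export const fragmentCacheExchange: Exchange = ({ forward }) => ops$ =>
  pipe(
    forward(ops$),
    map(result => {
      // TODO: extract fragments from result.data and write them into the
      // fragment-level cache, keyed by fragment name + entity id.
      return result;
    })
  );
```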
It could be published as `urql-exchange-fragment-cache` or something similar.

cc @kenwheeler