RFC: Fragment-level caching #85

Closed
kitten opened this issue May 25, 2018 · 13 comments
@kitten (Member) commented May 25, 2018

Instead of shipping a complex, normalising cache by default (it could still be implemented as a third-party exchange), we can ship a simple fragment-level cache.

This caching strategy would live alongside the current query-level cache. It can also use the same store/cache entity and just use separate keys.

Strategy

  • Detect queries containing fragments
  • Inspect each response for each of the given fragments
  • Cache given fragments by "Fragment + Typename + ID" (default key extractor)
  • Don't invalidate these keys the way cached queries are invalidated; instead, keep them up-to-date with each incoming response

Hence the most recent version of a cached fragment will always be available.
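The strategy above could be sketched roughly like this. This is a hypothetical, minimal sketch of the "Fragment + Typename + ID" default key extractor and the keep-up-to-date write path; none of these names (`getFragmentKey`, `writeFragment`, `readFragment`) are actual urql APIs.

```javascript
// Hypothetical sketch: a cached fragment is stored under
// "FragmentName:Typename:id" (the default key extractor described above).
const getFragmentKey = (fragmentName, entity) =>
  `${fragmentName}:${entity.__typename}:${entity.id}`;

// In-memory store; could share the same store/cache entity as the
// query-level cache, just with separate keys.
const fragmentCache = new Map();

// On every incoming response, matching fragments are (re)written under
// their keys, keeping each cached fragment up to date rather than
// invalidating it.
const writeFragment = (fragmentName, entity) => {
  fragmentCache.set(getFragmentKey(fragmentName, entity), entity);
};

const readFragment = (fragmentName, ref) =>
  fragmentCache.get(getFragmentKey(fragmentName, ref));
```

For example, after `writeFragment('ItemFields', { __typename: 'Item', id: 'test', name: 'A' })`, a later `readFragment('ItemFields', { __typename: 'Item', id: 'test' })` returns the most recent data written for that entity.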

Usage

Once this cache is in place, we're able to add primitive APIs for accessing these cached fragments. At the high-level API, we can introduce a FragmentCache component, used like so:

<FragmentCache fragment={fragmentStr} query={{ __typename: 'Item', id: 'test' }}>
  {({ data, stale }) => (
    // render UI from the cached fragment; `stale` flags outdated entries
    data ? <div>{data.name}</div> : null
  )}
</FragmentCache>

An example can be found here: https://gist.github.com/kitten/3a9136fa731678838013292a32f7daaa

Optimistic updates can also be introduced at this point. A mutation could specify optimisticFragments: a list of optimistic responses from the mutation that only update the fragment-level cache.
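As a sketch of what that could look like: the shape below is purely illustrative, and `optimisticFragments` is a proposed name, not an actual urql API. The idea is that each entry is written to the fragment cache immediately, before the server responds.

```javascript
// Illustrative only: `optimisticFragments` is the proposed option name,
// not a real urql API. Each entry names a fragment and provides the
// optimistic entity data written eagerly to the fragment-level cache.
const toggleTodoMutation = {
  query: `
    mutation ($id: ID!) {
      toggleTodo(id: $id) {
        id
        completed
      }
    }
  `,
  optimisticFragments: [
    {
      fragment: 'TodoFields',
      data: { __typename: 'Todo', id: 'test', completed: true },
    },
  ],
};
```

When the real response arrives, it would overwrite the optimistic entry through the normal keep-up-to-date write path, so the fragment cache converges on server state.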

Why?

  • We'd avoid a complex cache by default and instead have two highly predictable caches
  • We still provide the ability to solve the "90%" case: accessing the cache explicitly while a query is still loading, and other custom use cases
  • The cache component can be adapted to also work for the query component, thus giving us some breathing room for changing the default cache/network-control behaviour (cache-first instead of cache-only?)

How?

We could implement this as a separate package first, since it's relatively contained, so that the concept can be tried out without changing urql's public API at all.

It could be published as urql-exchange-fragment-cache or something similar.

cc @kenwheeler

@kitten (Member Author) commented May 25, 2018

@mxstbr Since you're involved in the list issue (this is tangential, of course): do you think this plus the smart list would be a sufficient solution for Spectrum as a whole, just as an example?

@mxstbr (Contributor) commented May 25, 2018

Yeah probably! I'm not 100% aware of the tradeoffs between this and the current cache, but we use fragments all over the place, so it should be fine?

@kitten (Member Author) commented May 25, 2018

@mxstbr it's a concept that runs alongside the current query-level cache, which stores entire query results by a hash key. So we'd have a separate fragment-level cache with a separate component to access just the cached fragments.
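The contrast between the two caches can be sketched like this. Keys and names here are simplified illustrations, not urql internals: the point is only that the query-level cache stores whole responses under a hash of query plus variables, while the fragment cache keys individual entities.

```javascript
// Illustrative contrast of the two caches (key schemes simplified):
// queryCache holds entire results, invalidated wholesale on mutations;
// fragmentCache holds per-entity fragments, kept up to date instead.
const queryKey = (query, variables) =>
  `${query}|${JSON.stringify(variables)}`;

const queryCache = new Map();
const fragmentCache = new Map();

const result = { item: { __typename: 'Item', id: 'test', name: 'A' } };

queryCache.set(queryKey('{ item(id: "test") { id name } }', {}), result);
fragmentCache.set('ItemFields:Item:test', result.item);
```

A FragmentCache component would read from the second map directly, even while the first is empty or a query is still in flight.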

@blorenz commented Jun 15, 2018

Is this necessary for optimistic updates? Couldn't we provide an optimistic update resolver function to the mutation that would immediately resolve prior to the resolved response from the server?

@kitten (Member Author) commented Jun 15, 2018

@blorenz I'd consider that something that's quite simple to do with a component's local state, so it's probably not that useful on its own, considering how specific urql's cache is to individual queries.

@blorenz commented Jun 19, 2018

@kitten Thank you for the guidance. Do you have any best practices for using urql to drive data from local state? I imagine this should be handled with componentDidUpdate() and an assortment of logic on the urql-injected props. I only ask because when I was initially using Apollo, I tried to direct the data through Redux, which led to a mess. Once I realized I did not need to manage the data via Redux itself, I had a much better experience with Apollo.

@kitten kitten added the future 🔮 An enhancement or feature proposal that will be addressed after the next release label Jun 28, 2018
@kitten kitten closed this as completed Nov 19, 2018
@kitten kitten reopened this Jun 25, 2019
@JoviDeCroock (Collaborator)
If no one is taking this up, I'd like to give it a try. It would be for next week, though, so if anyone else wants to take it, feel free to say so.

@kitten (Member Author) commented Jun 25, 2019

@JoviDeCroock awesome! @jevakallio has basically come up with the same / very similar idea independently which is why I reopened this, but he's thought it through a lot more. So it's safer to wait since he'll post an extended and new RFC here which we can discuss first 🙌

@jevakallio (Contributor) commented Jun 25, 2019

@kitten @JoviDeCroock I've written up my fragment proposal here at #317

Any discussion regarding the proposal itself should live in that RFC.

Any discussion regarding fragment-level caching more broadly should remain in this issue.

@allpwrfulroot
To fully inform the discussion, can I request a little more documentation on how the cacheExchange currently works? I went through the code but couldn't quite grok how the default is designed to operate.

@kitten (Member Author) commented Aug 1, 2019

Closing in favour of #317

@kitten kitten closed this as completed Aug 1, 2019
@kitten kitten removed the future 🔮 An enhancement or feature proposal that will be addressed after the next release label Aug 1, 2019
kitten added a commit that referenced this issue Feb 5, 2020
This means that `todo(x: null) {}` is assumed to be
the same as `todo {}`
kitten pushed a commit that referenced this issue Feb 14, 2020