
Is it possible to persist in localstorage (or similar) until offline is ready? #699

Closed
frederikhors opened this issue Apr 15, 2020 · 15 comments

Comments

@frederikhors
Contributor

frederikhors commented Apr 15, 2020

I read the docs on the website and the offline RFC issue.

But then I saw this code in this file: https://github.com/FormidableLabs/urql/blob/master/exchanges/graphcache/src/cacheExchange.ts

export interface CacheExchangeOpts {
  updates?: Partial<UpdatesConfig>;
  resolvers?: ResolverConfig;
  optimistic?: OptimisticMutationConfig;
  keys?: KeyingConfig;
  schema?: IntrospectionQuery;
  storage?: StorageAdapter;
}

So the question is:

Until #683 is ready, is it possible to use localstorage today at least temporarily?

Something like: https://github.com/apollographql/apollo-cache-persist?

@kitten
Member

kitten commented Apr 15, 2020

The persistence is indeed already functional and you can find an example of this in our persisted-store sample app: https://github.com/FormidableLabs/urql/blob/master/exchanges/graphcache/examples/persisted-store/src/app/index.tsx

That persistence is fully functional but we may make small tweaks to the API as we’re adding full offline support.
Overall it should be stable and work correctly, but please treat it as experimental until it’s documented and announced ♥️

When you use it, it should fully work as expected, minus what’s mentioned in the RFC, e.g. optimistic mutations having special handling when a mutation fails due to being offline, or certain errored results being ignored.
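For anyone landing here: the `storage` option only expects an object with `read` and `write` methods. Here is a minimal in-memory sketch of that shape (the sample app backs it with idb instead; `createMemoryStorage` is just an illustrative name, not part of the library):

```javascript
// Minimal in-memory adapter matching the StorageAdapter shape that
// Graphcache's `storage` option expects. A real adapter would back
// this with localStorage or IndexedDB; the plain object here is
// only for illustration.
const createMemoryStorage = () => {
  let store = {};
  return {
    // Called on startup to rehydrate the cache.
    read: async () => ({ ...store }),
    // Called with batches of key/value diffs; `undefined` means delete.
    write: async (batch) => {
      for (const key in batch) {
        if (batch[key] === undefined) delete store[key];
        else store[key] = batch[key];
      }
    },
  };
};

// Hypothetical wiring (assuming the usual cacheExchange import):
// cacheExchange({ storage: createMemoryStorage() })
```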

@frederikhors
Contributor Author

I'm rewriting a small app with @urql/svelte and not only does it work very well: I'm having a lot of fun!

I ask you to wait a little longer before closing this: I will try this today.

@frederikhors
Contributor Author

I also added @urql/svelte to the official community repo: https://github.com/sveltejs/community/pull/167!
😄

@kitten
Member

kitten commented Apr 18, 2020

I ask you to wait a little longer before closing this: I will try this today.

Well, the question itself is answered, so we typically close the issue out after a response.

But don’t worry, you can keep commenting and ask more questions if you need, even when the issue is closed! 🙌

Btw, I’ve noticed you’re opening a lot of issues that end up being questions rather than bugs or actual issues. Maybe you’ll want to take a look at our Spectrum community? It may be a better place for us to help you with questions since it’s more of a message board / forum.

@kitten kitten closed this as completed Apr 18, 2020
@frederikhors
Contributor Author

I read all about the two types of cache (even the long issues on GitHub).

I saw the example you indicated and tried it.

Questions:

  1. Why do you use idb? Is it a temporary choice?

  2. Using that example (REPL here: https://codesandbox.io/s/urql-svelte-indexeddb-persistence-cache-not-readed-03f5g?file=/Todos.svelte, if it doesn't work please open in full window here: https://03f5g.csb.app):

    1. if you query the list of todos the first time (they are consequently saved in indexedDB)

    2. and then you disconnect from the internet or change requestPolicy to cache-only, it doesn't work at all (test in the REPL, image below)

    3. you can see in console: $todos: data: null, ..., fetching: false, stale: false; why?

    4. shouldn't it get them out of the cache? Isn't there a rehydration phase when I reload the page? Isn't that the job of `const storage = { read: async () => {...`?

  3. By adding both urql/exchange-graphcache and idb to a project that only needs urql/svelte there is an increase in the final bundle size of 61 KB (from 126 KB to 187 KB).

    1. This is wasted if all I want for this app is to use the "Document cache" because

    2. this is a small app which mostly queries few static contents updated about every month.

    3. is there a way today to persist the "Document cache" (in localStorage or similar, with idb or similar)?

I think these are common questions that sooner or later will have to be in some FAQ or in the docs (and I gladly offer myself to update it as soon as I know what to write), but the next questions (if any) I will ask on Spectrum, I promise, @kitten. 😄

[image: screenshot of the REPL console output]

@kitten
Member

kitten commented Apr 18, 2020

Why do you use idb? Is it a temporary choice?

As you can see, that’s just code in the example. We don’t enforce any storage backend, so you can swap it out. IndexedDB is the obvious choice because its storage limitations aren’t as strict (at least on most platforms).

idb has been picked because it’s easy to get an example up and running.

I would advise not to store too much in local storage. On some platforms you’ll run into performance limitations and that’s also a reason why the cache only stores diffs over time, and never serialises its entire state.

Things can get big, and our cache is built to handle a large chunk of data, not small results only.

you can see in console: $todos: data: null, ..., fetching: false, stale: false; why?

I can take a look. That’s probably either a usage issue or a small detail that’s missing. I’ll see but it’s hard to tell.

That being said, this stuff is experimental 😅

Edit: ~There may be an issue with cache-only here, since it seems to work with cache-first. This may be a Graphcache issue and I'll look into it tomorrow 👍 That being said, cache-only isn't what you'd probably go for in this case, so if you stick to cache-first it'll be fine 🙌 ~

Edit 2: We've recently changed how our keys are serialized. It seems that this works in some browsers, but not others. At least I'm seeing mangled keys in my testing, so we'll explore a different method. We're currently storing some separators as \t, i.e. tab symbols, to avoid having to escape user input, so that's why this may be happening.

By adding both urql/exchange-graphcache and idb to a project

Yes, obviously each dependency will increase your bundle size, so there are trade-offs.

With Graphcache you gain flexibility for about 10kB minzipped. With idb (there are other solutions) you gain persistence support.

There are likely more gains to be made by having a leaner storage backend.

there is an increase in the final bundle size of 61 KB (from 126 KB to 187 KB).

Please use minified + gzipped sizes 😅 The ecosystem optimises for those.
If you don’t specify whether this is minified or non-minified, then the numbers don’t tell us much. Unminified code can contain a lot of comments and empty lines. Even polyfills may throw you off if they aren’t configured correctly.

The way to go is to specify minzipped (minified + gzipped) sizes for specific modules.

is there a way today to persist the "Document cache" (in localStorage or similar, with idb or similar)?

Yes, urql is customisable to allow for anything to happen that you may want to do 😛 Ok, this is a little cheeky, but what I mean is, take a look at the cacheExchange in core. Adding persistence to that shouldn’t be too complicated overall and we could help with that.

We also hope our docs on exchanges will help you to get started on that, if you’re interested.

But overall, offline doesn’t work well without a normalised cache. Persistence kind of does, but our ambitions are greater than that.

We can’t unfortunately account for every use-case so our extensibility is key here.

@kitten
Member

kitten commented Apr 19, 2020

This PR fixes something unrelated, but there was a mistake in our example app. Just take a look at this diff and the fix will be rather obvious: d75c74a

The type signature here would've helped, but it was a little messed up (and forced anyway). But if you look at what SerializedEntries should be, it's an object of keys and values, and not just an array of IDB values.
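In other words (a sketch assuming the example app's `keyval` store name), `read` should zip `getAllKeys` and `getAll` back into a single object rather than returning the values array directly:

```javascript
// read() must resolve to an object mapping cache keys to values
// (the SerializedEntries shape), not a bare array of IDB values.
// `db` is assumed to be an open idb database with a 'keyval' store.
const read = async (db) => {
  const [keys, values] = await Promise.all([
    db.getAllKeys('keyval'),
    db.getAll('keyval'),
  ]);
  // zip the two parallel arrays into a single { key: value } object
  return keys.reduce((acc, key, i) => {
    acc[key] = values[i];
    return acc;
  }, {});
};
```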

@frederikhors
Contributor Author

I tried the new code and the example works.

Unfortunately it's not as good as I thought.

As you know I'm migrating an app from Apollo to URQL and cache persistence is the last issue left to migrate.

I'm testing it with a big list of players, currently ~400 in a form like this:

{
  "accountID": "0",
  "score": 696694,
  "createdAt": "2020-04-19T10:22:15Z",
  "id": "4740",
  "color": "0",
  "description": null,
  "team": {
    "id": "8",
    "Name": "TeamName",
    "__typename": "Team"
  },
  "doctorID": null,
  "__typename": "Player"
}

Every time I get my query result (using cache-and-network with a manual, fake delay of 5 seconds in my API server, just to test cache-and-network) my UI freezes for about 5 seconds (the same test with svelte-apollo and apollo-cache-persist doesn't freeze at all).

I think the problem is the write() method:

write: async batch => {
  for (const key in batch) {
    const value = batch[key]
    if (value === undefined) {
      db.delete('keyval', key)
    } else {
      db.put('keyval', value, key)
    }
  }
}

and maybe we can optimize it to use requestIdleCallback() or something similar; I don't know how Apollo does this, but its writes are invisible to people like me who care about UI performance.
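That idea can be sketched like this (not Graphcache's actual internals; `persist` stands in for whatever really writes to IndexedDB or localStorage): coalesce incoming batches and flush them once the browser is idle.

```javascript
// Defer via requestIdleCallback where available, falling back to
// setTimeout outside browsers (e.g. in Node).
const defer = typeof requestIdleCallback === 'function'
  ? requestIdleCallback
  : (cb) => setTimeout(cb, 0);

// Wrap a persist function so that rapid successive batches are merged
// and written in a single deferred flush instead of one write each.
const createDeferredWriter = (persist) => {
  let pending = null;
  return (batch) => {
    const firstCall = pending === null;
    pending = Object.assign(pending || {}, batch);
    if (firstCall) {
      defer(() => {
        const toWrite = pending;
        pending = null;
        persist(toWrite);
      });
    }
  };
};
```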

It must be said, however, that with Apollo I'm saving in localStorage and the cache is all JSON (but it's super fast!).

Side note about bundle sizes

I'm always referring to sizes after a sveltejs/template rollup build with terser, analyzed with source-map-explorer (see the npm script in my example project here: https://github.com/frederikhors/svelte-urql-app/blob/master/package.json#L5).

So I think my sizes are only minified not gzipped.

Thanks for your invaluable commitment.

@kitten
Member

kitten commented Apr 19, 2020

Do you have more information here?

fake delay of 5 seconds in my API server [...] my UI freezes for about 5 seconds

So freezes is probably not the right term here, right? You're querying but you're not receiving data for five seconds; that seems about right to me, if your cache is stale.

same test with svelte-apollo and apollo-cache-persist not freezes at all

This seems off; you have a delay of 5 seconds, so the delay is identical, so I don't quite understand what you're asking or what the situation is 😅

and maybe we can optimize it to use requestIdleCallback() or something similar

It's already optimised and we already batch and defer cache writes; this sounds like your five-second delay is just the five-second delay on the server. You should maybe double-check the network tab against the operations.

@frederikhors
Contributor Author

You underestimate me! 😄 LOL! 😄

To explain myself better I created this REPL (but please open it full screen if there are problems).

I installed async-render-toolbox and recorded a gif for you:


[gif: ui_freeze_2, showing the UI freeze]


Is the problem clearer now?

@kitten
Member

kitten commented Apr 19, 2020

You can try swapping out the IDB stuff maybe. I mean 😅 we haven’t written any serious adaptor code yet

Edit: also it may be an issue with not having an IDB transaction there. So, maybe you should drill down into what's going on in the specific parts of the code. The batch we give to the storage adaptor is passed during a small defer, but that obviously won't improve anything if writing with IDB takes 5s.

That’s not part of Graphcache though 😅

@frederikhors
Contributor Author

Okay, now I'm lost.

@frederikhors
Contributor Author

frederikhors commented Apr 20, 2020

I have good news!

I studied your graphcache "temp" APIs, IndexedDB (never used directly before) and idb.

I'm not a good Javascript programmer, for me it's just a hobby (very tiring but pleasant!).

I tried the solutions below; speeds were measured, retrying the same action 10+ times, with:

write: async batch => {
  const start = performance.now()
  // ... code...
  const end = performance.now()
  console.log((end - start) + 'ms')
}
  1. Speed ~970 ms
const storage = {
  read: async () => {
    db = await openDB('myApplication', 1, {
      upgrade: db => db.createObjectStore('keyval')
    })
    return Promise.all([
      db.getAllKeys('keyval'),
      db.getAll('keyval')
    ]).then(([keys, values]) => {
      return keys.reduce((acc, key, i) => {
        acc[key] = values[i]
        return acc
      }, {})
    })
  },
  write: async batch => {
    for (const key in batch) {
      const value = batch[key]
      if (value === undefined) {
        db.delete('keyval', key)
      } else {
        db.put('keyval', value, key)
      }
    }
  }
}
  2. ~775 ms
const storage = {
  read: async () => {
    db = await openDB('myApplication', 1, {
      upgrade: db => db.createObjectStore('keyval')
    })
    return Promise.all([
      db.getAllKeys('keyval'),
      db.getAll('keyval')
    ]).then(([keys, values]) => {
      return keys.reduce((acc, key, i) => {
        acc[key] = values[i]
        return acc
      }, {})
    })
  },
  write: async batch => {
    const tx = db.transaction('keyval', 'readwrite')
    const store = tx.objectStore('keyval')
    for (const key in batch) {
      const value = batch[key]
      if (value === undefined) {
        store.delete(key)
      } else {
        store.put(value, key)
      }
    }
    await tx.done
  }
}
  3. from 14 ms to 51 ms
const storage = {
  read: async () => {
    db = await openDB('myApplication', 1, {
      upgrade: db => db.createObjectStore('myCache')
    })
    return await db.get('myCache', 'general')
  },
  write: async batch => {
    const actual = (await db.get('myCache', 'general')) || {}
    Object.keys(batch).forEach(key => {
      if (batch[key] === undefined) {
        delete actual[key]
      } else {
        actual[key] = batch[key]
      }
    })
    await db.put('myCache', actual, 'general')
  }
}
  4. from 1 ms to 6 ms !!!
const storage = {
  read: async () => {
    const actual = JSON.parse(localStorage.getItem(myKey))
    if (!actual) {
      localStorage.setItem(myKey, JSON.stringify({}))
      return
    }
    return actual
  },
  write: async batch => {
    const actual = JSON.parse(localStorage.getItem(myKey))
    Object.keys(batch).forEach(key => {
      if (batch[key] === undefined) {
        delete actual[key]
      } else {
        actual[key] = batch[key]
      }
    })
    localStorage.setItem(myKey, JSON.stringify(actual))
  }
}
  5. from 1 ms to 6 ms !!! Roughly the same as before
...
write: async batch => {
  const actual = JSON.parse(localStorage.getItem(myKey))
  const keys = Object.keys(batch)
  const l = keys.length
  let i = 0
  for (i; i < l; i++) {
    const val = batch[keys[i]]
    if (val === undefined) {
      delete actual[keys[i]]
    } else {
      actual[keys[i]] = val
    }
  }
  localStorage.setItem(myKey, JSON.stringify(actual))
}
  6. from 1 ms to 5 ms !!! Roughly the same as before, negligibly better
...
write: async batch => {
  const actual = JSON.parse(localStorage.getItem(myKey))
  const keys = Object.keys(batch)
  let i = keys.length
  while (i--) {
    const val = batch[keys[i]]
    if (val === undefined) {
      delete actual[keys[i]]
    } else {
      actual[keys[i]] = val
    }
  }
  localStorage.setItem(myKey, JSON.stringify(actual))
}

Questions

  1. Am I wrong in the algorithm?

  2. How come the latest versions (the ones with localStorage) are faster despite JSON.stringify()?

  3. How come I also see an empty batch pass through write() for components that only use cache-only? And also for components with cache-and-network during the cache phase, while waiting for the network call?

@kitten
Member

kitten commented Apr 20, 2020

I found some articles about IndexedDB, and some single out performance issues in Chrome. They’re quite surprising, so I’ll look into that in the future.

It’s not too surprising that a single LocalStorage write is faster for the small amount of data you have. But it is blocking while IndexedDB shouldn’t be, so in theory we’d have to look at why it’s blocking for this long. We’ll probably look at that once we’re ready to have some adaptors for offline.

We’re currently allowing empty batches to be written, because we weren’t sure whether the same batch was to be used to write in-flight optimistic mutations as well. That’s probably going to go away, though.

@jdgamble555

I can't find this anywhere in the docs. Is it still possible to persist the data using indexedDB?

