Is it possible to persist in localstorage (or similar) until offline is ready? #699
The persistence is indeed already functional and you can find an example of it in one of our examples. That persistence is fully functional, but we may make small tweaks to the API as we're adding full offline support. When you use it, it should work as expected, minus what's mentioned in the RFC, e.g. optimistic mutations having special handling when a mutation fails due to being offline, or ignoring certain errored results.
I'm rewriting a small app with it. I ask you to wait a little longer before closing this: I will try this today.
I also added it.
Well, the question itself is answered, so we typically close the issue out after a response. But don't worry, you can keep commenting and ask more questions if you need to, even when the issue is closed! 🙌 Btw, I've noticed you're opening a lot of issues that end up being questions rather than bugs or actual issues. Maybe you'll want to take a look at our Spectrum community? It may be a better place for us to help you with questions, since it's more of a message board / forum.
I read all about the two types of cache (even the long issues on GitHub). I saw the example you indicated and tried it. Questions:
I think these are common questions that sooner or later will have to be in some FAQ or in the docs (and I gladly offer to update them as soon as I know what to write), but the next questions (if any) I will ask on Spectrum, I promise, @kitten. 😄
As you can see, that's just code in the example. We don't enforce any storage backend, so you can swap it out. IndexedDB is the obvious choice because its storage limitations aren't as strict (at least on most platforms).
I would advise against storing too much in local storage. On some platforms you'll run into performance limitations, and that's also a reason why the cache only stores diffs over time and never serialises its entire state. Things can get big, and our cache is built to handle a large chunk of data, not only small results.
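To make the diff semantics above concrete: each write the cache hands to a storage backend contains only the entries that changed, with `undefined` marking deletions. Here's a minimal in-memory sketch of such a backend (all names here are hypothetical, for illustration only, not part of the actual API):

```javascript
// Hypothetical in-memory storage backend illustrating diff-based writes:
// `write` receives only changed entries; `undefined` marks a deletion.
const createMemoryStorage = () => {
  const data = new Map()
  return {
    // Restore the whole persisted state once at startup.
    read: async () => Object.fromEntries(data),
    // Merge a batch of diffs into the stored state.
    write: async batch => {
      for (const key in batch) {
        if (batch[key] === undefined) data.delete(key)
        else data.set(key, batch[key])
      }
    }
  }
}
```

Because only diffs cross the storage boundary, a large cache never has to be re-serialised wholesale on every update.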
I can take a look. That's probably either a usage issue or a small detail that's missing. I'll see, but it's hard to tell. That being said, this stuff is experimental 😅 Edit: ~~there may be an issue with how the keys are stored.~~ Edit 2: We've recently changed how our keys are serialized. It seems that this works in some browsers but not in others. At least I'm seeing mangled keys in my testing, so we'll explore different methods. We're currently storing the keys with some separators.
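The separator pitfall mentioned above can be illustrated with a toy scheme (illustrative only, not graphcache's actual serialization): joining key parts with a raw separator character breaks round-tripping whenever a part happens to contain that character, while escaping each part first keeps it lossless.

```javascript
// Toy key serialization (hypothetical). `joinUnsafe` breaks when a part
// contains the separator; `join` escapes each part first so the
// round-trip through `split` is lossless.
const SEP = '|'
const joinUnsafe = (entity, field) => entity + SEP + field
const join = (entity, field) =>
  encodeURIComponent(entity) + SEP + encodeURIComponent(field)
const split = key => key.split(SEP).map(decodeURIComponent)
```

With an entity key like `'Todo|1'`, `joinUnsafe` produces a string that splits into three parts and mangles the key, while `join`/`split` recover the original pair.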
Yes, obviously each dependency will increase your bundle size, so there are trade-offs. With Graphcache you gain flexibility for about 10kB minzipped. With idb (there are other solutions) you gain persistence support. There are likely more gains to be made by having a leaner storage backing.
Please use minified + gzipped sizes 😅 The ecosystem optimises for those. The way to go is to specify minzipped (minified + gzipped) sizes for specific modules.
Yes. We also hope our docs on exchanges will help you get started on that, if you're interested. But overall, offline doesn't work well without a normalised cache. Persistence kind of does, but our ambitions are greater than that. We unfortunately can't account for every use case, so our extensibility is key here.
This PR fixes something unrelated, but there was a mistake in our example app. Just take a look at this diff and the fix will be rather obvious: d75c74a. The type signature here would've helped, but it was a little messed up (and forced anyway). But if you look at what changed there, it should become clear.
I tried with the new code and the example works. Unfortunately not as well as I thought. As you know, I'm migrating an app from Apollo to URQL and cache persistence is the last issue left to migrate. I'm testing it with a big list of players, ~400 in my current test, each in a form like this:

```json
{
  "accountID": "0",
  "score": 696694,
  "createdAt": "2020-04-19T10:22:15Z",
  "id": "4740",
  "color": "0",
  "description": null,
  "team": {
    "id": "8",
    "Name": "TeamName",
    "__typename": "Team"
  },
  "doctorID": null,
  "__typename": "Player"
}
```

Every time I get my query result, the app stalls for about five seconds while the cache is persisted. I think the problem is the `write` function:

```js
write: async batch => {
  for (const key in batch) {
    const value = batch[key]
    if (value === undefined) {
      db.delete('keyval', key)
    } else {
      db.put('keyval', value, key)
    }
  }
}
```

and maybe we can optimize it to use a transaction. It must be said, however, that with Apollo I'm saving in localstorage and the cache is all JSON (but it's superfast!).

**Side note about bundle sizes**

I'm always referring to minified + gzipped sizes, so I think my numbers are right.

Thanks for your invaluable commitment.
Do you have more information here?
So "freezes" is probably not the right term here, right? You're querying but you're not receiving data for five seconds; that seems about right to me if your cache is stale.
This seems off; you have a delay of 5 seconds, so the delay is identical, so I don't quite understand what you're asking or what the situation is 😅
It's already optimised, and we already batch and defer cache storage writes; it sounds like your five-second delay is the five seconds spent on the server. You should maybe double-check the network tab against the operations.
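The batch-and-defer behaviour described here can be sketched roughly like this (illustrative only, not graphcache's internal code): individual cache updates are merged into one pending batch, which is flushed to the storage backend with a single write on the next tick.

```javascript
// Hypothetical write batcher: merges single-entry updates into one pending
// batch and flushes it with a single storage write on the next tick.
const createBatcher = write => {
  let pending = null
  return (key, value) => {
    if (pending === null) {
      pending = {}
      setTimeout(() => {
        const batch = pending
        pending = null
        write(batch)
      }, 0)
    }
    // Later updates to the same key overwrite earlier ones in the batch.
    pending[key] = value
  }
}
```

Three updates in the same tick result in one `write` call, so a slow storage backend is hit once per burst of cache activity rather than once per entry.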
You underestimate me! 😄 LOL! 😄 To explain myself better I created this REPL (but please open it full screen if there are problems). I installed async-render-toolbox and recorded a gif for you. Is the problem clearer now?
You can try swapping out the IDB stuff, maybe. I mean 😅 we haven't written any serious adaptor code yet. Edit: it may also be an issue with not having an IDB transaction there. So maybe you should drill down into what's going on in the specific parts of the code. The batch we give to the storage adaptor is passed during a small defer, but that obviously won't improve anything if writing with IDB takes 5s. That's not part of Graphcache though 😅
Okay, now I'm lost.
I have good news! I studied your graphcache "temp" APIs and IndexedDB (never used it directly before). I'm not a good JavaScript programmer; for me it's just a hobby (very tiring but pleasant!). I tried the solutions below; speeds were measured by retrying the same action 10+ times, with:

```js
write: async batch => {
  const start = performance.now()
  // ... code ...
  const end = performance.now()
  console.log((end - start) + 'ms')
}
```
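The timing boilerplate above can also be pulled into a small helper so each solution's `write` can be measured uniformly (a hypothetical helper, just for these benchmarks):

```javascript
// Hypothetical benchmark helper: awaits a storage `write` call and
// returns the elapsed time in milliseconds.
const timeWrite = async (write, batch) => {
  const start = performance.now()
  await write(batch)
  return performance.now() - start
}
```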
**Solution 1** (IDB, one `put`/`delete` per key, no transaction):

```js
// openDB comes from the 'idb' package
let db

const storage = {
  read: async () => {
    db = await openDB('myApplication', 1, {
      upgrade: db => db.createObjectStore('keyval')
    })
    return Promise.all([
      db.getAllKeys('keyval'),
      db.getAll('keyval')
    ]).then(([keys, values]) => {
      return keys.reduce((acc, key, i) => {
        acc[key] = values[i]
        return acc
      }, {})
    })
  },
  write: async batch => {
    for (const key in batch) {
      const value = batch[key]
      if (value === undefined) {
        db.delete('keyval', key)
      } else {
        db.put('keyval', value, key)
      }
    }
  }
}
```
**Solution 2** (IDB, batched into a single transaction):

```js
const storage = {
  read: async () => {
    db = await openDB('myApplication', 1, {
      upgrade: db => db.createObjectStore('keyval')
    })
    return Promise.all([
      db.getAllKeys('keyval'),
      db.getAll('keyval')
    ]).then(([keys, values]) => {
      return keys.reduce((acc, key, i) => {
        acc[key] = values[i]
        return acc
      }, {})
    })
  },
  write: async batch => {
    const tx = db.transaction('keyval', 'readwrite')
    const store = tx.objectStore('keyval')
    for (const key in batch) {
      const value = batch[key]
      if (value === undefined) {
        store.delete(key)
      } else {
        store.put(value, key)
      }
    }
    await tx.done
  }
}
```
**Solution 3** (IDB, whole cache stored as a single object):

```js
const storage = {
  read: async () => {
    db = await openDB('myApplication', 1, {
      upgrade: db => db.createObjectStore('myCache')
    })
    return await db.get('myCache', 'general')
  },
  write: async batch => {
    const actual = (await db.get('myCache', 'general')) || {}
    Object.keys(batch).forEach(key => {
      if (batch[key] === undefined) {
        delete actual[key]
      } else {
        actual[key] = batch[key]
      }
    })
    await db.put('myCache', actual, 'general')
  }
}
```
**Solution 4** (localStorage, whole cache as one JSON object):

```js
const storage = {
  read: async () => {
    const actual = JSON.parse(localStorage.getItem(myKey))
    if (!actual) {
      localStorage.setItem(myKey, JSON.stringify({}))
      return
    }
    return actual
  },
  write: async batch => {
    const actual = JSON.parse(localStorage.getItem(myKey))
    Object.keys(batch).forEach(key => {
      if (batch[key] === undefined) {
        delete actual[key]
      } else {
        actual[key] = batch[key]
      }
    })
    localStorage.setItem(myKey, JSON.stringify(actual))
  }
}
```
A variant of the localStorage `write` using a plain `for` loop:

```js
...
write: async batch => {
  const actual = JSON.parse(localStorage.getItem(myKey))
  const keys = Object.keys(batch)
  const l = keys.length
  let i = 0
  for (i; i < l; i++) {
    const val = batch[keys[i]]
    if (val === undefined) {
      delete actual[keys[i]]
    } else {
      actual[keys[i]] = val
    }
  }
  localStorage.setItem(myKey, JSON.stringify(actual))
}
...
```
Another variant using a reverse `while` loop:

```js
...
write: async batch => {
  const actual = JSON.parse(localStorage.getItem(myKey))
  const keys = Object.keys(batch)
  let i = keys.length
  while (i--) {
    const val = batch[keys[i]]
    if (val === undefined) {
      delete actual[keys[i]]
    } else {
      actual[keys[i]] = val
    }
  }
  localStorage.setItem(myKey, JSON.stringify(actual))
}
```

**Questions**
I found some articles about IndexedDB, and some single out performance issues in Chrome. They are quite surprising, so I'll look into that in the future. It's not too surprising that a single LocalStorage write is faster for the small amount of data you have. But it is blocking, while IndexedDB shouldn't be, so in theory we'd have to look at why it's blocking for this long. We'll probably look at that once we're ready to ship some adaptors for offline. We're currently allowing empty batches to be written because we weren't sure whether the same batch was to be used to write in-flight optimistic mutations as well. That's probably going to go away though.
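Since `localStorage.setItem` is synchronous, one way to keep the blocking `JSON.stringify` + write off the hot path is to defer it and only persist the latest state (a sketch; the store object is injected so it can be stubbed outside the browser):

```javascript
// Hypothetical deferred localStorage writer: schedules at most one
// serialization per tick and always persists the most recent state.
const createDeferredSave = (key, store) => {
  let latest = null
  let scheduled = false
  return state => {
    latest = state
    if (!scheduled) {
      scheduled = true
      setTimeout(() => {
        scheduled = false
        // The blocking stringify + write happens once, off the hot path.
        store.setItem(key, JSON.stringify(latest))
      }, 0)
    }
  }
}
```

This doesn't make the write cheaper, but it moves the synchronous cost out of query handling and collapses bursts of updates into one write.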
I can't find this anywhere in the docs. Is it still possible to persist the data using IndexedDB?
I read the docs on the website and the offline RFC issue.
But then I saw this code in this file: https://github.com/FormidableLabs/urql/blob/master/exchanges/graphcache/src/cacheExchange.ts
So the question is:
Until #683 is ready, is it possible to use localstorage today, at least temporarily?
Something like https://github.com/apollographql/apollo-cache-persist?
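For reference, the `{ read, write }` storage shape that the rest of this thread experiments with can be backed by localStorage in a few lines (a sketch; the API was experimental at the time, so treat the shape as an assumption, and the store object is injectable so it can be stubbed):

```javascript
// Hypothetical localStorage-backed storage in the { read, write } shape:
// `read` restores all persisted entries once; `write` merges a batch of
// diffs, where `undefined` marks a deleted entry.
const createLocalStorageAdapter = (key, store = localStorage) => ({
  read: async () => JSON.parse(store.getItem(key)) || {},
  write: async batch => {
    const state = JSON.parse(store.getItem(key)) || {}
    for (const k in batch) {
      if (batch[k] === undefined) delete state[k]
      else state[k] = batch[k]
    }
    store.setItem(key, JSON.stringify(state))
  }
})
```

Keep in mind the caveat raised later in this thread: every `write` here re-serialises the whole state synchronously, which is fast for small caches but blocks the main thread as the cache grows.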