Bug: cache returns old data #26
Hi @mercmobily, This is expected behavior as I am surprised to see
One reason I can think of for using
Hi,
I know. It's the expected result, but it's not really a "good" result...
OK. Evicting might be a solution.
I know, this was the easiest way to get it tested. The problem persists if you have a Rest store though. It was just harder to "show" (you need a functioning Rest store etc.) Merc.
Hi,
See my other message. That is a good use case, but I wasn't as evolved as that :D
I tried that. But it gets tricky, because you can have queries like store.range(0,10).range(0,2). OK, it's a strange use-case, but it is possible. Cache would need to recursively update sub-entries -- feels like madness to me. (But maybe I am missing something.)
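To illustrate, a rough sketch of the chained subcollections being described, assuming a store instance like the ones discussed in this thread (the item being written is made up):

```js
// Each querying call returns a new subcollection, and each one can end up
// holding its own cached results, independent of its parent.
var first10 = store.range(0, 10);    // subcollection #1
var first2  = first10.range(0, 2);   // subcollection #2, derived from #1

// If the underlying data later changes, keeping the cache correct would
// mean walking every derived subcollection (first10, first2, ...) and
// updating or evicting each one recursively.
store.put({ id: 42, name: 'changed' });   // hypothetical update
```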
@mercmobily, what would a good result look like? In my mind, either results are cached, or they are not. I might be missing something.
Sorry, my fault. I didn't mean it to be vague/nasty. I meant to say that if the main store changes its elements, then Cache really should be smart enough to return the right results. I am trying to think of a common use-case: I have a cached Rest store. The app makes the first query, which gets cached. From then on, results for that specific query are cached. If I then add an element and want to display the data, the query is no longer cached and I will hit the server with another query to find out placement. What I wrote in #23, in the section "Observed, CACHED Rest store" (aiming at avoiding refresh, and keeping the cache "fresh"), is not even doable anymore, because you'd have to reposition the element in the collection that holds the cached query, which realistically isn't gonna happen. Which takes us to #25 -- do we really want to cache this way?
Hi @mercmobily, I didn't think you were being mean at all. I asked what "good" meant because I didn't understand where you were coming from. In addition to evicting subcollections, we may want to consider updating
I am looking into this now.
I am afraid you can't. I only discovered this after implementing it, with a recursive function and everything. You cannot reliably do that, because changing an element is likely to also change its placement (for example, if you are sorting by the field that was changed). The update only really works for deletion (which is the only operation that really is going to do the same thing across the board). I am afraid I am still wondering about #25. I can totally see the point of creating subcollections for query caching -- it's neat. But we have to accept that anything other than a delete means zapping the cache. Which might be OK, but it's a pretty important assumption.
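A small sketch of the placement problem, assuming a query sorted by name on a store like the ones above (the items here are made up):

```js
// Cached, sorted query results: [ { id: 1, name: 'Alice' }, { id: 2, name: 'Bob' } ]
var byName = store.sort('name');

// Renaming item 1 changes where it belongs in the sorted results:
store.put({ id: 1, name: 'Zoe' });

// Replacing the item in place would leave the cached array out of order
// ([ Zoe, Bob ]); getting it right means re-running the sort (effectively
// the whole query), which is why in-place updates are unreliable.
```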
BTW I discovered another problem with Memory.js: this.data and this.storage.fullData end up being different things when creating subCollections using Store._createSubCollection... I am putting a bug report together, although I am not sure if there is a reason to have
About my previous comment, I have just figured out that it's not a bug -- it's the obvious way it should be: there is the full data (in storage), and then there is the partial, per-query data in data (which obviously isn't copied over when creating a subCollection). This design makes it basically impossible for Cache.js to keep data up to date.
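For illustration, a sketch of the split being described (the property names storage.fullData and data are the ones mentioned above; the store contents and query are hypothetical):

```js
var store = new Memory({ data: [ /* all items */ ] });

// The full, authoritative data lives on the storage object:
var allItems = store.storage.fullData;   // every item in the store

// A subcollection created for a query only carries the per-query slice
// in its own `data`, which is not copied over or kept in sync when the
// parent store changes afterwards:
var sub = store.filter({ type: 'a' });
// sub.data reflects the query at the time it ran, not the current
// contents of store.storage.fullData.
```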
Observable should maintain the correct placement if your store has a
Between #25 and #26, it seems like we are talking about three issues:
Is that a correct understanding?
OK...
True. Sorry, I knew this, it just escaped me for a second.
OK let's see...
Correct.
Or to update their contents (which can only be done by re-running the queries)
Yes. Love it when people sum it up. I wish I could have done that with my first message.
About this one:
True, but this means that when "updating the cache", you need to effectively re-run every single query, rather than just updating the item in the cached query result. Now, that would possibly become resource intensive...
So, the possible solutions are:

1. Evict the cached subcollections whenever the data changes.
Pros: no need to worry about pretty much anything, as the cache is effectively zapped.

2. Update the cached query results whenever the data changes.
Pros: the cache doesn't get zapped. When using a REST store, you keep querying the cache.

The old implementation of stores didn't have this issue because we only ever cached the data -- any query would then be run on the cache if available. It was a simpler implementation, although obviously much less powerful. In my framework, I do this:
Basically, this way there is never a need to query the main server for queries, since the "local" data is always up to date. I wonder if I can still do this with this new architecture. If we evict, the "no server queries" idea goes out of the window. If we update, we need to make sure that every sub-store is indeed updated (which can be tricky and time-consuming).
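A rough sketch of that kind of "query locally, write through to the server" setup; the store names, target URL, and sync step here are my own assumptions for illustration, not dstore API or the framework's actual code:

```js
require([ 'dstore/Rest', 'dstore/Memory' ], function (Rest, Memory) {
    // Keep a complete local copy and run every query against it,
    // so reads never need to hit the server.
    var serverStore = new Rest({ target: '/items/' });
    var localStore = new Memory();

    // 1. Load (or incrementally sync) everything into the local store once.
    serverStore.fetch().then(function (items) {
        items.forEach(function (item) { localStore.put(item); });
    });

    // 2. All queries run against the local copy, which is always current.
    var visible = localStore.filter({ archived: false }).sort('name');

    // 3. Writes go to the server *and* to the local copy, keeping it fresh.
    function save(item) {
        return serverStore.put(item).then(function (saved) {
            return localStore.put(saved);
        });
    }
});
```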
Brandon, am I right when I state that lazy querying + range specified in fetch() would fix all of these problems? Let me see:

Lazy querying

If the memory store did lazy querying, just like the REST one does, we would end up with the following situation:
Range in fetch()

This would ensure that situations like these:

Brandon, is this 100% correct? How far is https://github.com/brandonpayton/dstore/compare/fetch-query-results from this end result? Please note that the fact that the cache returns old data is a pretty serious issue, which made me switch off any caching in my application (waiting for this to be addressed).
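For reference, a sketch of what "lazy querying plus a range passed at fetch time" looks like; the method names follow the API the fetch-query-results branch moves toward, and the exact signatures are my assumption:

```js
// Building the query is lazy: nothing is executed yet, the collection
// just records the filter/sort criteria.
var collection = store.filter({ archived: false }).sort('name');

// The query only runs when results are actually requested, with the
// range supplied at fetch time instead of nested range() subcollections.
collection.fetchRange({ start: 0, end: 10 }).then(function (results) {
    console.log('first page:', results);
});
```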
I just merged the fetch-query-results branch; can you verify that this issue is fixed?
My goodness, these changes are absolutely huge. It must have been an enormous amount of work.
Do you have a branch of dgrid (dev-0.4) which incorporates these changes?
I was using https://github.com/kriszyp/dgrid/tree/fetch-query-results to test. I doubt it is a complete conversion to the latest, but I did enough to get it to work.
Sorry I am being slow. I just moved house (!) and it's been a little hectic. I can start working again tomorrow, and will get things tested out. Thanks for your patience...!
dgrid's dev-0.4 branch is now updated to work with the latest dstore changes. |
Thanks Ken!
Have you had a chance to test this? Can we close this?
Yesterday, actually. And yes, totally fixed on my end!
Hi,
The fact that Memory returns subcollections, and the way Caching works, is creating an issue with results being cached even though the data has changed.
Here is the output from my Chrome console:
It's entirely possible I am using the Cache wrong.
Slightly related: #25 and SitePen/dgrid#950
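The console transcript from the original report is not included above. As a purely illustrative reconstruction of the kind of sequence that shows the problem, assuming dstore/Memory mixed with dstore/Cache as discussed in this thread (the data and query are made up):

```js
require([
    'dojo/_base/declare', 'dstore/Memory', 'dstore/Cache'
], function (declare, Memory, Cache) {
    var CachedMemory = declare([ Memory, Cache ]);
    var store = new CachedMemory({
        data: [ { id: 1, name: 'one' } ]
    });

    // First run of the query: the results are computed and cached.
    var before = store.filter({ name: 'one' }).fetch();   // one match

    // The underlying data changes...
    store.add({ id: 2, name: 'one' });

    // ...but re-running the same query hands back the stale cached
    // results: still one match instead of two.
    var after = store.filter({ name: 'one' }).fetch();
});
```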