
Client side caching: new developments pending #6867

Open · antirez opened this issue Feb 7, 2020 · 3 comments

antirez commented Feb 7, 2020

This is a list of things I'm changing in CSC:

  1. Try to rewrite the code to track keys instead of hash slots, as suggested by @eliblight.
  2. Implement CLIENT TRACKING in OPTIN mode, so that only keys fetched after a "CACHING yes" command are tracked. Maybe also support OPTOUT and "CACHING no" before the next command. Note that this scheme exists to avoid race conditions: we want to declare up front that the next command will be executed and tracked, rather than inform the server afterwards that we are caching a key, otherwise it is a nightmare (see the sketch after this list).
  3. Make sure keys are invalidated in expireIfNeeded(); @yossigo pinged about that and we need to verify it.
  4. Implement the BCAST mode plus PREFIX support as explained in https://www.youtube.com/watch?v=TW9uFIJ9xkc
  5. Implement a WITHTTL option for the GET command, returning the millisecond TTL of the fetched key, as suggested by @eliblight.
  6. Change invalidation messages to carry an array of elements instead of a single element. This is very important in the case of BCAST, but may also be used to improve the normal tracking feature, accumulating invalidated keys per client and sending them only at the end.
  7. Send invalidation messages when the TTL of a key is reduced, but not when it is changed in other ways.
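
A minimal sketch of the OPTIN flow from item 2, in Python. The send() helper and the recorded command list are illustrative stand-ins for a real connection, and the command spelling follows the wording of this issue rather than a final API.

```python
# Sketch of the OPTIN flow from item 2.  Helper names are hypothetical; the
# command spelling ("CLIENT TRACKING on OPTIN", "CACHING yes") follows the
# wording of this issue.

commands_sent = []  # stand-in for a real data connection: just record commands


def send(*args):
    """Pretend to write one command to the server on the data connection."""
    commands_sent.append(" ".join(args))


# Enable tracking in OPTIN mode: by default nothing gets tracked.
send("CLIENT", "TRACKING", "on", "OPTIN")

# A read we do NOT want cached: just send it, the server will not track it.
send("GET", "uncached-key")

# A read we DO want cached: opt in right before the command, so the server
# knows to track the key before it serves the reply.  Declaring the intention
# up front is what avoids informing the server too late that we cache the key.
send("CACHING", "yes")
send("GET", "cached-key")

print("\n".join(commands_sent))
```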

Document the following things:

  1. Race condition between invalidation messages and data because of the two-channel design:
Data: -> GET foo
Notifications: <- Invalidate foo (somebody else touched it)
Data: <- "bar" (foo value)

We need to inform client authors that they must create the entry for "foo" before sending the command, initially set to a "value will arrive" sentinel. If an invalidation message is received in the meantime, the cached "foo" entry is discarded; when the actual value arrives later we do nothing with it, since the entry is no longer in the client's cache table (see the first sketch below).

  2. Inform client authors that invalidations caused by TTL expiry may be delayed, but will surely be received. Clients may therefore either always apply a maximum local TTL, depending on the application's demands, or implement proper TTL handling in the client, fetching the TTL together with the value when caching (see the second sketch below).
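
A sketch of the bookkeeping for the race condition documented in item 1 above, assuming a plain dict as the client cache; the PENDING sentinel and the callback names are illustrative, not part of any client library.

```python
# Sketch of the ordering rule from item 1: the cache entry is created before
# the GET is sent, so an invalidation arriving before the reply is not lost.
# The PENDING sentinel and the callback names are illustrative only.

PENDING = object()   # "value will arrive" placeholder
cache = {}


def before_sending_get(key):
    # Register interest before the command goes out on the data connection.
    cache[key] = PENDING


def on_invalidation(key):
    # Invalidation received on the notifications channel: drop the entry,
    # whether it holds a real value or still the PENDING sentinel.
    cache.pop(key, None)


def on_reply(key, value):
    # Store the reply only if the entry still exists; if an invalidation
    # already removed it, the arriving value is stale and must be ignored.
    if key in cache:
        cache[key] = value


# The interleaving from the example above: GET foo is sent, an invalidation
# for foo arrives, then the (now stale) value "bar" arrives.
before_sending_get("foo")
on_invalidation("foo")
on_reply("foo", "bar")
assert "foo" not in cache   # the stale "bar" was correctly discarded
```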
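
And one way a client might follow the TTL advice in item 2 above, assuming the remaining TTL can be fetched together with the value (for example via the WITHTTL option proposed earlier); the cap constant and helper names are illustrative.

```python
# Sketch of client-side TTL handling from item 2: cap every cached entry with
# a local maximum TTL, and honour the server TTL when it is shorter.  The cap
# value and the helper names are illustrative, not part of any client library.
import time

MAX_LOCAL_TTL_MS = 60_000   # application-chosen upper bound
cache = {}                  # key -> (value, local expiry timestamp)


def store(key, value, server_ttl_ms=None):
    # If the server reported a TTL (e.g. fetched together with the value, as
    # the proposed WITHTTL option would allow), never cache longer than that.
    ttl_ms = MAX_LOCAL_TTL_MS
    if server_ttl_ms is not None:
        ttl_ms = min(ttl_ms, server_ttl_ms)
    cache[key] = (value, time.monotonic() + ttl_ms / 1000.0)


def lookup(key):
    # Treat locally expired entries as misses, since the matching invalidation
    # for a TTL expiry may be delayed on the server side.
    entry = cache.get(key)
    if entry is None:
        return None
    value, expires_at = entry
    if time.monotonic() >= expires_at:
        del cache[key]
        return None
    return value


store("foo", "bar", server_ttl_ms=5_000)
print(lookup("foo"))   # "bar" until the 5 second TTL elapses locally
```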

AlphaHot commented Feb 7, 2020

@antirez Thank you very much for your work. Post videos on YouTube more often 😺


madolson commented Dec 29, 2020

It looks like we never implemented items 5, 6, and 7 of these changes. @eliblight, do you have any interest in following up here, since you suggested many of these improvements?

eliblight commented

Ack... let's see how we can fit it in.
