
Race condition in @cached #95

Closed
JelleZijlstra opened this issue Dec 29, 2016 · 2 comments
Comments


JelleZijlstra commented Dec 29, 2016

The @cached decorator sets its result in the cache as follows:

    result = await func(*args, **kwargs)

    try:
        await cache.set(cache_key, result, ttl=ttl)
    except Exception:
        logger.exception("Unexpected error with %s", cache)

This is vulnerable to a race condition. Assume that two processes, A and B, are both calling the same @cached function with the same arguments. Then the following sequence of events can happen:

  1. process A calls the function
  2. process B calls the function
  3. process A sees a cache miss and retrieves the ground truth data
  4. the ground truth data changes
  5. process B also sees a cache miss and retrieves ground truth
  6. process B gets scheduled again and sets the result in the cache
  7. process A gets scheduled and sets stale data in the cache
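The interleaving above can be reproduced with a minimal asyncio sketch (a plain dict stands in for the shared cache and a module-level variable for the ground truth; the names and delays are illustrative, not aiocache's code):

```python
import asyncio

cache = {}
ground_truth = "old"

async def cached_call(read_delay, write_delay):
    """Simulate a cache miss followed by an unconditional cache.set."""
    await asyncio.sleep(read_delay)
    value = ground_truth            # cache miss: read the ground truth
    await asyncio.sleep(write_delay)
    cache["key"] = value            # unconditional set: last writer wins
    return value

async def main():
    global ground_truth
    # A reads first but writes last; B reads after the data has changed.
    a = asyncio.create_task(cached_call(read_delay=0.00, write_delay=0.04))
    b = asyncio.create_task(cached_call(read_delay=0.02, write_delay=0.00))
    await asyncio.sleep(0.01)
    ground_truth = "new"            # step 4: the ground truth changes
    await asyncio.gather(a, b)
    return cache["key"]

result = asyncio.run(main())
print(result)  # "old": A overwrote B's fresh value with stale data
```

B writes the fresh value first, then A's delayed write clobbers it with the stale one, exactly the sequence in steps 1-7.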

This can be fixed, at least for memcached, by using gets to retrieve the value together with a CAS token on the cache miss, and then writing back with cas so the set only succeeds if the token is still valid, i.e. no other process has written the key in between.
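A sketch of the idea with an in-memory stand-in for memcached's gets/cas semantics (the CasCache class and its method names are illustrative, not aiocache's or any memcached client's API):

```python
import itertools

class CasCache:
    """Toy cache with memcached-style compare-and-set tokens."""

    def __init__(self):
        self._data = {}                   # key -> (value, token)
        self._tokens = itertools.count(1)

    def gets(self, key):
        """Return (value, cas_token); both are None on a miss."""
        return self._data.get(key, (None, None))

    def set(self, key, value):
        """Unconditional write; always bumps the token."""
        self._data[key] = (value, next(self._tokens))

    def cas(self, key, value, token):
        """Write only if the key's token is unchanged since gets()."""
        _, current = self.gets(key)
        if current != token:
            return False                  # someone wrote in between: reject
        self.set(key, value)
        return True

cache = CasCache()
cache.set("k", "v1")
_, token = cache.gets("k")                       # process A remembers the token
cache.set("k", "v2")                             # process B writes in between
assert cache.cas("k", "stale", token) is False   # A's late, stale write is rejected
_, token = cache.gets("k")
assert cache.cas("k", "v3", token) is True       # no interleaving write: succeeds
```

With this scheme, step 7 of the sequence above fails harmlessly instead of caching stale data.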


argaen commented Dec 29, 2016

Yeah, I've been mulling this over. Before implementing it I wanted to find an approach that works for all three implemented backends. I believe that for Redis something like a token would work, and the memory backend should be easy since it isn't distributed.

Along the same lines, #27 (http://www.sobstel.org/blog/preventing-dogpile-effect/) is related and would solve this as a side effect. I need to think about whether it's worth offering the functionality explicitly for each backend, or whether just offering the "dogpile" plugin and letting users decide whether to use it would be enough.
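One common shape for such a dogpile guard is a per-key lock so that only one caller recomputes on a miss while the rest wait and then read the cached result. A minimal asyncio sketch (the function names and the dict-based cache are illustrative, not aiocache's plugin API):

```python
import asyncio
from collections import defaultdict

cache = {}
locks = defaultdict(asyncio.Lock)   # one lock per cache key
calls = 0                           # counts expensive recomputations

async def expensive(key):
    """Simulate a slow ground-truth fetch."""
    global calls
    calls += 1
    await asyncio.sleep(0.01)
    return f"value-for-{key}"

async def get_or_compute(key):
    if key in cache:
        return cache[key]
    async with locks[key]:          # only one task recomputes per key
        if key in cache:            # re-check: another task may have filled it
            return cache[key]
        cache[key] = await expensive(key)
        return cache[key]

async def main():
    # Ten concurrent callers hitting the same cold key.
    results = await asyncio.gather(*(get_or_compute("k") for _ in range(10)))
    return results, calls

results, n_calls = asyncio.run(main())
print(n_calls)  # 1: the stampede collapses into a single computation
```

The double-check inside the lock is what turns the waiting tasks into cache hits instead of redundant recomputations; it also narrows the stale-write window, since only one writer per key is in flight at a time.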

Any opinion is welcome


argaen commented Apr 17, 2017

Closing in favor of #187 where I'll keep track of possible cases until locking is implemented

@argaen argaen closed this as completed Apr 17, 2017