Get/Set Races #15
Comments
I haven't touched this (or even Go) in a long time, but everything you say sounds right. Let me verbosely go through my thoughts:

1. It sounds like you watch a
2. There appears to be a race condition, yes. I racked my brain on this for a few minutes and didn't see a way out. But what if, on gc, we set the refCount to -1? We can use a CompareAndSwap to favor the

Now we just need to guard against -1:

???
FWIW, I don't think the general problem (only-once initialization) is too specific at all; we have the same use case, as I imagine a lot of other people do.
This is one of our requirements, and I agree with @alecbz that it's a pretty common one. Generally speaking it's just coalescing of requests, which is something you'd definitely want to do if you have many goroutines contending for the same expensive resource through the cache. We're currently implementing it using singleflight. Instead of doing this:

```go
var myCacheStuff struct {
	cache *ccache.Cache
	ttl   time.Duration
}

cacheEntry, err := myCacheStuff.cache.Fetch(
	key,
	myCacheStuff.ttl,
	func() (interface{}, error) {
		return doExpensiveQuery(key)
	},
)
```

we do this:

```go
var myCacheStuff struct {
	cache            *ccache.Cache
	ttl              time.Duration
	requestCoalescer singleflight.Group
}

cacheEntry, err := myCacheStuff.cache.Fetch(
	key,
	myCacheStuff.ttl,
	func() (interface{}, error) {
		value, err, _ := myCacheStuff.requestCoalescer.Do(key, func() (interface{}, error) {
			return doExpensiveQuery(key)
		})
		return value, err
	},
)
```

This seems to work OK, but it is probably less efficient than it would be if this logic were built into the cache, since …
I should point out that my solution above isn't atomic: it does have a race condition where, if two goroutines see the initial cache miss at the same time, goroutine 1 calls …
I have some items whose values are very expensive to initialize. I want to initialize each of them on the first read of the corresponding key and then cache indefinitely (unless evicted due to cache size). I also want to do this atomically, in such a way that reads and writes to other keys in the cache can proceed while the expensive initialization occurs, but each value is initialized at most once.
I can work around the fact that the initialization is slow by setting a placeholder value with a mutex that will be released once the initialization is complete.
What I can't figure out how to do with this API is an atomic Get/Set (without using additional locks). Would you be open to adding a

```go
TrackingGetOrSet(key string, defaultValue interface{}, duration time.Duration) (item TrackingItem, didSet bool)
```

method which atomically gets the key if found, or sets it to the new value if not? Looking at the code, it seems fairly straightforward to implement, as it can occur inside a bucket's RWMutex. That said, I can see it's a pretty specific request. I'm happy to make a PR.
PS: Is there a race condition in `TrackingGet`? It seems like the item could be removed from the cache between the `get()` and the `track()` calls... But maybe I'm missing something? Thanks!