Adds StorePool #2286
Conversation
This can go after Unmarshal.
Force-pushed from f9c7a35 to 56b48bf
After a chat with @tamird, I'm going to change this to WIP until I get the allocator integration done.
Force-pushed from 58f68bb to a8431eb
I think this is ready to go. @tamird PTAL
Is there anything that regularly re-gossips the storeDescriptors? That seems like a fairly important component to this.
@mrtracy, yes, the stores all re-gossip their storeDescriptors at a regular interval.
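A minimal sketch of what that periodic re-gossip could look like, assuming a ticker-driven loop; the StoreDescriptor fields, the key format, and the addInfo callback here are illustrative stand-ins, not the actual gossip API:

package example

import (
	"fmt"
	"time"
)

// StoreDescriptor is a stand-in for the gossiped descriptor type.
type StoreDescriptor struct {
	StoreID  int
	Capacity int64
}

// gossipStore re-gossips the store's descriptor on every tick so that
// remote StorePools keep seeing fresh updates and never mark a healthy
// store dead. The interval and the addInfo callback are assumptions
// for illustration only.
func gossipStore(desc StoreDescriptor, interval time.Duration,
	addInfo func(key string, desc StoreDescriptor),
	stopper <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ticker.C:
			addInfo(fmt.Sprintf("store:%d", desc.StoreID), desc)
		case <-stopper:
			return
		}
	}
}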
Force-pushed from a8431eb to afa0646
Addressed all comments, PTAL |
This loop needs a break after it reaches a live store (since every store after that will be live too). But I think this whole function could be simplified to remove the nested loop:
for {
	var nextTimeout time.Duration
	sp.mu.Lock()
	detail := sp.queue.peek()
	if detail == nil {
		// Nothing queued; wait a full timeout before checking again.
		nextTimeout = sp.timeUntilStoreDead
	} else if deadAsOf := detail.lastUpdateTime.Add(sp.timeUntilStoreDead); deadAsOf.Before(time.Now()) {
		// detail is dead: mark it as such and dequeue it. nextTimeout
		// stays 0 so we check the next one immediately.
	} else {
		// detail is alive; schedule the next check for the moment it
		// could first be considered dead.
		nextTimeout = deadAsOf.Sub(time.Now())
	}
	sp.mu.Unlock()
	<-time.After(nextTimeout) // in a select w/ the stopper
}
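(Since the queue is ordered by lastUpdateTime, peek() always returns the store that would go dead first, so a single timer per iteration suffices.)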
Done. This is much simpler.
LGTM |
Force-pushed from 4d5fd4a to 28ff935
possibly a dumb question: why does the allocator still need a mutex?
Not a dumb question. I was going to try to remove it in a small follow-up PR; I just didn't want to pollute this one any further. I'll add a TODO to consider its removal.
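For context, the rough idea behind that follow-up (all type and method names here are illustrative, not the actual code): if StorePool guards its own state with its own mutex, allocator methods that only read liveness no longer need a lock of their own.

package example

import (
	"sync"
	"time"
)

type storeDetail struct {
	dead           bool
	lastUpdateTime time.Time
}

// StorePool guards its map with its own mutex, so callers such as the
// allocator can stay lock-free. Illustrative sketch only.
type StorePool struct {
	mu     sync.Mutex
	stores map[int]storeDetail
}

// getAliveStoreIDs snapshots the live store IDs under StorePool's own
// lock; the (hypothetical) caller needs no mutex of its own.
func (sp *StorePool) getAliveStoreIDs() []int {
	sp.mu.Lock()
	defer sp.mu.Unlock()
	ids := make([]int, 0, len(sp.stores))
	for id, detail := range sp.stores {
		if !detail.dead {
			ids = append(ids, id)
		}
	}
	return ids
}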
StorePool will keep a list of alive/dead nodes as per the RPC from #2191
The allocator uses StorePool to ensure that the stores being picked are indeed alive.
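In code terms, the interaction is roughly the following sketch, under assumed names; the real allocator applies its own selection logic rather than taking the first survivor:

package example

// pickAliveStore filters candidate store IDs through a liveness check
// (backed by StorePool in the real code) before choosing one. Taking
// the first survivor keeps this sketch minimal.
func pickAliveStore(candidates []int, isAlive func(storeID int) bool) (int, bool) {
	for _, id := range candidates {
		if isAlive(id) {
			return id, true
		}
	}
	return 0, false
}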
Next up will be: