
Adds StorePool #2286

Merged

BramGruneir merged 3 commits into master from bram/store-pool on Aug 31, 2015

Conversation

@BramGruneir
Member

StorePool will keep a list of alive/dead nodes, as per the RPC from #2191.
The allocator uses the StorePool to ensure that the stores it picks are indeed alive.

Next up will be:

  • get the scanner to sniff out replicas on failed stores

Comment thread storage/store_pool.go Outdated
Contributor

This can go after Unmarshal.

Member Author

done

@BramGruneir BramGruneir force-pushed the bram/store-pool branch 2 times, most recently from f9c7a35 to 56b48bf on August 28, 2015 at 15:00
@BramGruneir BramGruneir changed the title from Adds StorePool to Adds StorePool [WIP] on Aug 28, 2015
@BramGruneir
Member Author

After a chat with @tamird, I'm going to mark this as [WIP] until I get the allocator integration done.

@BramGruneir BramGruneir force-pushed the bram/store-pool branch 5 times, most recently from 58f68bb to a8431eb on August 28, 2015 at 21:02
@BramGruneir
Member Author

I think this is ready to go. @tamird PTAL

@BramGruneir BramGruneir changed the title from Adds StorePool [WIP] to Adds StorePool on Aug 28, 2015
@mrtracy
Contributor

mrtracy commented Aug 28, 2015

Is there anything that regularly re-gossips the storeDescriptors? That seems like a fairly important component to this.

@BramGruneir
Member Author

@mrtracy, yes, the stores all gossip their storeDescriptors every configGossipInterval, which is currently set to 1 minute.

Comment thread storage/store_pool.go Outdated
Contributor

doesn't need to be public

Member Author

fixed

@BramGruneir
Member Author

Addressed all comments, PTAL

Comment thread storage/store_pool.go Outdated
Contributor

This loop needs a break once it reaches a live store (since every store after that will be live too). But I think this whole function could be simplified to remove the nested loop:

for {
    var nextTimeout time.Duration
    sp.mu.Lock()
    detail := sp.queue.peek()
    if detail == nil {
        nextTimeout = sp.timeUntilStoreDead
    } else if /* detail is dead */ {
        // mark as dead, dequeue. nextTimeout stays 0 so we check the next one immediately.
    } else {
        // detail is alive; schedule the next check for its deadline.
        nextTimeout = detail.lastUpdateTime.Add(sp.timeUntilStoreDead).Sub(time.Now())
    }
    sp.mu.Unlock()
    <-time.After(nextTimeout) // in a select w/ stopper
}

Member Author

Done. This is much simpler.

@mrtracy
Contributor

mrtracy commented Aug 31, 2015

LGTM

@BramGruneir
Member Author

Thanks for the review @mrtracy
@tamird, want to give it one last pass before I merge?

Comment thread storage/allocator.go Outdated
Contributor

possibly a dumb question: why does the allocator still need a mutex?

Member Author

Not a dumb question. I was going to remove it in a small follow-up PR; I just didn't want to pollute this one any further. I'll add a TODO to consider its removal.

6 participants