kvcoord: add DistSender circuit breakers #118943
Conversation
Force-pushed from ff1bfdb to 91eb907.
I decided to remove the intermediate range-level tracking for now, and keep range-level circuit breakers out of scope -- it comes with a cost, and we should aggressively limit scope for 24.1 to the problem at hand (stalled/partitioned replicas). We can deal with range-level circuit breakers later.
Force-pushed from 91eb907 to 1a263ba.
Force-pushed from 1a263ba to c523167.
Force-pushed from 5398b7c to e7e1a4c.
I think this is starting to take shape. By default, we don't cancel in-flight requests when the breaker trips, only subsequent requests. Some rough benchmarks (single run) after initial optimization show negligible overhead of about 110 ns per request in the common case, even at high concurrency. This incurs a single allocation due to a returned closure, which can possibly be eliminated.

If we enable cancellation of in-flight requests, the cost increases substantially due to the additional context injection and tracking, especially at high concurrency. Some of this can be optimized away, but it gives us an upper bound on what this tracking would cost.

We'll need to decide whether cancellation is necessary or not. It may be possible to get the cost under high concurrency down by about half (to ~400 ns/op), which would give about 1% overhead.
Force-pushed from e7e1a4c to 47224e2.
Got rid of the closure allocation.

It might be possible to make the cancel tracking lock-free.
Force-pushed from 6d7faf1 to ffffde2.
Your pull request contains more than 1000 changes. It is strongly encouraged to split big PRs into smaller chunks. 🦉 Hoot! I am a Blathers, a bot for CockroachDB. My owner is dev-inf.
Force-pushed from ffffde2 to 2296f4e.
Force-pushed from 74e65c1 to d26a1be.
Force-pushed from c749999 to f6730c1.
@nvanbenschoten I added the context timeout check we discussed, such that we won't set up a cancellable context if the context already has a timeout below the probe threshold + timeout. I also enabled both the circuit breakers and cancellation by default.
Reviewed 1 of 3 files at r2, 2 of 2 files at r3, all commit messages.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @andrewbaptist and @erikgrinaker)
pkg/kv/kvclient/kvcoord/dist_sender_circuit_breaker.go
line 642 at r1 (raw file):
Previously, erikgrinaker (Erik Grinaker) wrote…
Yeah, this was bugging me as well. The complexity here comes from the desire to maintain the invariant that stallSince == 0 when inflightReqs == 0. That was only because the semantics seemed simpler, but we don't really need it -- we can simply keep bumping stallSince on first requests and on any response, and additionally require checking inflightReqs before considering the replica stalled. I've made that change.
I think the broader question we should answer is whether we should always enable cancellation. If we do, then we'll likely have to use a mutex anyway, and we can just do all of this stuff under lock.
This is much nicer now. Agreed on not enforcing invariants between separate atomic values when doing so isn't strictly necessary.
pkg/kv/kvclient/kvcoord/dist_sender_circuit_breaker.go
line 289 at r2 (raw file):
key := cbKey{rangeID: cb.rangeID, replicaID: cb.desc.ReplicaID}
_, ok := d.mu.replicas[key]
delete(d.mu.replicas, key) // nolint:deferunlockcheck

Out of curiosity, why do these two lines need the `nolint:deferunlockcheck`? Why isn't this on the `Unlock` above?
pkg/kv/kvclient/kvcoord/dist_sender_circuit_breaker.go
line 295 at r2 (raw file):
// will only be closed once.
if ok {
	close(cb.closedC) // nolint:deferunlockcheck
If the breaker was removed and a new one was re-inserted under the same key, shouldn't we be closing the channel on the new breaker? In other words, should we either have logic like:
cb2, ok := d.mu.replicas[key]
if ok && (cb == cb2) {
delete(d.mu.replicas, key)
close(cb.closedC)
}
Or logic like:

cb2, ok := d.mu.replicas[key]
...
close(cb2.closedC)

If the latter, then maybe `gc` should be a list of `cbKey` structs to avoid confusion.
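The race the reviewer is pointing at can be made concrete with a small sketch (hypothetical names, not the actual CockroachDB types): if GC decides to collect a breaker but another goroutine has already replaced it under the same key, deleting by key alone would evict and close the wrong breaker, so the delete checks pointer identity first.

```go
package main

import "fmt"

// breaker stands in for the per-replica circuit breaker; closedC is closed
// exactly once when the breaker is removed from the registry.
type breaker struct{ closedC chan struct{} }

// registry stands in for the DistSender's replica-breaker map.
type registry struct {
	replicas map[string]*breaker
}

// gcBreaker removes cb only if it is still the breaker registered under
// key; a newer breaker re-inserted under the same key is left alone.
func (r *registry) gcBreaker(key string, cb *breaker) bool {
	if cb2, ok := r.replicas[key]; ok && cb == cb2 {
		delete(r.replicas, key)
		close(cb.closedC)
		return true
	}
	return false
}

func main() {
	old := &breaker{closedC: make(chan struct{})}
	replacement := &breaker{closedC: make(chan struct{})}
	// The old breaker was already replaced under the same key.
	r := &registry{replicas: map[string]*breaker{"r1/1": replacement}}
	fmt.Println(r.gcBreaker("r1/1", old))         // false: replacement survives
	fmt.Println(r.gcBreaker("r1/1", replacement)) // true: matching breaker removed
}
```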
pkg/kv/kvclient/kvcoord/dist_sender_circuit_breaker.go
line 37 at r3 (raw file):
"kv.dist_sender.circuit_breaker.enabled",
"enable circuit breakers for failing or stalled replicas",
true, // TODO(erikgrinaker): disable by default?
Minor suggestion: for your own sake, consider setting this to false until you get back from your week off, then switching it to true.
Force-pushed from f6730c1 to 20444e0.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @andrewbaptist and @nvanbenschoten)
pkg/kv/kvclient/kvcoord/dist_sender_circuit_breaker.go
line 289 at r2 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
Out of curiosity, why do these two lines need the `nolint:deferunlockcheck`? Why isn't this on the `Unlock` above?

The linter gets confused by the periodic mutex release: it thinks this line falls between an inline lock/unlock pair. The linter allows some operations between inline locks, but operations that aren't allowed (e.g. function calls) need the `nolint` exception.
pkg/kv/kvclient/kvcoord/dist_sender_circuit_breaker.go
line 295 at r2 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
If the breaker was removed and a new one was re-inserted under the same key, shouldn't we be closing the channel on the new breaker? In other words, should we either have logic like:

cb2, ok := d.mu.replicas[key]
if ok && (cb == cb2) {
	delete(d.mu.replicas, key)
	close(cb.closedC)
}

Or logic like:

cb2, ok := d.mu.replicas[key]
...
close(cb2.closedC)

If the latter, then maybe `gc` should be a list of `cbKey` structs to avoid confusion.
Good call, thanks -- made `gc []cbKey`.
pkg/kv/kvclient/kvcoord/dist_sender_circuit_breaker.go
line 37 at r3 (raw file):
Previously, nvanbenschoten (Nathan VanBenschoten) wrote…
Minor suggestion: for your own sake, consider setting this to false until you get back from your week off, then switching it to true.
Sure, done. The guidance is to merge all major changes before the focus period, which I suppose means we should have this enabled on master by Monday. But since no one will be around to follow up on any fallout anyway, it's fine to leave it off for another week.
Force-pushed from 20444e0 to dc02c57.
Reviewable status: complete! 0 of 0 LGTMs obtained (waiting on @andrewbaptist and @nvanbenschoten)
pkg/kv/kvclient/kvcoord/dist_sender_circuit_breaker.go
line 37 at r3 (raw file):
Previously, erikgrinaker (Erik Grinaker) wrote…
Sure, done. The guidance is to merge all major changes before the focus period, which I suppose means we should have this enabled on master by Monday. But since no one will be around to follow up on any fallout anyway, it's fine to leave it off for another week.
Reverted to enabled by default, since we didn't merge this. Should be good to go now?
Verified that this recovers.
Force-pushed from dc02c57 to 34a44e8.
This patch adds an initial implementation of DistSender replica circuit breakers. Their primary purpose is to prevent the DistSender getting stuck on non-functional replicas. In particular, the DistSender relies on receiving a NLHE from the replica to update its range cache and try other replicas, otherwise it will keep sending requests to the same broken replica which will continue to get stuck, giving the appearance of an unavailable range. This can happen if:

- The replica stalls, e.g. with a disk stall or mutex deadlock.
- Clients time out before the replica lease acquisition attempt times out, e.g. if the replica is partitioned away from the leader.

If a replica has returned only errors in the past few seconds, or hasn't returned any responses at all, the circuit breaker will probe the replica by sending a `LeaseInfo` request. This must either return success or a NLHE pointing to a leaseholder. Otherwise, the circuit breaker trips, and the DistSender will skip it for future requests, optionally also cancelling in-flight requests.

Currently, only replica-level circuit breakers are implemented. If a range is unavailable, the DistSender will continue to retry replicas as today. Range-level circuit breakers can be added later if needed, but are considered out of scope here.

The circuit breakers are disabled by default for now. Some follow-up work is likely needed before they can be enabled by default:

* Improve probe scalability. Currently, a goroutine is spawned per replica probe, which is likely too expensive at large scales. We should consider batching probes to nodes/stores, and using a bounded worker pool.
* Consider follower read handling, e.g. by tracking the replica's closed timestamp and allowing requests that may still be served by it even if it's partitioned away from the leaseholder.
* Improve observability, with metrics, tracing, and logging.
* Comprehensive testing and benchmarking.

This will be addressed separately.
Epic: none Release note (general change): gateways will now detect faulty or stalled replicas and use other replicas instead, which can prevent them getting stuck in certain cases (e.g. with disk stalls). This behavior can be disabled via the cluster setting `kv.dist_sender.circuit_breaker.enabled`.
Force-pushed from 34a44e8 to 68a90a6.
Reviewed 3 of 3 files at r6, all commit messages.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @andrewbaptist)
Merging on behalf of Erik. bors r+
Build succeeded.
This patch adds an initial implementation of DistSender replica circuit breakers. Their primary purpose is to prevent the DistSender getting stuck on non-functional replicas. In particular, the DistSender relies on receiving a NLHE from the replica to update its range cache and try other replicas, otherwise it will keep sending requests to the same broken replica which will continue to get stuck, giving the appearance of an unavailable range. This can happen if:

- The replica stalls, e.g. with a disk stall or mutex deadlock.
- Clients time out before the replica lease acquisition attempt times out, e.g. if the replica is partitioned away from the leader.
If a replica has returned only errors in the past few seconds, or hasn't returned any responses at all, the circuit breaker will probe the replica by sending a `LeaseInfo` request. This must either return success or a NLHE pointing to a leaseholder. Otherwise, the circuit breaker trips, and the DistSender will skip it for future requests, optionally also cancelling in-flight requests.

Currently, only replica-level circuit breakers are implemented. If a range is unavailable, the DistSender will continue to retry replicas as today. Range-level circuit breakers can be added later if needed, but are considered out of scope here.
Some follow-up work is needed:

* Improve probe scalability. Currently, a goroutine is spawned per replica probe, which is likely too expensive at large scales. We should consider batching probes to nodes/stores, and using a bounded worker pool.
* Consider follower read handling, e.g. by tracking the replica's closed timestamp and allowing requests that may still be served by it even if it's partitioned away from the leaseholder.
* Improve observability, with metrics, tracing, and logging.
* Comprehensive testing and benchmarking.

This will be addressed separately.
Resolves #105168.
Resolves #104262.
Resolves #81100.
Resolves #80713.
Epic: none
Release note (general change): gateways will now detect faulty or stalled replicas and use other replicas instead, which can prevent them getting stuck in certain cases (e.g. with disk stalls). This behavior can be disabled via the cluster setting `kv.dist_sender.circuit_breaker.enabled`.