release-19.2: storage: make queue timeouts controllable, snapshot sending queues dynamic #44952

Merged

Conversation

ajwerner
Contributor

Backport:

Please see individual PRs for details.

/cc @cockroachdb/release

The background storage queues each carry a timeout. The timeout seems like a
good idea to unstick a potentially stuck queue. I'm not going to speculate as
to why these queues might find themselves getting stuck, but let's just say
maybe it happens and maybe having a timeout is a good idea. Unfortunately,
these timeouts can sometimes come back to bite us, especially when things are
unexpectedly slow or data sizes are unexpectedly large. Failure to make
progress in queue processing before the timeout expires is a common cause of
long-term outages. In those cases it's good to have an escape hatch.

Another concern on the horizon is the desire to have larger range sizes. Today
our default queue processing timeout is 1m. The raft snapshot queue does not
override this. The default snapshot rate is 8 MB/s.

```
 (512MB / 8MB/s) = 64s
                 > 1m
```

This unfortunate fact means that ranges larger than 512 MB can never
successfully receive a snapshot from the raft snapshot queue. The next
commit addresses this by adding a cluster setting to control the
timeout of the raft snapshot queue.

This commit changes the constant per-queue timeout into a function that can
consult both cluster settings and the Replica about to be processed.
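
To make the shape of that change concrete, here is a minimal sketch in Go (the interface and type names are hypothetical, not the actual CockroachDB code): the queue configuration holds a timeout function rather than a constant duration, and that function receives both the cluster settings and the replica about to be processed.

```go
package storage

import "time"

// settingsReader and replicaInQueue are stand-ins for cluster settings and
// the Replica handed to the queue; the real interfaces are richer. All names
// here are illustrative.
type settingsReader interface {
	MinProcessTimeout() time.Duration
}

type replicaInQueue interface {
	SnapshotSizeBytes() int64
}

// processTimeoutFunc replaces the old constant per-queue timeout: each queue
// can now compute a timeout from the current settings and the replica it is
// about to process.
type processTimeoutFunc func(st settingsReader, r replicaInQueue) time.Duration

// constantTimeout preserves the old behavior for queues that do not need a
// dynamic timeout.
func constantTimeout(d time.Duration) processTimeoutFunc {
	return func(settingsReader, replicaInQueue) time.Duration { return d }
}
```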

Release note: None.
…nd size

This commit adds a hidden setting to control the minimum timeout of
raftSnapshotQueue processing. It is an escape hatch to deal with snapshots of
large ranges. At the default send rate of 8 MB/s, a range must stay smaller
than 500 MB to be sent successfully before the default 1m timeout.
Traditionally, when this limit has been hit, it has been mitigated by
increasing the send rate. That may not always be desirable.

In addition to the minimum timeout, the change now computes the timeout on a
per-Replica basis from the current snapshot rate limit and the size of the
snapshot being sent. This should prevent large ranges with slow send rates
from timing out.
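
As a rough sketch of that computation (the names, the 3x headroom factor, and the guard against a non-positive rate are illustrative assumptions, not the exact implementation):

```go
package storage

import "time"

// raftSnapshotTimeout scales the processing timeout with the size of the
// snapshot and the configured send rate, and never drops below the hidden
// minimum-timeout setting used as an escape hatch.
func raftSnapshotTimeout(
	minTimeout time.Duration, // hidden cluster setting
	snapshotBytes int64, // size of the snapshot about to be sent
	rateBytesPerSec int64, // current snapshot send rate limit
) time.Duration {
	if rateBytesPerSec <= 0 {
		// A non-positive rate would make the estimate meaningless; fall back
		// to the configured minimum (a later commit rejects such values).
		return minTimeout
	}
	// Allow a few times the ideal transfer time to absorb a slow receiver;
	// the factor of 3 is an assumption for illustration.
	const slowdownFactor = 3
	estimate := time.Duration(snapshotBytes) * time.Second /
		time.Duration(rateBytesPerSec) * slowdownFactor
	if estimate < minTimeout {
		return minTimeout
	}
	return estimate
}
```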

Maybe there should be a release note, but because the setting is hidden I
opted not to add one.

Release note: None.
Before this commit, we would permit rate cluster settings to be set to
non-positive values. This would have caused crashes had anybody dared
to try.
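
A minimal sketch of the kind of check this adds (illustrative names, not the real cluster-settings validation API):

```go
package storage

import "fmt"

// validatePositiveRate rejects non-positive values for rate-style settings.
// A zero or negative rate would otherwise be used as a divisor in the timeout
// computation or handed to a rate limiter, causing panics or hangs.
func validatePositiveRate(v int64) error {
	if v <= 0 {
		return fmt.Errorf("snapshot rate must be positive, got %d", v)
	}
	return nil
}
```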

Release note: None
@cockroach-teamcity
Member

This change is Reviewable

@ajwerner ajwerner requested a review from knz February 11, 2020 02:11
Contributor

@knz knz left a comment

LGTM. However:

  • on the one hand, I appreciate that you don't want to advertise these changes as a user-facing feature in the release notes
  • OTOH, the motivation for backporting this is that there are real problems in deployments that this is intended to solve. At the very least you should produce a blurb of explanation for support engineers: what the symptoms of the problem are and how to apply your solution in practice.

Reviewed 4 of 4 files at r1, 1 of 1 files at r2, 1 of 1 files at r3, 7 of 7 files at r4.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained

Contributor

@knz knz left a comment

Maybe put this explanation in the PR description? Both this one and the original one that got merged. I know that TSEs go and look at PRs when they want to understand a change.

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained

@ajwerner
Contributor Author

@rmloveland what's the best way for me to document this for backport? Help on how to word the release note would additionally be appreciated.

tbg added a commit to tbg/cockroach that referenced this pull request Mar 2, 2020
If there is "a lot" of data in Sys{Bytes,Count}, then we are likely
experiencing a large abort span. The abort span is not supposed to
become that large, but it does happen and causes stability fallout,
usually due to a combination of shortcomings:

1. there's no trigger for GC based on abort span size alone (before
   this commit)
2. transaction aborts tended to create unnecessary abort span entries,
   fixed (and 19.2-backported) in:
   cockroachdb#42765
3. aborting transactions in a busy loop:
   cockroachdb#38088
   (and we suspect this also happens in user apps occasionally)
4. large snapshots would never complete due to the queue time limits
   (addressed in cockroachdb#44952).

In an ideal world, we would factor the abort span into this method
directly, but until then the condition guarding this block will do.
At worst, there is some other reason for SysBytes to become that large
while also incurring a large SysCount, but I'm not sure how that
would happen. The only other things in this span are the versioned
range descriptors (which follow regular GC and only ever add
a SysCount of one) and transaction records (which expire like abort
span records).

Release note (bug fix): Range garbage collection will now trigger
based on a large abort span, adding defense-in-depth against ranges
growing large (and eventually unstable).
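
For context, a minimal sketch of the heuristic this commit describes (thresholds and names are illustrative assumptions, not CockroachDB's actual values):

```go
package gc

// suspectsLargeAbortSpan reports whether a range's system-local data is
// suspiciously large, which in practice almost always means a bloated abort
// span and is used as an additional GC trigger.
func suspectsLargeAbortSpan(sysBytes, sysCount int64) bool {
	const (
		sysBytesThreshold = 64 << 20 // bytes of system-local data
		sysCountThreshold = 100000   // number of system-local keys
	)
	return sysBytes > sysBytesThreshold && sysCount > sysCountThreshold
}
```
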
@rmloveland
Collaborator

rmloveland commented Mar 2, 2020 via email

tbg added a commit to tbg/cockroach that referenced this pull request Mar 4, 2020
…namic

In cockroachdb#42686 we made the raft snapshot queue timeout dynamic, based on the
size of the snapshot being sent. We also added an escape hatch to control
that queue's processing timeout. This change generalizes that cluster setting
to apply to all of the queues.

It so happens that the replicate queue and the merge queue also sometimes
need to send snapshots. This PR gives them similar treatment to the
raft snapshot queue.

The previous cluster setting was never released and is reserved, so it does not
need a release note.

Release note (bug fix): Fixed a bug where large ranges with slow send rates
would hit the processing timeout in several storage system queues. The timeout
is now dynamic, based on the current rate limit and the size of the data being
sent. This affects the Raft snapshot queue, the replication queue, and the
merge queue.
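
To illustrate how the generalized setting could apply across queues, a small wiring sketch (hypothetical types; the earlier timeout sketches supply the function):

```go
package storage

import "time"

// snapshotTimeoutFunc is the shared, rate-aware timeout computation from the
// earlier sketches.
type snapshotTimeoutFunc func(snapshotBytes int64) time.Duration

// queueConfig is a stand-in for per-queue configuration; only the timeout
// field matters for this sketch.
type queueConfig struct {
	name           string
	processTimeout snapshotTimeoutFunc
}

// snapshotSendingQueues wires the same dynamic timeout into every queue that
// may need to send a snapshot, rather than only the raft snapshot queue.
func snapshotSendingQueues(timeout snapshotTimeoutFunc) []queueConfig {
	return []queueConfig{
		{name: "raftsnapshot", processTimeout: timeout},
		{name: "replicate", processTimeout: timeout},
		{name: "merge", processTimeout: timeout},
	}
}
```
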
@ajwerner
Contributor Author

ajwerner commented Mar 4, 2020

I added the second release note to the last commit. The setting is not public, so for the moment I'd like to punt on documenting it. I suspect that having it not be public isn't great but it's an escape hatch we shouldn't need to use. If we find ourselves using it, then we should document it.

@ajwerner ajwerner merged commit fa0f6c4 into cockroachdb:release-19.2 Mar 4, 2020
tbg added a commit to tbg/cockroach that referenced this pull request Mar 5, 2020
craig bot pushed a commit that referenced this pull request Mar 5, 2020
45573: storage: trigger GC based on SysCount/SysBytes r=ajwerner a=tbg

tbg added a commit to tbg/cockroach that referenced this pull request Mar 5, 2020