storage: pull Store-level concurrency retry loop under Replica, clean up req validation #43138

Merged
merged 8 commits into from Dec 16, 2019

Conversation

nvanbenschoten

This PR contains a number of cleanup steps necessary to pave the way for the unified pkg/storage/concurrency package that we plan to introduce for #41720 (see prototype). Primarily, the PR cleans up Replica-level request validation, unifies some of the request synchronization logic between the read-only and read-write execution paths, and pulls the Store-level concurrency retry loop under the Replica. Future commits will pull parts of this retry loop down into the concurrency package's "concurrency manager".



@tbg tbg left a comment


Reviewed 3 of 3 files at r1, 10 of 10 files at r2, 2 of 2 files at r3, 3 of 3 files at r4, 1 of 1 files at r5, 1 of 1 files at r6.
Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @ajwerner, @nvanbenschoten, @sumeerbhola, and @tbg)


pkg/storage/replica.go, line 1022 at r2 (raw file):

// checkExecutionCanProceed returns an error if a batch request cannot be
// executed by the Replica. An error indicates that the Replica is not live and
// able to serve traffic or that the request is not compatable with the state of

compatible


pkg/storage/replica.go, line 1028 at r2 (raw file):

// used to indicate whether the caller has acquired latches and checked the
// Range lease. The method will only check for a pending merge if both of these
// conditions are true.

Add that lg and st can be nil and under which conditions the caller would do that.


pkg/storage/replica.go, line 1054 at r2 (raw file):

}

// checkExecutionCanProceedForRangeFeed .returns an error if a rangefeed request

extra dot


pkg/storage/replica.go, line 1074 at r2 (raw file):

}

// checkSpanInRange returns an error if a request (identified by its

nit: name is off


pkg/storage/replica_raft.go, line 68 at r4 (raw file):

func (r *Replica) evalAndPropose(
	ctx context.Context,
	lease *roachpb.Lease,

It's fine to leak the lease pointer out of r.mu.state out from under the lock because we only ever replace that pointer, but never mutate into it, right? Might be worth a comment on ReplicaState.


pkg/storage/replica_write.go, line 122 at r1 (raw file):

	// Verify that the batch can be executed.
	if ec.lg != nil {
		if err := r.checkForPendingMerge(ctx, ba, ec.lg, &status); err != nil {

Isn't it potentially problematic to order this after redirectOnOrAcquireLease? That method may try to get the lease and in doing so, may stall (waiting for the intent on the range desc to go away). I don't know if there's anything wrong with the code per se, but it does seem that there's some thinking to be had about deadlocks. It might be safer to put this ahead of the lease check.


pkg/storage/replica_write.go, line 158 at r2 (raw file):

	// Checking the context just before proposing can help avoid ambiguous errors.
	if err := ctx.Err(); err != nil {
		log.Warningf(ctx, "%s before proposing: %s", err, ba.Summary())

not new code, but this shouldn't really be a warning, should it?

@tbg tbg self-requested a review December 13, 2019 12:41
This commit shuffles around the read-only Replica execution path to acquire
latches before checking the Replica's lease. This makes the read-only path
more like the write path and will allow for future simplification.

In order to permit this, the commit needed to pull out the logic to check
for a pending merge from beginCmds. This allowed us to clean it up a bit
and make its interaction with other checks more clear.

Release note: None

@nvanbenschoten nvanbenschoten left a comment


Thanks for giving this a pass!

Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @ajwerner, @sumeerbhola, and @tbg)


pkg/storage/replica.go, line 1022 at r2 (raw file):

Previously, tbg (Tobias Grieger) wrote…

compatible

Done.


pkg/storage/replica.go, line 1028 at r2 (raw file):

Previously, tbg (Tobias Grieger) wrote…

Add that lg and st can be nil and under which conditions the caller would do that.

Done.


pkg/storage/replica.go, line 1054 at r2 (raw file):

Previously, tbg (Tobias Grieger) wrote…

extra dot

Done.


pkg/storage/replica.go, line 1074 at r2 (raw file):

Previously, tbg (Tobias Grieger) wrote…

nit: name is off

Done.


pkg/storage/replica_raft.go, line 68 at r4 (raw file):

Previously, tbg (Tobias Grieger) wrote…

It's fine to leak the lease pointer out of r.mu.state out from under the lock because we only ever replace that pointer, but never mutate into it, right? Might be worth a comment on ReplicaState.

This isn't coming straight from r.mu.state. We copy it into LeaseStatus in redirectOnOrAcquireLease.


pkg/storage/replica_write.go, line 122 at r1 (raw file):

Previously, tbg (Tobias Grieger) wrote…

Isn't is potentially problematic to order this after redirectOnOrAcquireLease? That method may try to get the lease and in doing so, may stall (waiting for the intent on the range desc to go away). I don't know if there's anything wrong with the code per se, but it does seem that there's some thinking to be had about deadlocks. It might be safer to put this ahead of the lease check.

The merge check actually needs to be after the lease check on the read-only path in order to get TestStoreRangeMergeRHSLeaseExpiration working. Without that, it's possible that a new leaseholder accepts a read while the range is meant to be frozen. This is because the mergeCompleteCh is set in leasePostApply when a frozen range transfers its lease, so if the mergeCompleteCh is checked before the lease then the read can miss the mergeCompleteCh but observe the new lease, allowing it to accidentally proceed.

No tests fail when I perform the re-order on the read-write path, but doing so in different places depending on the access method seems pretty bad to me. I'm also not convinced we wouldn't hit the same kind of issue with writes as we do with reads if we performed the re-order. I'll try extending TestStoreRangeMergeRHSLeaseExpiration to also perform writes and see what I find.


pkg/storage/replica_write.go, line 158 at r2 (raw file):

Previously, tbg (Tobias Grieger) wrote…

not new code, but this shouldn't really be a warning, should it?

Nope, changed to log.VEventf(2, ...)

This reveals that writes to the RHS of a merge used to be allowed during a merge's
critical phase if the RHS's lease expired at just the right time and raced with a
write to the new leaseholder. The previous commit fixed this bug by checking the
mergeCompleteCh after validating the replica's lease, instead of checking it before
validating the replica's lease.

Release note (bug fix): Fixed a bug where a well-timed write could slip in
on the right-hand side of a Range merge. This would allow it to improperly
synchronize with reads on the post-merged Range.
This commit cleans up the validation paths for BatchRequest and rangefeed
requests. It unifies the code that determines whether a request is compatible
with the current state of the Replica it intends to execute on.

This has a nice side effect of addressing a performance concern that I've
had for a while -- on both the Replica read and write path, we were repeatedly
read-locking the Replica Mutex to perform each of the validation steps. This
commit addresses this by fusing all verification steps into a single critical
section.

Release note: None
Pass by reference, not value. Lease is fairly large and we just
want to pull out its sequence later on in the method.

Release note: None
We always set this and weren't prepared to handle the WriteIntentError
even if we didn't.

Release note: None
…l Txn

*TransactionPushError already implements the transactionRestartError interface,
so this wasn't needed.

Release note: None
This commit pulls the retry loop in `Store.Send` down under `Replica.Send`,
which provides a path to introduce a centralized concurrency manager as part
of addressing cockroachdb#41720. The new `Replica.executeBatchWithConcurrencyRetries`
method will call into the concurrency package and delegate some of its retry
handling logic to it once the package is introduced.

Release note: None

@nvanbenschoten nvanbenschoten left a comment


Reviewable status: :shipit: complete! 0 of 0 LGTMs obtained (waiting on @ajwerner, @sumeerbhola, and @tbg)


pkg/storage/replica_write.go, line 122 at r1 (raw file):

Previously, nvanbenschoten (Nathan VanBenschoten) wrote…

The merge check actually needs to be after the lease check on the read-only path in order to get TestStoreRangeMergeRHSLeaseExpiration working. Without that, it's possible that a new leaseholder accepts a read while the range is meant to be frozen. This is because the mergeCompleteCh is set in leasePostApply when a frozen range transfers its lease, so if the mergeCompleteCh is checked before the lease then the read can miss the mergeCompleteCh but observe the new lease, allowing it to accidentally proceed.

No tests fail when I perform the re-order on the read-write path, but doing so in different places depending on the access method seems pretty bad to me. I'm also not convinced we wouldn't hit the same kind of issue with writes as we do with reads if we performed the re-order. I'll try extending TestStoreRangeMergeRHSLeaseExpiration to also perform writes and see what I find.

I think this was actually broken before this change. I added some writes to TestStoreRangeMergeRHSLeaseExpiration (see new commit) and that revealed that writes could race with a lease transfer during the critical section of a merge and be improperly permitted. It doesn't look like the writes were lost, but I think the hazard is that they were improperly synchronized with reads on the post-merged range.

This issue goes away with the change here to check for merges after checking the lease.


@ajwerner ajwerner left a comment


Really nice clean up! :lgtm:

Reviewed 1 of 10 files at r9, 1 of 11 files at r10, 4 of 10 files at r11, 1 of 3 files at r13, 8 of 8 files at r16.
Reviewable status: :shipit: complete! 1 of 0 LGTMs obtained (waiting on @sumeerbhola and @tbg)

@nvanbenschoten

TFTR! For reference, here's the benchdiff output checking for any regressions:

$ benchdiff --new=nvanbenschoten/concPkg --old=master --post-checkout='make buildshort' --sheets ./pkg/sql/tests

name                                       old time/op    new time/op    delta
KV/Delete/Native/rows=1000-24                8.06ms ± 1%    7.89ms ± 3%  -2.12%  (p=0.001 n=10+10)
KV/Update/Native/rows=10-24                   418µs ± 1%     409µs ± 4%  -2.04%  (p=0.004 n=9+9)
KV/Update/Native/rows=1000-24                18.0ms ± 1%    17.8ms ± 2%  -1.43%  (p=0.012 n=7+9)
KV/Delete/Native/rows=10000-24               82.9ms ± 2%    81.7ms ± 1%  -1.40%  (p=0.028 n=10+9)
Bank/Cockroach/numAccounts=2-24              2.78ms ± 6%    2.71ms ±13%    ~     (p=0.315 n=10+10)
Bank/Cockroach/numAccounts=4-24              2.25ms ± 5%    2.30ms ± 6%    ~     (p=0.243 n=10+9)
Bank/Cockroach/numAccounts=8-24              1.30ms ± 7%    1.28ms ± 8%    ~     (p=0.393 n=10+10)
Bank/Cockroach/numAccounts=32-24              443µs ± 6%     446µs ± 3%    ~     (p=0.579 n=10+10)
Bank/Cockroach/numAccounts=64-24              301µs ± 2%     297µs ± 2%    ~     (p=0.133 n=9+10)
Bank/MultinodeCockroach/numAccounts=2-24     7.06ms ±10%    6.89ms ± 8%    ~     (p=0.353 n=10+10)
Bank/MultinodeCockroach/numAccounts=4-24     5.34ms ±11%    5.12ms ±10%    ~     (p=0.105 n=10+10)
Bank/MultinodeCockroach/numAccounts=8-24     2.78ms ± 3%    2.77ms ± 6%    ~     (p=0.364 n=7+10)
Bank/MultinodeCockroach/numAccounts=32-24     939µs ± 7%     917µs ± 3%    ~     (p=0.447 n=10+9)
Bank/MultinodeCockroach/numAccounts=64-24     621µs ± 6%     623µs ± 5%    ~     (p=0.720 n=9+10)
KV/Insert/Native/rows=1-24                    164µs ± 3%     162µs ± 3%    ~     (p=0.315 n=10+10)
KV/Insert/Native/rows=10-24                   244µs ± 4%     243µs ± 6%    ~     (p=1.000 n=10+10)
KV/Insert/Native/rows=100-24                  824µs ± 2%     821µs ± 2%    ~     (p=0.436 n=10+10)
KV/Insert/Native/rows=1000-24                5.98ms ± 3%    5.95ms ± 3%    ~     (p=0.280 n=10+10)
KV/Insert/Native/rows=10000-24               61.7ms ± 2%    61.6ms ± 3%    ~     (p=0.529 n=10+10)
KV/Insert/SQL/rows=1-24                       859µs ± 6%     842µs ± 4%    ~     (p=0.360 n=10+8)
KV/Insert/SQL/rows=10-24                     1.05ms ± 6%    1.04ms ± 3%    ~     (p=0.720 n=10+9)
KV/Insert/SQL/rows=100-24                    2.40ms ± 8%    2.35ms ± 4%    ~     (p=0.497 n=10+9)
KV/Insert/SQL/rows=1000-24                   13.6ms ± 2%    13.8ms ± 4%    ~     (p=0.218 n=10+10)
KV/Insert/SQL/rows=10000-24                   182ms ± 3%     182ms ± 3%    ~     (p=0.971 n=10+10)
KV/Update/Native/rows=1-24                    219µs ± 2%     220µs ± 3%    ~     (p=0.842 n=10+9)
KV/Update/Native/rows=100-24                 1.86ms ± 2%    1.86ms ± 3%    ~     (p=0.842 n=9+10)
KV/Update/Native/rows=10000-24                230ms ± 6%     228ms ± 4%    ~     (p=0.684 n=10+10)
KV/Update/SQL/rows=1-24                       970µs ± 5%     957µs ± 4%    ~     (p=0.190 n=10+10)
KV/Update/SQL/rows=10-24                     1.50ms ± 5%    1.47ms ± 5%    ~     (p=0.075 n=10+10)
KV/Update/SQL/rows=100-24                    4.16ms ± 3%    4.13ms ± 3%    ~     (p=0.780 n=10+9)
KV/Update/SQL/rows=1000-24                   27.3ms ± 3%    27.2ms ± 2%    ~     (p=0.218 n=10+10)
KV/Update/SQL/rows=10000-24                   352ms ± 4%     351ms ± 2%    ~     (p=0.842 n=10+9)
KV/Delete/Native/rows=1-24                    161µs ± 5%     158µs ± 1%    ~     (p=0.203 n=10+8)
KV/Delete/Native/rows=10-24                   258µs ± 5%     253µs ± 3%    ~     (p=0.065 n=10+9)
KV/Delete/Native/rows=100-24                 1.02ms ± 2%    1.01ms ± 2%    ~     (p=0.247 n=10+10)
KV/Delete/SQL/rows=1-24                       732µs ± 7%     725µs ± 3%    ~     (p=0.661 n=9+10)
KV/Delete/SQL/rows=10-24                      979µs ± 4%     974µs ± 4%    ~     (p=0.604 n=9+10)
KV/Delete/SQL/rows=100-24                    3.90ms ± 5%    3.84ms ± 4%    ~     (p=0.243 n=9+10)
KV/Delete/SQL/rows=1000-24                   29.3ms ± 4%    29.6ms ± 3%    ~     (p=0.315 n=10+10)
KV/Delete/SQL/rows=10000-24                   348ms ±30%     388ms ± 2%    ~     (p=0.182 n=10+9)
KV/Scan/Native/rows=1-24                     37.9µs ± 1%    37.6µs ± 2%    ~     (p=0.123 n=10+10)
KV/Scan/Native/rows=10-24                    40.9µs ± 2%    40.4µs ± 2%    ~     (p=0.053 n=10+9)
KV/Scan/Native/rows=100-24                   64.9µs ± 2%    64.6µs ± 1%    ~     (p=0.497 n=9+10)
KV/Scan/Native/rows=1000-24                   292µs ± 1%     290µs ± 2%    ~     (p=0.065 n=9+10)
KV/Scan/Native/rows=10000-24                 2.44ms ± 2%    2.42ms ± 1%    ~     (p=0.095 n=9+10)
KV/Scan/SQL/rows=1-24                         559µs ± 3%     550µs ± 2%    ~     (p=0.156 n=10+9)
KV/Scan/SQL/rows=10-24                        580µs ± 4%     586µs ± 8%    ~     (p=0.905 n=10+9)
KV/Scan/SQL/rows=100-24                       743µs ± 3%     749µs ± 2%    ~     (p=0.360 n=10+8)
KV/Scan/SQL/rows=1000-24                     2.01ms ± 2%    2.00ms ± 4%    ~     (p=0.529 n=10+10)
KV/Scan/SQL/rows=10000-24                    14.7ms ± 4%    14.7ms ± 1%    ~     (p=0.829 n=10+8)

name                                       old alloc/op   new alloc/op   delta
Bank/MultinodeCockroach/numAccounts=4-24     2.06MB ± 6%    1.97MB ± 4%  -4.24%  (p=0.013 n=10+9)
Bank/Cockroach/numAccounts=2-24               395kB ± 3%     382kB ± 8%  -3.44%  (p=0.023 n=10+10)
KV/Delete/SQL/rows=100-24                     317kB ± 0%     316kB ± 0%  -0.27%  (p=0.011 n=10+10)
KV/Insert/Native/rows=1000-24                1.33MB ± 0%    1.33MB ± 0%  -0.19%  (p=0.007 n=10+10)
Bank/Cockroach/numAccounts=4-24               420kB ± 6%     424kB ± 8%    ~     (p=0.529 n=10+10)
Bank/Cockroach/numAccounts=8-24               334kB ± 5%     326kB ± 4%    ~     (p=0.123 n=10+10)
Bank/Cockroach/numAccounts=32-24              212kB ± 2%     213kB ± 1%    ~     (p=0.631 n=10+10)
Bank/Cockroach/numAccounts=64-24              178kB ± 2%     177kB ± 1%    ~     (p=0.529 n=10+10)
Bank/MultinodeCockroach/numAccounts=2-24     2.59MB ±14%    2.51MB ±10%    ~     (p=0.315 n=10+10)
Bank/MultinodeCockroach/numAccounts=8-24     1.37MB ± 5%    1.36MB ± 5%    ~     (p=0.669 n=10+10)
Bank/MultinodeCockroach/numAccounts=32-24     660kB ± 1%     665kB ± 3%    ~     (p=0.274 n=8+10)
Bank/MultinodeCockroach/numAccounts=64-24     494kB ± 2%     495kB ± 3%    ~     (p=0.739 n=10+10)
KV/Insert/Native/rows=1-24                   11.1kB ± 0%    11.1kB ± 0%    ~     (p=0.505 n=9+9)
KV/Insert/Native/rows=10-24                  23.1kB ± 0%    23.1kB ± 0%    ~     (p=0.529 n=10+10)
KV/Insert/Native/rows=100-24                  149kB ± 0%     149kB ± 0%    ~     (p=0.447 n=10+9)
KV/Insert/Native/rows=10000-24               18.8MB ± 1%    18.8MB ± 1%    ~     (p=0.247 n=10+10)
KV/Insert/SQL/rows=1-24                      44.8kB ± 1%    44.7kB ± 1%    ~     (p=0.853 n=10+10)
KV/Insert/SQL/rows=10-24                     79.2kB ± 1%    79.1kB ± 1%    ~     (p=0.529 n=10+10)
KV/Insert/SQL/rows=100-24                     403kB ± 0%     403kB ± 0%    ~     (p=0.971 n=10+10)
KV/Insert/SQL/rows=1000-24                   4.16MB ± 0%    4.16MB ± 0%    ~     (p=0.739 n=10+10)
KV/Insert/SQL/rows=10000-24                  80.2MB ± 0%    80.2MB ± 0%    ~     (p=0.853 n=10+10)
KV/Update/Native/rows=1-24                   16.9kB ± 0%    16.9kB ± 1%    ~     (p=0.736 n=9+10)
KV/Update/Native/rows=10-24                  36.7kB ± 1%    36.7kB ± 1%    ~     (p=0.579 n=10+10)
KV/Update/Native/rows=100-24                  247kB ± 0%     247kB ± 0%    ~     (p=0.762 n=10+8)
KV/Update/Native/rows=1000-24                2.22MB ± 0%    2.23MB ± 0%    ~     (p=0.243 n=9+10)
KV/Update/Native/rows=10000-24               33.3MB ± 0%    33.3MB ± 0%    ~     (p=0.393 n=10+10)
KV/Update/SQL/rows=1-24                      59.9kB ± 0%    59.9kB ± 0%    ~     (p=0.648 n=10+8)
KV/Update/SQL/rows=10-24                      113kB ± 0%     113kB ± 0%    ~     (p=0.143 n=10+10)
KV/Update/SQL/rows=100-24                     483kB ± 0%     483kB ± 0%    ~     (p=0.247 n=10+10)
KV/Update/SQL/rows=1000-24                   4.26MB ± 0%    4.26MB ± 0%    ~     (p=0.481 n=10+10)
KV/Update/SQL/rows=10000-24                  79.4MB ± 0%    79.4MB ± 1%    ~     (p=0.796 n=10+10)
KV/Delete/Native/rows=1-24                   11.0kB ± 1%    11.0kB ± 0%    ~     (p=0.120 n=9+7)
KV/Delete/Native/rows=10-24                  20.8kB ± 0%    20.8kB ± 1%    ~     (p=0.782 n=10+10)
KV/Delete/Native/rows=100-24                  127kB ± 0%     127kB ± 0%    ~     (p=0.424 n=10+10)
KV/Delete/Native/rows=1000-24                1.10MB ± 0%    1.10MB ± 0%    ~     (p=0.579 n=10+10)
KV/Delete/Native/rows=10000-24               16.0MB ± 0%    16.0MB ± 0%    ~     (p=0.095 n=9+10)
KV/Delete/SQL/rows=1-24                      45.3kB ± 0%    45.3kB ± 0%    ~     (p=0.549 n=9+10)
KV/Delete/SQL/rows=10-24                     68.1kB ± 0%    68.2kB ± 0%    ~     (p=0.079 n=10+9)
KV/Delete/SQL/rows=1000-24                   3.33MB ± 0%    3.33MB ± 0%    ~     (p=0.075 n=10+10)
KV/Delete/SQL/rows=10000-24                  44.8MB ± 1%    44.5MB ± 0%    ~     (p=0.211 n=10+9)
KV/Scan/Native/rows=1-24                     7.54kB ± 0%    7.53kB ± 0%    ~     (p=0.196 n=10+10)
KV/Scan/Native/rows=10-24                    8.73kB ± 0%    8.72kB ± 0%    ~     (p=0.507 n=9+10)
KV/Scan/Native/rows=100-24                   20.6kB ± 0%    20.6kB ± 0%    ~     (p=0.888 n=9+10)
KV/Scan/Native/rows=1000-24                   147kB ± 0%     147kB ± 0%    ~     (p=0.117 n=10+9)
KV/Scan/Native/rows=10000-24                 1.34MB ± 0%    1.34MB ± 0%    ~     (p=0.888 n=10+9)
KV/Scan/SQL/rows=1-24                        34.9kB ± 0%    34.9kB ± 0%    ~     (p=0.838 n=10+10)
KV/Scan/SQL/rows=10-24                       37.3kB ± 1%    37.2kB ± 1%    ~     (p=0.393 n=10+10)
KV/Scan/SQL/rows=100-24                      54.7kB ± 1%    54.7kB ± 0%    ~     (p=0.483 n=9+10)
KV/Scan/SQL/rows=1000-24                      238kB ± 0%     238kB ± 0%    ~     (p=0.436 n=10+10)
KV/Scan/SQL/rows=10000-24                    2.38MB ± 0%    2.38MB ± 0%    ~     (p=0.971 n=10+10)

name                                       old allocs/op  new allocs/op  delta
Bank/MultinodeCockroach/numAccounts=4-24      17.2k ± 6%     16.4k ± 5%  -4.61%  (p=0.003 n=10+9)
Bank/Cockroach/numAccounts=2-24               3.69k ± 3%     3.56k ± 8%  -3.62%  (p=0.023 n=10+10)
KV/Scan/SQL/rows=10-24                          392 ± 0%       392 ± 0%  -0.10%  (p=0.046 n=10+8)
KV/Insert/Native/rows=1000-24                 11.2k ± 0%     11.2k ± 0%  -0.08%  (p=0.013 n=10+8)
KV/Delete/SQL/rows=100-24                     2.27k ± 0%     2.27k ± 0%  -0.04%  (p=0.011 n=7+8)
Bank/Cockroach/numAccounts=4-24               3.92k ± 6%     3.92k ± 4%    ~     (p=0.842 n=10+9)
Bank/Cockroach/numAccounts=8-24               3.09k ± 4%     3.03k ± 4%    ~     (p=0.211 n=9+10)
Bank/Cockroach/numAccounts=32-24              1.93k ± 2%     1.94k ± 2%    ~     (p=0.645 n=10+10)
Bank/Cockroach/numAccounts=64-24              1.59k ± 1%     1.58k ± 1%    ~     (p=0.271 n=10+10)
Bank/MultinodeCockroach/numAccounts=2-24      21.5k ±15%     20.8k ±10%    ~     (p=0.247 n=10+10)
Bank/MultinodeCockroach/numAccounts=8-24      11.5k ± 3%     11.3k ± 6%    ~     (p=0.447 n=9+10)
Bank/MultinodeCockroach/numAccounts=32-24     5.31k ± 1%     5.34k ± 1%    ~     (p=0.226 n=8+9)
Bank/MultinodeCockroach/numAccounts=64-24     4.06k ± 1%     4.05k ± 4%    ~     (p=0.928 n=10+10)
KV/Insert/Native/rows=1-24                      113 ± 0%       113 ± 0%    ~     (all equal)
KV/Insert/Native/rows=10-24                     225 ± 0%       224 ± 0%    ~     (p=1.000 n=10+10)
KV/Insert/Native/rows=100-24                  1.24k ± 0%     1.24k ± 0%    ~     (all equal)
KV/Insert/Native/rows=10000-24                 111k ± 0%      111k ± 0%    ~     (p=0.753 n=10+10)
KV/Insert/SQL/rows=1-24                         404 ± 0%       404 ± 0%    ~     (all equal)
KV/Insert/SQL/rows=10-24                        641 ± 1%       641 ± 1%    ~     (p=0.970 n=10+10)
KV/Insert/SQL/rows=100-24                     2.77k ± 0%     2.77k ± 0%    ~     (p=0.346 n=10+8)
KV/Insert/SQL/rows=1000-24                    23.8k ± 0%     23.8k ± 0%    ~     (p=0.846 n=8+10)
KV/Insert/SQL/rows=10000-24                    354k ± 0%      354k ± 0%    ~     (p=0.515 n=10+8)
KV/Update/Native/rows=1-24                      165 ± 0%       165 ± 0%    ~     (all equal)
KV/Update/Native/rows=10-24                     348 ± 0%       348 ± 0%    ~     (all equal)
KV/Update/Native/rows=100-24                  2.00k ± 0%     2.00k ± 0%    ~     (p=0.176 n=10+7)
KV/Update/Native/rows=1000-24                 18.4k ± 0%     18.4k ± 0%    ~     (p=0.323 n=9+10)
KV/Update/Native/rows=10000-24                 182k ± 0%      182k ± 0%    ~     (p=0.118 n=10+10)
KV/Update/SQL/rows=1-24                         535 ± 0%       535 ± 0%    ~     (all equal)
KV/Update/SQL/rows=10-24                        899 ± 0%       899 ± 0%    ~     (p=1.000 n=10+10)
KV/Update/SQL/rows=100-24                     3.14k ± 0%     3.14k ± 0%    ~     (p=0.170 n=10+10)
KV/Update/SQL/rows=1000-24                    25.5k ± 0%     25.5k ± 0%    ~     (p=0.329 n=9+8)
KV/Update/SQL/rows=10000-24                    409k ± 1%      409k ± 1%    ~     (p=0.968 n=10+9)
KV/Delete/Native/rows=1-24                      112 ± 0%       112 ± 0%    ~     (all equal)
KV/Delete/Native/rows=10-24                     194 ± 0%       194 ± 0%    ~     (all equal)
KV/Delete/Native/rows=100-24                    935 ± 0%       934 ± 0%    ~     (p=0.191 n=10+10)
KV/Delete/Native/rows=1000-24                 8.25k ± 0%     8.25k ± 0%    ~     (p=0.214 n=10+10)
KV/Delete/Native/rows=10000-24                81.5k ± 0%     81.5k ± 0%    ~     (p=0.345 n=10+9)
KV/Delete/SQL/rows=1-24                         392 ± 0%       392 ± 0%    ~     (all equal)
KV/Delete/SQL/rows=10-24                        585 ± 0%       585 ± 0%    ~     (p=0.471 n=9+9)
KV/Delete/SQL/rows=1000-24                    21.3k ± 0%     21.3k ± 0%    ~     (p=0.210 n=10+10)
KV/Delete/SQL/rows=10000-24                    276k ± 5%      271k ± 0%    ~     (p=0.203 n=10+9)
KV/Scan/Native/rows=1-24                       68.0 ± 0%      68.0 ± 0%    ~     (all equal)
KV/Scan/Native/rows=10-24                      68.0 ± 0%      68.0 ± 0%    ~     (all equal)
KV/Scan/Native/rows=100-24                     68.0 ± 0%      68.0 ± 0%    ~     (all equal)
KV/Scan/Native/rows=1000-24                    69.0 ± 0%      69.0 ± 0%    ~     (all equal)
KV/Scan/Native/rows=10000-24                   82.7 ± 2%      82.5 ± 2%    ~     (p=0.626 n=10+10)
KV/Scan/SQL/rows=1-24                           364 ± 0%       364 ± 0%    ~     (all equal)
KV/Scan/SQL/rows=100-24                         772 ± 0%       772 ± 0%    ~     (p=1.000 n=10+10)
KV/Scan/SQL/rows=1000-24                      4.55k ± 0%     4.55k ± 0%    ~     (p=0.247 n=8+9)
KV/Scan/SQL/rows=10000-24                     44.1k ± 0%     44.1k ± 0%    ~     (p=0.541 n=10+10)

generated sheet: https://docs.google.com/spreadsheets/d/1EoSnsRab5kglCIEqoTyvumguJ3Rym-_zKeoGxaMEFzs/edit

@nvanbenschoten

bors r+

craig bot pushed a commit that referenced this pull request Dec 16, 2019
43138: storage: pull Store-level concurrency retry loop under Replica, clean up req validation r=nvanbenschoten a=nvanbenschoten

This PR contains a number of cleanup steps necessary to pave the way for the unified `pkg/storage/concurrency` package that we plan to introduce for #41720 (see [prototype](https://github.com/nvanbenschoten/cockroach/commits/nvanbenschoten/lockTable)). Primarily, the PR cleans up Replica-level request validation, unifies some of the request synchronization logic between the read-only and read-write execution paths, and pulls the Store-level concurrency retry loop under the Replica. Future commits will pull parts of this retry loop down into the `concurrency` package's "concurrency manager".

Co-authored-by: Nathan VanBenschoten <nvanbenschoten@gmail.com>
@craig

craig bot commented Dec 16, 2019

Build succeeded

@craig craig bot merged commit 7103bb9 into cockroachdb:master Dec 16, 2019
@nvanbenschoten nvanbenschoten deleted the nvanbenschoten/concPkg branch December 27, 2019 22:59