
storage: scatter right after split leads to poor balance #35907

Open
tbg opened this issue Mar 18, 2019 · 1 comment

@tbg (Member) commented Mar 18, 2019

Run this script:

#!/bin/bash

set -euxo pipefail

pkill -9 roach || true
rm -rf cockroach-data* || true

./cockroach start --insecure --listen-addr 127.0.0.1 --background
./cockroach start --insecure --http-port 8081 --port 26258 --store cockroach-data2 --join 127.0.0.1:26257 --background
./cockroach start --insecure --http-port 8082 --port 26259 --store cockroach-data3 --join 127.0.0.1:26257 --logtostderr --background

sleep 10
./cockroach sql --insecure -e "create table foo(id int primary key, v string);"
./cockroach sql --insecure -e "SET CLUSTER SETTING kv.range_merge.queue_enabled = false;"
./cockroach sql --insecure -e "ALTER TABLE foo SPLIT AT (SELECT i*10 FROM generate_series(1, 999) AS g(i));"

# sleep 70
./cockroach sql --insecure -e "ALTER TABLE foo SCATTER;"

See this kind of graph:

[image: graph of leaseholder counts per node, showing a large imbalance]

The expectation is that SCATTER leaves the leaseholders roughly balanced: with 999 split points across three nodes, each node should end up with roughly a third of the leases. Instead, the graph shows a >10x difference between nodes.
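For reference, a quick way to eyeball the per-node lease counts from SQL (a sketch, assuming the SHOW EXPERIMENTAL_RANGES output exposes a lease_holder column):

./cockroach sql --insecure -e "SELECT lease_holder, count(*) AS leases FROM [SHOW EXPERIMENTAL_RANGES FROM TABLE foo] GROUP BY lease_holder ORDER BY lease_holder;"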

We think much of the variability in bulk I/O RESTORE/IMPORT durations is due to this phenomenon.

When I last looked at this, I thought it was caused by the allocator receiving updated replica counts only at some interval, but I just inserted a 20s sleep before the scatter and the result is just as bad. Ditto with 70s:

[image: leaseholder graph after a 70s sleep before the scatter, similarly imbalanced]

SCATTER seems to just not be doing the right thing. It appears to drain the local node, giving equal shares of the leases to the other followers.
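One variable worth ruling out is whether the ranges are even fully replicated by the time the scatter runs; a quick check (a sketch, assuming the replicas column of SHOW EXPERIMENTAL_RANGES is an integer array of store IDs):

./cockroach sql --insecure -e "SELECT count(*) AS under_replicated FROM [SHOW EXPERIMENTAL_RANGES FROM TABLE foo] WHERE array_length(replicas, 1) < 3;"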

cc @danhhz this is much worse than I thought 😆

@tbg tbg added this to Incoming in Core via automation Mar 18, 2019

@tbg tbg self-assigned this Mar 18, 2019

@tbg (Member, Author) commented Mar 18, 2019

@darinpp this could be a baptism-by-fire debugging allocator/replication issue for you -- let's chat about it in our 1:1

@awoods187 awoods187 added the C-bug label Apr 1, 2019

@tbg tbg assigned danhhz and unassigned tbg Apr 17, 2019
