
teamcity: failed test: TestStoreRangeMergeWatcher #31160

Closed
cockroach-teamcity opened this issue Oct 9, 2018 · 0 comments
Labels: C-test-failure (Broken test, automatically or manually discovered), O-robot (Originated from a bot)
Milestone: 2.2

Comments

@cockroach-teamcity
Member

The following tests appear to have failed on master (test): TestStoreRangeMergeWatcher/inject-failures=false, TestStoreRangeMergeWatcher, TestStoreRangeMergeWatcher/inject-failures=true

You may want to check for open issues.

#954552:

TestStoreRangeMergeWatcher/inject-failures=false
...6
I181009 19:33:50.133577 47131 storage/store_snapshot.go:657  [s1,r1/1:/M{in-ax}] streamed snapshot to (n2,s2):?: kv pairs: 49, log entries: 6, rate-limit: 2.0 MiB/sec, 1ms
I181009 19:33:50.134033 47659 storage/replica_raftstorage.go:803  [s2,r1/?:{-}] applying preemptive snapshot at index 16 (id=e7575a94, encoded size=8342, 1 rocksdb batches, 6 log entries)
I181009 19:33:50.134889 47659 storage/replica_raftstorage.go:809  [s2,r1/?:/M{in-ax}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I181009 19:33:50.135312 47131 storage/replica_command.go:812  [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n2,s2):2): read existing descriptor r1:/M{in-ax} [(n1,s1):1, next=2, gen=0]
I181009 19:33:50.136808 47131 storage/replica.go:3899  [s1,r1/1:/M{in-ax},txn=d18380cb] proposing ADD_REPLICA((n2,s2):2): updated=[(n1,s1):1 (n2,s2):2] next=3
I181009 19:33:50.137897 47131 storage/store_snapshot.go:615  [s1,r1/1:/M{in-ax}] sending preemptive snapshot 574f0e93 at applied index 18
I181009 19:33:50.138278 47131 storage/store_snapshot.go:657  [s1,r1/1:/M{in-ax}] streamed snapshot to (n3,s3):?: kv pairs: 52, log entries: 8, rate-limit: 2.0 MiB/sec, 1ms
I181009 19:33:50.138583 47664 storage/replica_raftstorage.go:803  [s3,r1/?:{-}] applying preemptive snapshot at index 18 (id=574f0e93, encoded size=9284, 1 rocksdb batches, 8 log entries)
I181009 19:33:50.139296 47664 storage/replica_raftstorage.go:809  [s3,r1/?:/M{in-ax}] applied preemptive snapshot in 1ms [clear=0ms batch=0ms entries=0ms commit=0ms]
I181009 19:33:50.139816 47131 storage/replica_command.go:812  [s1,r1/1:/M{in-ax}] change replicas (ADD_REPLICA (n3,s3):3): read existing descriptor r1:/M{in-ax} [(n1,s1):1, (n2,s2):2, next=3, gen=0]
I181009 19:33:50.143404 47131 storage/replica.go:3899  [s1,r1/1:/M{in-ax},txn=e3e13e2f] proposing ADD_REPLICA((n3,s3):3): updated=[(n1,s1):1 (n2,s2):2 (n3,s3):3] next=4
I181009 19:33:50.188320 47131 storage/replica_command.go:298  [s1,r1/1:/M{in-ax}] initiating a split of this range at key "b" [r2]
I181009 19:33:50.201842 47691 storage/replica_proposal.go:211  [s3,r2/3:{b-/Max}] new range lease repl=(n3,s3):3 seq=2 start=0.000000123,299 epo=1 pro=0.000000123,300 following repl=(n1,s1):1 seq=1 start=0.000000000,0 exp=0.900000123,5 pro=0.000000123,6
I181009 19:33:50.202419 47131 storage/replica_command.go:430  [s1,r1/1:{/Min-b}] initiating a merge of r2:{b-/Max} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=0] into this range
I181009 19:33:50.228232 47521 storage/store.go:2744  [s1,r1/1:{/Min-b},txn=ffd87bfc] removing replica r2/1
I181009 19:33:50.228507 47598 storage/store.go:2744  [s2,r1/2:{/Min-b}] removing replica r2/2
I181009 19:33:50.233326 47725 storage/store.go:2744  [s3,r1/3:{/Min-b}] removing replica r2/3
W181009 19:33:50.234140 47789 storage/raft_transport.go:281  unable to accept Raft message from (n3,s3):3: no handler registered for (n1,s1):1
W181009 19:33:50.234220 47789 storage/raft_transport.go:281  unable to accept Raft message from (n2,s2):2: no handler registered for (n1,s1):1
W181009 19:33:50.234717 47787 storage/raft_transport.go:436  no handler found for store 3 in response range_id:1 from_replica:<node_id:1 store_id:1 replica_id:1 > to_replica:<node_id:3 store_id:3 replica_id:3 > union:<error:<message:"store 1 was not found" transaction_restart:NONE origin_node:0 detail:<store_not_found:<store_id:1 > > now:<> > > 
W181009 19:33:50.234740 47819 storage/raft_transport.go:583  while processing outgoing Raft queue to node 2: EOF:
W181009 19:33:50.234753 47790 storage/raft_transport.go:583  while processing outgoing Raft queue to node 3: EOF:
W181009 19:33:50.234778 47787 storage/raft_transport.go:436  no handler found for store 2 in response range_id:1 from_replica:<node_id:1 store_id:1 replica_id:1 > to_replica:<node_id:2 store_id:2 replica_id:2 > union:<error:<message:"store 1 was not found" transaction_restart:NONE origin_node:0 detail:<store_not_found:<store_id:1 > > now:<> > > 



TestStoreRangeMergeWatcher
--- FAIL: test/TestStoreRangeMergeWatcher (3.190s)
Test ended in panic.




TestStoreRangeMergeWatcher/inject-failures=true
...0x2c3c2e0, 0xc421884a20)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:196 +0x7c
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x2c3c2e0, 0xc421884a20)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:165 +0x3e
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4202d5d60, 0xc4214bf4d0, 0xc4202d5d50)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:199 +0xe9
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:192 +0xad

goroutine 48104 [semacquire]:
sync.runtime_notifyListWait(0xc42111b490, 0x2251)
	/usr/local/go/src/runtime/sema.go:510 +0x10b
sync.(*Cond).Wait(0xc42111b480)
	/usr/local/go/src/sync/cond.go:56 +0x80
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420489180, 0x2c3c2e0, 0xc421582ed0)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:196 +0x7c
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x2c3c2e0, 0xc421582ed0)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:165 +0x3e
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421758010, 0xc4214bf4d0, 0xc421758000)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:199 +0xe9
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:192 +0xad

goroutine 48135 [semacquire]:
sync.runtime_notifyListWait(0xc42111b490, 0x2270)
	/usr/local/go/src/runtime/sema.go:510 +0x10b
sync.(*Cond).Wait(0xc42111b480)
	/usr/local/go/src/sync/cond.go:56 +0x80
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).worker(0xc420489180, 0x2c3c2e0, 0xc421583110)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:196 +0x7c
github.com/cockroachdb/cockroach/pkg/storage.(*raftScheduler).Start.func2(0x2c3c2e0, 0xc421583110)
	/go/src/github.com/cockroachdb/cockroach/pkg/storage/scheduler.go:165 +0x3e
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc421758490, 0xc4214bf4d0, 0xc421758480)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:199 +0xe9
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:192 +0xad

goroutine 48154 [chan receive, 7 minutes]:
github.com/cockroachdb/cockroach/pkg/util/netutil.ListenAndServeGRPC.func1(0x2c3c2e0, 0xc42082eb70)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/netutil/net.go:50 +0x5f
github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker.func1(0xc4217586d0, 0xc420add7a0, 0xc42141baa0)
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:199 +0xe9
created by github.com/cockroachdb/cockroach/pkg/util/stop.(*Stopper).RunWorker
	/go/src/github.com/cockroachdb/cockroach/pkg/util/stop/stopper.go:192 +0xad

goroutine 47804 [select]:
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*controlBuffer).get(0xc420ee8300, 0x1, 0x0, 0x0, 0x0, 0x0)
	/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/controlbuf.go:293 +0x133
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.(*loopyWriter).run(0xc4202aade0, 0xc421460270, 0xc420ee8300)
	/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/controlbuf.go:370 +0x166
github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.newHTTP2Client.func3(0xc421434900)
	/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/http2_client.go:298 +0x80
created by github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport.newHTTP2Client
	/go/src/github.com/cockroachdb/cockroach/vendor/google.golang.org/grpc/transport/http2_client.go:296 +0xc8f



TestStoreRangeMergeWatcher
--- FAIL: test/TestStoreRangeMergeWatcher (0.000s)
Test ended in panic.





Please assign, take a look and update the issue accordingly.

@cockroach-teamcity cockroach-teamcity added this to the 2.2 milestone Oct 9, 2018
@cockroach-teamcity cockroach-teamcity added the C-test-failure and O-robot labels Oct 9, 2018
benesch added a commit to benesch/cockroach that referenced this issue Oct 10, 2018
This test could deadlock if the LHS replica on store2 was shut down
before it processed the split at "b". Teach the test to wait for the LHS
replica on store2 to process the split before blocking Raft traffic to
it.

Fixes cockroachdb#31096.
Fixes cockroachdb#31149.
Fixes cockroachdb#31160.
Fixes cockroachdb#31167.

Release note: None
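
As an illustration only (not the actual patch), the wait described in the commit message could look roughly like the helper below: retry until the LHS replica on the target store reports a descriptor ending at the split key, which means it has processed the split. The LookupReplica signature and the way the test obtains the store are assumptions.

```go
// Sketch of the wait described above; not the actual patch. LookupReplica's
// signature and the store accessor are assumptions for illustration.
package storage_test

import (
	"bytes"
	"fmt"
	"testing"

	"github.com/cockroachdb/cockroach/pkg/roachpb"
	"github.com/cockroachdb/cockroach/pkg/storage"
	"github.com/cockroachdb/cockroach/pkg/testutils"
)

// waitForSplitApplied retries until the replica on `store` that contains
// lhsKey has an end key equal to splitKey, i.e. it has applied the split.
func waitForSplitApplied(t *testing.T, store *storage.Store, lhsKey, splitKey roachpb.RKey) {
	t.Helper()
	testutils.SucceedsSoon(t, func() error {
		repl := store.LookupReplica(lhsKey)
		if repl == nil {
			return fmt.Errorf("store %d has no replica containing %s yet", store.StoreID(), lhsKey)
		}
		if !bytes.Equal(repl.Desc().EndKey, splitKey) {
			return fmt.Errorf("replica on store %d has not processed the split at %s yet",
				store.StoreID(), splitKey)
		}
		return nil
	})
}
```

The test would then call something like waitForSplitApplied(t, store2, roachpb.RKey("a"), roachpb.RKey("b")) before it starts dropping Raft traffic to store2.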
@tbg tbg closed this as completed Oct 10, 2018
craig bot pushed a commit that referenced this issue Oct 10, 2018
31013: kv: try next replica on RangeNotFoundError r=nvanbenschoten,bdarnell a=tschottdorf

Previously, if a Batch RPC came back with a RangeNotFoundError, we would
immediately stop trying to send to more replicas, evict the range
descriptor, and start a new attempt after a back-off.

This new attempt could end up using the same replica, so if the
RangeNotFoundError persisted for some amount of time, the unsuccessful
retries against that replica would persist along with it, since DistSender
doesn't aggressively shuffle the replicas.

It turns out that there are such situations, and the election-after-restart
roachtest spuriously hit one of them:

1. new replica receives a preemptive snapshot and the ConfChange
2. cluster restarts
3. now the new replica is in this state until the range wakes up, which may
   not happen for some time
4. the first request to the range runs into the above problem

@nvanbenschoten: I think there is an issue to be filed about the tendency
of DistSender to get stuck in unfortunate configurations.

Fixes #30613.

Release note (bug fix): Avoid repeatedly trying a replica that was found to
be in the process of being added.

31187: roachtest: add synctest r=bdarnell a=tschottdorf

This new roachtest sets up charybdefs on a single (Ubuntu) node and runs
the `synctest` CLI command against a nemesis that injects random I/O
errors.

The synctest command is new. It simulates a Raft log and can be directed at a
filesystem that is being hit with random failures.

The workload essentially writes ascending keys (flushing each one to disk
synchronously) until an I/O error occurs, at which point it re-opens the
instance to verify that all persisted writes are still there. If the
RocksDB instance was permanently corrupted, it switches to a new, pristine
directory.

This is used in the roachtest, but it is also useful for manual runs in
user deployments where we suspect a failure to persist data to disk.

This hasn't found anything yet, but it's fun to watch and it also shows us
a number of errors that we know and love from Sentry.

Release note: None
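
A rough sketch of that write/sync/verify loop, using a plain file as a stand-in for the RocksDB instance and omitting the switch to a pristine directory on permanent corruption; none of these names come from the actual `synctest` implementation.

```go
// Plain-file stand-in for the loop described above; not the real synctest,
// which operates on a RocksDB instance.
package synctestsketch

import (
	"fmt"
	"os"
	"strings"
)

func runSyncLoop(path string) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	synced := 0 // highest key known to be durably written
	for i := 1; ; i++ {
		_, werr := fmt.Fprintf(f, "key-%08d\n", i)
		if werr == nil {
			werr = f.Sync() // flush this key to disk before counting it
		}
		if werr != nil {
			// An (injected) I/O error: re-open and verify that every key we
			// synced before the error is still there.
			f.Close()
			if err := verifySynced(path, synced); err != nil {
				return err // a durably written key went missing: the bug we hunt
			}
			if f, err = os.OpenFile(path, os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0644); err != nil {
				return err
			}
			i = synced // resume from the last key known to be durable
			continue
		}
		synced = i
	}
}

// verifySynced checks that keys 1..n all survived the error and the reopen.
func verifySynced(path string, n int) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for i := 1; i <= n; i++ {
		if !strings.Contains(string(data), fmt.Sprintf("key-%08d", i)) {
			return fmt.Errorf("synced key %d is missing after reopen", i)
		}
	}
	return nil
}
```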

31215: storage: deflake TestStoreRangeMergeWatcher r=tschottdorf a=benesch

This test could deadlock if the LHS replica on store2 was shut down
before it processed the split at "b". Teach the test to wait for the LHS
replica on store2 to process the split before blocking Raft traffic to
it.

Fixes #31096.
Fixes #31149.
Fixes #31160.
Fixes #31167.

Release note: None

31217: importccl: add explicit default to mysql testdata timestamp r=dt a=dt

This makes the testdata work on MySQL 8.0.2+, where the timestamp type no
longer has the implicit defaults.

Release note: none.

31221: cluster: Create final cluster version for 2.1 r=bdarnell a=bdarnell

Release note: None

Co-authored-by: Tobias Schottdorf <tobias.schottdorf@gmail.com>
Co-authored-by: Nikhil Benesch <nikhil.benesch@gmail.com>
Co-authored-by: David Taylor <tinystatemachine@gmail.com>
Co-authored-by: Ben Darnell <ben@bendarnell.com>
tbg pushed a commit to tbg/cockroach that referenced this issue Oct 11, 2018
This test could deadlock if the LHS replica on store2 was shut down
before it processed the split at "b". Teach the test to wait for the LHS
replica on store2 to process the split before blocking Raft traffic to
it.

Fixes cockroachdb#31096.
Fixes cockroachdb#31149.
Fixes cockroachdb#31160.
Fixes cockroachdb#31167.

Release note: None