
backupccl: TestDataDriven/multiregion fails under stress during 4th test cluster startup #93941

Closed
msbutler opened this issue Dec 19, 2022 · 2 comments

msbutler (Collaborator) commented Dec 19, 2022

To reproduce this on a GCE worker, run:

./dev test pkg/ccl/backupccl -f TestDataDriven/multiregion
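
Since the failure shows up under stress, a plain run may not hit it; a stress invocation (assuming the dev tool's --stress flag, which is not spelled out in this issue) would look like:

./dev test pkg/ccl/backupccl -f TestDataDriven/multiregion --stress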

Bisecting this failure is hard because the test has recently surfaced other bugs: #92911 and #93492 (comment).

The root cause is likely independent of bulk code and instead related to server startup: I confirmed that no restore is running at the time of failure, and the logs right before the failure seem to point to server startup code.
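
For reference, a rough way to confirm the absence of restore activity is to scan the captured test log for restore-related entries ($TEST_LOG below is a placeholder for the log file from the failed run, not a path taken from this issue):

# $TEST_LOG is a placeholder for the captured test log of the failed run
grep -iE 'restor(e|ing)' "$TEST_LOG" | tail -n 50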

The test log looks like this before everything shuts down:

I221219 19:17:48.285012 167725 1@gossip/gossip.go:1559  [n2] 2186  node has connected to cluster via gossip
I221219 19:18:47.907626 25991 kv/kvserver/replica_rangefeed.go:719  [n1,s1,r10/1:/Table/{6-7}] 2187  RangeFeed closed timestamp 1671477467.457059075,0 is behind by 1m0.450540702s
W221219 19:18:48.094144 128036 13@kv/kvserver/store_rebalancer.go:259  [n2,s2,store-rebalancer] 2191  StorePool missing descriptor for local store with ID 2, store list   candidate: avg-ranges=0 avg-leases=0 avg-disk-usage=0 B avg-queries-per-second=0 <no candidates>
W221219 19:18:48.246399 128847 kv/kvclient/rangefeed/rangefeed.go:322  [n2,rangefeed=system-config-cache] 2192  rangefeed failed 0 times, restarting: grpc: keepalive ping failed to receive ACK within timeout [code 14/Unavailable]
W221219 19:18:48.410770 128036 13@kv/kvserver/store_rebalancer.go:363  [n2,s2,store-rebalancer] 2221  no rebalance context given, bailing out of rebalancing store, will try again later
W221219 19:17:48.393245 121263 2@rpc/clock_offset.go:234  [n1,rnode=1,raddr=127.0.0.1:39601,class=default,rpc] 2185  latency jump (prev avg 17.20ms, current 196.65ms)
W221219 19:17:48.191030 21938 2@rpc/clock_offset.go:234  [n3,rnode=2,raddr=127.0.0.1:44283,class=default,rpc] 2188  latency jump (prev avg 13.86ms, current 170.67ms)
W221219 19:18:48.284080 170680 kv/kvserver/raft_transport.go:593  [n3] 2193  creating batch client for node 2 failed: grpc: connection error: desc = "transport: Error while dialing connection interrupted (did the remote node shut down or are there networking issues?)" [code 14/Unavailable]
W221219 19:18:48.284080 170680 kv/kvserver/raft_transport.go:593  [n3] 2193 +(1) grpc: connection error: desc = "transport: Error while dialing connection interrupted (did the remote node shut down or are there networking issues?)" [code 14/Unavailable]
W221219 19:18:48.284080 170680 kv/kvserver/raft_transport.go:593  [n3] 2193 +Error types: (1) *status.Error
W221219 19:18:48.378842 169313 kv/kvclient/kvcoord/dist_sender.go:1675  [n2,intExec=select-running/get-claimed-jobs] 2200  slow range RPC: have been waiting 60.17s (1 attempts) for RPC Scan [/Table/15/4/"\x01\x01\x80/\xbeO\xebI\x84BS\xa8,\xee\xaa\xf0\xce\xd3\xc2"/"reverting",/Table/15/4/"\x01\x01\x80/\xbeO\xebI\x84BS\xa8,\xee\xaa\xf0\xce\xd3\xc2"/"reverting"/PrefixEnd), Scan [/Table/15/4/"\x01\x01\x80/\xbeO\xebI\x84BS\xa8,\xee\xaa\xf0\xce\xd3\xc2"/"running",/Table/15/4/"\x01\x01\x80/\xbeO\xebI\x84BS\xa8,\xee\xaa\xf0\xce\xd3\xc2"/"running"/PrefixEnd), [txn: 15fda723], [can-forward-ts] to r17:/Table/1{5-6} [(n1,s1):1, next=2, gen=0]; resp: failed to send RPC: sending to all replicas failed; last error: rpc error: code = Unavailable desc = keepalive ping failed to receive ACK within timeout
W221219 19:18:48.379682 170276 kv/kvclient/kvcoord/dist_sender.go:1675  [n2,intExec=claim-jobs] 2201  slow range RPC: have been waiting 60.38s (1 attempts) for RPC Scan [/Table/15/4/NULL,/Table/15/4/!NULL), [txn: 89770b54], [can-forward-ts] to r17:/Table/1{5-6} [(n1,s1):1, next=2, gen=0]; resp: (err: <nil>), *roachpb.ScanResponse
W221219 19:18:48.218763 30763 13@kv/kvserver/store_rebalancer.go:259  [n3,s3,store-rebalancer] 2208  StorePool missing descriptor for local store with ID 3, store list   candidate: avg-ranges=0 avg-leases=0 avg-disk-usage=0 B avg-queries-per-second=0 <no candidates>
W221219 19:18:48.229895 25992 13@kv/kvserver/store_rebalancer.go:259  [n1,s1,store-rebalancer] 2215  StorePool missing descriptor for local store with ID 1, store list   candidate: avg-ranges=0 avg-leases=0 avg-disk-usage=0 B avg-queries-per-second=0 <no candidates>
W221219 19:18:48.230732 29081 13@kv/kvserver/store_rebalancer.go:259  [n2,s2,store-rebalancer] 2216  StorePool missing descriptor for local store with ID 2, store list   candidate: avg-ranges=0 avg-leases=0 avg-disk-usage=0 B avg-queries-per-second=0 <no candidates>
W221219 19:18:48.230779 120401 13@kv/kvserver/store_rebalancer.go:259  [n1,s1,store-rebalancer] 2217  StorePool missing descriptor for local store with ID 1, store list   candidate: avg-ranges=0 avg-leases=0 avg-disk-usage=0 B avg-queries-per-second=0 <no candidates>
W221219 19:18:48.229016 158723 13@kv/kvserver/store_rebalancer.go:259  [n1,s1,store-rebalancer] 2219  StorePool missing descriptor for local store with ID 1, store list   candidate: avg-ranges=0 avg-leases=0 avg-disk-usage=0 B avg-queries-per-second=0 <no candidates>
W221219 19:18:48.202366 2414 13@kv/kvserver/store_rebalancer.go:259  [n2,s2,store-rebalancer] 2218  StorePool missing descriptor for local store with ID 2, store list   candidate: avg-ranges=0 avg-leases=0 avg-disk-usage=0 B avg-queries-per-second=0 <no candidates>
W221219 19:17:48.330405 42359 2@rpc/clock_offset.go:234  [n2,rnode=3,raddr=127.0.0.1:36919,class=system,rpc] 2189  latency jump (prev avg 41.32ms, current 755.87ms)
W221219 19:18:48.248259 30017 kv/kvclient/rangefeed/rangefeed.go:322  [n3,rangefeed=lease] 2203  rangefeed failed 0 times, restarting: grpc: keepalive ping failed to receive ACK within timeout [code 14/Unavailable]
W221219 19:17:48.225732 60166 2@rpc/clock_offset.go:234  [n3,rnode=2,raddr=127.0.0.1:36247,class=default,rpc] 2190  latency jump (prev avg 34.72ms, current 139.12ms)
W221219 19:18:48.182040 32707 kv/kvserver/closedts/sidetransport/receiver.go:139  [n3] 2204  closed timestamps side-transport connection dropped from node: 1
E221219 19:18:48.285256 2986 2@rpc/context.go:2137  [n1,rnode=2,raddr=127.0.0.1:44283,class=default,rpc] 2194  closing connection after: grpc: keepalive ping failed to receive ACK within timeout [code 14/Unavailable]
W221219 19:18:48.182061 32344 kv/kvserver/closedts/sidetransport/receiver.go:139  [n2] 2205  closed timestamps side-transport connection dropped from node: 1
W221219 19:18:48.410988 31557 kv/kvclient/rangefeed/rangefeed.go:322  [n3,rangefeed=system-config-cache] 2222  rangefeed failed 0 times, restarting: grpc: keepalive ping failed to receive ACK within timeout [code 14/Unavailable]
W221219 19:18:48.233529 2645 kv/kvclient/rangefeed/rangefeed.go:322  [n2,rangefeed=table-stats-cache] 2223  rangefeed failed 0 times, restarting: grpc: keepalive ping failed to receive ACK within timeout [code 14/Unavailable]
W221219 19:18:48.233707 2646 kv/kvclient/rangefeed/rangefeed.go:322  [n2,rangefeed=lease] 2224  rangefeed failed 0 times, restarting: grpc: keepalive ping failed to receive ACK within timeout [code 14/Unavailable]
W221219 19:18:48.233837 2660 kv/kvclient/rangefeed/rangefeed.go:322  [n2,rangefeed=settings-watcher] 2225  rangefeed failed 0 times, restarting: grpc: keepalive ping failed to receive ACK within timeout [code 14/Unavailable]
W221219 19:18:48.237629 129099 kv/kvclient/rangefeed/rangefeed.go:322  [n2,rangefeed=table-stats-cache] 2226  rangefeed failed 0 times, restarting: grpc: keepalive ping failed to receive ACK within timeout [code 14/Unavailable]
W221219 19:18:47.907802 3321 13@kv/kvserver/store_rebalancer.go:259  [n3,s3,store-rebalancer] 2227  StorePool missing descriptor for local store with ID 3, store list   candidate: avg-ranges=0 avg-leases=0 avg-disk-usage=0 B avg-queries-per-second=0 <no candidates>
W221219 19:18:48.546140 3321 13@kv/kvserver/store_rebalancer.go:363  [n3,s3,store-rebalancer] 2228  no rebalance context given, bailing out of rebalancing store, will try again later
W221219 19:18:48.237915 129100 kv/kvclient/rangefeed/rangefeed.go:322  [n2,rangefeed=lease] 2229  rangefeed failed 0 times, restarting: grpc: keepalive ping failed to receive ACK within timeout [code 14/Unavailable]
W221219 19:18:48.238016 128641 kv/kvclient/rangefeed/rangefeed.go:322  [n2,rangefeed=spanconfig-subscriber] 2230  rangefeed failed 0 times, restarting: grpc: keepalive ping failed to receive ACK within timeout [code 14/Unavailable]
W221219 19:18:48.238435 29459 kv/kvclient/rangefeed/rangefeed.go:322  [n2,rangefeed=table-stats-cache] 2231  rangefeed failed 0 times, restarting: grpc: keepalive ping failed to receive ACK within timeout [code 14/Unavailable]
W221219 19:18:48.549057 3041 2@rpc/clock_offset.go:234  [n3,rnode=1,raddr=127.0.0.1:42909,class=default,rpc] 2232  latency jump (prev avg 36.51ms, current 831.98ms)
E221219 19:18:48.378728 170336 sql/flowinfra/flow_registry.go:336  [n2,intExec=get-vtable-privileges,f00f4af3d,distsql.stmt=SELECT path, username, privileges, grant_options FROM system.privileges WHERE path LIKE $1,distsql.gateway=2,distsql.appname=$ internal-get-vtable-privileges,distsql.txn=0a104fc1-6807-409b-9fdc-b0844fd36f69] 2233  flow id:00f4af3d-1b0d-4a1c-8c9d-fff8024a8a3e : 1 inbound streams timed out after 10s; propagated error throughout flow
W221219 19:18:48.238808 29432 kv/kvclient/rangefeed/rangefeed.go:322  [n2,rangefeed=spanconfig-subscriber] 2234  rangefeed failed 0 times, restarting: grpc: keepalive ping failed to receive ACK within timeout [code 14/Unavailable]
W221219 19:18:48.558530 169887 kv/kvclient/kvcoord/dist_sender.go:1675  [n2] 2235  slow range RPC: have been waiting 60.38s (1 attempts) for RPC EndTxn(parallel commit) [/Table/15/1/823810677177614338/1/1], [txn: 13bf2429] to r17:/Table/1{5-6} [(n1,s1):1, (n2,s2):2, (n3,s3):3, next=4, gen=4]; resp: result is ambiguous: error=grpc: keepalive ping failed to receive ACK within timeout [code 14/Unavailable] [exhausted]
W221219 19:18:48.543260 62671 kv/kvserver/closedts/sidetransport/receiver.go:139  [n3] 2236  closed timestamps side-transport connection dropped from node: 2
E221219 19:18:48.370624 2254 2@rpc/context.go:2137  [n2,rnode=1,raddr=127.0.0.1:42909,class=default,rpc] 2195  closing connection after: grpc: connection error: desc = "transport: Error while dialing connection interrupted (did the remote node shut down or are there networking issues?)" [code 14/Unavailable]
I221219 19:18:48.182844 158722 kv/kvserver/replica_rangefeed.go:719  [n1,s1,r51/1:/Table/5{0-1}] 2206  RangeFeed closed timestamp 1671477468.421572526,0 is behind by 59.76126835s
W221219 19:18:48.549971 3041 2@rpc/clock_offset.go:326  [n3] 2237  uncertain remote offset off=369.113725ms, err=415.990974ms, at=2022-12-19 19:18:48.441472866 +0000 UTC for maximum tolerated offset 400ms, treating as healthy
W221219 19:18:48.093745 42285 kv/kvserver/closedts/sidetransport/receiver.go:139  [n1] 2239  closed timestamps side-transport connection dropped from node: 2
W221219 19:18:48.248628 168113 kv/kvserver/liveness/liveness.go:886  [n2,liveness-hb] 2244  slow heartbeat took 59.706207403s; err=disk write failed while updating node liveness: interrupted during singleflight engine sync:0: context deadline exceeded
E221219 19:18:48.371638 128311 2@rpc/context.go:2137  [n2,rnode=1,raddr=127.0.0.1:39601,class=default,rpc] 2196  closing connection after: grpc: connection error: desc = "transport: Error while dialing connection interrupted (did the remote node shut down or are there networking issues?)" [code 14/Unavailable]
W221219 19:18:48.743647 170497 2@rpc/nodedialer/nodedialer.go:194  [n3] 2245  unable to connect to n2: failed to connect to n2 at 127.0.0.1:36247: grpc: connection error: desc = "transport: Error while dialing connection interrupted (did the remote node shut down or are there networking issues?)" [code 14/Unavailable]
W221219 19:18:48.573280 60405 kv/kvserver/closedts/sidetransport/receiver.go:139  [n2] 2240  closed timestamps side-transport connection dropped from node: 3
W221219 19:18:48.222083 26005 kv/kvserver/liveness/liveness.go:788  [n1,liveness-hb] 2220  failed node liveness heartbeat: operation "node liveness heartbeat" timed out after 0s (given timeout 3s): disk write failed while updating node liveness: interrupted during singleflight engine sync:0: context deadline exceeded
W221219 19:18:48.222083 26005 kv/kvserver/liveness/liveness.go:788  [n1,liveness-hb] 2220 +(1) operation "node liveness heartbeat" timed out after 0s (given timeout 3s)
W221219 19:18:48.222083 26005 kv/kvserver/liveness/liveness.go:788  [n1,liveness-hb] 2220 +Wraps: (2) attached stack trace
W221219 19:18:48.222083 26005 kv/kvserver/liveness/liveness.go:788  [n1,liveness-hb] 2220 +  -- stack trace:
W221219 19:18:48.222083 26005 kv/kvserver/liveness/liveness.go:788  [n1,liveness-hb] 2220 +  | github.com/cockroachdb/cockroach/pkg/kv/kvserver/liveness.(*NodeLiveness).updateLiveness
W221219 19:18:48.222083 26005 kv/kvserver/liveness/liveness.go:788  [n1,liveness-hb] 2220 +  |  github.com/cockroachdb/cockroach/pkg/kv/kvserver/liveness/liveness.go:1290
W221219 19:18:48.222083 26005 kv/kvserver/liveness/liveness.go:788  [n1,liveness-hb] 2220 +  | [...repeated from below...]
W221219 19:18:48.222083 26005 kv/kvserver/liveness/liveness.go:788  [n1,liveness-hb] 2220 +Wraps: (3) disk write failed while updating node liveness
W221219 19:18:48.222083 26005 kv/kvserver/liveness/liveness.go:788  [n1,liveness-hb] 2220 +Wraps: (4) attached stack trace

Jira issue: CRDB-22608

@msbutler msbutler added C-bug Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior. C-test-failure Broken test (automatically or manually discovered). T-disaster-recovery labels Dec 19, 2022
blathers-crl bot commented Dec 19, 2022

cc @cockroachdb/disaster-recovery

stevendanna (Collaborator) commented:

I stressed this for some time and was unable to reproduce a failure here.
