
teamcity: failed test: TestInitialPartitioning #41850

Closed
cockroach-teamcity opened this issue Oct 23, 2019 · 0 comments · Fixed by #41880
Labels
C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot.
@cockroach-teamcity
Member

The following tests appear to have failed on master (testrace): TestInitialPartitioning/oid_table, TestInitialPartitioning/collatedstring{da}_table, TestInitialPartitioning, TestInitialPartitioning/inet_table, TestInitialPartitioning/time_table, TestInitialPartitioning/string_table, TestInitialPartitioning/uuid_table, TestInitialPartitioning/varbit_table

You may want to check for open issues.

#1554973:

TestInitialPartitioning/varbit_table
...le collection
              [n1,client=127.0.0.1:32806,user=root] query cache hit
              [n1,client=127.0.0.1:32806,user=root] planning ends
              [n1,client=127.0.0.1:32806,user=root] checking distributability
              [n1,client=127.0.0.1:32806,user=root] will distribute plan: true
              [n1,client=127.0.0.1:32806,user=root] execution starts: distributed engine
               === SPAN START: consuming rows ===
              [n1,client=127.0.0.1:32806,user=root] creating DistSQL plan with isLocal=false
              [n1,client=127.0.0.1:32806,user=root] querying next range at /Table/91/1/B1101000110010100111011101011100001111101101011001100
              [n1,client=127.0.0.1:32806,user=root] running DistSQL plan
               === SPAN START: flow ===
              [n1,client=127.0.0.1:32806,user=root] starting (0 processors, 0 startables)
               === SPAN START: table reader ===
            cockroach.processorid: 0
            cockroach.stat.tablereader.bytes.read: 0 B
            cockroach.stat.tablereader.input.rows: 0
            cockroach.stat.tablereader.stalltime: 11.173ms
              [n1,client=127.0.0.1:32806,user=root] starting scan with limitBatches false
              [n1,client=127.0.0.1:32806,user=root] Scan /Table/91/1/B1101000110010100111011101011100001111101101011001100{-/#}
               === SPAN START: txn coordinator send ===
               === SPAN START: dist sender send ===
              [n1,client=127.0.0.1:32806,user=root,txn=25474556] querying next range at /Table/91/1/B1101000110010100111011101011100001111101101011001100
              [n1,client=127.0.0.1:32806,user=root,txn=25474556] r268: sending batch 1 Scan to (n1,s1):1
              [n1,client=127.0.0.1:32806,user=root,txn=25474556] sending request to local client
               === SPAN START: /cockroach.roachpb.Internal/Batch ===
              [n1] 1 Scan
              [n1,s1] executing 1 requests
              [n1,s1,r268/1:/{Table/89/2-Max}] read-only path
              [n1,s1,r268/1:/{Table/89/2-Max}] read has no clock uncertainty
              [n1,s1,r268/1:/{Table/89/2-Max}] acquire latches
              [n1,s1,r268/1:/{Table/89/2-Max}] waited 73.865µs to acquire latches
              [n1,s1,r268/1:/{Table/89/2-Max}] waiting for read lock
              [n1,s1,r268/1:/{Table/89/2-Max}] read completed
               === SPAN START: count rows ===
            cockroach.processorid: 1
            cockroach.stat.aggregator.input.rows: 0
            cockroach.stat.aggregator.mem.max: 0 B
            cockroach.stat.aggregator.stalltime: 370µs
              [n1,client=127.0.0.1:32806,user=root] execution ends
              [n1,client=127.0.0.1:32806,user=root] rows affected: 1
              [n1,client=127.0.0.1:32806,user=root] AutoCommit. err: <nil>
              [n1,client=127.0.0.1:32806,user=root] releasing 1 tables
               === SPAN START: exec stmt ===
              [n1,client=127.0.0.1:32806,user=root] [NoTxn pos:19980] executing ExecStmt: SET TRACING = off
              [n1,client=127.0.0.1:32806,user=root] executing: SET TRACING = off in state: NoTxn
            goroutine 1183345 [running]:
            runtime/debug.Stack(0xc003e135c0, 0x6722520, 0xc0059aa000)
            	/usr/local/go/src/runtime/debug/stack.go:24 +0xab
            github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x6816820, 0xc001783000, 0xc003e135c0)
            	/go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:37 +0x87
            github.com/cockroachdb/cockroach/pkg/ccl/partitionccl.TestInitialPartitioning.func1(0xc001783000)
            	/go/src/github.com/cockroachdb/cockroach/pkg/ccl/partitionccl/partition_test.go:1196 +0x23a
            testing.tRunner(0xc001783000, 0xc006096ff0)
            	/usr/local/go/src/testing/testing.go:865 +0x164
            created by testing.(*T).Run
            	/usr/local/go/src/testing/testing.go:916 +0x65b



TestInitialPartitioning
...ing ADD_REPLICA[(n2,s2):3]: after=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I191023 10:54:30.802207 3159 storage/replica_command.go:1586  [n1,replicate,s1,r11/1:/Table/1{5-6}] change replicas (add [(n3,s3):2LEARNER] remove []): existing descriptor r11:/Table/1{5-6} [(n1,s1):1, next=2, gen=0]
I191023 10:54:31.006542 3159 storage/replica_raft.go:291  [n1,s1,r11/1:/Table/1{5-6}] proposing ADD_REPLICA[(n3,s3):2LEARNER]: after=[(n1,s1):1 (n3,s3):2LEARNER] next=3
I191023 10:54:31.024825 3159 storage/store_snapshot.go:978  [n1,replicate,s1,r11/1:/Table/1{5-6}] sending LEARNER snapshot 925481a3 at applied index 16
I191023 10:54:31.025977 3159 storage/store_snapshot.go:1021  [n1,replicate,s1,r11/1:/Table/1{5-6}] streamed snapshot to (n3,s3):2LEARNER: kv pairs: 6, log entries: 0, rate-limit: 8.0 MiB/sec, 0.01s
I191023 10:54:31.047066 32543 storage/replica_raftstorage.go:794  [n3,s3,r11/2:{-}] applying LEARNER snapshot [id=925481a3 index=16]
I191023 10:54:31.053373 32543 storage/replica_raftstorage.go:815  [n3,s3,r11/2:/Table/1{5-6}] applied LEARNER snapshot [total=6ms ingestion=4@3ms id=925481a3 index=16]
I191023 10:54:31.089071 3159 storage/replica_command.go:1586  [n1,replicate,s1,r11/1:/Table/1{5-6}] change replicas (add [(n3,s3):2] remove []): existing descriptor r11:/Table/1{5-6} [(n1,s1):1, (n3,s3):2LEARNER, next=3, gen=1]
I191023 10:54:31.185731 3159 storage/replica_raft.go:291  [n1,s1,r11/1:/Table/1{5-6}] proposing ADD_REPLICA[(n3,s3):2]: after=[(n1,s1):1 (n3,s3):2] next=3
I191023 10:54:31.214232 3159 storage/replica_command.go:1586  [n1,replicate,s1,r11/1:/Table/1{5-6}] change replicas (add [(n2,s2):3LEARNER] remove []): existing descriptor r11:/Table/1{5-6} [(n1,s1):1, (n3,s3):2, next=3, gen=2]
I191023 10:54:31.401818 3159 storage/replica_raft.go:291  [n1,s1,r11/1:/Table/1{5-6}] proposing ADD_REPLICA[(n2,s2):3LEARNER]: after=[(n1,s1):1 (n3,s3):2 (n2,s2):3LEARNER] next=4
I191023 10:54:31.467359 33120 storage/raft_snapshot_queue.go:125  [n1,raftsnapshot,s1,r11/1:/Table/1{5-6}] skipping snapshot; replica is likely a learner in the process of being added: (n2,s2):3LEARNER
I191023 10:54:31.491861 3159 storage/store_snapshot.go:978  [n1,replicate,s1,r11/1:/Table/1{5-6}] sending LEARNER snapshot f39f9600 at applied index 23
I191023 10:54:31.493255 3159 storage/store_snapshot.go:1021  [n1,replicate,s1,r11/1:/Table/1{5-6}] streamed snapshot to (n2,s2):3LEARNER: kv pairs: 10, log entries: 0, rate-limit: 8.0 MiB/sec, 0.05s
I191023 10:54:31.511281 33106 storage/replica_raftstorage.go:794  [n2,s2,r11/3:{-}] applying LEARNER snapshot [id=f39f9600 index=23]
I191023 10:54:31.528958 33106 storage/replica_raftstorage.go:815  [n2,s2,r11/3:/Table/1{5-6}] applied LEARNER snapshot [total=17ms ingestion=4@14ms id=f39f9600 index=23]
I191023 10:54:31.563982 3159 storage/replica_command.go:1586  [n1,replicate,s1,r11/1:/Table/1{5-6}] change replicas (add [(n2,s2):3] remove []): existing descriptor r11:/Table/1{5-6} [(n1,s1):1, (n3,s3):2, (n2,s2):3LEARNER, next=4, gen=3]
I191023 10:54:31.799426 3159 storage/replica_raft.go:291  [n1,s1,r11/1:/Table/1{5-6}] proposing ADD_REPLICA[(n2,s2):3]: after=[(n1,s1):1 (n3,s3):2 (n2,s2):3] next=4
I191023 10:54:31.935778 3159 storage/queue.go:1134  [n1,replicate] purgatory is now empty
I191023 10:54:32.007283 2620 testutils/testcluster/testcluster.go:747  WaitForFullReplication took: 21.858414896s
I191023 10:54:32.271472 4868 sql/event_log.go:132  [n1,client=127.0.0.1:32806,user=root] Event: "create_database", target: 52, info: {DatabaseName:data Statement:CREATE DATABASE data User:root}
I191023 10:54:32.436801 4868 sql/event_log.go:132  [n1,client=127.0.0.1:32806,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.declined_reservation_timeout Value:00:00:00 User:root}
I191023 10:54:32.616817 4868 sql/event_log.go:132  [n1,client=127.0.0.1:32806,user=root] Event: "set_cluster_setting", target: 0, info: {SettingName:server.failed_reservation_timeout Value:00:00:00 User:root}



TestInitialPartitioning/string_table
...er plan
              [n1,client=127.0.0.1:32806,user=root] added table 'data.public.string_table' to table collection
              [n1,client=127.0.0.1:32806,user=root] query cache hit
              [n1,client=127.0.0.1:32806,user=root] planning ends
              [n1,client=127.0.0.1:32806,user=root] checking distributability
              [n1,client=127.0.0.1:32806,user=root] will distribute plan: true
              [n1,client=127.0.0.1:32806,user=root] execution starts: distributed engine
               === SPAN START: consuming rows ===
              [n1,client=127.0.0.1:32806,user=root] creating DistSQL plan with isLocal=false
              [n1,client=127.0.0.1:32806,user=root] querying next range at /Table/84/1/"r"
              [n1,client=127.0.0.1:32806,user=root] running DistSQL plan
               === SPAN START: flow ===
              [n1,client=127.0.0.1:32806,user=root] starting (0 processors, 0 startables)
               === SPAN START: table reader ===
            cockroach.processorid: 0
            cockroach.stat.tablereader.bytes.read: 0 B
            cockroach.stat.tablereader.input.rows: 0
            cockroach.stat.tablereader.stalltime: 27.641ms
              [n1,client=127.0.0.1:32806,user=root] starting scan with limitBatches false
              [n1,client=127.0.0.1:32806,user=root] Scan /Table/84/1/"r"{-/#}
               === SPAN START: txn coordinator send ===
               === SPAN START: dist sender send ===
              [n1,client=127.0.0.1:32806,user=root,txn=4d224d81] querying next range at /Table/84/1/"r"
              [n1,client=127.0.0.1:32806,user=root,txn=4d224d81] r241: sending batch 1 Scan to (n1,s1):1
              [n1,client=127.0.0.1:32806,user=root,txn=4d224d81] sending request to local client
               === SPAN START: /cockroach.roachpb.Internal/Batch ===
              [n1] 1 Scan
              [n1,s1] executing 1 requests
              [n1,s1,r241/1:/Table/84/1/"r"{-/Pref…}] read-only path
              [n1,s1,r241/1:/Table/84/1/"r"{-/Pref…}] read has no clock uncertainty
              [n1,s1,r241/1:/Table/84/1/"r"{-/Pref…}] acquire latches
              [n1,s1,r241/1:/Table/84/1/"r"{-/Pref…}] waited 85.077µs to acquire latches
              [n1,s1,r241/1:/Table/84/1/"r"{-/Pref…}] waiting for read lock
              [n1,s1,r241/1:/Table/84/1/"r"{-/Pref…}] read completed
               === SPAN START: count rows ===
            cockroach.processorid: 1
            cockroach.stat.aggregator.input.rows: 0
            cockroach.stat.aggregator.mem.max: 0 B
            cockroach.stat.aggregator.stalltime: 14.975ms
              [n1,client=127.0.0.1:32806,user=root] execution ends
              [n1,client=127.0.0.1:32806,user=root] rows affected: 1
              [n1,client=127.0.0.1:32806,user=root] AutoCommit. err: <nil>
              [n1,client=127.0.0.1:32806,user=root] releasing 1 tables
               === SPAN START: exec stmt ===
              [n1,client=127.0.0.1:32806,user=root] [NoTxn pos:15079] executing ExecStmt: SET TRACING = off
              [n1,client=127.0.0.1:32806,user=root] executing: SET TRACING = off in state: NoTxn
            goroutine 861217 [running]:
            runtime/debug.Stack(0xc005f59680, 0x6722520, 0xc004de8dc0)
            	/usr/local/go/src/runtime/debug/stack.go:24 +0xab
            github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x6816820, 0xc006600300, 0xc005f59680)
            	/go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:37 +0x87
            github.com/cockroachdb/cockroach/pkg/ccl/partitionccl.TestInitialPartitioning.func1(0xc006600300)
            	/go/src/github.com/cockroachdb/cockroach/pkg/ccl/partitionccl/partition_test.go:1196 +0x23a
            testing.tRunner(0xc006600300, 0xc00414afc0)
            	/usr/local/go/src/testing/testing.go:865 +0x164
            created by testing.(*T).Run
            	/usr/local/go/src/testing/testing.go:916 +0x65b



TestInitialPartitioning/time_table
...timizer plan
              [n1,client=127.0.0.1:32806,user=root] added table 'data.public.time_table' to table collection
              [n1,client=127.0.0.1:32806,user=root] query cache hit
              [n1,client=127.0.0.1:32806,user=root] planning ends
              [n1,client=127.0.0.1:32806,user=root] checking distributability
              [n1,client=127.0.0.1:32806,user=root] will distribute plan: true
              [n1,client=127.0.0.1:32806,user=root] execution starts: distributed engine
               === SPAN START: consuming rows ===
              [n1,client=127.0.0.1:32806,user=root] creating DistSQL plan with isLocal=false
              [n1,client=127.0.0.1:32806,user=root] querying next range at /Table/90/1/21122602240
              [n1,client=127.0.0.1:32806,user=root] running DistSQL plan
               === SPAN START: flow ===
              [n1,client=127.0.0.1:32806,user=root] starting (0 processors, 0 startables)
               === SPAN START: table reader ===
            cockroach.processorid: 0
            cockroach.stat.tablereader.bytes.read: 0 B
            cockroach.stat.tablereader.input.rows: 0
            cockroach.stat.tablereader.stalltime: 3.129ms
              [n1,client=127.0.0.1:32806,user=root] starting scan with limitBatches false
              [n1,client=127.0.0.1:32806,user=root] Scan /Table/90/1/21122602240{-/#}
               === SPAN START: txn coordinator send ===
               === SPAN START: dist sender send ===
              [n1,client=127.0.0.1:32806,user=root,txn=899e79fd] querying next range at /Table/90/1/21122602240
              [n1,client=127.0.0.1:32806,user=root,txn=899e79fd] r266: sending batch 1 Scan to (n1,s1):1
              [n1,client=127.0.0.1:32806,user=root,txn=899e79fd] sending request to local client
               === SPAN START: /cockroach.roachpb.Internal/Batch ===
              [n1] 1 Scan
              [n1,s1] executing 1 requests
              [n1,s1,r266/1:/{Table/89/1/"…-Max}] read-only path
              [n1,s1,r266/1:/{Table/89/1/"…-Max}] read has no clock uncertainty
              [n1,s1,r266/1:/{Table/89/1/"…-Max}] acquire latches
              [n1,s1,r266/1:/{Table/89/1/"…-Max}] waited 89.026µs to acquire latches
              [n1,s1,r266/1:/{Table/89/1/"…-Max}] waiting for read lock
              [n1,s1,r266/1:/{Table/89/1/"…-Max}] read completed
               === SPAN START: count rows ===
            cockroach.processorid: 1
            cockroach.stat.aggregator.input.rows: 0
            cockroach.stat.aggregator.mem.max: 0 B
            cockroach.stat.aggregator.stalltime: 415µs
              [n1,client=127.0.0.1:32806,user=root] execution ends
              [n1,client=127.0.0.1:32806,user=root] rows affected: 1
              [n1,client=127.0.0.1:32806,user=root] AutoCommit. err: <nil>
              [n1,client=127.0.0.1:32806,user=root] releasing 1 tables
               === SPAN START: exec stmt ===
              [n1,client=127.0.0.1:32806,user=root] [NoTxn pos:19255] executing ExecStmt: SET TRACING = off
              [n1,client=127.0.0.1:32806,user=root] executing: SET TRACING = off in state: NoTxn
            goroutine 1135592 [running]:
            runtime/debug.Stack(0xc004778060, 0x6722520, 0xc0065081e0)
            	/usr/local/go/src/runtime/debug/stack.go:24 +0xab
            github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x6816820, 0xc002866e00, 0xc004778060)
            	/go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:37 +0x87
            github.com/cockroachdb/cockroach/pkg/ccl/partitionccl.TestInitialPartitioning.func1(0xc002866e00)
            	/go/src/github.com/cockroachdb/cockroach/pkg/ccl/partitionccl/partition_test.go:1196 +0x23a
            testing.tRunner(0xc002866e00, 0xc0059d2f00)
            	/usr/local/go/src/testing/testing.go:865 +0x164
            created by testing.(*T).Run
            	/usr/local/go/src/testing/testing.go:916 +0x65b



TestInitialPartitioning/uuid_table
...'data.public.uuid_table'
              [n1,client=127.0.0.1:32806,user=root] query cache hit but needed update
              [n1,client=127.0.0.1:32806,user=root] planning ends
              [n1,client=127.0.0.1:32806,user=root] checking distributability
              [n1,client=127.0.0.1:32806,user=root] will distribute plan: true
              [n1,client=127.0.0.1:32806,user=root] execution starts: distributed engine
               === SPAN START: consuming rows ===
              [n1,client=127.0.0.1:32806,user=root] creating DistSQL plan with isLocal=false
              [n1,client=127.0.0.1:32806,user=root] querying next range at /Table/88/1/"\x8eӣ:\xb8\x81D~\x812\x1d\x11[D}\xbd"
              [n1,client=127.0.0.1:32806,user=root] running DistSQL plan
               === SPAN START: flow ===
              [n1,client=127.0.0.1:32806,user=root] starting (0 processors, 0 startables)
               === SPAN START: table reader ===
            cockroach.processorid: 0
            cockroach.stat.tablereader.bytes.read: 0 B
            cockroach.stat.tablereader.input.rows: 0
            cockroach.stat.tablereader.stalltime: 2.323ms
              [n1,client=127.0.0.1:32806,user=root] starting scan with limitBatches false
              [n1,client=127.0.0.1:32806,user=root] Scan /Table/88/1/"\x8eӣ:\xb8\x81D~\x812\x1d\x11[D}\xbd"{-/#}
               === SPAN START: txn coordinator send ===
               === SPAN START: dist sender send ===
              [n1,client=127.0.0.1:32806,user=root,txn=3c80f37c] querying next range at /Table/88/1/"\x8eӣ:\xb8\x81D~\x812\x1d\x11[D}\xbd"
              [n1,client=127.0.0.1:32806,user=root,txn=3c80f37c] r261: sending batch 1 Scan to (n1,s1):1
              [n1,client=127.0.0.1:32806,user=root,txn=3c80f37c] sending request to local client
               === SPAN START: /cockroach.roachpb.Internal/Batch ===
              [n1] 1 Scan
              [n1,s1] executing 1 requests
              [n1,s1,r261/1:/{Table/88/1/"…-Max}] read-only path
              [n1,s1,r261/1:/{Table/88/1/"…-Max}] read has no clock uncertainty
              [n1,s1,r261/1:/{Table/88/1/"…-Max}] acquire latches
              [n1,s1,r261/1:/{Table/88/1/"…-Max}] waited 88.71µs to acquire latches
              [n1,s1,r261/1:/{Table/88/1/"…-Max}] waiting for read lock
              [n1,s1,r261/1:/{Table/88/1/"…-Max}] read completed
               === SPAN START: count rows ===
            cockroach.processorid: 1
            cockroach.stat.aggregator.input.rows: 0
            cockroach.stat.aggregator.mem.max: 0 B
            cockroach.stat.aggregator.stalltime: 405µs
              [n1,client=127.0.0.1:32806,user=root] execution ends
              [n1,client=127.0.0.1:32806,user=root] rows affected: 1
              [n1,client=127.0.0.1:32806,user=root] AutoCommit. err: <nil>
              [n1,client=127.0.0.1:32806,user=root] releasing 1 tables
               === SPAN START: exec stmt ===
              [n1,client=127.0.0.1:32806,user=root] [NoTxn pos:17745] executing ExecStmt: SET TRACING = off
              [n1,client=127.0.0.1:32806,user=root] executing: SET TRACING = off in state: NoTxn
            goroutine 1031463 [running]:
            runtime/debug.Stack(0xc00573a360, 0x6722520, 0xc003d08100)
            	/usr/local/go/src/runtime/debug/stack.go:24 +0xab
            github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x6816820, 0xc0065ae700, 0xc00573a360)
            	/go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:37 +0x87
            github.com/cockroachdb/cockroach/pkg/ccl/partitionccl.TestInitialPartitioning.func1(0xc0065ae700)
            	/go/src/github.com/cockroachdb/cockroach/pkg/ccl/partitionccl/partition_test.go:1196 +0x23a
            testing.tRunner(0xc0065ae700, 0xc0048cc210)
            	/usr/local/go/src/testing/testing.go:865 +0x164
            created by testing.(*T).Run
            	/usr/local/go/src/testing/testing.go:916 +0x65b



TestInitialPartitioning/oid_table
...ient=127.0.0.1:32806,user=root] added table 'data.public.oid_table' to table collection
              [n1,client=127.0.0.1:32806,user=root] query cache hit
              [n1,client=127.0.0.1:32806,user=root] planning ends
              [n1,client=127.0.0.1:32806,user=root] checking distributability
              [n1,client=127.0.0.1:32806,user=root] will distribute plan: true
              [n1,client=127.0.0.1:32806,user=root] execution starts: distributed engine
               === SPAN START: consuming rows ===
              [n1,client=127.0.0.1:32806,user=root] creating DistSQL plan with isLocal=false
              [n1,client=127.0.0.1:32806,user=root] querying next range at /Table/87/1/1034856974
              [n1,client=127.0.0.1:32806,user=root] running DistSQL plan
               === SPAN START: flow ===
              [n1,client=127.0.0.1:32806,user=root] starting (0 processors, 0 startables)
               === SPAN START: table reader ===
            cockroach.processorid: 0
            cockroach.stat.tablereader.bytes.read: 0 B
            cockroach.stat.tablereader.input.rows: 0
            cockroach.stat.tablereader.stalltime: 20.428ms
              [n1,client=127.0.0.1:32806,user=root] starting scan with limitBatches false
              [n1,client=127.0.0.1:32806,user=root] Scan /Table/87/1/1034856974{-/#}
               === SPAN START: txn coordinator send ===
               === SPAN START: dist sender send ===
              [n1,client=127.0.0.1:32806,user=root,txn=76874057] querying next range at /Table/87/1/1034856974
              [n1,client=127.0.0.1:32806,user=root,txn=76874057] r256: sending batch 1 Scan to (n1,s1):1
              [n1,client=127.0.0.1:32806,user=root,txn=76874057] sending request to local client
               === SPAN START: /cockroach.roachpb.Internal/Batch ===
              [n1] 1 Scan
              [n1,s1] executing 1 requests
              [n1,s1,r256/1:/Table/87/1/103485697{4-5}] read-only path
              [n1,s1,r256/1:/Table/87/1/103485697{4-5}] read has no clock uncertainty
              [n1,s1,r256/1:/Table/87/1/103485697{4-5}] acquire latches
              [n1,s1,r256/1:/Table/87/1/103485697{4-5}] waited 97.566µs to acquire latches
              [n1,s1,r256/1:/Table/87/1/103485697{4-5}] waiting for read lock
              [n1,s1,r256/1:/Table/87/1/103485697{4-5}] read completed
               === SPAN START: count rows ===
            cockroach.processorid: 1
            cockroach.stat.aggregator.input.rows: 0
            cockroach.stat.aggregator.mem.max: 0 B
            cockroach.stat.aggregator.stalltime: 8.337ms
              [n1,client=127.0.0.1:32806,user=root] execution ends
              [n1,client=127.0.0.1:32806,user=root] rows affected: 1
              [n1,client=127.0.0.1:32806,user=root] AutoCommit. err: <nil>
              [n1,client=127.0.0.1:32806,user=root] releasing 1 tables
               === SPAN START: exec stmt ===
              [n1,client=127.0.0.1:32806,user=root] [NoTxn pos:17002] executing ExecStmt: SET TRACING = off
              [n1,client=127.0.0.1:32806,user=root] executing: SET TRACING = off in state: NoTxn
            goroutine 980382 [running]:
            runtime/debug.Stack(0xc005fdd290, 0x6722520, 0xc003a93aa0)
            	/usr/local/go/src/runtime/debug/stack.go:24 +0xab
            github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x6816820, 0xc0065af100, 0xc005fdd290)
            	/go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:37 +0x87
            github.com/cockroachdb/cockroach/pkg/ccl/partitionccl.TestInitialPartitioning.func1(0xc0065af100)
            	/go/src/github.com/cockroachdb/cockroach/pkg/ccl/partitionccl/partition_test.go:1196 +0x23a
            testing.tRunner(0xc0065af100, 0xc006a42a80)
            	/usr/local/go/src/testing/testing.go:865 +0x164
            created by testing.(*T).Run
            	/usr/local/go/src/testing/testing.go:916 +0x65b



TestInitialPartitioning/collatedstring{da}_table
...2806,user=root] planning ends
              [n1,client=127.0.0.1:32806,user=root] checking distributability
              [n1,client=127.0.0.1:32806,user=root] will distribute plan: true
              [n1,client=127.0.0.1:32806,user=root] execution starts: distributed engine
               === SPAN START: consuming rows ===
              [n1,client=127.0.0.1:32806,user=root] creating DistSQL plan with isLocal=false
              [n1,client=127.0.0.1:32806,user=root] querying next range at /Table/92/1/"\x89uy\x8d\x9e\x9b\x88\xd18\x92h>\x00\x00\x00 \x00 \x00 \x00 \x00\x00\x02\x02\x02\x02"
              [n1,client=127.0.0.1:32806,user=root] running DistSQL plan
               === SPAN START: flow ===
              [n1,client=127.0.0.1:32806,user=root] starting (0 processors, 0 startables)
               === SPAN START: table reader ===
            cockroach.processorid: 0
            cockroach.stat.tablereader.bytes.read: 0 B
            cockroach.stat.tablereader.input.rows: 0
            cockroach.stat.tablereader.stalltime: 40.699ms
              [n1,client=127.0.0.1:32806,user=root] starting scan with limitBatches false
              [n1,client=127.0.0.1:32806,user=root] Scan /Table/92/1/"\x89uy\x8d\x9e\x9b\x88\xd18\x92h>\x00\x00\x00 \x00 \x00 \x00 \x00\x00\x02\x02\x02\x02"{-/#}
               === SPAN START: txn coordinator send ===
               === SPAN START: dist sender send ===
              [n1,client=127.0.0.1:32806,user=root,txn=3f5a0bd9] querying next range at /Table/92/1/"\x89uy\x8d\x9e\x9b\x88\xd18\x92h>\x00\x00\x00 \x00 \x00 \x00 \x00\x00\x02\x02\x02\x02"
              [n1,client=127.0.0.1:32806,user=root,txn=3f5a0bd9] r271: sending batch 1 Scan to (n1,s1):1
              [n1,client=127.0.0.1:32806,user=root,txn=3f5a0bd9] sending request to local client
               === SPAN START: /cockroach.roachpb.Internal/Batch ===
              [n1] 1 Scan
              [n1,s1] executing 1 requests
              [n1,s1,r271/1:/{Table/90/1/2…-Max}] read-only path
              [n1,s1,r271/1:/{Table/90/1/2…-Max}] read has no clock uncertainty
              [n1,s1,r271/1:/{Table/90/1/2…-Max}] acquire latches
              [n1,s1,r271/1:/{Table/90/1/2…-Max}] waited 77.873µs to acquire latches
              [n1,s1,r271/1:/{Table/90/1/2…-Max}] waiting for read lock
              [n1,s1,r271/1:/{Table/90/1/2…-Max}] read completed
               === SPAN START: count rows ===
            cockroach.processorid: 1
            cockroach.stat.aggregator.input.rows: 0
            cockroach.stat.aggregator.mem.max: 0 B
            cockroach.stat.aggregator.stalltime: 508µs
              [n1,client=127.0.0.1:32806,user=root] execution ends
              [n1,client=127.0.0.1:32806,user=root] rows affected: 1
              [n1,client=127.0.0.1:32806,user=root] AutoCommit. err: <nil>
              [n1,client=127.0.0.1:32806,user=root] releasing 1 tables
               === SPAN START: exec stmt ===
              [n1,client=127.0.0.1:32806,user=root] [NoTxn pos:20717] executing ExecStmt: SET TRACING = off
              [n1,client=127.0.0.1:32806,user=root] executing: SET TRACING = off in state: NoTxn
            goroutine 1231716 [running]:
            runtime/debug.Stack(0xc0053aa120, 0x6722520, 0xc004a646c0)
            	/usr/local/go/src/runtime/debug/stack.go:24 +0xab
            github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x6816820, 0xc000133e00, 0xc0053aa120)
            	/go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:37 +0x87
            github.com/cockroachdb/cockroach/pkg/ccl/partitionccl.TestInitialPartitioning.func1(0xc000133e00)
            	/go/src/github.com/cockroachdb/cockroach/pkg/ccl/partitionccl/partition_test.go:1196 +0x23a
            testing.tRunner(0xc000133e00, 0xc003c93b00)
            	/usr/local/go/src/testing/testing.go:865 +0x164
            created by testing.(*T).Run
            	/usr/local/go/src/testing/testing.go:916 +0x65b



TestInitialPartitioning/inet_table
...public.inet_table' to table collection
              [n1,client=127.0.0.1:32806,user=root] query cache hit
              [n1,client=127.0.0.1:32806,user=root] planning ends
              [n1,client=127.0.0.1:32806,user=root] checking distributability
              [n1,client=127.0.0.1:32806,user=root] will distribute plan: true
              [n1,client=127.0.0.1:32806,user=root] execution starts: distributed engine
               === SPAN START: consuming rows ===
              [n1,client=127.0.0.1:32806,user=root] creating DistSQL plan with isLocal=false
              [n1,client=127.0.0.1:32806,user=root] querying next range at /Table/89/1/"\x014\v\xe8\xeec\xebKr\x97Q\td\x04\xef\xe8n\xdf"
              [n1,client=127.0.0.1:32806,user=root] running DistSQL plan
               === SPAN START: flow ===
              [n1,client=127.0.0.1:32806,user=root] starting (0 processors, 0 startables)
               === SPAN START: table reader ===
            cockroach.processorid: 0
            cockroach.stat.tablereader.bytes.read: 0 B
            cockroach.stat.tablereader.input.rows: 0
            cockroach.stat.tablereader.stalltime: 3.129ms
              [n1,client=127.0.0.1:32806,user=root] starting scan with limitBatches false
              [n1,client=127.0.0.1:32806,user=root] Scan /Table/89/1/"\x014\v\xe8\xeec\xebKr\x97Q\td\x04\xef\xe8n\xdf"{-/#}
               === SPAN START: txn coordinator send ===
               === SPAN START: dist sender send ===
              [n1,client=127.0.0.1:32806,user=root,txn=0c9da276] querying next range at /Table/89/1/"\x014\v\xe8\xeec\xebKr\x97Q\td\x04\xef\xe8n\xdf"
              [n1,client=127.0.0.1:32806,user=root,txn=0c9da276] r264: sending batch 1 Scan to (n1,s1):1
              [n1,client=127.0.0.1:32806,user=root,txn=0c9da276] sending request to local client
               === SPAN START: /cockroach.roachpb.Internal/Batch ===
              [n1] 1 Scan
              [n1,s1] executing 1 requests
              [n1,s1,r264/1:/{Table/89-Max}] read-only path
              [n1,s1,r264/1:/{Table/89-Max}] read has no clock uncertainty
              [n1,s1,r264/1:/{Table/89-Max}] acquire latches
              [n1,s1,r264/1:/{Table/89-Max}] waited 74.508µs to acquire latches
              [n1,s1,r264/1:/{Table/89-Max}] waiting for read lock
              [n1,s1,r264/1:/{Table/89-Max}] read completed
               === SPAN START: count rows ===
            cockroach.processorid: 1
            cockroach.stat.aggregator.input.rows: 0
            cockroach.stat.aggregator.mem.max: 0 B
            cockroach.stat.aggregator.stalltime: 460µs
              [n1,client=127.0.0.1:32806,user=root] execution ends
              [n1,client=127.0.0.1:32806,user=root] rows affected: 1
              [n1,client=127.0.0.1:32806,user=root] AutoCommit. err: <nil>
              [n1,client=127.0.0.1:32806,user=root] releasing 1 tables
               === SPAN START: exec stmt ===
              [n1,client=127.0.0.1:32806,user=root] [NoTxn pos:18506] executing ExecStmt: SET TRACING = off
              [n1,client=127.0.0.1:32806,user=root] executing: SET TRACING = off in state: NoTxn
            goroutine 1082350 [running]:
            runtime/debug.Stack(0xc0053bc8d0, 0x6722520, 0xc004d1b0c0)
            	/usr/local/go/src/runtime/debug/stack.go:24 +0xab
            github.com/cockroachdb/cockroach/pkg/testutils.SucceedsSoon(0x6816820, 0xc001b0fa00, 0xc0053bc8d0)
            	/go/src/github.com/cockroachdb/cockroach/pkg/testutils/soon.go:37 +0x87
            github.com/cockroachdb/cockroach/pkg/ccl/partitionccl.TestInitialPartitioning.func1(0xc001b0fa00)
            	/go/src/github.com/cockroachdb/cockroach/pkg/ccl/partitionccl/partition_test.go:1196 +0x23a
            testing.tRunner(0xc001b0fa00, 0xc005d4d290)
            	/usr/local/go/src/testing/testing.go:865 +0x164
            created by testing.(*T).Run
            	/usr/local/go/src/testing/testing.go:916 +0x65b




Please assign, take a look and update the issue accordingly.

@cockroach-teamcity cockroach-teamcity added C-test-failure Broken test (automatically or manually discovered). O-robot Originated from a bot. labels Oct 23, 2019
@cockroach-teamcity cockroach-teamcity added this to the 19.2 milestone Oct 23, 2019
andreimatei added a commit to andreimatei/cockroach that referenced this issue Oct 23, 2019
This reverts commit 0b0c71f, reversing
changes made to 9b36103.

Reverting "config,sqlbase: move computation of splits for SQL tables to
sqlbase". It seems to have caused TestInitialPartitioning and
TestRepartitioning to time out often when run under testrace on TC (only
when the build runs on the master branch and these tests run in
conjunction with all the other packages). I don't know what's wrong yet.

Fixes cockroachdb#41812
Fixes cockroachdb#41821
Fixes cockroachdb#41825
Fixes cockroachdb#41831
Fixes cockroachdb#41835
Fixes cockroachdb#41843
Fixes cockroachdb#41846
Fixes cockroachdb#41850
Fixes cockroachdb#41862
Fixes cockroachdb#41874
Fixes cockroachdb#41879

Fixes cockroachdb#41652
Fixes cockroachdb#41813
Fixes cockroachdb#41822
Fixes cockroachdb#41826
Fixes cockroachdb#41832
Fixes cockroachdb#41836
Fixes cockroachdb#41844
Fixes cockroachdb#41847
Fixes cockroachdb#41851
Fixes cockroachdb#41868
Fixes cockroachdb#41875

Release note: None