Manager repair gets many errors of: Sending repair_flush_hints_batchlog to node failed: std::runtime_error (timedout) #10004

Closed
yarongilor opened this issue Jan 31, 2022 · 5 comments

@yarongilor

Installation details
Kernel version: 5.11.0-1027-aws
Scylla version (or git commit hash): 5.0.dev-0.20220127.ba6c02b38 with build-id b93317e46cc252428454f96e8716b0948f28304c
Cluster size: 6 nodes (i3.4xlarge)
Scylla running with shards number (live nodes):
longevity-tls-50gb-3d-master-db-node-1bdb69d6-1 (54.75.41.17 | 10.0.2.245): 14 shards
longevity-tls-50gb-3d-master-db-node-1bdb69d6-4 (18.202.236.152 | 10.0.2.126): 14 shards
longevity-tls-50gb-3d-master-db-node-1bdb69d6-7 (34.255.208.176 | 10.0.2.16): 14 shards
longevity-tls-50gb-3d-master-db-node-1bdb69d6-16 (18.203.139.69 | 10.0.2.86): 14 shards
longevity-tls-50gb-3d-master-db-node-1bdb69d6-17 (54.75.56.198 | 10.0.2.236): 14 shards
longevity-tls-50gb-3d-master-db-node-1bdb69d6-18 (54.171.134.46 | 10.0.0.209): 14 shards
Scylla running with shards number (terminated nodes):
longevity-tls-50gb-3d-master-db-node-1bdb69d6-2 (34.252.164.228 | 10.0.0.69): 14 shards

List of nodes is larger than 10. See sct log for a full list of nodes.

OS (RHEL/CentOS/Ubuntu/AWS AMI): ami-098e8a18da4ea000f (aws: eu-west-1)

Test: longevity-50gb-3days
Test name: longevity_test.LongevityTest.test_custom_time
Test config file(s):

Issue description

====================================

  1. Run a Manager full-cluster repair on the nodes
< t:2022-01-30 08:17:52,797 f:cli.py          l:1002 c:sdcm.mgmt.cli        p:DEBUG > Issuing: 'sctool repair -c 7effadd4-2f51-4c53-a6e8-704be7b4ec76'
< t:2022-01-30 08:17:52,808 f:remote_base.py  l:520  c:RemoteCmdRunner      p:DEBUG > Running command "sudo sctool repair -c 7effadd4-2f51-4c53-a6e8-704be7b4ec76"...
< t:2022-01-30 08:17:54,934 f:base.py         l:142  c:RemoteCmdRunner      p:DEBUG > Command "sudo sctool repair -c 7effadd4-2f51-4c53-a6e8-704be7b4ec76" finished with status 0
  2. Node 1 got many errors of:
< t:2022-01-30 08:22:57,438 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-01-30T08:22:57+00:00 longevity-tls-50gb-3d-master-db-node-1bdb69d6-1 ! WARNING |  [shard 0] repair - repair[ffe00e55-c67a-45fb-94bc-0604016ec04d]: Sending repair_flush_hints_batchlog to node=10.0.2.126, participants=[10.0.2.126, 10.0.1.180, 10.0.2.245], failed: std::runtime_error (timedout)
< t:2022-01-30 08:22:57,450 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-01-30T08:22:57+00:00 longevity-tls-50gb-3d-master-db-node-1bdb69d6-1 ! WARNING |  [shard 0] repair - repair[695d3cb9-c371-4a05-8ae5-cf7860201c37]: Sending repair_flush_hints_batchlog to node=10.0.2.126, participants=[10.0.2.126, 10.0.1.180, 10.0.2.245], failed: std::runtime_error (timedout)

====================================

Restore Monitor Stack command: $ hydra investigate show-monitor 1bdb69d6-92e7-44f0-bfe2-715494307241
Restore monitor on AWS instance using Jenkins job
Show all stored logs command: $ hydra investigate show-logs 1bdb69d6-92e7-44f0-bfe2-715494307241

Test id: 1bdb69d6-92e7-44f0-bfe2-715494307241

Logs:
grafana - https://cloudius-jenkins-test.s3.amazonaws.com/1bdb69d6-92e7-44f0-bfe2-715494307241/20220130_154319/grafana-screenshot-overview-20220130_154322-longevity-tls-50gb-3d-master-monitor-node-1bdb69d6-1.png
db-cluster - https://cloudius-jenkins-test.s3.amazonaws.com/1bdb69d6-92e7-44f0-bfe2-715494307241/20220130_161519/db-cluster-1bdb69d6.tar.gz
monitor-set - https://cloudius-jenkins-test.s3.amazonaws.com/1bdb69d6-92e7-44f0-bfe2-715494307241/20220130_161519/monitor-set-1bdb69d6.tar.gz

Jenkins job URL

@roydahan roydahan added the triage/master Looking for assignee label Feb 1, 2022
@yarongilor yarongilor changed the title Manager repair gets many errors of: ending repair_flush_hints_batchlog to node failed: std::runtime_error (timedout) Manager repair gets many errors of: Sending repair_flush_hints_batchlog to node failed: std::runtime_error (timedout) Feb 1, 2022
@slivne
Contributor

slivne commented Feb 2, 2022

@asias I am unsure what this message is

@asias
Contributor

asias commented Feb 22, 2022

The verb is about flushing hints for repair-based tombstone GC. We need to skip such a flush if the feature is not enabled at all.
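
For illustration, here is a minimal C++ sketch of the skip described above. It is not the actual Scylla implementation; `table_info`, `tombstone_gc_mode`, and `flush_hints_and_batchlog()` are hypothetical stand-ins. The idea is that the repair coordinator only sends the hints/batchlog flush when at least one table being repaired has tombstone_gc mode set to repair, and otherwise skips the RPC entirely:

```cpp
// Illustrative sketch only -- not the actual Scylla code. table_info,
// tombstone_gc_mode and flush_hints_and_batchlog() are hypothetical
// names used to show the shape of the fix.
#include <algorithm>
#include <vector>

enum class tombstone_gc_mode { timeout, disabled, immediate, repair };

struct table_info {
    tombstone_gc_mode gc_mode;
};

// Issue the repair_flush_hints_batchlog verb only if at least one table
// under repair uses tombstone_gc mode=repair; otherwise skip it, so the
// repair cannot time out on a flush it does not need.
void maybe_flush_hints_batchlog(const std::vector<table_info>& tables) {
    bool needs_flush = std::any_of(tables.begin(), tables.end(),
        [](const table_info& t) { return t.gc_mode == tombstone_gc_mode::repair; });
    if (!needs_flush) {
        return; // feature not enabled for any of these tables -- nothing to flush
    }
    // flush_hints_and_batchlog(tables);  // hypothetical flush/RPC step
}
```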

asias added a commit to asias/scylla that referenced this issue Feb 23, 2022
…epair

The flush of hints and batchlog are needed only for the table with
tombstone_gc_mode set to repair mode. We should skip the flush if the
tombstone_gc_mode is not repair mode.

Fixes scylladb#10004
@asias
Contributor

asias commented Feb 23, 2022

PR is sent: #10124

denesb pushed a commit that referenced this issue Feb 24, 2022
…epair

The flush of hints and batchlog are needed only for the table with
tombstone_gc_mode set to repair mode. We should skip the flush if the
tombstone_gc_mode is not repair mode.

Fixes #10004

Closes #10124
avikivity pushed a commit that referenced this issue Feb 24, 2022
…epair

The flush of hints and batchlog are needed only for the table with
tombstone_gc_mode set to repair mode. We should skip the flush if the
tombstone_gc_mode is not repair mode.

Fixes #10004

Closes #10124
@aleksbykov
Contributor

The issue reproduced during the following job:
Installation details
Kernel version: 5.11.0-1028-aws
Scylla version (or git commit hash): 5.1.dev-0.20220225.680195564de7 with build-id ddb50a4cec4995f08be48bff703067d207209192
Cluster size: 6 nodes (i3.4xlarge)
Scylla running with shards number (live nodes):
longevity-cdc-100gb-4h-master-db-node-ca6babf1-1 (34.245.80.97 | 10.0.2.92): 14 shards
longevity-cdc-100gb-4h-master-db-node-ca6babf1-2 (54.155.127.165 | 10.0.2.253): 14 shards
longevity-cdc-100gb-4h-master-db-node-ca6babf1-3 (63.32.57.155 | 10.0.3.239): 14 shards
longevity-cdc-100gb-4h-master-db-node-ca6babf1-4 (3.250.75.32 | 10.0.3.38): 14 shards
longevity-cdc-100gb-4h-master-db-node-ca6babf1-7 (3.249.245.90 | 10.0.1.224): 14 shards
longevity-cdc-100gb-4h-master-db-node-ca6babf1-8 (34.243.3.14 | 10.0.1.152): 14 shards
Scylla running with shards number (terminated nodes):
longevity-cdc-100gb-4h-master-db-node-ca6babf1-6 (34.244.125.229 | 10.0.2.91): 14 shards
longevity-cdc-100gb-4h-master-db-node-ca6babf1-5 (63.33.208.50 | 10.0.1.52): 14 shards
OS (RHEL/CentOS/Ubuntu/AWS AMI): ami-068c5f920fa103552 (aws: eu-west-1)

Test: longevity-cdc-100gb-4h-test
Test name: longevity_test.LongevityTest.test_custom_time
Test config file(s):

  • longevity-cdc-100gb-4h.yaml

Issue description

====================================
During the RemovenodeAddNode nemesis, after the node was removed and repair was running, the log showed a lot of warnings/errors:

2022-02-27 08:29:21.169 <2022-02-27 08:29:10.000>: (DatabaseLogEvent Severity.WARNING) period_type=one-time event_id=5e130fce-c9a1-4830-bc5e-8c573f625bf0: type=WARNING regex=!\s*?WARNING  line_number=69045 node=longevity-cdc-100gb-4h-master-db-node-ca6babf1-7
2022-02-27T08:29:10+00:00 longevity-cdc-100gb-4h-master-db-node-ca6babf1-7 ! WARNING |  [shard 2] repair - repair[fa8b7709-a6ae-4282-916a-e819e34332cf]: Repair 1488 out of 1537 ranges, shard=2, keyspace=system_distributed_everywhere, table={cdc_generation_descriptions_v2}, range=(8571698795014278876, 8576489202800067148], peers={10.0.1.52, 10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, live_peers={10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, status=partial
2022-02-27 08:29:21.169 <2022-02-27 08:29:10.000>: (DatabaseLogEvent Severity.WARNING) period_type=one-time event_id=5e130fce-c9a1-4830-bc5e-8c573f625bf0: type=WARNING regex=!\s*?WARNING  line_number=69046 node=longevity-cdc-100gb-4h-master-db-node-ca6babf1-7
2022-02-27T08:29:10+00:00 longevity-cdc-100gb-4h-master-db-node-ca6babf1-7 ! WARNING |  [shard 3] repair - repair[fa8b7709-a6ae-4282-916a-e819e34332cf]: Repair 1493 out of 1537 ranges, shard=3, keyspace=system_distributed_everywhere, table={cdc_generation_descriptions_v2}, range=(8657715598509174200, 8658970322893933446], peers={10.0.1.52, 10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, live_peers={10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, status=partial
2022-02-27 08:29:21.171 <2022-02-27 08:29:10.000>: (DatabaseLogEvent Severity.WARNING) period_type=one-time event_id=5e130fce-c9a1-4830-bc5e-8c573f625bf0: type=WARNING regex=!\s*?WARNING  line_number=69049 node=longevity-cdc-100gb-4h-master-db-node-ca6babf1-7
2022-02-27T08:29:10+00:00 longevity-cdc-100gb-4h-master-db-node-ca6babf1-7 ! WARNING |  [shard 3] repair - repair[fa8b7709-a6ae-4282-916a-e819e34332cf]: Repair 1494 out of 1537 ranges, shard=3, keyspace=system_distributed_everywhere, table={cdc_generation_descriptions_v2}, range=(8658970322893933446, 8661986261792809247], peers={10.0.1.52, 10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, live_peers={10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, status=partial
2022-02-27 08:29:21.172 <2022-02-27 08:29:10.000>: (DatabaseLogEvent Severity.WARNING) period_type=one-time event_id=5e130fce-c9a1-4830-bc5e-8c573f625bf0: type=WARNING regex=!\s*?WARNING  line_number=69051 node=longevity-cdc-100gb-4h-master-db-node-ca6babf1-7
2022-02-27T08:29:10+00:00 longevity-cdc-100gb-4h-master-db-node-ca6babf1-7 ! WARNING |  [shard 2] repair - repair[fa8b7709-a6ae-4282-916a-e819e34332cf]: Repair 1489 out of 1537 ranges, shard=2, keyspace=system_distributed_everywhere, table={cdc_generation_descriptions_v2}, range=(8576489202800067148, 8594531591067090272], peers={10.0.1.52, 10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, live_peers={10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, status=partial
2022-02-27 08:29:21.173 <2022-02-27 08:29:10.000>: (DatabaseLogEvent Severity.WARNING) period_type=one-time event_id=5e130fce-c9a1-4830-bc5e-8c573f625bf0: type=WARNING regex=!\s*?WARNING  line_number=69053 node=longevity-cdc-100gb-4h-master-db-node-ca6babf1-7
2022-02-27T08:29:10+00:00 longevity-cdc-100gb-4h-master-db-node-ca6babf1-7 ! WARNING |  [shard 3] repair - repair[fa8b7709-a6ae-4282-916a-e819e34332cf]: Repair 1495 out of 1537 ranges, shard=3, keyspace=system_distributed_everywhere, table={cdc_generation_descriptions_v2}, range=(8661986261792809247, 8665415673986421115], peers={10.0.1.52, 10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, live_peers={10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, status=partial
2022-02-27 08:29:21.174 <2022-02-27 08:29:10.000>: (DatabaseLogEvent Severity.WARNING) period_type=one-time event_id=5e130fce-c9a1-4830-bc5e-8c573f625bf0: type=WARNING regex=!\s*?WARNING  line_number=69055 node=longevity-cdc-100gb-4h-master-db-node-ca6babf1-7
2022-02-27T08:29:10+00:00 longevity-cdc-100gb-4h-master-db-node-ca6babf1-7 ! WARNING |  [shard 3] repair - repair[fa8b7709-a6ae-4282-916a-e819e34332cf]: Repair 1496 out of 1537 ranges, shard=3, keyspace=system_distributed_everywhere, table={cdc_generation_descriptions_v2}, range=(8665415673986421115, 8673788338015596236], peers={10.0.1.52, 10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, live_peers={10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, status=partial
2022-02-27 08:29:21.175 <2022-02-27 08:29:10.000>: (DatabaseLogEvent Severity.WARNING) period_type=one-time event_id=5e130fce-c9a1-4830-bc5e-8c573f625bf0: type=WARNING regex=!\s*?WARNING  line_number=69057 node=longevity-cdc-100gb-4h-master-db-node-ca6babf1-7
2022-02-27T08:29:10+00:00 longevity-cdc-100gb-4h-master-db-node-ca6babf1-7 ! WARNING |  [shard 2] repair - repair[fa8b7709-a6ae-4282-916a-e819e34332cf]: Repair 1490 out of 1537 ranges, shard=2, keyspace=system_distributed_everywhere, table={cdc_generation_descriptions_v2}, range=(8594531591067090272, 8599775295396031849], peers={10.0.1.52, 10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, live_peers={10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, status=partial
2022-02-27 08:29:21.176 <2022-02-27 08:29:10.000>: (DatabaseLogEvent Severity.WARNING) period_type=one-time event_id=5e130fce-c9a1-4830-bc5e-8c573f625bf0: type=WARNING regex=!\s*?WARNING  line_number=69059 node=longevity-cdc-100gb-4h-master-db-node-ca6babf1-7
2022-02-27T08:29:10+00:00 longevity-cdc-100gb-4h-master-db-node-ca6babf1-7 ! WARNING |  [shard 2] repair - repair[fa8b7709-a6ae-4282-916a-e819e34332cf]: Repair 1491 out of 1537 ranges, shard=2, keyspace=system_distributed_everywhere, table={cdc_generation_descriptions_v2}, range=(8599775295396031849, 8634242110728663007], peers={10.0.1.52, 10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, live_peers={10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, status=partial
2022-02-27 08:29:21.177 <2022-02-27 08:29:10.000>: (DatabaseLogEvent Severity.WARNING) period_type=one-time event_id=5e130fce-c9a1-4830-bc5e-8c573f625bf0: type=WARNING regex=!\s*?WARNING  line_number=69061 node=longevity-cdc-100gb-4h-master-db-node-ca6babf1-7
2022-02-27T08:29:10+00:00 longevity-cdc-100gb-4h-master-db-node-ca6babf1-7 ! WARNING |  [shard 2] repair - repair[fa8b7709-a6ae-4282-916a-e819e34332cf]: Repair 1492 out of 1537 ranges, shard=2, keyspace=system_distributed_everywhere, table={cdc_generation_descriptions_v2}, range=(8634242110728663007, 8657715598509174200], peers={10.0.1.52, 10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, live_peers={10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, status=partial
2022-02-27 08:29:21.178 <2022-02-27 08:29:10.000>: (DatabaseLogEvent Severity.WARNING) period_type=one-time event_id=5e130fce-c9a1-4830-bc5e-8c573f625bf0: type=WARNING regex=!\s*?WARNING  line_number=69063 node=longevity-cdc-100gb-4h-master-db-node-ca6babf1-7
2022-02-27T08:29:10+00:00 longevity-cdc-100gb-4h-master-db-node-ca6babf1-7 ! WARNING |  [shard 2] repair - repair[fa8b7709-a6ae-4282-916a-e819e34332cf]: Repair 1493 out of 1537 ranges, shard=2, keyspace=system_distributed_everywhere, table={cdc_generation_descriptions_v2}, range=(8657715598509174200, 8658970322893933446], peers={10.0.1.52, 10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, live_peers={10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, status=partial
2022-02-27 08:29:21.178 <2022-02-27 08:29:10.000>: (DatabaseLogEvent Severity.WARNING) period_type=one-time event_id=5e130fce-c9a1-4830-bc5e-8c573f625bf0: type=WARNING regex=!\s*?WARNING  line_number=69065 node=longevity-cdc-100gb-4h-master-db-node-ca6babf1-7
2022-02-27T08:29:10+00:00 longevity-cdc-100gb-4h-master-db-node-ca6babf1-7 ! WARNING |  [shard 3] repair - repair[fa8b7709-a6ae-4282-916a-e819e34332cf]: Repair 1497 out of 1537 ranges, shard=3, keyspace=system_distributed_everywhere, table={cdc_generation_descriptions_v2}, range=(8673788338015596236, 8692957558425367032], peers={10.0.1.52, 10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, live_peers={10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, status=partial
2022-02-27 08:29:21.179 <2022-02-27 08:29:10.000>: (DatabaseLogEvent Severity.WARNING) period_type=one-time event_id=5e130fce-c9a1-4830-bc5e-8c573f625bf0: type=WARNING regex=!\s*?WARNING  line_number=69067 node=longevity-cdc-100gb-4h-master-db-node-ca6babf1-7
2022-02-27T08:29:10+00:00 longevity-cdc-100gb-4h-master-db-node-ca6babf1-7 ! WARNING |  [shard 3] repair - repair[fa8b7709-a6ae-4282-916a-e819e34332cf]: Repair 1498 out of 1537 ranges, shard=3, keyspace=system_distributed_everywhere, table={cdc_generation_descriptions_v2}, range=(8692957558425367032, 8698072487889644464], peers={10.0.1.52, 10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, live_peers={10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, status=partial
2022-02-27 08:29:21.180 <2022-02-27 08:29:10.000>: (DatabaseLogEvent Severity.WARNING) period_type=one-time event_id=5e130fce-c9a1-4830-bc5e-8c573f625bf0: type=WARNING regex=!\s*?WARNING  line_number=69069 node=longevity-cdc-100gb-4h-master-db-node-ca6babf1-7
2022-02-27T08:29:10+00:00 longevity-cdc-100gb-4h-master-db-node-ca6babf1-7 ! WARNING |  [shard 2] repair - repair[fa8b7709-a6ae-4282-916a-e819e34332cf]: Repair 1494 out of 1537 ranges, shard=2, keyspace=system_distributed_everywhere, table={cdc_generation_descriptions_v2}, range=(8658970322893933446, 8661986261792809247], peers={10.0.1.52, 10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, live_peers={10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, status=partial
2022-02-27 08:29:21.181 <2022-02-27 08:29:10.000>: (DatabaseLogEvent Severity.WARNING) period_type=one-time event_id=5e130fce-c9a1-4830-bc5e-8c573f625bf0: type=WARNING regex=!\s*?WARNING  line_number=69071 node=longevity-cdc-100gb-4h-master-db-node-ca6babf1-7
2022-02-27T08:29:10+00:00 longevity-cdc-100gb-4h-master-db-node-ca6babf1-7 ! WARNING |  [shard 3] repair - repair[fa8b7709-a6ae-4282-916a-e819e34332cf]: Repair 1499 out of 1537 ranges, shard=3, keyspace=system_distributed_everywhere, table={cdc_generation_descriptions_v2}, range=(8698072487889644464, 8710480835702853760], peers={10.0.1.52, 10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, live_peers={10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, status=partial
2022-02-27 08:29:21.181 <2022-02-27 08:29:10.000>: (DatabaseLogEvent Severity.WARNING) period_type=one-time event_id=5e130fce-c9a1-4830-bc5e-8c573f625bf0: type=WARNING regex=!\s*?WARNING  line_number=69073 node=longevity-cdc-100gb-4h-master-db-node-ca6babf1-7
2022-02-27T08:29:10+00:00 longevity-cdc-100gb-4h-master-db-node-ca6babf1-7 ! WARNING |  [shard 3] repair - repair[fa8b7709-a6ae-4282-916a-e819e34332cf]: Repair 1500 out of 1537 ranges, shard=3, keyspace=system_distributed_everywhere, table={cdc_generation_descriptions_v2}, range=(8710480835702853760, 8712972501021104153], peers={10.0.1.52, 10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, live_peers={10.0.3.38, 10.0.2.92, 10.0.3.239, 10.0.2.253}, status=partial

====================================

Restore Monitor Stack command: $ hydra investigate show-monitor ca6babf1-15d9-47e8-82fd-b673684cbcb4
Restore monitor on AWS instance using Jenkins job
Show all stored logs command: $ hydra investigate show-logs ca6babf1-15d9-47e8-82fd-b673684cbcb4

Test id: ca6babf1-15d9-47e8-82fd-b673684cbcb4

Logs:
grafana - https://cloudius-jenkins-test.s3.amazonaws.com/ca6babf1-15d9-47e8-82fd-b673684cbcb4/20220227_084251/grafana-screenshot-longevity-cdc-100gb-4h-test-scylla-per-server-metrics-nemesis-20220227_084522-longevity-cdc-100gb-4h-master-monitor-node-ca6babf1-1.png
grafana - https://cloudius-jenkins-test.s3.amazonaws.com/ca6babf1-15d9-47e8-82fd-b673684cbcb4/20220227_084251/grafana-screenshot-overview-20220227_084251-longevity-cdc-100gb-4h-master-monitor-node-ca6babf1-1.png
db-cluster - https://cloudius-jenkins-test.s3.amazonaws.com/ca6babf1-15d9-47e8-82fd-b673684cbcb4/20220227_090222/db-cluster-ca6babf1.tar.gz
loader-set - https://cloudius-jenkins-test.s3.amazonaws.com/ca6babf1-15d9-47e8-82fd-b673684cbcb4/20220227_090222/loader-set-ca6babf1.tar.gz
monitor-set - https://cloudius-jenkins-test.s3.amazonaws.com/ca6babf1-15d9-47e8-82fd-b673684cbcb4/20220227_090222/monitor-set-ca6babf1.tar.gz
sct - https://cloudius-jenkins-test.s3.amazonaws.com/ca6babf1-15d9-47e8-82fd-b673684cbcb4/20220227_090222/sct-runner-ca6babf1.tar.gz

Jenkins job URL

avikivity pushed a commit that referenced this issue Jul 4, 2022
…epair

The flush of hints and batchlog are needed only for the table with
tombstone_gc_mode set to repair mode. We should skip the flush if the
tombstone_gc_mode is not repair mode.

Fixes #10004

Closes #10124

(cherry picked from commit ec59f7a)
@avikivity
Member

Backported to 5.0. Earlier branches did not have the bug.
