Having an index being dropped during the bootstrap causes a node to fail to start (Startup failed: std::runtime_error ({shard 2: std::runtime_error (repair[433ec093-991c-46c1-939f-9a9585a706e9]: 2 out of 2331 ranges failed ) #15598

Closed
k0machi opened this issue Oct 1, 2023 · 40 comments · Fixed by #17231
Labels: area/materialized views · Backport candidate · P1 Urgent · symptom/ci stability (Issues that failed in ScyllaDB CI - tests and framework) · triage/master (Looking for assignee)

k0machi (Contributor) commented Oct 1, 2023

Issue description

  • This issue is a regression.
  • It is unknown if this issue is a regression.

A decommission of node-1 starts; shortly afterwards another nemesis running in parallel begins creating an index:

< t:2023-09-23 05:15:57,011 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:15:56+00:00 longevity-parallel-topology-schema--db-node-136349f0-1     !INFO | scylla[5373]:  [shard 0:main] init - Shutting down task_manager
< t:2023-09-23 05:15:57,011 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:15:56+00:00 longevity-parallel-topology-schema--db-node-136349f0-1     !INFO | scylla[5373]:  [shard 0:main] init - Shutting down task_manager was successful
< t:2023-09-23 05:15:57,011 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:15:56+00:00 longevity-parallel-topology-schema--db-node-136349f0-1     !INFO | scylla[5373]:  [shard 0:main] init - Shutting down service_memory_limiter
< t:2023-09-23 05:15:57,011 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:15:56+00:00 longevity-parallel-topology-schema--db-node-136349f0-1     !INFO | scylla[5373]:  [shard 0:main] init - Shutting down service_memory_limiter was successful
< t:2023-09-23 05:15:57,011 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:15:56+00:00 longevity-parallel-topology-schema--db-node-136349f0-1     !INFO | scylla[5373]:  [shard 0:main] init - Shutting down sst_dir_semaphore
< t:2023-09-23 05:15:57,012 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:15:56+00:00 longevity-parallel-topology-schema--db-node-136349f0-1     !INFO | scylla[5373]:  [shard 0:main] init - Shutting down sst_dir_semaphore was successful
< t:2023-09-23 05:15:57,012 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:15:56+00:00 longevity-parallel-topology-schema--db-node-136349f0-1     !INFO | scylla[5373]:  [shard 0:main] init - Shutting down migration manager notifier
< t:2023-09-23 05:15:57,012 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:15:56+00:00 longevity-parallel-topology-schema--db-node-136349f0-1     !INFO | scylla[5373]:  [shard 0:main] init - Shutting down migration manager notifier was successful
< t:2023-09-23 05:15:57,012 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:15:56+00:00 longevity-parallel-topology-schema--db-node-136349f0-1     !INFO | scylla[5373]:  [shard 0:main] init - Shutting down prometheus API server
< t:2023-09-23 05:15:57,013 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:15:56+00:00 longevity-parallel-topology-schema--db-node-136349f0-1     !INFO | scylla[5373]:  [shard 0:main] init - Shutting down prometheus API server was successful
< t:2023-09-23 05:15:57,013 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:15:56+00:00 longevity-parallel-topology-schema--db-node-136349f0-1     !INFO | scylla[5373]:  [shard 0:main] init - Shutting down sighup
< t:2023-09-23 05:15:57,013 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:15:56+00:00 longevity-parallel-topology-schema--db-node-136349f0-1     !INFO | scylla[5373]:  [shard 0:main] init - Shutting down sighup was successful
< t:2023-09-23 05:15:57,013 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:15:56+00:00 longevity-parallel-topology-schema--db-node-136349f0-1     !INFO | scylla[5373]:  [shard 0:main] init - Shutting down configurables
< t:2023-09-23 05:15:57,014 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:15:56+00:00 longevity-parallel-topology-schema--db-node-136349f0-1     !INFO | scylla[5373]:  [shard 0:main] init - Shutting down configurables was successful
< t:2023-09-23 05:15:57,014 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:15:56+00:00 longevity-parallel-topology-schema--db-node-136349f0-1     !INFO | scylla[5373]:  [shard 0:main] init - Scylla version 5.4.0~dev-0.20230921.a56a4b6226e6 shutdown complete.

< t:2023-09-23 05:17:11,316 f:common.py       l:1766 c:utils                p:DEBUG > Executing CQL 'SELECT column_name, type FROM system_schema.columns WHERE keyspace_name = 'scylla_bench' AND table_name = 'test' AND  kind in ('static', 'regular') ALLOW FILTERING' ...
< t:2023-09-23 05:17:11,316 f:common.py       l:1774 c:utils                p:DEBUG > Executing CQL 'SELECT column_name, type FROM system_schema.columns WHERE keyspace_name = 'scylla_bench' AND table_name = 'test' AND  kind in ('static', 'regular') ALLOW FILTERING' ...
< t:2023-09-23 05:17:11,319 f:common.py       l:1766 c:utils                p:DEBUG > Executing CQL 'CREATE INDEX test_v_nemesis ON scylla_bench.test("v")' ...
< t:2023-09-23 05:17:11,319 f:common.py       l:1774 c:utils                p:DEBUG > Executing CQL 'CREATE INDEX test_v_nemesis ON scylla_bench.test("v")' ...

At this point we see the standard messages about the column definitions being updated:

< t:2023-09-23 05:17:11,641 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-6     !INFO | scylla[5414]:  [shard 0:stre] schema_tables - Altering scylla_bench.test id=e6038830-59c3-11ee-843f-80facaa3fba2 version=730b15c0-59d0-11ee-9d7a-d646c7a3f354
< t:2023-09-23 05:17:11,642 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-6     !INFO | scylla[5414]:  [shard 0:stre] schema_tables - Creating scylla_bench.test_v_nemesis_index id=730b15c1-59d0-11ee-9d7a-d646c7a3f354 version=730b15c2-59d0-11ee-9d7a-d646c7a3f354
< t:2023-09-23 05:17:11,642 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-8     !INFO | scylla[5447]:  [shard 0:stre] schema_tables - Altering scylla_bench.test id=e6038830-59c3-11ee-843f-80facaa3fba2 version=730b15c0-59d0-11ee-9d7a-d646c7a3f354
< t:2023-09-23 05:17:11,643 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-6     !INFO | scylla[5414]:  [shard 0:stre] query_processor - Column definitions for scylla_bench.test changed, invalidating related prepared statements
< t:2023-09-23 05:17:11,643 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-8     !INFO | scylla[5447]:  [shard 0:stre] schema_tables - Creating scylla_bench.test_v_nemesis_index id=730b15c1-59d0-11ee-9d7a-d646c7a3f354 version=730b15c2-59d0-11ee-9d7a-d646c7a3f354
< t:2023-09-23 05:17:11,644 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-6     !INFO | scylla[5414]:  [shard 0:stre] view - Building view scylla_bench.test_v_nemesis_index, starting at token minimum token
< t:2023-09-23 05:17:11,645 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-8     !INFO | scylla[5447]:  [shard 0:stre] query_processor - Column definitions for scylla_bench.test changed, invalidating related prepared statements
< t:2023-09-23 05:17:11,645 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-6     !INFO | scylla[5414]:  [shard 3:stre] query_processor - Column definitions for scylla_bench.test changed, invalidating related prepared statements
< t:2023-09-23 05:17:11,645 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-8     !INFO | scylla[5447]:  [shard 1:stre] query_processor - Column definitions for scylla_bench.test changed, invalidating related prepared statements
< t:2023-09-23 05:17:11,646 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-6     !INFO | scylla[5414]:  [shard 4:stre] query_processor - Column definitions for scylla_bench.test changed, invalidating related prepared statements
< t:2023-09-23 05:17:11,646 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-8     !INFO | scylla[5447]:  [shard 4:stre] query_processor - Column definitions for scylla_bench.test changed, invalidating related prepared statements
< t:2023-09-23 05:17:11,647 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-6     !INFO | scylla[5414]:  [shard 6:stre] query_processor - Column definitions for scylla_bench.test changed, invalidating related prepared statements
< t:2023-09-23 05:17:11,647 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-8     !INFO | scylla[5447]:  [shard 3:stre] query_processor - Column definitions for scylla_bench.test changed, invalidating related prepared statements
< t:2023-09-23 05:17:11,647 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-6     !INFO | scylla[5414]:  [shard 1:stre] query_processor - Column definitions for scylla_bench.test changed, invalidating related prepared statements
< t:2023-09-23 05:17:11,648 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-8     !INFO | scylla[5447]:  [shard 2:stre] query_processor - Column definitions for scylla_bench.test changed, invalidating related prepared statements
< t:2023-09-23 05:17:11,648 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-6     !INFO | scylla[5414]:  [shard 5:stre] query_processor - Column definitions for scylla_bench.test changed, invalidating related prepared statements
< t:2023-09-23 05:17:11,649 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-8     !INFO | scylla[5447]:  [shard 5:stre] query_processor - Column definitions for scylla_bench.test changed, invalidating related prepared statements
< t:2023-09-23 05:17:11,649 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-6     !INFO | scylla[5414]:  [shard 2:stre] query_processor - Column definitions for scylla_bench.test changed, invalidating related prepared statements
< t:2023-09-23 05:17:11,650 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-8     !INFO | scylla[5447]:  [shard 6:stre] query_processor - Column definitions for scylla_bench.test changed, invalidating related prepared statements
< t:2023-09-23 05:17:11,650 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-6     !INFO | scylla[5414]:  [shard 0:stre] schema_tables - Schema version changed to 730b29a2-59d0-11ee-73b6-12c9dcde6dc3
< t:2023-09-23 05:17:11,651 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-8     !INFO | scylla[5447]:  [shard 0:stre] schema_tables - Schema version changed to 730b29a2-59d0-11ee-73b6-12c9dcde6dc3
< t:2023-09-23 05:17:11,658 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-6      !ERR | scylla[5414]:  [shard 6:stre] view - Error applying view update to 10.4.11.95 (view: scylla_bench.test_v_nemesis_index, base token: -9138123822764233487, view token: -7634433360284248986): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
< t:2023-09-23 05:17:11,658 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-8      !ERR | scylla[5447]:  [shard 2:stre] view - Error applying view update to 10.4.11.95 (view: scylla_bench.test_v_nemesis_index, base token: -8438012855053435886, view token: 5502703116925908089): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
< t:2023-09-23 05:17:11,661 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-8  !WARNING | scylla[5447]:  [shard 2:stre] view - Error executing build step for base scylla_bench.test: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
< t:2023-09-23 05:17:11,664 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-6  !WARNING | scylla[5414]:  [shard 6:stre] view - Error executing build step for base scylla_bench.test: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
< t:2023-09-23 05:17:11,668 f:db_log_reader.py l:114  c:sdcm.db_log_reader   p:DEBUG > 2023-09-23T05:17:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-8      !ERR | scylla[5447]:  [shard 4:stre] view - Error applying view update to 10.4.11.95 (view: scylla_bench.test_v_nemesis_index, base token: -6401198363050260881, view token: -7380691319846549773): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)

Then, during the bootstrap, we get the following error from repair once the index is dropped:

< t:2023-09-23 06:13:08,647 f:common.py       l:1766 c:utils                p:DEBUG > Executing CQL 'DROP INDEX scylla_bench.test_v_nemesis' ...
< t:2023-09-23 06:13:08,647 f:common.py       l:1774 c:utils                p:DEBUG > Executing CQL 'DROP INDEX scylla_bench.test_v_nemesis' ...
2023-09-23 06:13:52.171 <2023-09-23 06:13:51.000>: (DatabaseLogEvent Severity.ERROR) period_type=one-time event_id=d14aa12a-8d41-46c7-b645-aed38c8a0954 during_nemesis=RemoveNodeThenAddNode,CreateIndex: type=RUNTIME_ERROR regex=std::runtime_error line_number=34535 node=longevity-parallel-topology-schema--db-node-136349f0-9
2023-09-23T06:13:51+00:00 longevity-parallel-topology-schema--db-node-136349f0-9      !ERR | scylla[5368]:  [shard 0:main] init - Startup failed: std::runtime_error ({shard 2: std::runtime_error (repair[433ec093-991c-46c1-939f-9a9585a706e9]: 2 out of 2331 ranges failed, keyspace=scylla_bench, tables={test, test_v_nemesis_index, test_counters}, repair_reason=bootstrap, nodes_down_during_repair={}, aborted_by_user=false, failed_because=std::runtime_error (Failed to repair for keyspace=scylla_bench, cf=test_v_nemesis_index, range=(6177157017308496421,6242123301949156734]))})

Before that we get a large number of storage_proxy warnings about failed mutation updates, 11,618 lines to be exact:

2023-09-23T06:12:34+00:00 longevity-parallel-topology-schema--db-node-136349f0-9     !INFO | scylla[5368]:  [shard 3:stre] compaction - [Reshape scylla_bench.test_v_nemesis_index 2f869060-59d8-11ee-920b-1d882808ffa9] Reshaping [/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gw5_4btkg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gw5_2unzk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gw2_4p3yo27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gw5_3edv427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gwb_1n5mo27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gwo_08kn427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gwe_01aao27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gwg_0lfls27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gwl_2u0u827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gwg_27y3427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gwp_0vpz427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gwo_54jio27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gwx_2d39s27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gwx_5e6qo27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gwz_4u95c27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gx2_4dyq827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gx3_23g1s27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gxm_58tu827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gxc_0jag027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gxf_2hlb427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gxh_1lfwg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g
9m_0gxd_2mqhs27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gxe_45lsw27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gxo_5sz5c27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gxs_2my7k27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gxs_2ndn427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gxt_0z5fk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gxv_45lsw27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gy3_2f0ps27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gy5_5k6sg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gy5_5h6rk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gyl_1ovcw27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gy7_44qxs27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gyb_2m3cg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gyg_1l86o27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gyg_4aqzk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gyv_3q68w27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gyr_2tdow27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gyn_5i1mo27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gyz_1tsts27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gyz_2f8fk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gzb_4htm827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gze_33gcg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gze_0di4027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gzd_2lnww
27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gze_47yog27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gzf_2qt3k27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gzm_27ink27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gzn_1t5og27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gzm_4ke7k27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gzn_1i85s27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gzk_3uw0027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gzk_5pjow27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gzs_01i0g27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gzs_5owjk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gzx_2qt3k27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gzx_3lo7k27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0gzy_0xfpc27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h11_47qyo27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h09_45e3427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h0c_2kt1s27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h0j_0857k27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h0o_48tjk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h0o_0h5a827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h0k_1s33k27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h0k_2l8hc27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h0l_3d3kg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h0m_2v3f427yayogpcenqh
-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h0r_1c0e827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h0s_30vr427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h0n_5abuo27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h0p_43ocw27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h0u_2sqjk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h0u_5az0027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h10_3562o27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h0z_556o027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h10_0l7w027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h13_4bluo27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h17_100ao27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h1a_3ybgg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h1c_1dxu827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h1f_5grc027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h1m_0c03k27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h1o_50wcg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h1l_3hlls27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h1w_5k6sg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h1l_0rv3427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h1o_16nhs27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h21_438xc27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h26_0556o27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h1z_3co4w27yayogpcenqh-big-Data.db:
level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h21_0eseo27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h22_5az0027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h2a_0c7tc27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h29_06fhc27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h29_5uh5s27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h2c_1y35c27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h2c_2f8fk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h2g_3dj0027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h2i_398og27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h2i_1gxv427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h2f_4l1cw27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h2m_2bsz427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h2l_4lgsg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h33_5142827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h2r_4b6f427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h2v_3dqps27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h2s_3ttf427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h2x_5ara827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h3e_5h6rk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h32_2rvog27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h32_067rk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h37_2m3cg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h39_0spy827yayogpcenqh-big-Data.db:level=0:origi
n=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h3d_1f84w27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h3d_29vj427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h3h_1zsvk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h3k_0jpvk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h3w_4xwbk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h3o_3myi827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h3s_24imo27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h3z_0kd0w27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h41_037qo27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h43_07ps027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h4a_2tt4g27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h4a_2f0ps27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h4g_5m48g27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h4l_3vykw27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h4a_4he6o27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h4d_3dba827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h4k_0yia827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h4k_0rfnk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h4i_1f0f427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h4p_3tdzk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h4s_3xw0w27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h56_4htm827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h52_38lj427yayogpcenqh-big-Data.db:level=0:origin=repair,/var
/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h4v_0oncg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h4w_5ara827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h4z_4lgsg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h5a_4iw7427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h54_5qm9s27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h56_4klxc27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h5d_08scw27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h5g_4cgps27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h5h_01xg027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h5g_561j427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h5j_1onn427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h5k_54bsw27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h5l_1aq3k27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h5n_5rwkg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h5o_2sits27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h5o_3tdzk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h5s_1kl1c27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h5u_0yia827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h61_20vgg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h6p_2nlcw27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h5y_49w4g27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h60_4bluo27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h5y_4jjcg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/d
ata/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h62_4xgw027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h6p_1f0f427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h6a_3jbc027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h6d_1m31s27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h6a_2l0rk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h6c_330ww27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h6t_5k6sg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h6o_5bm5c27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h6y_2wlfk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h77_1rno027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h74_2o0sg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h7b_03uw027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h7e_4gbls27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h77_4o93k27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h7b_0qsi827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h7e_00n5c27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h7h_47yog27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h7q_42ls027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h7h_0rndc27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h7o_0u7yo27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h7l_3vqv427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h7p_1ksr427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h7n_1080g27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_be
nch/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h7n_1u0jk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h7s_0spy827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h7s_2w60027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h81_5owjk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h7y_20g0w27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h7v_112vk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h89_1v34g27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h84_0v2ts27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h84_1fnkg27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h7z_2onxs27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h85_41j7427yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h86_2pqio27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h88_2o8i827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h8b_00ffk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h8d_1jq6827yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h8f_2dazk27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h8f_2jqgw27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h8h_4jys027yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h8h_5abuo27yayogpcenqh-big-Data.db:level=0:origin=repair,/var/lib/scylla/data/scylla_bench/test_v_nemesis_index-730b15c159d011ee9d7ad646c7a3f354/me-3g9m_0h8k_4xols27yayogpcenqh-big-Data.db:level=0:origin=repair]
2023-09-23T06:12:35+00:00 longevity-parallel-topology-schema--db-node-136349f0-9   !NOTICE | sudo[6896]: scyllaadm : PWD=/home/scyllaadm ; USER=root ; COMMAND=/usr/bin/coredumpctl -q --json=short
2023-09-23T06:12:35+00:00 longevity-parallel-topology-schema--db-node-136349f0-9     !INFO | sudo[6896]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
2023-09-23T06:12:35+00:00 longevity-parallel-topology-schema--db-node-136349f0-9     !INFO | sudo[6896]: pam_unix(sudo:session): session closed for user root
2023-09-23T06:12:41+00:00 longevity-parallel-topology-schema--db-node-136349f0-9     !INFO | scylla[5368]:  [shard 3:stre] repair - repair[433ec093-991c-46c1-939f-9a9585a706e9]: stats: repair_reason=bootstrap, keyspace=scylla_bench, tables={test, test_v_nemesis_index, test_counters}, ranges_nr=777, round_nr=2724, round_nr_fast_path_already_synced=2331, round_nr_fast_path_same_combined_hashes=0, round_nr_slow_path=393, rpc_call_nr=13043, tx_hashes_nr=3787731, rx_hashes_nr=5671547, duration=492.34604 seconds, tx_row_nr=192182, rx_row_nr=3815128, tx_row_bytes=31740491, rx_row_bytes=531421636, row_from_disk_bytes={{10.4.8.181, 207541478}, {10.4.8.122, 264464227}, {10.4.9.46, 180996507}, {10.4.10.144, 193696371}, {10.4.10.212, 181911016}}, row_from_disk_nr={{10.4.8.181, 1534787}, {10.4.8.122, 2073605}, {10.4.9.46, 1334620}, {10.4.10.144, 1355027}, {10.4.10.212, 1472117}}, row_from_disk_bytes_per_sec={{10.4.8.181, 0.402008}, {10.4.8.122, 0.512267}, {10.4.9.46, 0.35059}, {10.4.10.144, 0.37519}, {10.4.10.212, 0.352362}} MiB/s, row_from_disk_rows_per_sec={{10.4.8.181, 3117.29}, {10.4.8.122, 4211.68}, {10.4.9.46, 2710.74}, {10.4.10.144, 2752.18}, {10.4.10.212, 2990}} Rows/s, tx_row_nr_peer={{10.4.8.181, 56050}, {10.4.9.46, 44037}, {10.4.10.144, 45794}, {10.4.10.212, 46301}}, rx_row_nr_peer={{10.4.8.181, 1012210}, {10.4.9.46, 895410}, {10.4.10.144, 915903}, {10.4.10.212, 991605}}
2023-09-23T06:12:41+00:00 longevity-parallel-topology-schema--db-node-136349f0-9     !INFO | scylla[5368]:  [shard 3:stre] repair - repair[433ec093-991c-46c1-939f-9a9585a706e9]: completed successfully, keyspace=scylla_bench
2023-09-23T06:13:05+00:00 longevity-parallel-topology-schema--db-node-136349f0-9   !NOTICE | sudo[6904]: scyllaadm : PWD=/home/scyllaadm ; USER=root ; COMMAND=/usr/bin/coredumpctl -q --json=short
2023-09-23T06:13:05+00:00 longevity-parallel-topology-schema--db-node-136349f0-9     !INFO | sudo[6904]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=1000)
2023-09-23T06:13:05+00:00 longevity-parallel-topology-schema--db-node-136349f0-9     !INFO | sudo[6904]: pam_unix(sudo:session): session closed for user root
2023-09-23T06:13:08+00:00 longevity-parallel-topology-schema--db-node-136349f0-9     !INFO | scylla[5368]:  [shard 0:stre] schema_tables - Altering scylla_bench.test id=e6038830-59c3-11ee-843f-80facaa3fba2 version=442a87b0-59d8-11ee-97b1-0edcd44312b8
2023-09-23T06:13:08+00:00 longevity-parallel-topology-schema--db-node-136349f0-9     !INFO | scylla[5368]:  [shard 0:stre] schema_tables - Dropping scylla_bench.test_v_nemesis_index id=730b15c1-59d0-11ee-9d7a-d646c7a3f354 version=730b15c2-59d0-11ee-9d7a-d646c7a3f354
2023-09-23T06:13:08+00:00 longevity-parallel-topology-schema--db-node-136349f0-9     !INFO | scylla[5368]:  [shard 0:stre] database - Dropping scylla_bench.test_v_nemesis_index with auto-snapshot
<...>
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 3:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#3: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 1:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
2023-09-23T06:13:11+00:00 longevity-parallel-topology-schema--db-node-136349f0-9  !WARNING | scylla[5368]:  [shard 3:stat] storage_proxy - Failed to apply mutation from 10.4.8.181#3: data_dictionary::no_such_column_family (Can't find a column family with UUID 730b15c1-59d0-11ee-9d7a-d646c7a3f354)
<...>

All of the updates come from longevity-parallel-topology-schema--db-node-136349f0-6 (54.171.56.159 | 10.4.8.181) (shards: 7)

Impact

Node fails to start up.

How frequently does it reproduce?

Unknown, did not reproduce on the subsequent run.

Installation details

Kernel Version: 5.15.0-1045-aws
Scylla version (or git commit hash): 5.4.0~dev-20230921.a56a4b6226e6 with build-id 616f734e7c7fb5e3ee8898792b3c415d2574a132

Cluster size: 5 nodes (i4i.2xlarge)

Scylla Nodes used in this run:

  • longevity-parallel-topology-schema--db-node-136349f0-1 (54.194.49.68 | 10.4.11.95) (shards: 7)
  • longevity-parallel-topology-schema--db-node-136349f0-2 (52.211.227.34 | 10.4.10.144) (shards: 7)
  • longevity-parallel-topology-schema--db-node-136349f0-3 (63.33.59.211 | 10.4.11.153) (shards: 7)
  • longevity-parallel-topology-schema--db-node-136349f0-4 (52.209.52.226 | 10.4.8.108) (shards: 7)
  • longevity-parallel-topology-schema--db-node-136349f0-5 (34.245.100.235 | 10.4.9.46) (shards: 7)
  • longevity-parallel-topology-schema--db-node-136349f0-6 (54.171.56.159 | 10.4.8.181) (shards: 7)
  • longevity-parallel-topology-schema--db-node-136349f0-7 (63.35.180.230 | 10.4.8.138) (shards: 7)
  • longevity-parallel-topology-schema--db-node-136349f0-8 (3.250.96.79 | 10.4.10.212) (shards: 7)
  • longevity-parallel-topology-schema--db-node-136349f0-9 (3.249.171.62 | 10.4.8.122) (shards: -1)

OS / Image: ami-00f051bf1c684c01a (aws: undefined_region)

Test: longevity-schema-topology-changes-12h-test
Test id: 136349f0-90e6-450a-a48b-61106861f0dd
Test name: scylla-master/longevity/longevity-schema-topology-changes-12h-test
Test config file(s):

Logs and commands
  • Restore Monitor Stack command: $ hydra investigate show-monitor 136349f0-90e6-450a-a48b-61106861f0dd
  • Restore monitor on AWS instance using Jenkins job
  • Show all stored logs command: $ hydra investigate show-logs 136349f0-90e6-450a-a48b-61106861f0dd

Logs:

Jenkins job URL
Argus

@k0machi k0machi assigned fruch and unassigned fruch Oct 1, 2023
@k0machi
Contributor Author

k0machi commented Oct 1, 2023

cc @fruch

@fruch
Contributor

fruch commented Oct 1, 2023

@mykaul is labeling with triage enough to get it assigned? Or is anything else needed?

@mykaul
Contributor

mykaul commented Oct 1, 2023

@mykaul is labeling with triage enough to get it assigned? Or is anything else needed?

It is sufficient.

I'm not sure I understand when the node failed to start, and whether we attempted to restart it. It failed to repair and did not start. And then what?

@fruch
Contributor

fruch commented Oct 1, 2023

@mykaul is labeling with triage enough to get it assigned? Or is anything else needed?

It is sufficient.

I'm not sure I understand when the node failed to start, and whether we attempted to restart it. It failed to repair and did not start. And then what?

And that's it; we do not retry in those situations. We stop the test and collect the information.

@mykaul
Contributor

mykaul commented Oct 1, 2023

@denesb - please assign someone to investigate (it does not have to be @asias!)

@mykaul mykaul changed the title Having an index being dropped during the bootstrap causes a node to fail startup Having an index being dropped during the bootstrap causes a node to fail to start (Startup failed: std::runtime_error ({shard 2: std::runtime_error (repair[433ec093-991c-46c1-939f-9a9585a706e9]: 2 out of 2331 ranges failed ) Oct 1, 2023
@denesb
Contributor

denesb commented Oct 2, 2023

@Deexie please try to find out what went wrong.

@denesb
Contributor

denesb commented Oct 2, 2023

@asias do we support a table being dropped in the middle of repair? I think we will have to, because of cloud, but is there currently code in repair ensuring a table is kept alive while repair is ongoing?

@Deexie
Contributor

Deexie commented Oct 3, 2023

The dropped table isn't the direct reason behind the failed bootstrap. data_dictionary::no_such_column_family can be thrown during repair, but it is handled in its own way - the exception is swallowed at some point and is not even counted towards the 2 out of 2331 ranges failed.

The two mentioned ranges failed due to:

Sep 23 06:13:09 longevity-parallel-topology-schema--db-node-136349f0-9 scylla[5368]:  [shard 2:stre] repair - repair[433ec093-991c-46c1-939f-9a9585a706e9]: put_row_diff: got error from node=10.4.8.122, keyspace=scylla_bench, table=test_v_nemesis_index, range=(6830042757232597350,6841937695669989771], error=std::runtime_error (put_row_diff: Repair follower=10.4.8.181 failed in put_row_diff hanlder, status=0)
Sep 23 06:13:09 longevity-parallel-topology-schema--db-node-136349f0-9 scylla[5368]:  [shard 2:stre] repair - repair[433ec093-991c-46c1-939f-9a9585a706e9]: shard=2, keyspace=scylla_bench, cf=test_v_nemesis_index, range=(6830042757232597350,6841937695669989771], got error in row level repair: std::runtime_error (put_row_diff: Repair follower=10.4.8.181 failed in put_row_diff hanlder, status=0)
Sep 23 06:13:09 longevity-parallel-topology-schema--db-node-136349f0-9 scylla[5368]:  [shard 2:stre] repair - repair[433ec093-991c-46c1-939f-9a9585a706e9]: put_row_diff: got error from node=10.4.8.122, keyspace=scylla_bench, table=test_v_nemesis_index, range=(6177157017308496421,6242123301949156734], error=std::runtime_error (put_row_diff: Repair follower=10.4.10.144 failed in put_row_diff hanlder, status=0)
Sep 23 06:13:09 longevity-parallel-topology-schema--db-node-136349f0-9 scylla[5368]:  [shard 2:stre] repair - repair[433ec093-991c-46c1-939f-9a9585a706e9]: shard=2, keyspace=scylla_bench, cf=test_v_nemesis_index, range=(6177157017308496421,6242123301949156734], got error in row level repair: std::runtime_error (put_row_diff: Repair follower=10.4.10.144 failed in put_row_diff hanlder, status=0)

I'm not familiar with the code that throws it, though, so I will need some time to figure out what exactly happens and why.

@denesb
Contributor

denesb commented Oct 3, 2023

Sep 23 06:13:09 longevity-parallel-topology-schema--db-node-136349f0-9 scylla[5368]:  [shard 2:stre] repair - repair[433ec093-991c-46c1-939f-9a9585a706e9]: put_row_diff: got error from node=10.4.8.122, keyspace=scylla_bench, table=test_v_nemesis_index, range=(6177157017308496421,6242123301949156734], error=std::runtime_error (put_row_diff: Repair follower=10.4.10.144 failed in put_row_diff hanlder, status=0)

This error means the failure happened on the other side. You will need to look into the log files of 10.4.8.122 to find out why it failed.

@Deexie
Contributor

Deexie commented Oct 3, 2023

This error means the failure happened on the other side. You will need to look into the log files of 10.4.8.122 to find out why it failed.

Oh, makes sense, thanks!

@fruch
Contributor

fruch commented Oct 5, 2023

Reproduced in this week's run:

node-17 failed to bootstrap:

Oct 02 21:04:47 parallel-topology-schema-changes-mu-db-node-7162315c-17 scylla[5481]:  [shard 0:main] init - Startup failed: std::runtime_error ({shard 1: std::runtime_error (repair[ca8a60d8-112e-4ffd-bac8-4ad414173900]: 1 out of 6910 ranges failed, keyspace=keyspace1, tables={sec_ind_c3_index, standard1, sec_ind_c2_index, standard1_c4_nemesis_index, standard2}, repair_reason=bootstrap, nodes_down_during_repair={}, aborted_by_user=false, failed_because=std::runtime_error (Failed to repair for keyspace=keyspace1, cf=standard1_c4_nemesis_index, range=(4000292461178491403,4019679277071696036])), shard 3: std::runtime_error (repair[ca8a60d8-112e-4ffd-bac8-4ad414173900]: 1 out of 6910 ranges failed, keyspace=keyspace1, tables={sec_ind_c3_index, standard1, sec_ind_c2_index, standard1_c4_nemesis_index, standard2}, repair_reason=bootstrap, nodes_down_during_repair={}, aborted_by_user=false, failed_because=std::runtime_error (Failed to repair for keyspace=keyspace1, cf=standard1_c4_nemesis_index, range=(4296358521689662149,4303462966441231372])), shard 4: std::runtime_error (repair[ca8a60d8-112e-4ffd-bac8-4ad414173900]: 1 out of 6910 ranges failed, keyspace=keyspace1, tables={sec_ind_c3_index, standard1, sec_ind_c2_index, standard1_c4_nemesis_index, standard2}, repair_reason=bootstrap, nodes_down_during_repair={}, aborted_by_user=false, failed_because=std::runtime_error (Failed to repair for keyspace=keyspace1, cf=standard1_c4_nemesis_index, range=(4489824299166744933,4512045607350533685]))})
Oct 02 21:06:01 parallel-topology-schema-changes-mu-db-node-7162315c-17 systemd[1]: scylla-server.service: Main process exited, code=exited, status=1/FAILURE

Installation details

Kernel Version: 5.15.0-1045-aws
Scylla version (or git commit hash): 5.4.0~dev-20231002.1640f83fdc31 with build-id d751f4ab981bda6f045bb902f0d341e61f0ac3a7

Cluster size: 12 nodes (i3en.2xlarge)

Scylla Nodes used in this run:

  • parallel-topology-schema-changes-mu-db-node-7162315c-9 (18.169.241.143 | 10.3.8.151) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-7162315c-8 (18.134.14.223 | 10.3.11.54) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-7162315c-7 (35.178.231.250 | 10.3.8.101) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-7162315c-6 (3.249.50.46 | 10.4.11.196) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-7162315c-5 (63.33.195.147 | 10.4.10.90) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-7162315c-4 (34.252.175.149 | 10.4.8.128) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-7162315c-3 (34.253.133.72 | 10.4.9.255) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-7162315c-2 (54.194.157.249 | 10.4.8.112) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-7162315c-17 (54.246.26.71 | 10.4.11.5) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-7162315c-16 (63.35.215.119 | 10.4.10.195) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-7162315c-15 (18.130.145.101 | 10.3.10.253) (shards: -1)
  • parallel-topology-schema-changes-mu-db-node-7162315c-14 (54.216.171.70 | 10.4.9.118) (shards: -1)
  • parallel-topology-schema-changes-mu-db-node-7162315c-13 (52.30.33.55 | 10.4.11.229) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-7162315c-12 (3.10.144.217 | 10.3.9.121) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-7162315c-11 (18.170.55.64 | 10.3.11.217) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-7162315c-10 (18.130.5.161 | 10.3.9.174) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-7162315c-1 (3.250.46.244 | 10.4.9.254) (shards: 7)

OS / Image: ami-0000f82a29037b49d ami-0205476ccb2b99643 (aws: undefined_region)

Test: longevity-multidc-schema-topology-changes-12h-test
Test id: 7162315c-e5dc-41a6-940d-13e7997f7162
Test name: scylla-master/longevity/longevity-multidc-schema-topology-changes-12h-test
Test config file(s):

Logs and commands
  • Restore Monitor Stack command: $ hydra investigate show-monitor 7162315c-e5dc-41a6-940d-13e7997f7162
  • Restore monitor on AWS instance using Jenkins job
  • Show all stored logs command: $ hydra investigate show-logs 7162315c-e5dc-41a6-940d-13e7997f7162

Logs:

Jenkins job URL
Argus

@fruch
Contributor

fruch commented Oct 5, 2023

It also happened on a different test case:

2023-09-30 08:16:23.478 <2023-09-30 08:16:23.000>: (DatabaseLogEvent Severity.ERROR) period_type=one-time event_id=4254ca73-2a30-476f-b442-d85857fc08a2 during_nemesis=CreateIndex,DecommissionStreamingErr: type=RUNTIME_ERROR regex=std::runtime_error line_number=1370 node=longevity-parallel-topology-schema--db-node-b4eba44d-15
2023-09-30T08:16:23+00:00 longevity-parallel-topology-schema--db-node-b4eba44d-15      !ERR | scylla[5409]:  [shard 0:main] init - Startup failed: std::runtime_error (Failed to mark node as alive in 30000 ms, nodes={10.4.8.248, 10.4.9.112, 10.4.9.191, 10.4.10.219, 10.4.9.20, 10.4.9.6}, live_nodes={10.4.8.248, 10.4.9.112, 10.4.9.191, 10.4.10.219, 10.4.9.20})

Installation details

Kernel Version: 5.15.0-1045-aws
Scylla version (or git commit hash): 5.4.0~dev-20230927.0f22e8d196af with build-id 2c911e6e2b12c7d0c19f67f76711e0c1adfea3cb

Cluster size: 5 nodes (i4i.2xlarge)

Scylla Nodes used in this run:

  • longevity-parallel-topology-schema--db-node-b4eba44d-8 (34.243.52.127 | 10.4.10.33) (shards: 7)
  • longevity-parallel-topology-schema--db-node-b4eba44d-7 (3.249.192.94 | 10.4.9.191) (shards: 7)
  • longevity-parallel-topology-schema--db-node-b4eba44d-6 (18.203.92.39 | 10.4.11.152) (shards: 7)
  • longevity-parallel-topology-schema--db-node-b4eba44d-5 (54.170.246.46 | 10.4.9.239) (shards: 7)
  • longevity-parallel-topology-schema--db-node-b4eba44d-4 (3.253.54.43 | 10.4.8.80) (shards: 7)
  • longevity-parallel-topology-schema--db-node-b4eba44d-3 (54.77.216.216 | 10.4.9.78) (shards: 7)
  • longevity-parallel-topology-schema--db-node-b4eba44d-2 (54.74.133.187 | 10.4.11.167) (shards: 7)
  • longevity-parallel-topology-schema--db-node-b4eba44d-15 (34.244.227.122 | 10.4.8.248) (shards: -1)
  • longevity-parallel-topology-schema--db-node-b4eba44d-14 (54.74.143.197 | 10.4.9.20) (shards: 7)
  • longevity-parallel-topology-schema--db-node-b4eba44d-13 (52.208.250.115 | 10.4.9.6) (shards: 7)
  • longevity-parallel-topology-schema--db-node-b4eba44d-12 (34.244.221.238 | 10.4.9.112) (shards: 7)
  • longevity-parallel-topology-schema--db-node-b4eba44d-11 (34.245.142.12 | 10.4.9.3) (shards: 7)
  • longevity-parallel-topology-schema--db-node-b4eba44d-10 (34.254.179.18 | 10.4.10.219) (shards: 7)
  • longevity-parallel-topology-schema--db-node-b4eba44d-1 (63.33.204.190 | 10.4.9.27) (shards: 7)

OS / Image: ami-0c25786faf310fa10 (aws: undefined_region)

Test: longevity-schema-topology-changes-12h-test
Test id: b4eba44d-f499-4758-b9c6-2c7f9304b2df
Test name: scylla-master/longevity/longevity-schema-topology-changes-12h-test
Test config file(s):

Logs and commands
  • Restore Monitor Stack command: $ hydra investigate show-monitor b4eba44d-f499-4758-b9c6-2c7f9304b2df
  • Restore monitor on AWS instance using Jenkins job
  • Show all stored logs command: $ hydra investigate show-logs b4eba44d-f499-4758-b9c6-2c7f9304b2df

Logs:

Jenkins job URL
Argus

@asias
Contributor

asias commented Oct 7, 2023 via email

@fruch
Contributor

fruch commented Oct 8, 2023

Dropping a table during repair is supposed to work. We do not try to keep the table alive, but ignore the "table not found" exception. There are multiple places we need to take care of; probably there are places that are missed.

Any thoughts on how to flush those out?

@avikivity
Member

It should be assigned to a developer and reproduced locally. This doesn't require a full s-c-t run with 7,432 nodes and 8 PB of data; it just requires running the scenario enough times to trigger the right timing.

@denesb
Contributor

denesb commented Oct 9, 2023

It should be assigned to a developer and reproduced locally. This doesn't require a full s-c-t run with 7,432 nodes and 8 PB of data; it just requires running the scenario enough times to trigger the right timing.

We should write a dedicated test for this. We can use the failure injection framework to time the drop table/drop index such that it happens in the middle of streaming.
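A rough sketch of what such a dedicated test could look like, assuming Scylla's python test/topology framework (the ManagerClient fixture). The method names, default index name, and the sleep-based timing below are assumptions for illustration; a real test would use an error-injection point inside row-level repair to make the drop deterministic:

```python
# Rough sketch only: assumes Scylla's test/topology framework (test.pylib ManagerClient);
# method names and the race-by-sleep timing are illustrative, not an actual test.
import asyncio
import pytest
from test.pylib.manager_client import ManagerClient


@pytest.mark.asyncio
async def test_drop_index_during_bootstrap(manager: ManagerClient):
    # Three-node cluster with an indexed table and some data to stream.
    await manager.servers_add(3)
    cql = manager.get_cql()
    await cql.run_async("CREATE KEYSPACE ks WITH replication = "
                        "{'class': 'NetworkTopologyStrategy', 'replication_factor': 3}")
    await cql.run_async("CREATE TABLE ks.t (pk int PRIMARY KEY, v int)")
    await cql.run_async("CREATE INDEX ON ks.t (v)")
    for i in range(2000):
        await cql.run_async(f"INSERT INTO ks.t (pk, v) VALUES ({i}, {i})")

    # Bootstrap a new node in the background; its bootstrap repair will include
    # the index's backing view table.
    bootstrap = asyncio.create_task(manager.server_add())

    # Drop the index while the bootstrap repair is (hopefully) still streaming.
    # A deterministic test would pause repair via the error-injection framework
    # instead of relying on a sleep.
    await asyncio.sleep(5)
    await cql.run_async("DROP INDEX ks.t_v_idx")

    # The bootstrap must still succeed even though the table disappeared mid-repair.
    await bootstrap
```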

@fruch
Contributor

fruch commented Oct 9, 2023

One more case of a repair that fails because of dropping a view:

2023-10-07 05:28:24.943: (DisruptionEvent Severity.ERROR) period_type=end event_id=fad127a1-eff3-40ba-b02c-6e560c777880 duration=12m55s: nemesis_name=AddRemoveDc target_node=Node longevity-parallel-topology-schema--db-node-f0389556-8 [34.242.160.129 | 10.4.9.57] (seed: True) errors=Encountered a bad command exit code!
Command: '/usr/bin/nodetool  repair -pr '
Exit code: 2
Stdout:
[2023-10-07 05:26:55,194] Starting repair command #7, repairing 1 ranges for keyspace keyspace_new_dc (parallelism=SEQUENTIAL, full=true)
[2023-10-07 05:26:55,290] Repair session 7
[2023-10-07 05:26:55,290] Repair session 7 finished
[2023-10-07 05:26:55,315] Starting repair command #8, repairing 1 ranges for keyspace keyspace1 (parallelism=SEQUENTIAL, full=true)
[2023-10-07 05:27:18,435] Repair session 8 failed
[2023-10-07 05:27:18,436] Repair session 8 finished
Stderr:
-- StackTrace --
java.lang.RuntimeException: Repair job has failed with the error message: [2023-10-07 05:27:18,435] Repair session 8 failed
at org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:124)
at org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:77)
at java.management/com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.dispatchNotification(ClientNotifForwarder.java:633)
at java.management/com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:555)
at java.management/com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:474)
at java.management/com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor.lambda$execute$0(ClientNotifForwarder.java:108)
at java.base/java.lang.Thread.run(Thread.java:829)
Traceback (most recent call last):
File "/home/ubuntu/scylla-cluster-tests/sdcm/nemesis.py", line 5045, in wrapper
result = method(*args[1:], **kwargs)
File "/home/ubuntu/scylla-cluster-tests/sdcm/nemesis.py", line 4387, in disrupt_add_remove_dc
cluster_node.run_nodetool(sub_cmd="repair -pr", publish_event=True)
File "/home/ubuntu/scylla-cluster-tests/sdcm/cluster.py", line 2638, in run_nodetool
self.remoter.run(cmd, timeout=timeout, ignore_status=ignore_status, verbose=verbose, retry=retry)
File "/home/ubuntu/scylla-cluster-tests/sdcm/remote/remote_base.py", line 613, in run
result = _run()
File "/home/ubuntu/scylla-cluster-tests/sdcm/utils/decorators.py", line 70, in inner
return func(*args, **kwargs)
File "/home/ubuntu/scylla-cluster-tests/sdcm/remote/remote_base.py", line 604, in _run
return self._run_execute(cmd, timeout, ignore_status, verbose, new_session, watchers)
File "/home/ubuntu/scylla-cluster-tests/sdcm/remote/remote_base.py", line 537, in _run_execute
result = connection.run(**command_kwargs)
File "/home/ubuntu/scylla-cluster-tests/sdcm/remote/libssh2_client/__init__.py", line 620, in run
return self._complete_run(channel, exception, timeout_reached, timeout, result, warn, stdout, stderr)
File "/home/ubuntu/scylla-cluster-tests/sdcm/remote/libssh2_client/__init__.py", line 655, in _complete_run
raise UnexpectedExit(result)
sdcm.remote.libssh2_client.exceptions.UnexpectedExit: Encountered a bad command exit code!
Command: '/usr/bin/nodetool  repair -pr '
Exit code: 2
Stdout:
[2023-10-07 05:26:55,194] Starting repair command #7, repairing 1 ranges for keyspace keyspace_new_dc (parallelism=SEQUENTIAL, full=true)
[2023-10-07 05:26:55,290] Repair session 7
[2023-10-07 05:26:55,290] Repair session 7 finished
[2023-10-07 05:26:55,315] Starting repair command #8, repairing 1 ranges for keyspace keyspace1 (parallelism=SEQUENTIAL, full=true)
[2023-10-07 05:27:18,435] Repair session 8 failed
[2023-10-07 05:27:18,436] Repair session 8 finished
Stderr:
-- StackTrace --
java.lang.RuntimeException: Repair job has failed with the error message: [2023-10-07 05:27:18,435] Repair session 8 failed
at org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:124)
at org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:77)
at java.management/com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.dispatchNotification(ClientNotifForwarder.java:633)
at java.management/com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:555)
at java.management/com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:474)
at java.management/com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor.lambda$execute$0(ClientNotifForwarder.java:108)
at java.base/java.lang.Thread.run(Thread.java:829)

And on node-8 we see that the repair fails because it can't find the table anymore (or at least this is how it seems):

2023-10-07T05:27:18.300+00:00 longevity-parallel-topology-schema--db-node-f0389556-8     !INFO | scylla[5424]:  [shard 0:stat] migration_manager - Update table 'keyspace1.standard1' From org.apache.cassandra.config.CFMetaData@0x600004e66380[cfId=af6d24b0-64c3-11ee-ad83-070492b57387,ksName=keyspace1,cfName=standard1,cfType=Standard,comparator=org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type),comment=,readRepairChance=0,dcLocalReadRepairChance=0,tombstoneGcOptions={"mode":"timeout","propagation_delay_in_seconds":"3600"},gcGraceSeconds=864000,keyValidator=org.apache.cassandra.db.marshal.BytesType,minCompactionThreshold=4,maxCompactionThreshold=32,columnMetadata=[ColumnDefinition{name=key, type=org.apache.cassandra.db.marshal.BytesType, kind=PARTITION_KEY, componentIndex=0, droppedAt=-9223372036854775808}, ColumnDefinition{name=C0, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C1, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C2, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C3, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C4, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C5, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C6, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C7, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C8, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C9, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}],compactionStrategyClass=class org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy,compactionStrategyOptions={enabled=true},compressionParameters={},bloomFilterFpChance=0.01,memtableFlushPeriod=0,caching={"keys":"ALL","rows_per_partition":"ALL"},cdc={},defaultTimeToLive=0,minIndexInterval=128,maxIndexInterval=2048,speculativeRetry=775.00ms,triggers=[],isDense=false,version=64bee750-64cd-11ee-8358-ac2baa30a1d5,droppedColumns={},collections={},indices={standard1_c6_nemesis : 04e86a8d-a587-3b1b-8cb4-1b929f510dee}] To org.apache.cassandra.config.CFMetaData@0x60000a5ca000[cfId=af6d24b0-64c3-11ee-ad83-070492b57387,ksName=keyspace1,cfName=standard1,cfType=Standard,comparator=org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.UTF8Type),comment=,readRepairChance=0,dcLocalReadRepairChance=0,tombstoneGcOptions={"mode":"timeout","propagation_delay_in_seconds":"3600"},gcGraceSeconds=864000,keyValidator=org.apache.cassandra.db.marshal.BytesType,minCompactionThreshold=4,maxCompactionThreshold=32,columnMetadata=[ColumnDefinition{name=key, type=org.apache.cassandra.db.marshal.BytesType, kind=PARTITION_KEY, componentIndex=0, droppedAt=-9223372036854775808}, ColumnDefinition{name=C0, type=org.apache.cassandra.db.marshal.BytesType, 
kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C1, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C2, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C3, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C4, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C5, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C6, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C7, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C8, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=C9, type=org.apache.cassandra.db.marshal.BytesType, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}],compactionStrategyClass=class org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy,compactionStrategyOptions={enabled=true},compressionParameters={},bloomFilterFpChance=0.01,memtableFlushPeriod=0,caching={"keys":"ALL","rows_per_partition":"ALL"},cdc={},defaultTimeToLive=0,minIndexInterval=128,maxIndexInterval=2048,speculativeRetry=775.00ms,triggers=[],isDense=false,version=2e5f4790-64d2-11ee-a933-07363e8cee11,droppedColumns={},collections={},indices={}]
2023-10-07T05:27:18.300+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 6:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: put_row_diff: got error from node=10.4.8.126, keyspace=keyspace1, table=standard1_c6_nemesis_index, range=(7717296480951832488,7722188364576543175], error=std::runtime_error (put_row_diff: Repair follower=10.4.11.48 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.300+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 6:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: shard=6, keyspace=keyspace1, cf=standard1_c6_nemesis_index, range=(7717296480951832488,7722188364576543175], got error in row level repair: std::runtime_error (put_row_diff: Repair follower=10.4.11.48 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.300+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 2:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: put_row_diff: got error from node=10.4.9.57, keyspace=keyspace1, table=standard1_c6_nemesis_index, range=(-2947470824833475658,-2940360987708268952], error=std::runtime_error (put_row_diff: Repair follower=10.4.11.48 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.300+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 2:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: shard=2, keyspace=keyspace1, cf=standard1_c6_nemesis_index, range=(-2947470824833475658,-2940360987708268952], got error in row level repair: std::runtime_error (put_row_diff: Repair follower=10.4.11.48 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.300+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 0:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: put_row_diff: got error from node=10.4.11.213, keyspace=keyspace1, table=standard1_c6_nemesis_index, range=(-2536177651073903534,-2527689797267946752], error=std::runtime_error (put_row_diff: Repair follower=10.4.11.48 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.300+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 0:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: shard=0, keyspace=keyspace1, cf=standard1_c6_nemesis_index, range=(-2536177651073903534,-2527689797267946752], got error in row level repair: std::runtime_error (put_row_diff: Repair follower=10.4.11.48 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.300+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 5:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: shard=5, keyspace=keyspace1, cf=standard1_c6_nemesis_index, range=(-1260867541728177352,-1247012100734613170], got error in row level repair: data_dictionary::no_such_column_family (Can't find a column family standard1_c6_nemesis_index in keyspace keyspace1)
2023-10-07T05:27:18.300+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 2:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: put_row_diff: got error from node=10.4.8.126, keyspace=keyspace1, table=standard1_c6_nemesis_index, range=(-2923170087665928458,-2918752851033874802], error=std::runtime_error (put_row_diff: Repair follower=10.4.11.48 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.300+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 2:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: shard=2, keyspace=keyspace1, cf=standard1_c6_nemesis_index, range=(-2923170087665928458,-2918752851033874802], got error in row level repair: std::runtime_error (put_row_diff: Repair follower=10.4.11.48 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.300+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 2:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: put_row_diff: got error from node=10.4.8.126, keyspace=keyspace1, table=standard1_c6_nemesis_index, range=(-2889335208701633410,-2884690251121526828], error=std::runtime_error (put_row_diff: Repair follower=10.4.11.48 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.300+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 2:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: shard=2, keyspace=keyspace1, cf=standard1_c6_nemesis_index, range=(-2889335208701633410,-2884690251121526828], got error in row level repair: std::runtime_error (put_row_diff: Repair follower=10.4.11.48 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.300+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 0:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: shard=0, keyspace=keyspace1, cf=standard1_c6_nemesis_index, range=(-2255750805562672290,-2248575877698735154], got error in row level repair: data_dictionary::no_such_column_family (Can't find a column family standard1_c6_nemesis_index in keyspace keyspace1)
2023-10-07T05:27:18.300+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 2:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: shard=2, keyspace=keyspace1, cf=standard1_c6_nemesis_index, range=(-2789271916130356859,-2785556145488806094], got error in row level repair: data_dictionary::no_such_column_family (Can't find a column family standard1_c6_nemesis_index in keyspace keyspace1)
2023-10-07T05:27:18.300+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 3:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: shard=3, keyspace=keyspace1, cf=standard1_c6_nemesis_index, range=(-6076236566991128,-574533681109778], got error in row level repair: data_dictionary::no_such_column_family (Can't find a column family standard1_c6_nemesis_index in keyspace keyspace1)
2023-10-07T05:27:18.303+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 3:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: shard=3, keyspace=keyspace1, cf=standard1_c6_nemesis_index, range=(8503814029474733,34059073716755403], got error in row level repair: data_dictionary::no_such_column_family (Can't find a column family standard1_c6_nemesis_index in keyspace keyspace1)
2023-10-07T05:27:18.303+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 2:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: put_row_diff: got error from node=10.4.8.126, keyspace=keyspace1, table=standard1_c6_nemesis_index, range=(-2884690251121526828,-2879923914668299427], error=std::runtime_error (put_row_diff: Repair follower=10.4.11.48 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.303+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 6:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: shard=6, keyspace=keyspace1, cf=standard1_c6_nemesis_index, range=(7961079652198588357,7976523637932328823], got error in row level repair: data_dictionary::no_such_column_family (Can't find a column family standard1_c6_nemesis_index in keyspace keyspace1)
2023-10-07T05:27:18.303+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 2:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: shard=2, keyspace=keyspace1, cf=standard1_c6_nemesis_index, range=(-2884690251121526828,-2879923914668299427], got error in row level repair: std::runtime_error (put_row_diff: Repair follower=10.4.11.48 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.303+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 6:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: shard=6, keyspace=keyspace1, cf=standard1_c6_nemesis_index, range=(7976523637932328823,7979770321459895374], got error in row level repair: data_dictionary::no_such_column_family (Can't find a column family standard1_c6_nemesis_index in keyspace keyspace1)
2023-10-07T05:27:18.303+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 4:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: shard=4, keyspace=keyspace1, cf=standard1_c6_nemesis_index, range=(729914667755181047,747857021156085719], got error in row level repair: data_dictionary::no_such_column_family (Can't find a column family standard1_c6_nemesis_index in keyspace keyspace1)
2023-10-07T05:27:18.303+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 0:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: put_row_diff: got error from node=10.4.11.213, keyspace=keyspace1, table=standard1_c6_nemesis_index, range=(-2291647390189503983,-2282548007711999144], error=std::runtime_error (put_row_diff: Repair follower=10.4.11.48 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.303+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 5:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: put_row_diff: got error from node=10.4.10.45, keyspace=keyspace1, table=standard1_c6_nemesis_index, range=(-1348744548770587072,-1333497984900991349], error=std::runtime_error (put_row_diff: Repair follower=10.4.11.48 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.303+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 5:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: shard=5, keyspace=keyspace1, cf=standard1_c6_nemesis_index, range=(-1348744548770587072,-1333497984900991349], got error in row level repair: std::runtime_error (put_row_diff: Repair follower=10.4.11.48 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.303+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 0:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: put_row_diff: got error from node=10.4.9.57, keyspace=keyspace1, table=standard1_c6_nemesis_index, range=(-2291647390189503983,-2282548007711999144], error=std::runtime_error (put_row_diff: Repair follower=10.4.11.213 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.303+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 0:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: shard=0, keyspace=keyspace1, cf=standard1_c6_nemesis_index, range=(-2291647390189503983,-2282548007711999144], got error in row level repair: std::runtime_error (put_row_diff: Repair follower=10.4.11.213 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.303+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 1:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: put_row_diff: got error from node=10.4.9.57, keyspace=keyspace1, table=standard1_c6_nemesis_index, range=(-2224065091364362765,-2219100274329267292], error=std::runtime_error (put_row_diff: Repair follower=10.4.11.48 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.303+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 6:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: put_row_diff: got error from node=10.4.10.45, keyspace=keyspace1, table=standard1_c6_nemesis_index, range=(7887902453262889689,7902878328995368862], error=std::runtime_error (put_row_diff: Repair follower=10.4.11.213 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.303+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 6:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: put_row_diff: got error from node=10.4.9.57, keyspace=keyspace1, table=standard1_c6_nemesis_index, range=(7887902453262889689,7902878328995368862], error=std::runtime_error (put_row_diff: Repair follower=10.4.10.45 failed in put_row_diff hanlder, status=0)
2023-10-07T05:27:18.303+00:00 longevity-parallel-topology-schema--db-node-f0389556-8  !WARNING | scylla[5424]:  [shard 6:stre] repair - repair[ec2d3aef-1f8b-4998-aa10-821a24acb1b5]: shard=6, keyspace=keyspace1, cf=standard1_c6_nemesis_index, range=(7887902453262889689,7902878328995368862], got error in row level repair: std::runtime_error (put_row_diff: Repair follower=10.4.10.45 failed in put_row_diff hanlder, status=0)

Installation details

Kernel Version: 5.15.0-1047-aws
Scylla version (or git commit hash): 5.4.0~dev-20231006.498e3ec435be with build-id 16c6112202348a8adba536b4195d48adfdf958f9

Cluster size: 5 nodes (i4i.2xlarge)

Scylla Nodes used in this run:

  • longevity-parallel-topology-schema--db-node-f0389556-9 (3.250.197.139 | 10.4.9.182) (shards: 7)
  • longevity-parallel-topology-schema--db-node-f0389556-8 (34.242.160.129 | 10.4.9.57) (shards: 7)
  • longevity-parallel-topology-schema--db-node-f0389556-7 (3.249.126.217 | 10.4.11.213) (shards: 7)
  • longevity-parallel-topology-schema--db-node-f0389556-6 (3.252.73.173 | 10.4.11.48) (shards: 7)
  • longevity-parallel-topology-schema--db-node-f0389556-5 (3.250.201.78 | 10.4.8.197) (shards: 7)
  • longevity-parallel-topology-schema--db-node-f0389556-4 (52.49.101.255 | 10.4.11.81) (shards: 7)
  • longevity-parallel-topology-schema--db-node-f0389556-3 (3.248.198.196 | 10.4.10.45) (shards: 7)
  • longevity-parallel-topology-schema--db-node-f0389556-2 (3.250.128.254 | 10.4.8.126) (shards: 7)
  • longevity-parallel-topology-schema--db-node-f0389556-19 (3.250.57.141 | 10.4.11.73) (shards: 7)
  • longevity-parallel-topology-schema--db-node-f0389556-18 (54.229.116.226 | 10.4.11.0) (shards: -1)
  • longevity-parallel-topology-schema--db-node-f0389556-17 (34.246.191.208 | 10.4.11.130) (shards: -1)
  • longevity-parallel-topology-schema--db-node-f0389556-16 (54.78.252.161 | 10.4.9.134) (shards: 7)
  • longevity-parallel-topology-schema--db-node-f0389556-15 (3.254.115.208 | 10.4.10.212) (shards: 7)
  • longevity-parallel-topology-schema--db-node-f0389556-14 (54.246.243.194 | 10.4.8.254) (shards: 7)
  • longevity-parallel-topology-schema--db-node-f0389556-13 (54.246.137.234 | 10.4.9.220) (shards: 7)
  • longevity-parallel-topology-schema--db-node-f0389556-12 (34.242.99.132 | 10.4.8.190) (shards: -1)
  • longevity-parallel-topology-schema--db-node-f0389556-11 (3.250.197.239 | 10.4.11.74) (shards: 7)
  • longevity-parallel-topology-schema--db-node-f0389556-10 (3.252.107.15 | 10.4.10.122) (shards: 7)
  • longevity-parallel-topology-schema--db-node-f0389556-1 (34.248.73.222 | 10.4.11.174) (shards: 7)

OS / Image: ami-06f33c2dc88569dd3 (aws: undefined_region)

Test: longevity-schema-topology-changes-12h-test
Test id: f0389556-5d2e-4532-a131-def85a5b3181
Test name: scylla-master/longevity/longevity-schema-topology-changes-12h-test
Test config file(s):

Logs and commands
  • Restore Monitor Stack command: $ hydra investigate show-monitor f0389556-5d2e-4532-a131-def85a5b3181
  • Restore monitor on AWS instance using Jenkins job
  • Show all stored logs command: $ hydra investigate show-logs f0389556-5d2e-4532-a131-def85a5b3181

Logs:

Jenkins job URL
Argus

@bhalevy
Member

bhalevy commented Oct 15, 2023

This is essentially #12373

@bhalevy
Member

bhalevy commented Oct 15, 2023

This is essentially #12373

Hmm, maybe not. The error is apparently coming from a different path.

@kostja kostja assigned kbr-scylla and unassigned Deexie Oct 20, 2023
@kostja
Contributor

kostja commented Oct 20, 2023

@k0machi we will not be investigating such issues in eventually consistent schema/topology mode. Well, unless @avikivity pushed really hard, but given we have a bunch of similar issues, the turn of this one will not come quickly.
So I suggest switching the tests to use consistent schema and topology, and trying to reproduce with them.

@bhalevy
Member

bhalevy commented Oct 20, 2023

@k0machi we will not be investigating such issues in eventually consistent schema/topology mode. Well, unless @avikivity pushed really hard, but given we have a bunch of similar issues, the turn of this one will not come quickly. So I suggest switching the tests to use consistent schema and topology, and trying to reproduce with them.

Makes sense. Consistent topology (and schema) changes can be considered a fix for this issue, if they indeed fix it.
I highly doubt it's a regression. If it is, we may want to reconsider the above.

@kbr-scylla
Contributor

if they indeed fix it

TBH they probably don't

fruch added a commit to fruch/scylla-cluster-tests that referenced this issue Dec 19, 2023
we had multiple places where we tried to apply a filtering/demoting
of view update errors, and they keep popping up in all kinds of cases

* cases of parallel nemesis
* cases where our log reading slows down, and those pop up out of context, since the filter is gone

so because of those issues, and the fact that they aren't going to be fixed any
time soon, we'll apply this filter globally until all of the view update
issues are addressed

Ref: scylladb/scylladb#16206
Ref: scylladb/scylladb#16259
Ref: scylladb/scylladb#15598
fruch added a commit to scylladb/scylla-cluster-tests that referenced this issue Dec 19, 2023
we had multiple places where we tried to apply a filtering/demoting
of view update errors, and they keep popping up in all kinds of cases

* cases of parallel nemesis
* cases where our log reading slows down, and those pop up out of context, since the filter is gone

so because of those issues, and the fact that they aren't going to be fixed any
time soon, we'll apply this filter globally until all of the view update
issues are addressed

Ref: scylladb/scylladb#16206
Ref: scylladb/scylladb#16259
Ref: scylladb/scylladb#15598
@mykaul
Contributor

mykaul commented Jan 1, 2024

@Deexie - any updates?

@fruch
Contributor

fruch commented Jan 21, 2024

Reproduced again in the weekly tier1 runs:

Installation details

Kernel Version: 5.15.0-1051-aws
Scylla version (or git commit hash): 5.5.0~dev-20240119.b1ba904c4977 with build-id 7a5829efb1f6ef7b467d2dc837300abcc0b739c8

Cluster size: 12 nodes (i3en.2xlarge)

Scylla Nodes used in this run:

  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-9 (13.42.76.52 | 10.3.9.221) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-8 (18.171.249.192 | 10.3.11.81) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-7 (13.40.174.75 | 10.3.9.111) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-6 (34.244.129.107 | 10.4.8.102) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-5 (52.51.232.161 | 10.4.8.84) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-4 (3.249.157.152 | 10.4.10.67) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-3 (18.200.241.196 | 10.4.11.162) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-21 (18.171.217.89 | 10.3.10.29) (shards: -1)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-20 (54.194.80.49 | 10.4.11.15) (shards: -1)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-2 (3.255.155.171 | 10.4.11.23) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-19 (3.250.203.165 | 10.4.9.229) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-18 (63.32.105.171 | 10.4.8.98) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-17 (13.40.50.44 | 10.3.10.220) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-16 (3.10.178.2 | 10.3.11.165) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-15 (18.133.122.38 | 10.3.8.229) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-14 (3.253.73.136 | 10.4.11.110) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-13 (52.50.172.72 | 10.4.11.85) (shards: -1)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-12 (18.132.204.209 | 10.3.8.167) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-11 (3.10.138.102 | 10.3.8.68) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-10 (35.178.3.178 | 10.3.11.20) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-5a4b1b09-1 (34.252.138.87 | 10.4.11.32) (shards: 7)

OS / Image: ami-0c85f335032e27007 ami-09aedbf1f551ba668 (aws: undefined_region)

Test: longevity-multidc-schema-topology-changes-12h-test
Test id: 5a4b1b09-e349-4867-959c-329daf296b8c
Test name: scylla-master/longevity/longevity-multidc-schema-topology-changes-12h-test
Test config file(s):

Logs and commands
  • Restore Monitor Stack command: $ hydra investigate show-monitor 5a4b1b09-e349-4867-959c-329daf296b8c
  • Restore monitor on AWS instance using Jenkins job
  • Show all stored logs command: $ hydra investigate show-logs 5a4b1b09-e349-4867-959c-329daf296b8c

Logs:

Jenkins job URL
Argus

@avikivity
Member

Let's fix it already, it's wasting everyone's time.

@kostja
Contributor

kostja commented Jan 22, 2024

@gleb-cloudius is working on scheduling the repair through the topology coordinator. As such repair won't block DDL, so the error will still be there. Also, with tablets and file-based streaming, the issue will not be present. With row-based streaming we'll still need to have a fix.

@denesb
Contributor

denesb commented Jan 22, 2024

Let's fix it already, it's wasting everyone's time.

We dropped the ball on this one. AFAIK @Deexie couldn't reproduce it, and without a reproducer it is not clear where the exception is coming from.
@Deexie let's resume work on this.

@gleb-cloudius
Contributor

@gleb-cloudius is working on scheduling the repair through the topology coordinator. As such repair won't block DDL, so the error will still be there.

Also, the repair here is not really a nodetool repair operation, but streaming that uses repair during node bootstrap, which is already done by the topology coordinator.

@kostja
Contributor

kostja commented Jan 22, 2024

I am willing to help with the test scenario @Deexie

@Deexie
Contributor

Deexie commented Jan 22, 2024

So I've figured out what's going on (and the recently added logs confirm it):

A new node (A) starts repair and gets to the step where it sends missing rows to the followers. In the meantime one of the tables is dropped. Node A still sends a mutation fragment of that table to a follower (B).
Then on B we have:

repair_put_row_diff_with_rpc_stream_handler -> 
repair_put_row_diff_with_rpc_stream_process_op -> 
repair_meta::put_row_diff_handler -> 
repair_meta::apply_rows_on_follower -> 
repair_meta::do_apply_rows -> 
repair_writer::create_writer -> 
repair_writer_impl::create_writer ->
database::find_column_family

Which throws, since B has already dropped the table, and so repair_stream_cmd::error is sent back over the RPC sink.

So I guess we should just send repair_stream_cmd::put_rows_done when no_such_column_family is thrown.
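A minimal sketch of that idea (simplified, with hypothetical handler and type names rather than the actual Scylla repair code):

```cpp
// Simplified illustration only: names and signatures are hypothetical,
// not the real Scylla repair handler.
future<> put_row_diff_on_follower(repair_meta& rm, rpc_sink_type& sink, repair_rows_on_wire rows) {
    try {
        // Writer creation looks up the table; it throws if the table was dropped.
        co_await rm.apply_rows_on_follower(std::move(rows));
    } catch (const data_dictionary::no_such_column_family&) {
        // The table was dropped while repair was in flight: nothing left to
        // write, so report completion instead of sinking repair_stream_cmd::error.
    }
    co_await sink(repair_stream_cmd::put_rows_done);
}
```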

But I'm stuck writing a test for that (especially with causing data loss on a node). The test should probably:

  • disable consistent topology,
  • enable rbno,
  • support rpc stream,
  • lose some data on node B,
  • maybe some other options I'm not aware of.

@kostja (or others) could you please help? I would be thankful for any tips, examples of similar usages, etc.

@avikivity
Member

Didn't we fix this already? 9859bae

Maybe we just missed a few cases.

@Deexie
Contributor

Deexie commented Jan 22, 2024

Didn't we fix this already? 9859bae

Maybe we just missed a few cases.

Yes, handling of no_such_column_family is missing in a few places and separate issues are opened for each.
#15370 is yet another one, for when RBNO is disabled.

@kostja
Contributor

kostja commented Jan 22, 2024 via email

Deexie added a commit to Deexie/scylla that referenced this issue Jan 23, 2024
If a table is dropped during repair, the repair master may send rows
of the dropped table to a follower.

Currently, in this situation no_such_column_family is thrown
on the follower node, which responds with repair_stream_cmd::error
and then handles the exception on its side.

When the repair master receives repair_stream_cmd::error, it assumes that
the repair failed.

To avoid that, add a table_dropped option to repair_stream_cmd and
send it in this case. Handle table_dropped on the repair master as if
the range repair succeeded.

Fixes: scylladb#15598.
Deexie added a commit to Deexie/scylla that referenced this issue Jan 25, 2024
If a table is dropped during repair, the repair master may send rows
of the dropped table to a follower.

Currently, in this situation no_such_column_family is thrown
on the follower node, which responds with repair_stream_cmd::error
and then handles the exception on its side.

When the repair master receives repair_stream_cmd::error, it assumes that
the repair failed.

To avoid that, add a table_dropped option to repair_stream_cmd and
send it in this case. Handle table_dropped on the repair master as if
the range repair succeeded.

Fixes: scylladb#15598.
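A simplified illustration of the approach described in the commit message above (abridged, with hypothetical names; the real enum, handlers and plumbing in Scylla differ):

```cpp
// Abridged sketch: a dedicated status lets the follower tell the master that the
// table vanished, and the master treats that range as done rather than failed.
enum class repair_stream_cmd : uint8_t {
    error,
    put_rows_done,
    table_dropped,   // new: the follower's table was dropped mid-repair
    // ... other members elided ...
};

// Follower: creating the writer for incoming rows may find the table gone.
future<repair_stream_cmd> apply_row_diff_on_follower(repair_rows_on_wire rows) {
    try {
        co_await do_apply_rows(std::move(rows));
        co_return repair_stream_cmd::put_rows_done;
    } catch (const data_dictionary::no_such_column_family&) {
        co_return repair_stream_cmd::table_dropped;   // instead of ::error
    }
}

// Master: a dropped table means there is nothing left to repair for the range.
void handle_put_row_diff_status(repair_stream_cmd status) {
    switch (status) {
    case repair_stream_cmd::put_rows_done:
    case repair_stream_cmd::table_dropped:
        return;   // range counts as repaired; the bootstrap can proceed
    default:
        throw std::runtime_error("put_row_diff: repair follower failed");
    }
}
```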
@mykaul mykaul added this to the 6.1 milestone Jan 28, 2024
@aleksbykov
Contributor

Issue reproduced with:

Packages

Scylla version: 5.5.0~dev-20240125.03313d359e32 with build-id 3ba319c2856389fd41198911105d86848352105e
Kernel Version: 5.15.0-1052-aws

Issue description

  • This issue is a regression.
  • It is unknown if this issue is a regression.

node-25 failed to bootstrap because an index was removed at that moment.

Installation details

Cluster size: 12 nodes (i3en.2xlarge)

Scylla Nodes used in this run:

  • parallel-topology-schema-changes-mu-db-node-211b3014-9 (18.133.255.108 | 10.3.11.192) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-8 (3.8.10.166 | 10.3.8.158) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-7 (3.8.21.168 | 10.3.10.38) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-6 (34.244.160.236 | 10.4.10.25) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-5 (54.220.244.123 | 10.4.11.44) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-4 (3.249.142.253 | 10.4.10.151) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-3 (54.77.216.36 | 10.4.10.135) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-25 (18.171.189.140 | 10.3.10.8) (shards: -1)
  • parallel-topology-schema-changes-mu-db-node-211b3014-24 (18.133.121.221 | 10.3.9.36) (shards: -1)
  • parallel-topology-schema-changes-mu-db-node-211b3014-23 (3.250.6.105 | 10.4.8.224) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-22 (63.33.195.44 | 10.4.8.142) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-21 (18.170.48.16 | 10.3.8.49) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-20 (34.240.132.113 | 10.4.10.241) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-2 (52.31.92.194 | 10.4.11.23) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-19 (54.229.189.76 | 10.4.8.200) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-18 (13.40.113.54 | 10.3.8.46) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-17 (18.169.132.40 | 10.3.9.246) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-16 (34.247.102.110 | 10.4.9.40) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-15 (34.253.191.211 | 10.4.10.218) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-14 (52.208.35.140 | 10.4.8.116) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-13 (18.170.121.168 | 10.3.11.198) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-12 (3.11.81.173 | 10.3.11.13) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-11 (13.40.141.229 | 10.3.11.242) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-10 (18.168.198.226 | 10.3.10.248) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-211b3014-1 (52.18.10.102 | 10.4.9.12) (shards: 7)

OS / Image: ami-0ef9a24dac72b0573 ami-0ddf22c311b756be2 (aws: undefined_region)

Test: longevity-multidc-schema-topology-changes-12h-test
Test id: 211b3014-25f2-4618-8e2c-97f026f9a393
Test name: scylla-master/longevity/longevity-multidc-schema-topology-changes-12h-test
Test config file(s):

Logs and commands
  • Restore Monitor Stack command: $ hydra investigate show-monitor 211b3014-25f2-4618-8e2c-97f026f9a393
  • Restore monitor on AWS instance using Jenkins job
  • Show all stored logs command: $ hydra investigate show-logs 211b3014-25f2-4618-8e2c-97f026f9a393

Logs:

Jenkins job URL
Argus

@bhalevy
Member

bhalevy commented Feb 14, 2024

Hit a variant of this issue in https://jenkins.scylladb.com/job/scylla-master/job/dtest-release/494/testReport/repair_additional_test/TestRepairAdditional/Run_Dtest_Parallel_Cloud_Machines___FullDtest___full_split008___test_repair_while_table_is_dropped/

ccmlib.node.ToolError: Subprocess /jenkins/workspace/scylla-master/dtest-release/scylla/.ccm/scylla-repository/3d81138852bad568fdff42012e8fd8d1e8b9cc87/share/cassandra/bin/nodetool -h 127.0.57.2 -p 7199 -Dcom.sun.jndi.rmiURLParsing=legacy repair exited with non-zero status; exit status: 2; 
stdout: [2024-02-13 23:26:50,450] Starting repair command #1, repairing 1 ranges for keyspace ks (parallelism=SEQUENTIAL, full=true)
[2024-02-13 23:26:53,546] Repair session 1 failed
[2024-02-13 23:26:53,547] Repair session 1 finished
; 
stderr: error: Repair job has failed with the error message: [2024-02-13 23:26:53,546] Repair session 1 failed
-- StackTrace --
java.lang.RuntimeException: Repair job has failed with the error message: [2024-02-13 23:26:53,546] Repair session 1 failed
	at org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:124)
	at org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:77)
	at java.management/com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.dispatchNotification(ClientNotifForwarder.java:633)
	at java.management/com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:555)
	at java.management/com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:474)
	at java.management/com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor.lambda$execute$0(ClientNotifForwarder.java:108)
	at java.base/java.lang.Thread.run(Thread.java:829)

https://jenkins.scylladb.com/job/scylla-master/job/dtest-release/494/artifact/logs-full.release.008/1707866821871_repair_additional_test.py%3A%3ATestRepairAdditional%3A%3Atest_repair_while_table_is_dropped/node2.log

WARN  2024-02-13 23:26:52,570 [shard 0:strm] repair - repair[12f350c5-4ec3-4ade-91a6-5a632474a4ba]: user-requested repair failed: std::runtime_error ({shard 0: std::runtime_error (repair[12f350c5-4ec3-4ade-91a6-5a632474a4ba]: 1 out of 6921 ranges failed, keyspace=ks, tables={cf, cf_del0, cf_del1, cf_del2, cf_del3, cf_del4, cf_del5, cf_del6, cf_del7}, repair_reason=repair, nodes_down_during_repair={}, aborted_by_user=false, failed_because=seastar::nested_exception: std::runtime_error (Failed to repair for keyspace=ks, cf=cf_del2, range=(-1555104735018443878,-1551520088607069646]) (while cleaning up after seastar::rpc::remote_verb_error (Can't find a column family with UUID 57dce0e0-cac7-11ee-bead-945228575867))), shard 1: std::runtime_error (repair[12f350c5-4ec3-4ade-91a6-5a632474a4ba]: 1 out of 6921 ranges failed, keyspace=ks, tables={cf, cf_del0, cf_del1, cf_del2, cf_del3, cf_del4, cf_del5, cf_del6, cf_del7}, repair_reason=repair, nodes_down_during_repair={}, aborted_by_user=false, failed_because=seastar::nested_exception: std::runtime_error (Failed to repair for keyspace=ks, cf=cf_del2, range=(-1986588074525715205,-1982377594942292289]) (while cleaning up after seastar::rpc::remote_verb_error (Can't find a column family with UUID 57dce0e0-cac7-11ee-bead-945228575867)))})

@mykaul mykaul added symptom/ci stability Issues that failed in ScyllaDB CI - tests and framework P1 Urgent labels Feb 14, 2024
denesb added a commit that referenced this issue Feb 27, 2024
… from remote node' from Aleksandra Martyniuk

RPC calls lose information about the type of the returned exception.
Thus, if a table is dropped on the receiver node but still exists
on the sender node, and the sender node streams the table's data,
the whole operation fails.

To prevent that, add a method which synchronizes the schema and then
checks whether the exception was caused by a table drop. If so,
the exception is swallowed.

Use the method in streaming and repair so that they continue when
the table is dropped in the meantime.

Fixes: #17028.
Fixes: #15370.
Fixes: #15598.

Closes #17525

* github.com:scylladb/scylladb:
  repair: handle no_such_column_family from remote node gracefully
  test: test drop table on receiver side during streaming
  streaming: fix indentation
  streaming: handle no_such_column_family from remote node gracefully
  repair: add methods to skip dropped table
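
The commit message above boils down to: after a remote failure, pull the latest schema and, if the table is gone, treat the failure as benign. A minimal sketch of that check, with hypothetical names and a plain synchronous stand-in for Scylla's schema registry (the real implementation uses Seastar futures and the migration manager):

```cpp
#include <exception>
#include <stdexcept>
#include <string>
#include <unordered_set>

// Stand-in for the local view of the schema after a schema pull.
struct schema_registry {
    std::unordered_set<std::string> live_table_uuids;
    bool has_table(const std::string& uuid) const {
        return live_table_uuids.count(uuid) != 0;
    }
};

// Returns true when the failure can be swallowed because the table was
// dropped while repair or streaming was in flight.
bool ignore_error_if_table_dropped(const schema_registry& synced_schema,
                                   const std::string& table_uuid,
                                   std::exception_ptr ep) {
    try {
        std::rethrow_exception(ep);
    } catch (const std::runtime_error&) {
        // The RPC layer delivers the remote error as a generic runtime_error,
        // so the original "no such column family" type is lost. Instead,
        // check whether the table still exists after the schema sync.
        return !synced_schema.has_table(table_uuid);
    } catch (...) {
        return false;  // unknown failure: let the caller handle it
    }
}
```

If the table is still present after the schema sync, the error is genuine and the repair or streaming job fails as before.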
denesb added a commit that referenced this issue Feb 28, 2024
… from remote node' from Aleksandra Martyniuk

RPC calls lose information about the type of the returned exception.
Thus, if a table is dropped on the receiver node but still exists
on the sender node, and the sender node streams the table's data,
the whole operation fails.

To prevent that, add a method which synchronizes the schema and then
checks whether the exception was caused by a table drop. If so,
the exception is swallowed.

Use the method in streaming and repair so that they continue when
the table is dropped in the meantime.

Fixes: #17028.
Fixes: #15370.
Fixes: #15598.

Closes #17528

* github.com:scylladb/scylladb:
  repair: handle no_such_column_family from remote node gracefully
  test: test drop table on receiver side during streaming
  streaming: fix indentation
  streaming: handle no_such_column_family from remote node gracefully
  repair: add methods to skip dropped table