Segmentation fault and core dump on alter table cqlstress_lwt_example.blogposts command #10026

Closed
yarongilor opened this issue Feb 3, 2022 · 60 comments

yarongilor commented Feb 3, 2022

Installation details
Kernel version: 5.11.0-1022-aws
Scylla version (or git commit hash): 4.6.rc4-0.20220203.34d470967a0 with build-id bf9a6fccf70cbe9f5c482869342c931fadd4c0b2
Cluster size: 4 nodes (i3.2xlarge)
Scylla running with shards number (live nodes):
longevity-lwt-parallel-24h-4-6-db-node-52f8a7d6-1 (34.205.29.108 | 10.0.3.86): 8 shards
longevity-lwt-parallel-24h-4-6-db-node-52f8a7d6-2 (54.237.170.199 | 10.0.3.210): 8 shards
longevity-lwt-parallel-24h-4-6-db-node-52f8a7d6-3 (54.236.34.38 | 10.0.2.103): 8 shards
longevity-lwt-parallel-24h-4-6-db-node-52f8a7d6-4 (44.197.123.169 | 10.0.2.202): 8 shards
OS (RHEL/CentOS/Ubuntu/AWS AMI): ami-06d4532db67cc0577 (aws: us-east-1)

Test: longevity-lwt-parallel-schema-changes-with-disruptive-24h-test
Test name: longevity_lwt_test.LWTLongevityTest.test_lwt_longevity
Test config file(s):

  • longevity-lwt-parallel-24h.yaml

Issue description

====================================

Scenario: the ModifyTable nemesis ran the following ALTER TABLE command:
ALTER TABLE cqlstress_lwt_example.blogposts WITH min_index_interval = 256;

Then the node got a segmentation fault and a core dump:

< t:2022-02-03 12:58:31,382 f:sct_config.py   l:1529 c:sdcm.sct_config      p:INFO  > nemesis_class_name: DisruptiveMonkey:1 ModifyTableMonkey:1
< t:2022-02-03 13:56:40,493 f:nemesis.py      l:1216 c:sdcm.nemesis         p:INFO  > sdcm.nemesis.DisruptiveMonkey: >>>>>>>>>>>>>Started random_disrupt_method rolling_restart_cluster
< t:2022-02-03 13:56:40,497 f:nemesis.py      l:309  c:sdcm.nemesis         p:INFO  > sdcm.nemesis.ModifyTableMonkey: Interval: 300 s
< t:2022-02-03 13:56:40,502 f:nemesis.py      l:792  c:sdcm.nemesis         p:DEBUG > sdcm.nemesis.DisruptiveMonkey: Set current_disruption -> RollingRestartCluster Node longevity-lwt-parallel-24h-4-6-db-node-52f8a7d6-2 [54.237.170.199 | 10.0.3.210] (seed: False)
< t:2022-02-03 13:56:40,501 f:nemesis.py      l:301  c:sdcm.nemesis         p:INFO  > sdcm.nemesis.ModifyTableMonkey: Current Target: Node longevity-lwt-parallel-24h-4-6-db-node-52f8a7d6-3 [54.236.34.38 | 10.0.2.103] (seed: False) with running nemesis: None
< t:2022-02-03 13:56:40,507 f:nemesis.py      l:3138 c:sdcm.nemesis         p:DEBUG > sdcm.nemesis.ModifyTableMonkey: Start disruption at `2022-02-03 13:56:40.507850`
< t:2022-02-03 13:56:40,508 f:nemesis.py      l:792  c:sdcm.nemesis         p:DEBUG > sdcm.nemesis.ModifyTableMonkey: Set current_disruption -> ModifyTable Node longevity-lwt-parallel-24h-4-6-db-node-52f8a7d6-3 [54.236.34.38 | 10.0.2.103] (seed: False)
< t:2022-02-03 13:56:56,800 f:nemesis.py      l:3126 c:sdcm.nemesis         p:DEBUG > sdcm.nemesis.ModifyTableMonkey: Data validator error: expected string or bytes-like object
< t:2022-02-03 13:57:07,243 f:nemesis.py      l:1359 c:sdcm.nemesis         p:DEBUG > sdcm.nemesis.ModifyTableMonkey: _modify_table_property: ALTER TABLE cqlstress_lwt_example.blogposts WITH min_index_interval = 256;
< t:2022-02-03 13:57:16,054 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-03T13:57:15+00:00 longevity-lwt-parallel-24h-4-6-db-node-52f8a7d6-3 !    INFO | Segmentation fault on shard 1.
< t:2022-02-03 13:57:16,079 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:INFO  > 2022-02-03 13:57:16.056 <2022-02-03 13:57:15.000>: (DatabaseLogEvent Severity.ERROR) period_type=one-time event_id=5ec23a7c-9081-4958-a7c2-b3e1849c325b: type=SEGMENTATION regex=segmentation line_number=8421 node=longevity-lwt-parallel-24h-4-6-db-node-52f8a7d6-3
< t:2022-02-03 13:57:16,079 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:INFO  > 2022-02-03T13:57:15+00:00 longevity-lwt-parallel-24h-4-6-db-node-52f8a7d6-3 !    INFO | Segmentation fault on shard 1.
< t:2022-02-03 13:57:32,491 f:nemesis.py      l:327  c:sdcm.nemesis         p:INFO  > sdcm.nemesis.ModifyTableMonkey: Report
< t:2022-02-03 13:57:32,491 f:nemesis.py      l:328  c:sdcm.nemesis         p:INFO  > sdcm.nemesis.ModifyTableMonkey: DB Version: 4.6.rc4
< t:2022-02-03 13:57:32,491 f:nemesis.py      l:329  c:sdcm.nemesis         p:INFO  > sdcm.nemesis.ModifyTableMonkey: Interval: 300 s
< t:2022-02-03 13:57:32,491 f:nemesis.py      l:330  c:sdcm.nemesis         p:INFO  > sdcm.nemesis.ModifyTableMonkey: Average duration: 0 s
< t:2022-02-03 13:57:32,491 f:nemesis.py      l:331  c:sdcm.nemesis         p:INFO  > sdcm.nemesis.ModifyTableMonkey: Total execution time: 2135 s
< t:2022-02-03 13:57:32,491 f:nemesis.py      l:332  c:sdcm.nemesis         p:INFO  > sdcm.nemesis.ModifyTableMonkey: Times executed: 0
< t:2022-02-03 13:57:32,491 f:nemesis.py      l:333  c:sdcm.nemesis         p:INFO  > sdcm.nemesis.ModifyTableMonkey: Unexpected errors: 0
< t:2022-02-03 13:57:32,492 f:nemesis.py      l:334  c:sdcm.nemesis         p:INFO  > sdcm.nemesis.ModifyTableMonkey: Operation log:
< t:2022-02-03 13:58:45,669 f:nemesis.py      l:3173 c:sdcm.nemesis         p:ERROR > sdcm.nemesis.ModifyTableMonkey: Unhandled exception in method <function Nemesis.disrupt_modify_table at 0x7f88b88d2830>
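
For context on why materialized-view code appears in the crash (see the core dump backtrace below), here is a minimal, hypothetical CQL sketch of the schema and the statements that were in flight. The ALTER TABLE statement and the loaders' LWT conditional updates are taken verbatim from the logs in this report; the base-table schema is simplified and the view definition (including its filter value) is an assumption, inferred only from the view name cqlstress_lwt_example.blogposts_not_updated_lwt_indicator that shows up in later log lines.

-- Base table written by the LWT workload (schema assumed/simplified):
CREATE TABLE cqlstress_lwt_example.blogposts (
    domain text,
    published_date timeuuid,
    lwt_indicator int,
    author text,
    PRIMARY KEY (domain, published_date)
);

-- Assumed filtered materialized view; evaluating a filter like this is what
-- matches_view_filter()/is_satisfied_by() are doing in the backtrace below
-- (the filter value here is a placeholder):
CREATE MATERIALIZED VIEW cqlstress_lwt_example.blogposts_not_updated_lwt_indicator AS
    SELECT domain, published_date, lwt_indicator
    FROM cqlstress_lwt_example.blogposts
    WHERE domain IS NOT NULL AND published_date IS NOT NULL AND lwt_indicator = 0
    PRIMARY KEY (domain, published_date);

-- Nemesis statement (from this report):
ALTER TABLE cqlstress_lwt_example.blogposts WITH min_index_interval = 256;

-- One of the concurrent LWT updates issued by the loaders (from the driver
-- logs in the second reproduction further down):
UPDATE blogposts SET lwt_indicator = 30000000
    WHERE domain = ? AND published_date = ?
    IF lwt_indicator >= -150000 AND lwt_indicator < 0;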

might also be related to #9933

core dump details:

2022-02-03 14:03:49.548: (CoreDumpEvent Severity.ERROR) period_type=one-time event_id=9508e75e-a40b-4ba6-a96e-f0a43fe150a0 node=Node longevity-lwt-parallel-24h-4-6-db-node-52f8a7d6-3 [54.236.34.38 | 10.0.2.103] (seed: False)
corefile_url=https://storage.cloud.google.com/upload.scylladb.com/core.scylla.113.d2396c4c3db4470383cc42e9131664cb.12910.1643896635000000000000/core.scylla.113.d2396c4c3db4470383cc42e9131664cb.12910.1643896635000000000000.gz
backtrace=           PID: 12910 (scylla)
UID: 113 (scylla)
GID: 119 (scylla)
Signal: 11 (SEGV)
Timestamp: Thu 2022-02-03 13:57:15 UTC (2min 10s ago)
Command Line: /usr/bin/scylla --blocked-reactor-notify-ms 100 --abort-on-lsa-bad-alloc 1 --abort-on-seastar-bad-alloc --abort-on-internal-error 1 --abort-on-ebadf 1 --enable-sstable-key-validation 1 --log-to-syslog 1 --log-to-stdout 0 --default-log-level info --network-stack posix --io-properties-file=/etc/scylla.d/io_properties.yaml --cpuset 0-7 --lock-memory=1
Executable: /opt/scylladb/libexec/scylla
Control Group: /scylla.slice/scylla-server.slice/scylla-server.service
Unit: scylla-server.service
Slice: scylla-server.slice
Boot ID: d2396c4c3db4470383cc42e9131664cb
Machine ID: 0eca5000438542cca6674ccc88b6e3c2
Hostname: longevity-lwt-parallel-24h-4-6-db-node-52f8a7d6-3
Storage: /var/lib/systemd/coredump/core.scylla.113.d2396c4c3db4470383cc42e9131664cb.12910.1643896635000000000000
Message: Process 12910 (scylla) of user 113 dumped core.
Stack trace of thread 12911:
#0  0x0000000001c7b5bc _ZN4cql34expr12_GLOBAL__N_19get_valueERKNS0_12column_valueERKNS1_21column_value_eval_bagE (scylla + 0x1a7b5bc)
#1  0x0000000001c74e75 _ZZN4cql34expr12_GLOBAL__N_115is_satisfied_byERKNS0_15binary_operatorERKNS1_21column_value_eval_bagEENK3$_8clERKNS0_12column_valueE (scylla + 0x1a74e75)
#2  0x0000000001c71c66 _ZNSt8__detail9__variant17__gen_vtable_implINS0_12_Multi_arrayIPFNS0_21__deduce_visit_resultIbEER18overloaded_functorIJZN4cql34expr12_GLOBAL__N_115is_satisfied_byERKNS7_10expressionERKNS8_21column_value_eval_bagEE4$_25ZNS8_15is_satisfied_byESB_SE_E4$_24ZNS8_15is_satisfied_byESB_SE_E4$_26ZNS8_15is_satisfied_byESB_SE_E4$_27ZNS8_15is_satisfied_byESB_SE_E4$_28ZNS8_15is_satisfied_byESB_SE_E4$_29ZNS8_15is_satisfied_byESB_SE_E4$_30ZNS8_15is_satisfied_byESB_SE_E4$_31ZNS8_15is_satisfied_byESB_SE_E4$_32ZNS8_15is_satisfied_byESB_SE_E4$_33ZNS8_15is_satisfied_byESB_SE_E4$_34ZNS8_15is_satisfied_byESB_SE_E4$_35ZNS8_15is_satisfied_byESB_SE_E4$_36ZNS8_15is_satisfied_byESB_SE_E4$_37ZNS8_15is_satisfied_byESB_SE_E4$_38ZNS8_15is_satisfied_byESB_SE_E4$_39EERSt7variantIJNS7_11conjunctionENS7_15binary_operatorENS7_12column_valueENS7_5tokenENS7_21unresolved_identifierENS7_25column_mutation_attributeENS7_13function_callENS7_4castENS7_15field_selectionENS7_4nullENS7_13bind_variableENS7_16untyped_constantENS7_8constantENS7_17tuple_constructorENS7_22collection_constructorENS7_20usertype_constructorEEEEJEEESt16integer_sequenceImJLm1EEEE14__visit_invokeESW_S1F_ (scylla + 0x1a71c66)
#3  0x0000000001c71be2 _ZNSt8__detail9__variant17__gen_vtable_implINS0_12_Multi_arrayIPFNS0_21__deduce_visit_resultIbEER18overloaded_functorIJZN4cql34expr12_GLOBAL__N_115is_satisfied_byERKNS7_10expressionERKNS8_21column_value_eval_bagEE4$_25ZNS8_15is_satisfied_byESB_SE_E4$_24ZNS8_15is_satisfied_byESB_SE_E4$_26ZNS8_15is_satisfied_byESB_SE_E4$_27ZNS8_15is_satisfied_byESB_SE_E4$_28ZNS8_15is_satisfied_byESB_SE_E4$_29ZNS8_15is_satisfied_byESB_SE_E4$_30ZNS8_15is_satisfied_byESB_SE_E4$_31ZNS8_15is_satisfied_byESB_SE_E4$_32ZNS8_15is_satisfied_byESB_SE_E4$_33ZNS8_15is_satisfied_byESB_SE_E4$_34ZNS8_15is_satisfied_byESB_SE_E4$_35ZNS8_15is_satisfied_byESB_SE_E4$_36ZNS8_15is_satisfied_byESB_SE_E4$_37ZNS8_15is_satisfied_byESB_SE_E4$_38ZNS8_15is_satisfied_byESB_SE_E4$_39EERSt7variantIJNS7_11conjunctionENS7_15binary_operatorENS7_12column_valueENS7_5tokenENS7_21unresolved_identifierENS7_25column_mutation_attributeENS7_13function_callENS7_4castENS7_15field_selectionENS7_4nullENS7_13bind_variableENS7_16untyped_constantENS7_8constantENS7_17tuple_constructorENS7_22collection_constructorENS7_20usertype_constructorEEEEJEEESt16integer_sequenceImJLm0EEEE14__visit_invokeESW_S1F_ (scylla + 0x1a71be2)
#4  0x0000000001c5e088 _ZN4cql34expr15is_satisfied_byERKNS0_10expressionERKSt6vectorIN7seastar13basic_sstringIajLj31ELb0EEESaIS7_EESB_RKN5query15result_row_viewEPSE_RKNS_9selection9selectionERKNS_13query_optionsE (scylla + 0x1a5e088)
#5  0x00000000028cf98e _ZZNK2db4view28view_filter_checking_visitor16check_if_matchesERK21clustering_key_prefixRKN5query15result_row_viewES8_ENKUlOT_E_clIRKN7seastar10shared_ptrIN4cql312restrictions25single_column_restrictionEEEEEDaSA_ (scylla + 0x26cf98e)
#6  0x00000000028cf571 _ZNK5query11result_view7consumeIRN2db4view28view_filter_checking_visitorEEEvRKNS_15partition_sliceEOT_ (scylla + 0x26cf571)
#7  0x000000000287da77 _ZN2db4view19matches_view_filterERK6schemaRK9view_infoRK13partition_keyRK14clustering_rowNSt6chrono10time_pointI8gc_clockNSD_8durationIlSt5ratioILl1ELl1EEEEEE (scylla + 0x267da77)
#8  0x000000000288635c _ZN2db4view12view_updates15generate_updateERK13partition_keyRK14clustering_rowRKSt8optionalIS5_ENSt6chrono10time_pointI8gc_clockNSC_8durationIlSt5ratioILl1ELl1EEEEEE (scylla + 0x268635c)
#9  0x000000000288ea93 _ZN2db4view19view_update_builder10on_resultsEv (scylla + 0x268ea93)
#10 0x0000000002889ddb _ZN2db4view19view_update_builder10build_someEv (scylla + 0x2689ddb)
#11 0x000000000116e225 _ZNK5table35generate_and_propagate_view_updatesERKN7seastar13lw_shared_ptrIK6schemaEE13reader_permitOSt6vectorIN2db4view13view_and_baseESaISB_EEO8mutationNS0_18optimized_optionalI20flat_mutation_readerEEN7tracing15trace_state_ptrENSt6chrono10time_pointI8gc_clockNSM_8durationIlSt5ratioILl1ELl1EEEEEE (scylla + 0xf6e225)
#12 0x000000000118b601 _ZNK5table28do_push_view_replica_updatesEN7seastar13lw_shared_ptrIK6schemaEE8mutationNSt6chrono10time_pointINS0_12lowres_clockENS6_8durationIlSt5ratioILl1ELl1000EEEEEE15mutation_sourceN7tracing15trace_state_ptrER28reader_concurrency_semaphoreRKNS0_17io_priority_classE8enum_setI10super_enumIN5query15partition_slice6optionEJLSQ_0ELSQ_1ELSQ_2ELSQ_3ELSQ_4ELSQ_5ELSQ_6ELSQ_7ELSQ_8ELSQ_9ELSQ_10ELSQ_11ELSQ_12EEEE (scylla + 0xf8b601)
#13 0x0000000001189146 _ZNK5table25push_view_replica_updatesERKN7seastar13lw_shared_ptrIK6schemaEERK15frozen_mutationNSt6chrono10time_pointINS0_12lowres_clockENSA_8durationIlSt5ratioILl1ELl1000EEEEEEN7tracing15trace_state_ptrER28reader_concurrency_semaphore (scylla + 0xf89146)
#14 0x0000000001067826 _ZN8database8do_applyEN7seastar13lw_shared_ptrIK6schemaEERK15frozen_mutationN7tracing15trace_state_ptrENSt6chrono10time_pointINS0_12lowres_clockENSA_8durationIlSt5ratioILl1ELl1000EEEEEENS0_10bool_classIN2db14force_sync_tagEEE (scylla + 0xe67826)
#15 0x00000000010eb70b _ZN7seastar20noncopyable_functionIFNS_6futureIvEEP8databaseNS_13lw_shared_ptrIK6schemaEERK15frozen_mutationN7tracing15trace_state_ptrENSt6chrono10time_pointINS_12lowres_clockENSE_8durationIlSt5ratioILl1ELl1000EEEEEENS_10bool_classIN2db14force_sync_tagEEEEE17direct_vtable_forISt7_Mem_fnIMS3_FS2_S8_SB_SD_SL_SP_EEE4callEPKSR_S4_S8_SB_SD_SL_SP_ (scylla + 0xeeb70b)
#16 0x000000000110b28d _ZN7seastar20noncopyable_functionIFNS_6futureIvEEP8databaseNS_13lw_shared_ptrIK6schemaEERK15frozen_mutationN7tracing15trace_state_ptrENSt6chrono10time_pointINS_12lowres_clockENSE_8durationIlSt5ratioILl1ELl1000EEEEEENS_10bool_classIN2db14force_sync_tagEEEEE17direct_vtable_forIZNS_35inheriting_concrete_execution_stageIS2_JS4_S8_SB_SD_SL_SP_EE20make_stage_for_groupENS_16scheduling_groupEEUlS4_S8_SB_SD_SL_SP_E_E4callEPKSR_S4_S8_SB_SD_SL_SP_ (scylla + 0xf0b28d)
#17 0x00000000010d701c _ZN7seastar24concrete_execution_stageINS_6futureIvEEJP8databaseNS_13lw_shared_ptrIK6schemaEERK15frozen_mutationN7tracing15trace_state_ptrENSt6chrono10time_pointINS_12lowres_clockENSE_8durationIlSt5ratioILl1ELl1000EEEEEENS_10bool_classIN2db14force_sync_tagEEEEE8do_flushEv (scylla + 0xed701c)
#18 0x0000000003d79716 _ZN7seastar11lambda_taskIZNS_15execution_stage5flushEvE3$_5E15run_and_disposeEv (scylla + 0x3b79716)
#19 0x0000000003dc3d05 _ZN7seastar7reactor14run_some_tasksEv (scylla + 0x3bc3d05)
#20 0x0000000003dc50f8 _ZN7seastar7reactor6do_runEv (scylla + 0x3bc50f8)
#21 0x0000000003de2726 _ZNSt17_Function_handlerIFvvEZN7seastar3smp9configureEN5boost15program_options13variables_mapENS1_14reactor_configEE4$_98E9_M_invokeERKSt9_Any_data (scylla + 0x3be2726)
#22 0x0000000003d9c33b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x3b9c33b)
#23 0x00007f93fb1cf299 start_thread (libpthread.so.0 + 0x9299)
#24 0x00007f93fa6e1353 __clone (libc.so.6 + 0x100353)
Stack trace of thread 12918:
#0  0x00007f93fb1d894c read (libpthread.so.0 + 0x1294c)
#1  0x0000000003e05885 _ZN7seastar11thread_pool4workENS_13basic_sstringIcjLj15ELb1EEE (scylla + 0x3c05885)
#2  0x0000000003e05bf0 _ZNSt17_Function_handlerIFvvEZN7seastar11thread_poolC1EPNS1_7reactorENS1_13basic_sstringIcjLj15ELb1EEEE3$_0E9_M_invokeERKSt9_Any_data (scylla + 0x3c05bf0)
#3  0x0000000003d9c33b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x3b9c33b)
#4  0x00007f93fb1cf299 start_thread (libpthread.so.0 + 0x9299)
#5  0x00007f93fa6e1353 __clone (libc.so.6 + 0x100353)
Stack trace of thread 12910:
#0  0x000000000472ecc0 n/a (scylla + 0x452ecc0)
download_instructions=gsutil cp gs://upload.scylladb.com/core.scylla.113.d2396c4c3db4470383cc42e9131664cb.12910.1643896635000000000000/core.scylla.113.d2396c4c3db4470383cc42e9131664cb.12910.1643896635000000000000.gz .
gunzip /var/lib/systemd/coredump/core.scylla.113.d2396c4c3db4470383cc42e9131664cb.12910.1643896635000000000000.gz
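
For readability, here is a shortened, c++filt-demangled rendering of the key frames of thread 12911 from the backtrace above (argument lists abbreviated; reconstructed from the mangled names, so treat it as a reading aid rather than authoritative output):

#0  cql3::expr::(anonymous namespace)::get_value(column_value const&, column_value_eval_bag const&)
#1  lambda in cql3::expr::(anonymous namespace)::is_satisfied_by(binary_operator const&, column_value_eval_bag const&)
#2-#3  std::variant visitation over cql3::expr::expression (overloaded_functor used by is_satisfied_by)
#4  cql3::expr::is_satisfied_by(expression const&, ...)
#5  db::view::view_filter_checking_visitor::check_if_matches(clustering_key_prefix const&, query::result_row_view const&, ...)
#6  query::result_view::consume<db::view::view_filter_checking_visitor&>(query::partition_slice const&, ...)
#7  db::view::matches_view_filter(schema const&, view_info const&, partition_key const&, clustering_row const&, gc_clock::time_point)
#8  db::view::view_updates::generate_update(partition_key const&, clustering_row const&, std::optional<clustering_row> const&, gc_clock::time_point)
#9  db::view::view_update_builder::on_results()
#10 db::view::view_update_builder::build_some()
#11 table::generate_and_propagate_view_updates(...)
#12 table::do_push_view_replica_updates(...)
#13 table::push_view_replica_updates(...)
#14 database::do_apply(...)
#15-#22 seastar execution-stage / reactor / thread plumbing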

====================================

Restore Monitor Stack command: $ hydra investigate show-monitor 52f8a7d6-6b64-4051-9232-7bb17ee37cf3
Restore monitor on AWS instance using Jenkins job
Show all stored logs command: $ hydra investigate show-logs 52f8a7d6-6b64-4051-9232-7bb17ee37cf3

Test id: 52f8a7d6-6b64-4051-9232-7bb17ee37cf3

Logs:
grafana - https://cloudius-jenkins-test.s3.amazonaws.com/52f8a7d6-6b64-4051-9232-7bb17ee37cf3/20220203_140655/grafana-screenshot-longevity-lwt-parallel-schema-changes-with-disruptive-24h-test-scylla-per-server-metrics-nemesis-20220203_140822-longevity-lwt-parallel-24h-4-6-monitor-node-52f8a7d6-1.png
grafana - https://cloudius-jenkins-test.s3.amazonaws.com/52f8a7d6-6b64-4051-9232-7bb17ee37cf3/20220203_140655/grafana-screenshot-overview-20220203_140655-longevity-lwt-parallel-24h-4-6-monitor-node-52f8a7d6-1.png
db-cluster - https://cloudius-jenkins-test.s3.amazonaws.com/52f8a7d6-6b64-4051-9232-7bb17ee37cf3/20220203_141729/db-cluster-52f8a7d6.tar.gz
loader-set - https://cloudius-jenkins-test.s3.amazonaws.com/52f8a7d6-6b64-4051-9232-7bb17ee37cf3/20220203_141729/loader-set-52f8a7d6.tar.gz
monitor-set - https://cloudius-jenkins-test.s3.amazonaws.com/52f8a7d6-6b64-4051-9232-7bb17ee37cf3/20220203_141729/monitor-set-52f8a7d6.tar.gz
sct - https://cloudius-jenkins-test.s3.amazonaws.com/52f8a7d6-6b64-4051-9232-7bb17ee37cf3/20220203_141729/sct-runner-52f8a7d6.tar.gz

Jenkins job URL


slivne commented Feb 3, 2022

The crash seems to be during view building

@eliransin / @psarna please look at this tomorrow and figure out if this is fixed in master, or whether it's a change we made in 4.6 that we can revert.

If we have no other choice, please send a fix - in any case we need one for master (if it was not fixed there).

At this point I prefer to revert a change in 4.6 that was not in 4.5 in order to make progress (if it's local and simple).


psarna commented Feb 4, 2022

I'll take a look now


psarna commented Feb 4, 2022

How do I download the matching binary? I used Benny's reloc tool (https://github.com/scylladb/scylla_s3_reloc_server) for finding the matching build-id (bf9a6fccf70cbe9f5c482869342c931fadd4c0b2), but it didn't find anything. And without the binary, the core itself is not inspectable.
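
For reference, a minimal sketch of verifying that a locally downloaded binary matches the core's build-id and then opening the core - assuming elfutils and gdb are installed and that ./scylla is the executable extracted from the matching relocatable/debuginfo package (the paths are illustrative, not the ones actually used here):

# Build-id of the downloaded binary (expected: bf9a6fccf70cbe9f5c482869342c931fadd4c0b2)
readelf -n ./scylla | grep 'Build ID'

# Build-ids of the modules referenced by the core
eu-unstrip -n --core=core.scylla.113.d2396c4c3db4470383cc42e9131664cb.12910.1643896635000000000000

# Once they match, load the core and dump all thread backtraces
gdb ./scylla core.scylla.113.d2396c4c3db4470383cc42e9131664cb.12910.1643896635000000000000
(gdb) thread apply all bt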


psarna commented Feb 4, 2022

Meanwhile, some random thoughts.

  1. Master changed very substantially since 4.6, including vast rewrites (term => expressions), so it's very hard to reason about whether master fixed the problem or not. On the other hand, there were no commits that claim to fix any particular issue, so it's not likely a missing backport. The history of master vs 4.6 for the expressions.cc source file is:
c6f2afb93 Merge 'cql3: Allow to skip EQ restricted columns in ORDER BY' from Jan Ciołek
7bbfa48bc cql3: Add has_eq_restriction_on_column function
0bd139e81 Merge 'cql3: expr: detemplate and deinline find_in_expression()
be1490441 cql3: Don't allow unset values inside UDT
f7f77df14 cql3: expr: detemplate count_if()
c01188c41 cql3: expr: deinline recurse_until()
595cc328b Merge 'cql3: Remove term, replace with expression' from Jan Ciołek
e45834082 cql3: Remove term
c84e941df cql3: expr: Add size check to evaluate_set
7bc65868e cql3: expr: Add expr::contains_bind_marker
e37906ae3 cql3: expr: Remove term in expression
fd1596171 cql3: expr: Add evaluate_IN_list(expression, options)
e5391f1ee types: Add map_type_impl::serialize(range of <bytes, bytes>)
a82351dc7 cql3: expr: Add constant::view() method
c2eb3a58b cql3: expr: Implement fill_prepare_context(expression)
9c4051607 cql3: expr: Add receiver to expr::bind_variable
  2. The trace also points to a visitor that I implemented last year (d6b0a83), also as part of the migration towards expressions; perhaps there's a subtle bug in there that wasn't found until now. I'll start with code inspection.


psarna commented Feb 4, 2022

Also, @cvybhu, perhaps you've seen a similar fault lately (one that ended up with a failure in is_satisfied_by)?


slivne commented Feb 4, 2022


psarna commented Feb 4, 2022

@slivne I keep getting Error: [object Object] when I click on these links with package directories - is there a magic way of downloading them that I'm not aware of?


slivne commented Feb 4, 2022

I admit it does not always work for me as well.

The build that built the packages and AMI is this:

https://jenkins.scylladb.com/job/scylla-4.6/job/byo/job/byo_build_tests_dtest/9/consoleFull


slivne commented Feb 4, 2022

The link I provided does work for me.

If nothing else works, please spin up an instance using the AMI ID in the description (it's in us-east-1); you should have everything installed on that instance.


psarna commented Feb 4, 2022

The link works when I tunnel from the office ¯\_(ツ)_/¯ I have no idea why.
But anyway, this one linked to .deb packages that lack debuginfo (the debug package only contains symbols for node_exporter). I'll try this one next: http://downloads.scylladb.com/testing/unstable/scylla/branch-4.6/relocatable/byo/2022-02-03T11%3A33%3A23Z/


psarna commented Feb 4, 2022

That one is probably it - the debuginfo matches, thanks. I hope I'll figure out something meaningful from it today.

@KnifeyMoloko

The same / a similar situation happened today when running with 4.6.rc5: LWT updates to one and then two columns, a schema mismatch resolved in a schema merge, then a segfault and coredump.

============================================================================================
Installation details
Kernel version: 5.11.0-1022-aws
Scylla version (or git commit hash): 4.6.rc5-0.20220203.5694ec189 with build-id f5d85bf5abe6d2f9fd3487e2469ce1c34304cc14
Cluster size: 4 nodes (i3.2xlarge)
Scylla running with shards number (live nodes):
longevity-lwt-parallel-24h-4-6-db-node-19780877-1 (3.226.248.24 | 10.0.1.232): 8 shards
longevity-lwt-parallel-24h-4-6-db-node-19780877-2 (18.214.40.181 | 10.0.2.66): 8 shards
longevity-lwt-parallel-24h-4-6-db-node-19780877-3 (3.237.197.70 | 10.0.3.173): 8 shards
longevity-lwt-parallel-24h-4-6-db-node-19780877-5 (100.24.99.227 | 10.0.2.55): 8 shards
Scylla running with shards number (terminated nodes):
longevity-lwt-parallel-24h-4-6-db-node-19780877-4 (3.224.114.137 | 10.0.1.201): 8 shards
OS (RHEL/CentOS/Ubuntu/AWS AMI): ami-097f7b2efb7723718 (aws: us-east-1)

Test: longevity-lwt-parallel-schema-changes-with-disruptive-24h-test
Test name: longevity_lwt_test.LWTLongevityTest.test_lwt_longevity
Test config file(s):

  • longevity-lwt-parallel-24h.yaml

Issue description

====================================
During the ModifyTable nemesis, after executing ALTER TABLE cqlstress_lwt_example.blogposts WITH caching = {'keys': 'ALL', 'rows_per_partition': 1440}; we got a segfault and coredump.

< t:2022-02-04 13:31:10,544 f:common.py       l:1284 c:utils                p:DEBUG > Executing CQL 'ALTER TABLE cqlstress_lwt_example.blogposts WITH caching = {'keys': 'ALL', 'rows_per_partition': 1440};' ...
< t:2022-02-04 13:31:10,545 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > First connection created to 10.0.3.173:9042 for shard_id=0
< t:2022-02-04 13:31:10,545 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Finished initializing connection for host 10.0.3.173:9042
< t:2022-02-04 13:31:10,545 f:thread.py       l:52   c:cassandra.cluster    p:DEBUG > Added pool for host 10.0.3.173:9042 to session
< t:2022-02-04 13:31:10,552 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=3 on host 10.0.2.66:9042
< t:2022-02-04 13:31:10,552 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705097616) created to shard_id=3 on host 10.0.2.66:9042
< t:2022-02-04 13:31:10,552 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 2/8 shards on host 10.0.2.66:9042 (6 missing or needs replacement)
< t:2022-02-04 13:31:10,554 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > First connection created to 10.0.1.201:9042 for shard_id=0
< t:2022-02-04 13:31:10,554 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Finished initializing connection for host 10.0.1.201:9042
< t:2022-02-04 13:31:10,554 f:thread.py       l:52   c:cassandra.cluster    p:DEBUG > Added pool for host 10.0.1.201:9042 to session
< t:2022-02-04 13:31:10,562 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=5 on host 10.0.2.66:9042
< t:2022-02-04 13:31:10,562 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705097328) created to shard_id=5 on host 10.0.2.66:9042
< t:2022-02-04 13:31:10,562 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 3/8 shards on host 10.0.2.66:9042 (5 missing or needs replacement)
< t:2022-02-04 13:31:10,562 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=4 on host 10.0.2.66:9042
< t:2022-02-04 13:31:10,563 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344704388976) created to shard_id=4 on host 10.0.2.66:9042
< t:2022-02-04 13:31:10,563 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 4/8 shards on host 10.0.2.66:9042 (4 missing or needs replacement)
< t:2022-02-04 13:31:10,565 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=7 on host 10.0.2.66:9042
< t:2022-02-04 13:31:10,566 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705096704) created to shard_id=7 on host 10.0.2.66:9042
< t:2022-02-04 13:31:10,566 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 5/8 shards on host 10.0.2.66:9042 (3 missing or needs replacement)
< t:2022-02-04 13:31:10,568 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=6 on host 10.0.2.66:9042
< t:2022-02-04 13:31:10,569 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705096512) created to shard_id=6 on host 10.0.2.66:9042
< t:2022-02-04 13:31:10,569 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 6/8 shards on host 10.0.2.66:9042 (2 missing or needs replacement)
< t:2022-02-04 13:31:10,573 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=0 on host 10.0.2.66:9042
< t:2022-02-04 13:31:10,573 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705095936) created to shard_id=0 on host 10.0.2.66:9042
< t:2022-02-04 13:31:10,573 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 7/8 shards on host 10.0.2.66:9042 (1 missing or needs replacement)
< t:2022-02-04 13:31:10,577 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=1 on host 10.0.2.66:9042
< t:2022-02-04 13:31:10,577 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705096272) created to shard_id=1 on host 10.0.2.66:9042
< t:2022-02-04 13:31:10,577 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 8/8 shards on host 10.0.2.66:9042 (0 missing or needs replacement)
< t:2022-02-04 13:31:10,577 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > All shards of host 10.0.2.66:9042 have at least one connection, closing 0 excess connections
< t:2022-02-04 13:31:10,579 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=2 on host 10.0.2.66:9042
< t:2022-02-04 13:31:10,579 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > All shards are already covered, closing newly opened excess connection for host 10.0.2.66:9042
< t:2022-02-04 13:31:10,579 f:libevreactor.py l:287  c:cassandra.io.libevreactor p:DEBUG > Closing connection (140344705095744) to 10.0.2.66:9042
< t:2022-02-04 13:31:10,579 f:libevreactor.py l:291  c:cassandra.io.libevreactor p:DEBUG > Closed socket to 10.0.2.66:9042
< t:2022-02-04 13:31:10,582 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=7 on host 10.0.1.232:9042
< t:2022-02-04 13:31:10,582 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705095216) created to shard_id=7 on host 10.0.1.232:9042
< t:2022-02-04 13:31:10,582 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 2/8 shards on host 10.0.1.232:9042 (6 missing or needs replacement)
< t:2022-02-04 13:31:10,589 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=0 on host 10.0.1.232:9042
< t:2022-02-04 13:31:10,589 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705095408) created to shard_id=0 on host 10.0.1.232:9042
< t:2022-02-04 13:31:10,589 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 3/8 shards on host 10.0.1.232:9042 (5 missing or needs replacement)
< t:2022-02-04 13:31:10,593 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=1 on host 10.0.1.232:9042
< t:2022-02-04 13:31:10,593 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705095552) created to shard_id=1 on host 10.0.1.232:9042
< t:2022-02-04 13:31:10,593 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 4/8 shards on host 10.0.1.232:9042 (4 missing or needs replacement)
< t:2022-02-04 13:31:10,601 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=2 on host 10.0.1.232:9042
< t:2022-02-04 13:31:10,601 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705094880) created to shard_id=2 on host 10.0.1.232:9042
< t:2022-02-04 13:31:10,601 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 5/8 shards on host 10.0.1.232:9042 (3 missing or needs replacement)
< t:2022-02-04 13:31:10,605 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=3 on host 10.0.1.232:9042
< t:2022-02-04 13:31:10,605 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705095072) created to shard_id=3 on host 10.0.1.232:9042
< t:2022-02-04 13:31:10,605 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 6/8 shards on host 10.0.1.232:9042 (2 missing or needs replacement)
< t:2022-02-04 13:31:10,613 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=4 on host 10.0.1.232:9042
< t:2022-02-04 13:31:10,613 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705094304) created to shard_id=4 on host 10.0.1.232:9042
< t:2022-02-04 13:31:10,613 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 7/8 shards on host 10.0.1.232:9042 (1 missing or needs replacement)
< t:2022-02-04 13:31:10,618 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=5 on host 10.0.1.232:9042
< t:2022-02-04 13:31:10,618 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705094496) created to shard_id=5 on host 10.0.1.232:9042
< t:2022-02-04 13:31:10,618 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 8/8 shards on host 10.0.1.232:9042 (0 missing or needs replacement)
< t:2022-02-04 13:31:10,618 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > All shards of host 10.0.1.232:9042 have at least one connection, closing 0 excess connections
< t:2022-02-04 13:31:10,623 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO | New session 684 of user scyllaadm.
< t:2022-02-04 13:31:10,623 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO | Started Session 684 of user scyllaadm.
< t:2022-02-04 13:31:10,625 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=6 on host 10.0.1.232:9042
< t:2022-02-04 13:31:10,625 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > All shards are already covered, closing newly opened excess connection for host 10.0.1.232:9042
< t:2022-02-04 13:31:10,625 f:libevreactor.py l:287  c:cassandra.io.libevreactor p:DEBUG > Closing connection (140344705094688) to 10.0.1.232:9042
< t:2022-02-04 13:31:10,625 f:libevreactor.py l:291  c:cassandra.io.libevreactor p:DEBUG > Closed socket to 10.0.1.232:9042
< t:2022-02-04 13:31:10,631 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=1 on host 10.0.3.173:9042
< t:2022-02-04 13:31:10,631 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705093824) created to shard_id=1 on host 10.0.3.173:9042
< t:2022-02-04 13:31:10,631 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 2/8 shards on host 10.0.3.173:9042 (6 missing or needs replacement)
< t:2022-02-04 13:31:10,637 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=2 on host 10.0.3.173:9042
< t:2022-02-04 13:31:10,637 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705094016) created to shard_id=2 on host 10.0.3.173:9042
< t:2022-02-04 13:31:10,637 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 3/8 shards on host 10.0.3.173:9042 (5 missing or needs replacement)
< t:2022-02-04 13:31:10,648 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=4 on host 10.0.3.173:9042
< t:2022-02-04 13:31:10,648 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705092576) created to shard_id=4 on host 10.0.3.173:9042
< t:2022-02-04 13:31:10,648 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 4/8 shards on host 10.0.3.173:9042 (4 missing or needs replacement)
< t:2022-02-04 13:31:10,657 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=5 on host 10.0.3.173:9042
< t:2022-02-04 13:31:10,657 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705092288) created to shard_id=5 on host 10.0.3.173:9042
< t:2022-02-04 13:31:10,657 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 5/8 shards on host 10.0.3.173:9042 (3 missing or needs replacement)
< t:2022-02-04 13:31:10,663 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,664 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-2 !  NOTICE | scyllaadm : TTY=unknown ; PWD=/home/scyllaadm ; USER=root ; COMMAND=/usr/bin/coredumpctl --no-pager --no-legend
< t:2022-02-04 13:31:10,664 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-2 !    INFO | pam_unix(sudo:session): session opened for user root by (uid=0)
< t:2022-02-04 13:31:10,664 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-2 !    INFO | pam_unix(sudo:session): session closed for user root
< t:2022-02-04 13:31:10,665 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.663: (CompactionEvent Severity.NORMAL) period_type=end event_id=6ad3581b-611e-4ed7-9bfe-c97a70416e71 duration=0s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=5 table=cqlstress_lwt_example.blogposts_not_updated_lwt_indicator compaction_process_id=b6f961c0-85be-11ec-b0dd-28314f9b82e6
< t:2022-02-04 13:31:10,668 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=6 on host 10.0.3.173:9042
< t:2022-02-04 13:31:10,668 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705092480) created to shard_id=6 on host 10.0.3.173:9042
< t:2022-02-04 13:31:10,668 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 6/8 shards on host 10.0.3.173:9042 (2 missing or needs replacement)
< t:2022-02-04 13:31:10,673 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=7 on host 10.0.3.173:9042
< t:2022-02-04 13:31:10,673 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705091808) created to shard_id=7 on host 10.0.3.173:9042
< t:2022-02-04 13:31:10,673 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 7/8 shards on host 10.0.3.173:9042 (1 missing or needs replacement)
< t:2022-02-04 13:31:10,674 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=3 on host 10.0.3.173:9042
< t:2022-02-04 13:31:10,675 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705092816) created to shard_id=3 on host 10.0.3.173:9042
< t:2022-02-04 13:31:10,675 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 8/8 shards on host 10.0.3.173:9042 (0 missing or needs replacement)
< t:2022-02-04 13:31:10,675 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > All shards of host 10.0.3.173:9042 have at least one connection, closing 0 excess connections
< t:2022-02-04 13:31:10,684 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=0 on host 10.0.3.173:9042
< t:2022-02-04 13:31:10,684 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > All shards are already covered, closing newly opened excess connection for host 10.0.3.173:9042
< t:2022-02-04 13:31:10,685 f:libevreactor.py l:287  c:cassandra.io.libevreactor p:DEBUG > Closing connection (140344705092000) to 10.0.3.173:9042
< t:2022-02-04 13:31:10,685 f:libevreactor.py l:291  c:cassandra.io.libevreactor p:DEBUG > Closed socket to 10.0.3.173:9042
< t:2022-02-04 13:31:10,686 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=1 on host 10.0.1.201:9042
< t:2022-02-04 13:31:10,686 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705092768) created to shard_id=1 on host 10.0.1.201:9042
< t:2022-02-04 13:31:10,686 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 2/8 shards on host 10.0.1.201:9042 (6 missing or needs replacement)
< t:2022-02-04 13:31:10,696 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=2 on host 10.0.1.201:9042
< t:2022-02-04 13:31:10,696 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705091616) created to shard_id=2 on host 10.0.1.201:9042
< t:2022-02-04 13:31:10,696 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 3/8 shards on host 10.0.1.201:9042 (5 missing or needs replacement)
< t:2022-02-04 13:31:10,698 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=3 on host 10.0.1.201:9042
< t:2022-02-04 13:31:10,698 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705091184) created to shard_id=3 on host 10.0.1.201:9042
< t:2022-02-04 13:31:10,698 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 4/8 shards on host 10.0.1.201:9042 (4 missing or needs replacement)
< t:2022-02-04 13:31:10,707 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=4 on host 10.0.1.201:9042
< t:2022-02-04 13:31:10,707 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705091040) created to shard_id=4 on host 10.0.1.201:9042
< t:2022-02-04 13:31:10,708 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 5/8 shards on host 10.0.1.201:9042 (3 missing or needs replacement)
< t:2022-02-04 13:31:10,709 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=5 on host 10.0.1.201:9042
< t:2022-02-04 13:31:10,709 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705090896) created to shard_id=5 on host 10.0.1.201:9042
< t:2022-02-04 13:31:10,709 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 6/8 shards on host 10.0.1.201:9042 (2 missing or needs replacement)
< t:2022-02-04 13:31:10,711 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.648 [cluster1-nio-worker-6] INFO  c.d.driver.core.RequestHandler - Query select * from blogposts where domain = ? LIMIT 1 is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,713 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.649 [cluster1-nio-worker-3] INFO  c.d.driver.core.RequestHandler - Query update blogposts set lwt_indicator = 30000000 where domain = ? and published_date = ? if lwt_indicator >=-150000 and lwt_indicator < 0 is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,713 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=7 on host 10.0.1.201:9042
< t:2022-02-04 13:31:10,713 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705090032) created to shard_id=7 on host 10.0.1.201:9042
< t:2022-02-04 13:31:10,713 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 7/8 shards on host 10.0.1.201:9042 (1 missing or needs replacement)
< t:2022-02-04 13:31:10,714 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.651 [cluster1-nio-worker-6] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,716 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.652 [cluster1-nio-worker-3] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,716 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.653 [cluster1-nio-worker-4] INFO  c.d.driver.core.RequestHandler - Query update blogposts set lwt_indicator = 20000000, author = 'text' where domain = ? and published_date = ? if lwt_indicator > 0 and lwt_indicator <= 150000 and author != 'text' is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,719 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !  NOTICE | scyllaadm : TTY=unknown ; PWD=/home/scyllaadm ; USER=root ; COMMAND=/usr/bin/mv /tmp/tmp.oZN1FHQJwU /etc/scylla/scylla.yaml
< t:2022-02-04 13:31:10,719 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO | pam_unix(sudo:session): session opened for user root by (uid=0)
< t:2022-02-04 13:31:10,719 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO | pam_unix(sudo:session): session closed for user root
< t:2022-02-04 13:31:10,719 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !  NOTICE | scyllaadm : TTY=unknown ; PWD=/home/scyllaadm ; USER=root ; COMMAND=/usr/bin/chown root:root /etc/scylla/scylla.yaml
< t:2022-02-04 13:31:10,719 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO | pam_unix(sudo:session): session opened for user root by (uid=0)
< t:2022-02-04 13:31:10,719 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.656 [cluster1-nio-worker-4] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,719 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO | pam_unix(sudo:session): session closed for user root
< t:2022-02-04 13:31:10,719 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !  NOTICE | scyllaadm : TTY=unknown ; PWD=/home/scyllaadm ; USER=root ; COMMAND=/usr/bin/chmod 644 /etc/scylla/scylla.yaml
< t:2022-02-04 13:31:10,720 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO | pam_unix(sudo:session): session opened for user root by (uid=0)
< t:2022-02-04 13:31:10,720 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO | pam_unix(sudo:session): session closed for user root
< t:2022-02-04 13:31:10,720 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,720 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=6 on host 10.0.1.201:9042
< t:2022-02-04 13:31:10,721 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > New connection (140344705090512) created to shard_id=6 on host 10.0.1.201:9042
< t:2022-02-04 13:31:10,721 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.720: (CompactionEvent Severity.NORMAL) period_type=begin event_id=59f601da-2f67-4f13-afab-13de5f8e6e60 node=longevity-lwt-parallel-24h-4-6-db-node-19780877-4 shard=0 table=system_schema.view_virtual_columns compaction_process_id=b73a8970-85be-11ec-9f12-e9f8b1d7b96b
< t:2022-02-04 13:31:10,721 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Connected to 8/8 shards on host 10.0.1.201:9042 (0 missing or needs replacement)
< t:2022-02-04 13:31:10,721 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > All shards of host 10.0.1.201:9042 have at least one connection, closing 0 excess connections
< t:2022-02-04 13:31:10,725 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > Received a connection for shard_id=0 on host 10.0.1.201:9042
< t:2022-02-04 13:31:10,725 f:thread.py       l:52   c:cassandra.pool       p:DEBUG > All shards are already covered, closing newly opened excess connection for host 10.0.1.201:9042
< t:2022-02-04 13:31:10,726 f:libevreactor.py l:287  c:cassandra.io.libevreactor p:DEBUG > Closing connection (140344705090416) to 10.0.1.201:9042
< t:2022-02-04 13:31:10,726 f:libevreactor.py l:291  c:cassandra.io.libevreactor p:DEBUG > Closed socket to 10.0.1.201:9042
< t:2022-02-04 13:31:10,731 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.668 [cluster1-nio-worker-5] INFO  c.d.driver.core.RequestHandler - Query select * from blogposts where domain = ? LIMIT 1 is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,731 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.673 [cluster1-nio-worker-2] INFO  c.d.driver.core.RequestHandler - Query update blogposts set lwt_indicator = 30000000 where domain = ? and published_date = ? if lwt_indicator >=-150000 and lwt_indicator < 0 is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,732 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.669 [cluster1-nio-worker-4] INFO  c.d.driver.core.RequestHandler - Query select * from blogposts where domain = ? LIMIT 1 is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,732 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.669 [cluster1-nio-worker-5] INFO  c.d.driver.core.RequestHandler - Query select * from blogposts where domain = ? LIMIT 1 is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,734 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.670 [cluster1-nio-worker-6] INFO  c.d.driver.core.RequestHandler - Query update blogposts set lwt_indicator = 30000000 where domain = ? and published_date = ? if lwt_indicator >=-150000 and lwt_indicator < 0 is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,738 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.680 [cluster1-nio-worker-2] INFO  c.d.driver.core.RequestHandler - Query update blogposts set lwt_indicator = 20000000, author = 'text' where domain = ? and published_date = ? if lwt_indicator > 0 and lwt_indicator <= 150000 and author != 'text' is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,740 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.676 [cluster1-nio-worker-5] INFO  c.d.driver.core.RequestHandler - Query update blogposts set lwt_indicator = 20000000, author = 'text' where domain = ? and published_date = ? if lwt_indicator > 0 and lwt_indicator <= 150000 and author != 'text' is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,744 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.686 [cluster1-nio-worker-2] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,744 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.681 [cluster1-nio-worker-6] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,745 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.681 [cluster1-nio-worker-4] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,745 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.686 [cluster1-nio-worker-2] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,746 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.682 [cluster1-nio-worker-5] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,746 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.682 [cluster1-nio-worker-5] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,746 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.683 [cluster1-nio-worker-0] DEBUG com.datastax.driver.core.Cluster - Received event EVENT UPDATED TABLE cqlstress_lwt_example.blogposts, scheduling delivery
< t:2022-02-04 13:31:10,747 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.684 [cluster1-nio-worker-0] DEBUG com.datastax.driver.core.Cluster - Received event EVENT UPDATED TABLE cqlstress_lwt_example.blogposts, scheduling delivery
< t:2022-02-04 13:31:10,747 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.689 [cluster1-nio-worker-0] DEBUG com.datastax.driver.core.Cluster - Received event EVENT UPDATED TABLE cqlstress_lwt_example.blogposts, scheduling delivery
< t:2022-02-04 13:31:10,748 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.683 [cluster1-nio-worker-2] INFO  c.d.driver.core.RequestHandler - Query update blogposts set lwt_indicator = 20000000, author = 'text' where domain = ? and published_date = ? if lwt_indicator > 0 and lwt_indicator <= 150000 and author != 'text' is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,748 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.684 [cluster1-nio-worker-0] DEBUG com.datastax.driver.core.Cluster - Received event EVENT UPDATED TABLE cqlstress_lwt_example.blogposts, scheduling delivery
< t:2022-02-04 13:31:10,748 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.689 [cluster1-nio-worker-0] DEBUG com.datastax.driver.core.Cluster - Received event EVENT UPDATED TABLE cqlstress_lwt_example.blogposts, scheduling delivery
< t:2022-02-04 13:31:10,749 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.685 [cluster1-nio-worker-5] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,749 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.685 [cluster1-nio-worker-2] INFO  c.d.driver.core.RequestHandler - Query update blogposts set lwt_indicator = 20000000, author = 'text' where domain = ? and published_date = ? if lwt_indicator > 0 and lwt_indicator <= 150000 and author != 'text' is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,750 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.687 [cluster1-nio-worker-3] INFO  c.d.driver.core.RequestHandler - Query select * from blogposts where domain = ? LIMIT 1 is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,750 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.692 [cluster1-nio-worker-4] INFO  c.d.driver.core.RequestHandler - Query update blogposts set lwt_indicator = 20000000, author = 'text' where domain = ? and published_date = ? if lwt_indicator > 0 and lwt_indicator <= 150000 and author != 'text' is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,751 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.687 [cluster1-nio-worker-2] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,751 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.688 [cluster1-nio-worker-2] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,759 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.695 [cluster1-nio-worker-2] INFO  c.d.driver.core.RequestHandler - Query update blogposts set lwt_indicator = 30000000 where domain = ? and published_date = ? if lwt_indicator >=-150000 and lwt_indicator < 0 is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,761 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.698 [cluster1-nio-worker-3] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,761 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.703 [cluster1-nio-worker-4] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,761 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.698 [cluster1-nio-worker-2] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,764 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.701 [cluster1-nio-worker-2] INFO  c.d.driver.core.RequestHandler - Query select * from blogposts where domain = ? LIMIT 1 is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,765 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.702 [cluster1-nio-worker-2] INFO  c.d.driver.core.RequestHandler - Query select * from blogposts where domain = ? LIMIT 1 is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,767 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.704 [cluster1-nio-worker-2] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,768 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.705 [cluster1-nio-worker-2] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,768 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.705 [cluster1-nio-worker-3] INFO  c.d.driver.core.RequestHandler - Query update blogposts set lwt_indicator = 30000000 where domain = ? and published_date = ? if lwt_indicator >=-150000 and lwt_indicator < 0 is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,779 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.715 [cluster1-nio-worker-3] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,786 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.723 [cluster1-nio-worker-2] INFO  c.d.driver.core.RequestHandler - Query select * from blogposts where domain = ? LIMIT 1 is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,788 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.724 [cluster1-nio-worker-2] INFO  c.d.driver.core.RequestHandler - Query update blogposts set lwt_indicator = 20000000, author = 'text' where domain = ? and published_date = ? if lwt_indicator > 0 and lwt_indicator <= 150000 and author != 'text' is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,788 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.725 [cluster1-nio-worker-2] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,790 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.726 [cluster1-nio-worker-2] INFO  c.d.driver.core.RequestHandler - Query update blogposts set lwt_indicator = 30000000 where domain = ? and published_date = ? if lwt_indicator >=-150000 and lwt_indicator < 0 is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,792 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.733 [cluster1-nio-worker-4] INFO  c.d.driver.core.RequestHandler - Query update blogposts set lwt_indicator = 30000000 where domain = ? and published_date = ? if lwt_indicator >=-150000 and lwt_indicator < 0 is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,793 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.729 [cluster1-nio-worker-2] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,793 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.735 [cluster1-nio-worker-2] INFO  c.d.driver.core.RequestHandler - Query update blogposts set lwt_indicator = 30000000 where domain = ? and published_date = ? if lwt_indicator >=-150000 and lwt_indicator < 0 is not prepared on cqlstress_lwt_example, preparing before retrying executing. Seeing this message a few times is fine, but seeing it a lot may be source of performance problems
< t:2022-02-04 13:31:10,793 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.730 [cluster1-nio-worker-2] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,794 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.736 [cluster1-nio-worker-4] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,795 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.737 [cluster1-nio-worker-2] DEBUG c.d.driver.core.RequestHandler - Scheduling retry now that query is prepared
< t:2022-02-04 13:31:10,862 f:thread.py       l:52   c:cassandra.cluster    p:DEBUG > Refreshing schema in response to schema change. {'target_type': 'TABLE', 'change_type': 'UPDATED', 'keyspace': 'cqlstress_lwt_example', 'table': 'blogposts'}
< t:2022-02-04 13:31:10,862 f:thread.py       l:52   c:cassandra.cluster    p:DEBUG > [control connection] Waiting for schema agreement
< t:2022-02-04 13:31:10,879 f:thread.py       l:52   c:cassandra.cluster    p:DEBUG > [control connection] Schemas mismatched, trying again
< t:2022-02-04 13:31:10,888 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 6] migration_manager - Update table 'cqlstress_lwt_example.blogposts' From org.apache.cassandra.config.CFMetaData@0x605003f40a80[cfId=c661b9a0-85b9-11ec-92bf-282e4f9b82e6,ksName=cqlstress_lwt_example,cfName=blogposts,cfType=Standard,comparator=org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.Int32Type,org.apache.cassandra.db.marshal.UTF8Type),comment=A table to hold blog posts,readRepairChance=0,dcLocalReadRepairChance=0,gcGraceSeconds=864000,keyValidator=org.apache.cassandra.db.marshal.Int32Type,minCompactionThreshold=4,maxCompactionThreshold=32,columnMetadata=[ColumnDefinition{name=domain, type=org.apache.cassandra.db.marshal.Int32Type, kind=PARTITION_KEY, componentIndex=0, droppedAt=-9223372036854775808}, ColumnDefinition{name=published_date, type=org.apache.cassandra.db.marshal.Int32Type, kind=CLUSTERING_COLUMN, componentIndex=0, droppedAt=-9223372036854775808}, ColumnDefinition{name=author, type=org.apache.cassandra.db.marshal.UTF8Type, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=lwt_indicator, type=org.apache.cassandra.db.marshal.Int32Type, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=title, type=org.apache.cassandra.db.marshal.UTF8Type, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=url, type=org.apache.cassandra.db.marshal.UTF8Type, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}],compactionStrategyClass=class org.apache.cassandra.db.compaction.LeveledCompactionStrategy,compactionStrategyOptions={enabled=true},compressionParameters={sstable_compression=org.apache.cassandra.io.compress.LZ4Compressor},bloomFilterFpChance=0.01,memtableFlushPeriod=0,caching={"keys":"ALL","rows_per_partition":"ALL"},cdc={},defaultTimeToLive=0,minIndexInterval=128,maxIndexInterval=2048,speculativeRetry=99.0PERCENTILE,triggers=[],isDense=false,version=77e4d76f-0c4e-38f5-9131-22570a9615b4,droppedColumns={},collections={},indices={}] To org.apache.cassandra.config.CFMetaData@0x605008757880[cfId=c661b9a0-85b9-11ec-92bf-282e4f9b82e6,ksName=cqlstress_lwt_example,cfName=blogposts,cfType=Standard,comparator=org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.Int32Type,org.apache.cassandra.db.marshal.UTF8Type),comment=A table to hold blog posts,readRepairChance=0,dcLocalReadRepairChance=0,gcGraceSeconds=864000,keyValidator=org.apache.cassandra.db.marshal.Int32Type,minCompactionThreshold=4,maxCompactionThreshold=32,columnMetadata=[ColumnDefinition{name=domain, type=org.apache.cassandra.db.marshal.Int32Type, kind=PARTITION_KEY, componentIndex=0, droppedAt=-9223372036854775808}, ColumnDefinition{name=published_date, type=org.apache.cassandra.db.marshal.Int32Type, kind=CLUSTERING_COLUMN, componentIndex=0, droppedAt=-9223372036854775808}, ColumnDefinition{name=author, type=org.apache.cassandra.db.marshal.UTF8Type, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=lwt_indicator, type=org.apache.cassandra.db.marshal.Int32Type, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=title, type=org.apache.cassandra.db.marshal.UTF8Type, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}, ColumnDefinition{name=url, type=org.apache.cassandra.db.marshal.UTF8Type, kind=REGULAR, componentIndex=null, droppedAt=-9223372036854775808}],compactionStrategyClass=class org.apache.cassandra.db.compaction.LeveledCompactionStrategy,compactionStrategyOptions={enabled=true},compressionParameters={sstable_compression=org.apache.cassandra.io.compress.LZ4Compressor},bloomFilterFpChance=0.01,memtableFlushPeriod=0,caching={"keys":"ALL","rows_per_partition":"1440"},cdc={},defaultTimeToLive=0,minIndexInterval=128,maxIndexInterval=2048,speculativeRetry=99.0PERCENTILE,triggers=[],isDense=false,version=b729e7a0-85be-11ec-97c0-471edd5fbd8a,droppedColumns={},collections={},indices={}]
< t:2022-02-04 13:31:10,890 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:2
< t:2022-02-04 13:31:10,890 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 0] migration_manager - Pulling schema from 10.0.2.66:2
< t:2022-02-04 13:31:10,890 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:1
< t:2022-02-04 13:31:10,891 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:6
< t:2022-02-04 13:31:10,891 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:3
< t:2022-02-04 13:31:10,891 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:7
< t:2022-02-04 13:31:10,891 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:4
< t:2022-02-04 13:31:10,891 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:5
< t:2022-02-04 13:31:10,891 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 6] schema_tables - Altering cqlstress_lwt_example.blogposts id=c661b9a0-85b9-11ec-92bf-282e4f9b82e6 version=93323a7d-d255-3a33-b5a4-82daf816e5cc
< t:2022-02-04 13:31:10,891 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 6] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,891 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 5] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,891 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 2] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,892 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 1] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,892 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 7] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,892 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 3] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,892 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 0] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,892 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 4] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,892 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 6] schema_tables - Schema version changed to 1de9ac61-073e-3c56-aaed-e89a7d5d1eef
< t:2022-02-04 13:31:10,921 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.857 [main] DEBUG com.datastax.driver.core.Connection - Connection[/10.0.1.232:9042-6, inFlight=1, closed=false] Setting keyspace cqlstress_lwt_example
< t:2022-02-04 13:31:10,923 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.859 [cluster1-nio-worker-6] DEBUG com.datastax.driver.core.Connection - Connection[/10.0.1.232:9042-6, inFlight=1, closed=false] Keyspace set to cqlstress_lwt_example
< t:2022-02-04 13:31:10,924 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,924 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,925 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,926 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,926 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.924: (CompactionEvent Severity.NORMAL) period_type=begin event_id=32854974-bfbf-4cf5-ac19-3ea8546bb7c0 node=longevity-lwt-parallel-24h-4-6-db-node-19780877-3 shard=0 table=system_schema.view_virtual_columns compaction_process_id=b735f590-85be-11ec-91a6-05c813d7a00f
< t:2022-02-04 13:31:10,927 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:1
< t:2022-02-04 13:31:10,927 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 0] migration_manager - Pulling schema from 10.0.2.66:0
< t:2022-02-04 13:31:10,927 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:3
< t:2022-02-04 13:31:10,927 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:6
< t:2022-02-04 13:31:10,928 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:7
< t:2022-02-04 13:31:10,928 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:4
< t:2022-02-04 13:31:10,928 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 6] schema_tables - Altering cqlstress_lwt_example.blogposts id=c661b9a0-85b9-11ec-92bf-282e4f9b82e6 version=93323a7d-d255-3a33-b5a4-82daf816e5cc
< t:2022-02-04 13:31:10,928 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 6] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,928 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 1] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,929 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 3] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,929 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 0] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,929 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 5] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,929 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 7] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,929 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.924: (CompactionEvent Severity.NORMAL) period_type=end event_id=32854974-bfbf-4cf5-ac19-3ea8546bb7c0 duration=0s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-3 shard=0 table=system_schema.view_virtual_columns compaction_process_id=b735f590-85be-11ec-91a6-05c813d7a00f
< t:2022-02-04 13:31:10,930 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 4] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,930 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.925: (CompactionEvent Severity.NORMAL) period_type=begin event_id=4494846a-f24b-4d30-b5d6-e29149b40fdf node=longevity-lwt-parallel-24h-4-6-db-node-19780877-3 shard=0 table=system_schema.computed_columns compaction_process_id=b742ede0-85be-11ec-91a6-05c813d7a00f
< t:2022-02-04 13:31:10,930 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.865 [cluster1-worker-0] DEBUG com.datastax.driver.core.Connection - Connection[/10.0.2.66:9042-2, inFlight=1, closed=false] Setting keyspace cqlstress_lwt_example
< t:2022-02-04 13:31:10,930 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.866 [cluster1-worker-0] DEBUG com.datastax.driver.core.Connection - Connection[/10.0.1.201:9042-2, inFlight=1, closed=false] Setting keyspace cqlstress_lwt_example
< t:2022-02-04 13:31:10,930 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.866 [cluster1-worker-0] DEBUG com.datastax.driver.core.Connection - Connection[/10.0.3.173:9042-6, inFlight=1, closed=false] Setting keyspace cqlstress_lwt_example
< t:2022-02-04 13:31:10,930 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,930 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.926: (CompactionEvent Severity.NORMAL) period_type=begin event_id=c639f6d3-0b93-4cf5-81ab-fe3dac1c7be4 node=longevity-lwt-parallel-24h-4-6-db-node-19780877-3 shard=5 table=cqlstress_lwt_example.blogposts compaction_process_id=b7438a20-85be-11ec-b34f-05cd13d7a00f
< t:2022-02-04 13:31:10,931 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,932 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 2] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,932 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,933 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.868 [cluster1-nio-worker-3] DEBUG com.datastax.driver.core.Connection - Connection[/10.0.1.201:9042-2, inFlight=1, closed=false] Keyspace set to cqlstress_lwt_example
< t:2022-02-04 13:31:10,933 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.869 [cluster1-nio-worker-3] DEBUG com.datastax.driver.core.Connection - Connection[/10.0.2.66:9042-2, inFlight=1, closed=false] Keyspace set to cqlstress_lwt_example
< t:2022-02-04 13:31:10,933 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.930: (CompactionEvent Severity.NORMAL) period_type=end event_id=4494846a-f24b-4d30-b5d6-e29149b40fdf duration=0s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-3 shard=0 table=system_schema.computed_columns compaction_process_id=b742ede0-85be-11ec-91a6-05c813d7a00f
< t:2022-02-04 13:31:10,934 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,935 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.931: (CompactionEvent Severity.NORMAL) period_type=begin event_id=ca342c41-4287-48be-bd0e-ec6d1dd97c38 node=longevity-lwt-parallel-24h-4-6-db-node-19780877-3 shard=0 table=system_schema.indexes compaction_process_id=b74781c0-85be-11ec-91a6-05c813d7a00f
< t:2022-02-04 13:31:10,936 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.932: (CompactionEvent Severity.NORMAL) period_type=begin event_id=bb9e5bda-e36d-4cb7-bd1b-f678c57672dd node=longevity-lwt-parallel-24h-4-6-db-node-19780877-3 shard=0 table=cqlstress_lwt_example.blogposts compaction_process_id=b74e3880-85be-11ec-91a6-05c813d7a00f
< t:2022-02-04 13:31:10,936 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.872 [cluster1-nio-worker-7] DEBUG com.datastax.driver.core.Connection - Connection[/10.0.3.173:9042-6, inFlight=1, closed=false] Keyspace set to cqlstress_lwt_example
< t:2022-02-04 13:31:10,936 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.934: (CompactionEvent Severity.NORMAL) period_type=end event_id=ca342c41-4287-48be-bd0e-ec6d1dd97c38 duration=0s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-3 shard=0 table=system_schema.indexes compaction_process_id=b74781c0-85be-11ec-91a6-05c813d7a00f
< t:2022-02-04 13:31:10,944 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > Generating batches with [1..1] partitions and [0..1] rows (of [1..1000] total rows in the partitions)
< t:2022-02-04 13:31:10,944 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 
< t:2022-02-04 13:31:10,944 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > Sleeping 2s...
< t:2022-02-04 13:31:10,965 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,965 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,966 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,966 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.965: (CompactionEvent Severity.NORMAL) period_type=begin event_id=2ec82188-8271-42ba-a3ff-1086c98030d8 node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=0 table=system_schema.view_virtual_columns compaction_process_id=b7327320-85be-11ec-a081-282a4f9b82e6
< t:2022-02-04 13:31:10,967 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-2 !    INFO |  [shard 6] schema_tables - Altering cqlstress_lwt_example.blogposts id=c661b9a0-85b9-11ec-92bf-282e4f9b82e6 version=93323a7d-d255-3a33-b5a4-82daf816e5cc
< t:2022-02-04 13:31:10,967 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-2 !    INFO |  [shard 6] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,967 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-2 !    INFO |  [shard 7] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,967 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-2 !    INFO |  [shard 3] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,968 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-2 !    INFO |  [shard 4] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,968 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-2 !    INFO |  [shard 5] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,968 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-2 !    INFO |  [shard 0] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,968 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-2 !    INFO |  [shard 2] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,969 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-2 !    INFO |  [shard 1] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:10,969 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,969 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.965: (CompactionEvent Severity.NORMAL) period_type=end event_id=2ec82188-8271-42ba-a3ff-1086c98030d8 duration=0s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=0 table=system_schema.view_virtual_columns compaction_process_id=b7327320-85be-11ec-a081-282a4f9b82e6
< t:2022-02-04 13:31:10,970 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,970 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-2 !    INFO |  [shard 6] schema_tables - Schema version changed to 1de9ac61-073e-3c56-aaed-e89a7d5d1eef
< t:2022-02-04 13:31:10,971 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,971 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,972 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,973 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:10,974 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.966: (CompactionEvent Severity.NORMAL) period_type=begin event_id=1904cf88-2c85-4a4a-abf5-0de78c268434 node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=0 table=system_schema.computed_columns compaction_process_id=b73ecf30-85be-11ec-a081-282a4f9b82e6
< t:2022-02-04 13:31:10,975 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.969: (CompactionEvent Severity.NORMAL) period_type=end event_id=1904cf88-2c85-4a4a-abf5-0de78c268434 duration=0s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=0 table=system_schema.computed_columns compaction_process_id=b73ecf30-85be-11ec-a081-282a4f9b82e6
< t:2022-02-04 13:31:10,975 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.970: (CompactionEvent Severity.NORMAL) period_type=begin event_id=2393e18e-3d68-4404-b5dc-05720b7f426a node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=0 table=system_schema.indexes compaction_process_id=b74f22e0-85be-11ec-a081-282a4f9b82e6
< t:2022-02-04 13:31:10,976 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.970: (CompactionEvent Severity.NORMAL) period_type=end event_id=2393e18e-3d68-4404-b5dc-05720b7f426a duration=0s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=0 table=system_schema.indexes compaction_process_id=b74f22e0-85be-11ec-a081-282a4f9b82e6
< t:2022-02-04 13:31:10,977 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.971: (CompactionEvent Severity.NORMAL) period_type=begin event_id=c45f4734-73d1-46a8-a599-e16d3e71c99f node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=0 table=system_schema.tables compaction_process_id=b75404e0-85be-11ec-a081-282a4f9b82e6
< t:2022-02-04 13:31:10,977 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.972: (CompactionEvent Severity.NORMAL) period_type=end event_id=c45f4734-73d1-46a8-a599-e16d3e71c99f duration=0s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=0 table=system_schema.tables compaction_process_id=b75404e0-85be-11ec-a081-282a4f9b82e6
< t:2022-02-04 13:31:10,977 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:10.972: (CompactionEvent Severity.NORMAL) period_type=begin event_id=d23f6b23-1df9-484b-8b3a-33c23b46b6a5 node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=0 table=system_schema.columns compaction_process_id=b75c9060-85be-11ec-a081-282a4f9b82e6
< t:2022-02-04 13:31:11,021 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:2
< t:2022-02-04 13:31:11,021 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 0] migration_manager - Pulling schema from 10.0.2.66:0
< t:2022-02-04 13:31:11,021 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:6
< t:2022-02-04 13:31:11,021 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:1
< t:2022-02-04 13:31:11,021 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:3
< t:2022-02-04 13:31:11,021 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:7
< t:2022-02-04 13:31:11,022 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:4
< t:2022-02-04 13:31:11,022 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.2.66:5
< t:2022-02-04 13:31:11,022 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 0] migration_manager - Requesting schema pull from 10.0.3.173:0
< t:2022-02-04 13:31:11,022 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 0] migration_manager - Pulling schema from 10.0.3.173:0
< t:2022-02-04 13:31:11,022 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 6] schema_tables - Altering cqlstress_lwt_example.blogposts id=c661b9a0-85b9-11ec-92bf-282e4f9b82e6 version=93323a7d-d255-3a33-b5a4-82daf816e5cc
< t:2022-02-04 13:31:11,022 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 6] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:11,022 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 3] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:11,022 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 2] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:11,022 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 5] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:11,022 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 7] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:11,023 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 4] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:11,023 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 1] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:11,023 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 0] query_processor - Column definitions for cqlstress_lwt_example.blogposts changed, invalidating related prepared statements
< t:2022-02-04 13:31:11,023 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,024 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,025 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.023: (CompactionEvent Severity.NORMAL) period_type=end event_id=59f601da-2f67-4f13-afab-13de5f8e6e60 duration=0s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-4 shard=0 table=system_schema.view_virtual_columns compaction_process_id=b73a8970-85be-11ec-9f12-e9f8b1d7b96b
< t:2022-02-04 13:31:11,026 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 6] schema_tables - Schema version changed to 1de9ac61-073e-3c56-aaed-e89a7d5d1eef
< t:2022-02-04 13:31:11,026 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,027 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,028 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.023: (CompactionEvent Severity.NORMAL) period_type=begin event_id=4eb1772e-4bef-4f42-81d7-076396e0cadf node=longevity-lwt-parallel-24h-4-6-db-node-19780877-4 shard=0 table=system_schema.computed_columns compaction_process_id=b7508270-85be-11ec-9f12-e9f8b1d7b96b
< t:2022-02-04 13:31:11,029 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.026: (CompactionEvent Severity.NORMAL) period_type=end event_id=4eb1772e-4bef-4f42-81d7-076396e0cadf duration=0s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-4 shard=0 table=system_schema.computed_columns compaction_process_id=b7508270-85be-11ec-9f12-e9f8b1d7b96b
< t:2022-02-04 13:31:11,029 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.027: (CompactionEvent Severity.NORMAL) period_type=begin event_id=d6313c40-2bbf-4430-9b60-041fc2d3e7fe node=longevity-lwt-parallel-24h-4-6-db-node-19780877-4 shard=0 table=system_schema.indexes compaction_process_id=b7665460-85be-11ec-9f12-e9f8b1d7b96b
< t:2022-02-04 13:31:11,037 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.979 [cluster1-worker-0] DEBUG com.datastax.driver.core.Connection - Connection[/10.0.1.201:9042-6, inFlight=1, closed=false] Setting keyspace cqlstress_lwt_example
< t:2022-02-04 13:31:11,040 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 13:31:10.981 [cluster1-nio-worker-7] DEBUG com.datastax.driver.core.Connection - Connection[/10.0.1.201:9042-6, inFlight=1, closed=false] Keyspace set to cqlstress_lwt_example
< t:2022-02-04 13:31:11,053 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > Generating batches with [1..1] partitions and [0..1] rows (of [1..1000] total rows in the partitions)
< t:2022-02-04 13:31:11,053 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > 
< t:2022-02-04 13:31:11,053 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > Sleeping 2s...
< t:2022-02-04 13:31:11,060 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > lwt_update_one_column,     39632,     666,     666,     666,    15.2,    13.2,    28.1,    67.8,    89.7,   102.6,   38.0,  0.05746,      0,      0,       0,       0,       0,       0
< t:2022-02-04 13:31:11,061 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > lwt_update_two_columns,     40353,     633,     633,     633,    15.6,    13.4,    26.9,    68.6,    84.7,    87.8,   38.0,  0.05746,      0,      0,       0,       0,       0,       0
< t:2022-02-04 13:31:11,061 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > total,         79985,    1299,    1299,    1299,    15.4,    13.2,    27.6,    68.6,    89.7,   102.6,   38.0,  0.05746,      0,      0,       0,       0,       0,       0
< t:2022-02-04 13:31:11,064 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > total,         50798,    1283,      79,      79,    15.6,    13.2,    28.8,    76.2,   105.8,   107.3,   29.0,  0.06104,      0,      0,       0,       0,       0,       0
< t:2022-02-04 13:31:11,065 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > lwt_update_one_column,     41610,     575,     575,     575,    15.3,    13.1,    25.9,    63.9,    87.7,   112.0,   39.0,  0.05733,      0,      0,       0,       0,       0,       0
< t:2022-02-04 13:31:11,065 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > lwt_update_two_columns,     41764,     719,     719,     719,    15.9,    13.4,    29.0,    81.5,    95.2,   105.9,   39.0,  0.05733,      0,      0,       0,       0,       0,       0
< t:2022-02-04 13:31:11,065 f:base.py         l:222  c:RemoteCmdRunner      p:DEBUG > total,         83374,    1294,    1294,    1294,    15.6,    13.3,    27.9,    78.8,   105.9,   112.0,   39.0,  0.05733,      0,      0,       0,       0,       0,       0
< t:2022-02-04 13:31:11,095 f:thread.py       l:52   c:cassandra.cluster    p:DEBUG > [control connection] Schemas mismatched, trying again
< t:2022-02-04 13:31:11,193 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,194 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,194 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,195 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,196 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,197 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,198 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:11+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 0] schema_tables - Schema version changed to 1de9ac61-073e-3c56-aaed-e89a7d5d1eef
< t:2022-02-04 13:31:11,198 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:11+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 0] migration_manager - Schema merge with 10.0.2.66:2 completed
< t:2022-02-04 13:31:11,198 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:11+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 0] migration_manager - Pulling schema from 10.0.2.66:2
< t:2022-02-04 13:31:11,199 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.193: (CompactionEvent Severity.NORMAL) period_type=end event_id=0578db6d-3179-49e5-b000-3ef0ccd9e2cd duration=6s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-1 shard=7 table=cqlstress_lwt_example.blogposts compaction_process_id=b3720fc0-85be-11ec-9bd3-4721dd5fbd8a
< t:2022-02-04 13:31:11,200 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.194: (CompactionEvent Severity.NORMAL) period_type=begin event_id=4f42d21b-d7f3-4b6f-94e3-b3a5e152e71a node=longevity-lwt-parallel-24h-4-6-db-node-19780877-1 shard=7 table=cqlstress_lwt_example.blogposts_update_2_columns_lwt_indicator compaction_process_id=b75e1700-85be-11ec-9bd3-4721dd5fbd8a
< t:2022-02-04 13:31:11,200 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.194: (CompactionEvent Severity.NORMAL) period_type=end event_id=70d3639b-9849-4a51-969b-0d948d624654 duration=6s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-1 shard=1 table=cqlstress_lwt_example.blogposts compaction_process_id=b3631ba0-85be-11ec-a9e1-4724dd5fbd8a
< t:2022-02-04 13:31:11,200 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.195: (CompactionEvent Severity.NORMAL) period_type=begin event_id=e3017a25-2f4c-4f28-b02b-65163cc8a339 node=longevity-lwt-parallel-24h-4-6-db-node-19780877-1 shard=1 table=cqlstress_lwt_example.blogposts_update_2_columns_lwt_indicator compaction_process_id=b770b4a0-85be-11ec-a9e1-4724dd5fbd8a
< t:2022-02-04 13:31:11,201 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.196: (CompactionEvent Severity.NORMAL) period_type=end event_id=d2924e75-b5a1-4ae4-ba4d-7d43c1b85310 duration=6s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-1 shard=3 table=cqlstress_lwt_example.blogposts compaction_process_id=b33e7ca0-85be-11ec-8fca-4720dd5fbd8a
< t:2022-02-04 13:31:11,202 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.197: (CompactionEvent Severity.NORMAL) period_type=begin event_id=0c77eda0-2a7c-44ab-b4bc-ea24ac2eacdb node=longevity-lwt-parallel-24h-4-6-db-node-19780877-1 shard=3 table=cqlstress_lwt_example.blogposts_update_2_columns_lwt_indicator compaction_process_id=b77325a0-85be-11ec-8fca-4720dd5fbd8a
< t:2022-02-04 13:31:11,211 f:thread.py       l:52   c:cassandra.cluster    p:DEBUG > [control connection] Waiting for schema agreement
< t:2022-02-04 13:31:11,218 f:thread.py       l:52   c:cassandra.cluster    p:DEBUG > [control connection] Schemas mismatched, trying again
< t:2022-02-04 13:31:11,235 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:10+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 6] schema_tables - Schema version changed to 1de9ac61-073e-3c56-aaed-e89a7d5d1eef
< t:2022-02-04 13:31:11,249 f:stress_thread.py l:271  c:sdcm.stress_thread   p:DEBUG > Wait for 1 stress threads to verify
< t:2022-02-04 13:31:11,311 f:thread.py       l:52   c:cassandra.cluster    p:DEBUG > [control connection] Schemas mismatched, trying again
< t:2022-02-04 13:31:11,330 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,331 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,332 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.330: (CompactionEvent Severity.NORMAL) period_type=end event_id=d6313c40-2bbf-4430-9b60-041fc2d3e7fe duration=0s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-4 shard=0 table=system_schema.indexes compaction_process_id=b7665460-85be-11ec-9f12-e9f8b1d7b96b
< t:2022-02-04 13:31:11,332 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:11+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 0] schema_tables - Schema version changed to 1de9ac61-073e-3c56-aaed-e89a7d5d1eef
< t:2022-02-04 13:31:11,332 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:11+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-4 !    INFO |  [shard 0] migration_manager - Schema merge with 10.0.3.173:0 completed
< t:2022-02-04 13:31:11,333 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.331: (CompactionEvent Severity.NORMAL) period_type=begin event_id=d2750615-f1c5-44f7-b2aa-5a60686f06f9 node=longevity-lwt-parallel-24h-4-6-db-node-19780877-4 shard=0 table=system_schema.views compaction_process_id=b77e7040-85be-11ec-9f12-e9f8b1d7b96b
< t:2022-02-04 13:31:11,346 f:thread.py       l:52   c:cassandra.cluster    p:DEBUG > [control connection] Waiting for schema agreement
< t:2022-02-04 13:31:11,362 f:thread.py       l:52   c:cassandra.cluster    p:DEBUG > [control connection] Schemas mismatched, trying again
< t:2022-02-04 13:31:11,374 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,375 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,375 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,376 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,376 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,378 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.374: (CompactionEvent Severity.NORMAL) period_type=end event_id=d23f6b23-1df9-484b-8b3a-33c23b46b6a5 duration=0s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=0 table=system_schema.columns compaction_process_id=b75c9060-85be-11ec-a081-282a4f9b82e6
< t:2022-02-04 13:31:11,379 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,382 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.374: (CompactionEvent Severity.NORMAL) period_type=begin event_id=152aac1f-01a3-4dfe-aa7d-fb0c1ecd35bc node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=0 table=system_schema.keyspaces compaction_process_id=b7656a00-85be-11ec-a081-282a4f9b82e6
< t:2022-02-04 13:31:11,383 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.375: (CompactionEvent Severity.NORMAL) period_type=end event_id=152aac1f-01a3-4dfe-aa7d-fb0c1ecd35bc duration=0s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=0 table=system_schema.keyspaces compaction_process_id=b7656a00-85be-11ec-a081-282a4f9b82e6
< t:2022-02-04 13:31:11,383 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.376: (CompactionEvent Severity.NORMAL) period_type=begin event_id=b631bcae-73e6-4aa0-86f9-b6a68b18815c node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=0 table=system_schema.scylla_tables compaction_process_id=b76b3660-85be-11ec-a081-282a4f9b82e6
< t:2022-02-04 13:31:11,384 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.376: (CompactionEvent Severity.NORMAL) period_type=begin event_id=a20b4f6e-d718-40f7-8b93-4b2718b397dc node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=4 table=cqlstress_lwt_example.blogposts compaction_process_id=b7708d90-85be-11ec-a8de-282d4f9b82e6
< t:2022-02-04 13:31:11,385 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,386 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,386 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.379: (CompactionEvent Severity.NORMAL) period_type=begin event_id=f9734302-6e4f-4b2a-9e0c-3f0eb8c284f8 node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=6 table=cqlstress_lwt_example.blogposts compaction_process_id=b77102c0-85be-11ec-b520-282f4f9b82e6
< t:2022-02-04 13:31:11,387 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,389 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.385: (CompactionEvent Severity.NORMAL) period_type=end event_id=b631bcae-73e6-4aa0-86f9-b6a68b18815c duration=0s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=0 table=system_schema.scylla_tables compaction_process_id=b76b3660-85be-11ec-a081-282a4f9b82e6
< t:2022-02-04 13:31:11,388 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,390 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.386: (CompactionEvent Severity.NORMAL) period_type=begin event_id=994a9d8c-c199-411c-bf2e-f54083e86646 node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=0 table=system_schema.dropped_columns compaction_process_id=b771ed20-85be-11ec-a081-282a4f9b82e6
< t:2022-02-04 13:31:11,391 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.387: (CompactionEvent Severity.NORMAL) period_type=end event_id=994a9d8c-c199-411c-bf2e-f54083e86646 duration=0s node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=0 table=system_schema.dropped_columns compaction_process_id=b771ed20-85be-11ec-a081-282a4f9b82e6
< t:2022-02-04 13:31:11,392 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.388: (CompactionEvent Severity.NORMAL) period_type=begin event_id=b544f12d-4120-4c3c-98f0-ef2ec9c0e091 node=longevity-lwt-parallel-24h-4-6-db-node-19780877-2 shard=7 table=cqlstress_lwt_example.blogposts compaction_process_id=b77b3bf0-85be-11ec-b43d-28304f9b82e6
< t:2022-02-04 13:31:11,427 f:thread.py       l:52   c:cassandra.cluster    p:DEBUG > [control connection] Schemas mismatched, trying again
< t:2022-02-04 13:31:11,437 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:11+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 0] schema_tables - Schema version changed to 1de9ac61-073e-3c56-aaed-e89a7d5d1eef
< t:2022-02-04 13:31:11,437 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:11+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 0] migration_manager - Schema merge with 10.0.2.66:0 completed
< t:2022-02-04 13:31:11,437 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:11+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 0] migration_manager - Pulling schema from 10.0.2.66:0
< t:2022-02-04 13:31:11,438 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,439 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:11+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 0] schema_tables - Schema version changed to 1de9ac61-073e-3c56-aaed-e89a7d5d1eef
< t:2022-02-04 13:31:11,440 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:11+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-3 !    INFO |  [shard 0] migration_manager - Schema merge with 10.0.2.66:0 completed
< t:2022-02-04 13:31:11,440 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.438: (CompactionEvent Severity.NORMAL) period_type=begin event_id=d0f94f0b-558a-49de-a15b-9069c77acd8f node=longevity-lwt-parallel-24h-4-6-db-node-19780877-3 shard=0 table=system_schema.columns compaction_process_id=b79f8cd0-85be-11ec-91a6-05c813d7a00f
< t:2022-02-04 13:31:11,599 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,600 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,601 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,601 f:file_logger.py  l:89   c:sdcm.sct_events.file_logger p:DEBUG > 2022-02-04 13:31:11.599: (CompactionEvent Severity.NORMAL) period_type=begin event_id=ec7d9367-a17d-4fda-acff-8af8cf85165c node=longevity-lwt-parallel-24h-4-6-db-node-19780877-1 shard=0 table=system_schema.columns compaction_process_id=b78bddc0-85be-11ec-8142-471ddd5fbd8a
< t:2022-02-04 13:31:11,601 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:11+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 0] schema_tables - Schema version changed to 1de9ac61-073e-3c56-aaed-e89a7d5d1eef
< t:2022-02-04 13:31:11,602 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:11+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO |  [shard 0] migration_manager - Schema merge with 10.0.2.66:2 completed
< t:2022-02-04 13:31:11,602 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,603 f:events_processes.py l:146  c:sdcm.sct_events.events_processes p:DEBUG > Get process `MainDevice' from EventsProcessesRegistry[lod_dir=/home/ubuntu/sct-results/20220204-122801-530898,id=0x7fa4ea02bac0,default=True]
< t:2022-02-04 13:31:11,604 f:db_log_reader.py l:113  c:sdcm.db_log_reader   p:DEBUG > 2022-02-04T13:31:11+00:00 longevity-lwt-parallel-24h-4-6-db-node-19780877-1 !    INFO | Segmentation fault on shard 5.

Coredump:

2022-02-04 13:38:00.733: (CoreDumpEvent Severity.ERROR) period_type=one-time event_id=bfc8bdb3-2797-47dc-8fad-6ae18be63801 node=Node longevity-lwt-parallel-24h-4-6-db-node-19780877-1 [3.226.248.24 | 10.0.1.232] (seed: True)
corefile_url=https://storage.cloud.google.com/upload.scylladb.com/core.scylla.113.74176b16711c47859951540f8461f71d.10260.1643981471000000000000/core.scylla.113.74176b16711c47859951540f8461f71d.10260.1643981471000000000000.gz
backtrace=           PID: 10260 (scylla)
UID: 113 (scylla)
GID: 119 (scylla)
Signal: 11 (SEGV)
Timestamp: Fri 2022-02-04 13:31:11 UTC (3min 26s ago)
Command Line: /usr/bin/scylla --blocked-reactor-notify-ms 100 --abort-on-lsa-bad-alloc 1 --abort-on-seastar-bad-alloc --abort-on-internal-error 1 --abort-on-ebadf 1 --enable-sstable-key-validation 1 --log-to-syslog 1 --log-to-stdout 0 --default-log-level info --network-stack posix --io-properties-file=/etc/scylla.d/io_properties.yaml --cpuset 0-7 --lock-memory=1
Executable: /opt/scylladb/libexec/scylla
Control Group: /scylla.slice/scylla-server.slice/scylla-server.service
Unit: scylla-server.service
Slice: scylla-server.slice
Boot ID: 74176b16711c47859951540f8461f71d
Machine ID: 8617053db12a423d87e7cf78375ab77d
Hostname: longevity-lwt-parallel-24h-4-6-db-node-19780877-1
Storage: /var/lib/systemd/coredump/core.scylla.113.74176b16711c47859951540f8461f71d.10260.1643981471000000000000
Message: Process 10260 (scylla) of user 113 dumped core.
Stack trace of thread 10265:
#0  0x0000000001c7b3bc _ZN4cql34expr12_GLOBAL__N_19get_valueERKNS0_12column_valueERKNS1_21column_value_eval_bagE (scylla + 0x1a7b3bc)
#1  0x0000000001c74c75 _ZZN4cql34expr12_GLOBAL__N_115is_satisfied_byERKNS0_15binary_operatorERKNS1_21column_value_eval_bagEENK3$_8clERKNS0_12column_valueE (scylla + 0x1a74c75)
#2  0x0000000001c71a66 _ZNSt8__detail9__variant17__gen_vtable_implINS0_12_Multi_arrayIPFNS0_21__deduce_visit_resultIbEER18overloaded_functorIJZN4cql34expr12_GLOBAL__N_115is_satisfied_byERKNS7_10expressionERKNS8_21column_value_eval_bagEE4$_25ZNS8_15is_satisfied_byESB_SE_E4$_24ZNS8_15is_satisfied_byESB_SE_E4$_26ZNS8_15is_satisfied_byESB_SE_E4$_27ZNS8_15is_satisfied_byESB_SE_E4$_28ZNS8_15is_satisfied_byESB_SE_E4$_29ZNS8_15is_satisfied_byESB_SE_E4$_30ZNS8_15is_satisfied_byESB_SE_E4$_31ZNS8_15is_satisfied_byESB_SE_E4$_32ZNS8_15is_satisfied_byESB_SE_E4$_33ZNS8_15is_satisfied_byESB_SE_E4$_34ZNS8_15is_satisfied_byESB_SE_E4$_35ZNS8_15is_satisfied_byESB_SE_E4$_36ZNS8_15is_satisfied_byESB_SE_E4$_37ZNS8_15is_satisfied_byESB_SE_E4$_38ZNS8_15is_satisfied_byESB_SE_E4$_39EERSt7variantIJNS7_11conjunctionENS7_15binary_operatorENS7_12column_valueENS7_5tokenENS7_21unresolved_identifierENS7_25column_mutation_attributeENS7_13function_callENS7_4castENS7_15field_selectionENS7_4nullENS7_13bind_variableENS7_16untyped_constantENS7_8constantENS7_17tuple_constructorENS7_22collection_constructorENS7_20usertype_constructorEEEEJEEESt16integer_sequenceImJLm1EEEE14__visit_invokeESW_S1F_ (scylla + 0x1a71a66)
#3  0x0000000001c719e2 _ZNSt8__detail9__variant17__gen_vtable_implINS0_12_Multi_arrayIPFNS0_21__deduce_visit_resultIbEER18overloaded_functorIJZN4cql34expr12_GLOBAL__N_115is_satisfied_byERKNS7_10expressionERKNS8_21column_value_eval_bagEE4$_25ZNS8_15is_satisfied_byESB_SE_E4$_24ZNS8_15is_satisfied_byESB_SE_E4$_26ZNS8_15is_satisfied_byESB_SE_E4$_27ZNS8_15is_satisfied_byESB_SE_E4$_28ZNS8_15is_satisfied_byESB_SE_E4$_29ZNS8_15is_satisfied_byESB_SE_E4$_30ZNS8_15is_satisfied_byESB_SE_E4$_31ZNS8_15is_satisfied_byESB_SE_E4$_32ZNS8_15is_satisfied_byESB_SE_E4$_33ZNS8_15is_satisfied_byESB_SE_E4$_34ZNS8_15is_satisfied_byESB_SE_E4$_35ZNS8_15is_satisfied_byESB_SE_E4$_36ZNS8_15is_satisfied_byESB_SE_E4$_37ZNS8_15is_satisfied_byESB_SE_E4$_38ZNS8_15is_satisfied_byESB_SE_E4$_39EERSt7variantIJNS7_11conjunctionENS7_15binary_operatorENS7_12column_valueENS7_5tokenENS7_21unresolved_identifierENS7_25column_mutation_attributeENS7_13function_callENS7_4castENS7_15field_selectionENS7_4nullENS7_13bind_variableENS7_16untyped_constantENS7_8constantENS7_17tuple_constructorENS7_22collection_constructorENS7_20usertype_constructorEEEEJEEESt16integer_sequenceImJLm0EEEE14__visit_invokeESW_S1F_ (scylla + 0x1a719e2)
#4  0x0000000001c5de88 _ZN4cql34expr15is_satisfied_byERKNS0_10expressionERKSt6vectorIN7seastar13basic_sstringIajLj31ELb0EEESaIS7_EESB_RKN5query15result_row_viewEPSE_RKNS_9selection9selectionERKNS_13query_optionsE (scylla + 0x1a5de88)
#5  0x00000000028cf78e _ZZNK2db4view28view_filter_checking_visitor16check_if_matchesERK21clustering_key_prefixRKN5query15result_row_viewES8_ENKUlOT_E_clIRKN7seastar10shared_ptrIN4cql312restrictions25single_column_restrictionEEEEEDaSA_ (scylla + 0x26cf78e)
#6  0x00000000028cf371 _ZNK5query11result_view7consumeIRN2db4view28view_filter_checking_visitorEEEvRKNS_15partition_sliceEOT_ (scylla + 0x26cf371)
#7  0x000000000287d877 _ZN2db4view19matches_view_filterERK6schemaRK9view_infoRK13partition_keyRK14clustering_rowNSt6chrono10time_pointI8gc_clockNSD_8durationIlSt5ratioILl1ELl1EEEEEE (scylla + 0x267d877)
#8  0x000000000288615c _ZN2db4view12view_updates15generate_updateERK13partition_keyRK14clustering_rowRKSt8optionalIS5_ENSt6chrono10time_pointI8gc_clockNSC_8durationIlSt5ratioILl1ELl1EEEEEE (scylla + 0x268615c)
#9  0x000000000288e893 _ZN2db4view19view_update_builder10on_resultsEv (scylla + 0x268e893)
#10 0x0000000002889bdb _ZN2db4view19view_update_builder10build_someEv (scylla + 0x2689bdb)
#11 0x000000000116e025 _ZNK5table35generate_and_propagate_view_updatesERKN7seastar13lw_shared_ptrIK6schemaEE13reader_permitOSt6vectorIN2db4view13view_and_baseESaISB_EEO8mutationNS0_18optimized_optionalI20flat_mutation_readerEEN7tracing15trace_state_ptrENSt6chrono10time_pointI8gc_clockNSM_8durationIlSt5ratioILl1ELl1EEEEEE (scylla + 0xf6e025)
#12 0x000000000118b401 _ZNK5table28do_push_view_replica_updatesEN7seastar13lw_shared_ptrIK6schemaEE8mutationNSt6chrono10time_pointINS0_12lowres_clockENS6_8durationIlSt5ratioILl1ELl1000EEEEEE15mutation_sourceN7tracing15trace_state_ptrER28reader_concurrency_semaphoreRKNS0_17io_priority_classE8enum_setI10super_enumIN5query15partition_slice6optionEJLSQ_0ELSQ_1ELSQ_2ELSQ_3ELSQ_4ELSQ_5ELSQ_6ELSQ_7ELSQ_8ELSQ_9ELSQ_10ELSQ_11ELSQ_12EEEE (scylla + 0xf8b401)
#13 0x0000000001188f46 _ZNK5table25push_view_replica_updatesERKN7seastar13lw_shared_ptrIK6schemaEERK15frozen_mutationNSt6chrono10time_pointINS0_12lowres_clockENSA_8durationIlSt5ratioILl1ELl1000EEEEEEN7tracing15trace_state_ptrER28reader_concurrency_semaphore (scylla + 0xf88f46)
#14 0x0000000001067626 _ZN8database8do_applyEN7seastar13lw_shared_ptrIK6schemaEERK15frozen_mutationN7tracing15trace_state_ptrENSt6chrono10time_pointINS0_12lowres_clockENSA_8durationIlSt5ratioILl1ELl1000EEEEEENS0_10bool_classIN2db14force_sync_tagEEE (scylla + 0xe67626)
#15 0x00000000010eb50b _ZN7seastar20noncopyable_functionIFNS_6futureIvEEP8databaseNS_13lw_shared_ptrIK6schemaEERK15frozen_mutationN7tracing15trace_state_ptrENSt6chrono10time_pointINS_12lowres_clockENSE_8durationIlSt5ratioILl1ELl1000EEEEEENS_10bool_classIN2db14force_sync_tagEEEEE17direct_vtable_forISt7_Mem_fnIMS3_FS2_S8_SB_SD_SL_SP_EEE4callEPKSR_S4_S8_SB_SD_SL_SP_ (scylla + 0xeeb50b)
#16 0x000000000110b08d _ZN7seastar20noncopyable_functionIFNS_6futureIvEEP8databaseNS_13lw_shared_ptrIK6schemaEERK15frozen_mutationN7tracing15trace_state_ptrENSt6chrono10time_pointINS_12lowres_clockENSE_8durationIlSt5ratioILl1ELl1000EEEEEENS_10bool_classIN2db14force_sync_tagEEEEE17direct_vtable_forIZNS_35inheriting_concrete_execution_stageIS2_JS4_S8_SB_SD_SL_SP_EE20make_stage_for_groupENS_16scheduling_groupEEUlS4_S8_SB_SD_SL_SP_E_E4callEPKSR_S4_S8_SB_SD_SL_SP_ (scylla + 0xf0b08d)
#17 0x00000000010d6e1c _ZN7seastar24concrete_execution_stageINS_6futureIvEEJP8databaseNS_13lw_shared_ptrIK6schemaEERK15frozen_mutationN7tracing15trace_state_ptrENSt6chrono10time_pointINS_12lowres_clockENSE_8durationIlSt5ratioILl1ELl1000EEEEEENS_10bool_classIN2db14force_sync_tagEEEEE8do_flushEv (scylla + 0xed6e1c)
#18 0x0000000003d79516 _ZN7seastar11lambda_taskIZNS_15execution_stage5flushEvE3$_5E15run_and_disposeEv (scylla + 0x3b79516)
#19 0x0000000003dc3b05 _ZN7seastar7reactor14run_some_tasksEv (scylla + 0x3bc3b05)
#20 0x0000000003dc4ef8 _ZN7seastar7reactor6do_runEv (scylla + 0x3bc4ef8)
#21 0x0000000003de2526 _ZNSt17_Function_handlerIFvvEZN7seastar3smp9configureEN5boost15program_options13variables_mapENS1_14reactor_configEE4$_98E9_M_invokeERKSt9_Any_data (scylla + 0x3be2526)
#22 0x0000000003d9c13b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x3b9c13b)
#23 0x00007f7f6d986299 start_thread (libpthread.so.0 + 0x9299)
#24 0x00007f7f6ce98353 __clone (libc.so.6 + 0x100353)
Stack trace of thread 10264:
#0  0x00007f7f6e1e9731 LZ4_compress_fast_extState (liblz4.so.1 + 0x6731)
#1  0x00007f7f6e1ea88f LZ4_compress_fast (liblz4.so.1 + 0x788f)
#2  0x0000000001645c36 _ZNK13lz4_processor8compressEPKcmPcm (scylla + 0x1445c36)
#3  0x000000000193d8d7 _ZN30compressed_file_data_sink_implI11crc32_utilsL24compressed_checksum_mode1EE3putEN7seastar16temporary_bufferIcEE (scylla + 0x173d8d7)
#4  0x00000000011d9eef _ZN7seastar13output_streamIcE10slow_writeEPKcm (scylla + 0xfd9eef)
#5  0x00000000018ebb83 _ZN8sstables2mc6writer15write_clusteredERK14clustering_rowm (scylla + 0x16ebb83)
#6  0x00000000018ed27c _ZN8sstables2mc6writer15write_clusteredI14clustering_rowEEvRKT_ (scylla + 0x16ed27c)
#7  0x00000000018ecc13 _ZN8sstables2mc6writer7consumeEO14clustering_row (scylla + 0x16ecc13)
#8  0x000000000198f696 _ZN22compact_mutation_stateIL19emit_only_live_rows0EL20compact_for_sstables1EE7consumeIN8sstables25compacting_sstable_writerE33noop_compacted_fragments_consumerEEN7seastar10bool_classINS7_18stop_iteration_tagEEEO14clustering_rowRT_RT0_ (scylla + 0x178f696)
#9  0x000000000198d4e5 _ZNO17mutation_fragment7consumeIN20flat_mutation_reader4impl16consumer_adapterI35stable_flattened_mutations_consumerI22compact_for_compactionIN8sstables25compacting_sstable_writerE33noop_compacted_fragments_consumerEEEEEEDcRT_ (scylla + 0x178d4e5)
#10 0x000000000198b67d _ZZZN8sstables10compaction5setupI33noop_compacted_fragments_consumerEEN7seastar6futureIvEET_ENUl20flat_mutation_readerE_clES7_ENUlvE_clEv (scylla + 0x178b67d)
#11 0x000000000198b294 _ZN7seastar20noncopyable_functionIFvvEE17direct_vtable_forIZNS_5asyncIZZN8sstables10compaction5setupI33noop_compacted_fragments_consumerEENS_6futureIvEET_ENUl20flat_mutation_readerE_clESC_EUlvE_JEEENS_8futurizeINSt9result_ofIFNSt5decayISB_E4typeEDpNSH_IT0_E4typeEEE4typeEE4typeENS_17thread_attributesEOSB_DpOSK_EUlvE_E4callEPKS2_ (scylla + 0x178b294)
#12 0x0000000004034962 _ZN7seastar14thread_context4mainEv (scylla + 0x3e34962)
Stack trace of thread 10261:
#0  0x00000000023e553d _ZNK4cql312restrictions38single_column_primary_key_restrictionsI21clustering_key_prefixE44num_prefix_columns_that_need_not_be_filteredEv (scylla + 0x21e553d)
#1  0x00000000023e541c _ZNK4cql312restrictions38single_column_primary_key_restrictionsI21clustering_key_prefixE15needs_filteringERK6schema (scylla + 0x21e541c)
#2  0x00000000023a4e29 _ZNK4cql312restrictions22statement_restrictions14need_filteringEv (scylla + 0x21a4e29)
#3  0x0000000001df3954 _ZNK4cql310statements16select_statement10do_executeERN7service13storage_proxyERNS2_11query_stateERKNS_13query_optionsE (scylla + 0x1bf3954)
#4  0x0000000001e94dc8 _ZN7seastar20noncopyable_functionIFNS_6futureINS_10shared_ptrIN13cql_transport8messages14result_messageEEEEEPKN4cql310statements16select_statementERN7service13storage_proxyERNSD_11query_stateERKNS8_13query_optionsEEE17direct_vtable_forISt7_Mem_fnIMSA_KFS7_SF_SH_SK_EEE4callEPKSM_SC_SF_SH_SK_ (scylla + 0x1c94dc8)
#5  0x0000000001e94e0d _ZN7seastar20noncopyable_functionIFNS_6futureINS_10shared_ptrIN13cql_transport8messages14result_messageEEEEEPKN4cql310statements16select_statementERN7service13storage_proxyERNSD_11query_stateERKNS8_13query_optionsEEE17direct_vtable_forIZNS_35inheriting_concrete_execution_stageIS7_JSC_SF_SH_SK_EE20make_stage_for_groupENS_16scheduling_groupEEUlSC_SF_SH_SK_E_E4callEPKSM_SC_SF_SH_SK_ (scylla + 0x1c94e0d)
#6  0x0000000001e70be5 _ZN7seastar24concrete_execution_stageINS_6futureINS_10shared_ptrIN13cql_transport8messages14result_messageEEEEEJPKN4cql310statements16select_statementERN7service13storage_proxyERNSD_11query_stateERKNS8_13query_optionsEEE8do_flushEv (scylla + 0x1c70be5)
#7  0x0000000003d79516 _ZN7seastar11lambda_taskIZNS_15execution_stage5flushEvE3$_5E15run_and_disposeEv (scylla + 0x3b79516)
#8  0x0000000003dc3b05 _ZN7seastar7reactor14run_some_tasksEv (scylla + 0x3bc3b05)
#9  0x0000000003dc4ef8 _ZN7seastar7reactor6do_runEv (scylla + 0x3bc4ef8)
#10 0x0000000003de2526 _ZNSt17_Function_handlerIFvvEZN7seastar3smp9configureEN5boost15program_options13variables_mapENS1_14reactor_configEE4$_98E9_M_invokeERKSt9_Any_data (scylla + 0x3be2526)
#11 0x0000000003d9c13b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x3b9c13b)
#12 0x00007f7f6d986299 start_thread (libpthread.so.0 + 0x9299)
#13 0x00007f7f6ce98353 __clone (libc.so.6 + 0x100353)
Stack trace of thread 10267:
#0  0x00007f7f6ce92e0d syscall (libc.so.6 + 0xfae0d)
#1  0x0000000003e06cc3 _ZN7seastar18preempt_io_context24reset_preemption_monitorEv (scylla + 0x3c06cc3)
#2  0x0000000003dc39f0 _ZN7seastar7reactor14run_some_tasksEv (scylla + 0x3bc39f0)
#3  0x0000000003dc4ef8 _ZN7seastar7reactor6do_runEv (scylla + 0x3bc4ef8)
#4  0x0000000003de2526 _ZNSt17_Function_handlerIFvvEZN7seastar3smp9configureEN5boost15program_options13variables_mapENS1_14reactor_configEE4$_98E9_M_invokeERKSt9_Any_data (scylla + 0x3be2526)
#5  0x0000000003d9c13b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x3b9c13b)
#6  0x00007f7f6d986299 start_thread (libpthread.so.0 + 0x9299)
#7  0x00007f7f6ce98353 __clone (libc.so.6 + 0x100353)
Stack trace of thread 10266:
#0  0x0000000003d8ddd4 free (scylla + 0x3b8ddd4)
#1  0x0000000001485449 _ZN3rowD1Ev (scylla + 0x1285449)
#2  0x00000000012d2b47 _ZN17mutation_fragment12destroy_dataEv (scylla + 0x10d2b47)
#3  0x000000000198b68d _ZZZN8sstables10compaction5setupI33noop_compacted_fragments_consumerEEN7seastar6futureIvEET_ENUl20flat_mutation_readerE_clES7_ENUlvE_clEv (scylla + 0x178b68d)
#4  0x000000000198b294 _ZN7seastar20noncopyable_functionIFvvEE17direct_vtable_forIZNS_5asyncIZZN8sstables10compaction5setupI33noop_compacted_fragments_consumerEENS_6futureIvEET_ENUl20flat_mutation_readerE_clESC_EUlvE_JEEENS_8futurizeINSt9result_ofIFNSt5decayISB_E4typeEDpNSH_IT0_E4typeEEE4typeEE4typeENS_17thread_attributesEOSB_DpOSK_EUlvE_E4callEPKS2_ (scylla + 0x178b294)
#5  0x0000000004034962 _ZN7seastar14thread_context4mainEv (scylla + 0x3e34962)
Stack trace of thread 10270:
#0  0x00007f7f6d98f94c read (libpthread.so.0 + 0x1294c)
#1  0x0000000003e05685 _ZN7seastar11thread_pool4workENS_13basic_sstringIcjLj15ELb1EEE (scylla + 0x3c05685)
#2  0x0000000003e059f0 _ZNSt17_Function_handlerIFvvEZN7seastar11thread_poolC1EPNS1_7reactorENS1_13basic_sstringIcjLj15ELb1EEEE3$_0E9_M_invokeERKSt9_Any_data (scylla + 0x3c059f0)
#3  0x0000000003d9c13b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x3b9c13b)
#4  0x00007f7f6d986299 start_thread (libpthread.so.0 + 0x9299)
#5  0x00007f7f6ce98353 __clone (libc.so.6 + 0x100353)
Stack trace of thread 10271:
#0  0x00007f7f6d98f94c read (libpthread.so.0 + 0x1294c)
#1  0x0000000003e05685 _ZN7seastar11thread_pool4workENS_13basic_sstringIcjLj15ELb1EEE (scylla + 0x3c05685)
#2  0x0000000003e059f0 _ZNSt17_Function_handlerIFvvEZN7seastar11thread_poolC1EPNS1_7reactorENS1_13basic_sstringIcjLj15ELb1EEEE3$_0E9_M_invokeERKSt9_Any_data (scylla + 0x3c059f0)
#3  0x0000000003d9c13b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x3b9c13b)
#4  0x00007f7f6d986299 start_thread (libpthread.so.0 + 0x9299)
#5  0x00007f7f6ce98353 __clone (libc.so.6 + 0x100353)
Stack trace of thread 10275:
#0  0x00007f7f6d98f94c read (libpthread.so.0 + 0x1294c)
#1  0x0000000003e05685 _ZN7seastar11thread_pool4workENS_13basic_sstringIcjLj15ELb1EEE (scylla + 0x3c05685)
#2  0x0000000003e059f0 _ZNSt17_Function_handlerIFvvEZN7seastar11thread_poolC1EPNS1_7reactorENS1_13basic_sstringIcjLj15ELb1EEEE3$_0E9_M_invokeERKSt9_Any_data (scylla + 0x3c059f0)
#3  0x0000000003d9c13b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x3b9c13b)
#4  0x00007f7f6d986299 start_thread (libpthread.so.0 + 0x9299)
#5  0x00007f7f6ce98353 __clone (libc.so.6 + 0x100353)
Stack trace of thread 10262:
#0  0x00000000012d1095 _ZNK16compound_wrapperI21clustering_key_prefix26clustering_key_prefix_viewE4sizeERK6schema (scylla + 0x10d1095)
#1  0x00000000012cf95b _ZNK10bound_view11tri_compareclERK21clustering_key_prefixiS3_i (scylla + 0x10cf95b)
#2  0x00000000015ccbd8 _ZN24clustering_ranges_walker10advance_toE26position_in_partition_view9tombstone (scylla + 0x13ccbd8)
#3  0x00000000018abbc3 _ZN8sstables24mutation_fragment_filter5applyE26position_in_partition_view9tombstone (scylla + 0x16abbc3)
#4  0x00000000018a5442 _ZN8sstables2mx17mp_row_consumer_m17consume_row_startERKSt6vectorI27fragmented_temporary_bufferSaIS3_EE (scylla + 0x16a5442)
#5  0x00000000018cde66 _ZN8sstables2mx27data_consume_rows_context_m16do_process_stateEv.resume (scylla + 0x16cde66)
#6  0x00000000018b84ce _ZN13data_consumer24continuous_data_consumerIN8sstables2mx27data_consume_rows_context_mEEclEN7seastar16temporary_bufferIcEE (scylla + 0x16b84ce)
#7  0x00000000018b7c9c _ZZN7seastar12input_streamIcE7consumeISt17reference_wrapperIN8sstables2mx27data_consume_rows_context_mEEEENS_6futureIvEEOT_ENUlvE_clEv (scylla + 0x16b7c9c)
#8  0x00000000018bc9fa _ZZZN8sstables2mx26mx_sstable_mutation_reader11fill_bufferEvENKUlvE1_clEvENKUlvE0_clEv (scylla + 0x16bc9fa)
#9  0x00000000018b4d28 _ZZN8sstables2mx26mx_sstable_mutation_reader11fill_bufferEvENKUlvE1_clEv (scylla + 0x16b4d28)
#10 0x00000000018957cc _ZN8sstables2mx26mx_sstable_mutation_reader11fill_bufferEv (scylla + 0x16957cc)
#11 0x0000000001897e0d _ZThn64_N8sstables2mx26mx_sstable_mutation_reader11fill_bufferEv (scylla + 0x1697e0d)
#12 0x000000000160b4af _ZN7seastar8futurizeINS_6futureINS_10bool_classINS_18stop_iteration_tagEEEEEE6invokeIRZN23flat_mutation_reader_v24impl16consume_pausableIZ15downgrade_to_v1S8_EN19transforming_reader8consumerEEENS1_IvEET_EUlvE_JEEES5_OSE_DpOT0_ (scylla + 0x140b4af)
#13 0x0000000001608251 _ZZ15downgrade_to_v123flat_mutation_reader_v2EN19transforming_reader11fill_bufferEv (scylla + 0x1408251)
#14 0x000000000154c8e3 _ZN22mutation_reader_mergerclEv (scylla + 0x134c8e3)
#15 0x0000000001584141 _ZZN14merging_readerI22mutation_reader_mergerE11fill_bufferEvENKUlvE_clEv (scylla + 0x1384141)
#16 0x00000000015832bb _ZN14merging_readerI22mutation_reader_mergerE11fill_bufferEv (scylla + 0x13832bb)
#17 0x000000000198b564 _ZZZN8sstables10compaction5setupI33noop_compacted_fragments_consumerEEN7seastar6futureIvEET_ENUl20flat_mutation_readerE_clES7_ENUlvE_clEv (scylla + 0x178b564)
#18 0x000000000198b294 _ZN7seastar20noncopyable_functionIFvvEE17direct_vtable_forIZNS_5asyncIZZN8sstables10compaction5setupI33noop_compacted_fragments_consumerEENS_6futureIvEET_ENUl20flat_mutation_readerE_clESC_EUlvE_JEEENS_8futurizeINSt9result_ofIFNSt5decayISB_E4typeEDpNSH_IT0_E4typeEEE4typeEE4typeENS_17thread_attributesEOSB_DpOSK_EUlvE_E4callEPKS2_ (scylla + 0x178b294)
#19 0x0000000004034962 _ZN7seastar14thread_context4mainEv (scylla + 0x3e34962)
Stack trace of thread 10269:
#0  0x00007f7f6d98f94c read (libpthread.so.0 + 0x1294c)
#1  0x0000000003e05685 _ZN7seastar11thread_pool4workENS_13basic_sstringIcjLj15ELb1EEE (scylla + 0x3c05685)
#2  0x0000000003e059f0 _ZNSt17_Function_handlerIFvvEZN7seastar11thread_poolC1EPNS1_7reactorENS1_13basic_sstringIcjLj15ELb1EEEE3$_0E9_M_invokeERKSt9_Any_data (scylla + 0x3c059f0)
#3  0x0000000003d9c13b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x3b9c13b)
#4  0x00007f7f6d986299 start_thread (libpthread.so.0 + 0x9299)
#5  0x00007f7f6ce98353 __clone (libc.so.6 + 0x100353)
Stack trace of thread 10273:
#0  0x00007f7f6d98f94c read (libpthread.so.0 + 0x1294c)
#1  0x0000000003e05685 _ZN7seastar11thread_pool4workENS_13basic_sstringIcjLj15ELb1EEE (scylla + 0x3c05685)
#2  0x0000000003e059f0 _ZNSt17_Function_handlerIFvvEZN7seastar11thread_poolC1EPNS1_7reactorENS1_13basic_sstringIcjLj15ELb1EEEE3$_0E9_M_invokeERKSt9_Any_data (scylla + 0x3c059f0)
#3  0x0000000003d9c13b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x3b9c13b)
#4  0x00007f7f6d986299 start_thread (libpthread.so.0 + 0x9299)
#5  0x00007f7f6ce98353 __clone (libc.so.6 + 0x100353)
Stack trace of thread 10268:
#0  0x00007f7f6d98f94c read (libpthread.so.0 + 0x1294c)
#1  0x0000000003e05685 _ZN7seastar11thread_pool4workENS_13basic_sstringIcjLj15ELb1EEE (scylla + 0x3c05685)
#2  0x0000000003e059f0 _ZNSt17_Function_handlerIFvvEZN7seastar11thread_poolC1EPNS1_7reactorENS1_13basic_sstringIcjLj15ELb1EEEE3$_0E9_M_invokeERKSt9_Any_data (scylla + 0x3c059f0)
#3  0x0000000003d9c13b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x3b9c13b)
#4  0x00007f7f6d986299 start_thread (libpthread.so.0 + 0x9299)
#5  0x00007f7f6ce98353 __clone (libc.so.6 + 0x100353)
Stack trace of thread 10274:
#0  0x00007f7f6d98f94c read (libpthread.so.0 + 0x1294c)
#1  0x0000000003e05685 _ZN7seastar11thread_pool4workENS_13basic_sstringIcjLj15ELb1EEE (scylla + 0x3c05685)
#2  0x0000000003e059f0 _ZNSt17_Function_handlerIFvvEZN7seastar11thread_poolC1EPNS1_7reactorENS1_13basic_sstringIcjLj15ELb1EEEE3$_0E9_M_invokeERKSt9_Any_data (scylla + 0x3c059f0)
#3  0x0000000003d9c13b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x3b9c13b)
#4  0x00007f7f6d986299 start_thread (libpthread.so.0 + 0x9299)
#5  0x00007f7f6ce98353 __clone (libc.so.6 + 0x100353)
Stack trace of thread 10260:
#0  0x00007f7f6d98f94c read (libpthread.so.0 + 0x1294c)
#1  0x0000000003de8e2c _ZN7seastar9file_desc4readEPvm (scylla + 0x3be8e2c)
#2  0x0000000003e0690b _ZN7seastar25task_quota_aio_completion13complete_withEl (scylla + 0x3c0690b)
#3  0x0000000003e06c1c _ZN7seastar18preempt_io_context24reset_preemption_monitorEv (scylla + 0x3c06c1c)
#4  0x0000000003dc39f0 _ZN7seastar7reactor14run_some_tasksEv (scylla + 0x3bc39f0)
#5  0x0000000003dc4ef8 _ZN7seastar7reactor6do_runEv (scylla + 0x3bc4ef8)
#6  0x0000000003dc413d _ZN7seastar7reactor3runEv (scylla + 0x3bc413d)
#7  0x0000000003d6d9bf _ZN7seastar12app_template14run_deprecatedEiPPcOSt8functionIFvvEE (scylla + 0x3b6d9bf)
#8  0x0000000003d6cd37 _ZN7seastar12app_template3runEiPPcOSt8functionIFNS_6futureIiEEvEE (scylla + 0x3b6cd37)
#9  0x0000000000f404b6 main (scylla + 0xd404b6)
#10 0x00007f7f6cdbfb75 __libc_start_main (libc.so.6 + 0x27b75)
#11 0x0000000000f3d66e _start (scylla + 0xd3d66e)
Stack trace of thread 10272:
#0  0x00007f7f6d98f94c read (libpthread.so.0 + 0x1294c)
#1  0x0000000003e05685 _ZN7seastar11thread_pool4workENS_13basic_sstringIcjLj15ELb1EEE (scylla + 0x3c05685)
#2  0x0000000003e059f0 _ZNSt17_Function_handlerIFvvEZN7seastar11thread_poolC1EPNS1_7reactorENS1_13basic_sstringIcjLj15ELb1EEEE3$_0E9_M_invokeERKSt9_Any_data (scylla + 0x3c059f0)
#3  0x0000000003d9c13b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x3b9c13b)
#4  0x00007f7f6d986299 start_thread (libpthread.so.0 + 0x9299)
#5  0x00007f7f6ce98353 __clone (libc.so.6 + 0x100353)
Stack trace of thread 10263:
#0  0x00000000033d2e30 _ZN13reader_permit14resource_units5resetE16reader_resources (scylla + 0x31d2e30)
#1  0x00000000012d23f2 _ZN17mutation_fragmentC1ERK6schema13reader_permitO14clustering_row (scylla + 0x10d23f2)
#2  0x000000000160b19c _ZN7seastar8futurizeINS_6futureINS_10bool_classINS_18stop_iteration_tagEEEEEE6invokeIRZN23flat_mutation_reader_v24impl16consume_pausableIZ15downgrade_to_v1S8_EN19transforming_reader8consumerEEENS1_IvEET_EUlvE_JEEES5_OSE_DpOT0_ (scylla + 0x140b19c)
#3  0x000000000160827b _ZZ15downgrade_to_v123flat_mutation_reader_v2EN19transforming_reader11fill_bufferEv (scylla + 0x140827b)
#4  0x000000000154c8e3 _ZN22mutation_reader_mergerclEv (scylla + 0x134c8e3)
#5  0x0000000001584141 _ZZN14merging_readerI22mutation_reader_mergerE11fill_bufferEvENKUlvE_clEv (scylla + 0x1384141)
#6  0x00000000015832bb _ZN14merging_readerI22mutation_reader_mergerE11fill_bufferEv (scylla + 0x13832bb)
#7  0x000000000198b564 _ZZZN8sstables10compaction5setupI33noop_compacted_fragments_consumerEEN7seastar6futureIvEET_ENUl20flat_mutation_readerE_clES7_ENUlvE_clEv (scylla + 0x178b564)
#8  0x000000000198b294 _ZN7seastar20noncopyable_functionIFvvEE17direct_vtable_forIZNS_5asyncIZZN8sstables10compaction5setupI33noop_compacted_fragments_consumerEENS_6futureIvEET_ENUl20flat_mutation_readerE_clESC_EUlvE_JEEENS_8futurizeINSt9result_ofIFNSt5decayISB_E4typeEDpNSH_IT0_E4typeEEE4typeEE4typeENS_17thread_attributesEOSB_DpOSK_EUlvE_E4callEPKS2_ (scylla + 0x178b294)
#9  0x0000000004034962 _ZN7seastar14thread_context4mainEv (scylla + 0x3e34962)
download_instructions=gsutil cp gs://upload.scylladb.com/core.scylla.113.74176b16711c47859951540f8461f71d.10260.1643981471000000000000/core.scylla.113.74176b16711c47859951540f8461f71d.10260.1643981471000000000000.gz .
gunzip /var/lib/systemd/coredump/core.scylla.113.74176b16711c47859951540f8461f71d.10260.1643981471000000000000.gz

====================================

Restore Monitor Stack command: $ hydra investigate show-monitor 19780877-77f4-4aad-a107-9361526e92b9
Restore monitor on AWS instance using Jenkins job
Show all stored logs command: $ hydra investigate show-logs 19780877-77f4-4aad-a107-9361526e92b9

Test id: 19780877-77f4-4aad-a107-9361526e92b9

Logs:
grafana - https://cloudius-jenkins-test.s3.amazonaws.com/19780877-77f4-4aad-a107-9361526e92b9/20220204_134927/grafana-screenshot-longevity-lwt-parallel-schema-changes-with-disruptive-24h-test-scylla-per-server-metrics-nemesis-20220204_135053-longevity-lwt-parallel-24h-4-6-monitor-node-19780877-1.png
grafana - https://cloudius-jenkins-test.s3.amazonaws.com/19780877-77f4-4aad-a107-9361526e92b9/20220204_134927/grafana-screenshot-overview-20220204_134927-longevity-lwt-parallel-24h-4-6-monitor-node-19780877-1.png
db-cluster - https://cloudius-jenkins-test.s3.amazonaws.com/19780877-77f4-4aad-a107-9361526e92b9/20220204_135945/db-cluster-19780877.tar.gz
loader-set - https://cloudius-jenkins-test.s3.amazonaws.com/19780877-77f4-4aad-a107-9361526e92b9/20220204_135945/loader-set-19780877.tar.gz
monitor-set - https://cloudius-jenkins-test.s3.amazonaws.com/19780877-77f4-4aad-a107-9361526e92b9/20220204_135945/monitor-set-19780877.tar.gz
sct - https://cloudius-jenkins-test.s3.amazonaws.com/19780877-77f4-4aad-a107-9361526e92b9/20220204_135945/sct-runner-19780877.tar.gz

Jenkins job URL

@psarna
Copy link
Contributor

psarna commented Feb 4, 2022

Hm, I wonder if it can be due to views and their base tables having mismatched schema versions. @eliransin worked on the code regarding such mismatches; I'll try to dig through the git history and see if I can find any suspects or missing backports in that area.

@psarna
Copy link
Contributor

psarna commented Feb 4, 2022

Nope, at least gdb suggests that the base schema has the same uuid as the one the view has in its "base uuid" field:

(gdb) p this->_base._raw._id
$5 = {most_sig_bits = 5133691054414107116, least_sig_bits = -7329783304959484384}
(gdb) p this->_view->_raw._base_id
$6 = {most_sig_bits = 5133691054414107116, least_sig_bits = -7329783304959484384}

@psarna
Copy link
Contributor

psarna commented Feb 4, 2022

But I see in the schema that the view was created with a custom WHERE clause; maybe it's part of the root cause. The WHERE clause is domain IS NOT null AND published_date IS NOT null AND lwt_indicator >= -150000 AND lwt_indicator < 0. I'll keep digging.

@psarna
Copy link
Contributor

psarna commented Feb 4, 2022

I verified via gdb that the restriction that we check against is correct (it represents the lwt_indicator >= -150000 AND lwt_indicator < 0 part of the clause). The next natural step is to check whether the row variable was correct, but it's optimized out... so I can't easily validate if it contains some bogus data.

@psarna
Copy link
Contributor

psarna commented Feb 4, 2022

The final root cause is definitely that we try to operate on a managed_bytes object which points to a null pointer, but that's not very helpful on its own. We do something wrong in this line, which ends up creating a bogus managed_bytes object:
https://github.com/scylladb/scylla/blob/8f5148e921af5282e002f4a875e198630b45f3aa/cql3/expr/expression.cc#L139

@psarna
Copy link
Contributor

psarna commented Feb 4, 2022

Ok, I still haven't found the original root cause, but I have some conclusions. Long story short: when evaluating whether a restriction is satisfied by some condition, we compare two column_definition instances that describe the identical column, but the comparison is done by pointer, so the code falsely concludes that they are not the same column. Then a subtle bug (or rather, the lack of a panic check) picks a wrong value from a vector based on this failed comparison, which results in a segfault.

This function:
https://github.com/scylladb/scylla/blob/8f5148e921af5282e002f4a875e198630b45f3aa/cql3/expr/expression.cc#L111
takes a column and a selection and wants to find the index of this column in the selection, assuming that it's there.

column_definition instance in selection:

(gdb) p *(bag.row_data.sel._columns._M_impl._M_start + 3)

$99 = (const column_definition *) 0x601003fe5538

(gdb) p **(bag.row_data.sel._columns._M_impl._M_start + 3)
$100 = {_name = {u = {external = {str = 0x69646e695f74776c <error: Cannot access memory at address 0x69646e695f74776c>, size = 1869898083, pad = 114 'r'}, internal = {
        str = "lwt_indicator\000\000\000\030$\261\b\020`\000\000\b\000\000\000\000\000", size = 13 '\r'}}, static npos = <optimized out>}, _dropped_at = -9223372036854775808,
  _is_atomic = true, _is_counter = false, _is_view_virtual = column_view_virtual::no, _computation = {
    _M_t = {<std::__uniq_ptr_impl<column_computation, std::default_delete<column_computation> >> = {
        _M_t = {<std::_Tuple_impl<0, column_computation*, std::default_delete<column_computation> >> = {<std::_Tuple_impl<1, std::default_delete<column_computation> >> = {<std::_Head_base<1, std::default_delete<column_computation>, true>> = {_M_head_impl = {<No data fields>}}, <No data fields>}, <std::_Head_base<0, column_computation*, false>> = {
              _M_head_impl = 0x0}, <No data fields>}, <No data fields>}}, <No data fields>}}, _thrift_bits = {is_on_all_components = 0 '\000'}, type = {_b = 0x6010002f1000,
    _p = 0x6010002f1000}, id = 1, ordinal_id = (unknown: 0x3), kind = column_kind::regular_column, column_specification = {_p = 0x60100879a780}}

column_definition instance passed in the column parameter:

(gdb) p cdef
$101 = (const column_definition *) 0x601008647838
(gdb) p *cdef
$102 = {_name = {u = {external = {str = 0x69646e695f74776c <error: Cannot access memory at address 0x69646e695f74776c>, size = 1869898083, pad = 114 'r'}, internal = {
        str = "lwt_indicator\000\000\000\030́\b\020`\000\000\b\000\000\000\000\000", size = 13 '\r'}}, static npos = <optimized out>}, _dropped_at = -9223372036854775808,
  _is_atomic = true, _is_counter = false, _is_view_virtual = column_view_virtual::no, _computation = {
    _M_t = {<std::__uniq_ptr_impl<column_computation, std::default_delete<column_computation> >> = {
        _M_t = {<std::_Tuple_impl<0, column_computation*, std::default_delete<column_computation> >> = {<std::_Tuple_impl<1, std::default_delete<column_computation> >> = {<std::_Head_base<1, std::default_delete<column_computation>, true>> = {_M_head_impl = {<No data fields>}}, <No data fields>}, <std::_Head_base<0, column_computation*, false>> = {
              _M_head_impl = 0x0}, <No data fields>}, <No data fields>}}, <No data fields>}}, _thrift_bits = {is_on_all_components = 0 '\000'}, type = {_b = 0x6010002f1000,
    _p = 0x6010002f1000}, id = 1, ordinal_id = (unknown: 0x3), kind = column_kind::regular_column, column_specification = {_p = 0x601008900460}}

These are indeed definitions of the same column, but they have different pointers, and index_of from this line:
https://github.com/scylladb/scylla/blob/8f5148e921af5282e002f4a875e198630b45f3aa/cql3/expr/expression.cc#L139
is implemented by performing pointer comparison.

Also, index_of from this line returns -1 when it doesn't find the column, and this condition is not checked here - which results in taking the value at index -1 of this vector, which is bogus and results in a segfault.

These pointers are most likely different because they somehow come from two schemas - one that was recently updated and an old one, perhaps? I have no idea why, but at least we more or less know why the segfault happened.

@cvybhu I think that we also need to add a panic-check in this line:
https://github.com/scylladb/scylla/blob/8f5148e921af5282e002f4a875e198630b45f3aa/cql3/expr/expression.cc#L139

If index_of returns -1, it would be safer to throw instead of silently taking a value from the wrong index.
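
For illustration, here is a minimal, self-contained sketch of such a guard. The types and helper below are stand-ins, not Scylla's actual column_definition/selection classes; the error text simply mirrors the message reported later in this thread.

// Illustrative stand-ins only; not Scylla's real types or the actual patch.
#include <stdexcept>
#include <string>
#include <vector>

struct column_def { std::string name; };

// Mimics the index_of() behaviour described above: compares by pointer and
// returns -1 on a miss, which is exactly the case the guard is meant to catch.
int index_of(const std::vector<const column_def*>& selection, const column_def* col) {
    for (size_t i = 0; i < selection.size(); ++i) {
        if (selection[i] == col) {
            return static_cast<int>(i);
        }
    }
    return -1;
}

std::string get_value(const std::vector<std::string>& row,
                      const std::vector<const column_def*>& selection,
                      const column_def* col) {
    int idx = index_of(selection, col);
    if (idx < 0) {
        // Fail loudly instead of silently reading row[-1].
        throw std::runtime_error("Column definition " + col->name +
                                 " does not match any column in the query selection");
    }
    return row[static_cast<size_t>(idx)];
}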

@psarna
Copy link
Contributor

psarna commented Feb 4, 2022

Hmm, I already went AFK, but the symptom above would definitely happen if the cached "select_statement" in the view_info class were not invalidated when a new definition of the base table arrives. @eliransin, any ideas why/how that could happen?

@psarna
Copy link
Contributor

psarna commented Feb 4, 2022

If it's a temporary race, throwing when we detect the mismatch described above would fix the issue. I'm not sure, however, whether it's temporary or not; I'll continue the investigation after the weekend.

@roydahan
Copy link

roydahan commented Feb 7, 2022

BTW, we have a fairly easy reproducer for this, so if it needs to be run with additional assertions or a special build to shed more light on where this comes from, we should be able to do it quite quickly.

psarna added a commit to psarna/scylla that referenced this issue Feb 7, 2022
As observed in scylladb#10026, after schema changes it somehow happened
that a column definition that does not match any of the base table
columns was passed to the expression verification code.
The function that looks up the index of a column happens to return
-1 when it doesn't find anything, so using this returned index
without checking if it's nonnegative results in accessing invalid
vector data, and a segfault or silent memory corruption.
Therefore, an explicit check is added to see if the column was actually
found. This serves two purposes:
 - avoiding segfaults/memory corruption
 - making it easier to investigate the root cause of scylladb#10026
@DoronArazii DoronArazii modified the milestones: 5.1, 5.2 Oct 12, 2022
@nyh
Copy link
Contributor

nyh commented Nov 2, 2022

@gleb-cloudius suspects this is the same issue #11542 and I suspect it may also be the same issue as #9059 - both those issues involve a view build happening in parallel with altering the base table (even in a trivial way).

Piotr Sarna had a pull request for this: #10040

We should go back to this issue and the pull request, and consider merging it (but @gleb-cloudius, did it fix your problem?)

@nyh nyh self-assigned this Nov 2, 2022
@gleb-cloudius
Copy link
Contributor

@gleb-cloudius suspects this is the same issue #11542 and I suspect it may also be the same issue as #9059 - both those issues involve a view build happening in parallel with altering the base table (even in a trivial way).

By this point I more than suspect that #11542 is a dup of this one; I am sure of it. But the bug is much more severe than what you suggest. There is no race involved. In my reproducer I can change the schema while there is no load running, and later, when the load starts, it fails.

Piotr Sarna had a pull request for this: #10040

We should go back to this issue and the pull request, and consider merging it (but @gleb-cloudius, did it fix your problem?)

Did not check. I can try.

@gleb-cloudius
Copy link
Contributor

Piotr Sarna had a pull request for this: #10040
We should go back to this issue and the pull request, and consider merging it (but @gleb-cloudius, did it fix your problem?)

Did not check. I can try.

Yes, it does. So we have had a fix for a year, but we just forgot about it. I wonder how MV is even used by anyone with this bug.

@nyh
Copy link
Contributor

nyh commented Nov 3, 2022

@gleb-cloudius the only problem with Piotr's patch was that we didn't know if it solved any bug because we didn't have a good reliable reproducing test for it and Eliran was going to test it but I guess never got around to it.

If I understand correctly, today you have an easy to reproduce test for this issue and its fix. Is it a unit/functional test (would have been really nice...) or part of a bigger and slower test?

@gleb-cloudius
Copy link
Contributor

@gleb-cloudius the only problem with Piotr's patch was that we didn't know if it solved any bug because we didn't have a good reliable reproducing test for it and Eliran was going to test it but I guess never got around to it.

If I understand correctly, today you have an easy to reproduce test for this issue and its fix. Is it a unit/functional test (would have been really nice...) or part of a bigger and slower test?

I have a bigger and slower reproducer (but 100% one) here #11542 (comment), but I am sure that it is possible to reduce it to one insert and one update.

@nyh
Copy link
Contributor

nyh commented Nov 3, 2022

@gleb-cloudius I checked that #10040 does not fix #9059, so either #9059 is not a duplicate of this one despite the partial similarity, or #10040 is an incorrect or incomplete fix.

@nyh
Copy link
Contributor

nyh commented Nov 3, 2022

Yes, it does. So we have had a fix for a year, but we just forgot about it. I wonder how MV is even used by anyone with this bug.

I tried to create a simpler reproducer than what you have with cassandra-stress, but simpler things seem to work without failing. I'm still trying to find a simpler reproducer, but I guess this bug does not affect everyone using MV, only a more specific combination of features.

@gleb-cloudius
Copy link
Contributor

gleb-cloudius commented Nov 6, 2022 via email

@roydahan
Copy link

roydahan commented Nov 9, 2022

@gleb-cloudius / @nyh, I'm reading this issue and its suggested duplicates (#10192, #11542) and trying to understand their impact.
It looks like (any?) schema change with LWT tables causes errors/crashes/client write failures.

Is it that bad?

@juliayakovlev
Copy link

Reproduced with 2022.2.0~rc4-20221106.f5714e0db12f

2022-11-09 18:39:07.771: (DisruptionEvent Severity.NORMAL) period_type=begin event_id=bcf0a028-3879-472d-9b49-73ebb45bbfae: nemesis_name=ToggleTableGcMode target_node=Node longevity-lwt-1loader-3h-2022-2-db-node-fcbccec7-3 [13.51.178.98 | 10.0.3.253] (seed: False)

Running command "cqlsh --no-color   --request-timeout=120 --connect-timeout=60  -e "ALTER MATERIALIZED VIEW  cqlstress_lwt_example.blogposts_update_2_columns_lwt_indicator_after_update WITH tombstone_gc = {'mode': 'repair'};" 10.0.3.253 9042"...

2022-11-09 18:39:17.919 <2022-11-09 18:39:17.000>: (DatabaseLogEvent Severity.ERROR) period_type=one-time event_id=1a22e038-6d53-4737-945d-2d3c32fcddd4: type=RUNTIME_ERROR regex=std::runtime_error line_number=9302 node=longevity-lwt-1loader-3h-2022-2-db-node-fcbccec7-3
2022-11-09T18:39:17+00:00 longevity-lwt-1loader-3h-2022-2-db-node-fcbccec7-3      !ERR | scylla[7777]:  [shard 2] storage_proxy - exception during mutation write to 10.0.3.253: std::runtime_error (Column definition lwt_indicator does not match any column in the query selection)

Installation details

Kernel Version: 5.15.0-1022-aws
Scylla version (or git commit hash): 2022.2.0~rc4-20221106.f5714e0db12f with build-id f4a927b8a00fbcd8d48640835192aeaa7968b1f2
Relocatable Package: http://downloads.scylladb.com/downloads/scylla-enterprise/relocatable/scylladb-2022.2/scylla-enterprise-x86_64-package-2022.2.0-rc4.0.20221106.f5714e0db12f.tar.gz
Cluster size: 4 nodes (i3.2xlarge)

Scylla Nodes used in this run:

  • longevity-lwt-1loader-3h-2022-2-db-node-fcbccec7-5 (13.51.238.255 | 10.0.2.58) (shards: 7)
  • longevity-lwt-1loader-3h-2022-2-db-node-fcbccec7-4 (16.171.54.142 | 10.0.0.125) (shards: 7)
  • longevity-lwt-1loader-3h-2022-2-db-node-fcbccec7-3 (13.51.178.98 | 10.0.3.253) (shards: 7)
  • longevity-lwt-1loader-3h-2022-2-db-node-fcbccec7-2 (13.49.23.153 | 10.0.2.107) (shards: 7)
  • longevity-lwt-1loader-3h-2022-2-db-node-fcbccec7-1 (13.49.246.176 | 10.0.1.210) (shards: 7)

OS / Image: ami-0a67b1713841a5df7 (aws: eu-north-1)

Test: longevity-lwt-1loader-3h-test
Test id: fcbccec7-e31f-4933-83fb-3d36d2f4921e
Test name: enterprise-2022.2/longevity/longevity-lwt-1loader-3h-test
Test config file(s):


  • Restore Monitor Stack command: $ hydra investigate show-monitor fcbccec7-e31f-4933-83fb-3d36d2f4921e
  • Restore monitor on AWS instance using Jenkins job
  • Show all stored logs command: $ hydra investigate show-logs fcbccec7-e31f-4933-83fb-3d36d2f4921e

Logs:

Jenkins job URL

@gleb-cloudius
Copy link
Contributor

@gleb-cloudius / @nyh, I'm reading this issue and its suggested duplicates (#10192, #11542) and trying to understand their impact. It looks like (any?) schema change with LWT tables causes errors/crashes/client write failures.

It is not clear that LWT is needed at all. The bug is in the MV code, and the LWT test uses views.

Is it that bad?

Well, it looks like MV is not supposed to be usable at all, but the fact that only this test ever hits it suggests that the bug is in a corner case. Until it is investigated further by the MV people (person), it is hard to tell. Maybe everyone else is just lucky.

@nyh
Copy link
Contributor

nyh commented Nov 13, 2022

It is not clear that LWT is needed at all. The bug is in the MV code and LWT test uses views.
...
Until investigated further by MV people (person) it is hard to tell. May be everyone else are just lucky.

I wish that other people, and not just me, cared more about distilling long and slow tests into minimalistic tests that can reproduce bugs in a fraction of a second and point the finger at exactly which feature is to blame. I'll try to do this today, because a statement like "it's not clear that LWT is needed at all" just means that we haven't finished distilling this test yet - and someone needs to do that. I guess it will have to be me.

Also, even without distilling this test, @gleb-cloudius already claims that #10040 fixes it. But as I noted in #10040, the patch as it currently stands breaks other tests, so the patch probably needs to be changed before it goes in. And if we do change it, I'd like to have a test that can check this fix without needing to ask Gleb to run his reproducer for me each time.

Well, it looks like MV is not supposed to be usable at all

@tzach I think @gleb-cloudius is asking you to document that MV is not just "experimental" (like in Cassandra); it is in fact "not supposed to be usable at all". I'm guessing you won't like this statement, so we need to think about what needs to be done to make it not true (if it's even true).

@nyh
Copy link
Contributor

nyh commented Nov 15, 2022

@gleb-cloudius I have a simple cql-pytest reproducer which doesn't need any concurrency or lots of data. Three details of your bigger reproducer ended up being critical for reproducing it in a smaller test:

  1. There must be a filter on a base-regular-column-that-becomes-a-view-key-column defining the view - as there is in your reproducer in #11542 (Load failed after "memtable_flush_period_in_ms" property has been modified).
  2. There must be a modification to the base table (even just a comment) and one write after it. No need for any concurrency at all.
  3. There must be an insert into the base table before the modification (if there is none, the view code never gets to notice and wrongly cache the base table).

Note that "view building" is not listed here. It's not needed to reproduce this bug. Normal view updates break just as well.

The reproducer:

# Reproducer for issue #11542: We have a table with a materialized
# view with a filter and some data, at which point we modify the base table
# (e.g., add some silly comment) and then try to modify the data.
# The last modification used to fail, logging "Column definition v does not
# match any column in the query selection".
# The same test without the silly base-table modification works, and so does
# the same test without the filter in the materialized view that uses the
# base-regular column v. So does the same test without pre-modification data.
#
# This test is Scylla-only because Cassandra does not support filtering
# on a base-regular column v that is only a key column in the view.
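# Note: new_test_table, new_materialized_view and unique_name are helper
# utilities from the cql-pytest test suite; they are not defined in this snippet.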
def test_view_update_and_alter_base(cql, test_keyspace, scylla_only):
    with new_test_table(cql, test_keyspace, 'p int primary key, v int') as table:
        with new_materialized_view(cql, table, '*', 'v, p', 'v >= 0 and p is not null') as mv:
            cql.execute(f'INSERT INTO {table} (p,v) VALUES (1,1)')
            # In our tests, MV writes are synchronous, so we can read
            # immediately
            assert len(list(cql.execute(f"SELECT v from {mv}"))) == 1
            # Alter the base table, with a silly comment change that doesn't
            # change anything important - but still the base schema changes.
            cql.execute(f"ALTER TABLE {table} WITH COMMENT = '{unique_name()}'")
            # Try to modify an item. This failed in #11542.
            cql.execute(f'UPDATE {table} SET v=-1 WHERE p=1')
            assert len(list(cql.execute(f"SELECT v from {mv}"))) == 0

@gleb-cloudius
Contributor

@gleb-cloudius I have a simple cql-pytest reproducer which doesn't need any concurrency or lots of data. Two parts of reproducer ended up being critical for reproducing it:

Great. BTW as I noted here #10026 (comment) no concurrency is needed with the long repro as well.

@nyh
Contributor

nyh commented Nov 15, 2022

Great. BTW as I noted here #10026 (comment) no concurrency is needed with the long repro as well.

Right. Your comment inspired me to drop my attempts to do things concurrently with the view building, and instead to try to figure out what was really important for this failure. Now to fix it (probably #10040 or something based on it; I need to check why other tests begin to fail).

nyh added a commit to nyh/scylla that referenced this issue Nov 15, 2022
When we write to a materialized view, we need to know some information
defined in the base table such as the columns in its schema. We have
a "view_info" object that tracks each view and its base.

This view_info object has a couple of mutable attributes which are
used to lazily-calculate and cache the SELECT statement needed to
read from the base table. If the base-table schema ever changes -
and the code calls set_base_info() at that point - we need to forget
this cached statement. If we don't (as before this patch), the SELECT
will use the wrong schema and writes will no longer work.

This patch also includes a reproducing test that failed before this
patch, and passes afterwards. The test creates a base table with a
view that has a non-trivial SELECT (it has a filter on one of the
base-regular columns), makes a benign modification to the base table
(just a silly addition of a comment), and then tries to write to the
view - and before this patch it fails.

Fixes scylladb#10026
Fixes scylladb#11542
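
To make the mechanism concrete, here is a minimal, self-contained C++ sketch of the caching pattern the commit message describes. It is not Scylla's actual code: names such as base_schema, _cached_select and the string-building logic are illustrative assumptions; only the roles of view_info and set_base_info() come from the commit message above.

// Sketch only: a lazily-built, cached statement that must be dropped when the
// base schema changes, mirroring what set_base_info() has to do in the fix.
#include <iostream>
#include <optional>
#include <string>

struct base_schema {
    std::string columns;   // stand-in for the real base-table column definitions
};

class view_info {
    base_schema _base;
    // Lazily computed, cached "SELECT" used when writing to the view.
    mutable std::optional<std::string> _cached_select;
public:
    explicit view_info(base_schema b) : _base(std::move(b)) {}

    // Called whenever the base table's schema changes. The fix: forget the
    // cached statement so the next view update rebuilds it for the new schema.
    void set_base_info(base_schema b) {
        _base = std::move(b);
        _cached_select.reset();   // without this reset, the stale SELECT keeps being used
    }

    const std::string& select_statement() const {
        if (!_cached_select) {
            _cached_select = "SELECT " + _base.columns + " FROM base";
        }
        return *_cached_select;
    }
};

int main() {
    view_info vi{{"p, v"}};
    std::cout << vi.select_statement() << '\n';   // built and cached against "p, v"
    // A column addition is used here only to make the rebuild visible; in the
    // real bug even a benign ALTER (a comment change) produces a new base schema.
    vi.set_base_info({"p, v, w"});
    std::cout << vi.select_statement() << '\n';   // rebuilt, not the stale cached one
}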
avikivity pushed a commit that referenced this issue Dec 5, 2022
(cherry picked from commit 2f2f01b)
avikivity pushed a commit that referenced this issue Dec 5, 2022
(cherry picked from commit 2f2f01b)
@avikivity
Member

Backported to 5.0, 5.1.
