test_interrupt_build_process dtest failed with schema_registry - Tried to build a global schema for view ks.t_by_v2 with an uninitialized base info #14011

Closed
bhalevy opened this issue May 24, 2023 · 43 comments · Fixed by #14861
Labels: area/materialized views, P1 Urgent, symptom/ci stability, tests/dtest, type/bug

@bhalevy
Member

bhalevy commented May 24, 2023

Seen in https://jenkins.scylladb.com/view/master/job/scylla-master/job/dtest-daily-release/256/artifact/logs-full.release.016/1684893864334_materialized_views_test.py%3A%3ATestInterruptBuildProcess%3A%3Atest_interrupt_build_process_with_resharding_max_to_half_test/node2.log

Scylla version 5.4.0~dev-0.20230524.88fd7f711108 with build-id 5d5b24db70618b9af2ce3ae97752a85522312cbb starting ...
...
INFO  2023-05-24 01:53:46,804 [shard  0] init - Scylla version 5.4.0~dev-0.20230524.88fd7f711108 initialization completed.
INFO  2023-05-24 01:57:45,154 [shard  0] view - Building view ks.t_by_v, starting at token minimum token
INFO  2023-05-24 01:57:45,222 [shard  0] migration_manager - Requesting schema pull from 127.0.75.3:13
INFO  2023-05-24 01:57:45,222 [shard  0] migration_manager - Pulling schema from 127.0.75.3:13
INFO  2023-05-24 01:57:45,223 [shard  0] migration_manager - Requesting schema pull from 127.0.75.3:20
INFO  2023-05-24 01:57:45,227 [shard  0] migration_manager - Requesting schema pull from 127.0.75.3:1
INFO  2023-05-24 01:57:45,227 [shard  0] migration_manager - Requesting schema pull from 127.0.75.3:15
INFO  2023-05-24 01:57:45,229 [shard  0] migration_manager - Requesting schema pull from 127.0.75.3:3
INFO  2023-05-24 01:57:45,232 [shard  0] migration_manager - Requesting schema pull from 127.0.75.3:31
INFO  2023-05-24 01:57:45,235 [shard  0] migration_manager - Requesting schema pull from 127.0.75.1:5
INFO  2023-05-24 01:57:45,235 [shard  0] migration_manager - Pulling schema from 127.0.75.1:0
INFO  2023-05-24 01:57:45,235 [shard  0] migration_manager - Requesting schema pull from 127.0.75.1:7
INFO  2023-05-24 01:57:45,236 [shard  0] migration_manager - Requesting schema pull from 127.0.75.1:18
INFO  2023-05-24 01:57:45,238 [shard  0] schema_tables - Altering ks.t id=fff88220-f9d5-11ed-81ba-e13a14d6d4e6 version=1511e4dd-27f6-35f2-a7c9-bca005c30d17
INFO  2023-05-24 01:57:45,239 [shard  0] schema_tables - Creating ks.t_by_v2 id=603cfa80-f9d6-11ed-81ba-e13a14d6d4e6 version=ebf7abfb-7778-3d27-9f58-c47c2d5af4c6
WARN  2023-05-24 01:57:45,244 [shard 12] storage_proxy - Failed to apply mutation from 127.0.75.3#12: data_dictionary::no_such_column_family (Can't find a column family with UUID 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6)
WARN  2023-05-24 01:57:45,244 [shard 11] storage_proxy - Failed to apply mutation from 127.0.75.3#11: data_dictionary::no_such_column_family (Can't find a column family with UUID 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6)
WARN  2023-05-24 01:57:45,244 [shard  1] storage_proxy - Failed to apply mutation from 127.0.75.3#1: data_dictionary::no_such_column_family (Can't find a column family with UUID 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6)
WARN  2023-05-24 01:57:45,244 [shard 26] storage_proxy - Failed to apply mutation from 127.0.75.3#26: data_dictionary::no_such_column_family (Can't find a column family with UUID 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6)
INFO  2023-05-24 01:57:45,245 [shard  3] query_processor - Column definitions for ks.t changed, invalidating related prepared statements
WARN  2023-05-24 01:57:45,245 [shard  2] storage_proxy - Failed to apply mutation from 127.0.75.3#2: data_dictionary::no_such_column_family (Can't find a column family with UUID 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6)
WARN  2023-05-24 01:57:45,247 [shard  7] storage_proxy - Failed to apply mutation from 127.0.75.1#7: data_dictionary::no_such_column_family (Can't find a column family with UUID 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6)
WARN  2023-05-24 01:57:45,247 [shard 10] storage_proxy - Failed to apply mutation from 127.0.75.1#10: data_dictionary::no_such_column_family (Can't find a column family with UUID 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6)
ERROR 2023-05-24 01:57:45,248 [shard 24] schema_registry - Tried to build a global schema for view ks.t_by_v2 with an uninitialized base info, at: 0x59eb98e 0x59ebf40 0x59ec228 0x5622487 0x1e17696 0x2e94915 0x2f6f263 0x2f6d509 0x2f5be89 0x2f71e71 0x15fd1d2 0x15fbd6a 0x15fb429 0x15f9473 0x15f8811 0x59d24f5 0x59d20b3 0x59d2b59 0x5652b94 0x5653e17 0x56764b1 0x562321a /jenkins/workspace/scylla-master/dtest-daily-release/scylla/.ccm/scylla-repository/88fd7f71110878a8bcd317aa9f5183df722def76/libreloc/libc.so.6+0x8b12c /jenkins/workspace/scylla-master/dtest-daily-release/scylla/.ccm/scylla-repository/88fd7f71110878a8bcd317aa9f5183df722def76/libreloc/libc.so.6+0x10cbbf
   --------
   seastar::continuation<seastar::internal::promise_base_with_type<void>, seastar::rpc::server::connection::process()::$_22::operator()()::{lambda()#1}::operator()() const::{lambda()#2}::operator()()::{lambda(std::tuple<std::optional<unsigned long>, unsigned long, long, std::optional<seastar::rpc::rcv_buf> >)#1}, seastar::future<std::tuple<std::optional<unsigned long>, unsigned long, long, std::optional<seastar::rpc::rcv_buf> > >::then_impl_nrvo<seastar::rpc::server::connection::process()::$_22::operator()()::{lambda()#1}::operator()() const::{lambda()#2}::operator()()::{lambda(std::tuple<std::optional<unsigned long>, unsigned long, long, std::optional<seastar::rpc::rcv_buf> >)#1}, seastar::future<void> >(seastar::rpc::server::connection::process()::$_22::operator()()::{lambda()#1}::operator()() const::{lambda()#2}::operator()()::{lambda(std::tuple<std::optional<unsigned long>, unsigned long, long, std::optional<seastar::rpc::rcv_buf> >)#1}&&)::{lambda(seastar::internal::promise_base_with_type<void>&&, seastar::rpc::server::connection::process()::$_22::operator()()::{lambda()#1}::operator()() const::{lambda()#2}::operator()()::{lambda(std::tuple<std::optional<unsigned long>, unsigned long, long, std::optional<seastar::rpc::rcv_buf> >)#1}&, seastar::future_state<std::tuple<std::optional<unsigned long>, unsigned long, long, std::optional<seastar::rpc::rcv_buf> > >&&)#1}, std::tuple<std::optional<unsigned long>, unsigned long, long, std::optional<seastar::rpc::rcv_buf> > >
   --------
   seastar::internal::do_until_state<seastar::rpc::server::connection::process()::$_22::operator()()::{lambda()#1}::operator()() const::{lambda()#1}, seastar::rpc::server::connection::process()::$_22::operator()()::{lambda()#1}::operator()() const::{lambda()#2}>
   --------
   seastar::continuation<seastar::internal::promise_base_with_type<void>, seastar::rpc::server::connection::process()::$_23, seastar::future<void>::then_wrapped_nrvo<seastar::future<void>, seastar::rpc::server::connection::process()::$_23>(seastar::rpc::server::connection::process()::$_23&&)::{lambda(seastar::internal::promise_base_with_type<void>&&, seastar::rpc::server::connection::process()::$_23&, seastar::future_state<seastar::internal::monostate>&&)#1}, void>
   --------
   seastar::continuation<seastar::internal::promise_base_with_type<void>, seastar::future<void>::finally_body<seastar::rpc::server::connection::process()::$_24, false>, seastar::future<void>::then_wrapped_nrvo<seastar::future<void>, seastar::future<void>::finally_body<seastar::rpc::server::connection::process()::$_24, false> >(seastar::future<void>::finally_body<seastar::rpc::server::connection::process()::$_24, false>&&)::{lambda(seastar::internal::promise_base_with_type<void>&&, seastar::future<void>::finally_body<seastar::rpc::server::connection::process()::$_24, false>&, seastar::future_state<seastar::internal::monostate>&&)#1}, void>
Aborting on shard 24.
Backtrace:
  0x5641a78
  0x5676002
  /jenkins/workspace/scylla-master/dtest-daily-release/scylla/.ccm/scylla-repository/88fd7f71110878a8bcd317aa9f5183df722def76/libreloc/libc.so.6+0x3cb1f
  /jenkins/workspace/scylla-master/dtest-daily-release/scylla/.ccm/scylla-repository/88fd7f71110878a8bcd317aa9f5183df722def76/libreloc/libc.so.6+0x8ce5b
  /jenkins/workspace/scylla-master/dtest-daily-release/scylla/.ccm/scylla-repository/88fd7f71110878a8bcd317aa9f5183df722def76/libreloc/libc.so.6+0x3ca75
  /jenkins/workspace/scylla-master/dtest-daily-release/scylla/.ccm/scylla-repository/88fd7f71110878a8bcd317aa9f5183df722def76/libreloc/libc.so.6+0x267fb
  0x5622523
  0x1e17696
  0x2e94915
  0x2f6f263
  0x2f6d509
  0x2f5be89
  0x2f71e71
  0x15fd1d2
  0x15fbd6a
  0x15fb429
  0x15f9473
  0x15f8811
  0x59d24f5
  0x59d20b3
  0x59d2b59
  0x5652b94
  0x5653e17
  0x56764b1
  0x562321a
  /jenkins/workspace/scylla-master/dtest-daily-release/scylla/.ccm/scylla-repository/88fd7f71110878a8bcd317aa9f5183df722def76/libreloc/libc.so.6+0x8b12c
  /jenkins/workspace/scylla-master/dtest-daily-release/scylla/.ccm/scylla-repository/88fd7f71110878a8bcd317aa9f5183df722def76/libreloc/libc.so.6+0x10cbbf

Decoded:

[Backtrace #0]
void seastar::backtrace<seastar::current_backtrace_tasklocal()::$_3>(seastar::current_backtrace_tasklocal()::$_3&&) at ./build/release/seastar/./seastar/include/seastar/util/backtrace.hh:60
 (inlined by) seastar::current_backtrace_tasklocal() at ./build/release/seastar/./seastar/src/util/backtrace.cc:86
seastar::current_tasktrace() at ./build/release/seastar/./seastar/src/util/backtrace.cc:137
seastar::current_backtrace() at ./build/release/seastar/./seastar/src/util/backtrace.cc:170
void log_error_and_backtrace<std::basic_string_view<char, std::char_traits<char> > >(seastar::logger&, std::basic_string_view<char, std::char_traits<char> > const&) at ./build/release/seastar/./seastar/src/core/on_internal_error.cc:38
 (inlined by) seastar::on_internal_error(seastar::logger&, std::basic_string_view<char, std::char_traits<char> >) at ./build/release/seastar/./seastar/src/core/on_internal_error.cc:42
global_schema_ptr at ./schema/schema_registry.cc:331
service::storage_proxy::mutate_locally(seastar::lw_shared_ptr<schema const> const&, frozen_mutation const&, tracing::trace_state_ptr, seastar::bool_class<db::force_sync_tag>, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::smp_service_group, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>) at ./service/storage_proxy.cc:2790
operator() at ./service/storage_proxy.cc:511
 (inlined by) operator() at ./service/storage_proxy.cc:443
...

After restart, view building eventually succeeded:

INFO  2023-05-24 02:03:39,779 [shard  0] init - Scylla version 5.4.0~dev-0.20230524.88fd7f711108 initialization completed.
INFO  2023-05-24 02:03:39,779 [shard  0] view - Building view ks.t_by_v2, starting at token minimum token
INFO  2023-05-24 02:03:39,780 [shard  0] view - Building view ks.t_by_v, starting at token minimum token
ERROR 2023-05-24 02:03:39,781 [shard  6] view - Error applying view update to 127.0.75.3 (view: ks.t_by_v2, base token: -9221727486224115472, view token: -8987928484838087894): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
ERROR 2023-05-24 02:03:39,781 [shard  1] view - Error applying view update to 127.0.75.3 (view: ks.t_by_v2, base token: -9222967543607267894, view token: -1947216961110107968): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
ERROR 2023-05-24 02:03:39,781 [shard  0] view - Error applying view update to 127.0.75.3 (view: ks.t_by_v2, base token: -9223297786983086897, view token: 7882880537297481526): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
ERROR 2023-05-24 02:03:39,781 [shard 11] view - Error applying view update to 127.0.75.3 (view: ks.t_by_v2, base token: -9220419336659099601, view token: 8895350330199977635): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
ERROR 2023-05-24 02:03:39,781 [shard 15] view - Error applying view update to 127.0.75.3 (view: ks.t_by_v2, base token: -9219346447766400967, view token: 2990881561334808888): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:39,782 [shard  6] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:39,782 [shard  1] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:39,782 [shard 11] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:39,782 [shard 15] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:39,782 [shard  0] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
ERROR 2023-05-24 02:03:39,783 [shard 10] view - Error applying view update to 127.0.75.3 (view: ks.t_by_v2, base token: -9216028065849746303, view token: 8512825797236005302): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
ERROR 2023-05-24 02:03:39,783 [shard  9] view - Error applying view update to 127.0.75.3 (view: ks.t_by_v2, base token: -9220924248064531910, view token: -1648106516271635869): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
ERROR 2023-05-24 02:03:39,783 [shard 13] view - Error applying view update to 127.0.75.3 (view: ks.t_by_v2, base token: -9219859730883390703, view token: -7918806164541364003): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
ERROR 2023-05-24 02:03:39,783 [shard  2] view - Error applying view update to 127.0.75.3 (view: ks.t_by_v2, base token: -9222736443889999821, view token: -5319330787090229318): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:39,783 [shard 10] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:39,783 [shard  9] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:39,783 [shard 13] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:39,783 [shard  2] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
ERROR 2023-05-24 02:03:39,783 [shard  8] view - Error applying view update to 127.0.75.3 (view: ks.t_by_v2, base token: -9216704778994998342, view token: -6963653130800501646): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
ERROR 2023-05-24 02:03:39,783 [shard 16] view - Error applying view update to 127.0.75.3 (view: ks.t_by_v2, base token: -9210072177718897089, view token: 6882416012201346086): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:39,783 [shard  8] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:39,783 [shard 16] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
ERROR 2023-05-24 02:03:39,783 [shard  5] view - Error applying view update to 127.0.75.3 (view: ks.t_by_v2, base token: -9221834874718566760, view token: -4748649548113281064): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
ERROR 2023-05-24 02:03:39,783 [shard  7] view - Error applying view update to 127.0.75.3 (view: ks.t_by_v2, base token: -9216902851336773461, view token: -277866046175722077): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
ERROR 2023-05-24 02:03:39,783 [shard  3] view - Error applying view update to 127.0.75.3 (view: ks.t_by_v2, base token: -9222508416203745186, view token: -7926617861274549675): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:39,783 [shard  5] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:39,784 [shard  7] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
ERROR 2023-05-24 02:03:39,784 [shard 12] view - Error applying view update to 127.0.75.3 (view: ks.t_by_v2, base token: -9220089467296997912, view token: 1164057663306335344): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:39,784 [shard  3] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:39,784 [shard 12] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
ERROR 2023-05-24 02:03:39,784 [shard 14] view - Error applying view update to 127.0.75.3 (view: ks.t_by_v2, base token: -9214907925253619717, view token: -8987524013768975590): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:39,784 [shard 14] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
ERROR 2023-05-24 02:03:39,785 [shard  4] view - Error applying view update to 127.0.75.3 (view: ks.t_by_v2, base token: -9217669740212181123, view token: -7761093163043046939): exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:39,785 [shard  4] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:40,782 [shard  6] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:40,782 [shard  1] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:40,782 [shard 11] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:40,782 [shard 15] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:40,782 [shard  0] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:40,784 [shard 13] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:40,784 [shard 10] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:40,785 [shard  8] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:40,785 [shard 16] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:40,785 [shard  5] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:40,785 [shard  2] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:40,785 [shard  7] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:40,785 [shard  3] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:40,785 [shard 14] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:40,785 [shard 12] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:40,785 [shard  9] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
WARN  2023-05-24 02:03:40,786 [shard  4] view - Error executing build step for base ks.t: exceptions::unavailable_exception (Cannot achieve consistency level for cl ONE. Requires 1, alive 0)
INFO  2023-05-24 02:03:41,714 [shard  0] storage_service - Node 127.0.75.3 state jump to normal
INFO  2023-05-24 02:03:41,715 [shard  0] gossip - InetAddress 127.0.75.3 is now UP, status = NORMAL
INFO  2023-05-24 02:03:41,730 [shard  1] compaction - [Compact system.peers 34ca2020-f9d7-11ed-a0b3-a37fe78104ff] Compacting [/jenkins/workspace/scylla-master/dtest-daily-release/scylla/.dtest/dtest-mgwwr68d/test/node2/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-137-big-Data.db:level=0:origin=memtable,/jenkins/workspace/scylla-master/dtest-daily-release/scylla/.dtest/dtest-mgwwr68d/test/node2/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-120-big-Data.db:level=0:origin=compaction]
INFO  2023-05-24 02:03:41,737 [shard  1] compaction - [Compact system.peers 34ca2020-f9d7-11ed-a0b3-a37fe78104ff] Compacted 2 sstables to [/jenkins/workspace/scylla-master/dtest-daily-release/scylla/.dtest/dtest-mgwwr68d/test/node2/data/system/peers-37f71aca7dc2383ba70672528af04d4f/me-154-big-Data.db:level=0]. 23kB to 11kB (~50% of original) in 5ms = 2MB/s. ~256 total partitions merged to 1.
INFO  2023-05-24 02:03:41,740 [shard  3] compaction - [Compact system.scylla_local 34cba6c0-f9d7-11ed-92b1-a37de78104ff] Compacting [/jenkins/workspace/scylla-master/dtest-daily-release/scylla/.dtest/dtest-mgwwr68d/test/node2/data/system/scylla_local-2972ec7ffb2038ddaac1d876f2e3fcbd/me-377-big-Data.db:level=0:origin=memtable,/jenkins/workspace/scylla-master/dtest-daily-release/scylla/.dtest/dtest-mgwwr68d/test/node2/data/system/scylla_local-2972ec7ffb2038ddaac1d876f2e3fcbd/me-360-big-Data.db:level=0:origin=compaction]
INFO  2023-05-24 02:03:41,748 [shard  3] compaction - [Compact system.scylla_local 34cba6c0-f9d7-11ed-92b1-a37de78104ff] Compacted 2 sstables to [/jenkins/workspace/scylla-master/dtest-daily-release/scylla/.dtest/dtest-mgwwr68d/test/node2/data/system/scylla_local-2972ec7ffb2038ddaac1d876f2e3fcbd/me-394-big-Data.db:level=0]. 11kB to 6kB (~51% of original) in 5ms = 1MB/s. ~256 total partitions merged to 1.
INFO  2023-05-24 02:03:44,948 [shard  0] view - Finished building view ks.t_by_v2
INFO  2023-05-24 02:03:44,949 [shard  0] view - Finished building view ks.t_by_v
INFO  2023-05-24 02:03:54,717 [shard  0] storage_service - Node 127.0.75.3 state jump to normal
@bhalevy
Member Author

bhalevy commented May 24, 2023

Cc @eliransin

@DoronArazii

@nyh / @cvybhu can you please help with triaging this issue

@nyh
Contributor

nyh commented Jul 6, 2023

@nyh / @cvybhu can you please help with triaging this issue

I'm completely unfamiliar with this code, which @eliransin wrote, so I can't help triage it beyond saying that this is code which @eliransin wrote. If Eliran asks me, I can go debug this issue (by reading the code) just like anyone else can. Unassigning it from myself until Eliran decides to assign it to me.

@nyh nyh removed their assignment Jul 6, 2023
@mykaul mykaul added the P1 Urgent and symptom/ci stability labels Jul 9, 2023
@eliransin
Contributor

This is an instance of:
Tried to build a global schema for view ks.t_by_v2 with an uninitialized base info

This means that we have some views with uninitialized base info. Will need to investigate further.
Just for good order:
On operational systems it will not happen since it is caused by a call to on_internal_error.

I will investigate further how this can happen.

  1. Have a look at the coredump to see if I can understand from there the circumstances that led to this.
  2. Review the view-handling code to see if I can spot any race that could lead to this happening.

This is something that we should investigate, since views that don't have a base schema attached are considered
'read-only views'.

@bhalevy
Member Author

bhalevy commented Jul 9, 2023

On operational systems it will not happen since it is caused by a call to on_internal_error.

Don't build on that...

If !s->view_info()->base_info(), which is the case here, then without the internal error the code will just segfault a couple of statements below, in

s->view_info()->set_base_info(s->view_info()->make_base_dependent_view_info(*_base_schema));

@eliransin
Contributor

Don't build on that...

Obviously, this is a safeguard against ever registering an incomplete view schema.
IIUC - in the operational case it will cause an exception instead of an abort (no?)

I am not saying it is a great state to be in; however, it will only make the view impossible to write to,
not what we see (the whole chain starts in get_schema_for_write)

@eliransin
Contributor

@cvybhu please have a look in parallel with me. This is a high-priority issue.

@bhalevy
Member Author

bhalevy commented Jul 9, 2023

Don't build on that...

Obviously, this is a safeguard against ever registering an incomplete view schema. IIUC - in the operational case it will cause an exception instead of an abort (no?)

How come? *_base_schema will dereference a null pointer and segfault.

I am not saying it is a great state to be in; however, it will only make the view impossible to write to, not what we see (the whole chain starts in get_schema_for_write)

@DoronArazii DoronArazii removed the triage/master label Jul 9, 2023
@DoronArazii DoronArazii added this to the 5.4 milestone Jul 9, 2023
@eliransin
Contributor

The resulting schema entry in the registry is missing the base info:

(gdb) f
#4  0x0000000001e17697 in global_schema_ptr::global_schema_ptr (this=0x7f42b93d14d0, ptr=...) at schema/schema_registry.cc:331
331                 on_internal_error(slogger, format("Tried to build a global schema for view {}.{} with an uninitialized base info", s->ks_name(), s->cf_name()));
(gdb) info locals
s = {_p = 0x6180047f5408}
ensure_registry_entry = <optimized out>
(gdb) p *((schema*)0x6180047f5400)._view_info
$14 = {_schema = @0x6180047f5400, _raw = {_base_id = {id = fff88220-f9d5-11ed-81ba-e13a14d6d4e6}, _base_name = "t", _include_all_columns = true, _where_clause = "v2 IS NOT null AND id IS NOT null"}, _select_statement = {_b = 0x0, _p = 0x0}, 
  _partition_slice = std::optional<query::partition_slice> [no contained value], _base_info = {_p = 0x0}, _has_computed_column_depending_on_base_non_primary_key = false}

@eliransin
Contributor

How come? *_base_schema will dereference a null pointer and segfault.

/// Report an internal error
///
/// Depending on the value passed to set_abort_on_internal_error, this
/// will either abort or throw a std::runtime_error.
/// In both cases an error will be logged with \p logger, containing
/// \p reason and the current backtrace.
[[noreturn]] void on_internal_error(logger& logger, std::string_view reason);

@bhalevy It is going to throw an exception, so how would we ever get to the line you are referring to?
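
For reference, a minimal standalone sketch (mine, not code from the Scylla tree) of the behavior that doc comment describes: with abort-on-internal-error enabled (as in dtests, hence the "Aborting on shard 24." above) the call aborts, and with it disabled it logs the error plus a backtrace and throws std::runtime_error, so control never reaches any statement after it.

// Standalone demo; assumes a regular Seastar build environment.
#include <seastar/core/app-template.hh>
#include <seastar/core/future.hh>
#include <seastar/core/on_internal_error.hh>
#include <seastar/util/log.hh>
#include <iostream>
#include <stdexcept>

static seastar::logger demo_log("demo");

int main(int argc, char** argv) {
    seastar::app_template app;
    return app.run(argc, argv, [] {
        // Disable aborting: on_internal_error() now logs and throws instead.
        seastar::set_abort_on_internal_error(false);
        try {
            seastar::on_internal_error(demo_log,
                "Tried to build a global schema for a view with an uninitialized base info");
        } catch (const std::runtime_error& e) {
            std::cout << "caught: " << e.what() << std::endl;
        }
        return seastar::make_ready_future<>();
    });
}

So in the non-aborting (operational) configuration the offending write fails with an exception rather than ever reaching the set_base_info() call quoted earlier.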

@bhalevy
Member Author

bhalevy commented Jul 9, 2023

@bhalevy It is going to throw an exception, how would we ever get to the line you are referring?

Sorry, you're right.
I confused it with on_internal_error_noexcept, which carries on if it doesn't abort on internal error.

@eliransin
Contributor

(gdb) p *(((schema_registry_entry*)(0x60000483cb40))->_schema)->_view_info
$86 = {_schema = @0x600005c3d400, _raw = {_base_id = {id = fff88220-f9d5-11ed-81ba-e13a14d6d4e6}, _base_name = "t", _include_all_columns = true, _where_clause = "v2 IS NOT null AND id IS NOT null"}, _select_statement = {_b = 0x0, _p = 0x0}, 
  _partition_slice = std::optional<query::partition_slice> [no contained value], _base_info = {_p = 0x6000057df460}, _has_computed_column_depending_on_base_non_primary_key = false}
(gdb) p *(((schema_registry_entry*)(0x618003d7d2c0))->_schema)->_view_info
$87 = {_schema = @0x6180047f5400, _raw = {_base_id = {id = fff88220-f9d5-11ed-81ba-e13a14d6d4e6}, _base_name = "t", _include_all_columns = true, _where_clause = "v2 IS NOT null AND id IS NOT null"}, _select_statement = {_b = 0x0, _p = 0x0}, 
  _partition_slice = std::optional<query::partition_slice> [no contained value], _base_info = {_p = 0x0}, _has_computed_column_depending_on_base_non_primary_key = false}

This shows that the registry on shard 0 has a view with the base info, but shard 24's doesn't.
I will now investigate how the information propagates between the different shards.

@eliransin
Contributor

Let's see if the base table (ks.t) even exists on shard 24:

(gdb) scylla shard
Current shard is 24
(gdb) scylla tables
   24 {id = 600e9780-f9d6-11ed-817e-1d014f6642ff} v={id = 3fa25385-70f9-3109-b072-c4acda53d693} "ks"."t_by_v"                                 (replica::table*)0x618005108000
   24 {id = fff88220-f9d5-11ed-81ba-e13a14d6d4e6} v={id = ad3136ba-6d2d-30e5-9057-dd4cfd12da61} "ks"."t"                                      (replica::table*)0x618004f78000
   24 {id = 5a1ff267-ace0-3f12-8563-cfae6103c65e} v={id = bdec57a3-b234-334b-be9c-7b1f33113995} "system"."sstable_activity"                   (replica::table*)0x618003c78000
   24 {id = 618f817b-005f-3678-b8a4-53f3930b8e86} v={id = 6d4dbcff-f05b-3dc3-95ad-f79f7b10504d} "system"."size_estimates"                     (replica::table*)0x618004200000
   24 {id = b4dbb7b4-dc49-3fb5-b3bf-ce6e434832ca} v={id = 25cae56a-8d75-39f3-a146-3756ab4981c7} "system"."compaction_history"                 (replica::table*)0x618003c70000
   24 {id = afddfb9d-bc1e-3068-8056-eed6c302ba09} v={id = b6240810-eeb7-36d5-9411-43b2d68dddab} "system_schema"."tables"                      (replica::table*)0x6180043f0000
   24 {id = 0bcaffd4-0c83-3ead-ad13-dc1d5015b77c} v={id = 0aa4d3a2-ed95-3ecd-aba0-cd75622ad290} "system"."cdc_local"                          (replica::table*)0x618003cd8000
   24 {id = 4b3c50a9-ea87-3d76-9101-6dbc9c38494a} v={id = a28aa0b3-6def-30a7-9fbe-ce78b3f3c9b9} "system"."built_views"                        (replica::table*)0x618003cb0000
   24 {id = 8e097009-c753-3518-a0ec-217f75f5dffa} v={id = 042dd699-0b97-31f0-b1b9-e3a79ae2826e} "system"."repair_history"                     (replica::table*)0x618003f80000
   24 {id = ead8bbc5-f146-3ae1-9f71-0b11f9a1d296} v={id = 3d66a862-1acb-3c04-b832-2abc90fd9b13} "system"."large_cells"                        (replica::table*)0x618004250000
   24 {id = abac5682-dea6-31c5-b535-b3d6cffd0fb6} v={id = e79ca8ba-6556-3f7d-925a-7f20cf57938c} "system_schema"."keyspaces"                   (replica::table*)0x6180043e0000
   24 {id = fa0ea2bd-608f-3e74-9b1e-b84b46b33adf} v={id = 73c8abde-708f-3e5c-a1b5-52c823cb7b33} "system_schema"."scylla_keyspaces"            (replica::table*)0x618004420000
   24 {id = 9f5c6374-d485-3229-9a0a-5094af9ad1e3} v={id = bbb3743b-351f-3023-b4fc-09a9be37d529} "system"."IndexInfo"                          (replica::table*)0x618004280000
   24 {id = 0ebf001c-c1d1-3693-9a63-c3d96ac53318} v={id = 18d3fcf6-cf49-3114-8dec-b49847c557f8} "system_traces"."sessions_time_idx"           (replica::table*)0x618004ba8000
   24 {id = 0290003c-977e-397c-ac3e-fdfdc01d626b} v={id = e2a2e804-49e4-3597-9f16-39fd9475835c} "system"."batchlog"                           (replica::table*)0x6180042a0000
   24 {id = 2666e205-73ef-38b3-90fe-fecf96e8f0c7} v={id = 0b74fdd1-e96d-309e-a14e-a5bcd7ac885d} "system"."hints"                              (replica::table*)0x618004288000
   24 {id = 38c19fd0-fb86-3310-a4b7-0d0cc66628aa} v={id = 6da9a85c-7ae0-3917-aead-54a2f65a57a8} "system"."truncated"                          (replica::table*)0x618003cd0000
   24 {id = 2972ec7f-fb20-38dd-aac1-d876f2e3fcbd} v={id = 5f0b407d-eedf-3845-a48e-dbb9673d10e1} "system"."scylla_local"                       (replica::table*)0x618004258000
   24 {id = a04c7bfd-1e13-36c9-a44d-f22da352281d} v={id = e3fb736c-6956-3990-a31d-9a482279e3fc} "system"."scylla_views_builds_in_progress"    (replica::table*)0x618003cb8000
   24 {id = 33211d19-46b4-3e9c-a90a-6a9f87f1e3d0} v={id = 876a85ee-f7f9-3f44-a78e-9b1d4e11b023} "system"."token_ring"                         (replica::table*)0x618003d68000
   24 {id = 0191a53e-40f0-31d4-9171-b0d19ffb17b4} v={id = 2cc890b8-9ade-3afc-ab63-043c8017608e} "system"."scylla_table_schema_history"        (replica::table*)0x618004270000
   24 {id = 4df70b66-6b05-3251-95a1-32b54005fd48} v={id = 582d7071-1ef0-37c8-adc6-471a13636139} "system_schema"."triggers"                    (replica::table*)0x6180041a8000
   24 {id = 8a7fe624-96b0-34b1-b90e-f71bddcdd2d3} v={id = 04fa9920-9369-3a96-be39-6dd9fdc816b6} "system"."large_partitions"                   (replica::table*)0x618004238000
   24 {id = b7b7f0c2-fd0a-3410-8c05-3ef614bb7c2d} v={id = 6dd372de-72fb-3e1b-9b8c-f97738a67fe9} "system"."paxos"                              (replica::table*)0x6180042b0000
   24 {id = 59dfeaea-8db2-3341-91ef-109974d81484} v={id = 7f874a05-72b8-3c21-9acb-baa164fc351a} "system"."peer_events"                        (replica::table*)0x618003c50000
   24 {id = 7ad54392-bcdd-35a6-8417-4e047860b377} v={id = 7fa82c2e-5b67-37dd-8e5e-2079e18f1536} "system"."local"                              (replica::table*)0x618003c40000
   24 {id = 40550f66-0858-39a0-9430-f27fc08034e9} v={id = aee3acb0-7926-317b-848e-db7bc3721695} "system"."large_rows"                         (replica::table*)0x618004240000
   24 {id = 9786ac1c-dd58-3201-a7cd-ad556410c985} v={id = 5b58bb47-96e7-3f57-accf-0bfca4dbbc6e} "system_schema"."views"                       (replica::table*)0x6180041b8000
   24 {id = 37f71aca-7dc2-383b-a706-72528af04d4f} v={id = f6f6871f-8c86-3eca-ac0b-2a2e848e395d} "system"."peers"                              (replica::table*)0x618003c48000
   24 {id = 6b8c7359-a843-33f2-a1d8-5dc6a187436f} v={id = 9c33b17d-1e73-331d-be51-1c3eb37fe6c3} "system_auth"."role_attributes"               (replica::table*)0x618004a70000
   24 {id = 55d76438-4e55-3f8b-9f6e-676d4af3976d} v={id = dd15a078-409b-350d-9bef-c5f3520832d8} "system"."range_xfers"                        (replica::table*)0x618003c58000
   24 {id = 55080ab0-5d9c-3886-90a4-acb25fe1f77b} v={id = 540263f1-40db-3869-8a38-3baadedc222d} "system"."compactions_in_progress"            (replica::table*)0x618003c60000
   24 {id = 3e9372bc-f440-3892-899e-7377c6584b44} v={id = 1c654a28-fa53-3446-bfb4-99803d604b46} "system"."config"                             (replica::table*)0x618003d70000
   24 {id = 9504b32b-a121-32b5-bc2d-fee5bc149f15} v={id = 7d629a33-3ae5-38c2-9985-3d34de772679} "system"."protocol_servers"                   (replica::table*)0x618003da0000
   24 {id = 4a9392ae-1937-39f6-a263-01568fd6a3f6} v={id = 24ef45f6-e8d5-3506-897b-66f102ea0cd2} "system"."snapshots"                          (replica::table*)0x618003d88000
   24 {id = 96489b79-80be-3e14-a701-66a0b9159450} v={id = 329ed804-55b3-3eee-ad61-d85317b96097} "system_schema"."functions"                   (replica::table*)0x6180041d0000
   24 {id = 8b5611ad-b90c-3883-855a-bcc6ddc54f33} v={id = 6af1527e-249e-3bd1-9346-00886d88df34} "system"."versions"                           (replica::table*)0x618003d90000
   24 {id = 8826e8e9-e16a-3728-8753-3bc1fc713c25} v={id = cb5fa493-8404-37b0-9f4d-6f71249f549b} "system_traces"."events"                      (replica::table*)0x618004a78000
   24 {id = 92b4e363-0578-33ea-9bca-e9fcb9dd98bb} v={id = 6fcecce8-f2ff-393c-9862-e95a81c68465} "system"."raft_state"                         (replica::table*)0x618003d98000
   24 {id = c999823b-87bc-36d9-a378-a90ffab08f00} v={id = 367546c6-8904-3783-84e5-f31c1f00bbe0} "system"."runtime_info"                       (replica::table*)0x618004058000
   24 {id = f9706768-aa1e-3d87-9e5c-51a3927c2870} v={id = 01e49b76-4d62-3c7a-b190-d65932b739fa} "system_traces"."node_slow_log_time_idx"      (replica::table*)0x618004ae8000
   24 {id = ca0f635d-8630-3609-8d93-a1fc06f2a5e5} v={id = 51273ebf-e6f9-3cd5-b455-ea09ad29796f} "system"."clients"                            (replica::table*)0x618004098000
   24 {id = b7f2c108-78cd-3c80-9cd5-d609b2bd149c} v={id = b905cce5-1474-3085-819a-7592453e2fb9} "system"."views_builds_in_progress"           (replica::table*)0x618003ca0000
   24 {id = 234d2227-dd63-3d37-ac5f-c013e2ea9e6e} v={id = f26c2f02-4110-35f8-b311-58e187fd9aef} "system_distributed_everywhere"."cdc_generation_descriptions_v2" (replica::table*)0x618004bb8000
   24 {id = fb70ea0a-1bf9-3772-a5ad-26960611b035} v={id = 67d729c6-31a8-3b49-8c7f-600590c0189c} "system"."cluster_status"                     (replica::table*)0x6180040a0000
   24 {id = 08843b63-45dc-3be2-9798-a0418295cfaa} v={id = c777531c-15f7-326f-8ebe-39fd0265c8c9} "system_schema"."view_virtual_columns"        (replica::table*)0x618004410000
   24 {id = 5e7583b5-f3f4-3af1-9a39-b7e1d6f5f11f} v={id = 7426bc6c-4c2f-3200-8ad8-4329610ed59a} "system_schema"."dropped_columns"             (replica::table*)0x6180041a0000
   24 {id = 5a8b1ca8-6602-3f77-a045-9273d308917a} v={id = de51b2ce-5e4d-3b7d-a75f-2204332ce8d1} "system_schema"."types"                       (replica::table*)0x6180041c0000
   24 {id = 5bc52802-de25-35ed-aeab-188eecebb090} v={id = 328933ee-e0fd-3d09-8f5b-697b08b4a351} "system_auth"."roles"                         (replica::table*)0x618004a68000
   24 {id = 924c5587-2e3a-345b-b10c-12f37c1ba895} v={id = 4b53e92c-0368-3d5c-b959-2ec1bfd1a59f} "system_schema"."aggregates"                  (replica::table*)0x6180041d8000
   24 {id = 0feb57ac-311f-382f-ba6d-9024d305702f} v={id = 99c40462-8687-304e-abe3-2bdbef1f25aa} "system_schema"."indexes"                     (replica::table*)0x618004408000
   24 {id = 08d8cb28-9202-3371-a968-ad926e0fdc37} v={id = 550a8c78-fb67-33d5-81e1-a6e00afc723c} "system_schema"."scylla_aggregates"           (replica::table*)0x618004430000
   24 {id = cc7c7069-3740-33c1-92a4-c3de78dbd2c4} v={id = 2b8c4439-de76-31e0-807f-3b7290a975d7} "system_schema"."computed_columns"            (replica::table*)0x618004418000
   24 {id = 0bf73fd7-65b2-36b0-85e5-658131d5df36} v={id = beaac7bd-dc89-3bc1-8388-97f2e24b9064} "system_distributed"."cdc_streams_descriptions_v2" (replica::table*)0x618004ba0000
   24 {id = 0ecdaa87-f8fb-3e60-88d1-74fb36fe5c0d} v={id = 6efaf78e-4b42-30ff-8761-64f2a5dfbde5} "system_auth"."role_members"                  (replica::table*)0x618004bb0000
   24 {id = 5582b59f-8e4e-35e1-b913-3acada51eb04} v={id = 21ceaf9f-c2dd-3276-aeb3-b56343ad17f1} "system_distributed"."view_build_status"      (replica::table*)0x618004a60000
   24 {id = b8c556bd-212d-37ad-9484-690c73a5994b} v={id = 61a37990-ef28-3e16-a341-77d732a9b6b7} "system_distributed"."service_levels"         (replica::table*)0x618004c00000
   24 {id = bfcc4e62-5b63-3aa1-a1c3-6f5e47f3325c} v={id = 2d7ee9e1-a2c0-3a84-888b-209f6193931d} "system_traces"."node_slow_log"               (replica::table*)0x618004c08000
   24 {id = fdf455c4-cfec-3e00-9719-d7a45436c89d} v={id = c912c0ed-a96a-3f5f-9325-6f8b3a6c0dc8} "system_distributed"."cdc_generation_timestamps" (replica::table*)0x618004b20000
   24 {id = 24101c25-a2ae-3af7-87c1-b40ee1aca33f} v={id = d33236d4-9bdd-3c09-abf0-a0bc5edc2526} "system_schema"."columns"                     (replica::table*)0x618004198000
   24 {id = 5d912ff1-f759-3665-b2c8-8042ab5103dd} v={id = 38e5e56b-155d-39b5-a0d2-8e5dcad42a26} "system_schema"."scylla_tables"               (replica::table*)0x6180043f8000
   24 {id = c5e99f16-8677-3914-b17e-960613512345} v={id = 28fb4f6c-f8f2-3fa0-a550-fc4896f20a49} "system_traces"."sessions"                    (replica::table*)0x618004ae0000

It does.
But interestingly enough, the actual view we are trying to mutate doesn't:

(gdb) up
#4  0x0000000001e17697 in global_schema_ptr::global_schema_ptr (this=0x7f42b93d14d0, ptr=...) at schema/schema_registry.cc:331
331                 on_internal_error(slogger, format("Tried to build a global schema for view {}.{} with an uninitialized base info", s->ks_name(), s->cf_name()));
(gdb) scylla schema s
(schema*) 0x6180047f5400 ks="ks" cf="t_by_v2" id={id = 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6} version={id = ebf7abfb-7778-3d27-9f58-c47c2d5af4c6}

Let's check shard 0:

(gdb) scylla shard 0
Switched to thread 60
(gdb) scylla tables
    0 {id = 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6} v={id = ebf7abfb-7778-3d27-9f58-c47c2d5af4c6} "ks"."t_by_v2"                                (replica::table*)0x6000058e8000
    0 {id = 600e9780-f9d6-11ed-817e-1d014f6642ff} v={id = 3fa25385-70f9-3109-b072-c4acda53d693} "ks"."t_by_v"                                 (replica::table*)0x6000059f8000
    0 {id = fff88220-f9d5-11ed-81ba-e13a14d6d4e6} v={id = ad3136ba-6d2d-30e5-9057-dd4cfd12da61} "ks"."t"                                      (replica::table*)0x600005cd0000
    0 {id = 5a1ff267-ace0-3f12-8563-cfae6103c65e} v={id = bdec57a3-b234-334b-be9c-7b1f33113995} "system"."sstable_activity"                   (replica::table*)0x600000ec8000
    0 {id = 618f817b-005f-3678-b8a4-53f3930b8e86} v={id = 6d4dbcff-f05b-3dc3-95ad-f79f7b10504d} "system"."size_estimates"                     (replica::table*)0x600000ef8000
    0 {id = b4dbb7b4-dc49-3fb5-b3bf-ce6e434832ca} v={id = 25cae56a-8d75-39f3-a146-3756ab4981c7} "system"."compaction_history"                 (replica::table*)0x600000ec0000
    0 {id = afddfb9d-bc1e-3068-8056-eed6c302ba09} v={id = b6240810-eeb7-36d5-9411-43b2d68dddab} "system_schema"."tables"                      (replica::table*)0x6000049e0000
    0 {id = 0bcaffd4-0c83-3ead-ad13-dc1d5015b77c} v={id = 0aa4d3a2-ed95-3ecd-aba0-cd75622ad290} "system"."cdc_local"                          (replica::table*)0x600000410000
    0 {id = 4b3c50a9-ea87-3d76-9101-6dbc9c38494a} v={id = a28aa0b3-6def-30a7-9fbe-ce78b3f3c9b9} "system"."built_views"                        (replica::table*)0x600000ff0000
    0 {id = 8e097009-c753-3518-a0ec-217f75f5dffa} v={id = 042dd699-0b97-31f0-b1b9-e3a79ae2826e} "system"."repair_history"                     (replica::table*)0x600000f48000
    0 {id = ead8bbc5-f146-3ae1-9f71-0b11f9a1d296} v={id = 3d66a862-1acb-3c04-b832-2abc90fd9b13} "system"."large_cells"                        (replica::table*)0x600000f20000
    0 {id = abac5682-dea6-31c5-b535-b3d6cffd0fb6} v={id = e79ca8ba-6556-3f7d-925a-7f20cf57938c} "system_schema"."keyspaces"                   (replica::table*)0x600004238000
    0 {id = fa0ea2bd-608f-3e74-9b1e-b84b46b33adf} v={id = 73c8abde-708f-3e5c-a1b5-52c823cb7b33} "system_schema"."scylla_keyspaces"            (replica::table*)0x600004a50000
    0 {id = 9f5c6374-d485-3229-9a0a-5094af9ad1e3} v={id = bbb3743b-351f-3023-b4fc-09a9be37d529} "system"."IndexInfo"                          (replica::table*)0x600000e58000
    0 {id = 0ebf001c-c1d1-3693-9a63-c3d96ac53318} v={id = 18d3fcf6-cf49-3114-8dec-b49847c557f8} "system_traces"."sessions_time_idx"           (replica::table*)0x600004f80000
    0 {id = 0290003c-977e-397c-ac3e-fdfdc01d626b} v={id = e2a2e804-49e4-3597-9f16-39fd9475835c} "system"."batchlog"                           (replica::table*)0x600000e70000
    0 {id = 2666e205-73ef-38b3-90fe-fecf96e8f0c7} v={id = 0b74fdd1-e96d-309e-a14e-a5bcd7ac885d} "system"."hints"                              (replica::table*)0x600000e60000
    0 {id = 38c19fd0-fb86-3310-a4b7-0d0cc66628aa} v={id = 6da9a85c-7ae0-3917-aead-54a2f65a57a8} "system"."truncated"                          (replica::table*)0x600000408000
    0 {id = 2972ec7f-fb20-38dd-aac1-d876f2e3fcbd} v={id = 5f0b407d-eedf-3845-a48e-dbb9673d10e1} "system"."scylla_local"                       (replica::table*)0x600000f28000
    0 {id = a04c7bfd-1e13-36c9-a44d-f22da352281d} v={id = e3fb736c-6956-3990-a31d-9a482279e3fc} "system"."scylla_views_builds_in_progress"    (replica::table*)0x600000400000
    0 {id = 33211d19-46b4-3e9c-a90a-6a9f87f1e3d0} v={id = 876a85ee-f7f9-3f44-a78e-9b1d4e11b023} "system"."token_ring"                         (replica::table*)0x600000498000
    0 {id = 0191a53e-40f0-31d4-9171-b0d19ffb17b4} v={id = 2cc890b8-9ade-3afc-ab63-043c8017608e} "system"."scylla_table_schema_history"        (replica::table*)0x600000f40000
    0 {id = 4df70b66-6b05-3251-95a1-32b54005fd48} v={id = 582d7071-1ef0-37c8-adc6-471a13636139} "system_schema"."triggers"                    (replica::table*)0x6000042d8000
    0 {id = 8a7fe624-96b0-34b1-b90e-f71bddcdd2d3} v={id = 04fa9920-9369-3a96-be39-6dd9fdc816b6} "system"."large_partitions"                   (replica::table*)0x600000f00000
    0 {id = b7b7f0c2-fd0a-3410-8c05-3ef614bb7c2d} v={id = 6dd372de-72fb-3e1b-9b8c-f97738a67fe9} "system"."paxos"                              (replica::table*)0x600000e80000
    0 {id = 59dfeaea-8db2-3341-91ef-109974d81484} v={id = 7f874a05-72b8-3c21-9acb-baa164fc351a} "system"."peer_events"                        (replica::table*)0x600000ea8000
    0 {id = 7ad54392-bcdd-35a6-8417-4e047860b377} v={id = 7fa82c2e-5b67-37dd-8e5e-2079e18f1536} "system"."local"                              (replica::table*)0x600000e90000
    0 {id = 40550f66-0858-39a0-9430-f27fc08034e9} v={id = aee3acb0-7926-317b-848e-db7bc3721695} "system"."large_rows"                         (replica::table*)0x600000f10000
    0 {id = 9786ac1c-dd58-3201-a7cd-ad556410c985} v={id = 5b58bb47-96e7-3f57-accf-0bfca4dbbc6e} "system_schema"."views"                       (replica::table*)0x6000042f0000
    0 {id = 37f71aca-7dc2-383b-a706-72528af04d4f} v={id = f6f6871f-8c86-3eca-ac0b-2a2e848e395d} "system"."peers"                              (replica::table*)0x600000e98000
    0 {id = 6b8c7359-a843-33f2-a1d8-5dc6a187436f} v={id = 9c33b17d-1e73-331d-be51-1c3eb37fe6c3} "system_auth"."role_attributes"               (replica::table*)0x600004e28000
    0 {id = 55d76438-4e55-3f8b-9f6e-676d4af3976d} v={id = dd15a078-409b-350d-9bef-c5f3520832d8} "system"."range_xfers"                        (replica::table*)0x600000eb0000
    0 {id = 55080ab0-5d9c-3886-90a4-acb25fe1f77b} v={id = 540263f1-40db-3869-8a38-3baadedc222d} "system"."compactions_in_progress"            (replica::table*)0x600000eb8000
    0 {id = 3e9372bc-f440-3892-899e-7377c6584b44} v={id = 1c654a28-fa53-3446-bfb4-99803d604b46} "system"."config"                             (replica::table*)0x6000004a8000
    0 {id = 9504b32b-a121-32b5-bc2d-fee5bc149f15} v={id = 7d629a33-3ae5-38c2-9985-3d34de772679} "system"."protocol_servers"                   (replica::table*)0x6000004e0000
    0 {id = 4a9392ae-1937-39f6-a263-01568fd6a3f6} v={id = 24ef45f6-e8d5-3506-897b-66f102ea0cd2} "system"."snapshots"                          (replica::table*)0x6000004c0000
    0 {id = 96489b79-80be-3e14-a701-66a0b9159450} v={id = 329ed804-55b3-3eee-ad61-d85317b96097} "system_schema"."functions"                   (replica::table*)0x600004a00000
    0 {id = 8b5611ad-b90c-3883-855a-bcc6ddc54f33} v={id = 6af1527e-249e-3bd1-9346-00886d88df34} "system"."versions"                           (replica::table*)0x6000004c8000
    0 {id = 8826e8e9-e16a-3728-8753-3bc1fc713c25} v={id = cb5fa493-8404-37b0-9f4d-6f71249f549b} "system_traces"."events"                      (replica::table*)0x600004e30000
    0 {id = 92b4e363-0578-33ea-9bca-e9fcb9dd98bb} v={id = 6fcecce8-f2ff-393c-9862-e95a81c68465} "system"."raft_state"                         (replica::table*)0x6000004d0000
    0 {id = c999823b-87bc-36d9-a378-a90ffab08f00} v={id = 367546c6-8904-3783-84e5-f31c1f00bbe0} "system"."runtime_info"                       (replica::table*)0x600000590000
    0 {id = f9706768-aa1e-3d87-9e5c-51a3927c2870} v={id = 01e49b76-4d62-3c7a-b190-d65932b739fa} "system_traces"."node_slow_log_time_idx"      (replica::table*)0x600004e40000
    0 {id = ca0f635d-8630-3609-8d93-a1fc06f2a5e5} v={id = 51273ebf-e6f9-3cd5-b455-ea09ad29796f} "system"."clients"                            (replica::table*)0x6000005c8000
    0 {id = b7f2c108-78cd-3c80-9cd5-d609b2bd149c} v={id = b905cce5-1474-3085-819a-7592453e2fb9} "system"."views_builds_in_progress"           (replica::table*)0x600000fe8000
    0 {id = 234d2227-dd63-3d37-ac5f-c013e2ea9e6e} v={id = f26c2f02-4110-35f8-b311-58e187fd9aef} "system_distributed_everywhere"."cdc_generation_descriptions_v2" (replica::table*)0x600004f90000
    0 {id = fb70ea0a-1bf9-3772-a5ad-26960611b035} v={id = 67d729c6-31a8-3b49-8c7f-600590c0189c} "system"."cluster_status"                     (replica::table*)0x6000005d0000
    0 {id = 08843b63-45dc-3be2-9798-a0418295cfaa} v={id = c777531c-15f7-326f-8ebe-39fd0265c8c9} "system_schema"."view_virtual_columns"        (replica::table*)0x600004a38000
    0 {id = 5e7583b5-f3f4-3af1-9a39-b7e1d6f5f11f} v={id = 7426bc6c-4c2f-3200-8ad8-4329610ed59a} "system_schema"."dropped_columns"             (replica::table*)0x6000042d0000
    0 {id = 5a8b1ca8-6602-3f77-a045-9273d308917a} v={id = de51b2ce-5e4d-3b7d-a75f-2204332ce8d1} "system_schema"."types"                       (replica::table*)0x6000042f8000
    0 {id = 5bc52802-de25-35ed-aeab-188eecebb090} v={id = 328933ee-e0fd-3d09-8f5b-697b08b4a351} "system_auth"."roles"                         (replica::table*)0x600004e20000
    0 {id = 924c5587-2e3a-345b-b10c-12f37c1ba895} v={id = 4b53e92c-0368-3d5c-b959-2ec1bfd1a59f} "system_schema"."aggregates"                  (replica::table*)0x600004a08000
    0 {id = 0feb57ac-311f-382f-ba6d-9024d305702f} v={id = 99c40462-8687-304e-abe3-2bdbef1f25aa} "system_schema"."indexes"                     (replica::table*)0x600004a30000
    0 {id = 08d8cb28-9202-3371-a968-ad926e0fdc37} v={id = 550a8c78-fb67-33d5-81e1-a6e00afc723c} "system_schema"."scylla_aggregates"           (replica::table*)0x600004a58000
    0 {id = cc7c7069-3740-33c1-92a4-c3de78dbd2c4} v={id = 2b8c4439-de76-31e0-807f-3b7290a975d7} "system_schema"."computed_columns"            (replica::table*)0x600004a48000
    0 {id = 0bf73fd7-65b2-36b0-85e5-658131d5df36} v={id = beaac7bd-dc89-3bc1-8388-97f2e24b9064} "system_distributed"."cdc_streams_descriptions_v2" (replica::table*)0x600004d88000
    0 {id = 0ecdaa87-f8fb-3e60-88d1-74fb36fe5c0d} v={id = 6efaf78e-4b42-30ff-8761-64f2a5dfbde5} "system_auth"."role_members"                  (replica::table*)0x600004f88000
    0 {id = 5582b59f-8e4e-35e1-b913-3acada51eb04} v={id = 21ceaf9f-c2dd-3276-aeb3-b56343ad17f1} "system_distributed"."view_build_status"      (replica::table*)0x600004f98000
    0 {id = b8c556bd-212d-37ad-9484-690c73a5994b} v={id = 61a37990-ef28-3e16-a341-77d732a9b6b7} "system_distributed"."service_levels"         (replica::table*)0x600004e38000
    0 {id = bfcc4e62-5b63-3aa1-a1c3-6f5e47f3325c} v={id = 2d7ee9e1-a2c0-3a84-888b-209f6193931d} "system_traces"."node_slow_log"               (replica::table*)0x600004b70000
    0 {id = fdf455c4-cfec-3e00-9719-d7a45436c89d} v={id = c912c0ed-a96a-3f5f-9325-6f8b3a6c0dc8} "system_distributed"."cdc_generation_timestamps" (replica::table*)0x600004e48000
    0 {id = 24101c25-a2ae-3af7-87c1-b40ee1aca33f} v={id = d33236d4-9bdd-3c09-abf0-a0bc5edc2526} "system_schema"."columns"                     (replica::table*)0x6000042c8000
    0 {id = 5d912ff1-f759-3665-b2c8-8042ab5103dd} v={id = 38e5e56b-155d-39b5-a0d2-8e5dcad42a26} "system_schema"."scylla_tables"               (replica::table*)0x6000049e8000
    0 {id = c5e99f16-8677-3914-b17e-960613512345} v={id = 28fb4f6c-f8f2-3fa0-a550-fc4896f20a49} "system_traces"."sessions"                    (replica::table*)0x600004b78000

The view does exist on shard 0, so shard 24 is probably a shard that still hasn't heard about the view - which causes this.

So the main problem is that we have at least one shard that hasn't heard about the view yet.
We probably pull the schema from somewhere and don't attach the view to its base table; I will dig further into what really happened on shard 24.

@eliransin
Contributor

So my first conclusion is that the internal error was doing its job, meaning preventing corruption of the schema registry (or at least further corruption of it).
Now we will need to figure out how this happened (didn't we send the old base along with the view? We probably should have, or at least we should have attached our newest representation of the base).
Some other thoughts about this process: it looks like we could do an internal pull (from another shard) before we try to pull the schema from the network.
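
Just to illustrate that last thought, a rough self-contained Seastar sketch with made-up names (resolve(), known_versions - this is not the actual schema_registry API): before falling back to a network schema pull, the shard first asks shard 0 whether it already holds the wanted version.

// Toy illustration of an "internal pull" from another shard before a network pull.
// Assumes plain Seastar; run with at least --smp 2.
#include <seastar/core/app-template.hh>
#include <seastar/core/coroutine.hh>
#include <seastar/core/future.hh>
#include <seastar/core/smp.hh>
#include <iostream>
#include <string>
#include <unordered_set>

// Toy per-shard set of known schema versions; below, only shard 0 gets populated.
static thread_local std::unordered_set<std::string> known_versions;

// Returns where this shard would have to get the definition from.
seastar::future<std::string> resolve(std::string version) {
    if (known_versions.count(version)) {
        co_return std::string("already known locally");
    }
    bool on_shard0 = co_await seastar::smp::submit_to(0, [v = std::move(version)] {
        return known_versions.count(v) != 0;
    });
    co_return on_shard0 ? std::string("internal pull from shard 0")
                        : std::string("schema pull over the network");
}

int main(int argc, char** argv) {
    seastar::app_template app;
    return app.run(argc, argv, [] () -> seastar::future<> {
        known_versions.insert("ebf7abfb-7778-3d27-9f58-c47c2d5af4c6"); // only shard 0 knows it
        auto where = co_await seastar::smp::submit_to(1, [] {
            return resolve("ebf7abfb-7778-3d27-9f58-c47c2d5af4c6");
        });
        std::cout << where << std::endl; // prints: internal pull from shard 0
    });
}

Whether such an internal pull would actually have avoided this particular failure depends on how the uninitialized base info came about, which is still being investigated below.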

@eliransin
Contributor

Some shards know about the MV and some don't:

  0 {id = 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6} v={id = ebf7abfb-7778-3d27-9f58-c47c2d5af4c6} "ks"."t_by_v2"                                (replica::table*)0x6000058e8000
    1 {id = 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6} v={id = ebf7abfb-7778-3d27-9f58-c47c2d5af4c6} "ks"."t_by_v2"                                (replica::table*)0x601004df8000
    2 {id = 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6} v={id = ebf7abfb-7778-3d27-9f58-c47c2d5af4c6} "ks"."t_by_v2"                                (replica::table*)0x602005128000
    3 {id = 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6} v={id = ebf7abfb-7778-3d27-9f58-c47c2d5af4c6} "ks"."t_by_v2"                                (replica::table*)0x6030051b8000
    5 {id = 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6} v={id = ebf7abfb-7778-3d27-9f58-c47c2d5af4c6} "ks"."t_by_v2"                                (replica::table*)0x606004df8000
    6 {id = 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6} v={id = ebf7abfb-7778-3d27-9f58-c47c2d5af4c6} "ks"."t_by_v2"                                (replica::table*)0x605005208000
    7 {id = 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6} v={id = ebf7abfb-7778-3d27-9f58-c47c2d5af4c6} "ks"."t_by_v2"                                (replica::table*)0x607005088000
    8 {id = 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6} v={id = ebf7abfb-7778-3d27-9f58-c47c2d5af4c6} "ks"."t_by_v2"                                (replica::table*)0x608005078000
    9 {id = 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6} v={id = ebf7abfb-7778-3d27-9f58-c47c2d5af4c6} "ks"."t_by_v2"                                (replica::table*)0x609005060000
   10 {id = 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6} v={id = ebf7abfb-7778-3d27-9f58-c47c2d5af4c6} "ks"."t_by_v2"                                (replica::table*)0x60a005088000
   11 {id = 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6} v={id = ebf7abfb-7778-3d27-9f58-c47c2d5af4c6} "ks"."t_by_v2"                                (replica::table*)0x60b005068000
   12 {id = 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6} v={id = ebf7abfb-7778-3d27-9f58-c47c2d5af4c6} "ks"."t_by_v2"                                (replica::table*)0x60c005030000
   14 {id = 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6} v={id = ebf7abfb-7778-3d27-9f58-c47c2d5af4c6} "ks"."t_by_v2"                                (replica::table*)0x60e0051c8000
   16 {id = 603cfa80-f9d6-11ed-81ba-e13a14d6d4e6} v={id = ebf7abfb-7778-3d27-9f58-c47c2d5af4c6} "ks"."t_by_v2"                                (replica::table*)0x610005068000

This is out of 32 shards.
Possible reproducer: see what happens when view creation is slow to propagate to some of the shards.
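
To make the failure mode concrete, a minimal standalone sketch (toy types, not the actual Scylla schema classes) of a shard that received the view schema but never attached the new base-table schema to it, tripping an "uninitialized base info" guard; the check and message shape are reconstructed from the error string and the schema_registry.cc frame in the backtraces, not copied from the source.

#include <optional>
#include <stdexcept>
#include <string>

// Toy model of a view schema: it may or may not have base info attached yet.
struct view_schema {
    std::string ks_name;
    std::string cf_name;
    std::optional<std::string> base_info; // set once the base table schema is attached
};

// Stand-in for the guard on the global schema path; the real code calls
// seastar::on_internal_error, modeled here as a throw.
void build_global_schema(const view_schema& v) {
    if (!v.base_info) {
        throw std::runtime_error("Tried to build a global schema for view " +
                                 v.ks_name + "." + v.cf_name +
                                 " with an uninitialized base info");
    }
    // ... would proceed to build/register the global schema here ...
}

int main() {
    view_schema on_shard_0{"ks", "t_by_v2", "base ks.t v=1511e4dd"}; // base attached
    view_schema on_shard_24{"ks", "t_by_v2", std::nullopt};          // base not attached yet
    build_global_schema(on_shard_0);   // fine
    try {
        build_global_schema(on_shard_24);
    } catch (const std::exception&) {
        // Reproduces the message shape seen in the logs above.
    }
}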

@nyh
Copy link
Contributor

nyh commented Jul 11, 2023

On operational systems it will not happen, since it is caused by a call to on_internal_error.

Don't build on that...

If !s->view_info()->base_info(), which is the case here, then without the internal error the code would just segfault a couple of statements below, in

s->view_info()->set_base_info(s->view_info()->make_base_dependent_view_info(*_base_schema));

I don't understand why the line you mentioned would cause a segfault, but in any case, the whole point of on_internal_error(), and why it exists, is that in release mode, when it doesn't crash the entire Scylla, it throws an exception - so you can never get "a few lines down" in the code.
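
A minimal sketch of that control flow, mimicking (not copying) seastar::on_internal_error: whether or not the abort flag is set, execution never continues past the call, so the segfault "a couple of statements below" cannot be reached.

#include <cstdio>
#include <cstdlib>
#include <stdexcept>
#include <string>

// Mimics seastar::on_internal_error: log, then either abort or throw.
// Either way, the caller never executes the statements after the call.
bool abort_on_internal_error = false;   // models the --abort-on-internal-error 1 flag

[[noreturn]] void on_internal_error(const std::string& msg) {
    std::fprintf(stderr, "internal error: %s\n", msg.c_str());
    if (abort_on_internal_error) {
        std::abort();                   // the clusters in the logs above run with the flag set
    }
    throw std::runtime_error(msg);      // release default: unwind instead of crashing
}

void make_global_schema(bool base_info_initialized) {
    if (!base_info_initialized) {
        on_internal_error("uninitialized base info");
    }
    std::puts("this line is only reached when base info is present");
}

int main() {
    try {
        make_global_schema(false);
    } catch (const std::exception&) {
        std::puts("caught: execution never got 'a few lines down'");
    }
    make_global_schema(true);
}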

@nyh
Copy link
Contributor

nyh commented Jul 11, 2023

So my first conclusion is that the internal error was doing its job, namely preventing corruption of the schema registry (or at least further corruption of it). Now we need to figure out how this happened (didn't we send the old base along with the view? We probably should have, or at least we should have attached our newest representation of the base). Another thought about this process: it looks like we could do an internal pull (from another shard) before we try to pull the schema from the network.

A guess: imagine that we add a view to a pre-existing table. This sends a new version of both the base table schema (it now has an added view) and the view schema to all other nodes. It then soon starts to "build" the pre-existing data and send view updates. If for some reason one of the shards on some node hasn't heard yet about a specific view version referenced in a mutation, then, as I think you said, it asks to "pull" this version. That gets it the view schema, which refers to the new version of the base table, but the shard doesn't have that base version yet - in the pull it only received the view schema it asked for, not the base. I think it needs to pull the base schema as well?
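
A minimal standalone sketch of that idea, with toy types and made-up names rather than the actual migration_manager pull path: when a pulled schema turns out to be a view whose base version is unknown locally, fetch that base version too before registering the view.

#include <cstdio>
#include <optional>
#include <string>
#include <unordered_map>

// Toy schema: a view carries the version of the base table it was built against.
struct schema_def {
    std::string name;
    bool is_view = false;
    std::string base_version; // only meaningful for views
};

using version = std::string;

// What this node already knows, and what the remote node can serve.
std::unordered_map<version, schema_def> local_registry;
std::unordered_map<version, schema_def> remote_node;

std::optional<schema_def> pull_from_remote(const version& v) {
    if (auto it = remote_node.find(v); it != remote_node.end()) {
        return it->second;
    }
    return std::nullopt;
}

// Proposed pull: if the pulled schema is a view and its base version is unknown
// locally, fetch the base first so the view can be attached to it.
bool pull_schema(const version& v) {
    auto s = pull_from_remote(v);
    if (!s) return false;
    if (s->is_view && !local_registry.count(s->base_version)) {
        auto base = pull_from_remote(s->base_version);
        if (!base) return false;               // cannot attach the view safely
        local_registry[s->base_version] = *base;
    }
    local_registry[v] = *s;                    // now the view has an initialized base
    return true;
}

int main() {
    // Version strings borrowed from the mview.users log lines above, purely as labels.
    remote_node["97b22269"] = {"mview.users", false, ""};
    remote_node["36e8e0aa"] = {"mview.users_by_first_name", true, "97b22269"};
    // The local node only asked for the view version, but the pull also brings the base.
    std::printf("pulled: %s\n", pull_schema("36e8e0aa") ? "ok" : "failed");
}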

@mykaul
Copy link
Contributor

mykaul commented Nov 5, 2023

@bhalevy
Copy link
Member Author

bhalevy commented Nov 6, 2023

@eliransin - could the crash @ https://jenkins.scylladb.com/job/scylla-5.4/job/rolling-upgrade/job/rolling-upgrade-ami-test/3/ be due to this issue?

@mykaul @eliransin it seems so

https://cloudius-jenkins-test.s3.amazonaws.com/b9cafe83-c77d-43f3-ae4a-b6185fff62f6/20231102_165854/db-cluster-b9cafe83.tar.gz
longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5/system.log:

Nov 02 16:27:03.825407 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 0:stre] schema_tables - Schema version changed to 134546be-1871-3005-8438-607482e7f0a4
Nov 02 16:27:04.499800 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 0:goss] rpc - client 10.4.1.150:7001: unknown verb exception 6 ignored

^^^ That node's gossiper was stopped via the API

Nov 02 16:27:08.177495 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 0:stat] migration_manager - Update table 'mview.users' From org.apache.cassandra.config.CFMetaData@0x60001e28a700[cfId=1ec5fd50-795f-11ee-9015-1e958eef90c3,ksName=mview,cfName=users,...
Nov 02 16:27:08.180447 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 0:stat] migration_manager - Update view 'mview.users_by_first_name' From org.apache.cassandra.config.CFMetaData@0x60002d8f5880[cfId=20439ca0-795f-11ee-9015-1e958eef90c3,ksName=mview,cfName=users_by_first_name,...
Nov 02 16:27:08.183552 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 0:stat] migration_manager - Update view 'mview.users_by_last_name' From org.apache.cassandra.config.CFMetaData@0x60001b34ca80[cfId=20db96e0-795f-11ee-9015-1e958eef90c3,ksName=mview,cfName=users_by_last_name,...
Nov 02 16:27:08.283523 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 0:stre] schema_tables - Altering mview.users id=1ec5fd50-795f-11ee-9015-1e958eef90c3 version=97b22269-4190-3ce4-a0de-b5f2e51513e5
Nov 02 16:27:08.292180 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 0:stre] schema_tables - Altering mview.users_by_first_name id=20439ca0-795f-11ee-9015-1e958eef90c3 version=36e8e0aa-ab50-33c1-a558-1cadcea01815
Nov 02 16:27:08.300962 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 0:stre] schema_tables - Altering mview.users_by_last_name id=20db96e0-795f-11ee-9015-1e958eef90c3 version=865cf74b-a6d2-3200-9f3c-c18d8911a3cf
Nov 02 16:27:08.314016 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 0:stre] query_processor - Column definitions for mview.users changed, invalidating related prepared statements
Nov 02 16:27:08.323230 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 7:stre] query_processor - Column definitions for mview.users changed, invalidating related prepared statements
Nov 02 16:27:08.323810 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 6:stre] query_processor - Column definitions for mview.users changed, invalidating related prepared statements
Nov 02 16:27:08.323992 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 1:stre] query_processor - Column definitions for mview.users changed, invalidating related prepared statements
Nov 02 16:27:08.324057 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 3:stre] query_processor - Column definitions for mview.users changed, invalidating related prepared statements
Nov 02 16:27:08.325011 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 5:stre] query_processor - Column definitions for mview.users changed, invalidating related prepared statements
Nov 02 16:27:08.325338 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 9:stre] query_processor - Column definitions for mview.users changed, invalidating related prepared statements
Nov 02 16:27:08.326615 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 2:stre] query_processor - Column definitions for mview.users changed, invalidating related prepared statements
Nov 02 16:27:08.327035 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 8:stre] query_processor - Column definitions for mview.users changed, invalidating related prepared statements
Nov 02 16:27:08.330561 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:  [shard 4:stat] schema_registry - Tried to build a global schema for view mview.users_by_last_name with an uninitialized base info, at: 0x5f987ae 0x5f98d70 0x5f99048 0x5a6aa07 0x1fbf09b 0x30f6d02 0x32fbd7f 0x138d51a 0x5a9bb0f 0x5a9cde7 0x5ac0833 0x5a6b74a /opt/scylladb/libreloc/libc.so.6+0x8c946 /opt/scylladb/libreloc/libc.so.6+0x11286f
                                                                                               --------
                                                                                               seastar::internal::coroutine_traits_base<void>::promise_type
                                                                                               --------
                                                                                               seastar::coroutine::all<seastar::future<void>, seastar::future<void> >::intermediate_task<0ul>
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]: Aborting on shard 4.
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]: Backtrace:
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:   0x5a8a258
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:   0x5ac0242
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:   /opt/scylladb/libreloc/libc.so.6+0x3dbaf
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:   /opt/scylladb/libreloc/libc.so.6+0x8e883
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:   /opt/scylladb/libreloc/libc.so.6+0x3dafd
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:   /opt/scylladb/libreloc/libc.so.6+0x2687e
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:   0x5a6aa87
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:   0x1fbf09b
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:   0x30f6d02
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:   0x32fbd7f
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:   0x138d51a
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:   0x5a9bb0f
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:   0x5a9cde7
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:   0x5ac0833
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:   0x5a6b74a
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:   /opt/scylladb/libreloc/libc.so.6+0x8c946
Nov 02 16:27:08.330694 longevity-tls-2tb-48h-1dis-2nondis--db-node-b9cafe83-5 scylla[6141]:   /opt/scylladb/libreloc/libc.so.6+0x11286f

Decoded:

[Backtrace #0]
void seastar::backtrace<seastar::backtrace_buffer::append_backtrace()::{lambda(seastar::frame)#1}>(seastar::backtrace_buffer::append_backtrace()::{lambda(seastar::frame)#1}&&) at ./build/release/seastar/./seastar/include/seastar/util/backtrace.hh:64
 (inlined by) seastar::backtrace_buffer::append_backtrace() at ./build/release/seastar/./seastar/src/core/reactor.cc:825
 (inlined by) seastar::print_with_backtrace(seastar::backtrace_buffer&, bool) at ./build/release/seastar/./seastar/src/core/reactor.cc:855
seastar::print_with_backtrace(char const*, bool) at ./build/release/seastar/./seastar/src/core/reactor.cc:867
 (inlined by) seastar::sigabrt_action() at ./build/release/seastar/./seastar/src/core/reactor.cc:4030
 (inlined by) operator() at ./build/release/seastar/./seastar/src/core/reactor.cc:4006
 (inlined by) __invoke at ./build/release/seastar/./seastar/src/core/reactor.cc:4002
/data/scylla-s3-reloc.cache/by-build-id/f59b16b61ac1631394eebf369635ef29f7918e60/extracted/scylla/libreloc/libc.so.6: ELF 64-bit LSB shared object, x86-64, version 1 (GNU/Linux), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, BuildID[sha1]=c9f62793b9e886eb1b95077d4f26fe2b4aa1ac25, for GNU/Linux 3.2.0, not stripped

__GI___sigaction at :?
__pthread_kill_implementation at ??:?
__GI_raise at :?
__GI_abort at :?
seastar::on_internal_error(seastar::logger&, std::basic_string_view<char, std::char_traits<char> >) at ./build/release/seastar/./seastar/src/core/on_internal_error.cc:57
global_schema_ptr at ./schema/schema_registry.cc:380
service::storage_proxy::mutate_locally(seastar::lw_shared_ptr<schema const> const&, frozen_mutation const&, tracing::trace_state_ptr, seastar::bool_class<db::force_sync_tag>, std::chrono::time_point<seastar::lowres_clock, std::chrono::duration<long, std::ratio<1l, 1000000000l> > >, seastar::smp_service_group, std::variant<std::monostate, db::per_partition_rate_limit::account_only, db::per_partition_rate_limit::account_and_enforce>) at ./service/storage_proxy.cc:2927
operator() at ./service/storage_proxy.cc:571
 (inlined by) operator() at ./service/storage_proxy.cc:497
std::__n4861::coroutine_handle<seastar::internal::coroutine_traits_base<void>::promise_type>::resume() const at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/coroutine:240
 (inlined by) seastar::internal::coroutine_traits_base<void>::promise_type::run_and_dispose() at ././seastar/include/seastar/core/coroutine.hh:125
seastar::reactor::run_tasks(seastar::reactor::task_queue&) at ./build/release/seastar/./seastar/src/core/reactor.cc:2651
 (inlined by) seastar::reactor::run_some_tasks() at ./build/release/seastar/./seastar/src/core/reactor.cc:3114
seastar::reactor::do_run() at ./build/release/seastar/./seastar/src/core/reactor.cc:3283
operator() at ./build/release/seastar/./seastar/src/core/reactor.cc:4496
 (inlined by) void std::__invoke_impl<void, seastar::smp::configure(seastar::smp_options const&, seastar::reactor_options const&)::$_0&>(std::__invoke_other, seastar::smp::configure(seastar::smp_options const&, seastar::reactor_options const&)::$_0&) at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/invoke.h:61
 (inlined by) std::enable_if<is_invocable_r_v<void, seastar::smp::configure(seastar::smp_options const&, seastar::reactor_options const&)::$_0&>, void>::type std::__invoke_r<void, seastar::smp::configure(seastar::smp_options const&, seastar::reactor_options const&)::$_0&>(seastar::smp::configure(seastar::smp_options const&, seastar::reactor_options const&)::$_0&) at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/invoke.h:111
 (inlined by) std::_Function_handler<void (), seastar::smp::configure(seastar::smp_options const&, seastar::reactor_options const&)::$_0>::_M_invoke(std::_Any_data const&) at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/std_function.h:290
std::function<void ()>::operator()() const at /usr/bin/../lib/gcc/x86_64-redhat-linux/13/../../../../include/c++/13/bits/std_function.h:591
 (inlined by) seastar::posix_thread::start_routine(void*) at ./build/release/seastar/./seastar/src/core/posix.cc:90

@mykaul mykaul modified the milestones: 6.0, 5.4 Nov 6, 2023
@eliransin
Copy link
Contributor

Yes, it's the same issue.
I will resume work on #14861

@enaydanov
Copy link
Contributor

Got the same error as #15235, but that was closed as a duplicate of this issue, so reporting it here:

2023-11-04T06:32:32.435+00:00 parallel-topology-schema-changes-mu-db-node-d8343c00-3      !ERR | scylla[6147]:  [shard 6:stmt] schema_registry - Tried to build a global schema for view keyspace1.sec_ind_c2_index with an uninitialized base info, at: 0x5fb3d3e 0x5fb4300 0x5fb45d8 0x5a7f5f7 0x1fb85ab 0x3127ca0 0x312744f 0x32718ce 0x324c524 0x3277063 0x179bda1 0x179a475 0x1799839 0x1798ed5 0x17959c1 0x1794d3e 0x5f96619 0x5f961e0 0x5f96c89 0x5ab08df 0x5ab1bb7 0x5ad5803 0x5a8033a /opt/scylladb/libreloc/libc.so.6+0x8c946 /opt/scylladb/libreloc/libc.so.6+0x11286f
2023-11-04 06:49:29.888 <2023-11-04 06:32:31.000>: (CoreDumpEvent Severity.ERROR) period_type=one-time event_id=bb61558f-577b-4fbf-ba59-c377662294b4 node=Node parallel-topology-schema-changes-mu-db-node-d8343c00-3 [54.74.160.119 | 10.4.8.254] (seed: True)
corefile_url=https://storage.cloud.google.com/upload.scylladb.com/core.scylla.112.ed05a280769b4043ab0b1a6fe25decc8.6147.1699079551000000/core.scylla.112.ed05a280769b4043ab0b1a6fe25decc8.6147.1699079551000000.gz
backtrace=           PID: 6147 (scylla)
UID: 112 (scylla)
GID: 118 (scylla)
Signal: 6 (ABRT)
Timestamp: Sat 2023-11-04 06:32:31 UTC (3min 7s ago)
Command Line: /usr/bin/scylla --blocked-reactor-notify-ms 25 --abort-on-lsa-bad-alloc 1 --abort-on-seastar-bad-alloc --abort-on-internal-error 1 --abort-on-ebadf 1 --enable-sstable-key-validation 1 --log-to-syslog 1 --log-to-stdout 0 --default-log-level info --network-stack posix --io-properties-file=/etc/scylla.d/io_properties.yaml --cpuset 1-7 --lock-memory=1
Executable: /opt/scylladb/libexec/scylla
Control Group: /scylla.slice/scylla-server.slice/scylla-server.service
Unit: scylla-server.service
Slice: scylla-server.slice
Boot ID: ed05a280769b4043ab0b1a6fe25decc8
Machine ID: 56f46567c67f4b20b85156250aa4e1db
Hostname: parallel-topology-schema-changes-mu-db-node-d8343c00-3
Storage: /var/lib/systemd/coredump/core.scylla.112.ed05a280769b4043ab0b1a6fe25decc8.6147.1699079551000000 (present)
Disk Size: 57.3G
Message: Process 6147 (scylla) of user 112 dumped core.
Found modules linux-vdso.so.1, ld.so, libc.so.6, libm.so.6, libgcc_s.so.1, libstdc++.so.6 and the usual Fedora 38 shared libraries linked by Scylla (openssl, zstd, xz, libcap, icu, libffi, libunistring, boost, thrift, the abseil-cpp libraries, jsoncpp, systemd, lua, libdeflate, xxhash, libxcrypt, snappy, zlib, yaml-cpp, hwloc, p11-kit, libidn2, libtasn1, nettle/hogweed, gmp, gnutls, lz4, libatomic, numactl, lksctp-tools, fmt, cryptopp, c-ares), each reported with its build-id and rpm metadata (osCpe: cpe:/o:fedoraproject:fedora:38).
Found module scylla with build-id: 420b5e5ef0c99c93991034185a1061ec90ab2c43
Stack trace of thread 6157:
#0  0x00007f553faf9884 __pthread_kill_implementation (libc.so.6 + 0x8e884)
#1  0x00007f553faa8afe raise (libc.so.6 + 0x3dafe)
#2  0x00007f553fa9187f abort (libc.so.6 + 0x2687f)
#3  0x0000000005a7f678 _ZN7seastar17on_internal_errorERNS_6loggerESt17basic_string_viewIcSt11char_traitsIcEE (scylla + 0x587f678)
#4  0x0000000001fb85ac _ZN17global_schema_ptrC1ERKN7seastar13lw_shared_ptrIK6schemaEE (scylla + 0x1db85ac)
#5  0x0000000003127ca1 _ZN7service13storage_proxy18query_result_localEN7seastar10shared_ptrIKN7locator25effective_replication_mapEEENS1_13lw_shared_ptrIK6schemaEENS7_IN5query12read_commandEEERK20nonwrapping_intervalIN3dht13ring_positionEENSB_14result_optionsEN7tracing15trace_state_ptrENSt6chrono10time_pointINS1_12lowres_clockENSN_8durationIlSt5ratioILl1ELl1000000000EEEEEESt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSY_19account_and_enforceEEE (scylla + 0x2f27ca1)
#6  0x0000000003127450 _ZN7service13storage_proxy25query_result_local_digestEN7seastar10shared_ptrIKN7locator25effective_replication_mapEEENS1_13lw_shared_ptrIK6schemaEENS7_IN5query12read_commandEEERK20nonwrapping_intervalIN3dht13ring_positionEEN7tracing15trace_state_ptrENSt6chrono10time_pointINS1_12lowres_clockENSM_8durationIlSt5ratioILl1ELl1000000000EEEEEENSB_16digest_algorithmESt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSY_19account_and_enforceEEE (scylla + 0x2f27450)
#7  0x00000000032718cf _ZN7service13storage_proxy6remote11handle_readIN7seastar3rpc5tupleIJN5query13result_digestEl17cache_temperatureN7replica17exception_variantESt8optionalI13full_positionEEEELNS1_9read_verbE2EEENS3_6futureIT_EERKNS4_11client_infoENS4_14opt_time_pointENS6_12read_commandE17wrapping_intervalIN3dht13ring_positionEENS4_8optionalINS6_16digest_algorithmEEENSS_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSY_19account_and_enforceEEEEENSS_INS_13fencing_tokenEEE (scylla + 0x30718cf)
#8  0x000000000324c525 _ZN7service13storage_proxy6remote18handle_read_digestERKN7seastar3rpc11client_infoENS3_14opt_time_pointEN5query12read_commandE17wrapping_intervalIN3dht13ring_positionEENS3_8optionalINS8_16digest_algorithmEEENSE_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSK_19account_and_enforceEEEEENSE_INS_13fencing_tokenEEE (scylla + 0x304c525)
#9  0x0000000003277064 _ZNSt17_Function_handlerIFN7seastar6futureINS0_3rpc5tupleIJN5query13result_digestEl17cache_temperatureN7replica17exception_variantESt8optionalI13full_positionEEEEEERKNS2_11client_infoENS2_14opt_time_pointENS4_12read_commandE17wrapping_intervalIN3dht13ring_positionEENS2_8optionalINS4_16digest_algorithmEEENSN_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENST_19account_and_enforceEEEEENSN_IN7service13fencing_tokenEEEESt11_Bind_frontIMNSY_13storage_proxy6remoteEFSD_SG_SH_SI_SM_SP_SX_S10_EJPS14_EEE9_M_invokeERKSt9_Any_dataSG_OSH_OSI_OSM_OSP_OSX_OS10_ (scylla + 0x3077064)
#10 0x000000000179bda2 _ZN7seastar3rpc5applyINS_6futureINS0_5tupleIJN5query13result_digestEl17cache_temperatureN7replica17exception_variantESt8optionalI13full_positionEEEEEEJNS4_12read_commandE17wrapping_intervalIN3dht13ring_positionEENS0_8optionalINS4_16digest_algorithmEEENSJ_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSP_19account_and_enforceEEEEENSJ_IN7service13fencing_tokenEEEENS0_19do_want_client_infoENS0_18do_want_time_pointESt8functionIFSD_RKNS0_11client_infoENS0_14opt_time_pointESE_SI_SL_ST_SW_EESt5tupleIJSE_SI_SL_ST_SW_EEEENS_8futurizeIT_E4typeERT3_RS10_S13_T1_T2_NS0_9signatureIFS19_DpT0_EEEOT4_ (scylla + 0x159bda2)
#11 0x000000000179a476 _ZZZZN7seastar3rpc11recv_helperIN4netw10serializerESt8functionIFNS_6futureINS0_5tupleIJN5query13result_digestEl17cache_temperatureN7replica17exception_variantESt8optionalI13full_positionEEEEEERKNS0_11client_infoENS0_14opt_time_pointENS7_12read_commandE17wrapping_intervalIN3dht13ring_positionEENS0_8optionalINS7_16digest_algorithmEEENSQ_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSW_19account_and_enforceEEEEENSQ_IN7service13fencing_tokenEEEEESG_JSL_SP_SS_S10_S13_ENS0_19do_want_client_infoENS0_18do_want_time_pointEEEDaNS0_9signatureIFT1_DpT2_EEEOT0_T3_T4_ENUlNS_10shared_ptrINS0_6server10connectionEEESC_INSt6chrono10time_pointINS_12lowres_clockENS1M_8durationIlSt5ratioILl1ELl1000000000EEEEEEElNS0_7rcv_bufEE_clES1L_S1U_lS1V_ENUlT_E_clINS_15semaphore_unitsINS_35semaphore_default_exception_factoryES1O_EEEEDaS1X_ENUlvE_clEv (scylla + 0x159a476)
#12 0x000000000179983a _ZZZN7seastar3rpc11recv_helperIN4netw10serializerESt8functionIFNS_6futureINS0_5tupleIJN5query13result_digestEl17cache_temperatureN7replica17exception_variantESt8optionalI13full_positionEEEEEERKNS0_11client_infoENS0_14opt_time_pointENS7_12read_commandE17wrapping_intervalIN3dht13ring_positionEENS0_8optionalINS7_16digest_algorithmEEENSQ_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSW_19account_and_enforceEEEEENSQ_IN7service13fencing_tokenEEEEESG_JSL_SP_SS_S10_S13_ENS0_19do_want_client_infoENS0_18do_want_time_pointEEEDaNS0_9signatureIFT1_DpT2_EEEOT0_T3_T4_ENUlNS_10shared_ptrINS0_6server10connectionEEESC_INSt6chrono10time_pointINS_12lowres_clockENS1M_8durationIlSt5ratioILl1ELl1000000000EEEEEEElNS0_7rcv_bufEE_clES1L_S1U_lS1V_ENUlT_E_clINS_15semaphore_unitsINS_35semaphore_default_exception_factoryES1O_EEEEDaS1X_ (scylla + 0x159983a)
#13 0x0000000001798ed6 _ZN7seastar6futureINS_15semaphore_unitsINS_35semaphore_default_exception_factoryENS_12lowres_clockEEEE9then_implIZZNS_3rpc11recv_helperIN4netw10serializerESt8functionIFNS0_INS7_5tupleIJN5query13result_digestEl17cache_temperatureN7replica17exception_variantESt8optionalI13full_positionEEEEEERKNS7_11client_infoENS7_14opt_time_pointENSD_12read_commandE17wrapping_intervalIN3dht13ring_positionEENS7_8optionalINSD_16digest_algorithmEEENSW_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENS12_19account_and_enforceEEEEENSW_IN7service13fencing_tokenEEEEESM_JSR_SV_SY_S16_S19_ENS7_19do_want_client_infoENS7_18do_want_time_pointEEEDaNS7_9signatureIFT1_DpT2_EEEOT0_T3_T4_ENUlNS_10shared_ptrINS7_6server10connectionEEESI_INSt6chrono10time_pointIS3_NS1S_8durationIlSt5ratioILl1ELl1000000000EEEEEEElNS7_7rcv_bufEE_clES1R_S1Z_lS20_EUlT_E_NS0_IvEEEES1K_OS22_ (scylla + 0x1598ed6)
#14 0x00000000017959c2 _ZZN7seastar3rpc11recv_helperIN4netw10serializerESt8functionIFNS_6futureINS0_5tupleIJN5query13result_digestEl17cache_temperatureN7replica17exception_variantESt8optionalI13full_positionEEEEEERKNS0_11client_infoENS0_14opt_time_pointENS7_12read_commandE17wrapping_intervalIN3dht13ring_positionEENS0_8optionalINS7_16digest_algorithmEEENSQ_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSW_19account_and_enforceEEEEENSQ_IN7service13fencing_tokenEEEEESG_JSL_SP_SS_S10_S13_ENS0_19do_want_client_infoENS0_18do_want_time_pointEEEDaNS0_9signatureIFT1_DpT2_EEEOT0_T3_T4_ENUlNS_10shared_ptrINS0_6server10connectionEEESC_INSt6chrono10time_pointINS_12lowres_clockENS1M_8durationIlSt5ratioILl1ELl1000000000EEEEEEElNS0_7rcv_bufEE_clES1L_S1U_lS1V_ (scylla + 0x15959c2)
#15 0x0000000001794d3f _ZNSt17_Function_handlerIFN7seastar6futureIvEENS0_10shared_ptrINS0_3rpc6server10connectionEEESt8optionalINSt6chrono10time_pointINS0_12lowres_clockENS9_8durationIlSt5ratioILl1ELl1000000000EEEEEEElNS4_7rcv_bufEEZNS4_11recv_helperIN4netw10serializerESt8functionIFNS1_INS4_5tupleIJN5query13result_digestEl17cache_temperatureN7replica17exception_variantES8_I13full_positionEEEEEERKNS4_11client_infoENS4_14opt_time_pointENSP_12read_commandE17wrapping_intervalIN3dht13ring_positionEENS4_8optionalINSP_16digest_algorithmEEENS17_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENS1D_19account_and_enforceEEEEENS17_IN7service13fencing_tokenEEEEESX_JS12_S16_S19_S1H_S1K_ENS4_19do_want_client_infoENS4_18do_want_time_pointEEEDaNS4_9signatureIFT1_DpT2_EEEOT0_T3_T4_EUlS7_SH_lSI_E_E9_M_invokeERKSt9_Any_dataOS7_OSH_OlOSI_ (scylla + 0x1594d3f)
#16 0x0000000005f9661a _ZZZZZZN7seastar3rpc6server10connection7processEvEN3$_0clEvENKUlvE_clEvENUlvE0_clEvENKUlSt5tupleIJSt8optionalImEmlS7_INS0_7rcv_bufEEEEE_clESB_ENUlvE_clEv (scylla + 0x5d9661a)
#17 0x0000000005f961e1 _ZZZZZN7seastar3rpc6server10connection7processEvEN3$_0clEvENKUlvE_clEvENUlvE0_clEvENKUlSt5tupleIJSt8optionalImEmlS7_INS0_7rcv_bufEEEEE_clESB_ (scylla + 0x5d961e1)
#18 0x0000000005f96c8a _ZN7seastar12continuationINS_8internal22promise_base_with_typeIvEEZZZZNS_3rpc6server10connection7processEvEN3$_0clEvENKUlvE_clEvENUlvE0_clEvEUlSt5tupleIJSt8optionalImEmlSB_INS4_7rcv_bufEEEEE_ZNS_6futureISF_E14then_impl_nrvoISG_NSH_IvEEEET0_OT_EUlOS3_RSG_ONS_12future_stateISF_EEE_SF_E15run_and_disposeEv (scylla + 0x5d96c8a)
#19 0x0000000005ab08e0 _ZN7seastar7reactor14run_some_tasksEv (scylla + 0x58b08e0)
#20 0x0000000005ab1bb8 _ZN7seastar7reactor6do_runEv (scylla + 0x58b1bb8)
#21 0x0000000005ad5804 _ZNSt17_Function_handlerIFvvEZN7seastar3smp9configureERKNS1_11smp_optionsERKNS1_15reactor_optionsEE3$_0E9_M_invokeERKSt9_Any_data (scylla + 0x58d5804)
#22 0x0000000005a8033b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x588033b)
#23 0x00007f553faf7947 start_thread (libc.so.6 + 0x8c947)
#24 0x00007f553fb7d870 __clone3 (libc.so.6 + 0x112870)
Stack traces of threads 6158-6164 (seastar thread_pool workers) are identical:
#0  0x00007f553fb6c0fa read (libc.so.6 + 0x1010fa)
#1  0x0000000005af8ba5 _ZN7seastar11thread_pool4workENS_13basic_sstringIcjLj15ELb1EEE (scylla + 0x58f8ba5)
#2  0x0000000005af8eb3 _ZNSt17_Function_handlerIFvvEZN7seastar11thread_poolC1ERNS1_7reactorENS1_13basic_sstringIcjLj15ELb1EEEE3$_0E9_M_invokeERKSt9_Any_data (scylla + 0x58f8eb3)
#3  0x0000000005a8033b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x588033b)
#4  0x00007f553faf7947 start_thread (libc.so.6 + 0x8c947)
#5  0x00007f553fb7d870 __clone3 (libc.so.6 + 0x112870)
Stack trace of thread 6154:
#0  0x0000000005f82553 _ZN7seastar3rpc10connection10send_entryERNS1_14outgoing_entryE (scylla + 0x5d82553)
#1  0x0000000005f8be80 _ZZN7seastar3rpc10connection4sendENS0_7snd_bufESt8optionalINSt6chrono10time_pointINS_12lowres_clockENS4_8durationIlSt5ratioILl1ELl1000000000EEEEEEEPNS0_11cancellableEEN3$_1clEv (scylla + 0x5d8be80)
#2  0x0000000005f83cda _ZN7seastar3rpc10connection4sendENS0_7snd_bufESt8optionalINSt6chrono10time_pointINS_12lowres_clockENS4_8durationIlSt5ratioILl1ELl1000000000EEEEEEEPNS0_11cancellableE (scylla + 0x5d83cda)
#3  0x0000000005f85364 _ZN7seastar3rpc6client7requestEmlNS0_7snd_bufESt8optionalINSt6chrono10time_pointINS_12lowres_clockENS4_8durationIlSt5ratioILl1ELl1000000000EEEEEEEPNS0_11cancellableE (scylla + 0x5d85364)
#4  0x00000000015673c0 _ZZN7seastar3rpc11send_helperIN4netw10serializerENS2_14messaging_verbENS0_12no_wait_typeEJ15frozen_mutationN5utils12small_vectorIN3gms12inet_addressELm3EEESA_jmSt8optionalIN7tracing10trace_infoEESt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSJ_19account_and_enforceEEEN7service13fencing_tokenEEEEDaT0_NS0_9signatureIFT1_DpT2_EEEEN7shelper4sendERNS0_6clientESC_INSt6chrono10time_pointINS_12lowres_clockENSZ_8durationIlSt5ratioILl1ELl1000000000EEEEEEEPNS0_11cancellableERKS6_RKSB_RKSA_RKjRKmRKSF_RKSM_RKSO_ (scylla + 0x13673c0)
#5  0x00000000014d0a71 _ZN3ser23storage_proxy_rpc_verbs13send_mutationEPN4netw17messaging_serviceENS1_8msg_addrENSt6chrono10time_pointIN7seastar12lowres_clockENS5_8durationIlSt5ratioILl1ELl1000000000EEEEEERK15frozen_mutationRKN5utils12small_vectorIN3gms12inet_addressELm3EEESK_jmRKSt8optionalIN7tracing10trace_infoEESt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSX_19account_and_enforceEEEN7service13fencing_tokenE (scylla + 0x12d0a71)
#6  0x000000000325571f _ZN7seastar9coroutine3allIJNS_6futureIvEES3_EEC2IJZN7service13storage_proxy6remote12handle_writeIN5utils11tagged_uuidI24table_schema_version_tagEE15frozen_mutationZNS8_24receive_mutation_handlerENS_17smp_service_groupERKNS_3rpc11client_infoENSG_14opt_time_pointESE_NSA_12small_vectorIN3gms12inet_addressELm3EEESN_jmNSG_8optionalISt8optionalIN7tracing10trace_infoEEEENSP_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSY_19account_and_enforceEEEEENSP_INS6_13fencing_tokenEEEEUlRNS_10shared_ptrIS7_EENSR_15trace_state_ptrENS_13lw_shared_ptrIK6schemaEERKSE_NSt6chrono10time_pointINS_12lowres_clockENS1F_8durationIlSt5ratioILl1ELl1000000000EEEEEES13_E_ZNS8_24receive_mutation_handlerESF_SJ_SK_SE_SO_SN_jmSU_S12_S14_EUlS17_N4netw8msg_addrES1M_S1E_SN_jmRKST_S13_E_EENS2_INSG_12no_wait_typeEEES1P_SK_T_T0_RKSO_SN_jmS1R_S13_OT1_OT2_EUlvE0_ZNS9_ISD_SE_S1N_S1S_EES1U_S1P_SK_S1V_S1W_S1Y_SN_jmS1R_S13_S20_S22_EUlvE1_EEEDpOT_ (scylla + 0x305571f)
#7  0x0000000003254221 _ZN7service13storage_proxy6remote12handle_writeIN5utils11tagged_uuidI24table_schema_version_tagEE15frozen_mutationZNS1_24receive_mutation_handlerEN7seastar17smp_service_groupERKNS8_3rpc11client_infoENSA_14opt_time_pointES7_NS3_12small_vectorIN3gms12inet_addressELm3EEESH_jmNSA_8optionalISt8optionalIN7tracing10trace_infoEEEENSJ_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSS_19account_and_enforceEEEEENSJ_INS_13fencing_tokenEEEEUlRNS8_10shared_ptrIS0_EENSL_15trace_state_ptrENS8_13lw_shared_ptrIK6schemaEERKS7_NSt6chrono10time_pointINS8_12lowres_clockENS19_8durationIlSt5ratioILl1ELl1000000000EEEEEESX_E_ZNS1_24receive_mutation_handlerES9_SD_SE_S7_SI_SH_jmSO_SW_SY_EUlS11_N4netw8msg_addrES1G_S18_SH_jmRKSN_SX_E_EENS8_6futureINSA_12no_wait_typeEEES1J_SE_T_T0_RKSI_SH_jmS1L_SX_OT1_OT2_ (scylla + 0x3054221)
#8  0x000000000324a1a0 _ZN7service13storage_proxy6remote24receive_mutation_handlerEN7seastar17smp_service_groupERKNS2_3rpc11client_infoENS4_14opt_time_pointE15frozen_mutationN5utils12small_vectorIN3gms12inet_addressELm3EEESD_jmNS4_8optionalISt8optionalIN7tracing10trace_infoEEEENSF_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSO_19account_and_enforceEEEEENSF_INS_13fencing_tokenEEE (scylla + 0x304a1a0)
#9  0x0000000003257f5f _ZNSt17_Function_handlerIFN7seastar6futureINS0_3rpc12no_wait_typeEEERKNS2_11client_infoENS2_14opt_time_pointE15frozen_mutationN5utils12small_vectorIN3gms12inet_addressELm3EEESD_jmNS2_8optionalISt8optionalIN7tracing10trace_infoEEEENSF_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSO_19account_and_enforceEEEEENSF_IN7service13fencing_tokenEEEESt11_Bind_frontIMNST_13storage_proxy6remoteEFS4_NS0_17smp_service_groupES7_S8_S9_SE_SD_jmSK_SS_SV_EJPSZ_S10_EEE9_M_invokeERKSt9_Any_dataS7_OS8_OS9_OSE_OSD_OjOmOSK_OSS_OSV_ (scylla + 0x3057f5f)
#10 0x0000000001739adf _ZZZZN7seastar3rpc11recv_helperIN4netw10serializerESt8functionIFNS_6futureINS0_12no_wait_typeEEERKNS0_11client_infoENS0_14opt_time_pointE15frozen_mutationN5utils12small_vectorIN3gms12inet_addressELm3EEESG_jmNS0_8optionalISt8optionalIN7tracing10trace_infoEEEENSI_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSR_19account_and_enforceEEEEENSI_IN7service13fencing_tokenEEEEES7_JSC_SH_SG_jmSN_SV_SY_ENS0_19do_want_client_infoENS0_18do_want_time_pointEEEDaNS0_9signatureIFT1_DpT2_EEEOT0_T3_T4_ENUlNS_10shared_ptrINS0_6server10connectionEEESJ_INSt6chrono10time_pointINS_12lowres_clockENS1H_8durationIlSt5ratioILl1ELl1000000000EEEEEEElNS0_7rcv_bufEE_clES1G_S1P_lS1Q_ENUlT_E_clINS_15semaphore_unitsINS_35semaphore_default_exception_factoryES1J_EEEEDaS1S_ENUlvE_clEv (scylla + 0x1539adf)
#11 0x000000000173868a _ZZZN7seastar3rpc11recv_helperIN4netw10serializerESt8functionIFNS_6futureINS0_12no_wait_typeEEERKNS0_11client_infoENS0_14opt_time_pointE15frozen_mutationN5utils12small_vectorIN3gms12inet_addressELm3EEESG_jmNS0_8optionalISt8optionalIN7tracing10trace_infoEEEENSI_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSR_19account_and_enforceEEEEENSI_IN7service13fencing_tokenEEEEES7_JSC_SH_SG_jmSN_SV_SY_ENS0_19do_want_client_infoENS0_18do_want_time_pointEEEDaNS0_9signatureIFT1_DpT2_EEEOT0_T3_T4_ENUlNS_10shared_ptrINS0_6server10connectionEEESJ_INSt6chrono10time_pointINS_12lowres_clockENS1H_8durationIlSt5ratioILl1ELl1000000000EEEEEEElNS0_7rcv_bufEE_clES1G_S1P_lS1Q_ENUlT_E_clINS_15semaphore_unitsINS_35semaphore_default_exception_factoryES1J_EEEEDaS1S_ (scylla + 0x153868a)
#12 0x0000000001737d26 _ZN7seastar6futureINS_15semaphore_unitsINS_35semaphore_default_exception_factoryENS_12lowres_clockEEEE9then_implIZZNS_3rpc11recv_helperIN4netw10serializerESt8functionIFNS0_INS7_12no_wait_typeEEERKNS7_11client_infoENS7_14opt_time_pointE15frozen_mutationN5utils12small_vectorIN3gms12inet_addressELm3EEESM_jmNS7_8optionalISt8optionalIN7tracing10trace_infoEEEENSO_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSX_19account_and_enforceEEEEENSO_IN7service13fencing_tokenEEEEESD_JSI_SN_SM_jmST_S11_S14_ENS7_19do_want_client_infoENS7_18do_want_time_pointEEEDaNS7_9signatureIFT1_DpT2_EEEOT0_T3_T4_ENUlNS_10shared_ptrINS7_6server10connectionEEESP_INSt6chrono10time_pointIS3_NS1N_8durationIlSt5ratioILl1ELl1000000000EEEEEEElNS7_7rcv_bufEE_clES1M_S1U_lS1V_EUlT_E_NS0_IvEEEES1F_OS1X_ (scylla + 0x1537d26)
#13 0x0000000001735de2 _ZZN7seastar3rpc11recv_helperIN4netw10serializerESt8functionIFNS_6futureINS0_12no_wait_typeEEERKNS0_11client_infoENS0_14opt_time_pointE15frozen_mutationN5utils12small_vectorIN3gms12inet_addressELm3EEESG_jmNS0_8optionalISt8optionalIN7tracing10trace_infoEEEENSI_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENSR_19account_and_enforceEEEEENSI_IN7service13fencing_tokenEEEEES7_JSC_SH_SG_jmSN_SV_SY_ENS0_19do_want_client_infoENS0_18do_want_time_pointEEEDaNS0_9signatureIFT1_DpT2_EEEOT0_T3_T4_ENUlNS_10shared_ptrINS0_6server10connectionEEESJ_INSt6chrono10time_pointINS_12lowres_clockENS1H_8durationIlSt5ratioILl1ELl1000000000EEEEEEElNS0_7rcv_bufEE_clES1G_S1P_lS1Q_ (scylla + 0x1535de2)
#14 0x000000000173515f _ZNSt17_Function_handlerIFN7seastar6futureIvEENS0_10shared_ptrINS0_3rpc6server10connectionEEESt8optionalINSt6chrono10time_pointINS0_12lowres_clockENS9_8durationIlSt5ratioILl1ELl1000000000EEEEEEElNS4_7rcv_bufEEZNS4_11recv_helperIN4netw10serializerESt8functionIFNS1_INS4_12no_wait_typeEEERKNS4_11client_infoENS4_14opt_time_pointE15frozen_mutationN5utils12small_vectorIN3gms12inet_addressELm3EEESY_jmNS4_8optionalIS8_IN7tracing10trace_infoEEEENS10_ISt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENS18_19account_and_enforceEEEEENS10_IN7service13fencing_tokenEEEEESP_JSU_SZ_SY_jmS14_S1C_S1F_ENS4_19do_want_client_infoENS4_18do_want_time_pointEEEDaNS4_9signatureIFT1_DpT2_EEEOT0_T3_T4_EUlS7_SH_lSI_E_E9_M_invokeERKSt9_Any_dataOS7_OSH_OlOSI_ (scylla + 0x153515f)
#15 0x0000000005f9661a _ZZZZZZN7seastar3rpc6server10connection7processEvEN3$_0clEvENKUlvE_clEvENUlvE0_clEvENKUlSt5tupleIJSt8optionalImEmlS7_INS0_7rcv_bufEEEEE_clESB_ENUlvE_clEv (scylla + 0x5d9661a)
#16 0x0000000005f961e1 _ZZZZZN7seastar3rpc6server10connection7processEvEN3$_0clEvENKUlvE_clEvENUlvE0_clEvENKUlSt5tupleIJSt8optionalImEmlS7_INS0_7rcv_bufEEEEE_clESB_ (scylla + 0x5d961e1)
#17 0x0000000005f95e04 _ZZZZN7seastar3rpc6server10connection7processEvEN3$_0clEvENKUlvE_clEvENUlvE0_clEv (scylla + 0x5d95e04)
#18 0x0000000005f96ee6 _ZN7seastar8internal14do_until_stateIZZZNS_3rpc6server10connection7processEvEN3$_0clEvENKUlvE_clEvEUlvE_ZZZNS4_7processEvENS5_clEvENKS6_clEvEUlvE0_E15run_and_disposeEv (scylla + 0x5d96ee6)
#19 0x0000000005ab08e0 _ZN7seastar7reactor14run_some_tasksEv (scylla + 0x58b08e0)
#20 0x0000000005ab1bb8 _ZN7seastar7reactor6do_runEv (scylla + 0x58b1bb8)
#21 0x0000000005ad5804 _ZNSt17_Function_handlerIFvvEZN7seastar3smp9configureERKNS1_11smp_optionsERKNS1_15reactor_optionsEE3$_0E9_M_invokeERKSt9_Any_data (scylla + 0x58d5804)
#22 0x0000000005a8033b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x588033b)
#23 0x00007f553faf7947 start_thread (libc.so.6 + 0x8c947)
#24 0x00007f553fb7d870 __clone3 (libc.so.6 + 0x112870)
Stack trace of thread 6153:
#0  0x00000000038074f3 _ZNK2db9commitlog11min_gc_timeERKN5utils11tagged_uuidI12table_id_tagEE (scylla + 0x36074f3)
#1  0x00000000045493d2 _ZNK18tombstone_gc_state21get_gc_before_for_keyEN7seastar13lw_shared_ptrIK6schemaEERKN3dht13decorated_keyERKNSt6chrono10time_pointI8gc_clockNS9_8durationIlSt5ratioILl1ELl1EEEEEE (scylla + 0x43493d2)
#2  0x00000000025d319a _ZN22compact_mutation_stateIL20compact_for_sstables1EE7consumeIN8sstables26compacted_fragments_writerE33noop_compacted_fragments_consumerEEN7seastar10bool_classINS6_18stop_iteration_tagEEEO14clustering_rowRT_RT0_ (scylla + 0x23d319a)
#3  0x00000000025ceddd _ZNO20mutation_fragment_v27consumeIN23flat_mutation_reader_v24impl16consumer_adapterI25compact_for_compaction_v2IN8sstables26compacted_fragments_writerE33noop_compacted_fragments_consumerEEEEEDcRT_ (scylla + 0x23ceddd)
#4  0x00000000025cdd36 _ZN23flat_mutation_reader_v24impl17consume_in_threadI25compact_for_compaction_v2IN8sstables26compacted_fragments_writerE33noop_compacted_fragments_consumerENS_9no_filterEEEDaT_T0_ (scylla + 0x23cdd36)
#5  0x00000000025bf931 _ZZZN8sstables10compaction7consumeEvENUl23flat_mutation_reader_v2E_clES1_ENUlvE_clEv (scylla + 0x23bf931)
#6  0x00000000025be6d6 _ZN7seastar20noncopyable_functionIFvvEE17direct_vtable_forIZNS_5asyncIZZN8sstables10compaction7consumeEvENUl23flat_mutation_reader_v2E_clES7_EUlvE_JEEENS_8futurizeINSt13invoke_resultIT_JDpT0_EE4typeEE4typeENS_17thread_attributesEOSC_DpOSD_EUlvE_E4callEPKS2_ (scylla + 0x23be6d6)
#7  0x0000000005e8e9e7 _ZN7seastar14thread_context4mainEv (scylla + 0x5c8e9e7)
Stack trace of thread 6155:
#0  0x0000000005af41f0 _ZN7seastar7reactor26io_queue_submission_pollfn4pollEv (scylla + 0x58f41f0)
#1  0x0000000005ad4809 _ZNSt17_Function_handlerIFbvEZN7seastar7reactor6do_runEvE3$_5E9_M_invokeERKSt9_Any_data (scylla + 0x58d4809)
#2  0x0000000005ab1bf6 _ZN7seastar7reactor6do_runEv (scylla + 0x58b1bf6)
#3  0x0000000005ad5804 _ZNSt17_Function_handlerIFvvEZN7seastar3smp9configureERKNS1_11smp_optionsERKNS1_15reactor_optionsEE3$_0E9_M_invokeERKSt9_Any_data (scylla + 0x58d5804)
#4  0x0000000005a8033b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x588033b)
#5  0x00007f553faf7947 start_thread (libc.so.6 + 0x8c947)
#6  0x00007f553fb7d870 __clone3 (libc.so.6 + 0x112870)
Stack trace of thread 6156:
#0  0x0000000001bc1092 _ZZN5query12consume_pageI20query_result_builderEEDaR23flat_mutation_reader_v2N7seastar13lw_shared_ptrI22compact_mutation_stateIL20compact_for_sstables0EEEERKNS_15partition_sliceEOT_mjNSt6chrono10time_pointI8gc_clockNSF_8durationIlSt5ratioILl1ELl1EEEEEEENUlP20mutation_fragment_v2E_clESO_ (scylla + 0x19c1092)
#1  0x0000000001bbfa5f _ZN5query12consume_pageI20query_result_builderEEDaR23flat_mutation_reader_v2N7seastar13lw_shared_ptrI22compact_mutation_stateIL20compact_for_sstables0EEEERKNS_15partition_sliceEOT_mjNSt6chrono10time_pointI8gc_clockNSF_8durationIlSt5ratioILl1ELl1EEEEEE (scylla + 0x19bfa5f)
#2  0x0000000001b88790 _ZN5query7querier12consume_pageI20query_result_builderEEDaOT_mjNSt6chrono10time_pointI8gc_clockNS5_8durationIlSt5ratioILl1ELl1EEEEEEN7tracing15trace_state_ptrE (scylla + 0x1988790)
#3  0x0000000001b83314 _ZN7replica5table5queryEN7seastar13lw_shared_ptrIK6schemaEE13reader_permitRKN5query12read_commandENS7_14result_optionsERKSt6vectorI20nonwrapping_intervalIN3dht13ring_positionEESaISG_EEN7tracing15trace_state_ptrERNS7_21result_memory_limiterENSt6chrono10time_pointINS1_12lowres_clockENSP_8durationIlSt5ratioILl1ELl1000000000EEEEEEPSt8optionalINS7_7querierEE (scylla + 0x1983314)
#4  0x0000000001aac339 _ZN7seastar20noncopyable_functionIFNS_6futureIvEE13reader_permitEE19indirect_vtable_forIZN7replica8database5queryENS_13lw_shared_ptrIK6schemaEERKN5query12read_commandENSD_14result_optionsERKSt6vectorI20nonwrapping_intervalIN3dht13ring_positionEESaISM_EEN7tracing15trace_state_ptrENSt6chrono10time_pointINS_12lowres_clockENST_8durationIlSt5ratioILl1ELl1000000000EEEEEESt7variantIJSt9monostateN2db24per_partition_rate_limit12account_onlyENS14_19account_and_enforceEEEE3$_0E4callEPKS5_S3_ (scylla + 0x18ac339)
#5  0x00000000045f0ea0 _ZN28reader_concurrency_semaphore14execution_loopEv.resume (scylla + 0x43f0ea0)
#6  0x000000000138afab _ZN7seastar8internal21coroutine_traits_baseIvE12promise_type15run_and_disposeEv (scylla + 0x118afab)
#7  0x0000000005ab08e0 _ZN7seastar7reactor14run_some_tasksEv (scylla + 0x58b08e0)
#8  0x0000000005ab1bb8 _ZN7seastar7reactor6do_runEv (scylla + 0x58b1bb8)
#9  0x0000000005ad5804 _ZNSt17_Function_handlerIFvvEZN7seastar3smp9configureERKNS1_11smp_optionsERKNS1_15reactor_optionsEE3$_0E9_M_invokeERKSt9_Any_data (scylla + 0x58d5804)
#10 0x0000000005a8033b _ZN7seastar12posix_thread13start_routineEPv (scylla + 0x588033b)
#11 0x00007f553faf7947 start_thread (libc.so.6 + 0x8c947)
#12 0x00007f553fb7d870 __clone3 (libc.so.6 + 0x112870)
Stack trace of thread 6152:
#0  0x00007f553fc64e50 _Unwind_DebugHook (libgcc_s.so.1 + 0x19e50)
#1  0x00007f553fc6519d _Unwind_RaiseException (libgcc_s.so.1 + 0x1a19d)
#2  0x00000000027d5387 _ZN13cql_transportL24process_execute_internalERN7service12client_stateERN7seastar7shardedIN4cql315query_processorEEENS_14request_readerEth14service_permitN7tracing15trace_state_ptrEbSt13unordered_mapIhSt8optionalINS3_13basic_sstringIajLj31ELb0EEEESt4hashIhESt8equal_toIhESaISt4pairIKhSH_EEE (scylla + 0x25d5387)
#3  0x0000000000000010 n/a (n/a + 0x0)
#4  0x00007f553d1d0d00 n/a (n/a + 0x0)
download_instructions=gsutil cp gs://upload.scylladb.com/core.scylla.112.ed05a280769b4043ab0b1a6fe25decc8.6147.1699079551000000/core.scylla.112.ed05a280769b4043ab0b1a6fe25decc8.6147.1699079551000000.gz .
gunzip /var/lib/systemd/coredump/core.scylla.112.ed05a280769b4043ab0b1a6fe25decc8.6147.1699079551000000.gz

Installation details

Kernel Version: 5.15.0-1049-aws
Scylla version (or git commit hash): 5.5.0~dev-20231027.227136ddf54f with build-id 420b5e5ef0c99c93991034185a1061ec90ab2c43

Cluster size: 12 nodes (i3en.2xlarge)

Scylla Nodes used in this run:

  • parallel-topology-schema-changes-mu-db-node-d8343c00-9 (13.40.105.112 | 10.3.9.119) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-d8343c00-8 (18.169.104.175 | 10.3.10.209) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-d8343c00-7 (3.8.162.206 | 10.3.11.44) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-d8343c00-6 (34.245.232.43 | 10.4.8.175) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-d8343c00-5 (3.253.2.214 | 10.4.11.251) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-d8343c00-4 (54.74.152.92 | 10.4.11.96) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-d8343c00-3 (54.74.160.119 | 10.4.8.254) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-d8343c00-2 (3.249.78.165 | 10.4.10.71) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-d8343c00-14 (3.11.8.66 | 10.3.10.165) (shards: -1)
  • parallel-topology-schema-changes-mu-db-node-d8343c00-13 (3.8.85.81 | 10.3.11.142) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-d8343c00-12 (3.8.86.27 | 10.3.9.157) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-d8343c00-11 (3.10.144.99 | 10.3.9.206) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-d8343c00-10 (18.169.237.29 | 10.3.11.143) (shards: 7)
  • parallel-topology-schema-changes-mu-db-node-d8343c00-1 (34.253.192.108 | 10.4.8.199) (shards: 7)

OS / Image: ami-0445669e3c59c66ab ami-066dad85b694dd7e8 (aws: undefined_region)

Test: longevity-multidc-schema-topology-changes-12h-test
Test id: d8343c00-7767-4d09-8c26-cf2c6f09751e
Test name: scylla-master/longevity/longevity-multidc-schema-topology-changes-12h-test
Test config file(s):

Logs and commands
  • Restore Monitor Stack command: $ hydra investigate show-monitor d8343c00-7767-4d09-8c26-cf2c6f09751e
  • Restore monitor on AWS instance using Jenkins job
  • Show all stored logs command: $ hydra investigate show-logs d8343c00-7767-4d09-8c26-cf2c6f09751e

Logs:

Jenkins job URL
Argus

@kostja
Copy link
Contributor

kostja commented Nov 19, 2023

@eliransin if it's the same issue, please close as duplicate.

denesb pushed a commit that referenced this issue Nov 21, 2023
…ews schemas that lacks base information' from Eliran Sinvani

This mini-set addresses two places where incomplete materialized-view schemas could be converted to `global_schema_ptr`.
One of these conversions was entirely unnecessary and also created a "chicken and egg" problem: during the schema-sync procedure itself, a view schema was converted to `global_schema_ptr` solely for logging purposes. This can cause a hiccup in materialized-view updates when they arrive from a node with a different view schema.
The reason a synced schema can sometimes lack base info is that deactivation and reactivation of the schema inside the `schema_registry` do not restore the base information, because the required context is missing.
Once a schema is synced the problem becomes easy, since we can simply use the latest base information from the database (a simplified illustration follows this commit reference).

Fixes #14011

Closes #14861

* github.com:scylladb/scylladb:
  migration manager: fix incomplete mv schemas returned from get_schema_for_write
  migration_manager: do not globalize potentially incomplete schema

(cherry picked from commit 5752dc8)
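
For illustration only, here is a minimal, self-contained C++ sketch of the guard described in the commit message above: before a view schema is globalized or handed to the write path, check whether it still carries base-table information and, if it does not, re-attach the current base info from the local database. All names here (view_schema, local_database, ensure_base_info, globalize_for_write) are simplified stand-ins, not the actual ScyllaDB types or API.

// Simplified stand-in types; the real ScyllaDB classes (view_ptr,
// schema_registry, global_schema_ptr, ...) are considerably richer.
#include <iostream>
#include <memory>
#include <optional>
#include <stdexcept>
#include <string>
#include <unordered_map>
#include <utility>

struct base_info { std::string base_table; };   // what a view needs in order to apply updates
struct view_schema {
    std::string name;
    std::optional<base_info> base;              // may be lost when the registry deactivates and reactivates the schema
};

// Stand-in for looking up the current base-table schema in the local database.
struct local_database {
    std::unordered_map<std::string, std::string> view_to_base;
    base_info current_base_of(const std::string& view_name) const {
        return base_info{view_to_base.at(view_name)};
    }
};

// The guard the fix introduces, conceptually: a synced view schema that lacks
// base info gets the latest base info from the database before it is used.
std::shared_ptr<view_schema> ensure_base_info(std::shared_ptr<view_schema> v, const local_database& db) {
    if (!v->base) {
        v->base = db.current_base_of(v->name);
    }
    return v;
}

// Globalizing or building a view with an uninitialized base info is the error
// path reported in this issue; with the guard above it is not reachable.
void globalize_for_write(const std::shared_ptr<view_schema>& v) {
    if (!v->base) {
        throw std::runtime_error("Tried to build a global schema for view " + v->name +
                                 " with an uninitialized base info");
    }
    std::cout << "globalized " << v->name << " (base table " << v->base->base_table << ")\n";
}

int main() {
    local_database db{{{"ks.t_by_v2", "ks.t"}}};
    auto v = std::make_shared<view_schema>(view_schema{"ks.t_by_v2", std::nullopt});
    globalize_for_write(ensure_base_info(std::move(v), db));   // prints instead of throwing
}

As the commit message notes, re-attaching the base info in this way is only safe once the schema has been synced, since only then does the local database hold up-to-date base information for the view.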
@denesb
Copy link
Contributor

denesb commented Nov 21, 2023

Backport to 5.4 queued. @eliransin, does any other release need this?

@eliransin
Copy link
Contributor

Backport to 5.4 queued. @eliransin, does any other release need this?

Checking...

@eliransin
Copy link
Contributor

5.3 needs this too, but it is not a clean backport; I will prepare a backport for it.

@denesb
Copy link
Contributor

denesb commented Nov 21, 2023

5.3 needs this too, but it is not a clean backport; I will prepare a backport for it.

We don't have a 5.3; do you mean 5.2?

@avikivity
Copy link
Member

@eliransin ping backport

@denesb
Copy link
Contributor

denesb commented Dec 15, 2023

@eliransin ping backport.

1 similar comment
@mykaul
Copy link
Contributor

mykaul commented Jan 1, 2024

@eliransin ping backport.

@eliransin
Copy link
Contributor

5.3 needs this too, but it is not a clean backport; I will prepare a backport for it.

We don't have a 5.3; do you mean 5.2?

Sorry for the long delay; 5.2 doesn't have this code, so no backport is needed.
