Thread 'tokio-runtime-worker' panicked #595

Closed
gdzien-co opened this issue Jul 12, 2021 · 8 comments

Comments

@gdzien-co

2021-07-12 17:23:30 [Relaychain] ✨ Imported #676997 (0xe8a9…f680)

====================

Version: 0.9.1-78d1dae-x86_64-linux-gnu

0: sp_panic_handler::set::{{closure}}
1: std::panicking::rust_panic_with_hook
at rustc/fe1bf8e05c39bdcc73fc09e246b7209444e389bc/library/std/src/panicking.rs:595:17
2: std::panicking::begin_panic_handler::{{closure}}
at rustc/fe1bf8e05c39bdcc73fc09e246b7209444e389bc/library/std/src/panicking.rs:497:13
3: std::sys_common::backtrace::__rust_end_short_backtrace
at rustc/fe1bf8e05c39bdcc73fc09e246b7209444e389bc/library/std/src/sys_common/backtrace.rs:141:18
4: rust_begin_unwind
at rustc/fe1bf8e05c39bdcc73fc09e246b7209444e389bc/library/std/src/panicking.rs:493:5
5: core::panicking::panic_fmt
at rustc/fe1bf8e05c39bdcc73fc09e246b7209444e389bc/library/core/src/panicking.rs:92:14
6: core::option::expect_failed
at rustc/fe1bf8e05c39bdcc73fc09e246b7209444e389bc/library/core/src/option.rs:1292:5
7: sc_client_db::Backend::try_commit_operation
8: <sc_client_db::Backend as sc_client_api::backend::Backend>::commit_operation
9: <sc_service::client::client::Client<B,E,Block,RA> as sc_client_api::backend::LockImportRun<Block,B>>::lock_import_and_run::{{closure}}
10: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
11: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
12: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
13: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
14: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
15: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
16: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
17: <futures_util::future::future::map::Map<Fut,F> as core::future::future::Future>::poll
18: <sc_service::task_manager::prometheus_future::PrometheusFuture as core::future::future::Future>::poll
19: <core::future::from_generator::GenFuture as core::future::future::Future>::poll
20: <tracing_futures::Instrumented as core::future::future::Future>::poll
21: std::thread::local::LocalKey::with
22: futures_executor::local_pool::block_on
23: tokio::runtime::task::core::Core<T,S>::poll
24: tokio::runtime::task::harness::Harness<T,S>::poll::{{closure}}
25: tokio::runtime::task::harness::Harness<T,S>::poll
26: tokio::runtime::blocking::pool::Inner::run
27: tokio::runtime::context::enter
28: std::sys_common::backtrace::__rust_begin_short_backtrace
29: core::ops::function::FnOnce::call_once{{vtable.shim}}
30: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce>::call_once
at rustc/fe1bf8e05c39bdcc73fc09e246b7209444e389bc/library/alloc/src/boxed.rs:1546:9
<alloc::boxed::Box<F,A> as core::ops::function::FnOnce>::call_once
at rustc/fe1bf8e05c39bdcc73fc09e246b7209444e389bc/library/alloc/src/boxed.rs:1546:9
std::sys::unix::thread::Thread::new::thread_start
at rustc/fe1bf8e05c39bdcc73fc09e246b7209444e389bc/library/std/src/sys/unix/thread.rs:71:17
31: start_thread
32: clone

Thread 'tokio-runtime-worker' panicked at 'existence of block with number new_canonical implies existence of blocks with all numbers before it; qed', /home/gh-actions/.cargo/git/checkouts/substrate-7e08433d4c370a21/9c57262/client/db/src/lib.rs:1218

This is a bug. Please report it at:

    https://github.com/PureStake/moonbeam/issues/new
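
For context, frames 6–7 of the backtrace show the panic comes from an Option::expect inside sc_client_db::Backend::try_commit_operation: the backend looks up a block it assumes must already exist and gets None back. Below is a minimal, illustrative sketch of that pattern (the lookup function is a stand-in for the database query, not Substrate's actual API):

// Illustrative sketch only (not Substrate's actual code). It mimics the
// Option::expect pattern that frames 6-7 of the backtrace point at:
// try_commit_operation looks up a block it assumes must exist, and when the
// lookup returns None (e.g. because the database is corrupted), expect()
// panics with the message quoted above.
fn lookup_block_hash(number: u64) -> Option<[u8; 32]> {
    // Stand-in for the database query; None means the record is missing.
    if number == 0 { Some([0u8; 32]) } else { None }
}

fn main() {
    let new_canonical = 676_997u64;
    let _hash = lookup_block_hash(new_canonical)
        .expect("existence of block with number new_canonical implies existence of blocks with all numbers before it; qed");
}
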
@gdzien-co
Author

I tried running with the debugging options I've seen in #541:
-l state-db=trace,sync=trace

It changed nothing; it seems like something in my data is corrupted. I will try running another node with a different data location and keep these files for a week or so in case you need them to investigate further.
The log file from docker logs of the "new container with old data" is attached:
log.txt

@crystalin
Collaborator

Thank you for trying.
The parameter you typed looks good, but the output should look like the lines below (where you can see the log level in each line, like INFO, TRACE...):

2021-07-12 17:46:16.383  INFO main sc_cli::runner: Moonbeam Parachain Collator
2021-07-12 17:46:16.384  INFO main sc_cli::runner: ✌️  version 0.9.2-c25242f8-x86_64-linux-gnu
2021-07-12 17:46:16.384  INFO main sc_cli::runner: ❤️  by PureStake, 2019-2021
2021-07-12 17:46:16.384  INFO main sc_cli::runner: 📋 Chain specification: Moonbase Alpha
2021-07-12 17:46:16.384  INFO main sc_cli::runner: 🏷 Node name: tricky-crate-7940
2021-07-12 17:46:16.384  INFO main sc_cli::runner: 👤 Role: FULL
2021-07-12 17:46:16.384  INFO main sc_cli::runner: 💾 Database: RocksDb at /tmp/substrateKm2Dx8/chains/moonbase_alpha/db
2021-07-12 17:46:16.384  INFO main sc_cli::runner: ⛓  Native runtime: moonbase-155 (moonbase-0.tx2.au3)
2021-07-12 17:46:16.420  INFO main moonbeam_cli::command: Parachain id: Id(1000)
2021-07-12 17:46:16.420  INFO main moonbeam_cli::command: Parachain Account: 5Ec4AhPZk8STuex8Wsi9TwDtJQxKqzPJRCH7348Xtcs9vZLJ
2021-07-12 17:46:16.421  INFO main moonbeam_cli::command: Parachain genesis state: 0x000000000000000000000000000000000000000000000000000000000000000000b505bc9a20d69f14620b2417b6d777c398ceb3e32119b9a53507111d1880927c03170a2e7597b7b7e3d84c05391d139a62b157e78786d8c082f29dcf4c11131400
2021-07-12 17:46:16.532 TRACE main state-db: [🌗] StateDb settings: Constrained(Constraints { max_blocks: Some(256), max_mem: None }). Ref-counting: true
2021-07-12 17:46:16.532 TRACE main state-db: [🌗] DB pruning mode: None
2021-07-12 17:46:16.533 TRACE main state-db: [🌗] Reading pruning journal. Pending #0
2021-07-12 17:46:16.553  INFO main sc_service::client::client: [🌗] 🔨 Initializing Genesis block/state (state: 0xb505…927c, header-hash: 0x91bc…9527)
2021-07-12 17:46:16.554 TRACE main state-db: [🌗] Inserted uncanonicalized changeset 0.0 (75 inserted, 0 deleted)
2021-07-12 17:46:16.555 TRACE main state-db: [🌗] Canonicalizing 0x91bc6e169807aaa54802737e1c504b2577d4fafedd5a02c10293b1cd60e39527
2021-07-12 17:46:16.555 TRACE main state-db: [🌗] Discarding 1 records

@gdzien-co
Author

I will try it on the chain that was failing yesterday, but I am afraid the same thing may happen as with my second-attempt node, which, after waiting some time and restarting, went further and fully synchronized the chain.

@gdzien-co
Author

The way I tried to start the node does not seem to pick up the provided arguments:
docker logs -f $(docker run -d --network="host" -v "/data:/data" -u $(id -u ${USER}):$(id -g ${USER}) purestake/moonbeam:v0.9.1 --base-path=/data --chain alphanet --name="GDZIEN-C1-Test2-Temp" --execution wasm --wasm-execution compiled --pruning archive --state-cache-size 1 -- --pruning archive --name="GDZIEN-C1-Test2-Temp (Embedded Relay)" -l state-db=trace,sync=trace)

How else can I set those debugging/tracing options?

@crystalin
Collaborator

@gdzien-co you should put the flags before the --.
The -- is used to separate the parameters of the parachain (before) from those of the relaychain (after).
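For example, your earlier command with the tracing flag moved ahead of the -- (so it applies to the parachain side) would look something like:
docker logs -f $(docker run -d --network="host" -v "/data:/data" -u $(id -u ${USER}):$(id -g ${USER}) purestake/moonbeam:v0.9.1 --base-path=/data --chain alphanet --name="GDZIEN-C1-Test2-Temp" --execution wasm --wasm-execution compiled --pruning archive --state-cache-size 1 -l state-db=trace,sync=trace -- --pruning archive --name="GDZIEN-C1-Test2-Temp (Embedded Relay)")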

@gdzien-co
Author

Perfect, the log now scrolls across the screen much faster. I will report back if the node fails (I am running it as a second node on the same network, so some network port collisions are expected).

@gdzien-co
Author

log-new-net.zip
This file decompresses to 109 MiB and contains the error with the debugging options enabled.

I can provide the corrupted files upon request.

@crystalin
Collaborator

Closing this in favor of #541.
