Node crushing after a simple restart of the service #139
Comments
Thank you for reporting! We'll investigate this.
Confirming the error:
Jun 17 09:55:15 stakewars-iv-h15a neard[143422]: thread 'main' panicked at chain/client/src/client_actor.rs:168:6:
Jun 17 09:55:15 stakewars-iv-h15a neard[143422]: called `Result::unwrap()` on an `Err` value: Chain(StorageError(StorageInconsistentState("No ChunkExtra for block 8WX1DQnSttuk4WTyHPD5oJnrYBAL95hbCDaF2nbX2pgj in shard s1.v3")))
Jun 17 09:55:15 stakewars-iv-h15a neard[143422]: stack backtrace:
Jun 17 09:55:15 stakewars-iv-h15a neard[143422]: 0: rust_begin_unwind
Jun 17 09:55:15 stakewars-iv-h15a neard[143422]: 1: core::panicking::panic_fmt
Jun 17 09:55:15 stakewars-iv-h15a neard[143422]: 2: core::result::unwrap_failed
Jun 17 09:55:15 stakewars-iv-h15a neard[143422]: 3: nearcore::start_with_config_and_synchronization
Jun 17 09:55:15 stakewars-iv-h15a neard[143422]: 4: neard::cli::RunCmd::run::{{closure}}
Jun 17 09:55:15 stakewars-iv-h15a neard[143422]: 5: tokio::task::local::LocalSet::run_until::{{closure}}
Jun 17 09:55:15 stakewars-iv-h15a neard[143422]: 6: neard::cli::NeardCmd::parse_and_run
Jun 17 09:55:15 stakewars-iv-h15a neard[143422]: 7: neard::main
Jun 17 09:55:15 stakewars-iv-h15a neard[143422]: note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
Jun 17 09:55:15 stakewars-iv-h15a systemd[1]: neard.service: Main process exited, code=dumped, status=6/ABRT
Jun 17 09:55:15 stakewars-iv-h15a systemd[1]: neard.service: Failed with result 'core-dump'

The problem appeared right after the Stateless pool. Here are the steps I took to fix this error: when moving the keys to another server, the error is completely reproduced. The error is still present.
I managed to start the node with this snapshot: 2024-06-17T11:42:04Z
Thanks, it works, but I'm stuck on the block.
The network was reset; this problem should not appear anymore.
Bug Report
Overview
Please share a high-level description of the issue/bug you are reporting.
I set up a stateless node a few days ago and it was running fine.
This morning, I just did a:
sudo systemctl restart neard
and I'm getting the error above. I downloaded the latest snapshot data but hit the same error.
Affected parties
Who is affected? Validators? Contract developers? Or regular users?
Stateless node validators.
Pool: abahmane.pool.statelessnet.
Impact
What’s the worst outcome of the issue?
Reproduction steps
Please share step by step guideline on how to reproduce the issue.
Do a simple:
sudo systemctl restart neard
[Optional] Code reference
Please locate the issue in the codebase.
[Optional] Root cause analysis
This section is optional but should be filed to claim additional reward.
Please share your analysis on the root cause of the issue.
[Optional] Suggested fix
This section is optional but should be filed to claim additional reward.
Please share a recommended long-term/short-term fix for the issue.