Crazy Network Crash #2803
Apparently the config file had both nodes attempting to produce; everything was fine for a while, then all hell broke loose when applying a block.
bytemaster added a commit that referenced this issue on May 6, 2018
Wow, maybe they all want to be rich. Tell them it should be by voting.
I can fix this pretty easily in the morning. The commit that is tagged with
this is dangerous. If we want to use that commit we should revert the
validator plugin and the concept of coordinators in appbase. Otherwise we
have a lot of dead code which is only waiting to cause problems.
The root cause of the error is that unlike the previous chain controller we
can now throw exceptions when trying to schedule the next block.
start_block can throw. We should probably take the time of the head block
into account to make sure this symptom doesn't happen, but there is also
something apparently wrong in the retry logic that needs work.
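A minimal sketch of the two ideas above, assuming hypothetical helpers (`head_block_time`, `start_block`, and `schedule_retry` are placeholders, not the actual producer_plugin interfaces): base the next slot on the head block's timestamp so the chosen time is always in the future, and catch a throw from `start_block` so the retry path stays in control instead of the node crashing.

```cpp
// Sketch only, not the actual producer_plugin code.
#include <algorithm>
#include <chrono>
#include <iostream>
#include <stdexcept>

using clock_type = std::chrono::system_clock;
using time_point = clock_type::time_point;

constexpr auto block_interval = std::chrono::milliseconds(500);

// Placeholder: timestamp of the current head block (here half a slot in the
// future, mimicking a block "from the future" arriving from a peer).
time_point head_block_time() { return clock_type::now() + block_interval / 2; }

// Placeholder: would assemble the pending block; throws if `when` is not past the head.
void start_block(time_point when) {
   if (when <= head_block_time())
      throw std::runtime_error("next block must be in the future");
}

// Placeholder: re-arm the production timer instead of tearing the node down.
void schedule_retry() { std::cout << "will try again later\n"; }

void schedule_production_loop() {
   // Base the next slot on the head block's timestamp, not just the wall clock,
   // so a head block from the (near) future cannot make us pick a stale slot.
   const auto when = std::max(clock_type::now(), head_block_time()) + block_interval;

   try {
      start_block(when);                      // can throw; must not crash the node
      std::cout << "started pending block\n";
   } catch (const std::exception& e) {
      std::cerr << "failed to start pending block: " << e.what() << '\n';
      schedule_retry();                       // retry logic stays in control
   }
}

int main() { schedule_production_loop(); }
```

In this shape the "next block must be in the future" assert can still fire if a peer's clock is far ahead, but the loop logs the failure and retries rather than segfaulting.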
…On Sun, May 6, 2018, 5:29 PM Daniel Larimer ***@***.***> wrote:
Started one node with:
./nodeos -e --producer eosio
...
1508503ms thread-0 producer_plugin.cpp:415 block_production_loo ] Produced block 00000180e0d4c715... #384 @ 2018-05-06T21:25:08.500 signed by eosio [trxs: 0, lib: 384, confirmed: 0]
1509003ms thread-0 producer_plugin.cpp:415 block_production_loo ] Produced block 00000181040d98a9... #385 @ 2018-05-06T21:25:09.000 signed by eosio [trxs: 0, lib: 385, confirmed: 0]
1509005ms thread-0 producer_plugin.cpp:124 on_incoming_block ] Received block 000001821eca8981... #386 @ 2018-05-06T21:25:09.500 signed by eosio [trxs: 0, lib: 386, confirmed: 0]
1509005ms thread-0 producer_plugin.cpp:347 schedule_production_ ] 10 assert_exception: Assert Exception
when > header.timestamp: next block must be in the future
{}
thread-0 block_header_state.cpp:71 generate_next
1509005ms thread-0 producer_plugin.cpp:367 schedule_production_ ] Failed to start a pending block, will try again later
Segmentation fault: 11
And another with
./nodeos -d node2 --p2p-peer-address 127.0.0.1:9876 --p2p-listen-endpoint 127.0.0.1:6789
...
1508503ms thread-0 producer_plugin.cpp:124 on_incoming_block ] Received block 00000180e0d4c715... #384 @ 2018-05-06T21:25:08.500 signed by eosio [trxs: 0, lib: 384, confirmed: 0]
1509004ms thread-0 producer_plugin.cpp:124 on_incoming_block ] Received block 00000181040d98a9... #385 @ 2018-05-06T21:25:09.000 signed by eosio [trxs: 0, lib: 385, confirmed: 0]
1509004ms thread-0 producer_plugin.cpp:415 block_production_loo ] Produced block 000001821eca8981... #386 @ 2018-05-06T21:25:09.500 signed by eosio [trxs: 0, lib: 386, confirmed: 0]
1509005ms thread-0 producer_plugin.cpp:347 schedule_production_ ] 10 assert_exception: Assert Exception
when > header.timestamp: next block must be in the future
{}
thread-0 block_header_state.cpp:71 generate_next
1509005ms thread-0 producer_plugin.cpp:367 schedule_production_ ] Failed to start a pending block, will try again later
Segmentation fault: 11
Out of nowhere, the non-producing node schedules production and sends a
block to the producing node... which rejects it, and then both nodes crash.
wanderingbort pushed a commit to wanderingbort/eos that referenced this issue on May 7, 2018
…ucer_plugin again. Fix for 85b2948 did not link the default plugin properly. Also check to make sure that we have not received a block from the future when we attempt to start the next block which is the last symptom before whatever caused the crash in EOSIO#2803
wanderingbort pushed a commit to wanderingbort/eos that referenced this issue on May 7, 2018
Use the correct base time when calculating the minimum time to the next block.
Prevent bad access of members of `pending` if `start_block` throws by cleaning up `pending` on the way out.
Prevent bad access of a vector when fulfilling a fork request results in no blocks to send.
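A rough sketch of the second fix above, assuming hypothetical names (`controller_sketch`, `pending_state`, and `build_pending` are stand-ins, not the real chain controller types): if `start_block` throws partway through, the half-built `pending` state is cleared before the exception propagates, so nothing downstream dereferences it.

```cpp
// Illustration only, not the actual chain controller code.
#include <iostream>
#include <optional>
#include <stdexcept>

struct pending_state {
   int block_num = 0;   // stand-in for the block-in-progress data
};

class controller_sketch {
   std::optional<pending_state> pending;

   void build_pending() {
      // Stand-in for the work that can fail, e.g. "next block must be in the future".
      throw std::runtime_error("next block must be in the future");
   }

public:
   void start_block() {
      pending.emplace();
      try {
         build_pending();
      } catch (...) {
         pending.reset();   // clean up `pending` on the way out
         throw;             // callers still see the failure, never a dangling pending
      }
   }

   bool has_pending() const { return pending.has_value(); }
};

int main() {
   controller_sketch c;
   try {
      c.start_block();
   } catch (const std::exception& e) {
      std::cerr << "start_block failed: " << e.what() << '\n';
   }
   // Without the reset in the catch block, this would still report true and
   // later code could touch a half-initialized pending block.
   std::cout << "pending present after failure? " << std::boolalpha << c.has_pending() << '\n';
}
```

The same effect could be achieved with a scope guard; the point is simply that a throwing `start_block` must not leave `pending` half-initialized.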
Merged
bytemaster added a commit that referenced this issue on May 8, 2018