In a similar vein to "Freeze The Bridge Via Large ERC20 Names/Symbols/Denoms", a sufficiently large validator set or a sufficiently rapid series of validator updates could cause both the eth_oracle_main_loop and the relayer_main_loop to fall into a state of perpetual errors. In find_latest_valset, we call:
If the validator set is sufficiently large, or updated sufficiently rapidly, this call will continuously return an error whenever the logs in a 5,000-block range (see: const BLOCKS_TO_SEARCH: u128 = 5_000u128;) exceed 10 MB. The Cosmos Hub has said it will push the number of validators up to 300 (currently 125). At 300 validators, each log would produce 19,328 bytes of data (4*32 + 64*300). Given this, there must be fewer than 517 updates per 5,000-block range, or the node will fall out of sync.
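The arithmetic above can be checked with a small sketch. The function names and the 10 MB figure are illustrative assumptions for this report, not identifiers from the Gravity Bridge codebase:

```rust
/// Bytes in one ValsetUpdated log: a 4-word fixed header plus two
/// 32-byte words (address + power) per validator, i.e. 4*32 + 64*n.
/// (Hypothetical helper, not from the actual codebase.)
fn valset_log_bytes(validators: u64) -> u64 {
    4 * 32 + 64 * validators
}

/// How many whole logs fit under the ~10 MB response cap.
fn max_updates_per_window(validators: u64, cap_bytes: u64) -> u64 {
    cap_bytes / valset_log_bytes(validators)
}

fn main() {
    // At 300 validators each log is 19,328 bytes...
    assert_eq!(valset_log_bytes(300), 19_328);
    // ...so at most 517 updates fit in one 10 MB response.
    assert_eq!(max_updates_per_window(300, 10_000_000), 517);
    println!(
        "bytes per log: {}, max updates per window: {}",
        valset_log_bytes(300),
        max_updates_per_window(300, 10_000_000)
    );
}
```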
This will freeze the bridge by preventing attestations from taking place.
This requires a patch to re-enable the bridge.
Recommendation
Handle the error more concretely: check whether a byte-limit error was returned, and if so, split the search range in two and try again. Repeat as necessary and combine the results.
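A minimal sketch of that recommendation, with the byte-limit error modeled by a hypothetical `LogError::ResponseTooLarge` and the real web3 query replaced by a stand-in `fetch_logs` (all names here are assumptions, not the actual client API):

```rust
#[derive(Debug)]
enum LogError {
    ResponseTooLarge, // the ~10 MB response cap was exceeded
}

/// Stand-in for the raw log query over [start, end]: pretend every block
/// holds one ~19 kB log and the cap admits at most 517 logs per request.
fn fetch_logs(start: u128, end: u128) -> Result<Vec<String>, LogError> {
    let count = end - start + 1;
    if count > 517 {
        Err(LogError::ResponseTooLarge)
    } else {
        Ok((start..=end).map(|b| format!("log@{b}")).collect())
    }
}

/// On a byte-limit error, halve the range, search each half recursively,
/// and combine the results; any other outcome is passed through.
fn fetch_logs_chunked(start: u128, end: u128) -> Result<Vec<String>, LogError> {
    match fetch_logs(start, end) {
        Err(LogError::ResponseTooLarge) if start < end => {
            let mid = start + (end - start) / 2;
            let mut logs = fetch_logs_chunked(start, mid)?;
            logs.extend(fetch_logs_chunked(mid + 1, end)?);
            Ok(logs)
        }
        other => other,
    }
}

fn main() {
    // A full 5,000-block window no longer errors out: it is split
    // until each chunk fits, and every log is still recovered.
    let logs = fetch_logs_chunked(1, 5_000).unwrap();
    assert_eq!(logs.len(), 5_000);
    println!("recovered {} logs", logs.len());
}
```

The recursion bottoms out at a single block, so a genuinely oversized single-block response still surfaces as an error rather than looping forever.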
This is a solid report with detailed computations to back it up. I appreciate it and will take action in our web3 library to prevent this exact scenario.
Handle: nascent