Update the relaying mechanism to catch up correctly #370
Comments
@evan-forbes What do you think?
This is an important "edge" case that might not get hit under our initial use. If there is a simple way to handle it before the worker pool design, I'd say we should do that; if not, perhaps we could wait. Do you think we could get away with not covering this case until the worker pool design? I think the new design would be better at handling cases like this.
We can, but it would only work in the case where orchestrators are signing commitments really fast and no valset change happened in between.
This will be fixed using universal nonces: #464
As of #298, the QGB will be able to catch up with the old signatures and relay them.
Currently, the relayer relays valsets and data commitments independently.
This needs to be updated because, when catching up (for example, after the relayer was down for some reason), we will be replaying data commitments signed under a previous valset while the contract checks them against the current valset:
https://github.com/celestiaorg/quantum-gravity-bridge/blob/c61578fc1d05b9a9c607a94ffcfef5424be7ce33/src/QuantumGravityBridge.sol#L344
Thus, we will need to update the relaying mechanism: whenever the relayer finds a new signed valset, it does not relay it until all data commitment confirmations have been relayed to the contract up to a data commitment where `end_block >= valset.Height`. Only then does the relayer proceed to relay the new valset.
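
A minimal Go sketch of that ordering, assuming hypothetical `Valset`, `DataCommitment`, and `Relayer` types; the actual QGB relayer's types and submission calls will differ.

```go
// Package relayer sketches the proposed catch-up ordering. All names here
// are illustrative and not the real QGB relayer API.
package relayer

// Valset is a hypothetical stand-in for a signed validator set update.
type Valset struct {
	Nonce  uint64
	Height uint64 // Celestia block height at which the valset was created
}

// DataCommitment is a hypothetical stand-in for a signed data commitment
// confirmation covering the block range [BeginBlock, EndBlock].
type DataCommitment struct {
	Nonce      uint64
	BeginBlock uint64
	EndBlock   uint64
}

// Relayer is a placeholder for the component that submits transactions to
// the QuantumGravityBridge contract.
type Relayer struct{}

func (r *Relayer) relayDataCommitment(dc DataCommitment) error { return nil }
func (r *Relayer) relayValset(vs Valset) error                 { return nil }

// relayNewValset holds back a newly signed valset until the pending data
// commitment confirmations (signed under the previous valset) have been
// relayed up to one whose end block reaches the valset's height.
func (r *Relayer) relayNewValset(newValset Valset, pending []DataCommitment) error {
	for _, dc := range pending {
		if err := r.relayDataCommitment(dc); err != nil {
			return err
		}
		// Once a relayed commitment satisfies end_block >= valset.Height,
		// the catch-up condition described above is met.
		if dc.EndBlock >= newValset.Height {
			break
		}
	}
	// Only then does the relayer proceed with the new valset.
	return r.relayValset(newValset)
}
```

With the universal nonces mentioned in #464, this ordering would presumably fall out naturally from processing attestations strictly by nonce.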