Update the relaying mechanism to catch up correctly #370

Closed · Tracked by #301
rach-id opened this issue Apr 28, 2022 · 4 comments

rach-id commented Apr 28, 2022

As of #298, the QGB will be able to catch up on old signatures and relay them.
Currently, the relayer relays valsets and data commitments independently.
This will need to change, because when catching up (for example, after the relayer was down for some reason) we may be relaying data commitments that were signed under a previous valset while the contract checks them against the current valset:
https://github.com/celestiaorg/quantum-gravity-bridge/blob/c61578fc1d05b9a9c607a94ffcfef5424be7ce33/src/QuantumGravityBridge.sol#L344

Thus, we will need to update the relaying mechanism: whenever the relayer finds a new signed valset, it doesn't relay it until all data commitment confirmations have been relayed to the contract, up to a data commitment whose end_block >= valset.Height. Only then does the relayer proceed to relay the new valset.
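To illustrate, here is a minimal Go sketch of the proposed gating logic. All the names (`Relayer`, `NextUnrelayedDataCommitment`, `RelayDataCommitment`, `RelayValset`) are hypothetical, not the actual relayer API, and the boundary condition on `end_block` is one reading of the text above, not a definitive implementation.

```go
package relayer

import "context"

// Hypothetical types standing in for the real QGB relayer structures.
type Valset struct {
	Nonce  uint64
	Height uint64 // Celestia height at which this valset takes effect.
}

type DataCommitment struct {
	Nonce      uint64
	BeginBlock uint64
	EndBlock   uint64
}

// Relayer is a hypothetical interface over the pieces this sketch needs;
// the real relayer exposes different methods.
type Relayer interface {
	NextUnrelayedDataCommitment(ctx context.Context) (*DataCommitment, error)
	RelayDataCommitment(ctx context.Context, dc *DataCommitment) error
	RelayValset(ctx context.Context, vs *Valset) error
}

// processNewValset relays every pending data commitment that ends before the
// new valset's height, and only then relays the valset itself. This way, old
// commitments are always verified against the valset that actually signed them.
func processNewValset(ctx context.Context, r Relayer, vs *Valset) error {
	for {
		dc, err := r.NextUnrelayedDataCommitment(ctx)
		if err != nil {
			return err
		}
		// Stop once the pending commitments have caught up with the valset.
		if dc == nil || dc.EndBlock >= vs.Height {
			break
		}
		if err := r.RelayDataCommitment(ctx, dc); err != nil {
			return err
		}
	}
	// Only now is it safe to update the validator set on the contract.
	return r.RelayValset(ctx, vs)
}
```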

rach-id added the C: QGB label Apr 28, 2022
rach-id self-assigned this Apr 28, 2022

rach-id commented Apr 28, 2022

@evan-forbes What do you think?
I am not sure whether to do this after the worker pool design or before...

evan-forbes commented Apr 28, 2022

This is an important "edge" case that might not get hit under our initial use, so if there is a simple way to do this before the worker pool design, then I'd say we should do that, but if not, then perhaps we could wait.

Do you think that we could get away with not covering this case until the worker pool design? I think the new design would be better at handling cases like this.

rach-id commented Apr 28, 2022

We can, but it would only work in the case where orchestrators are signing commitments really fast and no valset change happened in between.
But yes, we can leave it until after the worker pool design.

rach-id commented Jun 23, 2022

This will be fixed using universal nonces: #464
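For context on why #464 resolves this, here is a rough sketch of a universal-nonce relay loop, under the assumption that valsets and data commitments share a single monotonically increasing attestation nonce. The names (`Attestation`, `UniversalRelayer`, etc.) are hypothetical and not the actual #464 implementation.

```go
package relayer

import "context"

// Attestation is either a valset update or a data commitment; under a
// universal-nonce design both draw from one monotonically increasing sequence.
type Attestation interface{ Nonce() uint64 }

// UniversalRelayer is a hypothetical interface used only for this sketch.
type UniversalRelayer interface {
	LastRelayedNonce(ctx context.Context) (uint64, error)
	GetAttestation(ctx context.Context, nonce uint64) (Attestation, error)
	Relay(ctx context.Context, att Attestation) error
}

// relayLoop relays attestations strictly in nonce order, so the ordering
// between valsets and data commitments is enforced by construction: a data
// commitment signed by an older valset can never be submitted after that
// valset has already been replaced on the contract.
func relayLoop(ctx context.Context, r UniversalRelayer) error {
	for {
		last, err := r.LastRelayedNonce(ctx)
		if err != nil {
			return err
		}
		att, err := r.GetAttestation(ctx, last+1) // valset or data commitment
		if err != nil {
			return err
		}
		if att == nil {
			return nil // fully caught up with the chain
		}
		if err := r.Relay(ctx, att); err != nil {
			return err
		}
	}
}
```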
