Migration failures in docker #810
Same here, using the docker compose setup the same way as in this repo. Every restart leads to a rollback to the start of the epoch.
db-sync currently only takes ledger snapshots on epoch boundaries.
So it is expected that it only rolls back to the latest epoch boundary. However, restarts should not happen under normal circumstances.
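The rollback behaviour described above can be sketched in a few lines. This is purely illustrative, not db-sync's actual code; the only assumed fact is mainnet's Shelley-era epoch length of 432,000 slots (about 5 days):

```python
# Illustrative sketch: why a restart rolls back to the last epoch boundary
# when ledger snapshots are taken only at epoch boundaries.
SLOTS_PER_EPOCH = 432_000  # mainnet Shelley-era epoch length (~5 days)

def last_snapshot_slot(current_slot: int) -> int:
    """Most recent slot with a snapshot, if snapshots are boundary-only."""
    return (current_slot // SLOTS_PER_EPOCH) * SLOTS_PER_EPOCH

# Restarting mid-epoch means replaying everything since that boundary:
slots_to_replay = 1_000_000 % SLOTS_PER_EPOCH  # slots since last boundary
```

So the later in the epoch the restart happens, the more data has to be replayed, which matches the growing rollbacks reported in this issue.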
@xdzurman If you are not seeing the migration errors, you do not have the same issue.
If I had to choose between a predictable 30-minute wait at the epoch boundary and random outages due to restarts, I would prefer the predictability. How about narrowing the window for inserting rewards to the last 2 days of an epoch? Create two snapshots, one at the boundary and one halfway through the epoch, and insert rewards only in the second half. This would cut the catch-up time of 1 hour per 20k blocks to 30 minutes if we restart on day 5 of 5 of an epoch.
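The arithmetic behind that estimate can be sketched as follows. The block rate and resync speed are the figures quoted in this thread (roughly one block per 20 seconds on mainnet, and about 20k blocks resynced per hour), not measured values:

```python
# Back-of-the-envelope catch-up times using the rates quoted in this thread.
BLOCKS_PER_DAY = 24 * 60 * 60 // 20      # ~4320 blocks/day (1 block / 20 s)
RESYNC_HOURS_PER_BLOCK = 1 / 20_000      # ~1 hour per 20k blocks

def catchup_hours(days_since_snapshot: float) -> float:
    """Hours needed to replay blocks produced since the last snapshot."""
    return days_since_snapshot * BLOCKS_PER_DAY * RESYNC_HOURS_PER_BLOCK

# Boundary-only snapshots, restart on day 5 of a 5-day epoch:
print(round(catchup_hours(5.0), 2))   # ~1.08 hours
# An extra snapshot halfway through the epoch halves the worst case:
print(round(catchup_hours(2.5), 2))   # ~0.54 hours, i.e. ~30 minutes
```

Under these assumed rates, the mid-epoch snapshot exactly halves the worst-case catch-up time, which is the trade-off the comment proposes.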
@MarcelKlammer The first migration error log suggests that there is a database name mismatch. Sorry, but that is up to you to sort out.
That may be what you prefer, but it was not deemed acceptable by most other users of …
I'm using the docker-compose of the graphql package. So if there is some kind of mismatch, it is within the db-sync docker image.
I know nothing about Docker. @rhyslbw ? |
Apparently this has been fixed in #793, but that was merged after the …
So why not tag the new docker image with an actual release number and just push it as …?
@mikaint I found out 9 hours ago that this had been fixed on master. At that time it was 10pm Sunday night in my time zone. Yes, the required fix will be cherry-picked onto the …
Closing this. |
Mainnet, epoch 289
cardano-db-sync 11.0.0 via docker image
It seems that restarting db-sync results in a rollback. The following logs show restarts of the containers at 09:12 and 09:47. Both times db-sync deleted blocks (19k and 5k) and started to sync from the last known state.
Due to db-sync's slow insertion of blocks (30 minutes per 10k blocks), this results in a service outage.
This was not the case for v9 of db-sync, where a restart picked up where it left off.
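A quick estimate of the outage from the figures reported above (the 30 min per 10k blocks rate is the one quoted in this issue; the function itself is just illustrative):

```python
# Estimate the outage implied by the figures in this issue report:
# after a restart, db-sync reinserts deleted blocks at ~10k blocks / 30 min.
MINUTES_PER_10K_BLOCKS = 30

def outage_minutes(deleted_blocks: int) -> float:
    """Approximate downtime while db-sync re-inserts deleted blocks."""
    return deleted_blocks * MINUTES_PER_10K_BLOCKS / 10_000

print(outage_minutes(19_000))  # 57.0 minutes for the larger rollback
print(outage_minutes(5_000))   # 15.0 minutes for the smaller one
```

Roughly an hour of downtime per restart is what makes the rollback behaviour a practical problem here.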