
fast-sync uses too much memory when trying to sync a large number of blocks #407

Closed · albrow opened this issue Sep 12, 2019 · 0 comments
Labels: performance (Related to improving or measuring performance)

albrow (Contributor) commented Sep 12, 2019

I've observed an issue that comes up when starting Mesh for the first time after it has been offline for a while. One of our Mesh nodes was constantly restarting. The last message we see before each restart is "Some blocks have elapsed since last boot...". Here's a screenshot of the logs:

[Screenshot: node logs, Sep 12 2019, 10:25 AM]

Using `docker stats`, I could see that memory usage was steadily climbing until the container was eventually killed by Docker.

[Screenshot: `docker stats` output, Sep 12 2019, 10:26 AM]

I believe this is happening because the "fast-sync" feature in Mesh keeps all event logs in memory while trying to catch up to the latest block. We can solve this issue by either (1) processing the logs in batches instead of trying to keep them all in memory or (2) placing a cap on the maximum number of blocks that Mesh will attempt to fast-sync. If too many blocks have passed, we would be better off starting from scratch and revalidating all existing orders.
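Option (1) can be sketched as follows in Go (Mesh's implementation language). This is only an illustration of the batching idea; `batchBlockRanges` and the batch size are hypothetical names, not Mesh's actual API:

```go
package main

import "fmt"

// batchBlockRanges splits the block range [from, to] into consecutive
// sub-ranges of at most batchSize blocks. Event logs can then be fetched
// and processed one sub-range at a time, so memory usage is bounded by
// the batch size rather than by the total number of elapsed blocks.
func batchBlockRanges(from, to, batchSize int64) [][2]int64 {
	var ranges [][2]int64
	for start := from; start <= to; start += batchSize {
		end := start + batchSize - 1
		if end > to {
			end = to
		}
		ranges = append(ranges, [2]int64{start, end})
	}
	return ranges
}

func main() {
	// e.g. catch up from block 100 to block 450 in batches of 128 blocks
	for _, r := range batchBlockRanges(100, 450, 128) {
		fmt.Printf("fetch and process logs for blocks %d-%d\n", r[0], r[1])
	}
}
```

Option (2) would be a simple guard in front of this loop: if `to - from` exceeds some maximum, skip fast-sync entirely and revalidate all orders from scratch.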

The workaround for now is to manually delete the `db/` folder, causing Mesh to start from scratch instead of trying to use the fast-sync feature.
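Concretely, the workaround looks like this (the location of `db/` relative to your working directory depends on how you run Mesh, so the path here is illustrative):

```shell
# Stop the Mesh node first, then wipe its local database so it
# revalidates all orders from scratch on the next start.
rm -rf db/
```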
