Snapshot generation takes too long #23114
Comments
Something seems off there. Then again, your block imports are excruciatingly slow too; 55s for a single block is quite bad. Snapshot generation can only be done while you are not importing blocks, so as long as you're out of sync, it won't be able to progress. How much RAM do you have, and how large a cache allowance are you running with? The less RAM you allot to Geth, the more it needs to push stuff to disk, which causes its database to grow faster. At this point it might be worthwhile to resync. You can keep the …
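For reference, the cache allowance mentioned above is set with geth's --cache flag (a memory budget in MB that geth splits across its internal caches). A minimal sketch of a restart with a larger allowance, assuming a plain shell invocation and otherwise default options:

geth --cache 2048

Per the reasoning above, a larger value keeps more state in memory on an 8GB board and reduces how much gets pushed to disk.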
I ran geth with a 256MB cache; I changed it to 2048 and restarted it. The log now looks like this:
If I end up resyncing with keeping …
@PanosChtz if you do …
Ok thank you, I will leave it running for a few more days before giving up and will update this thread accordingly.
I also noticed that there is no longer an ETA in the log. Again, not sure if this is normal or a bug.
So after letting it run for more than 5 days for a second time, I decided to resync (did not delete ancient db).
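A minimal sketch of that resync-while-keeping-ancients path, assuming geth 1.10.x and the default data directory: the removedb subcommand should prompt separately for the state database and the ancient database, so the ancient (freezer) data can be kept while the state is wiped.

geth removedb
(answer yes for the state database, no for the ancient database)
geth --cache 2048

On restart, geth begins a fresh sync, and keeping the ancient database should let it skip re-downloading the frozen block data.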
When the sync finally reached 100%, it started a "state heal":
which is still going on. Is this normal?
Looks like state heal finished, but the snapshot generation thing started again. Hope this won't take forever this time.
So eventually today my long journey in the snapshot world ended:
So I'm not sure what initially caused the snapshot generation process to stall. Maybe if I had realized sooner that it had stalled, I would have just done the resync and saved two weeks of waiting. I'm closing the issue, but I suggest the developers look into this ETA issue (I have seen crazy ETAs in other online sources where people suggested waiting it out); maybe the log could be made clearer (like a percentage completed for snapshot generation?).
I am in the same boat as you: I have been trying for about a week now to get mainnet to sync using snap sync on different cloud hardware. That ETA is really troublesome, and I wish it were stricter and said "Hardware not supported, exiting..." based on some heuristics, so we would know early on that the sync will never complete.
System information
Geth version: 1.10.4
OS & Version: Linux (Ubuntu 20.04.2 LTS) on Raspberry Pi 8GB
Expected behaviour
Complete pruning in a reasonable time
Actual behaviour
Pruning still runs after one week
Steps to reproduce the behaviour
I was running geth 1.10.2 on my RPi4 with the parameters
--txlookuplimit=0 --snapshot=false
to minimize CPU load. However, since my 2TB SSD is now about 79% full for some reason, I decided to make a snapshot before pruning. So I updated to geth 1.10.4 and took these parameters out, which enabled snapshot generation by default. However, for some reason the snapshot generation seems to take forever; it has been running for over a week now. Is this normal behavior? Has anyone experienced this on an RPi4? I'm afraid that if it takes a month or so, I will be forced to delete all chaindata and resync from scratch, since the disk will eventually become full. Geth log below.
Backtrace
After a few days, the ETA became a negative number, which is very weird. I also noticed that the "at=", "accounts=", "slots=" and "storage=" fields remain the same all the time (not sure if this is normal).
Next day:
Today the ETA is positive again, yet no end in sight:
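For completeness, a note on the step this report was building toward: pruning via the snapshot is a separate offline command that can only run once snapshot generation has finished. A minimal sketch, assuming geth 1.10.x, the default data directory, and that the node has been stopped first:

geth snapshot prune-state

As far as I understand, prune-state needs exclusive access to the database and a fully generated snapshot, so a stalled generation like the one above blocks pruning entirely; on hardware like an RPi4 the prune itself can also take many hours.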