
What is the upper bound of "imported new state entries"? #14647

Closed
sonulrk opened this issue Jun 18, 2017 · 224 comments

@sonulrk

sonulrk commented Jun 18, 2017

System information

Geth version: 1.6.5
OS & Version: Windows 7 x64
geth Command: geth --fast --cache 8192

Expected behaviour

Geth should start in full mode.

Actual behaviour

After nearing the current block, geth is continuously logging "Imported new state entries".

Steps to reproduce the behaviour

It has currently been running for 10 days.

Geth console info

eth.blockNumber
6
eth.syncing
{
currentBlock: 3890742,
highestBlock: 3890893,
knownStates: 17124512,
pulledStates: 17105895,
startingBlock: 3890340
}
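For context, the percentages implied by the eth.syncing numbers above can be computed with plain arithmetic (a sketch runnable in node, using the values exactly as reported; note that knownStates keeps growing during fast sync, so the state percentage overstates real progress):

```javascript
// Sync progress implied by the eth.syncing output above (plain arithmetic,
// runnable in node; these are the values copied from the console).
const syncing = {
  currentBlock: 3890742,
  highestBlock: 3890893,
  knownStates: 17124512,
  pulledStates: 17105895,
  startingBlock: 3890340
};

const blockProgress = (syncing.currentBlock - syncing.startingBlock) /
                      (syncing.highestBlock - syncing.startingBlock);
const stateProgress = syncing.pulledStates / syncing.knownStates;

console.log(`blocks: ${(blockProgress * 100).toFixed(2)}%`); // blocks: 72.69%
console.log(`states: ${(stateProgress * 100).toFixed(2)}%`); // states: 99.89%
```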

Backtrace

INFO [06-18|10:10:31] Imported new state entries count=384 elapsed=22.001ms processed=17118951 pending=24263
INFO [06-18|10:10:32] Imported new state entries count=384 elapsed=33.001ms processed=17119335 pending=23819
INFO [06-18|10:10:33] Imported new state entries count=384 elapsed=111.006ms processed=17119719 pending=23875
INFO [06-18|10:10:34] Imported new state entries count=384 elapsed=131.007ms processed=17120103 pending=23855
INFO [06-18|10:10:35] Imported new state entries count=384 elapsed=116.006ms processed=17120487 pending=23978
INFO [06-18|10:10:36] Imported new state entries count=384 elapsed=134.007ms processed=17120871 pending=24186
INFO [06-18|10:10:38] Imported new state entries count=384 elapsed=305.017ms processed=17121255 pending=27727
INFO [06-18|10:10:42] Imported new state entries count=384 elapsed=448.025ms processed=17121639 pending=33614
INFO [06-18|10:10:46] Imported new state entries count=384 elapsed=441.025ms processed=17122023 pending=39642
INFO [06-18|10:10:48] Imported new state entries count=384 elapsed=44.002ms processed=17122407 pending=39170
INFO [06-18|10:10:52] Imported new state entries count=384 elapsed=427.024ms processed=17122791 pending=45142
INFO [06-18|10:10:55] Imported new state entries count=384 elapsed=473.027ms processed=17123175 pending=51166
INFO [06-18|10:10:58] Imported new state entries count=384 elapsed=448.025ms processed=17123559 pending=57128
INFO [06-18|10:11:01] Imported new state entries count=384 elapsed=444.025ms processed=17123943 pending=63129
INFO [06-18|10:11:04] Imported new state entries count=384 elapsed=441.025ms processed=17124327 pending=69173
INFO [06-18|10:11:04] Imported new state entries count=1 elapsed=0s processed=17124328 pending=69172
INFO [06-18|10:11:07] Imported new state entries count=384 elapsed=442.025ms processed=17124712 pending=75182
INFO [06-18|10:11:10] Imported new state entries count=384 elapsed=470.026ms processed=17125096 pending=81186
INFO [06-18|10:11:11] Imported new state entries count=384 elapsed=335.019ms processed=17125480 pending=81736
INFO [06-18|10:11:14] Imported new state entries count=384 elapsed=440.025ms processed=17125864 pending=87718
INFO [06-18|10:11:15] Imported new state entries count=384 elapsed=140.008ms processed=17126248 pending=87812
INFO [06-18|10:11:16] Imported new state entries count=384 elapsed=31.001ms processed=17126632 pending=87226
INFO [06-18|10:11:18] Imported new state entries count=384 elapsed=88.005ms processed=17127016 pending=87040
INFO [06-18|10:11:19] Imported new state entries count=384 elapsed=39.002ms processed=17127400 pending=86803
INFO [06-18|10:11:20] Imported new state entries count=384 elapsed=36.002ms processed=17127784 pending=86585
INFO [06-18|10:11:23] Imported new state entries count=1 elapsed=0s processed=17127785 pending=86272
INFO [06-18|10:11:23] Imported new state entries count=384 elapsed=1.610s processed=17128169 pending=86271
INFO [06-18|10:11:25] Imported new state entries count=384 elapsed=143.008ms processed=17128553 pending=87792
INFO [06-18|10:11:28] Imported new state entries count=384 elapsed=183.010ms processed=17128937 pending=90117
INFO [06-18|10:11:28] Imported new state entries count=1 elapsed=1ms processed=17128938 pending=90120
INFO [06-18|10:11:28] Imported new state entries count=1 elapsed=0s processed=17128939 pending=90118
INFO [06-18|10:11:29] Imported new state entries count=384 elapsed=102.005ms processed=17129323 pending=90022
INFO [06-18|10:11:30] Imported new state entries count=384 elapsed=184.010ms processed=17129707 pending=92320
INFO [06-18|10:11:32] Imported new state entries count=384 elapsed=185.010ms processed=17130091 pending=94665
INFO [06-18|10:11:34] Imported new state entries count=384 elapsed=187.010ms processed=17130475 pending=97053
INFO [06-18|10:11:36] Imported new state entries count=384 elapsed=194.011ms processed=17130859 pending=99550
INFO [06-18|10:11:38] Imported new state entries count=384 elapsed=183.010ms processed=17131243 pending=101954
INFO [06-18|10:11:40] Imported new state entries count=384 elapsed=202.011ms processed=17131627 pending=104395
INFO [06-18|10:11:42] Imported new state entries count=384 elapsed=196.011ms processed=17132011 pending=106904
INFO [06-18|10:11:44] Imported new state entries count=384 elapsed=186.010ms processed=17132395 pending=109176
INFO [06-18|10:11:47] Imported new state entries count=384 elapsed=184.010ms processed=17132779 pending=111554
INFO [06-18|10:11:47] Imported new state entries count=2 elapsed=184.010ms processed=17132781 pending=111554
INFO [06-18|10:11:48] Imported new state entries count=384 elapsed=34.002ms processed=17133165 pending=110760
INFO [06-18|10:11:50] Imported new state entries count=384 elapsed=193.011ms processed=17133549 pending=113172

@ghost

ghost commented Jun 28, 2017

Yes, exact same effect.
Command is: geth --syncmode=fast --cache=4096 console

@fjl
Contributor

fjl commented Jun 28, 2017

Please try geth v1.6.6.

@n0-m4d

n0-m4d commented Jun 30, 2017

In my case v1.6.6 does not fix it.

@pebwindkraft

Same here; I described the detailed status here:
#14571

@yahortsaryk

yahortsaryk commented Jul 4, 2017

The same for me with --testnet (Ropsten) on macOS. The geth version is 1.6.6.
I'm running:
geth --testnet --syncmode "fast" --rpc --rpcapi db,eth,net,web3,personal --cache=1024 --rpcport 8545 --rpcaddr 127.0.0.1 --rpccorsdomain "*" --bootnodes "enode://20c9ad97c081d63397d7b685a412227a40e23c8bdc6688c6f37e97cfbc22d2b4d1db1510d8f61e6a8866ad7f0e17c02b14182d37ea7c3c8b9c2683aeb6b733a1@52.169.14.227:30303,enode://6ce05930c72abc632c58e2e4324f7c7ea478cec0ed4fa2528982cf34483094e9cbc9216e7aa349691242576d552a2a56aaeae426c5303ded677ce455ba1acd9d@13.84.180.240:30303"
and when it comes near the latest blocks (at https://ropsten.etherscan.io/) it continuously "imports new state entries". If I restart with the command above it fetches the most recent blocks but never reaches the latest ones.

@porfavorite

porfavorite commented Jul 8, 2017

Same here... geth is 1.6.6, running geth --testnet --fast --cache=1024

@adhicl

adhicl commented Jul 11, 2017

I am in the same state after a week of using geth --fast --cache=1024. Does anyone know what I should do now?

@thecopy

thecopy commented Jul 17, 2017

Same situation on testnet. OSX, geth 1.6.7-stable-ab5646c5. Started with --fast and --cache 1500

@sonulrk
Author

sonulrk commented Jul 17, 2017

What I could understand from this geth fast-mode problem is that:

  1. You must use a quad-core processor with 4 GB or more of RAM.
  2. You must use an SSD instead of an HDD.
  3. Your internet connection must be reliable and at least 2 Mbps.

If all of the above are met, then you should try geth. Using geth in full mode you would most probably be synced in a week or two at most. In fast mode it depends on your luck: you are most probably synced in 2-3 days, or never.

@clowestab

Encountering similar issues. Geth's --fast sync does not sync at all, let alone fast.

Using 1.6.7-stable on Ubuntu; it gets within 100 blocks of the head and then endlessly imports state entries.

@alfkors

alfkors commented Sep 1, 2017

@clowestab did you ever get it to sync?

@hlagagoalga

Having the same issue. Also using Ubuntu, and geth syncs endlessly.

@mboehler

mboehler commented Sep 7, 2017

I am having the same issue. Has anyone been able to solve this problem?

@brennino

brennino commented Sep 8, 2017

Same problem here. I started geth with a 1024 cache and fast syncing three days ago, and since reaching the last block number one day ago it has never left the "Imported new state entries" state.

During this "state entries" phase, however, my balance changed from 0 to the correct value, and the reported block number changed from 0 to a number near the highestBlock value.

This is geth version on my ubuntu machine:

Geth
Version: 1.6.7-stable
Git Commit: ab5646c
Architecture: amd64
Protocol Versions: [63 62]
Network Id: 1
Go Version: go1.8.1
Operating System: linux
GOPATH=
GOROOT=/usr/lib/go-1.8

and this is the output now, every half second:
...
INFO [09-08|10:05:34] Imported new state entries count=11 flushed=7 elapsed=216.807ms processed=25170515 pending=1852 retry=1 duplicate=5293 unexpected=6261
INFO [09-08|10:05:34] Imported new state entries count=2 flushed=5 elapsed=521.989µs processed=25170517 pending=1847 retry=0 duplicate=5293 unexpected=6261
INFO [09-08|10:05:34] Imported new state entries count=7 flushed=1 elapsed=263.738ms processed=25170524 pending=1862 retry=2 duplicate=5293 unexpected=6261
INFO [09-08|10:05:34] Imported new state entries count=2 flushed=4 elapsed=6.934ms processed=25170526 pending=1859 retry=0 duplicate=5293 unexpected=6261
INFO [09-08|10:05:34] Imported new state entries count=1 flushed=0 elapsed=27.828ms processed=25170527 pending=1861 retry=1 duplicate=5293 unexpected=6261
INFO [09-08|10:05:34] Imported new state entries count=5 flushed=5 elapsed=19.440ms processed=25170532 pending=1860 retry=0 duplicate=5293 unexpected=6261
...

this is the result of eth.syncing and other geth tools:

eth.syncing
{
currentBlock: 4249131,
highestBlock: 4250814,
knownStates: 25172364,
pulledStates: 25170517,
startingBlock: 0
}

net.peerCount
25

eth.blockNumber
4244762

The ether balance of my wallet is not 0; it is reported correctly and updated to the last transaction, made one day ago.

How long is this state supposed to last?

@brennino

brennino commented Sep 8, 2017

One more piece of information: my .ethereum folder is currently 41 GB, maybe a little too large for a proper fast sync.

@sonulrk
Author

sonulrk commented Sep 8, 2017

I think never.

@brennino

brennino commented Sep 8, 2017

UPDATE: I stopped geth with CTRL-D and reopened it. Now it seems the "Imported new state entries" phase has halted and geth is working correctly, importing only new blocks.
It seems the problem is that fast sync continues downloading states forever and is not aware that the blockchain is already in a correct, consistent state.

So, until the issue is solved, this is my advice:

  • Start geth with fast sync.
  • Wait until eth.syncing reports a currentBlock near the highestBlock.
  • After that, run the command eth.blockNumber every few hours ... at first it will probably return 0, but keep waiting.
  • When eth.blockNumber returns a value different from 0 and near the currentBlock value, close geth (with CTRL-D or the "exit" command, so it shuts down correctly and under control) and wait until the program exits cleanly and your operating system shell comes back.
  • Reopen geth with the fast option. You will see the warning "Blockchain not empty, fast sync disabled"... this is the correct behavior; it is telling you that fast sync has finished.
  • Now the "Imported new state entries" messages disappear and you just see messages like this every few seconds:
    INFO [09-08|21:12:06] Imported new chain segment blocks=1 txs=131 mgas=5.379 elapsed=13.855s mgasps=0.388 number=4251422 hash=697652…d85ce8

Now your problem is solved, and geth probably avoids downloading a lot of states, reducing the hard disk space it takes as well.

This is valid until the issue is solved and geth becomes aware on its own that the blockchain is correctly synced.

This is my experience; maybe it works, maybe not. For me it worked.
Hope this helps,
Marco
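A minimal sketch of the decision rule behind these steps (a hypothetical helper, not a geth API; the 100-block tolerance is an illustrative guess):

```javascript
// Hypothetical helper mirroring the manual procedure above: only stop and
// restart geth once eth.blockNumber is non-zero and close to the highest
// known block. The tolerance is an illustrative guess, not from geth.
function safeToRestart(blockNumber, syncing, tolerance = 100) {
  if (blockNumber === 0) return false; // state download has not finished yet
  if (!syncing) return true;           // eth.syncing === false: fully synced
  return syncing.highestBlock - blockNumber <= tolerance;
}

// With numbers like those reported earlier in the thread:
console.log(safeToRestart(0, { highestBlock: 4250814 }));       // false
console.log(safeToRestart(4250800, { highestBlock: 4250814 })); // true
```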

@sonulrk
Author

sonulrk commented Sep 8, 2017

Congrats, but I ran it for more than 8 hours and eth.blockNumber always showed 6. I had to switch to parity to sync the blockchain.
Edit: Fast sync works only on the first run; on consecutive runs you have to run in full mode, or geth automatically gives the warning "Blockchain not empty, fast sync disabled" and continues in full mode.

@brennino

brennino commented Sep 9, 2017

I think you just have to wait until eth.blockNumber shows a number near currentBlock before closing it and starting it again.
I forgot to mention that you should remove old blockchain data before starting the procedure I described.
Yes, fast sync can only be used with an empty blockchain, and only the first time.
The command to clear the whole blockchain is "geth removedb" in the operating system shell; it removes everything that was downloaded before.
After that you are able to start a fast sync again from an empty blockchain and follow the procedure from my previous post, hoping it works.

I'm not a geth developer, I just use it, so I can't solve the problem or tell you what it does internally, or why the command returns "6", or what you have to do. But it seems that it downloads a lot of states and, when it finds the head state, it is able to build the full blockchain. For me this happened when eth.syncing showed a knownStates near 20,000,000, but it can happen earlier or later.

During my test, after fast sync finished downloading all the block headers, it took more than 24 hours for eth.blockNumber to reach 4244762. I ran geth on a server with 100 Mb/s of bandwidth.

When it showed "0" I let it do its work, and after 24 hours the command returned 4244762. I haven't run the command in between, so I don't know whether it returns other numbers before reaching the last block.

I have never used parity, but it seems good and uses less disk space than geth, so it is worth a try.
Maybe some geth dev can make things clearer.

@fjl
Contributor

fjl commented Sep 10, 2017

We believe this is fixed on the master branch. Fast sync takes a while (especially with the mainnet), but will terminate eventually.

@vincentvc

@brennino
My eth.blockNumber shows 0 after several days of syncing. I am wondering whether the fast sync will fail if I stop and restart the sync process several times in the middle.

@manicprogrammer
Contributor

manicprogrammer commented Sep 12, 2017

@vincentvc fast sync in geth only works when the database is empty, so you get one chance to fast sync and it will be full sync after that. [STATED IN ERROR; THIS IS INCORRECT, AS POINTED OUT BY FJL: so yes, if you stop and restart any time before that first fast sync finishes, you won't do a fast sync from that point.] My experience with the scenario listed here was twofold: I used the latest build off of master, and I made sure my database was on an SSD. It doesn't seem like the SSD vs. HDD distinction should matter so much, but in my experience, until I put it on the SSD I could never get that first sync to finish. Not to say that is true for everyone; just my experience.

@skarn01

skarn01 commented Sep 13, 2017

I'm having the same problem: continuous imported state entries. I'm currently trying what @brennino suggests and will come back with results later... currently 350k states processed.

Here is some info:

> eth.syncing
{
  currentBlock: 4269853,
  highestBlock: 4270000,
  knownStates: 357664,
  pulledStates: 348163,
  startingBlock: 4268019
}

net.peerCount 10

eth.blockNumber 0
Update: almost 24 hours later, here are the numbers:
blockNumber: 0

eth.syncing
{
currentBlock: 4270728,
highestBlock: 4270793,
knownStates: 6879452,
pulledStates: 6875584,
startingBlock: 4268019
}

Imported state entries are still coming in; I'll check back tomorrow...
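For what it's worth, the two snapshots imply a rough state-download rate (plain arithmetic on the reported pulledStates values; the 24-hour gap is approximate):

```javascript
// Rough state-download rate implied by the two eth.syncing snapshots above.
const firstPulled = 348163;   // pulledStates at the first check
const laterPulled = 6875584;  // pulledStates roughly 24 hours later
const hoursElapsed = 24;      // "almost 24h later", so only approximate
const ratePerHour = (laterPulled - firstPulled) / hoursElapsed;
console.log(Math.round(ratePerHour)); // ≈ 272k entries per hour
```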

@brennino

Hi @skarn01, I don't know if this happens only to me, but when I start fast sync with an empty blockchain, startingBlock always shows 0. You can see my eth.syncing in my previous post, which I repeat here:

eth.syncing
{
currentBlock: 4249131,
highestBlock: 4250814,
knownStates: 25172364,
pulledStates: 25170517,
startingBlock: 0
}

Maybe something went wrong during your fast sync, or you closed geth before eth.blockNumber showed a number near the last block. Blockchain sync is really time consuming, and you must not stop fast sync until it finishes, or until eth.blockNumber != 0 and near highestBlock.

What can I say to help you... it seems you are not starting fast sync from an empty blockchain, so if I were you I would start again from the beginning.

If you don't want to start again (and I can understand you; I went crazy for days before getting any results...), you have these options:

1 - If you just want to make a transaction (because ether is falling in price now and you are in a panic), you can use light sync, make your transaction and, when you stop praying and shouting in your room for a price peak, calmly try to sync your blockchain with fast sync again.
2 - Wait two more days in your current situation and see if something changes.

About point 1, I think light sync is not an experimental feature any more (but maybe someone else can confirm)... and I succeeded in making a transaction with a light-synced blockchain without problems.
If you want to start again later with point 2, you can close geth (because your situation is already corrupted) and rename your blockchain directory .ethereum/geth. If instead you are OK with clearing your current blockchain, just use geth removedb in the operating system shell.
After that, to start light sync (which is different from fast sync!), I started geth with the command:
geth --light --cache=1024
and waited just 2 or 3 hours to download a 600 MB light blockchain. After that you can make your transaction. Again, this is just my experience, shared to help people; no responsibility if you lose your ether.
Hope it helps
Marco

@skarn01

skarn01 commented Sep 15, 2017

Hi @brennino, you're right, it's strange that my starting block is not 0, as I haven't stopped the geth daemon...

What I want to do is develop my own services using the chain (I got experience creating an ICO for my boss, and now I have some interest in this technology ^^), so I don't worry about the money I currently have here; there is currently none, haha.

I'll try the --light option after geth removedb. Does --light still give me the ability to work with the chain and see full blocks after the first sync?

Thank you, and I'll come back with an update on my situation.

@holiman
Contributor

holiman commented Aug 27, 2020

@sjlnk it's because you've stopped and restarted sync several times. Or so I presume, since it's still Initializing fast sync bloom. Every time you restart, geth goes into a mode where, while importing state, it also reads the entire db to create a bloom filter of the existing states, which slows down sync. Also, every time you restart, you choose a new pivot point to sync to, so there is more state to download.

@karalabe just merged a great PR that makes fast sync a lot more stable and less prone to crashing due to running out of memory. I would recommend updating to the latest master and either starting from scratch or continuing (yes, with another restart; it can't be helped).

@holiman
Contributor

holiman commented Aug 27, 2020

@sjlnk actually, looking at it again, I suspect you have bad read IO. It has only gone through 29M entries for the bloom filter in 16+ hours, which is very slow indeed. You might want to check that your disk is good. It's not an HDD, is it? And if it's an SSD, those can degrade pretty badly with time.

@cncr04s

cncr04s commented Aug 27, 2020

I synced a 3-month gap since my last sync, and it took 2.5 weeks to complete. It's an array of six 10k RPM SAS drives in RAID 5; I'm adding 2 more soon. It took about 2 TB or more of write volume. It really is an SSD killer.

@sjlnk

sjlnk commented Aug 27, 2020

@sjlnk it's because you've stopped and restarted sync several times. Or so I presume, since it's still Initializing fast sync bloom. Every time you restart, geth goes into a mode where, while importing state, it also reads the entire db to create a bloom filter of the existing states, which slows down sync. Also, every time you restart, you choose a new pivot point to sync to, so there is more state to download.

@karalabe just merged a great PR that makes fast sync a lot more stable and less prone to crashing due to running out of memory. I would recommend updating to the latest master and either starting from scratch or continuing (yes, with another restart; it can't be helped).

Thanks for clearing that up! Actually, my sync completed in 10 hours on a powerful NVMe-enabled AWS instance. I just wanted to know why some people have to download more states than others. I never restarted the process, so I'm assuming the other people here did, and that's why they have so many more states to download.

@sssubik

sssubik commented Aug 28, 2020

Hey @holiman. The logs are from my VPS server. It is an SSD. Does this mean that my server is not powerful enough, i.e. that it has very low I/O speed? Do I need to upgrade it?

@sssubik

sssubik commented Aug 29, 2020

Hey @sjlnk. Could you give the specifications of your server? I too have a good server, but my sync has not completed in about a month. What is your disk I/O rate, specifically?

@Neurone
Contributor

Neurone commented Aug 30, 2020

I'm done, with the counter at 652299180. Please note that in my case the processed counter at the end was about ~12M entries larger than the real number of entries stored in LevelDB. I restarted geth a lot of times during the fast sync over these days.

INFO [08-30|10:01:52.403] Imported new state entries               count=1605 elapsed=22.003ms   processed=652296924 pending=3467   retry=0    duplicate=41127 unexpected=121037
INFO [08-30|10:01:53.302] Imported new state entries               count=1157 elapsed=11.998ms   processed=652298081 pending=2059   retry=0    duplicate=41127 unexpected=121037
INFO [08-30|10:01:54.214] Imported new state entries               count=1001 elapsed=14.992ms   processed=652299082 pending=348    retry=0    duplicate=41127 unexpected=121037
INFO [08-30|10:01:54.523] Imported new state entries               count=98   elapsed=2.837ms    processed=652299180 pending=0      retry=0    duplicate=41127 unexpected=121037
INFO [08-30|10:01:54.535] Imported new block receipts              count=1    elapsed=2ms        number=10760885 hash="e5569e…7ead3c" age=29m44s   size=101.58KiB
INFO [08-30|10:01:54.545] Committed new head block                 number=10760885 hash="e5569e…7ead3c"
INFO [08-30|10:01:54.613] Deallocated fast sync bloom              items=639248389 errorrate=0.001
INFO [08-30|10:02:03.025] Imported new chain segment               blocks=12 txs=2533 mgas=149.011 elapsed=8.402s     mgasps=17.735 number=10760897 hash="f8d397…0dafd0" age=26m49s   dirty=18.46MiB
INFO [08-30|10:02:11.409] Imported new chain segment               blocks=14 txs=3369 mgas=174.232 elapsed=8.384s     mgasps=20.780 number=10760911 hash="1adada…67ecc0" age=21m55s   dirty=41.57MiB
INFO [08-30|10:02:19.956] Imported new chain segment               blocks=18 txs=3487 mgas=210.546 elapsed=8.546s     mgasps=24.636 number=10760929 hash="97dcea…9d023b" age=16m20s   dirty=68.14MiB
INFO [08-30|10:02:27.303] New local node record                    seq=39281 id=82090fbe6ea27663 ip=127.0.0.1     udp=30303 tcp=30303
INFO [08-30|10:02:27.389] New local node record                    seq=39282 id=82090fbe6ea27663 ip=93.40.103.165 udp=30303 tcp=30303
INFO [08-30|10:02:28.037] Imported new chain segment               blocks=17 txs=3815 mgas=209.530 elapsed=8.081s     mgasps=25.927 number=10760946 hash="4165bb…e09268" age=11m39s   dirty=94.33MiB
INFO [08-30|10:02:36.621] Imported new chain segment               blocks=12 txs=2487 mgas=136.747 elapsed=8.584s     mgasps=15.930 number=10760958 hash="c59ba3…d33c9e" age=8m32s    dirty=111.89MiB
INFO [08-30|10:02:40.190] New local node record                    seq=39283 id=82090fbe6ea27663 ip=127.0.0.1     udp=30303 tcp=30303
INFO [08-30|10:02:45.795] New local node record                    seq=39284 id=82090fbe6ea27663 ip=93.40.103.165 udp=30303 tcp=30303
INFO [08-30|10:02:45.895] Imported new chain segment               blocks=10 txs=2042 mgas=120.584 elapsed=9.273s     mgasps=13.003 number=10760968 hash="82b2bd…ea0969" age=6m43s    dirty=126.93MiB
INFO [08-30|10:02:46.400] Deep froze chain segment                 blocks=30001 elapsed=18.624s    number=10057008 hash="ffaddb…05d9b1"
INFO [08-30|10:02:54.190] Imported new chain segment               blocks=13    txs=2915 mgas=161.846 elapsed=8.295s     mgasps=19.511 number=10760981 hash="e653ff…70529a" age=3m40s    dirty=146.58MiB
INFO [08-30|10:03:00.113] Imported new chain segment               blocks=7     txs=1296 mgas=87.019  elapsed=5.923s     mgasps=14.691 number=10760988 hash="a655a4…7c6ca5" age=1m31s    dirty=156.69MiB
INFO [08-30|10:03:00.126] Imported new block headers               count=1    elapsed=1m4.052s   number=10760989 hash="31df12…bc9465" age=1m30s
INFO [08-30|10:03:00.139] Imported new block headers               count=1    elapsed=4.998ms    number=10760990 hash="04c374…9a92dc" age=1m27s
INFO [08-30|10:03:00.153] Imported new block headers               count=3    elapsed=6.000ms    number=10760993 hash="c6fffb…265cae"
INFO [08-30|10:03:00.225] Downloader queue stats                   receiptTasks=0    blockTasks=0    itemSize=221.62KiB throttle=296
INFO [08-30|10:03:00.997] Imported new chain segment               blocks=1     txs=235  mgas=12.392  elapsed=763.005ms  mgasps=16.241 number=10760989 hash="31df12…bc9465" age=1m30s    dirty=158.37MiB
INFO [08-30|10:03:01.008] Imported new block headers               count=1    elapsed=572.995ms  number=10760994 hash="e3afed…f9afef"
INFO [08-30|10:03:03.377] Imported new chain segment               blocks=1     txs=200  mgas=12.423  elapsed=2.361s     mgasps=5.261  number=10760990 hash="04c374…9a92dc" age=1m30s    dirty=160.05MiB
INFO [08-30|10:03:04.270] Deep froze chain segment                 blocks=30001 elapsed=17.860s    number=10087009 hash="e2f6c6…fe7f20"
WARN [08-30|10:03:07.574] Fast syncing, discarded propagated block number=10760994 hash="e3afed…f9afef"
INFO [08-30|10:03:08.411] Imported new chain segment               blocks=4     txs=666  mgas=49.755  elapsed=5.023s     mgasps=9.905  number=10760994 hash="e3afed…f9afef" dirty=165.73MiB
INFO [08-30|10:03:08.422] Fast sync complete, auto disabling

Detailed DB stats

> geth --datadir s:\Ethereum inspect
INFO [08-30|12:49:32.043] Maximum peer count                       ETH=50 LES=0 total=50
INFO [08-30|12:49:32.388] Set global gas cap                       cap=25000000
INFO [08-30|12:49:32.393] Allocated cache and file handles         database=s:\Ethereum\geth\chaindata cache=512.00MiB handles=8192
INFO [08-30|12:49:34.040] Opened ancient database                  database=s:\Ethereum\geth\chaindata\ancient
INFO [08-30|12:49:34.074] Disk storage enabled for ethash caches   dir=s:\Ethereum\geth\ethash count=3
INFO [08-30|12:49:34.079] Disk storage enabled for ethash DAGs     dir=C:\Users\giuse\AppData\Local\Ethash count=2
INFO [08-30|12:49:34.090] Loaded most recent local header          number=10761038 hash="95d315…aac7a9" td=17106408363816477706008 age=2h38m10s
INFO [08-30|12:49:34.099] Loaded most recent local full block      number=10760905 hash="a638f7…ecd63c" td=17106043165864029076711 age=3h10m31s
INFO [08-30|12:49:34.106] Loaded most recent local fast block      number=10761038 hash="95d315…aac7a9" td=17106408363816477706008 age=2h38m10s
INFO [08-30|12:49:34.107] Deep froze chain segment                 blocks=18 elapsed=33.995ms number=10670905 hash="8c3832…cb9018"
INFO [08-30|12:49:34.115] Loaded last fast-sync pivot marker       number=10760885
INFO [08-30|12:49:42.130] Inspecting database                      count=6482000 elapsed=8.001s
...
INFO [08-30|13:22:16.661] Counting ancient database receipts       blocknumber=10670906 percentage=100 elapsed=32m42.533s
+-----------------+--------------------+------------+-----------+
|    DATABASE     |      CATEGORY      |    SIZE    |   ITEMS   |
+-----------------+--------------------+------------+-----------+
| Key-Value store | Headers            | 55.00 MiB  |    100554 |
| Key-Value store | Bodies             | 3.64 GiB   |     98505 |
| Key-Value store | Receipts           | 4.20 GiB   |  17751843 |
| Key-Value store | Difficulties       | 6.25 MiB   |    110238 |
| Key-Value store | Block number->hash | 5.16 MiB   |    110222 |
| Key-Value store | Block hash->number | 420.76 MiB |  10761043 |
| Key-Value store | Transaction index  | 27.44 GiB  | 818569194 |
| Key-Value store | Bloombit index     | 1.64 GiB   |   5380096 |
| Key-Value store | Contract codes     | 37.33 MiB  |      6636 |
| Key-Value store | Trie nodes         | 74.51 GiB  | 639023322 |
| Key-Value store | Trie preimages     | 1.07 MiB   |     17051 |
| Key-Value store | Account snapshot   | 0.00 B     |         0 |
| Key-Value store | Storage snapshot   | 0.00 B     |         0 |
| Key-Value store | Clique snapshots   | 0.00 B     |         0 |
| Key-Value store | Singleton metadata | 151.00 B   |         5 |
| Ancient store   | Headers            | 4.51 GiB   |  10670906 |
| Ancient store   | Bodies             | 97.85 GiB  |  10670906 |
| Ancient store   | Receipt lists      | 44.38 GiB  |  10670906 |
| Ancient store   | └ counted receipts | --         | 802441861 |
| Ancient store   | Difficulties       | 166.01 MiB |  10670906 |
| Ancient store   | Block number->hash | 386.71 MiB |  10670906 |
| Light client    | CHT trie nodes     | 0.00 B     |         0 |
| Light client    | Bloom trie nodes   | 0.00 B     |         0 |
+-----------------+--------------------+------------+-----------+
|                         TOTAL        | 259.22 GIB |           |
+-----------------+--------------------+------------+-----------+
ERROR[08-30|13:22:16.783] Database contains unaccounted data       size=121.08KiB count=2632

@sssubik

sssubik commented Aug 31, 2020

Hey @Neurone, how many days does the full sync take on average? How can I know how close I am? I have been syncing for about a month now...

@sjlnk

sjlnk commented Aug 31, 2020

Hey @sjlnk. Could you give the specifications of your server? I too have a good server, but my sync has not completed in about a month. What is your disk I/O rate, specifically?

I used i3.xlarge on Amazon EC2.

@splix

splix commented Aug 31, 2020

Finished a fast sync today (block 10770930) with:

processed=608439377

(I was running it on a work machine during the daytime, so the total time is irrelevant in my case)

@Neurone
Contributor

Neurone commented Aug 31, 2020

Hey @Neurone, how many days does the full sync take on average? How can I know how close I am? I have been syncing for about a month now...

It's not easy to say, because I tested many things during the sync process; I restarted a lot of times (40~50 times) and moved the files around between different systems. In any case, I searched through all the logs I saved between restarts (not all of them) and created the summary below, just to give an idea.

Most of the time is spent on the fast sync of state entries; block headers are downloaded within a few hours in any case. So you should check your stats and calculate your state-entry download rate per hour: if it's high enough, I suggest you delete the database - not the ancient store - and download the state entries again from scratch.

If your rate is like mine at the end (~8.2M entries per hour) you should be able to reach the current state from scratch in ~3 days.

| SYSTEM / OS | RAM (GB) | DISK | HOURS ONLINE | RESTARTS | STATE ENTRIES REACHED | STATE ENTRIES PER HOUR | TIME TO INIT FAST SYNC BLOOM WHEN RESTARTED |
|---|---|---|---|---|---|---|---|
| Rock64 / Armbian 20.02.1 Bionic | 4 | HDD | ~1,800 | ~40 | 469,380,958 | ~100k (decreasing over time; mean value over the last 26 days) | 3h16m29 |
| Desktop / Ubuntu 20.04 (WSL2 on Windows 10) | 16 | SSD | 134 | 1 | 578,598,409 | ~815k | 2h38m14 |
| Desktop / Windows 10 | 16 | SSD | 9 | 0 | 652,299,180 | ~8.2M | 30m28 |
  • Desktop:
    • Processor: Intel i7-3820 3.6Ghz (2012)
    • SSD: Crucial CT525MX300SSD1 (2016)
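As a rough illustration of the rate check suggested above, the `processed=` counters can be scraped out of a geth log and turned into an entries-per-hour figure. This is a minimal sketch, not part of geth: the log path, the `state_entry_rate` name, and the assumed year are hypothetical, and the line format is taken from the logs quoted in this thread.

```python
# Hypothetical helper: estimate "Imported new state entries" throughput from a geth log.
# Assumes the log line format quoted in this thread (timestamps lack a year).
import re
from datetime import datetime

LINE = re.compile(
    r"INFO \[(\d\d-\d\d)\|(\d\d:\d\d:\d\d)(?:\.\d+)?\]\s+"
    r"Imported new state entries.*?processed=(\d+)"
)

def state_entry_rate(log_path, year=2020):
    """Entries per hour between the first and last matching line (None if < 2 samples)."""
    samples = []
    with open(log_path) as f:
        for line in f:
            m = LINE.search(line)
            if m:
                ts = datetime.strptime(f"{year}-{m.group(1)} {m.group(2)}",
                                       "%Y-%m-%d %H:%M:%S")
                samples.append((ts, int(m.group(3))))
    if len(samples) < 2:
        return None
    (t0, p0), (t1, p1) = samples[0], samples[-1]
    hours = (t1 - t0).total_seconds() / 3600
    return (p1 - p0) / hours if hours else None
```

For scale: at the ~815k/hour of the Ubuntu row above, a 600M-entry state would take roughly 30 days; at ~8.2M/hour, roughly 3 days.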

@Duncan-Brain

> So you should check your stats and calculate your state-entry download rate per hour: if it's high enough, I suggest you delete the database - not the ancient store - and download the state entries again from scratch.

Hi @Neurone, thanks for adding a few bits of your data. I am trying to figure out whether I have a high enough rate; currently I'm seeing about 600k per hour on an 8 GB Pi 4 with a 1 TB Samsung SSD. It has been hard to find info on what a high enough rate might be. I took a guessed target of 600M total entries and set myself moving goalposts: a low estimate of 3,000 state changes per block and a high estimate of 6,000. That would put me at needing 720k-1.44M entries per hour just to keep up with the chain's growth. What rate do you think is reasonable?

I have been considering deleting the chain and starting again, in case I have just fallen too far behind. Also, do you know whether the removedb command would keep the ancient store if I specify the datadir, even though they are both in the chaindata folder?

Thanks!
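The 720k-1.44M/hour range in the comment above follows from simple arithmetic, assuming mainnet's roughly 4 blocks per minute (an approximation not stated in the thread):

```python
# Back-of-envelope: state entries per hour needed just to match chain growth.
# Assumption: ~4 new mainnet blocks per minute (~240 per hour).
BLOCKS_PER_HOUR = 4 * 60

def required_rate(states_per_block):
    """Entries/hour needed to keep pace with new blocks alone."""
    return states_per_block * BLOCKS_PER_HOUR

print(required_rate(3000), required_rate(6000))  # 720000 1440000
```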

@sirnicolas21

sirnicolas21 commented Nov 10, 2020

> So you should check your stats and calculate your state-entry download rate per hour: if it's high enough, I suggest you delete the database - not the ancient store - and download the state entries again from scratch.
>
> Hi @Neurone, thanks for adding a few bits of your data. I am trying to figure out whether I have a high enough rate; currently I'm seeing about 600k per hour on an 8 GB Pi 4 with a 1 TB Samsung SSD. It has been hard to find info on what a high enough rate might be. I took a guessed target of 600M total entries and set myself moving goalposts: a low estimate of 3,000 state changes per block and a high estimate of 6,000. That would put me at needing 720k-1.44M entries per hour just to keep up with the chain's growth. What rate do you think is reasonable?
>
> I have been considering deleting the chain and starting again, in case I have just fallen too far behind. Also, do you know whether the removedb command would keep the ancient store if I specify the datadir, even though they are both in the chaindata folder?
>
> Thanks!

```
INFO [11-10|15:05:14.458] Imported new state entries count=384 elapsed="11.648µs" processed=501571121 pending=49648 trieretry=0 coderetry=0 duplicate=15215 unexpected=73697
INFO [11-10|15:05:19.526] Imported new block headers count=1 elapsed=242.947ms number=11229982 hash="aff1eb…616e67" age=1m18s
INFO [11-10|15:05:22.529] Imported new state entries count=384 elapsed="8.352µs" processed=501571505 pending=50058 trieretry=0 coderetry=0 duplicate=15215 unexpected=73697
INFO [11-10|15:05:30.183] Imported new state entries count=384 elapsed="180.219µs" processed=501571889 pending=50553 trieretry=0 coderetry=0 duplicate=15215 unexpected=73697
INFO [11-10|15:05:43.575] Imported new state entries count=384 elapsed="288.309µs" processed=501572273 pending=50935 trieretry=4 coderetry=0 duplicate=15215 unexpected=73697
INFO [11-10|15:05:51.439] Imported new state entries count=386 elapsed="190.904µs" processed=501572659 pending=51429 trieretry=0 coderetry=0 duplicate=15215 unexpected=73697
INFO [11-10|15:05:59.872] Imported new state entries count=384 elapsed="8.074µs" processed=501573043 pending=51883 trieretry=0 coderetry=0 duplicate=15215 unexpected=73697
INFO [11-10|15:06:11.695] Imported new state entries count=384 elapsed="212.496µs" processed=501573427 pending=52268 trieretry=0 coderetry=0 duplicate=15215 unexpected=73697
INFO [11-10|15:06:19.745] Imported new block headers count=1 elapsed=568.387ms number=11229983 hash="d10a7b…22dbf3" age=2m9s
INFO [11-10|15:06:20.106] Downloader queue stats receiptTasks=0 blockTasks=0 itemSize=216.91KiB throttle=303
INFO [11-10|15:06:22.905] Imported new state entries count=384 elapsed="17.499µs" processed=501573811 pending=52654 trieretry=0 coderetry=0 duplicate=15215 unexpected=73697
INFO [11-10|15:06:25.797] Imported new block headers count=1 elapsed=197.033ms number=11229984 hash="624d1f…04161a" age=1m25s
INFO [11-10|15:06:33.955] Imported new state entries count=384 elapsed="8.407µs" processed=501574195 pending=53041 trieretry=0 coderetry=0 duplicate=15215 unexpected=73697
INFO [11-10|15:06:38.346] Imported new block headers count=1 elapsed=41.740ms number=11229985 hash="530417…f68043" age=1m20s
INFO [11-10|15:06:44.704] Imported new state entries count=384 elapsed="8.907µs" processed=501574579 pending=53428 trieretry=0 coderetry=0 duplicate=15215 unexpected=73697
INFO [11-10|15:06:47.901] Imported new block headers count=1 elapsed=31.135ms number=11229986 hash="11688f…0590b5"
INFO [11-10|15:06:51.181] Imported new block headers count=1 elapsed=20.278ms number=11229987 hash="2e93db…b5fa93"
INFO [11-10|15:06:55.595] Imported new state entries count=384 elapsed="163.312µs" processed=501574963 pending=53811 trieretry=0 coderetry=0 duplicate=15215 unexpected=73697
INFO [11-10|15:07:05.701] Imported new state entries count=384 elapsed="16.314µs" processed=501575347 pending=54197 trieretry=0 coderetry=0 duplicate=15215 unexpected=73697
INFO [11-10|15:07:13.349] Imported new block headers count=1 elapsed=27.639ms number=11229988 hash="9b7a99…8cd706"
```
@Duncan-Brain
This is a sample from my Raspberry Pi 4 (4 GB RAM) with a Samsung T5 SSD.
It says it has been running for 4 days, but I think I restarted once due to a power failure, so it should be 4-5 days.
Note that it is running a full 64-bit Raspbian.

@Duncan-Brain

Duncan-Brain commented Nov 10, 2020

@sirnicolas21 I am sorry, I am confused by your response here. What is your rate? It looks to me like you are in the state-trie import phase and still about ~80 blocks behind? I have been running for 15 days, 12 of them in the state-trie phase. I have been upping my --cache allocation to try to increase my rate, as described above. I can see you have ~196M fewer state entries than I do, so it seems like you may have done something right, or still have a ways to go. What geth options did you specify? Thanks!

@sirnicolas21

> @sirnicolas21 I am sorry, I am confused by your response here. What is your rate? It looks to me like you are in the state-trie import phase and still about ~80 blocks behind? I have been running for 15 days, 12 of them in the state-trie phase. I have been upping my --cache allocation to try to increase my rate, as described above. I can see you have ~196M fewer state entries than I do, so it seems like you may have done something right, or still have a ways to go. What geth options did you specify? Thanks!

```
ps -ef | grep geth
pi        1015   757 39 Nov05 pts/1    2-00:59:02 geth --datadir /data/nodes/geth --cache 1024
```

@Duncan-Brain I only configured the cache on geth, nothing else ("--cache 1024").

  • make sure your disk is formatted as ext4, not NTFS
  • I also had to enable swap, since 4 GB is not much RAM; I doubt you need it, but give it a try
  • make sure you are running 64-bit Raspbian, not 32-bit; the default installation is 32-bit and that will fail badly
  • make sure the disk is on one of the fast USB 3.0 ports (the blue ones)

Also note that during the fast-sync phase it doesn't matter that I am ~80 blocks behind: I must reach the ~600M-entry point to finish syncing before I can use the blockchain at all; until then Ethereum is just a resource-eating thing and nothing more.

Right now I am at 502,983,948 entries and my previous post 6 hours ago showed 501,575,347, so that's roughly 240,000 entries per hour.

After the initial sync phase you download blocks and process them one by one, which is a slightly different process from what is happening now.

And last but not least: make sure you are not overheating. Personally I have a small heatsink and fan on the RPi, set to turn on automatically at around 60 °C.
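The rate quoted above can be recomputed from the two `processed=` counters in this exchange (the 6-hour gap is the commenter's own estimate):

```python
# Entries-per-hour between the two log samples quoted above, ~6 hours apart.
def rate_per_hour(processed_before, processed_after, hours):
    return (processed_after - processed_before) / hours

rate = rate_per_hour(501_575_347, 502_983_948, 6)
print(round(rate))  # 234767, i.e. roughly the ~240k quoted
```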

@Duncan-Brain

Duncan-Brain commented Nov 10, 2020

Hi @sirnicolas21. Thanks for the discussion; I have all those suggestions implemented (I am using the Kauri.io Raspberry Pi setup guide with a few tweaks, e.g. --cache 2048). As per this discussion thread, I am trying to find an upper bound on the number of states (which I understand is a moving target), so that after 15 days I can confidently tell myself either "I am almost fully synced" or "I need to start over". Over in issue #15616 some nodes go up to ~800M, some ~650M.

As you are not yet fully synced, what gives you confidence that it will ever fully sync?

You can check out some of my pi stats here on a google sheet - Google Sheet

@Neurone
Contributor

Neurone commented Nov 12, 2020

> So you should check your stats and calculate your state-entry download rate per hour: if it's high enough, I suggest you delete the database - not the ancient store - and download the state entries again from scratch.
>
> Hi @Neurone, thanks for adding a few bits of your data. I am trying to figure out whether I have a high enough rate; currently I'm seeing about 600k per hour on an 8 GB Pi 4 with a 1 TB Samsung SSD. It has been hard to find info on what a high enough rate might be. I took a guessed target of 600M total entries and set myself moving goalposts: a low estimate of 3,000 state changes per block and a high estimate of 6,000. That would put me at needing 720k-1.44M entries per hour just to keep up with the chain's growth. What rate do you think is reasonable?
>
> I have been considering deleting the chain and starting again, in case I have just fallen too far behind. Also, do you know whether the removedb command would keep the ancient store if I specify the datadir, even though they are both in the chaindata folder?
>
> Thanks!

Hi @Duncan-Brain, I can't remember my state-import rate, but I can share the current stats of my fully synced node so you have an idea of the numbers you need to reach (currently 880M state entries). About removing the db: when you execute the removedb command it lets you choose whether to remove the ancient db as well. I suggest keeping it, because there's no gain in removing it (trie state entries are always stored in LevelDB; they never go into the ancient store).

D:\Program Files\Geth>geth --datadir s:\Ethereum inspect
INFO [11-11|19:19:15.793] Maximum peer count                       ETH=50 LES=0 total=50
INFO [11-11|19:19:16.156] Set global gas cap                       cap=25000000
INFO [11-11|19:19:16.160] Allocated cache and file handles         database=s:\Ethereum\geth\chaindata cache=512.00MiB handles=8192
INFO [11-11|19:19:27.022] Opened ancient database                  database=s:\Ethereum\geth\chaindata\ancient
INFO [11-11|19:19:27.046] Disk storage enabled for ethash caches   dir=s:\Ethereum\geth\ethash count=3
INFO [11-11|19:19:27.051] Disk storage enabled for ethash DAGs     dir=C:\Users\giuse\AppData\Local\Ethash count=2
INFO [11-11|19:19:27.059] Loaded most recent local header          number=11237906 hash="d8c4b8…c3f1e9" td=18641055599883751407245 age=53s
INFO [11-11|19:19:27.069] Loaded most recent local full block      number=11237906 hash="d8c4b8…c3f1e9" td=18641055599883751407245 age=53s
INFO [11-11|19:19:27.079] Loaded most recent local fast block      number=11237906 hash="d8c4b8…c3f1e9" td=18641055599883751407245 age=53s
INFO [11-11|19:19:27.087] Loaded last fast-sync pivot marker       number=10760885
INFO [11-11|19:19:35.100] Inspecting database                      count=4996000 elapsed=8.002s
INFO [11-11|19:19:43.108] Inspecting database                      count=10062000 elapsed=16.010s
INFO [11-11|19:19:51.115] Inspecting database                      count=14730000 elapsed=24.017s
...
INFO [11-11|19:50:53.083] Inspecting database                      count=1821919000 elapsed=31m25.985s
INFO [11-11|19:51:01.090] Inspecting database                      count=1828550000 elapsed=31m33.992s
INFO [11-11|19:51:09.097] Inspecting database                      count=1835247000 elapsed=31m41.999s
+-----------------+--------------------+------------+-----------+
|    DATABASE     |      CATEGORY      |    SIZE    |   ITEMS   |
+-----------------+--------------------+------------+-----------+
| Key-Value store | Headers            | 54.97 MiB  |    100438 |
| Key-Value store | Bodies             | 3.52 GiB   |     98390 |
| Key-Value store | Receipt lists      | 4.48 GiB   |     98390 |
| Key-Value store | Difficulties       | 7.11 MiB   |    113813 |
| Key-Value store | Block number->hash | 6.04 MiB   |    113813 |
| Key-Value store | Block hash->number | 439.41 MiB |  11237928 |
| Key-Value store | Transaction index  | 30.13 GiB  | 898783141 |
| Key-Value store | Bloombit index     | 1.76 GiB   |   5617664 |
| Key-Value store | Contract codes     | 264.01 MiB |     44710 |
| Key-Value store | Trie nodes         | 130.27 GiB | 880287080 |
| Key-Value store | Trie preimages     | 2.98 GiB   |  45170757 |
| Key-Value store | Account snapshot   | 0.00 B     |         0 |
| Key-Value store | Storage snapshot   | 0.00 B     |         0 |
| Key-Value store | Clique snapshots   | 0.00 B     |         0 |
| Key-Value store | Singleton metadata | 151.00 B   |         5 |
| Ancient store   | Headers            | 4.75 GiB   |  11147907 |
| Ancient store   | Bodies             | 108.13 GiB |  11147907 |
| Ancient store   | Receipt lists      | 50.63 GiB  |  11147907 |
| Ancient store   | Difficulties       | 173.74 MiB |  11147907 |
| Ancient store   | Block number->hash | 404.00 MiB |  11147907 |
| Light client    | CHT trie nodes     | 0.00 B     |         0 |
| Light client    | Bloom trie nodes   | 0.00 B     |         0 |
+-----------------+--------------------+------------+-----------+
|                         TOTAL        | 337.97 GIB |           |
+-----------------+--------------------+------------+-----------+
ERROR[11-11|19:51:16.769] Database contains unaccounted data       size=126.40KiB count=2748

@Duncan-Brain

Duncan-Brain commented Nov 12, 2020

> | SYSTEM / OS | RAM (GB) | DISK | HOURS ONLINE | RESTARTS | STATE ENTRIES REACHED | STATE ENTRIES PER HOUR | TIME TO INIT FAST SYNC BLOOM WHEN RESTARTED |
> |---|---|---|---|---|---|---|---|
> | Rock64 / Armbian 20.02.1 Bionic | 4 | HDD | ~1,800 | ~40 | 469,380,958 | ~100k (decreasing over time; mean value over the last 26 days) | 3h16m29 |
> | Desktop / Ubuntu 20.04 (WSL2 on Windows 10) | 16 | SSD | 134 | 1 | 578,598,409 | ~815k | 2h38m14 |
> | Desktop / Windows 10 | 16 | SSD | 9 | 0 | 652,299,180 | ~8.2M | 30m28 |

@Neurone Thanks, that is helpful, I think. Based on the data from my setup, I think I am falling behind.

Did the second test setup(Desktop Ubuntu) in your table complete sync? Or did only the last setup(Desktop Windows 10) complete sync?

@Neurone
Contributor

Neurone commented Nov 12, 2020

Yes, the only node that was able to fully sync was the desktop one, because it uses an SSD (it was Ubuntu on WSL2, now it is Windows, because of how badly WSL2 handles intensive disk activity; you can see a 10x difference in performance in the table). My SSD is old and not really fast, but I was able to sync in the end, though with many more state entries than normal (880M vs ~700M if I resynced from scratch). I plan to move all the geth data to an external SSD and put it back on my Rock64 (which now runs only Bitcoin and IPFS ^^), but I think re-syncing the state is not worth the effort: I would only free about ~25 GB out of a grand total of 338 GB, so I'm OK with that for now.

@sirnicolas21

@Duncan-Brain how many does it say on processed? Mine is now at 518M, and at this point I can say it has started to get really slow.

@Duncan-Brain

Duncan-Brain commented Nov 12, 2020

@Neurone okay thanks, last question from me: what makes you think that if you resynced it would be ~700M? And for that matter the 25 GB freed up; it seems that could be part of the same calculation. Based on your numbers (~652M on August 31, 880M today), I would say the average of 3,000 states per block seems quite accurate once fully synced. But, as you suggest, some pruning is possible there. If I am still in fast sync I may yet catch up; if I need to beat 3,000 states per block right now, then I have been falling behind for 2 weeks.

@sirnicolas21 I updated my spreadsheet from my previous post so you can see -- currently ~710M

@Neurone
Contributor

Neurone commented Nov 12, 2020

@Duncan-Brain ~700M is just a middle-ground guess based on other people's stats, including those in issue #15616. It seems the most fortunate ones - e.g. #15616 (comment) - get fully synced with ~620M entries, while less fortunate ones reach ~800M. With my ~880M I think I'm an extreme case, because I restarted the process so many times, and I don't want to do it again from scratch, so let's say I'd stop at ~700M if I restarted the whole state sync. Because my trie nodes take ~130 GB of disk (you can see it in the inspect stats), every 1M states takes ~147 MB. So if I saved ~180M states (880M - 700M), I'd gain ~26 GB of disk.

My disk is a 512 GB SSD dedicated to Ethereum; the chain grows relatively slowly and, if I ever need more space, I still have those ~150 GB of ancient data I can move off the SSD, so for now I prefer to keep things as they are until it's really needed.
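The disk arithmetic in the comment above can be reproduced from the inspect figures quoted earlier in this thread (treating the GiB sizes loosely as decimal GB, as the comment itself does):

```python
# Reproducing the "~147 MB per 1M states, ~26 GB saved" estimate above.
# Figures come from the inspect output quoted earlier in this thread;
# GiB is treated loosely as decimal GB, as in the comment.
trie_gb = 130.27              # "Trie nodes" size from the inspect table
trie_states = 880_287_080     # "Trie nodes" item count

mb_per_million_states = trie_gb * 1000 / (trie_states / 1e6)  # ~148 MB per 1M entries
saved_gb = (880 - 700) * mb_per_million_states / 1000         # ~26.6 GB if resynced to ~700M
print(round(mb_per_million_states), round(saved_gb, 1))
```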

@holiman
Contributor

holiman commented Nov 12, 2020

> My disk is a 512 GB SSD dedicated to Ethereum; the chain grows relatively slowly and, if I ever need more space, I still have those ~150 GB of ancient data I can move off the SSD, so for now I prefer to keep things as they are until it's really needed.

@rjl493456442 is working on pruning, which will be able to remove some of the junk which isn't needed.

@vogelito

vogelito commented Dec 1, 2020

Fresh sync stats:

Command:

geth --http

There were 640,610,869 state entries
Full sync took roughly 37 hours
The database takes 297 GB

@AdvancedStyle

How were there only 640,610,869 state entries 5 days ago?
I'm already at 698,492,860 and still not done syncing.

@Neurone
Contributor

Neurone commented Dec 6, 2020

> How were there only 640,610,869 state entries 5 days ago?
> I'm already at 698,492,860 and still not done syncing.

State-entry counts are different for everyone; @holiman explained why here: #14647 (comment)

Here is another open thread where you can check stats and experiences from others: #15616

And here is data from an eth1 full node that I finished syncing yesterday after 64 hours, processing 650,634,632 state entries and using 282.75 GiB. Full stats below.

Please note that during synchronization the same machine was running a Teku beacon node, so performance could have been better without it. I wanted to test whether those two clients can both run smoothly on this mini PC and, spoiler alert, yes they can 😉.

| KEY | VALUE |
|---|---|
| Fast sync completed in | 64 hours |
| Processed state entries | 650,634,632 |
| Storage used | 282.75 GiB |
| OS | Ubuntu 20.04.1 |
| Client | Geth/v1.9.24-stable-cc05b050/linux-amd64/go1.15.5 |
| Machine | MINISFORUM DeskMini UM300 Mini PC |
| Processor | AMD Ryzen™ 3 3300U, 4 cores/4 threads |
| Memory | DDR4 8 GB dual channel (min. 1 GB reserved for the graphics card, so actual RAM is 7 GB!) |
| Storage n.1 | Kingston M.2 2280 256 GB SATA SSD (used by Teku) |
| Storage n.2 | Crucial BX500 1 TB SATA SSD (used by Geth) |
| Connectivity | Ethernet 1 Gb |
| ISP | Fastweb, Italy |
./geth
INFO [12-02|03:57:17.926] Starting Geth on Ethereum mainnet...
INFO [12-02|03:57:17.927] Bumping default cache on mainnet         provided=1024 updated=4096
WARN [12-02|03:57:17.927] Sanitizing cache to Go's GC limits       provided=4096 updated=2323
INFO [12-02|03:57:17.934] Maximum peer count                       ETH=50 LES=0 total=50
INFO [12-02|03:57:17.934] Smartcard socket not found, disabling    err="stat /run/pcscd/pcscd.comm: no such file or directory"
INFO [12-02|03:57:17.936] Set global gas cap                       cap=25000000
INFO [12-02|03:57:17.936] Allocated trie memory caches             clean=580.00MiB dirty=580.00MiB
...
INFO [12-02|03:57:18.015] Writing default main-net genesis block
INFO [12-02|03:57:18.279] Persisted trie from memory database      nodes=12356 size=1.78MiB time=84.346878ms gcnodes=0 gcsize=0.00B gctime=0s livenodes=1 livesize=0.00B
INFO [12-02|03:57:18.279] Initialised chain configuration          config="{ChainID: 1 Homestead: 1150000 DAO: 1920000 DAOSupport: true EIP150: 2463000 EIP155: 2675000 EIP158: 2675000 Byzantium: 4370000 Constantinople: 7280000 Petersburg: 7280000 Istanbul: 9069000, Muir Glacier: 9200000, YOLO v2: <nil>, Engine: ethash}"
...
INFO [12-02|03:57:18.280] Initialising Ethereum protocol           versions="[65 64 63]" network=1 dbversion=<nil>
WARN [12-02|03:57:18.280] Upgrade blockchain database version      from=<nil> to=8
INFO [12-02|03:57:18.281] Loaded most recent local header          number=0 hash="d4e567…cb8fa3" td=17179869184 age=51y7mo4w
INFO [12-02|03:57:18.281] Loaded most recent local full block      number=0 hash="d4e567…cb8fa3" td=17179869184 age=51y7mo4w
INFO [12-02|03:57:18.281] Loaded most recent local fast block      number=0 hash="d4e567…cb8fa3" td=17179869184 age=51y7mo4w
INFO [12-02|03:57:18.281] Regenerated local transaction journal    transactions=0 accounts=0
INFO [12-02|03:57:18.298] Allocated fast sync bloom                size=1.13GiB
INFO [12-02|03:57:18.304] Starting peer-to-peer node               instance=Geth/v1.9.24-stable-cc05b050/linux-amd64/go1.15.5
...
INFO [12-02|03:57:18.667] Initialized fast sync bloom              items=12356 errorrate=0.000 elapsed=367.872ms
...
INFO [12-02|03:57:28.334] Block synchronisation started
...
INFO [12-02|03:57:35.319] Imported new state entries               count=1152 elapsed=5.229ms     processed=45201 pending=20345 trieretry=0 coderetry=0 duplicate=0 unexpected=0
INFO [12-02|03:57:35.425] Imported new state entries               count=1152 elapsed=4.800ms     processed=46353 pending=20673 trieretry=0 coderetry=0 duplicate=0 unexpected=0
INFO [12-02|03:57:35.751] Imported new state entries               count=1536 elapsed=8.549ms     processed=47889 pending=19409 trieretry=0 coderetry=0 duplicate=0 unexpected=0
...
INFO [12-04|20:52:06.008] Imported new state entries               count=1996 elapsed=62.015ms    processed=650631754 pending=9510   trieretry=0    coderetry=0 duplicate=126032 unexpected=765459
INFO [12-04|20:52:06.239] Imported new state entries               count=2191 elapsed=141.737ms   processed=650633945 pending=2499   trieretry=0    coderetry=0 duplicate=126032 unexpected=765459
INFO [12-04|20:52:10.981] Imported new state entries               count=687  elapsed=45.228ms    processed=650634632 pending=0      trieretry=0    coderetry=0 duplicate=126032 unexpected=765459
INFO [12-04|20:52:10.998] Committed new head block                 number=11388325 hash="4dfe95…2574b4"
INFO [12-04|20:52:11.049] Deallocated fast sync bloom              items=646967270 errorrate=0.006
...
...
INFO [12-04|20:58:18.180] Fast sync complete, auto disabling
./geth inspect
INFO [12-04|22:37:47.671] Maximum peer count                       ETH=50 LES=0 total=50
INFO [12-04|22:37:47.671] Smartcard socket not found, disabling    err="stat /run/pcscd/pcscd.comm: no such file or directory"
INFO [12-04|22:37:47.672] Set global gas cap                       cap=25000000
...
INFO [12-04|22:37:52.507] Loaded most recent local header          number=11388879 hash="0ec041…573550" td=19169271088158317937607 age=3m28s
INFO [12-04|22:37:52.507] Loaded most recent local full block      number=11388879 hash="0ec041…573550" td=19169271088158317937607 age=3m28s
INFO [12-04|22:37:52.507] Loaded most recent local fast block      number=11388879 hash="0ec041…573550" td=19169271088158317937607 age=3m28s
INFO [12-04|22:37:52.510] Loaded last fast-sync pivot marker       number=11388325
INFO [12-04|22:38:00.533] Inspecting database                      count=10557000 elapsed=8.000s
INFO [12-04|22:38:08.533] Inspecting database                      count=21006000 elapsed=16.000s
INFO [12-04|22:38:16.533] Inspecting database                      count=31660000 elapsed=24.000s
...
INFO [12-04|22:52:30.881] Inspecting database                      count=1561224000 elapsed=14m38.348s
INFO [12-04|22:52:38.881] Inspecting database                      count=1573276000 elapsed=14m46.348s
INFO [12-04|22:52:46.881] Inspecting database                      count=1581359000 elapsed=14m54.348s
+-----------------+--------------------+------------+-----------+
|    DATABASE     |      CATEGORY      |    SIZE    |   ITEMS   |
+-----------------+--------------------+------------+-----------+
| Key-Value store | Headers            | 49.31 MiB  |     90033 |
| Key-Value store | Bodies             | 3.28 GiB   |     90033 |
| Key-Value store | Receipt lists      | 4.08 GiB   |     90033 |
| Key-Value store | Difficulties       | 5.66 MiB   |     99853 |
| Key-Value store | Block number->hash | 4.68 MiB   |     99771 |
| Key-Value store | Block hash->number | 445.31 MiB |  11388911 |
| Key-Value store | Transaction index  | 31.01 GiB  | 924916880 |
| Key-Value store | Bloombit index     | 1.80 GiB   |   5693440 |
| Key-Value store | Contract codes     | 1.59 GiB   |    333905 |
| Key-Value store | Trie nodes         | 71.08 GiB  | 647171117 |
| Key-Value store | Trie preimages     | 8.82 MiB   |    132275 |
| Key-Value store | Account snapshot   | 0.00 B     |         0 |
| Key-Value store | Storage snapshot   | 0.00 B     |         0 |
| Key-Value store | Clique snapshots   | 0.00 B     |         0 |
| Key-Value store | Singleton metadata | 151.00 B   |         5 |
| Ancient store   | Headers            | 4.83 GiB   |  11298879 |
| Ancient store   | Bodies             | 111.38 GiB |  11298879 |
| Ancient store   | Receipt lists      | 52.63 GiB  |  11298879 |
| Ancient store   | Difficulties       | 176.19 MiB |  11298879 |
| Ancient store   | Block number->hash | 409.47 MiB |  11298879 |
| Light client    | CHT trie nodes     | 0.00 B     |         0 |
| Light client    | Bloom trie nodes   | 0.00 B     |         0 |
+-----------------+--------------------+------------+-----------+
|                         TOTAL        | 282.75 GIB |           |
+-----------------+--------------------+------------+-----------+
ERROR[12-04|22:52:52.717] Database contains unaccounted data       size=128.10KiB count=2785

@AdvancedStyle

AdvancedStyle commented Dec 10, 2020

My state entry count seems to be way higher than what other people are reporting?

Currently (after a week still not synced):

Dec 10 10:14:20 vmi480454.contaboserver.net geth[422]: INFO [12-10|10:14:20.382] Imported new state entries               count=384  elapsed="145.832µs" processed=745635339 pending=115802 trieretry=0    coderetry=0 duplicate=492 unexpected=34366
+-----------------+--------------------+------------+-----------+
|    DATABASE     |      CATEGORY      |    SIZE    |   ITEMS   |
+-----------------+--------------------+------------+-----------+
| Key-Value store | Headers            | 215.65 MiB |    394004 |
| Key-Value store | Bodies             | 14.00 GiB  |    393879 |
| Key-Value store | Receipt lists      | 17.90 GiB  |    393879 |
| Key-Value store | Difficulties       | 21.23 MiB  |    404356 |
| Key-Value store | Block number->hash | 17.08 MiB  |    404331 |
| Key-Value store | Block hash->number | 447.19 MiB |  11436819 |
| Key-Value store | Transaction index  | 31.28 GiB  | 933024549 |
| Key-Value store | Bloombit index     | 1.80 GiB   |   5697536 |
| Key-Value store | Contract codes     | 1.59 GiB   |    333673 |
| Key-Value store | Trie nodes         | 83.95 GiB  | 679328402 |
| Key-Value store | Trie preimages     | 547.13 KiB |      8893 |
| Key-Value store | Account snapshot   | 0.00 B     |         0 |
| Key-Value store | Storage snapshot   | 0.00 B     |         0 |
| Key-Value store | Clique snapshots   | 0.00 B     |         0 |
| Key-Value store | Singleton metadata | 151.00 B   |         5 |
| Ancient store   | Headers            | 4.70 GiB   |  11042816 |
| Ancient store   | Bodies             | 105.90 GiB |  11042816 |
| Ancient store   | Receipt lists      | 49.25 GiB  |  11042816 |
| Ancient store   | Difficulties       | 172.04 MiB |  11042816 |
| Ancient store   | Block number->hash | 400.19 MiB |  11042816 |
| Light client    | CHT trie nodes     | 0.00 B     |         0 |
| Light client    | Bloom trie nodes   | 0.00 B     |         0 |
+-----------------+--------------------+------------+-----------+
|                         TOTAL        | 311.62 GIB |           |
+-----------------+--------------------+------------+-----------+
ERROR[12-12|08:25:16.301] Database contains unaccounted data       size=128.19KiB count=2787

@vladbalmos

Posting the latest db info after a fast sync. I should mention that the number of processed state entries reached more than 950 million, possibly due to restarting the node multiple times while syncing.

+-----------------+--------------------+------------+-----------+ 
|    DATABASE     |      CATEGORY      |    SIZE    |   ITEMS   | 
+-----------------+--------------------+------------+-----------+ 
| Key-Value store | Headers            | 49.34 MiB  |     90160 | 
| Key-Value store | Bodies             | 4.69 GiB   |     90160 | 
| Key-Value store | Receipt lists      | 5.09 GiB   |     90160 | 
| Key-Value store | Difficulties       | 6.01 MiB   |    102141 | 
| Key-Value store | Block number->hash | 5.05 MiB   |    102037 | 
| Key-Value store | Block hash->number | 487.06 MiB |  12456601 | 
| Key-Value store | Transaction index  | 14.27 GiB  | 425704772 | 
| Key-Value store | Bloombit index     | 2.08 GiB   |   6231010 | 
| Key-Value store | Contract codes     | 1.94 GiB   |    386202 | 
| Key-Value store | Trie nodes         | 93.76 GiB  | 790154728 | 
| Key-Value store | Trie preimages     | 547.13 KiB |      8893 | 
| Key-Value store | Account snapshot   | 444.56 MiB |   9512508 | 
| Key-Value store | Storage snapshot   | 3.30 GiB   |  45451625 | 
| Key-Value store | Clique snapshots   | 0.00 B     |         0 | 
| Key-Value store | Singleton metadata | 451.54 KiB |        11 | 
| Key-Value store | Shutdown metadata  | 69.00 B    |         1 | 
| Ancient store   | Headers            | 5.36 GiB   |  12366442 | 
| Ancient store   | Bodies             | 137.80 GiB |  12366442 | 
| Ancient store   | Receipt lists      | 67.58 GiB  |  12366442 | 
| Ancient store   | Difficulties       | 193.50 MiB |  12366442 | 
| Ancient store   | Block number->hash | 448.16 MiB |  12366442 | 
| Light client    | CHT trie nodes     | 0.00 B     |         0 | 
| Light client    | Bloom trie nodes   | 0.00 B     |         0 | 
+-----------------+--------------------+------------+-----------+ 
|                         TOTAL        | 337.46 GIB |           | 
+-----------------+--------------------+------------+-----------+ 
