What is the upper bound of "imported new state entries"? #14647
Comments
Yes .. same exact effect |
Please try geth v1.6.6. |
in my case v1.6.6 does not fix it |
same here, described detailed status here: |
The same for me for |
same here... the |
I am in the same state after a week using geth --fast --cache=1024. Does anyone know what I should do right now? |
Same situation on testnet. OSX, geth 1.6.7-stable-ab5646c5. Started with --fast and --cache 1500 |
What I could understand from this geth fast mode problem is that:
|
Encountering similar issues. Geth --fast does not sync at all, let alone fast. Using 1.6.7-stable on Ubuntu; it gets within 100 blocks of the head and then endlessly imports state entries. |
@clowestab did you ever get it to sync? |
Having same issue. Also using ubuntu, and geth endlessly syncs. |
I am having the same issue. Has anyone been able to solve this problem? |
Same problem here. I started geth with --cache 1024 and fast sync three days ago and, after reaching the latest block number one day ago, it has never left the "Imported new state entries" phase. During this phase, however, my balance changed from 0 to the correct value, and the reported block number changed from 0 to a number near the highestBlock value. This is the geth version on my Ubuntu machine: Geth and this is the output now, every half second: this is the result of eth.syncing and other geth tools:
The ether balance of my wallet is not 0; it is reported correctly and updated to the last transaction made one day ago. How long is this state supposed to last? |
One more piece of information: my .ethereum folder is currently 41 GB, maybe a little too large for a proper fast sync. |
I think never. |
UPDATE: I stopped geth with CTRL-D and reopened it. Now it seems the "Imported new state entries" phase has halted and geth is working correctly, importing only new blocks. So, until the issue is solved, this is my advice:
Now your problem is solved, and geth probably avoids downloading a lot of states, reducing the disk space it takes too. This holds until the issue is fixed and geth becomes aware of when the blockchain is correctly synced. This is my experience; it may work, it may not. For me it worked. |
Congrats, but I had run it for more than 8 hours and eth.blockNumber always showed 6. I had to switch to parity to sync the blockchain. |
I think you just have to wait until eth.blockNumber shows a number near currentBlock before closing it and starting it again. I'm not a geth developer, I just use it, so I can't solve the problem or tell you what it does internally, why the command returns "6", or what you have to do. But it seems that it downloads a lot of states and, when it finds the head state, it is able to build the full blockchain. For me this happened when eth.syncing showed a knownStates near 20,000,000, but it can happen before or after. During my test, after fast sync finished downloading all the block headers, it took more than 24 hours more before eth.blockNumber reached 4244762. I run geth on a server with 100 Mb/s bandwidth. When it showed "0" I let it keep working, and after 24 hours the command returned 4244762. I haven't tried running the command in the middle, so I don't know if it returns other numbers before reaching the last block. I have never used parity, but it seems good and uses less disk space than geth, so it's worth a try. |
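The waiting advice above can be made a bit more concrete. This is a rough back-of-the-envelope sketch (not from the thread; the helper name and sample numbers are illustrative) of how to read the fields that eth.syncing reports in the geth console. Note the caveat in the docstring: knownStates grows while syncing, so the state fraction is a moving target, not a true percentage.

```python
def sync_progress(current_block, highest_block, pulled_states, known_states):
    """Return (block_fraction, state_fraction) from eth.syncing fields.

    Caveat: knownStates keeps growing as new trie nodes are discovered,
    so state_fraction is an optimistic estimate, not a true percentage.
    """
    block_frac = current_block / highest_block if highest_block else 0.0
    state_frac = pulled_states / known_states if known_states else 0.0
    return block_frac, state_frac

# Hypothetical numbers close to the ones mentioned in this comment:
blocks, states = sync_progress(4_244_000, 4_244_762, 19_500_000, 20_000_000)
```

Headers finishing early (block fraction near 1.0) while the state fraction lags is exactly the long "Imported new state entries" tail described throughout this thread.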
We believe this is fixed on the master branch. Fast sync takes a while (especially with the mainnet), but will terminate eventually. |
@brennino |
@vincentvc Fast sync in geth only works when the database is empty, so you get one chance to fast sync, and it will be a full sync after that. [STATED IN ERROR, THE FOLLOWING IS INCORRECT AS POINTED OUT BY FJL: thus yes, if you stop and restart any time before that first fast sync finishes, you won't do a fast sync from that point.] My experience with the scenario listed here was twofold: I used the latest build off of main, and I made sure I had my database on an SSD. It doesn't seem like the SSD vs. HDD thing should matter so much, but in my experience, until I put it on the SSD I could never get that first sync to finish. Not to say that is true for everyone; it's just my experience. |
I'm having the same problem: continuous "Imported new state entries". I'm currently trying what @brennino says and will come back with results later... currently 350k states processed. Here is some info: |
Imported state still going, gonna check back tomorrow... |
Hi @skarn01, I don't know if this happens only to me, but when I start fast sync with an empty blockchain, the starting block always shows 0. You can see my eth.syncing in my previous post, which I report here: eth.syncing Maybe something went wrong during fast sync, or you closed geth before eth.blockNumber showed a number near the last block. Blockchain sync is really time consuming, and you must not stop fast sync until it finishes, or until eth.blockNumber != 0 and is near highestBlock. What can I say to help you... it seems you are not starting fast sync from an empty blockchain, so if I were you I would start again from the beginning. If you don't want to start again (and I can understand you, I went crazy for days before having some results...) you have these options: 1 - If you just want to make a transaction because ether is falling in price now and you are in a panic, you can use light sync, make your transaction and, when you stop praying and shouting in your room for a price peak, calmly try to sync your blockchain with fast sync again. About point number 1, I think light sync is not an experimental feature any more (but maybe someone else can confirm)... and I succeeded in making a transaction with a light-synced blockchain without problems. |
Hi @brennino, you're right, it's strange that my starting block is not 0, as I haven't stopped the geth daemon... What I want to do is develop my own services using the chain (I got experience creating an ICO for my boss, and now I have some interest in that technology ^^), so I don't worry about money I currently put here; there's currently none, ahah. I'll try the --light option after the geth removedb. Does --light still give me the possibility to work with the chain and see full blocks after the first sync? Thank you, and I'll come back with an update on my situation. |
@sjlnk it's because you've stopped and restarted sync several times. Or so I presume, since it's still @karalabe just merged a great PR that makes fast sync a lot more stable and less prone to crashing due to out of memory. I would recommend updating to the latest |
@sjlnk actually, looking at it again, I suspect you have bad read IO. It has only gone through |
I synced a 3-month gap from the last sync; it took 2.5 weeks to complete. It's an array of six 10k rpm SAS drives in RAID 5. Adding 2 more soon. It took about 2 TB or more of write volume. It really is an SSD killer. |
Thanks for clearing it up! Actually my sync completed in 10 hours with a powerful NVMe-enabled AWS instance. I just wanted to know why some people have to download more states than others. I never restarted the process, so I'm assuming the other people here did, and that's why they have so many more states to download. |
Hey @holiman. The logs are from my VPS server. It is an SSD. Does this mean that my server is not powerful enough, as it has very low I/O speed? Do I need to upgrade it? |
Hey @sjlnk. Could you give the specifications of your server? I too have a good server, but it has not completed for about a month. What is your disk I/O rate, specifically? |
I'm done, with the counter at:
INFO [08-30|10:01:52.403] Imported new state entries count=1605 elapsed=22.003ms processed=652296924 pending=3467 retry=0 duplicate=41127 unexpected=121037
INFO [08-30|10:01:53.302] Imported new state entries count=1157 elapsed=11.998ms processed=652298081 pending=2059 retry=0 duplicate=41127 unexpected=121037
INFO [08-30|10:01:54.214] Imported new state entries count=1001 elapsed=14.992ms processed=652299082 pending=348 retry=0 duplicate=41127 unexpected=121037
INFO [08-30|10:01:54.523] Imported new state entries count=98 elapsed=2.837ms processed=652299180 pending=0 retry=0 duplicate=41127 unexpected=121037
INFO [08-30|10:01:54.535] Imported new block receipts count=1 elapsed=2ms number=10760885 hash="e5569e…7ead3c" age=29m44s size=101.58KiB
INFO [08-30|10:01:54.545] Committed new head block number=10760885 hash="e5569e…7ead3c"
INFO [08-30|10:01:54.613] Deallocated fast sync bloom items=639248389 errorrate=0.001
INFO [08-30|10:02:03.025] Imported new chain segment blocks=12 txs=2533 mgas=149.011 elapsed=8.402s mgasps=17.735 number=10760897 hash="f8d397…0dafd0" age=26m49s dirty=18.46MiB
INFO [08-30|10:02:11.409] Imported new chain segment blocks=14 txs=3369 mgas=174.232 elapsed=8.384s mgasps=20.780 number=10760911 hash="1adada…67ecc0" age=21m55s dirty=41.57MiB
INFO [08-30|10:02:19.956] Imported new chain segment blocks=18 txs=3487 mgas=210.546 elapsed=8.546s mgasps=24.636 number=10760929 hash="97dcea…9d023b" age=16m20s dirty=68.14MiB
INFO [08-30|10:02:27.303] New local node record seq=39281 id=82090fbe6ea27663 ip=127.0.0.1 udp=30303 tcp=30303
INFO [08-30|10:02:27.389] New local node record seq=39282 id=82090fbe6ea27663 ip=93.40.103.165 udp=30303 tcp=30303
INFO [08-30|10:02:28.037] Imported new chain segment blocks=17 txs=3815 mgas=209.530 elapsed=8.081s mgasps=25.927 number=10760946 hash="4165bb…e09268" age=11m39s dirty=94.33MiB
INFO [08-30|10:02:36.621] Imported new chain segment blocks=12 txs=2487 mgas=136.747 elapsed=8.584s mgasps=15.930 number=10760958 hash="c59ba3…d33c9e" age=8m32s dirty=111.89MiB
INFO [08-30|10:02:40.190] New local node record seq=39283 id=82090fbe6ea27663 ip=127.0.0.1 udp=30303 tcp=30303
INFO [08-30|10:02:45.795] New local node record seq=39284 id=82090fbe6ea27663 ip=93.40.103.165 udp=30303 tcp=30303
INFO [08-30|10:02:45.895] Imported new chain segment blocks=10 txs=2042 mgas=120.584 elapsed=9.273s mgasps=13.003 number=10760968 hash="82b2bd…ea0969" age=6m43s dirty=126.93MiB
INFO [08-30|10:02:46.400] Deep froze chain segment blocks=30001 elapsed=18.624s number=10057008 hash="ffaddb…05d9b1"
INFO [08-30|10:02:54.190] Imported new chain segment blocks=13 txs=2915 mgas=161.846 elapsed=8.295s mgasps=19.511 number=10760981 hash="e653ff…70529a" age=3m40s dirty=146.58MiB
INFO [08-30|10:03:00.113] Imported new chain segment blocks=7 txs=1296 mgas=87.019 elapsed=5.923s mgasps=14.691 number=10760988 hash="a655a4…7c6ca5" age=1m31s dirty=156.69MiB
INFO [08-30|10:03:00.126] Imported new block headers count=1 elapsed=1m4.052s number=10760989 hash="31df12…bc9465" age=1m30s
INFO [08-30|10:03:00.139] Imported new block headers count=1 elapsed=4.998ms number=10760990 hash="04c374…9a92dc" age=1m27s
INFO [08-30|10:03:00.153] Imported new block headers count=3 elapsed=6.000ms number=10760993 hash="c6fffb…265cae"
INFO [08-30|10:03:00.225] Downloader queue stats receiptTasks=0 blockTasks=0 itemSize=221.62KiB throttle=296
INFO [08-30|10:03:00.997] Imported new chain segment blocks=1 txs=235 mgas=12.392 elapsed=763.005ms mgasps=16.241 number=10760989 hash="31df12…bc9465" age=1m30s dirty=158.37MiB
INFO [08-30|10:03:01.008] Imported new block headers count=1 elapsed=572.995ms number=10760994 hash="e3afed…f9afef"
INFO [08-30|10:03:03.377] Imported new chain segment blocks=1 txs=200 mgas=12.423 elapsed=2.361s mgasps=5.261 number=10760990 hash="04c374…9a92dc" age=1m30s dirty=160.05MiB
INFO [08-30|10:03:04.270] Deep froze chain segment blocks=30001 elapsed=17.860s number=10087009 hash="e2f6c6…fe7f20"
WARN [08-30|10:03:07.574] Fast syncing, discarded propagated block number=10760994 hash="e3afed…f9afef"
INFO [08-30|10:03:08.411] Imported new chain segment blocks=4 txs=666 mgas=49.755 elapsed=5.023s mgasps=9.905 number=10760994 hash="e3afed…f9afef" dirty=165.73MiB
INFO [08-30|10:03:08.422] Fast sync complete, auto disabling
Detailed DB stats:
> geth --datadir s:\Ethereum inspect
INFO [08-30|12:49:32.043] Maximum peer count ETH=50 LES=0 total=50
INFO [08-30|12:49:32.388] Set global gas cap cap=25000000
INFO [08-30|12:49:32.393] Allocated cache and file handles database=s:\Ethereum\geth\chaindata cache=512.00MiB handles=8192
INFO [08-30|12:49:34.040] Opened ancient database database=s:\Ethereum\geth\chaindata\ancient
INFO [08-30|12:49:34.074] Disk storage enabled for ethash caches dir=s:\Ethereum\geth\ethash count=3
INFO [08-30|12:49:34.079] Disk storage enabled for ethash DAGs dir=C:\Users\giuse\AppData\Local\Ethash count=2
INFO [08-30|12:49:34.090] Loaded most recent local header number=10761038 hash="95d315…aac7a9" td=17106408363816477706008 age=2h38m10s
INFO [08-30|12:49:34.099] Loaded most recent local full block number=10760905 hash="a638f7…ecd63c" td=17106043165864029076711 age=3h10m31s
INFO [08-30|12:49:34.106] Loaded most recent local fast block number=10761038 hash="95d315…aac7a9" td=17106408363816477706008 age=2h38m10s
INFO [08-30|12:49:34.107] Deep froze chain segment blocks=18 elapsed=33.995ms number=10670905 hash="8c3832…cb9018"
INFO [08-30|12:49:34.115] Loaded last fast-sync pivot marker number=10760885
INFO [08-30|12:49:42.130] Inspecting database count=6482000 elapsed=8.001s
...
INFO [08-30|13:22:16.661] Counting ancient database receipts blocknumber=10670906 percentage=100 elapsed=32m42.533s
+-----------------+--------------------+------------+-----------+
| DATABASE | CATEGORY | SIZE | ITEMS |
+-----------------+--------------------+------------+-----------+
| Key-Value store | Headers | 55.00 MiB | 100554 |
| Key-Value store | Bodies | 3.64 GiB | 98505 |
| Key-Value store | Receipts | 4.20 GiB | 17751843 |
| Key-Value store | Difficulties | 6.25 MiB | 110238 |
| Key-Value store | Block number->hash | 5.16 MiB | 110222 |
| Key-Value store | Block hash->number | 420.76 MiB | 10761043 |
| Key-Value store | Transaction index | 27.44 GiB | 818569194 |
| Key-Value store | Bloombit index | 1.64 GiB | 5380096 |
| Key-Value store | Contract codes | 37.33 MiB | 6636 |
| Key-Value store | Trie nodes | 74.51 GiB | 639023322 |
| Key-Value store | Trie preimages | 1.07 MiB | 17051 |
| Key-Value store | Account snapshot | 0.00 B | 0 |
| Key-Value store | Storage snapshot | 0.00 B | 0 |
| Key-Value store | Clique snapshots | 0.00 B | 0 |
| Key-Value store | Singleton metadata | 151.00 B | 5 |
| Ancient store | Headers | 4.51 GiB | 10670906 |
| Ancient store | Bodies | 97.85 GiB | 10670906 |
| Ancient store | Receipt lists | 44.38 GiB | 10670906 |
| Ancient store | └ counted receipts | -- | 802441861 |
| Ancient store | Difficulties | 166.01 MiB | 10670906 |
| Ancient store | Block number->hash | 386.71 MiB | 10670906 |
| Light client | CHT trie nodes | 0.00 B | 0 |
| Light client | Bloom trie nodes | 0.00 B | 0 |
+-----------------+--------------------+------------+-----------+
| TOTAL | 259.22 GIB | |
+-----------------+--------------------+------------+-----------+
ERROR[08-30|13:22:16.783] Database contains unaccounted data size=121.08KiB count=2632
|
Hey @Neurone How many days does the full sync take on average? How can I know how close I am? I have been syncing for about a month now. |
I used i3.xlarge on Amazon EC2. |
Finished a fast sync today (block 10770930) with:
(was running it on a work machine during day time, so the total time is irrelevant in my case) |
Not easy to say, because I tested many things during the sync process; I restarted a lot of times (40~50 times) and I moved the files around different systems. In any case, I searched all the logs I saved between restarts (not all of them) and created the summary below, just to give an idea. Most of the time is spent on the fast sync of state entries, but block headers are downloaded in a few hours in any case. So you should check your stats and calculate your rate of state entries downloaded per hour: if it's high enough, I suggest you delete the database - not the ancient store - and download the state entries again from the start. If your rate is like mine at the end (~8.2M entries per hour), you should be able to reach the current state from scratch in ~3 days.
|
Hi @Neurone, thanks for adding a few bits of your data. I am trying to figure out if I have a high enough rate. I'm currently looking at about 600k per hour on an 8GB Pi 4 with a 1 TB Samsung SSD. It has been hard to find info on what might be a high enough rate. I took a guessed stationary target of 600M states and made myself some moving goalposts: a low estimate of 3000 state changes per block and a high estimate of 6000 state changes per block. That would put me at needing 720k-1.44M per hour just to keep up with the chain at minimum. What rate do you think is reasonable? I have been considering deleting the chain and starting again, to see if I have just fallen too far behind. Also, do you know if the removedb command would keep ancient by specifying the datadir, even if they are both in the chaindata folder? Thanks! |
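The break-even figures in the comment above (720k-1.44M entries/hour) follow directly from the guessed states-per-block estimates and an assumed ~15-second block time. A minimal sketch of that calculation (illustrative helper, not from the thread):

```python
def breakeven_rate(states_per_block, block_time_s=15):
    """Minimum state-entry import rate (entries/hour) needed just to keep
    pace with new blocks, given a guessed states-per-block figure."""
    blocks_per_hour = 3600 / block_time_s
    return blocks_per_hour * states_per_block

low = breakeven_rate(3000)    # low estimate: 720,000 entries/hour
high = breakeven_rate(6000)   # high estimate: 1,440,000 entries/hour
```

An observed rate of ~600k/hour sitting below even the low break-even estimate is why the commenter suspects the node is falling behind rather than catching up.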
@sirnicolas21 I am sorry, I am confused by your response here. What is your rate? It looks to me like you are in the state trie import phase and are still about ~80 blocks behind? I have been running for 15 days and have been in the state trie phase for 12 days. I have been upping my --cache allocation to try to increase my rate, as described above. I can see you have ~196M fewer state entries than I do, so it seems like you may have done something right, or have a ways to go. What geth options did you specify? Thanks! |
@Duncan-Brain I only configured the cache on geth, nothing else: "--cache 1024"
Also note that during the fast sync phase it doesn't matter that I am 80 blocks behind, because I must reach the ~600M-entries point to fully sync and only then use the blockchain; before that, ethereum is just a resource-eating thing and nothing more. Right now I am at 502,983,948 entries, and my previous post 6 hours ago said 501,575,347, so that's ~240,000 entries per hour approx. After the initial sync phase you download the blocks and parse them one by one, and that process is slightly different from what is happening now. And last but not least... make sure you are not overheating; personally I have a small heatsink and fan running on the RPi, with auto turn-on at around 60°C. |
Hi @sirnicolas21. Thanks for the discussion; I have all those suggestions implemented (I am using the Kauri.io Raspberry Pi setup guide with a few tweaks, i.e. --cache 2048). As per this discussion thread, I am trying to find an upper bound on the number of states (which I understand is a moving target), so that after 15 days I can confidently tell myself "I am almost fully synced" or "I need to start over". Over in the thread on issue #15616 some nodes go up to ~800M, some ~650M. As you are not yet fully synced, how are you confident that it will ever fully sync? You can check out some of my Pi stats here on a Google Sheet - Google Sheet |
Hi @Duncan-Brain I can't remember my state import rate, but I can share the current stats of my fully synced node, so you can get an idea of the numbers you need to reach (currently 880M state entries). About removing the db, when you execute:
D:\Program Files\Geth>geth --datadir s:\Ethereum inspect
INFO [11-11|19:19:15.793] Maximum peer count ETH=50 LES=0 total=50
INFO [11-11|19:19:16.156] Set global gas cap cap=25000000
INFO [11-11|19:19:16.160] Allocated cache and file handles database=s:\Ethereum\geth\chaindata cache=512.00MiB handles=8192
INFO [11-11|19:19:27.022] Opened ancient database database=s:\Ethereum\geth\chaindata\ancient
INFO [11-11|19:19:27.046] Disk storage enabled for ethash caches dir=s:\Ethereum\geth\ethash count=3
INFO [11-11|19:19:27.051] Disk storage enabled for ethash DAGs dir=C:\Users\giuse\AppData\Local\Ethash count=2
INFO [11-11|19:19:27.059] Loaded most recent local header number=11237906 hash="d8c4b8…c3f1e9" td=18641055599883751407245 age=53s
INFO [11-11|19:19:27.069] Loaded most recent local full block number=11237906 hash="d8c4b8…c3f1e9" td=18641055599883751407245 age=53s
INFO [11-11|19:19:27.079] Loaded most recent local fast block number=11237906 hash="d8c4b8…c3f1e9" td=18641055599883751407245 age=53s
INFO [11-11|19:19:27.087] Loaded last fast-sync pivot marker number=10760885
INFO [11-11|19:19:35.100] Inspecting database count=4996000 elapsed=8.002s
INFO [11-11|19:19:43.108] Inspecting database count=10062000 elapsed=16.010s
INFO [11-11|19:19:51.115] Inspecting database count=14730000 elapsed=24.017s
...
INFO [11-11|19:50:53.083] Inspecting database count=1821919000 elapsed=31m25.985s
INFO [11-11|19:51:01.090] Inspecting database count=1828550000 elapsed=31m33.992s
INFO [11-11|19:51:09.097] Inspecting database count=1835247000 elapsed=31m41.999s
+-----------------+--------------------+------------+-----------+
| DATABASE | CATEGORY | SIZE | ITEMS |
+-----------------+--------------------+------------+-----------+
| Key-Value store | Headers | 54.97 MiB | 100438 |
| Key-Value store | Bodies | 3.52 GiB | 98390 |
| Key-Value store | Receipt lists | 4.48 GiB | 98390 |
| Key-Value store | Difficulties | 7.11 MiB | 113813 |
| Key-Value store | Block number->hash | 6.04 MiB | 113813 |
| Key-Value store | Block hash->number | 439.41 MiB | 11237928 |
| Key-Value store | Transaction index | 30.13 GiB | 898783141 |
| Key-Value store | Bloombit index | 1.76 GiB | 5617664 |
| Key-Value store | Contract codes | 264.01 MiB | 44710 |
| Key-Value store | Trie nodes | 130.27 GiB | 880287080 |
| Key-Value store | Trie preimages | 2.98 GiB | 45170757 |
| Key-Value store | Account snapshot | 0.00 B | 0 |
| Key-Value store | Storage snapshot | 0.00 B | 0 |
| Key-Value store | Clique snapshots | 0.00 B | 0 |
| Key-Value store | Singleton metadata | 151.00 B | 5 |
| Ancient store | Headers | 4.75 GiB | 11147907 |
| Ancient store | Bodies | 108.13 GiB | 11147907 |
| Ancient store | Receipt lists | 50.63 GiB | 11147907 |
| Ancient store | Difficulties | 173.74 MiB | 11147907 |
| Ancient store | Block number->hash | 404.00 MiB | 11147907 |
| Light client | CHT trie nodes | 0.00 B | 0 |
| Light client | Bloom trie nodes | 0.00 B | 0 |
+-----------------+--------------------+------------+-----------+
| TOTAL | 337.97 GIB | |
+-----------------+--------------------+------------+-----------+
ERROR[11-11|19:51:16.769] Database contains unaccounted data size=126.40KiB count=2748 |
@Neurone Thanks, that is helpful I think. Based on the data from my setup, I think I am falling behind. Did the second test setup (Desktop Ubuntu) in your table complete the sync? Or did only the last setup (Desktop Windows 10) complete it? |
Yes, the only node that was able to fully sync was the desktop one, because it was using an SSD (it was Ubuntu on WSL2; now it is Windows because of how badly WSL2 handles intensive disk activity - you can see a 10x difference in performance in the table). My SSD is old and not really fast, but I was able to sync in the end, even though with many more state entries than normal (880M vs. ~700M if I resync again from scratch). I plan to move all geth data to an external SSD and put it back on my Rock64 (which is now running only Bitcoin and IPFS ^^), but I think re-syncing the state again is not worth the effort, because I would free only about ~25 GB out of a grand total of 338 GB, so I'm OK with that for now. |
@Duncan-Brain how many does it say on processed? Mine is now at 518M, and at this point I can say it has started to get really slow. |
@Neurone okay thanks, last question from me: what makes you think that if you resync again it would be ~700M? And for that matter, the 25 GB freed up... it seems it could be part of the same calculation. Based on your numbers (~652M on August 31, 880M today) I would say the average of 3000 states per block seems quite accurate once fully synced. But as you suggest, there is some pruning possible there. If I am in fast sync I may yet catch up; if I need to beat 3000 states per block right now, then I have been behind for 2 weeks. @sirnicolas21 I updated the spreadsheet from my previous post so you can see - currently ~710M |
@Duncan-Brain The ~700M figure is just a guess in the middle, based on stats from other people, some taken from this issue: #15616. It seems the most fortunate ones - i.e. #15616 (comment) - get fully synced with ~620M entries, while less fortunate ones reach ~800M. With my ~880M I think I'm an extreme case, because I restarted the process so many times, and I don't want to do it again from scratch, so let's say I'd stop at ~700M if I restarted the whole state sync. Because my trie nodes take ~130 GB of disk (you can see it in the inspect stats), every 1M states takes ~147 MB. So if I save ~180M states (880-700) I'd gain ~26 GB of disk. My disk is a 512 GB SSD dedicated to Ethereum; the chain grows relatively slowly and, if I need more space, I still have those ~150 GB of ancient data to move off the SSD, so for now I prefer to keep things as they are until it's really needed. |
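The disk arithmetic in the comment above (trie size divided by entry count, times the entries saved) can be sketched as follows. The inputs are the figures from the inspect output earlier in the thread; the helper names are illustrative:

```python
def mib_per_million(trie_size_gib, entries):
    """Approximate MiB of disk consumed per 1M trie-node entries."""
    return trie_size_gib * 1024 / (entries / 1_000_000)

# 130.27 GiB of trie nodes over 880,287,080 entries (from the inspect table):
per_million = mib_per_million(130.27, 880_287_080)

# GiB freed by dropping from ~880M to a guessed ~700M entries:
saved_gib = (880_000_000 - 700_000_000) / 1_000_000 * per_million / 1024
```

This gives roughly 150 MiB per 1M entries and ~26 GiB saved, consistent with the "~147 MB" and "~26 GB" figures quoted in the comment.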
@rjl493456442 is working on pruning, which will be able to remove some of the junk which isn't needed. |
Fresh sync stats: Command:
There were 640,610,869 state entries |
How were there only 640,610,869 state entries 5 days ago? |
State entries are different for everyone. @holiman explained why here: #14647 (comment) Here is another open thread where you can check stats and experiences from others: #15616 And here is data from an eth1 full node that I finished syncing yesterday, after 64 hours, processing 650,634,632 state entries and using 282.75 GIB. Full stats below. Please note that during synchronization the same machine was running a Teku beacon node, so performance could have been better without it. I wanted to test whether those two clients can both run smoothly on this mini PC and, spoiler alert, yes they can 😉.
|
My state entry count seems to be way higher than what other people are reporting? Currently (after a week, still not synced):
|
Posting the latest db info after a fast sync. I must mention that the number of processed state entries reached more than 950 million, possibly due to restarting the node multiple times while syncing.
|
System information
Geth version:
1.6.5
OS & Version: Windows 7 x64
geth Command: geth --fast --cache 8192
Expected behaviour
Geth should start in full mode.
Actual behaviour
After nearing the current block, geth continuously reports "Imported new state entries".
Steps to reproduce the behaviour
Has been running for 10 days now.
Geth console info
Backtrace
INFO [06-18|10:10:31] Imported new state entries count=384 elapsed=22.001ms processed=17118951 pending=24263
INFO [06-18|10:10:32] Imported new state entries count=384 elapsed=33.001ms processed=17119335 pending=23819
INFO [06-18|10:10:33] Imported new state entries count=384 elapsed=111.006ms processed=17119719 pending=23875
INFO [06-18|10:10:34] Imported new state entries count=384 elapsed=131.007ms processed=17120103 pending=23855
INFO [06-18|10:10:35] Imported new state entries count=384 elapsed=116.006ms processed=17120487 pending=23978
INFO [06-18|10:10:36] Imported new state entries count=384 elapsed=134.007ms processed=17120871 pending=24186
INFO [06-18|10:10:38] Imported new state entries count=384 elapsed=305.017ms processed=17121255 pending=27727
INFO [06-18|10:10:42] Imported new state entries count=384 elapsed=448.025ms processed=17121639 pending=33614
INFO [06-18|10:10:46] Imported new state entries count=384 elapsed=441.025ms processed=17122023 pending=39642
INFO [06-18|10:10:48] Imported new state entries count=384 elapsed=44.002ms processed=17122407 pending=39170
INFO [06-18|10:10:52] Imported new state entries count=384 elapsed=427.024ms processed=17122791 pending=45142
INFO [06-18|10:10:55] Imported new state entries count=384 elapsed=473.027ms processed=17123175 pending=51166
INFO [06-18|10:10:58] Imported new state entries count=384 elapsed=448.025ms processed=17123559 pending=57128
INFO [06-18|10:11:01] Imported new state entries count=384 elapsed=444.025ms processed=17123943 pending=63129
INFO [06-18|10:11:04] Imported new state entries count=384 elapsed=441.025ms processed=17124327 pending=69173
INFO [06-18|10:11:04] Imported new state entries count=1 elapsed=0s processed=17124328 pending=69172
INFO [06-18|10:11:07] Imported new state entries count=384 elapsed=442.025ms processed=17124712 pending=75182
INFO [06-18|10:11:10] Imported new state entries count=384 elapsed=470.026ms processed=17125096 pending=81186
INFO [06-18|10:11:11] Imported new state entries count=384 elapsed=335.019ms processed=17125480 pending=81736
INFO [06-18|10:11:14] Imported new state entries count=384 elapsed=440.025ms processed=17125864 pending=87718
INFO [06-18|10:11:15] Imported new state entries count=384 elapsed=140.008ms processed=17126248 pending=87812
INFO [06-18|10:11:16] Imported new state entries count=384 elapsed=31.001ms processed=17126632 pending=87226
INFO [06-18|10:11:18] Imported new state entries count=384 elapsed=88.005ms processed=17127016 pending=87040
INFO [06-18|10:11:19] Imported new state entries count=384 elapsed=39.002ms processed=17127400 pending=86803
INFO [06-18|10:11:20] Imported new state entries count=384 elapsed=36.002ms processed=17127784 pending=86585
INFO [06-18|10:11:23] Imported new state entries count=1 elapsed=0s processed=17127785 pending=86272
INFO [06-18|10:11:23] Imported new state entries count=384 elapsed=1.610s processed=17128169 pending=86271
INFO [06-18|10:11:25] Imported new state entries count=384 elapsed=143.008ms processed=17128553 pending=87792
INFO [06-18|10:11:28] Imported new state entries count=384 elapsed=183.010ms processed=17128937 pending=90117
INFO [06-18|10:11:28] Imported new state entries count=1 elapsed=1ms processed=17128938 pending=90120
INFO [06-18|10:11:28] Imported new state entries count=1 elapsed=0s processed=17128939 pending=90118
INFO [06-18|10:11:29] Imported new state entries count=384 elapsed=102.005ms processed=17129323 pending=90022
INFO [06-18|10:11:30] Imported new state entries count=384 elapsed=184.010ms processed=17129707 pending=92320
INFO [06-18|10:11:32] Imported new state entries count=384 elapsed=185.010ms processed=17130091 pending=94665
INFO [06-18|10:11:34] Imported new state entries count=384 elapsed=187.010ms processed=17130475 pending=97053
INFO [06-18|10:11:36] Imported new state entries count=384 elapsed=194.011ms processed=17130859 pending=99550
INFO [06-18|10:11:38] Imported new state entries count=384 elapsed=183.010ms processed=17131243 pending=101954
INFO [06-18|10:11:40] Imported new state entries count=384 elapsed=202.011ms processed=17131627 pending=104395
INFO [06-18|10:11:42] Imported new state entries count=384 elapsed=196.011ms processed=17132011 pending=106904
INFO [06-18|10:11:44] Imported new state entries count=384 elapsed=186.010ms processed=17132395 pending=109176
INFO [06-18|10:11:47] Imported new state entries count=384 elapsed=184.010ms processed=17132779 pending=111554
INFO [06-18|10:11:47] Imported new state entries count=2 elapsed=184.010ms processed=17132781 pending=111554
INFO [06-18|10:11:48] Imported new state entries count=384 elapsed=34.002ms processed=17133165 pending=110760
INFO [06-18|10:11:50] Imported new state entries count=384 elapsed=193.011ms processed=17133549 pending=113172