
Tips for running a BSC full node #502

Closed
unclezoro opened this issue Nov 1, 2021 · 162 comments
Labels: good first issue (Good for newcomers)

Comments

@unclezoro
Collaborator

unclezoro commented Nov 1, 2021

Some of the enhancements below can address the existing challenges with running a BSC full node:

Binary

All clients are advised to upgrade to the latest release. The latest version should be more stable and perform better.

Storage

According to our tests, the performance of a full node degrades once its storage size exceeds 1.5 TB. We suggest that full nodes keep storage light by pruning it regularly.

The steps to prune are as follows (a scripted sketch follows the list):

  1. Stop the BSC node first.
  2. Run nohup geth snapshot prune-state --datadir {the data dir of your bsc node} &. It will take 3-5 hours to finish.
  3. Start the node again once it is done.
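
A minimal shell sketch of the whole cycle (the service name bsc and the data directory /data/bsc are placeholders for your own setup):

  # 1. stop the node (assumes it runs under systemd as "bsc")
  sudo systemctl stop bsc
  # 2. prune offline; expect roughly 3-5 hours
  nohup geth snapshot prune-state --datadir /data/bsc &
  tail -f nohup.out   # done when the "Pruned state data" / "Compacting database" lines appear
  # 3. restart once pruning has finished
  sudo systemctl start bsc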

Maintainers should always keep a few backup nodes, so that traffic can be switched to a backup while one of the nodes is pruning.

The hardware is also important. Make sure the SSD meets these requirements: 2 TB of free disk space, solid-state drive (SSD), gp3, 8k IOPS, 250 MB/s throughput, read latency < 1 ms.

Light Storage

When the node crashes or is force-killed, it will resync from a block that is a few minutes or even a few hours old. This is because the in-memory state is not persisted to the database in real time, so the node needs to replay blocks from the last checkpoint. The replay time depends on the TrieTimeout setting in config.toml. We suggest raising it if you can tolerate a longer replay time, so the node can keep its storage light.
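
For reference, the setting lives under the [Eth] section of config.toml; a minimal excerpt, assuming the layout shipped with BSC releases (the value is a Go duration in nanoseconds, and 100000000000 = 100s is only an illustration, not a recommendation):

  [Eth]
  # interval between flushes of the in-memory trie to disk: a larger value
  # keeps storage lighter but lengthens the replay after a crash
  TrieTimeout = 100000000000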

Performance Tuning

In the logs, mgasps (million gas per second) measures the block-processing throughput of the full node; make sure the value stays above 50.
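
One way to keep an eye on it, assuming the node runs under systemd as a service named bsc (adjust to however you collect logs):

  journalctl -u bsc -f | grep -o 'mgasps=[0-9.]*'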

The node can enable profiling with the --pprof flag.

Collect a profile with curl -sK -v http://127.0.0.1:6060/debug/pprof/profile?seconds=60 > profile_60s.out, and the dev community can help analyze the performance.
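
A sketch of the full loop, assuming the node was started with --pprof (6060 is the default pprof port) and a Go toolchain is available for a first local look:

  # capture a 60-second CPU profile from the running node
  curl -s "http://127.0.0.1:6060/debug/pprof/profile?seconds=60" > profile_60s.out
  # optional: inspect the hottest functions before sharing the file
  go tool pprof -top profile_60s.out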

New Node

If you are building a new BSC node, please fetch a snapshot from: https://github.com/binance-chain/bsc-snapshots
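
A download-and-extract sketch; the file name is a placeholder, take the real link from the snapshots page (the snapshots are lz4-compressed tarballs, so the lz4 tool must be installed):

  wget -c "https://.../geth-YYYYMMDD.tar.lz4"            # placeholder URL from the snapshots page
  tar -I lz4 -xvf geth-YYYYMMDD.tar.lz4 -C /data/bsc     # /data/bsc is an illustrative datadir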

@unclezoro unclezoro pinned this issue Nov 1, 2021
@unclezoro unclezoro changed the title some enhancements that can address the existing challenges with running a BSC full node: some tips for running a BSC full node Nov 1, 2021
@psdlt

psdlt commented Nov 1, 2021

@guagualvcha thank you. Could you please elaborate on what exactly DisablePeerTxBroadcast changes? Reading through the code, the only usage I can find is here, which suggests that if DisablePeerTxBroadcast is set to true, our node will not receive notifications about pending transactions. Am I missing something?

@ghost

ghost commented Nov 2, 2021

ERROR[11-02|06:02:55.001] Failed to open snapshot tree err="head doesn't match snapshot: have 0x5c17a8fc0164dabedd446e954b64e8a54fc7c8b4fee1bbd707c3cc3ed1e45fff, want 0x431565cee8b7f3d7bbdde1265304fa4574dc3531e511e9ffe43ae79d28e431d6"
head doesn't match snapshot: have 0x5c17a8fc0164dabedd446e954b64e8a54fc7c8b4fee1bbd707c3cc3ed1e45fff, want 0x431565cee8b7f3d7bbdde1265304fa4574dc3531e511e9ffe43ae79d28e431d6

@vae520283995

@guagualvcha
https://github.com/binance-chain/bsc/issues#issuecomment-956215679
So do we need to wait until the next version to enable diff sync?

@Nojoix

Nojoix commented Nov 3, 2021

I don't want to be rude here, but BSC is in real danger. These past couple of weeks have been a nightmare for me: I can't resync. I started digging, got in touch with admins, etc., and it's not just me. Geth 1.1.3 was a nightmare and 1.1.4 isn't helping much; the solution you give here doesn't solve anything. If you don't figure out the syncing issue with a proper patch, we don't have a bright future. Yesterday I tried to download the EU snapshot; it was corrupted, and today's full state seems corrupted as well (retrying the download).

@de-ltd

de-ltd commented Nov 3, 2021

I don't want to be rude here, but BSC is in real danger. These past couple of weeks have been a nightmare for me: I can't resync. I started digging, got in touch with admins, etc., and it's not just me. Geth 1.1.3 was a nightmare and 1.1.4 isn't helping much; the solution you give here doesn't solve anything. If you don't figure out the syncing issue with a proper patch, we don't have a bright future. Yesterday I tried to download the EU snapshot; it was corrupted, and today's full state seems corrupted as well (retrying the download).

Agreed, it's a total mess.

@unclezoro
Collaborator Author

@guagualvcha thank you. Could you please elaborate on what exactly DisablePeerTxBroadcast changes? Reading through the code, the only usage I can find is here, which suggests that if DisablePeerTxBroadcast is set to true, our node will not receive notifications about pending transactions. Am I missing something?

Ethereum is a grid (mesh) network, while BSC is a more hierarchical one: transactions flow from full nodes all around the world toward the 21 validators. Validators are usually guarded by sentry nodes, which join the network directly. As the transaction volume on BSC is much larger, the sentry nodes are under pressure handling the transaction-exchange protocol. We extended the protocol so that any full node can declare that it is not interested in pending transactions, since it is not a validator/miner; this saves a lot of network and computation resources. It can be enabled by adding DisablePeerTxBroadcast = true under the [Eth] module of the config.toml file.
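
For example, a minimal config.toml excerpt (any other keys you already have under [Eth] stay unchanged):

  [Eth]
  # opt this non-validator node out of the pending-transaction exchange
  DisablePeerTxBroadcast = true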

@unclezoro
Collaborator Author

@guagualvcha https://github.com/binance-chain/bsc/issues#issuecomment-956215679 So do we need to wait until the next version to enable diff sync?

No need to wait.

@unclezoro
Collaborator Author

unclezoro commented Nov 3, 2021

I don't want to be rude here, but BSC is in real danger. These past couple of weeks have been a nightmare for me: I can't resync. I started digging, got in touch with admins, etc., and it's not just me. Geth 1.1.3 was a nightmare and 1.1.4 isn't helping much; the solution you give here doesn't solve anything. If you don't figure out the syncing issue with a proper patch, we don't have a bright future. Yesterday I tried to download the EU snapshot; it was corrupted, and today's full state seems corrupted as well (retrying the download).

Sorry about that. As far as I know, ops is uploading a new snapshot now that they are aware of the issue, along with some monitoring to ensure data integrity. For the syncing issue, would you open the pprof port on your node, run curl -sK -v http://127.0.0.1:6060/debug/pprof/profile?seconds=60 > profile_60s.out, and upload the profile file? I can help check it.

@vae520283995

No need to wait.

Is v1.1.3 or v1.1.4 recommended now?

@ghost

ghost commented Nov 4, 2021

Is v1.1.3 or v1.1.4 recommended now?

I have upgraded to 1.1.4 and started the node with snapshot data.

@vae520283995

@guagualvcha Is the pruning done?

INFO [11-04|14:01:41.317] Pruning state data                       nodes=6,918,707,201 size=1.94TiB    elapsed=8h11m1.703s  eta=55.513s
INFO [11-04|14:01:49.318] Pruning state data                       nodes=6,920,547,567 size=1.94TiB    elapsed=8h11m9.704s  eta=47.676s
INFO [11-04|14:01:57.318] Pruning state data                       nodes=6,922,390,198 size=1.94TiB    elapsed=8h11m17.704s eta=39.822s
INFO [11-04|14:02:05.320] Pruning state data                       nodes=6,924,202,706 size=1.95TiB    elapsed=8h11m25.706s eta=32.105s
INFO [11-04|14:02:13.320] Pruning state data                       nodes=6,926,075,421 size=1.95TiB    elapsed=8h11m33.706s eta=24.125s
INFO [11-04|14:02:21.324] Pruning state data                       nodes=6,927,973,074 size=1.95TiB    elapsed=8h11m41.710s eta=16.051s
INFO [11-04|14:02:29.324] Pruning state data                       nodes=6,929,789,240 size=1.95TiB    elapsed=8h11m49.710s eta=8.311s
INFO [11-04|14:02:37.327] Pruning state data                       nodes=6,931,501,962 size=1.95TiB    elapsed=8h11m57.713s eta=1.019s
INFO [11-04|14:02:38.439] Pruned state data                        nodes=6,931,741,625 size=1.95TiB    elapsed=8h11m58.825s
INFO [11-04|14:02:41.037] Compacting database                      range=0x00-0x10 elapsed="3.329µs"

@Lajoix

Lajoix commented Nov 4, 2021

Sorry about that. As far as I know, ops is uploading a new snapshot now that they are aware of the issue, along with some monitoring to ensure data integrity. For the syncing issue, would you open the pprof port on your node, run curl -sK -v http://127.0.0.1:6060/debug/pprof/profile?seconds=60 > profile_60s.out, and upload the profile file? I can help check it.

Well, the mirrors are gone, and for now I don't have a node, as I can't sync from genesis (and can't download snapshots).

I tried your method again (tried twice) and got:
Failed to store fast sync trie progress err="leveldb/table: corruption on data-block (pos=2051287): checksum mismatch, want=0x548fe288 got=0x525970c2 [file=060694.ldb]"

To be honest, it's very confusing. I used to fast-resync from genesis; it took 6-9 hours to import the state entries and then 3 days to download everything.

@jcaffet

jcaffet commented Nov 4, 2021

I don't want to be rude here, but BSC is in real danger. These past couple of weeks have been a nightmare for me: I can't resync. I started digging, got in touch with admins, etc., and it's not just me. Geth 1.1.3 was a nightmare and 1.1.4 isn't helping much; the solution you give here doesn't solve anything. If you don't figure out the syncing issue with a proper patch, we don't have a bright future. Yesterday I tried to download the EU snapshot; it was corrupted, and today's full state seems corrupted as well (retrying the download).

I totally agree with that, as we face exactly the same scenario. The worst part is the total lack of communication, compared with the size of the project.

@psdlt

psdlt commented Nov 4, 2021

@Lajoix @jcaffet
folks, what file systems do you use on your servers? I've read somewhere that xfs is better than ext4 (sorry, don't remember where; don't have a link).
Also, do you use a single disk or RAID?
I have one server on AWS (i3en.2xlarge, RAID0, xfs); I've been running it for about half a year and never had the issues you're describing.
A few days ago I set up another server on Vultr (also RAID0, also xfs); it fast-synced to the latest block from scratch in under a day.

If you're constantly having sync issues and can't catch up to the network, review your hardware setup; maybe spin up a different instance in a different region. Maybe you just got a busy host, who knows.

@jcaffet

jcaffet commented Nov 4, 2021

@Lajoix @jcaffet folks, what file systems do you use on your servers? I've read somewhere that xfs is better than ext4 (sorry, don't remember where; don't have a link). Also, do you use a single disk or RAID? I have one server on AWS (i3en.2xlarge, RAID0, xfs); I've been running it for about half a year and never had the issues you're describing. A few days ago I set up another server on Vultr (also RAID0, also xfs); it fast-synced to the latest block from scratch in under a day.

If you're constantly having sync issues and can't catch up to the network, review your hardware setup; maybe spin up a different instance in a different region. Maybe you just got a busy host, who knows.

Thanks for your feedback. We have AWS i3en.xlarge instances with xfs ... but no RAID0 yet.
We were using ext4 and also recently moved to xfs (info in #189).
We have had a node for months, but we have been facing issues since the middle of last week.

@Lajoix

Lajoix commented Nov 4, 2021

@Lajoix @jcaffet folks, what file systems do you use on your servers? I've read somewhere that xfs is better than ext4 (sorry, don't remember where; don't have a link). Also, do you use a single disk or RAID? I have one server on AWS (i3en.2xlarge, RAID0, xfs); I've been running it for about half a year and never had the issues you're describing. A few days ago I set up another server on Vultr (also RAID0, also xfs); it fast-synced to the latest block from scratch in under a day.

If you're constantly having sync issues and can't catch up to the network, review your hardware setup; maybe spin up a different instance in a different region. Maybe you just got a busy host, who knows.

I'm running 24 cores, 64 GB of RAM, and a 1 TB NVMe SSD on Ubuntu with a 1 Gbps connection. I'm on ext4, and I'm not sure switching to xfs would really make a difference. I used to resync from genesis with no issue.
I just tried another fast sync from genesis and got this error:

Failed to update chain markers error="leveldb/table: corruption on data-block (pos=1359583): checksum mismatch, want=0xdeca7719 got=0xc3696716 [file=351971.ldb]"

I'm very confused, to be honest.

@Cwsor

Cwsor commented Nov 4, 2021

@Lajoix @jcaffet folks, what file systems do you use on your servers? I've read somewhere that xfs is better than ext4 (sorry, don't remember where; don't have a link). Also, do you use a single disk or RAID? I have one server on AWS (i3en.2xlarge, RAID0, xfs); I've been running it for about half a year and never had the issues you're describing. A few days ago I set up another server on Vultr (also RAID0, also xfs); it fast-synced to the latest block from scratch in under a day.
If you're constantly having sync issues and can't catch up to the network, review your hardware setup; maybe spin up a different instance in a different region. Maybe you just got a busy host, who knows.

Thanks for your feedback. We have AWS i3en.xlarge instances with xfs ... but no RAID0 yet. We were using ext4 and also recently moved to xfs (info in #189). We have had a node for months, but we have been facing issues since the middle of last week.

This timeline lines up with my experience as well. The node ran fine for months until about a week ago. The current server is a Ryzen 5950 with 128 GB of RAM and an NVMe SSD in RAID1. I've seen others with no issues on lesser specs, but I have been unable to keep up with the current block; I'm always about 50 blocks behind.

@unclezoro
Collaborator Author

@guagualvcha Is the pruning done?

INFO [11-04|14:01:41.317] Pruning state data                       nodes=6,918,707,201 size=1.94TiB    elapsed=8h11m1.703s  eta=55.513s
INFO [11-04|14:01:49.318] Pruning state data                       nodes=6,920,547,567 size=1.94TiB    elapsed=8h11m9.704s  eta=47.676s
INFO [11-04|14:01:57.318] Pruning state data                       nodes=6,922,390,198 size=1.94TiB    elapsed=8h11m17.704s eta=39.822s
INFO [11-04|14:02:05.320] Pruning state data                       nodes=6,924,202,706 size=1.95TiB    elapsed=8h11m25.706s eta=32.105s
INFO [11-04|14:02:13.320] Pruning state data                       nodes=6,926,075,421 size=1.95TiB    elapsed=8h11m33.706s eta=24.125s
INFO [11-04|14:02:21.324] Pruning state data                       nodes=6,927,973,074 size=1.95TiB    elapsed=8h11m41.710s eta=16.051s
INFO [11-04|14:02:29.324] Pruning state data                       nodes=6,929,789,240 size=1.95TiB    elapsed=8h11m49.710s eta=8.311s
INFO [11-04|14:02:37.327] Pruning state data                       nodes=6,931,501,962 size=1.95TiB    elapsed=8h11m57.713s eta=1.019s
INFO [11-04|14:02:38.439] Pruned state data                        nodes=6,931,741,625 size=1.95TiB    elapsed=8h11m58.825s
INFO [11-04|14:02:41.037] Compacting database                      range=0x00-0x10 elapsed="3.329µs"

Yes.

@charliedimaggio

I don't want to be rude here, but BSC is in real danger. These past couple of weeks have been a nightmare for me: I can't resync. I started digging, got in touch with admins, etc., and it's not just me. Geth 1.1.3 was a nightmare and 1.1.4 isn't helping much; the solution you give here doesn't solve anything. If you don't figure out the syncing issue with a proper patch, we don't have a bright future. Yesterday I tried to download the EU snapshot; it was corrupted, and today's full state seems corrupted as well (retrying the download).

Sorry about that. As far as I know, ops is uploading a new snapshot now that they are aware of the issue, along with some monitoring to ensure data integrity. For the syncing issue, would you open the pprof port on your node, run curl -sK -v http://127.0.0.1:6060/debug/pprof/profile?seconds=60 > profile_60s.out, and upload the profile file? I can help check it.

Would it be possible to have some guidance on what tools, if any, we can use to analyse the profile_60s.out file ourselves?

@Sharp-Lee

Sharp-Lee commented Nov 5, 2021

@guagualvcha thank you. Could you please elaborate on what exactly DisablePeerTxBroadcast changes? Reading through the code, the only usage I can find is here, which suggests that if DisablePeerTxBroadcast is set to true, our node will not receive notifications about pending transactions. Am I missing something?

Ethereum is a grid (mesh) network, while BSC is a more hierarchical one: transactions flow from full nodes all around the world toward the 21 validators. Validators are usually guarded by sentry nodes, which join the network directly. As the transaction volume on BSC is much larger, the sentry nodes are under pressure handling the transaction-exchange protocol. We extended the protocol so that any full node can declare that it is not interested in pending transactions, since it is not a validator/miner; this saves a lot of network and computation resources. It can be enabled by adding DisablePeerTxBroadcast = true under the [Eth] module of the config.toml file.

So, does that mean I cannot subscribe to pendingTransactions?

@barryz
Contributor

barryz commented Nov 5, 2021

After upgrading to version v1.1.3 and running the service with your suggested settings, I am stuck in syncing. The error log, in brief, shows as below:

lvl=eror msg="\n########## BAD BLOCK #########\nChain config: {ChainID: 56 Homestead: 0 DAO: <nil> DAOSupport: false EIP150: 0 EIP155: 0 EIP158: 0 Byzantium: 0 Constantinople: 0 Petersburg: 0 Istanbul: 0, Muir Glacier: 0, Ramanujan: 0, Niels: 0, MirrorSync: 5184000, Berlin: <nil>, YOLO v3: <nil>, Engine: parlia}\n\nNumber: 12384585\nHash: 0xc8f9d3fea1fe05242ed575dec20dbf781a06b978ca3de108fdb2cda4cbae9636\n\t 0: cumulative: 21139 gas: 21139 contract: 0x0000000000000000000000000000000000000000 status: 1 tx: 0xe2059bfb1cc59b2a45887f5cc6e220884b7bb3ae4d2e1702d9a2d5ff17680c1d logs 
... ...
expected tx hash 0xd5a6db9b741519b1332fd32da482c5ddb49d97a361ef8868d256698995c9e871, get 0x78b01570f90ec93f244d3680075db0aca374fc5c54710c8564669b17e6c54099, nonce 613215, to 0x0000000000000000000000000000000000001000, value 206954075983563310, gas 9223372036854775807, gasPrice 0, data f340fa010000000000000000000000002d4c407bbe49438ed859fe965b140dcf1aab71a9\n##############################\n

@hdiass

hdiass commented Nov 5, 2021

I don't want to be rude here, but BSC is in real danger. These past couple of weeks have been a nightmare for me: I can't resync. I started digging, got in touch with admins, etc., and it's not just me. Geth 1.1.3 was a nightmare and 1.1.4 isn't helping much; the solution you give here doesn't solve anything. If you don't figure out the syncing issue with a proper patch, we don't have a bright future. Yesterday I tried to download the EU snapshot; it was corrupted, and today's full state seems corrupted as well (retrying the download).

I totally agree with that, as we face exactly the same scenario. The worst part is the total lack of communication, compared with the size of the project.

Totally agree

@gomes7997

I'm following all of these recommendations, and I'm using the recommended AWS hardware for a validator node, even though I'm only operating a regular node. My node still can't catch up after starting from the latest snapshot this morning. The likely problem is that there are not enough healthy nodes in the network providing blocks to nodes that go out of sync. Do you have any suggestions for this problem? Can Binance provide healthy nodes to ensure that others can sync?

@Crypto2

Crypto2 commented Nov 5, 2021

It's probably more a problem of too many blocks and/or too large a block size, meaning it takes too long to process them, rather than any propagation issue.

@gomes7997

No, it's not. I've regularly seen higher processing throughput on the same node hardware under healthier network conditions. It's reported as mgasps (million gas units per second) in the "Imported new chain segment" message. Increasing the block size wouldn't change the node's rate of processing gas units. The bottleneck is that the node likely isn't receiving enough new blocks to reach its throughput capacity. I'm using the recommended validator hardware.

@zyhyung

zyhyung commented Jun 6, 2022

Hi guys,

Is the pruning command line still the same? According to this release, a new data-prune tool was introduced:
https://github.com/bnb-chain/bsc/releases/tag/v1.1.8

I am trying to prune my BSC full node, but I'm unsure of the command.

Any help will be appreciated. Thanks!

@michaelr524

We've successfully started a full node from a snapshot. The snapshot started at 1.5 TB - is this the minimal size at this time?
BTW, I tried pruning and got this error:
head doesn't match snapshot: have 0xdbdcc7285975a1077c86cc1f1913fc490c4a85817b729fdacafdac4da6953b6b, want 0x919fcc7ad870b53db0aa76eb588da06bacb6d230195100699fc928511003b422
Anyone know what this means?

@du5

du5 commented Jun 24, 2022

We've successfully started a full node from a snapshot. The snapshot started at 1.5 TB - is this the minimal size at this time? BTW, I tried pruning and got this error: head doesn't match snapshot: have 0xdbdcc7285975a1077c86cc1f1913fc490c4a85817b729fdacafdac4da6953b6b, want 0x919fcc7ad870b53db0aa76eb588da06bacb6d230195100699fc928511003b422 Anyone know what this means?

BNB48Club/bsc-snapshots is contributing another snapshot dump (543.61 GB).

@michaelr524

Cool! How do you make these? Could you share any links to docs or an explanation? @du5

@ofarukcaki

We've successfully started a full node from a snapshot. The snapshot started at 1.5 TB - is this the minimal size at this time? BTW, I tried pruning and got this error: head doesn't match snapshot: have 0xdbdcc7285975a1077c86cc1f1913fc490c4a85817b729fdacafdac4da6953b6b, want 0x919fcc7ad870b53db0aa76eb588da06bacb6d230195100699fc928511003b422 Anyone know what this means?

BNB48Club/bsc-snapshots is contributing another snapshot dump (543.61 GB).

How can it be that small? I pruned my node and it only shrank to 1.3 TB.

@du5

du5 commented Jun 26, 2022

Follow the official documentation.

@tpalaz

tpalaz commented Jun 28, 2022

I've successfully set up and run my full node using BNB48Club's snapshots and have been able to get it to sync.
One issue I'm facing is that historic transactions can't be retrieved via RPC calls. Specifically, eth_getTransaction and getTransactionReceipt return null, whereas if the transaction is recent (within a few blocks), the node has no problem serving it.

Is this due to the snapshot that was downloaded? Or could it be the configuration of the node? For reference, my startup parameters are:

--config ./config.toml --datadir ./mainnet --cache 32000 --rpc.allow-unprotected-txs --txlookuplimit 0 --http --maxpeers 100 --ws --syncmode=full --snapshot=true --diffsync

IOPS and mgasps targets are exceeded (averaging around 300 mgasps), so synchronization throughput is not the issue.

Likewise, my systemd unit is here:

[Unit]
Description=BSC Full Node

[Service]
User=geth
Type=simple
WorkingDirectory=/home/geth
ExecStart=/bin/bash /home/geth/start.sh
Restart=on-failure
RestartSec=30
TimeoutSec=300
IOWeight=8000
CPUWeight=8000

[Install]
WantedBy=default.target

Any advice on how to fix this issue? The only thing I could think of is the snapshot not having the correct indexes, which doesn't make a ton of sense and probably wouldn't even allow syncing.

@du5

du5 commented Jun 28, 2022

@tpalaz The reason for the small size is that the historical transactions are pruned.

@du5

du5 commented Jun 28, 2022

@tpalaz If the transactions you need to retrieve are from a long time ago, it is recommended to use erigon with pruning turned off; its efficiency will surprise you.

@tpalaz

tpalaz commented Jun 28, 2022

@tpalaz The reason for the small size is that the historical transactions are pruned.

@du5
I guess that's why the download is so small. Does BNB48's erigon snapshot prune as well? Or would you recommend syncing from 0 :(

@du5

du5 commented Jun 28, 2022

@tpalaz The reason for the small size is that the historical transactions are pruned.

@du5
I guess that's why the download is so small. Does BNB48's erigon snapshot prune as well? Or would you recommend syncing from 0 :(

It only saves the latest 5k blocks 🤣

@michaelr524

I got it running very quickly. Thanks for sharing. For our purposes we only need recent blocks, so it fits very well.

@haumanto

haumanto commented Jul 1, 2022

Dear @guagualvcha, the mgasps value of one of our nodes is below 50; attached is the profile file... please help check it...
profile_60s.out.zip

@banlieu451

banlieu451 commented Jul 20, 2022

I've been pruning for 15 hours.

How do I know if it's finished?

@mj-dcb

mj-dcb commented Jul 26, 2022

I've been pruning for 15 hours.

How do I know if it's finished?

You can attach to the node and run the eth.syncing command.
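
For example, assuming the default IPC endpoint under the datadir (the path is illustrative); eth.syncing returns false once the node is fully caught up:

  geth attach --exec 'eth.syncing' ./mainnet/geth.ipc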

@PPianAIC

PPianAIC commented Aug 1, 2022

Excuse me, my node is not fully synced. Can I restart full mode from 0? How do I set it to start from 0?

@stonetown1975

stonetown1975 commented Aug 1, 2022

BAD BLOCK with version v1.1.12. I installed the new version and it was running fine for a day. At block 20,023,433 it stopped with a BAD BLOCK message and cannot move forward.

It was running with --diffsync.

I have tried the solution recommended in #628. I tried to restart it with --snapshot=false and then with --snapshot=false. It did not help. Any suggestion on how to fix it? Is there any technical Discord forum related to BSC? The invite https://discord.com/invite/binancesmartchain does not work anymore.

@hadideveloper

hadideveloper commented Aug 17, 2022

After syncing from the snapshot geth-20220816.tar.lz4, I can only get block data for blocks after number 19144096. eth.getBlock for blocks 1 through 19144096 returns null; however, for 19144097 through the latest block it returns data.

Geth: 1.1.12
OS: Ubuntu 20.04
Snapshot: geth-20220816.tar.lz4

running command: ./geth_linux --config ./config.toml --datadir ./mainnet --cache 100000 --rpc.allow-unprotected-txs --txlookuplimit 0 --http --maxpeers 100 --ws --syncmode=full --snapshot=true --diffsync

Is the snapshot a full copy of all blocks (from genesis to now), or is it just a copy of the latest blocks?

@leviska

leviska commented Aug 18, 2022

Is the snapshot a full copy of all blocks (from genesis to now), or is it just a copy of the latest blocks?

The snapshot probably contains only the last 128 blocks. It's definitely not an archive snapshot (all the data), only the recent blocks.
A full archive snapshot takes around 15-20 TB of space, not 2 TB :)

@suyog-bhat

After syncing from the snapshot geth-20220816.tar.lz4, I can only get block data for blocks after number 19144096. eth.getBlock for blocks 1 through 19144096 returns null; however, for 19144097 through the latest block it returns data.

Geth: 1.1.12 OS: Ubuntu 20.04 Snapshot: geth-20220816.tar.lz4

running command: ./geth_linux --config ./config.toml --datadir ./mainnet --cache 100000 --rpc.allow-unprotected-txs --txlookuplimit 0 --http --maxpeers 100 --ws --syncmode=full --snapshot=true --diffsync

Is the snapshot a full copy of all blocks (from genesis to now), or is it just a copy of the latest blocks?

After release v1.1.8, the official BSC snapshots are block-pruned; before that, they were only state-pruned. That's the reason the block details are missing.
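
A quick way to tell whether a node was restored from a block-pruned snapshot (the IPC path is illustrative): an early block returning null means the block bodies were pruned away.

  geth attach --exec 'eth.getBlock(1)' ./mainnet/geth.ipc                    # null on a block-pruned node
  geth attach --exec 'eth.getBlock("latest").number' ./mainnet/geth.ipc     # recent blocks are still served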

@acswap

acswap commented Aug 26, 2022 via email

@zzzckck
Collaborator

zzzckck commented Dec 14, 2023

replaced by: #1947

@zzzckck zzzckck closed this as completed Dec 14, 2023