BSC is a lost cause #553

Open

kaber2 opened this issue Nov 16, 2021 · 110 comments

@kaber2 commented Nov 16, 2021

Guys, seriously, WTF. This is a blockchain with supposedly billions of value, yet it is governed and developed like the project of a stoned teenager.

I've rarely seen something handled so unprofessionally.

  • There is no code review; patches are simply committed, in most cases without even a proper description of what they do or what problem they try to solve.
  • There doesn't appear to be any reasonable testing process in place. Every update appears to make things worse.
  • There is zero response to bug reports. Hundreds of people report non-syncing nodes or nodes falling out of sync. The response from the "developers": zero.
  • There is no beta testing; stuff is thrown over the fence. Features like diffsync are declared "stable" by decree, even though hundreds of people report the opposite.
  • The developers apparently don't have a freaking clue what it is they are building. The "feature" to disable P2P Tx broadcast must be the most stupid idea I've seen in a long time. If people actually use this completely braindead feature, your network will become highly centralized and many nodes will experience issues with non-executing Txs. How the phrack are Txs going to be propagated within the network without nodes actually doing that?
  • This "fix" is, on top of that, just a stab in the dark. There is zero indication that BSC's quite low unexecuted-Tx rate bears any responsibility for the current situation. Ethereum has 20-fold the number of Txs flying around without the slightest problem.
  • As someone else already wrote, the root cause of the problem is that you mindlessly increased the block size and reduced the block time without doing the actual work required.

Overall, there is only one conclusion: Binance wanted a quick hack to make some money, but is not willing to expend even modest resources to make this thing actually work. Given that they've made billions from this, that is absurd and a huge abuse of the trust (and money) people have put into it.

You have proven incapable of leading, developing and governing this. Just be honest and trash it before more people waste their time and money.

(written by a 4k BNB holder, considering dumping this garbage)

@mcoelho80 commented Nov 16, 2021

I'm tired of trying to keep my node synced. Even with NVMe, 196 GB of RAM, and every possible configuration and sync mode, my node loses sync VERY quickly.

@abarath94 commented Nov 16, 2021

Guys, seriously, WTF. This is a blockchain with supposedly billions of value, yet it is governed and developed like the project of a stoned teenager. […]

I literally can't agree more. The state of the node software is simply horrible. We are trying to build a DAPP with real-time data, but in the current state that is not possible. Ethereum, at least, has been working like a charm for two weeks, without any irregular activity.
If they can't fix these problems, BSC is a lost cause, and developers will switch to another chain.
In its current state, I can't recommend that developers work with BSC.

@m-e-r-k-l-e-root commented Nov 16, 2021

Roughly a year ago I was able to run bsc geth nodes on literally some of the cheapest VPSes (<$20/mo) I could find with relatively decent specs (ah, the good old days). Now I'm paying thousands a month for dedicated hardware just to keep my infra running smoothly. While some of this can be attributed to network growth, most of it is due to what appears to be ineptitude on the part of this project's developers.

@mcoelho80 commented Nov 16, 2021

Roughly a year ago I was able to run bsc geth nodes on literally some of the cheapest VPSes (<$20/mo) I could find with relatively decent specs […]

Where is your server located? Are you experiencing loss of sync? What HW do you have?

@mcoelho80 commented Nov 16, 2021

Where are the developers?

@DamCzech commented Nov 16, 2021

There aren't any. They also broke the light client, which hasn't worked properly since version 1.1.3...

@edgeofthegame commented Nov 17, 2021

Hello everybody.

I want to repeat my suggestion here, at least as a temporary solution:

what if the developers limited the number of transactions per block to something like 300-400 (sorted by gas price)? Blockchain speed would be reduced and every node could sync.

It is a simple and efficient solution. It could be implemented in a few hours and would stabilize the situation at least for a while, giving the developers time to think and maybe come up with something more elegant.

(written by a 0.34 BNB holder, dumped 66% of this garbage already)

@cyberskycat commented Nov 17, 2021

I suggest issuing a new chain named bsc pro, and next year a bsc pro max, and then everyone can sync easily.

@blackerhot commented Nov 17, 2021

damn BSC repo is on fire

@AwesomeMylaugh commented Nov 17, 2021

[image]

Is this normal? I have been waiting for 3 days! Could somebody help me, please?

@tsarv775 commented Nov 17, 2021

I can't agree more. I have many full nodes running and now all of them are unable to sync. Each of these servers costs me $800 per month (previously only $200), and now you tell me I need faster bandwidth and disks, which means the cost will keep rising at a very exaggerated rate. My boss even thought I spent all this money in a nightclub because of the goddamn BSC!
We've been telling you to check the shitty BSC code and solve these problems, as so many node maintainers are puzzled by them. However, your answer was just that the growing BSC data requires a hardware upgrade. WTF???
You just fuxked all of this up; it's going to be the beginning of BSC's failure. If even such a simple issue can't be solved, what other fuxking shit can you do, huh?

@dgdeivid commented Nov 17, 2021

In my case, the problem is that my average time to receive new blocks has gone from 1-2 seconds to 7-30 seconds.

@RumeelHussainbnb commented Nov 17, 2021

I chose AWS to build my node. It is located in eu-west; the instance is an m5zn.3xlarge (12 vCPU, 48 GB memory), mounted with a 2 TB SSD, volume type General Purpose SSD (gp3), 8k IOPS, 250 MB/s throughput, and networking up to 25 Gigabit. The latest version was v1.1.3 when I set up the node. I just used the default config from the latest release.

The performance seems fine; there are a few blocks of lag some of the time, but I have backup nodes for when I need to prune. My robot picks the synced one to call. The performance metrics can be checked from the logs; mgasps is around 50 to 100. I have run the node for about two weeks, and storage is now at 1.6 TB, so it will probably need pruning soon.

[image: BSCAwsNode]
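For what it's worth, the "pick the synced one" part of that setup can be as simple as comparing eth_blockNumber across the backends and calling whichever is highest -- a minimal sketch, with an illustrative endpoint list and assuming curl and jq are available:

```bash
#!/usr/bin/env bash
# Sketch: pick the backup node with the highest head block.
# The endpoint URLs below are placeholders for your own nodes.
ENDPOINTS="http://node-a:8545 http://node-b:8545 http://node-c:8545"

best="" best_height=0
for rpc in $ENDPOINTS; do
  # eth_blockNumber returns the head height as a hex string (0x...)
  h=$(curl -s -m 2 -X POST "$rpc" -H 'Content-Type: application/json' \
    -d '{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]}' \
    | jq -r '.result // empty')
  [ -z "$h" ] && continue
  if [ $((h)) -gt "$best_height" ]; then best_height=$((h)); best=$rpc; fi
done

echo "routing calls to $best (height $best_height)"
```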

Guys, seriously, WTF. This is a blockchain with supposedly billions of value, yet it is governed and developed like the project of a stoned teenager. […]

@diegoxter commented Nov 17, 2021

Just go to Fantom already, jeez

@havsar commented Nov 18, 2021

We created a Telegram group for BSC Node discussions, feel free to join and share experience https://t.me/joinchat/zXCza2tQN0tjYzM0

@Aser2019 commented Nov 18, 2021

Hello,
we doubled the server resources and we still have problems with syncing and transaction broadcasting;
some TXs go through and some don't.
There really is a BUG in BSC that they have to fix before it's too late.
I'm seriously thinking of leaving this network and using another one.

@edgeofthegame commented Nov 18, 2021

I suggest issuing a new chain named bsc pro, and next year a bsc pro max, and then everyone can sync easily.

It might be a good idea. Since the majority of transactions are stupid games, no one would even notice.

@willhamilton24 commented Nov 18, 2021

BASED BASED BASED

@qsvtr commented Nov 18, 2021

come on Avalanche guys, https://github.com/ava-labs/avalanchego

@22388o commented Nov 18, 2021

Come on Build on Bitcoin. The future is there!

@zimbabwean-inflation commented Nov 18, 2021

BSC has always been the definition of a rug chain

@rssnyder commented Nov 18, 2021

"dummies, use this coin" and "code bad" comments are not really helpful in a code repository issue. If you want to have a real conversation about the state of BSC stop commenting nonsense.

@shreyaspapi commented Nov 18, 2021

Guys, seriously, WTF. This is a blockchain with supposedly billions of value, yet it is governed and developed like the project of a stoned teenager. […]

I agree

@kaber2 (Author) commented Nov 18, 2021

Hello, we doubled the server resources and we still have problems with syncing and transaction broadcasting […]

I'm running 15 nodes on 64-core EPYC 7702P CPUs with 512 GB RAM and 2 Intel SSDPE2KX020T8 NVMe drives in RAID 0, each with a 40 GbE network connection. At any point in time, roughly 1/3 of my nodes have fallen out of sync and need to be manually massaged to sync up again.

This most certainly has nothing to do with hardware specs; you can't get much faster than that. The BSC developers fucked up, but are apparently not the least bit interested in debugging and fixing the problem. Their lack of reaction is just a big fat middle finger.

@Blackglade commented Nov 18, 2021

I hear running a node on Algorand is pretty easy: https://developer.algorand.org/docs/run-a-node/setup/install/

@cryptobeaver commented Nov 18, 2021

But CZ bought a Mini Van

@nathanjessen commented Nov 18, 2021

BSC? More like BDC.

Also, is anyone working on a BSC Cash fork, or BSC CZ's Vision?

@pefka commented Nov 18, 2021

this "software" is a piece of shit. They thought that if they thoughtlessly copied the Ethereum blockchain, they could change something there. In fact, it turned out to be a cheap fake, like a Chinese "iphone" on an android for 90$.

@bostarch commented Nov 20, 2021

Hi, this is Stanley at Ankr, a web3 infrastructure company hosting one of the Binance validators and also providing both RPC and staking services to the public. The following are tricks we use at Ankr.

  1. We mostly use bare metal to achieve the best performance for a given level of hardware compared to the major cloud providers.
  2. CPU: an AMD Ryzen 5950X is good enough to power both full and archive mode.
  3. Memory: DDR4 ECC, 128 GB for full, 256 GB for archive. Whether you are using commercial or consumer-grade hardware, please make sure you use ECC memory!!!
  4. Storage: this is the tricky one. Local NVMe dramatically improves BSC's performance. Archive mode currently requires 16 TB of data, so please leave 30 TB for the foreseeable expansion.
  5. Network: this is not critical; a regular 100MB public bandwidth is good enough for syncing from scratch.
  6. Feel free to use any cloud-based VM matching the above specs. But in our experience, bare metal is the ONLY winner for an RPC service provider in terms of cost and performance.

If you are considering a 3rd-party RPC service provider, feel free to apply for one at https://app.ankr.com. In the meantime, Ankr will soon provide a distributed RPC solution for BSC to the public. Stay tuned.

128 GB? I currently have a 64 GB bare-metal system, and regardless of what I set, the node never uses more than 32.4 GB of RAM (committed max is ~40 GB). How can you effectively use 100+ GB? I'd like to know and use the full hardware.

@edgeofthegame commented Nov 20, 2021

@zhongfu
last accessible block timestamp minus the current time (checked a few times per second). It was usually 1-2 seconds; now it's 10-20 seconds.
@bostarch
don't bother, they have the same lag as everybody else, just tested
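For anyone who wants to reproduce that check, a minimal sketch that polls the head block's timestamp over JSON-RPC and subtracts it from the wall clock; the endpoint and polling interval are illustrative assumptions, and it needs curl and jq:

```bash
#!/usr/bin/env bash
# Sketch: print how far the node's head block lags behind wall-clock time.
RPC=${RPC:-http://127.0.0.1:8545}

while true; do
  # eth_getBlockByNumber("latest", false) returns the head block header
  ts=$(curl -s -X POST "$RPC" -H 'Content-Type: application/json' \
    -d '{"jsonrpc":"2.0","id":1,"method":"eth_getBlockByNumber","params":["latest",false]}' \
    | jq -r '.result.timestamp // empty')
  [ -n "$ts" ] && echo "lag: $(( $(date +%s) - ts ))s"   # hex 0x... converts fine in $(( ))
  sleep 0.5
done
```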

@erc1337 commented Nov 20, 2021

Wow guys, reading all these comments really hurts.
Why does nobody care about the chain, the network and the people who are running the nodes?!?!!?!? That should be one of the main priorities.

It's also insane to read how immense the hardware requirements for running a node are. It's worse than a SOL node 😆

If you are looking for a top team that is building a blockchain with upgradeable smart contracts, a microservice-based node, fee-less transactions, support for multiple languages and many other innovative things... read about Koinos.
Try setting up a node for the testnet. The only thing you need to do is start a docker-compose script! Check https://github.com/koinos/koinos or koinos.io

@OiNation commented Nov 20, 2021

I'm not a coder, nor do I run a node. I've just been doing my research on where to set up my project for the working class, and this is a prime example of what's wrong in the world. Here's a small part quoted from one of my posts, and why we should change the things mentioned in this repo:

I have absolutely nothing against people with a billion $. It's how many of them got it, and how they use it, that is the problem. Their power and greed are gained on the backs of working-class people, who are used and thrown away like garbage, while governments (put in place by the wealthiest) hope that they die as soon as they get their first pension cheque.
It's time for wealth sharing according to a person's capability to do something and what that is worth to others. What is the value to society of the builders, the person who cleans operating rooms with an eye for detail, the factory workers making the tools, a surgeon, a media spokesperson for a hospital? Who needs whom, and do we need the politician who decides that a hospital can be built in that city? Or can people vote on the decision that there should be a hospital there? I doubt anyone in the world has a problem with people with talent, knowledge and skills being paid accordingly, but is that the reality?

Use your time and talents for the people who care!
The whole post is here if you agree.

I already had my doubts about the intentions of Binance; now I'm really sure I won't be building on their chain either.
They just want to be a bank that doesn't care about its workers and customers, like so many others in the current system.
@rudy-infstones commented Nov 21, 2021

Bro, are you kidding? Nothing will work on a c5a.8xlarge since only cloud disks are supported there. You won't be able to sync a node there even from a snapshot taken an hour ago, and 12k or 16k IOPS won't help you in any way (I checked it personally two days ago). Could you just give us the exact technical parameters of the server that is used for your validator node?

Hey @Zer34, not sure why it didn't work in your case. I just launched another node using the exact specs mentioned; the node was synced and ran stably. Would you mind letting me know if you tried all the listed items at once (latest binary, 'diffsync', latest config, RPC port off, etc.)? I found it crucial to do all of them on the node for it to work.
If you still have synchronization issues, I suggest using an i3.4xlarge or a better instance to get synced faster.

@guagualvcha (Collaborator) commented Nov 21, 2021

I am one of those who commit many pull requests in the BSC repo. We have been keeping our heads down, building and optimizing BSC to meet the requirements of 2M daily active addresses (an all-time high recently).

I apologize that we have not put more effort into transparency and communication around each release.

However, we do care a lot about the value of the community, and we really appreciate your criticism and suggestions. We will work with our supportive community to keep improving the stability of the infrastructure and to introduce new scaling solutions.

Let me give some summary here:

  1. BSC node sync issues: some validators like Ankr and InfStones have shared more detailed setups. Here are a few proposed tips: #338; #502. State pruning is one key action that many have missed and don't perform as a routine. After rigorous testing, BSC has released the Bruno upgrade (v1.1.5). With diff sync, node sync speed has improved substantially, as we have witnessed on many nodes. More architecture-related improvements are on their way and will be proposed as new BEPs soon.
     Please check it out and let us know your thoughts! If you still have issues, please join the node operators' Telegram https://t.me/joinchat/n2s046C1_OMxMTU1 or Discord: https://discord.gg/J4HUc9zK . You will get more support from the hearty BSC community.
  2. BSC has been open source since day one, and all of the BSC code is transparent and available for review. However, the BEP process should be emphasized to improve the transparency of each and every change we plan to deploy, and to improve our overall communication. If you have a specific proposal, please do propose it. BSC is currently one of the most active blockchains, with a huge number of active addresses and transactions, after several rounds of performance enhancement. This has created new challenges, and we need more support for research, development and engineering. We really look forward to seeing more strong developers join the development. This is a great example from the community: ledgerwatch/erigon#2991. Please propose more suggestions to help us improve the infra.

We'll keep using GitHub as a builder collaboration platform to raise proposals, create concrete issues with facts (e.g., logs), discuss solutions, and build infra. For other types of discussion, join our public communities and feel free to ping our admins whenever necessary.

Node discussion Discord: https://discord.gg/gx9E5fCvmW
Dev discussion Discord: https://discord.gg/E6yDKuV5MT

Thank you for your patience and support!
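For reference, the routine state prune mentioned in point 1 usually boils down to something like the sketch below, using geth's snapshot prune-state subcommand. The data directory and service name are assumptions, the node has to be stopped first, and it needs plenty of free disk and several hours -- check the official docs and keep a backup before trying it:

```bash
#!/usr/bin/env bash
# Sketch of an offline state prune for a BSC full node (not official steps).
set -euo pipefail

DATADIR=/data/bsc        # assumed data directory
sudo systemctl stop bsc  # assumed systemd unit; geth must not be running

./geth snapshot prune-state --datadir "$DATADIR"

sudo systemctl start bsc
```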

@quinnj102 commented Nov 21, 2021

Come on Build on Bitcoin. The future is there!

Why would anyone want to build on Bitcoin with 10-minute block times?

@Relicy commented Nov 22, 2021

I've switched to Solana

@hellodword commented Nov 22, 2021

Bro, are you kidding? Nothing will work on a c5a.8xlarge since only cloud disks are supported there. […]

Hey @Zer34, not sure why it didn't work in your case. […]

@rudy-infstones Can you give your instance type, system configuration, config.toml, and all the commands you execute until the node is syncing? I want to follow your steps exactly. Thanks.

@valamidev commented Nov 22, 2021

Hi, this is Stanley at Ankr, a web3 infrastructure company hosting one of the Binance validators and also providing both RPC and staking services to the public. […]

I had to request a refund from A**r because their nodes were giving responses that were ~2-3 hours out of sync, so even cutting-edge bare metal doesn't seem to be the ultimate solution for staying in sync.
BSC suffers from the problem ETH tries to avoid: an increased gas limit and a reduced block time cannot be sustained for long in a decentralized fashion. Too many poor-quality, resource- and storage-wasting smart contracts have been deployed on BSC.

@ratthakorn2509 commented Nov 22, 2021

Binance Smart Chain / Binance Chain: I would like to know how many assets I have now and how much farming investment I have. How can I see the details? What should I do? Thank you very much.

@rudy-infstones commented Nov 22, 2021

We are one of the BSC validator nodes and provide public API services to the BSC community. As we have observed, the volume of transactions on the BSC chain has reached an unprecedented level. We want to share some pointers to help speed up syncing on your full node:

  1. Use the latest binary version, v1.1.5, and add the --diffsync flag when starting the node.
  2. Recommended configuration (on AWS): at least a c5a.8xlarge with a gp3 disk configured for 12000 IOPS or higher.
  3. Use the latest official snapshot: https://github.com/binance-chain/bsc-snapshots
  4. Configure your node based on the latest config.toml: https://github.com/binance-chain/bsc/releases/download/v1.1.5/mainnet.zip
  5. Try temporarily turning off the RPC port to speed up synchronization.

If you have more problems with node synchronization, please feel free to use our platform (https://infstones.com/) to create a free BSC Public API project or create your dedicated node with only one click.

Bro, are you kidding? Nothing will work on a c5a.8xlarge since only cloud disks are supported there. […]

Hey @Zer34, not sure why it didn't work in your case. […]

@rudy-infstones Can you give your instance type, system configuration, config.toml, and all the commands you execute until the node is syncing? […]

Hey @hellodword, please refer to my previous comment regarding the setup. We used a c5a.8xlarge and the exact official config.toml. Feel free to let me know if you have any further questions. You can also try setting up a BSC node on our platform (https://cloud.infstones.io/)
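Put together, the five tips quoted above amount to roughly the following sketch. The snapshot link is a deliberate placeholder (take the current one from the bsc-snapshots repo and adjust the extraction step to whatever archive format it ships), and the paths are assumptions:

```bash
#!/usr/bin/env bash
# Sketch: bootstrap a BSC full node per the tips above (not official steps).
set -euo pipefail
DATADIR=/data/bsc

# 1 + 4: v1.1.5 binary and the matching mainnet config
wget -O geth https://github.com/binance-chain/bsc/releases/download/v1.1.5/geth_linux
wget https://github.com/binance-chain/bsc/releases/download/v1.1.5/mainnet.zip && unzip mainnet.zip
chmod +x geth

# 3: restore from the latest official snapshot (placeholder URL and format)
wget -O snapshot.archive "<current link from https://github.com/binance-chain/bsc-snapshots>"
tar -xvf snapshot.archive -C "$DATADIR"   # adjust to the actual archive format

# 2 + 5: run with diff sync enabled; HTTP RPC stays off until the node has caught up
./geth --config ./config.toml --datadir "$DATADIR" --diffsync
```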

@perfectcircle2020 commented Nov 22, 2021

Guys, despite all the problems the network currently has, it has reached huge milestones. The fact that people only complain and don't try to come up with practical and implementable solutions makes me sad. Even if you are not a coder or programmer, at least try to read and understand the basics of blockchain technology, put in some effort to think about what might be causing your situation, and speak publicly, providing insights and useful information that other people can learn from. Only complaining won't speed up the sync or make the network better, that's for sure.

Let this be a uniting moment for all of us, when a new phase of BSC begins.

@edgeofthegame commented Nov 22, 2021

here for the memez.
[meme images]

@its5Q commented Nov 23, 2021

@edgeofthegame jokes aside, my server's disk once died from running 2 nodes at the same time, and the hosting provider had to replace it

@0xyolo commented Nov 23, 2021

At C.R.E.A.M. we've been running our BSC validator node on an AWS EC2 m5zn.3xlarge with a 2.5 TB gp3 SSD (3000 IOPS, 125 MB/s), and it had been very stable for a long time. We started seeing sync issues on November 3rd and began missing blocks then.
Last week we enabled BSC's new diff sync feature and upgraded the disk to 8000 IOPS and 250 MB/s, and our node has been stable ever since. We recommend others try the same.
Like any cutting-edge technology, there are often challenges in keeping up with the latest and greatest. While this thread has raised many valid concerns, we believe this is a healthy discussion about how BSC can improve moving forward. We agree with the need for a shared channel where validators can learn from one another, and we're glad to see BSC has already deployed this. Looking forward to more innovations to come!
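For anyone on AWS who wants to try the same gp3 bump, it can be done in place with the AWS CLI -- a short sketch, with a placeholder volume ID:

```bash
# Raise a gp3 volume to 8000 IOPS / 250 MB/s without detaching it (placeholder volume ID)
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --iops 8000 --throughput 250

# Watch the modification progress
aws ec2 describe-volumes-modifications --volume-ids vol-0123456789abcdef0
```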

@miohtama commented Nov 24, 2021

BSC, and its issues, stem from Go Ethereum, which is 2014 tech. BSC can improve, but only if they invest in building a more open and healthy community. The sync issues are very real, and I wrote a Twitter thread about them here:

https://twitter.com/moo9000/status/1463454127095042050

(It contains some useful tips for those who are struggling with syncing.)

@CryptoManiac commented Nov 24, 2021

Just finished syncing a new node on a Micron SATA drive; it took less than 14 hours. So there is no need for NVMe or anything like that. SATA bandwidth is more than enough (20x more than needed, actually) and latency is not as much of an issue as it seemed to be.

@zhongfu commented Nov 24, 2021

@CryptoManiac no need, yes, but it's really really not recommended -- you'd very likely see it falling behind quite often

what's the mgasps like on your node anyway

@CryptoManiac commented Nov 24, 2021

@CryptoManiac no need, yes, but it's really really not recommended -- you'd very likely see it falling behind quite often

Well, I can say that NVMe wouldn't be of much help anyway. I spent a week testing and comparing various hardware and came to exactly this realization: an enterprise-grade SATA drive is more than enough.

what's the mgasps like on your node anyway

I sometimes see 500-700 mgasps, but it's usually between 60 and 200. It's like rolling dice. :)

@CryptoManiac commented Nov 24, 2021

Ubuntu Linux 20.04 on an 8-core Xeon with 32 GB RAM. I tried installing 64 GB to check the results and can say that it wouldn't hurt, but it's rather pointless.

No tweaks or anything; everything is literally running at default settings, including the BSC client itself. Perhaps the main difference from other reports is that this is not a virtual machine but physical hardware.

[image: IMG_0650]

P.S. I forgot to mention that the system and the BSC data folder are located on a Micron 5300 Pro 3.84 TB (MTFDDAK3T8TDS).

@FavourEEZ commented Nov 24, 2021

💭

@thehood1 commented Nov 25, 2021

Ubuntu Linux 20.04 on an 8-core Xeon with 32 GB RAM. […]

Can you please paste your config.toml and start command? I have almost the same physical hardware (8-core Ryzen, 32 GB RAM and NVMe) with Ubuntu Server 20.04.3 LTS installed, and my mgasps value is usually between 10 and 80 (average 50), making the syncing process very slow. Another thing I don't understand: the elapsed-time values in your screenshot are always in ms, while I usually get 10s??? Also, I see this; is this normal:

INFO [11-25|00:20:23.764] do light process success at block num=12,885,118
INFO [11-25|00:20:24.701] do light process success at block num=12,885,119
INFO [11-25|00:20:25.956] do light process success at block num=12,885,120

[image]

@48ClubSirIan commented Nov 25, 2021

the root cause of the problem is that you mindlessly increased the blocksize and reduced the blocktime without doing the actual work required.

Literally can't agree more

@CryptoManiac commented Nov 25, 2021

Can you please paste your config.toml and start command?

Everything is at the defaults.

I have almost the same physical hardware (8-core Ryzen, 32 GB RAM and NVMe) with Ubuntu Server 20.04.3 LTS installed, and my mgasps value is usually between 10 and 80 (average 50), making the syncing process very slow. Another thing I don't understand: the elapsed-time values in your screenshot are always in ms, while I usually get 10s???

I have no idea. I really didn't do anything special. Just ./geth --config ... --datadir ... --diffsync, and that's all. Though my server has an Intel CPU; perhaps that has something to do with these issues. Maybe some compiler optimization glitch or whatever.

[image: IMG_0649]

[image: IMG_0648]

These screenshots were taken with 64 GB of RAM, but it really didn't make any difference for me. I added the extra RAM just for the sake of it.

Also, I see this; is this normal:

INFO [11-25|00:20:23.764] do light process success at block num=12,885,118
INFO [11-25|00:20:24.701] do light process success at block num=12,885,119
INFO [11-25|00:20:25.956] do light process success at block num=12,885,120

Yes, this is normal behaviour.

@tredondo commented Nov 25, 2021

While this thread has raised many valid concerns

I think this thread has gotten way too distracted by specific validator issues, when there are far deeper problems here.

The root of the issues is that this repo is managed like a 🗑️ 🔥. See the first post:

This is a blockchain with supposedly billions of value, yet it is governed and developed like the project of a stoned teenager.

I've rarely seen something handled so unprofessionally.

  • There is no code review; patches are simply committed, in most cases without even a proper description of what they do or what problem they try to solve.
  • There doesn't appear to be any reasonable testing process in place. Every update appears to make things worse.
  • There is zero response to bug reports. Hundreds of people report non-syncing nodes or nodes falling out of sync. The response from the "developers": zero.
  • There is no beta testing; stuff is thrown over the fence.

These are the real problems.

@thehood1 commented Nov 25, 2021

Can you please paste your config.toml and start command?

Everything is at the defaults. […]

You unzipped mainnet.zip v1.1.5 and didn't change anything in the config.toml file, right? Didn't change MaxPeers? I see that you set --cache to 6000 in the start command. As I read here, the recommendation is to set it to half of your server memory. Can you please paste your config.toml and start command? Maybe I am missing something here.

[image]

[image]
@CryptoManiac commented Nov 25, 2021

You unzipped mainnet.zip v1.1.5 and didn't change anything in the config.toml file, right? Didn't change MaxPeers?

Yep.

I see that you set --cache to 6000 in the start command. As I read here, the recommendation is to set it to half of your server memory.

It may help with disk latency issues. However, I don't see any difference between --cache 6000, --cache 64000 or --cache 1000. Perhaps that's mostly because geth sets this value automatically to whatever it likes.

If I were to guess, it would be better to try the same thing on an Intel CPU. It's not really a fair comparison otherwise.

P.S. I think this is off-topic, though. The software itself is not the problem; its development and project management policies are. Or their absence, perhaps.

@thehood1 commented Nov 25, 2021

P.S. I think this is off-topic, though. The software itself is not the problem; its development and project management policies are. […]

Yes, I see that the developers do not care about us. If this continues, no one will stay here.

Thanks for helping. I will try a few more times; if I don't succeed, I'm giving up on BSC.

@ascunha commented Nov 26, 2021

If I were to guess, it would be better to try the same thing on an Intel CPU. […]

I run it with Intel, AMD and Graviton2 on AWS... it's OK; it's not a CPU-vendor problem IMO. And I run with gp3, so it's also not a "you need the best NVMe in the world" problem.

I see many people in many threads running on gp3 with instances that don't match the specs of their disks, or who don't understand the EBS/EC2 IO and bandwidth guarantees... (AWS is not very upfront about the capabilities of their services, but nowadays most of it is in the docs; a few years back you would have had to spend considerable time with support to even find out what "Up To" actually means in their claims.)

I have the same experience with geth cache sizing: setting it to some huge value does not improve things; leaving enough memory for the system works better for me.
When peers are bad and validators are dropping the ball, you see reorgs every 5 blocks or so; it's hard to keep up when the entire chain is fkd.

Devs should probably give the community some love and take the time to improve the docs, and also make it clear that a full node is not at the set-and-forget level... it needs babysitting, monitoring and active maintenance, and sometimes the whole thing will just not keep up; delays at this point are a reality, not an exception.
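As a sketch of what that babysitting can look like in practice, a minimal cron-style check that restarts the node when the head block gets too old; the RPC endpoint, threshold and service name are all assumptions to adapt to your own setup:

```bash
#!/usr/bin/env bash
# Sketch: restart the node if its head block is more than MAX_LAG seconds old.
RPC=http://127.0.0.1:8545
MAX_LAG=120              # tolerated lag in seconds, tune to taste

ts=$(curl -s -m 5 -X POST "$RPC" -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_getBlockByNumber","params":["latest",false]}' \
  | jq -r '.result.timestamp // empty')

if [ -z "$ts" ] || [ $(( $(date +%s) - ts )) -gt "$MAX_LAG" ]; then
  systemctl restart bsc  # assumed systemd unit name
fi
```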
