
Current bandwidth calculations allow low powered accounts to spam easily #1800

Closed
Gandalf-the-Grey opened this issue Nov 24, 2017 · 19 comments

@Gandalf-the-Grey
Contributor

With current network conditions and bandwidth calculations, an account with low SP such as 15 (for example @appbliz) can produce a top-level post every 5 minutes
while still showing: Bandwidth Remaining 98.04% (19 MB of 19.6 MB)

Also, we have to think carefully about HF20 and bandwidth limits that would be just enough to survive. Something like: one short top-level post, two transfers, and three upvotes daily. (Yes, ideally bandwidth would depend on transaction type; some operations, such as market orders and SMT operations, can be more troublesome despite low bandwidth consumption in terms of bytes.)

(UI/UX changes needed to avoid confusion)

@mvandeberg
Contributor

mvandeberg commented Nov 27, 2017

HF 20 is not going to change how bandwidth is calculated.

Picking an exact value for what the "free" bandwidth should be is tricky because what it gets you is dependent on the total usage of the blockchain. It is the definition of a fractional reserve system, so unless we want to change some of the fundamental design decisions of bandwidth, this is a side effect we will have to deal with.

I am going to leave this issue open and assign it to the HF 20 project because it articulates the problem well.

@theoreticalbts
Contributor

One of the problems we have is that we have a single number called "bandwidth," but really we want to have a whole family of consumable resources that we want to rate-limit.

If a transfer or upvote transaction is, say, about 50 bytes, then a 2000-byte post is automatically the same value as 40 transfer / upvote transactions.

If an empty post is about the same size as a transfer / upvote, then setting bandwidth thresholds to allow a low-SP user to have one 2000-byte top-level post per day means that same user will be able to post 40 empty top-level posts per day.

What we really want is, instead of having a single "mana bar" where everything has a cost in "bandwidth mana," to have different mana bars.

Here's an example of how this might work: A top-level post costs "TLP mana." The TLP mana bar has a maximum of 5 posts' worth for any account, and the regeneration rate is proportional to your SP, calibrated so that ordinary users regenerate TLP mana at a rate of 1 post per day. The transaction also has a certain size in bytes, and consumes bandwidth mana as well. Most users will be limited by TLP mana, but really both limits are applied.

I've proposed this before, but it's been a while since we talked about this.
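The multi-bar proposal above can be sketched in a few lines. This is only an illustration of the idea, not steemd code; the `ManaBar` class, its field names, and all the constants (5-post capacity, 1 post/day regeneration) are taken from the example in the comment or invented for the sketch.

```python
from dataclasses import dataclass

@dataclass
class ManaBar:
    """One rate-limited resource pool (e.g. 'TLP mana' or 'bandwidth mana')."""
    capacity: float          # maximum mana the bar can hold
    regen_per_day: float     # regeneration rate (proportional to SP in the proposal)
    current: float = 0.0
    last_update_day: float = 0.0

    def regenerate(self, now_day: float) -> None:
        elapsed = now_day - self.last_update_day
        self.current = min(self.capacity, self.current + elapsed * self.regen_per_day)
        self.last_update_day = now_day

def try_top_level_post(bars: dict, tx_bytes: int, now_day: float) -> bool:
    """A post must pass BOTH the TLP bar and the byte-based bandwidth bar."""
    tlp, bw = bars["tlp"], bars["bandwidth"]
    tlp.regenerate(now_day)
    bw.regenerate(now_day)
    if tlp.current >= 1.0 and bw.current >= tx_bytes:
        tlp.current -= 1.0      # one unit of TLP mana per top-level post
        bw.current -= tx_bytes  # bandwidth mana proportional to transaction size
        return True
    return False
```

With a full 5-post TLP bar regenerating at 1 post/day, a 6th post on the same day fails on TLP mana even though plenty of bandwidth mana remains, which is exactly the behavior the separate bars are meant to produce.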

@mvandeberg
Contributor

We are in agreement about what needs to be done with bandwidth; I have been suggesting the same thing for months. But it is beyond the scope of what needs to be done for HF 20.

@iamsmooth

I'm not convinced it really needs to be as complicated as several bars. In addition, we have already seen that arbitrary per-account limits (such as 4 posts per day) are problematic: they disadvantage desirable usages (including alternate usages such as microblogging and picture sharing, for which typical desirable usage may be many more posts per day than long-form blogging), while not effectively impeding deliberate abuse (which can split stake across accounts).

It may well be that the correct answer to the original issue here is that the reported usage (many small posts) is acceptable. However, it is also the case that any comment/post consumes permanent consensus resources (for the permlink). There are already 10x penalties for certain operations, and that concept can be applied here: bandwidth penalties for operations with long-term costs (comments/posts, regardless of size) should be increased. An additive term could also be added (e.g., each post consumes a fixed 4 KB, or some other number, as a penalty in addition to the size of its content).

Likewise, the penalties for transfer and market operations should be rethought/rebalanced. Transfers do not have long-term costs afaik and should not have any real penalty, and market orders only do to the extent that the order remains open. So in the latter case some part of the extra bandwidth cost could be returned to the account when the order is closed, especially to the extent that #1449 (limiting the lifetime of market orders) is implemented.
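The multiplier-plus-additive-term scheme above might look like the following sketch. All the weights here (the 10x multipliers, the 4 KB fixed term, the operation names) are illustrative numbers from the comment or invented, not actual steemd constants.

```python
# Hypothetical cost weights for illustration; not actual steemd constants.
COMMENT_FIXED_PENALTY_BYTES = 4096   # additive term: permanent permlink cost per post
COMMENT_MULTIPLIER = 10              # 10x-style penalty for long-term consensus cost
ORDER_MULTIPLIER = 10                # market orders cost extra while they stay open

def bandwidth_charge(op_type: str, op_bytes: int) -> int:
    """Charge more for ops with long-term costs, regardless of raw byte size."""
    if op_type in ("comment", "post"):
        return COMMENT_MULTIPLIER * (op_bytes + COMMENT_FIXED_PENALTY_BYTES)
    if op_type == "limit_order_create":
        # Part of this could be refunded when the order closes (per the
        # discussion of #1449 above).
        return ORDER_MULTIPLIER * op_bytes
    return op_bytes  # transfers etc.: no long-term cost, no penalty
```

The additive term is what closes the loophole in the earlier example: with it, an empty post is no longer ~40x cheaper than a 2000-byte one, so spamming many tiny posts stops being a bargain.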

@mvandeberg
Contributor

By multiple bars, I think @theoreticalbts means that there are three costs associated with an operation.

  1. The size on the blockchain
  2. The size in active state
  3. The computational cost

The big design question here is whether there are three distinct resources that are consumed at different global rates, or a linear combination of the three that is charged as a single bandwidth cost.
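The two alternatives in that design question can be contrasted in a short sketch. The weights and pool names below are invented for illustration; nothing here reflects actual steemd accounting.

```python
def charge_linear(chain_bytes: float, state_bytes: float, cpu_us: float,
                  w_chain: float = 1.0, w_state: float = 4.0, w_cpu: float = 0.5) -> float:
    """Alternative 1: one bandwidth pool; the three resource costs are
    collapsed into a single scalar via a weighted sum."""
    return w_chain * chain_bytes + w_state * state_bytes + w_cpu * cpu_us

def charge_distinct(pools: dict, chain_bytes: float, state_bytes: float,
                    cpu_us: float) -> bool:
    """Alternative 2: three separate pools, each consumed (and regenerated)
    at its own global rate; a transaction must fit in all of them."""
    costs = {"chain": chain_bytes, "state": state_bytes, "cpu": cpu_us}
    if all(pools[k] >= v for k, v in costs.items()):
        for k, v in costs.items():
            pools[k] -= v
        return True
    return False  # rejected: at least one pool exhausted; nothing is charged
```

The practical difference: under the linear combination, abundant CPU can "subsidize" scarce state size; under distinct pools, a transaction that exhausts any one resource is rejected no matter how cheap it is on the others.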

@iamsmooth

iamsmooth commented Nov 28, 2017

@mvandeberg I agree. Item 2, though, is a little different in character from the others because it has both a size and a time component.

@mvandeberg mvandeberg added this to To Do in Steem 0.20.0 Dec 6, 2017
@sneak
Contributor

sneak commented Dec 19, 2017

A hybrid model is definitely in order. I’d love to see a concrete spec on what a new composite bandwidth algorithm looks like—after SMTs are fully shipped.

@TimCliff
Contributor

TimCliff commented Jan 6, 2018

With the current bandwidth rules, many users run into unexpected 'surprises' when the current_reserve_ratio drops. With no guidelines on what is 'acceptable/basic' use for a new user, this leads to a poor user experience: users have no way to know if they are close to their limit, and then they are suddenly locked out.

I can see this impacting SMTs too. If SMT creators do not have a clear understanding of what the bandwidth rules are, and the users they onboard are unable to interact in the way they expected without paying extra money, they will likely call the whole thing a scam. Changing the rules after lots of SMTs are launched may lead to complaints as well that "the rules were changed".

I would suggest addressing this before SMTs, so SMT creators know what to expect.


On the design side, I recommend separating out bandwidth for token transfers and content related transactions. It will lead to bad impressions of the platform if users are unable to withdraw money from their account because they overused the social media side of the platform. There can still be limits on the token transfer side of things too, but they should be separate.

I also think that having a minimum (worst case) allowance for each account that is well defined is important, so that users can know "I can _ at least n times per _ if I have x SP" and be able to confidently use the platform knowing that they are at least within the limits.

@iamsmooth

iamsmooth commented Jan 7, 2018

I also think that having a minimum (worst case) allowance for each account that is well defined is important

This exists afaik but it is much lower than the usual amount allowable when the reserve ratio is higher.

Probably a large portion of this is getting better UI tools in place so users can see where they stand with respect to bandwidth. Currently it is completely opaque until the limit is hit and then the result is an error message. Obviously that is not ideal.

I could imagine something like showing the user what percentage of their bandwidth they have used (which may be more than 100%) along with the fluctuating current reserve ratio (>100%). Users concerned about never being bandwidth limited would have to keep their usage under the tighter 100% bound, or at least not go too far above it. Of course this could be graphical as well as, or instead of, displaying numbers.
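A UI along those lines might compute something like the sketch below. The function and field names are hypothetical, not actual steemd API fields; the point is just the two percentages: usage against the tight guaranteed bound, and usage against the current effective allowance inflated by the reserve ratio.

```python
def bandwidth_display(average_bandwidth: float, allocated_bandwidth: float,
                      current_reserve_ratio: float) -> dict:
    """Illustrative UI values: usage as a % of the guaranteed (worst-case)
    allowance, and as a % of the current reserve-ratio-inflated allowance."""
    effective = allocated_bandwidth * current_reserve_ratio
    return {
        # May exceed 100% while the reserve ratio is high:
        "used_pct_of_guaranteed": 100.0 * average_bandwidth / allocated_bandwidth,
        # The limit actually enforced right now:
        "used_pct_of_effective": 100.0 * average_bandwidth / effective,
        "reserve_ratio": current_reserve_ratio,
    }
```

For example, a user at 150% of the guaranteed allowance with a reserve ratio of 3.0 is only at 50% of the effective limit: safe for now, but exposed if the ratio drops.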

@wakproductions

I am a developer trying to help content creators establish a presence on Steemit. Some of the people I am working with have been long established Youtubers and have many old, but quality videos they would like to add to their Steemit accounts.

This bandwidth limitation is nowhere explained in the Steemit FAQ, so I was quite surprised when one of the creators I was helping to post started running into bandwidth limitation problems. I have a question:

Does bandwidth recharge with time?

This will drastically affect whether or not I choose to refer new people to Steemit. Building a following takes a long time and a lot of work. The top person I'm working with struggled for 4 years to get a following on his YouTube page, then suddenly gained traction and snowballed to 30,000 subscribers in the most recent year. If we have to buy STEEM just to continue posting to get past the hump of recognition, then it makes the whole network seem very much like a scam, if not a Ponzi scheme.

If bandwidth does recharge at a certain rate, then maybe we can figure out a way to tailor or consolidate our posts. I think 1 post a day is a bit draconian. But without clear guidelines we don't know what rules to follow and this bandwidth limitation was a nasty surprise which may ultimately cause us to curtail our plans.

@wakproductions

I also want to add that as a new content creator I'm finding it frustrating to get discovered on Steemit. Unless you get the attention of a whale or buy your way in, your posts aren't going to get much Steem or visibility, and I believe that's why the topics on Steemit are so heavily concentrated in a few areas like cryptocurrency or Steem itself. Steemit needs a way of being welcoming to new content producers. This bandwidth limitation sets up an incentive system that will cause Steemit to stay within the niche interests of a few whales.

@iamsmooth

iamsmooth commented Jan 7, 2018

@wakproductions Does bandwidth recharge with time?

Yes. It is a measure of the rate of usage of an account (over a one-week window), not total usage where you need to keep paying to recharge. The latter would be horrible.

In most cases the limit should not be very severe, and certainly nothing like one post per day, especially once you have earned a bit more than the starting (free) SP. (It depends on total activity including posts, comments, votes, transfers, etc., and also considers the size of the posts, though not the size of any linked content stored off-chain.)
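The "rate over a one-week window" idea can be sketched as a simple decaying average. This is only an illustration of the concept from the comment above; steemd's actual averaging formula differs in detail, and the function names here are invented.

```python
WEEK_SECONDS = 7 * 24 * 60 * 60  # the one-week averaging window described above

def decayed_usage(average_bandwidth: float, seconds_since_update: int) -> float:
    """Linear-decay sketch: past usage fades out of the one-week window,
    so an account 'recharges' with time rather than needing to pay."""
    remaining = max(0, WEEK_SECONDS - seconds_since_update)
    return average_bandwidth * remaining / WEEK_SECONDS

def record_usage(average_bandwidth: float, seconds_since_update: int,
                 new_tx_bytes: int) -> float:
    """Decay the old average, then add the new transaction's contribution."""
    return decayed_usage(average_bandwidth, seconds_since_update) + new_tx_bytes
```

Under this sketch, usage from a week ago contributes nothing, so simply waiting restores the full allowance, which is the behavior being confirmed for @wakproductions.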

@wakproductions

@iamsmooth Thanks for the response. That's good to know that it recharges over 7 days. In that case I'll cool it on the posting when we hit the limit and wait it out. We've been getting some (small) authoring rewards and powering up so hopefully this will become less of an issue.

I hope my feedback was helpful. It would be a shame if Steemit suddenly discourages the most enthusiastic new users! I hope this information gets documented somewhere in the FAQ too as I could not find details on the bandwidth limitation & recharge.

@TimCliff
Contributor

TimCliff commented Jan 7, 2018

@wakproductions I opened steemit/condenser#2298 to update the FAQ page with information on bandwidth.

@jamzed

jamzed commented Jan 15, 2018

Hi, maybe these charts will shed more light on the current situation. I've built a simple monitoring tool for tracking current_reserve_ratio, bandwidth, and all types of transactions. Below are graphs from the last ~30 hours, which show that average_block_size is above the limit (>25% of max_block_size) practically all day during European hours.

[chart: average_block_size vs. the 25%-of-max_block_size limit over ~30 hours]

Also, what I've noticed: probably the heaviest operation is custom_json, which seems not to be rate limited and is heavily used by bots.

@mvandeberg
Contributor

What evidence do you have that custom_json is not rate limited? Bandwidth is calculated at the transaction level, which is operation agnostic.

@jamzed

jamzed commented Jan 15, 2018

Maybe "rate limit" isn't the best term; I was talking about the limit on the number of custom_json follow operations within a short period of time.

@mvandeberg
Contributor

In #1956 it is suggested that duplicate follows be rejected. As I stated in that issue, custom json is non-consensus. The contents of the payload cannot and will not be inspected to reject transactions. That violates the design of the custom json op.

@youkaicountry
Contributor

This issue informed the plans for the RC Bandwidth System.
