
Abuse-prevention: rate limits / "storm shield" #8575

Closed
SuperFloppies opened this Issue Sep 2, 2018 · 24 comments

SuperFloppies commented Sep 2, 2018

This request is inspired by the mob incident which occurred on the network in August 2018.

The gist of it is to prevent "ganging up" on a single individual, to provide a little cover until cooler heads prevail or the mob becomes bored. This is not intended as a standalone, comprehensive solution to the problem of mob formation and rule, but rather an individual component that is optional, opt-in, and discouraged from use unless needed (e.g., "use with great need").

This is a summary. The full proposal is at my Web site.

So, then, here is the essence of the proposal:

  • Implement Hashcash or something similar.
  • Possible Method no. 1
    • Implement a preference, it could be named "Mob Protection", "Spam Resistance", or something similar. Boolean, default off.
    • Implement a preference named "Resistance", represented using a slider. The slider would control a numeric value between 0 and 3600. Higher values create higher barriers to entry (require more "postage").
  • Possible Method no. 2
    • Have a "panic button" which activates the feature with a target delay of 1 second; each button press would increase the target delay by 2-4 seconds. The user stops hitting the panic button when the assault is no longer felt.
  • Add a field to represent the message's "postage stamp".
  • Optionally, add an info bar at the top of the user's page when they have logged in using the Web client, informing them that they have the feature enabled and should disable the feature as soon as is practical.
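
The panic-button bookkeeping could be as small as the sketch below (Python is used purely for illustration; the class and method names are hypothetical, but the 1-second starting delay and the 2-4 second increment per press follow the proposal):

```python
import random

class PanicButton:
    """Tracks the target proof-of-work delay for the panic-button variant."""

    def __init__(self) -> None:
        self.target_delay = 0  # seconds; 0 means the feature is off

    def press(self) -> int:
        """First press sets a 1-second delay; each further press adds 2-4 s."""
        if self.target_delay == 0:
            self.target_delay = 1
        else:
            self.target_delay += random.randint(2, 4)
        return self.target_delay

    def clear(self) -> None:
        """Invoked by the user, or by the optional auto-cancellation timer."""
        self.target_delay = 0
```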

Since the default would be "off" there would be little to no impact at rollout, except for the new feature's appearance post-upgrade.

If an account has enabled the boolean preference described above:

  • A client submitting a message with no or insufficient postage would receive an error which signals that a particular "amount" of postage (leading zeroes) is required in order to make the delivery.
  • The postage is computed and attached, and the client transparently attempts redelivery of the message.
  • If the user has not increased the slider, the message is accepted because it now has sufficient "postage".
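
The error-and-retry flow above can be sketched with a Hashcash-style stamp (a minimal illustration, assuming SHA-256 and leading zero bits as the "amount" of postage; the function names are hypothetical):

```python
import hashlib
import itertools

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def mint_stamp(message: str, difficulty: int) -> str:
    """Client side: search for a counter that yields enough leading zero bits."""
    for counter in itertools.count():
        stamp = f"{message}:{counter}"
        if leading_zero_bits(hashlib.sha256(stamp.encode()).digest()) >= difficulty:
            return stamp

def verify_stamp(stamp: str, difficulty: int) -> bool:
    """Server side: always a single hash, regardless of difficulty."""
    return leading_zero_bits(hashlib.sha256(stamp.encode()).digest()) >= difficulty
```

The asymmetry is the point: minting averages about 2^difficulty hash attempts, while verification is always one hash.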

It may also be useful to include a self-cancellation feature, such that the panic condition automatically clears 72–144 hours after it is triggered.

I need to rework my proposal to fit in an issue properly; it is written and formatted as a blog post. Please see the living proposal document for the full details.

codesections commented Sep 2, 2018

I really like this suggestion. Let me make sure I understand it, though. Is the tl;dr version basically:

  • Right now, a group of users can @ someone essentially "for free"
  • If these messages are unpleasant, they can impose a high (emotional/time) cost on the recipient
  • Thus, it would be nice (in rare instances) to have a way to make the messages "cost" some amount of time to send as well
  • We can do this by using Hashcash and requiring (when a certain setting is activated) that users perform some slow calculation before they can @ a specified user
  • This would give users a way to protect themselves (at least partly) from a group of people @ing them in a coordinated way
  • One disadvantage is that this would also interfere with/increase the cost of non-harassing/non-hostile @ messages, so it's not an ideal solution but could be a good stopgap.

Is that about right or did I miss something crucial?

SuperFloppies (Author) commented Sep 2, 2018

Pretty much.

Hashcash need not be used; there are a handful of proof-of-work functions, but Hashcash is the simplest to implement, and for the scale and frequency at which this feature would be used, it is likely the most useful. The one unfortunate downside is that it can be GPU-accelerated; there are some alternative algorithms which are memory-bound as opposed to processor-bound, which might mitigate that.

Also, the low delay intended for this would have an impact on threads with more than one participant, but it would be tolerable, particularly if the panic button version is used.

Also, as I said on Mastodon:

Imagine a toot with 6 users (A, B, C, D, E, F). A is open. B is at 3sec, C at 3sec, D at 5sec, E at 9sec and F at 20 sec.

Each toot in the thread that includes the six of them will be delayed by 20 seconds. The postage required to pay F is sufficient also to pay B, C, D, and E, and the same stamp can be used.

Each token may be spent once on a single instance. There is no reasonable way to prevent double-spending in the [federated universe without] a blockchain, and uh, no.

The intention here is to ensure that no multiplication effect occurs. It makes no sense to compute 5 different stamps for a single message; the tooter should only "pay" once, for the largest cost required to make the message appear in a given actor's inbox.
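
Under that rule, the sender's client only has to look up the single largest requirement in the thread (a sketch; the per-user numbers are taken from the A-F example above, and the function name is hypothetical):

```python
def required_difficulty(recipient_costs: dict) -> int:
    """One stamp per toot: pay only the largest cost among all recipients."""
    return max(recipient_costs.values(), default=0)

# The example thread: A is open, the others have rising requirements (seconds).
thread = {"A": 0, "B": 3, "C": 3, "D": 5, "E": 9, "F": 20}
# One stamp minted at F's level also covers B, C, D, and E.
```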

It is important to stress that this feature cannot stand on its own to prevent harassment by mobs. It will need to be combined with other features, several of which have been proposed here, on blogs, and elsewhere. I have a list of these (which is highly likely to be incomplete).

SuperFloppies (Author) commented Sep 2, 2018

One additional comment: the "seconds" metric is an estimate. Different devices will obviously have different CPU speeds, and so a middle ground needs to be considered when determining how many rounds equals "one second".
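
One plausible calibration, under the SHA-256 assumption: benchmark the device's hash rate and convert the target delay into a difficulty, using the fact that d leading zero bits cost about 2^d attempts on average (everything here is an illustrative client-side estimate, not part of the proposal's text):

```python
import hashlib
import math
import time

def hashes_per_second(sample: int = 200_000) -> float:
    """Benchmark this device's SHA-256 rate on short inputs."""
    start = time.perf_counter()
    for i in range(sample):
        hashlib.sha256(str(i).encode()).digest()
    return sample / (time.perf_counter() - start)

def difficulty_for_delay(target_seconds: float, rate: float) -> int:
    """Pick d such that 2**d is roughly rate * target_seconds attempts."""
    return max(1, round(math.log2(rate * target_seconds)))
```

Because the recipient sets the cost but the sender does the work, the network would still have to agree on one reference rate; faster or GPU-equipped senders will undershoot the target delay.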

ashleyhull-versent (Contributor) commented Sep 3, 2018

Any kind of proof of work is an interesting approach to rate limiting, but it also allows for DDoS of the target instance by design. I'll read the post and make considerations.

I believe there is a discussion to be had about enabling "verified" Mastodon accounts via a third party like https://keybase.io/ ("only allow communication from verified accounts"), and this can be offloaded to the client. Sure, it adds jsgpg or similar into the stack if you wanna go down that route.

SuperFloppies (Author) commented Sep 3, 2018

@ashleyhull-versent Could you clarify your comments? DDoS is far more likely if the server has to track a lot of state.

With this idea, the impact on the server, in terms of state-tracking, is quite minimal. If a user has the feature turned on, only the user's integer ID and the cost need to be stored in memory; that is two integer values. When turned off, those two values are removed. A third integer, representing the expiration of the requirement, is optionally needed if auto-cancellation is implemented.

The client does the PoW; the server never does. The server only verifies it. As I understand the current protocol, more expensive cryptographic operations (specifically, cryptographic signature verification) are already performed for each message received; a hash verification is significantly less expensive computationally and can occur concurrently with the signature verification, resulting in what should be no additional time-cost per message.

Pure server-side tracking would require not only the user ID and rate-limit values, but state would need to be kept for each message processed, from each user, from each instance in the entire federated network. Such a solution would fail on even small instances, due to the sheer amount of state tracking required, and would likely be cut out of larger instances because it'd be too heavyweight. DDoS is then far more likely, because the server has to do a lot more work for each message, plus it must then be involved in actual enforcement.

SuperFloppies (Author) commented Sep 3, 2018

@ashleyhull-versent I do like the idea of being able to tie into a Keybase proof, and it would make for an interesting additional mitigation technique, but I do not believe that either the idea expressed in this issue or the Keybase verification is sufficient on its own. A nice upside, though, is that an instance would be able to delegate (re)verification checks to a scheduled external process run asynchronously, which would allow it to run at the most opportune times for a given instance.

153 commented Sep 3, 2018

Make users pay more to extend the reach of their post

  • Free: sharing within your instance
  • Very cheap: sharing with "partners" of your instance
  • Price varies: sharing with the greater Mastodon network

There are over 1700 servers currently in the federation!

We can empirically measure how many resources are consumed to propagate content across multiple servers and to serve content: requiring a fair, algorithmically determined fee to propagate a message could encourage users to be mindful of the resources they consume (reminding them to "stay in their lane") while also supporting all the various administrators of the federation.

When I say "very cheap," btw, I mean tenths of pennies.

We really do need to think about long-term scalability of Mastodon.

SuperFloppies (Author) commented Sep 3, 2018

@153 Right, because people like me shouldn't have the right to use the system.

Leave the discussions to the big kids, please; your capitalism can be better directed elsewhere.

ashleyhull-versent (Contributor) commented Sep 3, 2018

Pay to whom?

Serkan-devel commented Sep 3, 2018

@ashleyhull-versent I guess the server admins?

oct2pus commented Sep 3, 2018

Can we not implement cryptocurrency in Mastodon.
How about something as simple as a "can't @ me" mode, where all @'s toward a user are rejected?

PrimordialHelios commented Sep 3, 2018

Ignoring why this would be a bad idea for a moment,

If this is all being done by the client, wouldn't it be incredibly simple to sidestep it with a simple greasemonkey script, or by simply using Pleroma or an instance that doesn't implement the hashcash feature?

Honestly this wouldn't be the first time I've simply deleted a script and made a website run better.

samanthaghraves commented Sep 3, 2018

How about no?

No seems like a good option here

ashleyhull-versent (Contributor) commented Sep 3, 2018

"Be liberal in what you accept, and conservative in what you send" - Jon Postel

Mastodon is a collection of resources (servers) communicating to pass bundles of information around liberally... the instance could gate the traffic leaving it (but why would it?), or the server could restrict the traffic reaching it (blocking trash instances is a good first step), and the client at the edge of this graph could figure it out for themselves. Most of the Federation is passive... like we're getting toots from 14000 "known instances" (minus the ones I block)...

So we find ourselves in a position of trust, as in, we trust the servers will pass the bundles around - the client can block messages from new accounts, or accounts on servers they dislike, or accounts not registered with Keybase or something... but you need to push the control toward the client end and away from the instance end for this to work in the long run.

There is another issue open for the suggestion of befriending/promoting other instances, but there we could see a world where some instances are friends and some are unfriended, and enough instances grouping up could cause a schism.

I'm sure the developers would foresee a roadmap for this kind of thing.

XenonFiber commented Sep 3, 2018

no

ashleyhull-versent (Contributor) commented Sep 3, 2018

issue #8499
this is my idea...

  1. Federation is mostly passive, with the multiple thousands of instances.
  2. Block bad instances as a whole (worst case)
  3. Partner/neighbor instances for common Geo-locational or topical reasons.
    or
  4. befriend good instances (you could even do the whole PGP keysigning thing if that helps identify good servers in any way - my instance A is sending a message to instance C which is signed as a good server by my friend server B).

you could build trust around the ecosystem in a few little ways... and that's without even thinking about the end users... end users will likely end up with third-party verification (Keybase) or shared block lists (here be dragons).

the above is more of a stretch tho.. I don't foresee it being done

adamemerson commented Sep 3, 2018

I'm not sure if hashcash would really be a barrier to harassment. If people are only harassing a few individuals at a time, and each harasser only wants to hit each target a few times, a waiting period doesn't seem like a good disincentive. Perhaps making them click through additional steps might provide SOME barrier, but probably not much.

vrzyszn commented Sep 3, 2018

perhaps the worst idea in the history of ideas

Crakila commented Sep 3, 2018

No, this is a terrible idea. Block/report is already implemented. Hashcash is a terrible cash grab and will drive users away.

Please no.

manedfolf commented Sep 3, 2018

Keep crypto out of Mastodon.

#8565 would be preferable and immensely simpler to implement. Even basic timed rate limiting on a toot would be easier to implement.

I don't think any of these are perfect or even particularly good solutions for avoiding harassment, and you can always make a toot followers-only. And like @Crakila said, block/report helps identify and limit/isolate/remove bad actors rather than allowing them to continue to harass under artificial constraints.

vrzyszn commented Sep 3, 2018

a log off button i think has already been implemented too

codesections commented Sep 3, 2018

Hashcash is a terrible cash grab and will drive users away.

Factual correction: the suggestion is to use a lengthy computation to delay the sending of toots (in certain specified situations). There is no situation, under that proposal, in which anyone would be charged money or anything of monetary value (including a cryptocurrency) would change hands. The only effect would be to delay the sending of toots until a computation had been completed.

perhaps the worst idea in the history of ideas

This is uncalled for. We're all working together to make Mastodon a better piece of software, and part of that goal is coming up with ways to make Mastodon more resistant to abuse. I think this idea is a bit unconventional, but I personally like it. Even if it isn't worth following up on, please remember that we're all on the same side here.

vrzyszn commented Sep 3, 2018

and this would make it unquestionably worse, AND wouldn't even work. Delaying posts is also a genius, A+ idea. It's never been tried before on any level, and it certainly didn't cause some instances to end up hours or days behind others on the public TL.

Gargron (Member) commented Sep 3, 2018

The 2.6 roadmap already contains some items for preventing report abuse. I have no interest in integrating any proof-of-work technology in Mastodon.

Gargron closed this Sep 3, 2018

tootsuite locked this conversation as too heated and limited it to collaborators Sep 3, 2018
