
Document anti-spam and anti-harassment design #698

Closed
novalis opened this Issue Mar 30, 2017 · 12 comments

novalis commented Mar 30, 2017

The first question that I (and at least some other folks) ask when looking at a new platform is: how will this be used to send me unwanted messages, and what can I do about it?

Even if the answer is, "we have no idea how to solve that problem", it would be good if the FAQ or other public documents addressed this directly.

Gargron commented Mar 30, 2017

  1. Settings->Preferences: "block notifications from people who aren't following me / whom I am not following" (see the sketch after this list)
  2. Reporting accounts
  3. Admins can sandbox spammy or trolling/hateful accounts so they aren't visible outside their own bubble
  4. Worst offenders can be suspended, removing them and their content from the instance entirely
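
For what it's worth, a minimal sketch of how mechanism 1 could work, assuming the recipient's follower and following lists are available as sets of account IDs (the names here are illustrative, not Mastodon's actual code):

```python
from dataclasses import dataclass

@dataclass
class Preferences:
    block_from_non_followers: bool = False  # "people who aren't following me"
    block_from_non_followed: bool = False   # "people whom I am not following"

def should_notify(sender_id: str, followers: set[str], following: set[str],
                  prefs: Preferences) -> bool:
    """Return True if a notification from sender_id should reach this user."""
    if prefs.block_from_non_followers and sender_id not in followers:
        return False
    if prefs.block_from_non_followed and sender_id not in following:
        return False
    return True
```
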
novalis commented Mar 30, 2017

This is a good start for a FAQ. I still don't understand how these things interact with federation. E.g. can someone just run a spammy/abusive server and ignore all reports?

Gargron commented Mar 30, 2017

Your instance is your gatekeeper in that case. Someone can run a spammy/abusive server, but your instance's admin can blacklist that server, or a particular account from it.
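
As a rough illustration (the names and data structures here are assumptions, not Mastodon's actual schema), the gatekeeping check might look like:

```python
BLOCKED_DOMAINS = {"spam.example"}        # whole servers blacklisted by the admin
BLOCKED_ACCOUNTS = {"troll@ok.example"}   # individual remote accounts

def accept_remote_content(author: str) -> bool:
    """author is a federated handle like 'user@domain'."""
    _, _, domain = author.partition("@")
    return domain not in BLOCKED_DOMAINS and author not in BLOCKED_ACCOUNTS
```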

novalis commented Mar 30, 2017

So I have to trust my admin. Of course, I have to trust my admin anyway, but I have to trust them to be on the ball, rather than just to benignly neglect me.

Can't a spammer just change their server's name? Or their account name?

Gargron commented Mar 30, 2017

It's not easy to make those changes (especially the server name, since it involves purchasing a new domain name), and they are easy to block again.

novalis commented Mar 30, 2017

a.example.com, b.example.com, c.example.com... No need to buy a new domain name.
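
(One plausible counter, purely hypothetical here rather than an existing Mastodon feature: suffix-match blocks, so that blocking example.com also covers every subdomain under it.)

```python
def is_blocked(domain: str, blocked_domains: set[str]) -> bool:
    """A block on 'example.com' also covers a.example.com, b.example.com, ..."""
    parts = domain.lower().split(".")
    return any(".".join(parts[i:]) in blocked_domains for i in range(len(parts)))

assert is_blocked("a.example.com", {"example.com"})
assert not is_blocked("example.org", {"example.com"})
```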

novalis commented Mar 30, 2017

Anyway, if the solution is that admins have to play whack-a-mole, then that's the solution. It's just worth documenting that so that users know what they have to worry about.

yiskah commented Mar 31, 2017

If you don't trust your admin, you can migrate your account pretty easily (using follow and block list import/export) to a server run by an admin whom you trust. At this point, cases of harassment have not been frequent enough for the whack-a-mole approach to be difficult.

One feature that has been brought up in the past, but not implemented because it hasn't yet been needed, is a switch that filters out notifications from users with the default avatar (a "hide eggs" mode). So if we start having an Egg Problem, that is something that could be implemented. Right now it seems low priority, since we don't have people saying they're being harassed by users with default icons.
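
A minimal sketch of what that switch could look like, assuming notifications carry the sender's avatar URL and the default avatar lives at a known path (both are assumptions here):

```python
DEFAULT_AVATAR = "/avatars/original/missing.png"  # assumed default-avatar path

def filter_eggs(notifications: list[dict], hide_eggs: bool) -> list[dict]:
    """Drop notifications whose sender still uses the default avatar."""
    if not hide_eggs:
        return notifications
    return [n for n in notifications if n["sender_avatar"] != DEFAULT_AVATAR]
```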

novalis commented Mar 31, 2017

FWIW, I think "default icon" is not actually what Twitter used (but I didn't work in that department, so check for yourself). Instead, it was the age of the account. Of course, without centralized accounts, that's trickier to track.
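
The filter itself is simple; the federated catch is that a remote account's creation date is whatever the remote server reports. A sketch (hypothetical, not existing code):

```python
from datetime import datetime, timedelta, timezone

def too_new(created_at: datetime, min_age: timedelta = timedelta(days=7)) -> bool:
    """True if the account is younger than min_age. For remote accounts,
    created_at is self-reported by the other server, so it can be forged."""
    return datetime.now(timezone.utc) - created_at < min_age
```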

If I migrate my account, do my followers also have to migrate? Or is there some sort of forwarding?

The only reason that unwanted content is not yet an issue is that the platform is not yet popular.

yiskah commented Mar 31, 2017

  1. That could probably be implemented too?
  2. Yes, people would have to re-follow you, but your old account would still exist, so you could tell them to re-follow. Some followers won't follow your new account, but those who are actively paying attention probably will. (As someone who has done this, I'd say I regained 80% of my followers within a month.) A feature that automatically makes users follow an unknown account on another server without their consent would be difficult to implement safely. Everyone you follow sees your new account follow them at once when you import your follow list, so that at least helps get mutuals back. The block list can also be carried over, and since it's just a CSV file, someone could easily pre-create mass-block lists of known trolls for import (see the sketch after this list).
  3. This is not quite true! The reason it isn't a constant issue is that we aren't massive, but we have been "raided" in the past, early on, which made anti-harassment features a priority.
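
Regarding the block-list sharing in item 2, here's a sketch of that workflow, assuming the format is one account handle per row (illustrative; check the actual exporter for details):

```python
import csv

def export_blocks(handles: list[str], path: str = "blocked_accounts.csv") -> None:
    """Write one 'user@domain' handle per row."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows([h] for h in handles)

def import_blocks(path: str = "blocked_accounts.csv") -> list[str]:
    """Read the handles back for mass-blocking."""
    with open(path, newline="") as f:
        return [row[0] for row in csv.reader(f) if row]
```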

Another feature that's been in the works since before Twitter's recent mass-tagging problem is a solution to that problem: thread-muting, a way to say "stop giving me notifications on this thread." This is a proactive anti-harassment feature, built before anyone has even tried that tactic here.
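
In sketch form (the names are illustrative, not the actual implementation), thread-muting is a per-user set of silenced conversations consulted before delivering a reply notification:

```python
muted: set[tuple[str, str]] = set()  # (user_id, conversation_id) pairs

def mute_thread(user_id: str, conversation_id: str) -> None:
    muted.add((user_id, conversation_id))

def should_notify_reply(user_id: str, conversation_id: str) -> bool:
    return (user_id, conversation_id) not in muted
```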

As for rapid domain-changing, I'm not sure what could be done about that. I haven't done any work on the back-end, so it's outside my realm of knowledge.

A benefit of the project being open-source is that everyone can contribute, and if somehow something happened to Gargron, we could just fork the project, make new instances, and continue usage and development, if at a slower pace. Dissatisfied users are not beholden to anyone and have the ability to undo or change things they don't like. (For instance, the website used to be very low contrast, which I went and fixed. I know of someone developing an entire alternative web interface, which they can do because of the open API.)

A downside to being open-source is that we cannot control what other servers do. Someone set up a single-user instance and modified the code so that their instance did not have a character limit. This resulted in Go submitting #658, which makes long posts collapse, preventing someone from setting up an instance, following themselves from another instance, and then spamming. The person who had made the instance without a character limit did not use it to spam (they're actually the same person who implemented user muting), but we recognized the potential.

Anyone can fork the code and make their instance simply not run any anti-harassment features we implement. Our instance would still have those features, but we can't dependably make servers have unique IDs, as someone could just modify their code and change the ID. At present, there's no known way in the OStatus protocol to check another server's software and version. A server is a server, whether it runs mastodon, gnusocial, or postActiv. Even if we implemented some sort of handshake, someone could modify their code to falsely declare their server a compatible mastodon server when it actually runs something else.

One thing that limits this is that the federated timeline only shows users whom someone on that particular instance follows. If we achieve actual decentralization (rather than most people being piled onto mastodon.social), then in order to spam everyone, a spammer would have to make accounts on every instance they wanted to spam and then follow their own instance to hook the servers up. This is possible, but increasingly a pain. And even then, users can switch to the Local Timeline while the situation is handled.

Unfortunately, whatever systems we implement to counteract harassment, anyone committed enough will find a way to circumvent them. Trolls seem to have immense creativity in developing new tactics, which is terribly wasted talent; it'd be nice if, rather than using those tactics, they helped find ways to counteract them. Even if we find a way to avoid user/server whack-a-mole, there will always be method whack-a-mole. At this point, we have features already implemented or in development that counteract every harassment method we have identified as potentially usable, except for the rapid domain-changing you brought up, which is definitely something we should plan to counter at some point.

If you have other concerns, I'm happy to try my best to answer. A very large portion of our users and dev volunteers are LGBT people who have been targeted for harassment in the past, and it has been a huge priority for us to make this site feel safe. I myself have even been doxxed. Trust me that I'm always thinking of ways to combat this, though personally my capabilities are front-end only; otherwise all I'm doing is convincing Gargron :P. But I know he cares deeply about this as well, since harassment is one of the main things that drives people off of Twitter, and being better than Twitter at handling it will make or break the project.

yiskah commented Mar 31, 2017

Anyway, the request here seems to be to create a document or page specifically outlining what is being done to counteract harassment. I'd be happy to help work on such a document if we can reach a consensus on where it should live.

Gargron commented Jun 29, 2017

This is probably not a good fit for a GitHub issue; feel free to use Discourse.

Gargron closed this Jun 29, 2017
