content moderation
benbalter committed Aug 28, 2020
1 parent 69449c2 commit 149b1cf
Showing 1 changed file with 1 addition and 1 deletion.
@@ -51,7 +51,7 @@ Just building tools or establishing policies will never be enough. Equally impor

### Table stakes for any platform with user-to-user interactions

- Looking at this list, it may be easy to ask why - if GitHub implemented these features - Kat's experience was still possible. The list above represents the _bare minimum_ I'd expect of any modern social network today. The reality is that trust and safety is an adversarial space, and it requires a baseline of ongoing investment to stay one step ahead of those who wish to use your platform to do harm to others - [federated community management](https://ben.balter.com/2019/07/18/a-community-of-communities-oscon-2019/), automated flagging, sentiment analysis, [sockpuppet](https://en.wikipedia.org/wiki/Sockpuppet_(Internet)) correlation, [brigade](https://www.merriam-webster.com/words-at-play/brigading-online-poll-meaning) prevention, anomaly detection, reducing the visibility and discoverability of toxic content, temporary interaction limits, reputation scores, identity verification, minimizing in-product bias, detailed platform policy and playbooks - the list goes on. And that's _only_ looking at targeted harassment, without addressing broader trust and safety concerns like privacy, spam, inauthentic behavior, faking signals of trust, intentionally misleading content, impersonation, phishing, illegal content,[^2] malware, namespace reuse, financial fraud, resource abuse, rate limiting, moderator safety, or account security, among other ongoing threats to your users and your community. Without action on your part, each of these negative interactions could irreparably harm your users and erode trust in - and the value of - your platform.
+ Looking at this list, it may be easy to ask why - if GitHub implemented these features - Kat's experience was still possible. The list above represents the _bare minimum_ I'd expect of any modern social network today. The reality is that trust and safety is an adversarial space, and it requires a baseline of ongoing investment to stay one step ahead of those who wish to use your platform to do harm to others - [federated content moderation](https://ben.balter.com/2019/07/18/a-community-of-communities-oscon-2019/), automated flagging, sentiment analysis, [sockpuppet](https://en.wikipedia.org/wiki/Sockpuppet_(Internet)) correlation, [brigade](https://www.merriam-webster.com/words-at-play/brigading-online-poll-meaning) prevention, anomaly detection, reducing the visibility and discoverability of toxic content, temporary interaction limits, reputation scores, identity verification, minimizing in-product bias, detailed platform policy and playbooks - the list goes on. And that's _only_ looking at targeted harassment, without addressing broader trust and safety concerns like privacy, spam, inauthentic behavior, faking signals of trust, intentionally misleading content, impersonation, phishing, illegal content,[^2] malware, namespace reuse, financial fraud, resource abuse, rate limiting, moderator safety, or account security, among other ongoing threats to your users and your community. Without action on your part, each of these negative interactions could irreparably harm your users and erode trust in - and the value of - your platform.

What may appear to be an "edge case" on the surface is, in fact, the reality of being a service provider on the internet today - one that's unfortunately and increasingly at the forefront of our ongoing conversation about the role social networks play in modern society. While Kat's experience is undeniably terrible, if this can happen to someone who spends their day building welcoming communities (and on a platform that had invested in trust and safety for some time before it happened), imagine what harm you might cause to your users, your community, and your business if you don't take trust and safety seriously before someone (like Kat) gets hurt.
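Several of the mechanisms named above - temporary interaction limits and rate limiting in particular - reduce to a small amount of concrete engineering. As a minimal Python sketch (the class name, thresholds, and API are hypothetical illustrations, not GitHub's actual implementation), a temporary interaction limit can be a sliding-window counter per user:

```python
import time
from collections import defaultdict, deque


class InteractionLimiter:
    """Sliding-window limiter: at most max_events per window_seconds per user.

    A hypothetical sketch of a "temporary interaction limit" - not any
    platform's real implementation.
    """

    def __init__(self, max_events=5, window_seconds=60.0):
        self.max_events = max_events
        self.window = window_seconds
        # user -> timestamps of that user's recent interactions
        self.events = defaultdict(deque)

    def allow(self, user, now=None):
        """Return True if the interaction is permitted, False if over the limit."""
        now = time.monotonic() if now is None else now
        q = self.events[user]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_events:
            return False  # over the limit: reject, throttle, or flag for review
        q.append(now)
        return True
```

In practice a limiter like this would sit behind every write path (comments, mentions, issue creation), with the window tightened temporarily for accounts exhibiting brigading patterns.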
