Proposal 0002 - Labelling potential for abuse #23

Open
Bossett opened this issue Jun 25, 2023 · 0 comments

I sat down to get my thoughts in order about community management and how I wanted to respond to the three proposals. That turned into a longer document, available at https://bossett.io/bluesky-on-community-trust-safety/, which details my thoughts more fully. This is an excerpt from that document, which I've edited to stand alone and not conflate a bunch of issues. I would appreciate any attention/feedback on that document (but not here - maybe at https://bsky.app/profile/bossett.bsky.social).

These are 100% my own thoughts. I've expressed them as a 'recommendation', but only so that there's a strawman to argue the toss about.

Labelling seems, prima facie, like a straightforward solution. Allow end users to label their own posts (to allow other users to self-select), and have systems in place for mass moderation and flagging.

The complexity of the proposed label set, and the potential for misuse of a Labelling Service, give me pause. So does the balance of categories.

Labelling Services for Harassment

The proposal does not address the use of a labelling service operating on a federated server that is used for harassment. For example, a service could tag everyone from a particular server, or connected to a particular user, with one of the tags, and subscribers could use that to single out and harass posters.
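To make the attack concrete, here is a minimal sketch of the blanket-tagging scenario described above. It assumes a labelling service is just code that maps account identifiers to label values; the names (`blanket_label`, `emit_label`, DIDs, and the handle-suffix check) are all hypothetical, not part of the actual proposal.

```python
def blanket_label(accounts, target_server, label_value, emit_label):
    """Tag every account whose handle is hosted on target_server.

    accounts: iterable of (did, handle) pairs the service has crawled.
    emit_label: callback the service uses to publish a label.
    """
    for did, handle in accounts:
        # Crude server check: handle ends with the target server's domain.
        if handle.endswith("." + target_server):
            emit_label(did, label_value)

# A malicious service needs nothing more than a crawl of public handles:
emitted = []
blanket_label(
    accounts=[("did:plc:1", "alice.example.social"),
              ("did:plc:2", "bob.other.net")],
    target_server="example.social",
    label_value="undesirable",
    emit_label=lambda did, value: emitted.append((did, value)),
)
# Only the account hosted on example.social is tagged.
```

The point of the sketch is how little infrastructure this requires: any subscriber to the service can then filter or target the labelled accounts.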

This is significantly worse than lists: lists are public, and moderation controls (up to defederation) can be used to defend against them to some degree. Labelling Services, however, do not appear to be accountable in the same way. How do we know which users are subscribed to a malicious service, or which flags our content has been given by services we don't use?

The unfortunate reality is that this will be weaponised almost immediately.

Recommendation

Reconsider this design to include some kind of trust requirement before content can be labelled in the first place; for example, completely hide labels where the poster does not have an opt-in trust relationship with the labelling service.
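The opt-in requirement above could be sketched as a filter applied at display time. This is an illustration only, assuming a simple model where each poster records the services they trust; the `Label`/`Post` structures and field names are hypothetical, not the protocol's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Label:
    service: str  # identifier (e.g. DID) of the labelling service
    value: str    # e.g. "spam", "nsfw"

@dataclass
class Post:
    author: str
    text: str
    # Services the author has an opt-in trust relationship with.
    trusted_services: set = field(default_factory=set)

def visible_labels(post: Post, labels: list) -> list:
    """Show only labels from services the poster has opted in to;
    labels from any other service are hidden entirely."""
    return [l for l in labels if l.service in post.trusted_services]

post = Post(author="did:plc:alice", text="hello",
            trusted_services={"did:web:labels.example"})
labels = [Label("did:web:labels.example", "nsfw"),
          Label("did:web:rogue.example", "undesirable")]
# The rogue service's label is dropped; only the trusted label survives.
```

Under this model a service that blanket-tags strangers produces labels nobody ever sees, since none of those posters opted in to it.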

Bossett changed the title from "Proposal 0002 - Labelling and Moderation" to "Proposal 0002 - Labelling potential for abuse" on Jun 25, 2023