
Proposal 0001 - List Management & Membership #21

Open
Bossett opened this issue Jun 25, 2023 · 0 comments
Bossett commented Jun 25, 2023

I sat down to get my thoughts in order about community management and how I wanted to respond to the three proposals. That turned into a longer document, available at https://bossett.io/bluesky-on-community-trust-safety/, which details my thinking more fully. This is an excerpt from that document, edited to stand alone rather than conflate a bunch of issues. I would appreciate any attention/feedback on that document (but not here - maybe at https://bsky.app/profile/bossett.bsky.social).

These are 100% my own thoughts - I've expressed them as a 'recommendation' only so that there's a strawman to argue the toss about.

Trusted Lists

In the trusted list model, a trusted community member (or perhaps members - I will just use 'Alice' since that's from the diagram) is responsible for a list of 'bad actors' (or 'good actors') for muting/blocking/other community action. In this scenario, a lot of weight is placed upon Alice to 'do the right thing', and the process for managing and maintaining the list is open to both intentional and unintentional abuse.

Calls for inclusion are often public, which serves both to notify Alice and to allow the community to come to a rapid, joint decision. In some ways this is essential to the integrity of the process: while the call-out itself is often a form of harassment, it is done in a way that highlights the bad behaviour, and advocates for both sides can have a public discussion.

At the end of this process, however, one side will leave with animosity toward Alice, and over time this contingent of the social space has both demonstrated bad behaviour and acquired a specific target for resentment. This is made substantially worse where a community is somewhat divided, as a slow erosion of trust is likely to isolate Alice as a 'capricious' arbiter. Given that the list holder is often a core community member, this is how factions fall out and fall apart.

Anonymous lists and group moderation simply move the target from Alice to the more general 'community', which has the potential to be somewhat worse as guilt-by-association kicks in.

Recommendation

Should this model be considered, significant care must be taken to ensure that the list holder does not become a pariah, defeating the purpose of the process. This can be done at a community level (e.g. some kind of voting on a joint list) or at a technology level (e.g. list proposals voted on by list users).
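To make the "list proposals voted on by list users" idea concrete, here is a minimal sketch of what a voted joint list could look like. This is purely illustrative - the `JointList`/`ListProposal` names, the committee structure, and the strict-majority rule are all my own assumptions, not anything in atproto or any of the three proposals:

```python
# Hypothetical sketch of a voted joint list: additions require a strict
# majority of a committee, so no single list holder "owns" the decision.
from dataclasses import dataclass, field

@dataclass
class ListProposal:
    subject: str                                # account proposed for inclusion
    votes: dict = field(default_factory=dict)   # voter -> bool (approve?)

@dataclass
class JointList:
    committee: set                              # accounts entitled to vote
    members: set = field(default_factory=set)   # accounts currently on the list
    proposals: dict = field(default_factory=dict)

    def propose(self, subject: str) -> None:
        self.proposals.setdefault(subject, ListProposal(subject))

    def vote(self, voter: str, subject: str, approve: bool) -> None:
        if voter not in self.committee:
            raise PermissionError("only committee members may vote")
        proposal = self.proposals[subject]
        proposal.votes[voter] = approve
        # A strict majority of the *whole* committee adds the subject,
        # so abstentions count against inclusion.
        approvals = sum(proposal.votes.values())
        if approvals > len(self.committee) // 2:
            self.members.add(subject)
            del self.proposals[subject]
```

The point of the design is that the decision record is collective: any individual voter can truthfully say they were one voice among several, which blunts the "capricious arbiter" dynamic described above.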

Declared Membership

Declared membership groups are likely to be poison to community-building.

Should they be used to determine a list of 'good actors', I predict that in-groups and out-groups will form immediately, and that the notion of the public square will largely dissolve, except for the most banal discussion or for hate speech conducted solely to provoke.

Recommendation

As a model for moderation, declared memberships should not be considered. They may have utility for allocating 'committee' membership for list voting (such as in 'Trusted Lists'), but I believe the risk of these leading to factional fracturing is significant.
