
Grant users privileges based on activity level #3548

Closed
4 tasks done
8ullyMaguire opened this issue Jul 8, 2023 · 4 comments
Labels
area: moderation · enhancement (New feature or request)

Comments

8ullyMaguire commented Jul 8, 2023

Requirements

  • Is this a feature request? For questions or discussions use https://lemmy.ml/c/lemmy_support
  • Did you check to see if this issue already exists?
  • Is this only a feature request? Do not put multiple feature requests in one issue.
  • Is this a backend issue? Use the lemmy-ui repo for UI / frontend issues.

Is your proposal related to a problem?

Moderators and admins are experiencing burnout due to the increasing number of users on the platform. This leads to a rise in unmoderated communities as the workload becomes too much. Currently, admins must create posts to request community moderation, adding to their existing workload. Potential moderators may also be discouraged by the full-time commitment required.

Describe the solution you'd like.

Implement a hierarchical trust level system similar to Discourse[^1], where users gain privileges and responsibilities based on activity metrics[^2]. This distributes moderation and lets admins focus on adjusting the trust levels of top-tier users instead of micromanaging every user.
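As a rough illustration of the idea, and not anything that exists in Lemmy today, the mapping from activity to levels could look like the following Rust sketch; every struct, field, and threshold here is a made-up assumption:

```rust
// Hypothetical sketch: mapping activity metrics onto Discourse-style
// trust levels. All names and thresholds are illustrative assumptions.
struct ActivityMetrics {
    posts: u32,
    comments: u32,
    upvotes_received: u32,
    account_age_days: u32,
}

/// Map a user's activity onto a trust level (0 = new, 3 = most trusted).
fn trust_level(m: &ActivityMetrics) -> u8 {
    let contributions = m.posts + m.comments;
    if contributions >= 100 && m.upvotes_received >= 500 && m.account_age_days >= 90 {
        3
    } else if contributions >= 25 && m.upvotes_received >= 100 && m.account_age_days >= 30 {
        2
    } else if contributions >= 5 && m.upvotes_received >= 10 && m.account_age_days >= 7 {
        1
    } else {
        0
    }
}
```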

Describe alternatives you've considered.

Admins could configure:

  • The number of trust levels

  • The number of users per trust level

  • The reputation thresholds for each level

  • The reputation score for different actions

  • The privileges granted at each level

Either the desired number of users per level or the reputation thresholds could be derived automatically from the other configurable parameters.

This lets instances define tailored trust and moderation models; a hypothetical configuration is sketched below.
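To make the knobs above concrete, here is one possible shape for such a configuration, written as a Rust sketch; none of these types exist in Lemmy, and every field name is an assumption:

```rust
// Illustrative only: a possible shape for the admin-configurable trust
// settings listed above. Not part of Lemmy's actual configuration.
enum Privilege {
    UploadImages,
    EditTitles,
    MarkNsfw,
    ReviewAppeals,
}

struct TrustConfig {
    /// Number of trust levels (e.g. 4 gives levels 0..=3).
    level_count: u8,
    /// Optional cap on users per level; `None` means the thresholds
    /// below are used directly instead of being derived.
    max_users_per_level: Option<u32>,
    /// Reputation needed to reach each level, lowest level first.
    level_thresholds: Vec<i64>,
    /// Reputation awarded per action.
    post_score: i64,
    comment_score: i64,
    upvote_received_score: i64,
    /// Privileges unlocked at each level, lowest level first.
    level_privileges: Vec<Vec<Privilege>>,
}
```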

Configurability

The platform would implement trust levels on a per-community or per-instance basis. Instance admins could choose to have:

  • Only moderators
  • Only user trust levels
  • Both moderators and user trust levels

In communities with human moderators, admins could restrict the moderators' privileges.

Appeals Process

There could be an appeals process through which users contest moderator actions. A user with a higher trust level would review the appeal and penalize whichever party was in the wrong.

Additional context

The current state of moderation across various online communities, especially on platforms like Reddit, has been a topic of much debate and dissatisfaction. Users have voiced concerns over issues such as moderator rudeness, abuse, bias, and a failure to adhere to their own guidelines. Moreover, many communities suffer from a lack of active moderation, as moderators often disengage due to the overwhelming demands of what essentially amounts to an unpaid, full-time job. This has led to a reliance on automated moderation tools and restrictions on user actions, which can stifle community engagement and growth.

In light of these challenges, it's time to explore alternative models of community moderation that can distribute responsibilities more equitably among users, reduce moderator burnout, and improve overall community health. One promising approach is the implementation of a trust level system, similar to that used by Discourse. Such a system rewards users for positive contributions and active participation by gradually increasing their privileges and responsibilities within the community. This not only incentivizes constructive behavior but also allows for a more organic and scalable form of moderation.

Key features of a trust level system include:

  • Sandboxing New Users: Initially limiting the actions new users can take to prevent accidental harm to themselves or the community.
  • Gradual Privilege Escalation: Allowing users to earn more rights over time, such as the ability to post pictures, edit wikis, or moderate discussions, based on their contributions and behavior (see the sketch after this list).
  • Federated Reputation: Considering the integration of federated reputation systems, where users can carry over their trust levels from one community to another, encouraging cross-community engagement and trust.
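As a sketch of how the first two features could be enforced, assuming a hypothetical `Action` type and level numbering that are not part of Lemmy:

```rust
// Illustrative gate for sandboxing and gradual privilege escalation;
// the actions and required levels are assumptions, not Lemmy behavior.
enum Action {
    Comment,
    PostImage,
    EditWiki,
    ModerateThread,
}

fn min_trust_level(action: &Action) -> u8 {
    match action {
        Action::Comment => 0,        // open to everyone, including sandboxed users
        Action::PostImage => 1,      // earned after some history
        Action::EditWiki => 2,       // requires sustained good standing
        Action::ModerateThread => 3, // top of the ladder
    }
}

fn is_allowed(user_level: u8, action: &Action) -> bool {
    user_level >= min_trust_level(action)
}
```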

Implementing a trust level system could significantly alleviate the current strains on moderators and create a more welcoming and self-sustaining community environment. It encourages users to be more active and responsible members of their communities, knowing that their efforts will be recognized and rewarded. Moreover, it reduces the reliance on a small group of moderators, distributing moderation tasks across a wider base of engaged and trusted users.

For communities within the Fediverse, adopting a trust level system could mark a significant step forward in how we think about and manage online interactions. It offers a path toward more democratic and self-regulating communities, where moderation is not a burden shouldered by the few but a shared responsibility of the many.

As we continue to navigate the complexities of online community management, it's clear that innovative approaches like trust level systems could hold the key to creating more inclusive, respectful, and engaging spaces for everyone.

Footnotes

[^1]: Understanding Discourse Trust Levels
[^2]: Voting Affinity and Engagement Analysis

@8ullyMaguire added the enhancement label Jul 8, 2023
@8ullyMaguire changed the title from "a" to "Community moderation" Jul 8, 2023
@lionirdeadman

I would suggest renaming the issue to "Grant users powers based on activity level".

I think it would be nice to have as an option. For example, editing titles or marking posts as NSFW could work as powers, though it could also be something like closing, as in #2619.

@8ullyMaguire changed the title from "Community moderation" to "Grant users privileges based on activity level and possibly voting affinity to admin" Jul 15, 2023
@8ullyMaguire changed the title from "Grant users privileges based on activity level and possibly voting affinity to admin" to "Grant users privileges based on user activity and voting affinity with the admin" Jul 15, 2023
@erlend-sh

There’s a clear interest in this among fedizens: https://writing.exchange/@erlend/110391232157395456

Also worth noting that once a Trust Levels system is in place, it’d be possible to explore ways to federate it.

@8ullyMaguire changed the title from "Grant users privileges based on user activity and voting affinity with the admin" to "Grant users privileges based on activity level" Sep 5, 2023
erlend-sh commented Sep 6, 2023

The MVP version of this could already mitigate much of the recent use of illegal image uploads as an attack vector.

Disallow image uploads for users:

  • with fewer than 5 posts or comments
  • with fewer than 10 total upvotes received
  • with an account newer than 1 week
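A minimal sketch of this gate, reading the three bullets as conditions that must all be cleared before uploads are allowed; the struct and field names are hypothetical:

```rust
// Sketch of the MVP rule above: block image uploads until a user clears
// all three thresholds. Field names are illustrative assumptions.
struct UserStats {
    posts_and_comments: u32,
    upvotes_received: u32,
    account_age_days: u32,
}

fn may_upload_images(u: &UserStats) -> bool {
    u.posts_and_comments >= 5
        && u.upvotes_received >= 10
        && u.account_age_days >= 7
}
```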

@8ullyMaguire closed this as not planned Oct 17, 2023
@8ullyMaguire reopened this Apr 5, 2024
@dessalines
Member

I'm very against adding special abilities and privileges for users (in the same way that Stack Overflow does), outside of admins and mods. These systems get endlessly complicated and quickly become too difficult to maintain.

@dessalines closed this as not planned May 4, 2024