Add trust system (trust carrots) #878

Closed
alangecker opened this Issue Feb 15, 2018 · 8 comments

alangecker (Contributor) commented Feb 15, 2018

related discussions

  • #89 Is it ok if any group member performs any action or do we need hierarchies?
  • #324 Optional legal agreement before joining the group
  • #260 Reputation system
  • #356 Optional admin roles for groups
  • #550 Minimal implementation of admin roles
  • #853 [Brainstorming] Process to remove user from group

Proposal: a self-regulated hierarchy

Key aspects

  • roles & privileges seperated
  • web of trust based on 'trust bananas'

Web of Trust

inspired by duniter's web of trust

  • 'trust bananas' to collect trust relationships between people
  • limited number of bananas can be given (e.g. one per week?)
  • expire after a certain time => trust needs to be renewed
  • trust level calculated for each group individually
  • exact algorithm for the calculation not developed yet, but likely variables are (toy sketch below):
    • distance to each other trusted member
    • count of trust bananas
    • total number of users
    • total number of trust relationships
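
Since the exact algorithm is still open, here is a toy sketch just to make the variables above concrete. Nothing in it is part of the proposal; the formula and weighting are placeholders.

```python
# Toy sketch only: combines the variables listed above into a single per-group score.
# The weighting and the formula itself are placeholders, not a worked-out algorithm.
def trust_score(bananas_received, avg_distance_to_trusted, total_users, total_trusts):
    if total_trusts == 0 or total_users == 0:
        return 0.0
    share_of_trust = bananas_received / total_trusts      # count of trust bananas
    closeness = 1.0 / (1.0 + avg_distance_to_trusted)     # distance to other trusted members
    participation = min(1.0, total_trusts / total_users)  # how much trust exists overall
    return share_of_trust * closeness * participation
```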

Roles

(store contact, conflict management person, ...)

  • basically just a badge
  • custom roles for each group possible
  • allocation/management possible for everyone with a trustworthiness above a certain threshold
  • in the longer term: maybe elections for roles with a limited group of voters, based on trustworthiness

Implementation steps

Step 1: collect data

  • implement trust bananas (a button like 'I trust this person')

Step 2: aggregating data

  • db structure for trustworthiness (rough sketch below)
  • cronjob to aggregate the web of trust once per day
  • labels for users, which make their state transparent
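
To make step 2 a bit more concrete, a rough sketch of what the db structure could look like in a Django backend (Karrot's stack). All model and field names here are assumptions for illustration, not the actual schema:

```python
# Rough sketch of hypothetical Django models for step 2; names are illustrative.
from django.conf import settings
from django.db import models


class Trust(models.Model):
    """One 'trust banana' given from one group member to another."""
    given_by = models.ForeignKey(settings.AUTH_USER_MODEL, related_name='trust_given', on_delete=models.CASCADE)
    given_to = models.ForeignKey(settings.AUTH_USER_MODEL, related_name='trust_received', on_delete=models.CASCADE)
    group = models.ForeignKey('groups.Group', on_delete=models.CASCADE)
    created_at = models.DateTimeField(auto_now_add=True)  # needed if trust should expire


class Trustworthiness(models.Model):
    """Per-user, per-group aggregate, recomputed once per day by the cronjob."""
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    group = models.ForeignKey('groups.Group', on_delete=models.CASCADE)
    score = models.FloatField(default=0.0)
    updated_at = models.DateTimeField(auto_now=True)
```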

Step 3: using data

  • implement roles
  • role management component for trusted people

Open questions

  • a totally stupid idea?
  • would existing groups accept this hierarchy?
  • binary state of trusted/untrusted, labeled steps or a number?
  • algorithm / variables for calculating trustworthiness
  • How are group creators initially trusted? (bootstrapping problem)

Thoughts

  • Problem: It forces a majority of the users to use the trust feature, otherwise the group might stop functioning because not enough people can stay in admin roles (especially if trust has to get renewed)
    Solution: the trust level should depend on the total number of trusts, so if there is only one trust, that person is admin; if there is no trust at all, everybody is admin (see the sketch below)
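
A minimal sketch of that fallback rule. The function name and the relative cutoff are made up for illustration:

```python
# Sketch of the fallback rule above: the bar for "admin" scales with how much
# trust exists in the group at all. The 10% cutoff is a placeholder.
def is_admin(trust_received: int, total_trusts_in_group: int) -> bool:
    if total_trusts_in_group == 0:
        return True  # nobody uses the trust feature -> everybody stays admin
    return trust_received >= max(1, round(0.1 * total_trusts_in_group))
```
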
tiltec (Member) commented Feb 17, 2018

I like the idea very much!

Some additional concerns:

  • If roles are coupled to trust levels, I can't express that I trust a user to fulfill roles x and y, but not role z
  • It forces a majority of the users to use the trust feature, otherwise the group might stop functioning because not enough people can stay in admin roles (especially if trust has to get renewed)
djahnie (Member) commented Feb 17, 2018

It definitely is a cool idea! Especially that the roles are customizable and basically just badges. That leaves most of the responsibility with the group and not with the software.

Some remarks on the open questions:

  • Binary state of trusted/untrusted, labeled steps or a number?
    --> Binary is way too vague, labeled steps could work imho.
  • How are Groupcreators initially trusted? (bootstrapping problem)
    --> Yeah... that's the main issue I'd say. Maybe this could be solved via percentages and self-trust: if a certain percentage of users trusts someone, that person gets more and more rights (details don't matter right now). If I am alone in a group and trust myself, then I have 100% trustworthiness. If another one joins, it goes down to 50% automatically if that person doesn't immediately trust me as well. (See the sketch after this list.)
  • If roles are coupled to trust levels, I can't express that I trust an user to fulfill roles x and y, but not role z
    --> The labeled steps could already include more content than just amounts of trust. It could have more dimensions and maybe look more like a trust matrix, where one axis goes from low to high trust and the other one consists of different categories like 'reliability', 'sociableness', 'punctuality', 'friendliness' and such. I imagine it looking like this:
    [image: trust matrix sketch]
    Maybe it's too complicated though, dunno...
  • It forces a majority of the users to use the trust feature, otherwise the group might stop functioning because not enough people can stay in admin roles (especially if trust has to get renewed)
    --> That's a problem indeed. Forcing people to rate others often results in nonsense votes and an informative popup is easily ignored... Hm... we'd need some kind of fallback solution if this scenario occurred... Like giving back all rights to all users, because apparently nobody gives a shit anyways..? 😅
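
A tiny sketch of the percentage/self-trust idea mentioned above; purely illustrative, nothing here is decided:

```python
# Sketch of the percentage/self-trust idea: trustworthiness is the share of
# current group members (including yourself) who trust you.
def trustworthiness(user, trusted_by: set, group_members: set) -> float:
    supporters = (trusted_by | {user}) & group_members  # self-trust always counts
    return len(supporters) / len(group_members)

# Alone in a group: 1/1 = 100%. A second member joins without trusting me: 1/2 = 50%.
```
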
tiltec (Member) commented Jun 25, 2018

As you are progressing quite nicely with the backend implementation, maybe we could clarify "Step 1". How would you display trust data to the affected user and to other users?

Further into the future, in response to your solution:

trust level should depend on the total number of trusts, so if there is only one trust, that person is admin; if there is no trust at all, everybody is admin

Generally nice idea, but needs to be smoothed around the edge cases. For example, if there is no trust and everybody is admin and then the first person gives trust, then there will be only one admin, excluding the trust giver. Seems a bit harsh ;)

nicksellen (Member) commented Jun 25, 2018

There was some feedback from @djembejohn about this proposal, here's the relevant bit:

A model along the lines of a decentralised hierarchy has been proposed for Karrot (#878) and a version of this has been implemented in foodsharing.de. I would like feedback on how well this has worked on foodsharing.de. There seems to be a strong model that the conferring of status between individuals in a network can work well to check whether people are genuine or not.

Regarding whether this is a good system for assigning administrative roles, there are a number of problems. The first problem is that individuals with high status tend to increase their own status – there’s a widely observed phenomenon called preferential attachment whereby people will make friends with those others who already have many friends. This has the effect of creating super-popular people, and this can generate a centralised hierarchy out of a process which initially appears to be decentralised (I have done simulations which appear to confirm this). A way around this might be to make the conferment of status private. However, that might make the reputation-system less likely to be used as it will be essentially invisible. Other problems (which are discussed in the thread) include the fact that there are many different types of admin task but only one dimension for level of trust, and how you might assign trust to those already using the site.
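
For illustration only (this is not @djembejohn's actual simulation), a minimal preferential-attachment sketch showing how trust tends to concentrate on already well-trusted members:

```python
import random

# Minimal preferential-attachment sketch: every round each member gives one
# unit of trust, with probability proportional to the trust the receiver
# already holds. Trust piles up on a few members over time.
def simulate(members=50, rounds=100, seed=1):
    random.seed(seed)
    trust = {m: 1 for m in range(members)}  # everyone starts equal
    for _ in range(rounds):
        for giver in range(members):
            candidates = [m for m in range(members) if m != giver]
            weights = [trust[m] for m in candidates]
            trust[random.choices(candidates, weights=weights)[0]] += 1
    return sorted(trust.values(), reverse=True)

print(simulate()[:5])  # the top few end up with far more trust than the rest
```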

This sounds like a quite plausible analysis, and as such I feel a bit unsure whether this trust model is the right way to go (compared with the alternatives of voting and/or undo - or would it complement them?).

One approach would be to simply try it, but then it would seem useful to define some success criteria, and to replace it with an alternative system if it isn't working.

Which specific problem that our user groups are facing does this solve? And perhaps in more concrete terms, can you think of a metric we could record to influxdb and graph in grafana that would show us if the feature is working? (Or maybe it needs a more human process - feedback/trial within a particular group?)

tiltec (Member) commented Jun 27, 2018

This morning @alangecker, @nicksellen, @djahnie and I had a chat about this topic. I'll try to summarize, but also state my own points again.

Trust carrots are a nice idea to reinforce positive feedback to other group members and have the potential to build a web of trust and use that for assigning roles in a group. However, it needs significant testing with real groups before trust values should be used to assign roles in an automated fashion.

Unfortunately, people might not give trust carrots when they don't have any effect. So we need to find one or more karrot user groups that are willing to test-drive the trust carrots feature.
Alternatively, we could just implement and publish the "step 1" feature as described in this issue, define a period to gather statistics (e.g. 6 months) and then decide how to continue.
The feature should be properly introduced to users (e.g. with an in-karrot description and an article with background information?)

Further concerns with trust carrots:

  • might put too much burden on group members to continuously give trust
  • doesn't directly solve the current problems of introducing newcomers and removing harmful members, so it needs clearer proposals for how it could help

(Short interlude) Bigger issues of karrot right now include:

  • How to prevent new members from doing damage accidentally? #798
  • How to prevent malicious actions from group members? #546

One solution could be to have a role that allows removing users, and this role would require a very high trust level. Until trust carrots are widely used, this could be set manually by server admins.
I personally don't like this solution, because it would essentially just hand out the power of removing users without defining any requirements for doing so. I'd rather define a formal process with a group of interested people and then act on it.
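
A sketch of what such a gate could look like; the threshold value and the manual override are placeholders, not decided anywhere:

```python
# Sketch of the trust-gated "remove users" role described above.
REMOVE_USERS_TRUST_THRESHOLD = 20  # placeholder value

def may_remove_users(trust_level: float, granted_manually: bool = False) -> bool:
    # until trust carrots are widely used, a server admin could grant it by hand
    return granted_manually or trust_level >= REMOVE_USERS_TRUST_THRESHOLD
```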

Related solutions to aforementioned issue of removing members:

  • Voting process to remove people from a group (to be implemented in software)
    Requirements are: approval to join group and rework of email invitations, otherwise it's too easy to add fake members for manipulating the vote. Voting might not work with open groups or password-protected groups.
    More details in #853

  • Start a formal admin group for karrot.world to allow removing users before it's implemented in software. Decisions could take place in loomio.
    More details in Slack and https://pad.disroot.org/p/karrot-constitution

tiltec (Member) commented Jul 16, 2018

Related to my previous comment and the brainstorming in #1062, it seems to me that karrot doesn't need a self-regulated hierarchy soon. The trust carrots feature could be a core part of the process of becoming a full member (user level 3 in #1062).

tiltec changed the title from "WIP-Proposal: a self regulated hierarchy" to "Add trust system (trust carrots)" on Jul 16, 2018

tiltec (Member) commented Jul 22, 2018

I added steps to the implementation proposal #546 (comment) that are very closely related to this issue.

There are some key differences to this proposal:

  • trust is only used to become a full member (UL3); any additional trust only has informational value
  • trust can't be taken back
  • trust does not expire
  • once you have UL3, you can't lose it

I know that these choices are not optimal, but I think they make it possible to implement a working feature soon and solve the problem of new members who don't know what they are allowed to click in a group.
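
As a sketch of these simplified rules (the threshold value and names are illustrative; the actual proposal lives in #546):

```python
from dataclasses import dataclass

UL3_TRUST_THRESHOLD = 3  # placeholder, not decided anywhere


@dataclass
class Membership:
    trust_received: int = 0    # trust can't be taken back and doesn't expire, so this only grows
    reached_ul3: bool = False  # once True it is never cleared ("can't lose UL3")


def update_full_member(m: Membership) -> bool:
    if not m.reached_ul3 and m.trust_received >= UL3_TRUST_THRESHOLD:
        m.reached_ul3 = True
    return m.reached_ul3
```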

nicksellen (Member) commented Jul 27, 2018

Ideas from this thread are being implemented to address group newcomers - #546 is the more accurate issue for tracking this.

The more extended ideas discussed here about a trust system are not being implemented right now, although they could be in the future. So I'm closing this issue now, but it can be reopened if interest/implementation/ideas are renewed!

nicksellen closed this on Jul 27, 2018
