This repository has been archived by the owner on Nov 9, 2017. It is now read-only.

Create a proposal for a project-wide facilitation framework #132

Closed
nebrius opened this issue Apr 27, 2016 · 15 comments

@nebrius
Contributor

nebrius commented Apr 27, 2016

Purpose

We have discussed moderation in a number of threads, and I would like to consolidate these discussions here, now that we have a rough idea of what we want to implement.

The intent of this proposal is to introduce a framework for moderation and facilitation across the entire project; as such, it would be adopted and owned by the Technical Steering Committee (TSC for short).

The non-comprehensive goals of this framework are:

  • To reduce hostility in heated discussions
  • To reduce dogpiling
  • To encourage people who might otherwise be afraid to speak up, due to the issues mentioned in the above goals, to participate in threads
  • To ensure that all threads needing moderator oversight have those resources available
  • To provide clear instructions on moderation so that threads are moderated uniformly and fairly

I would like to mention that we have made good progress working out moderation in the Node.js project over the last 6 months, and my intention is to take what we've done to the next level, not scrap it. Most of the ideas in this proposal came from discussions in various threads about moderation as we've been figuring things out, and I want to recognize and thank everyone who has contributed to the discussion in the past.

Proposal

At a high level, this facilitation framework will divide responsibilities among three groups: the Inclusivity WG will be responsible for creating proposals for this framework, a yet-to-be-created group will be responsible for performing facilitation, and the TSC will be responsible for accepting the proposal, by way of a PR, and for creating the facilitation group.

The facilitation group could be either an official working group chartered by the TSC or some other form of group; this question will be fleshed out as part of the proposal process. The facilitation group will be composed of people who are trained as facilitators and available to be assigned to threads as the need arises. The details of how facilitators are trained, and how they are assigned, still need to be worked out.

Ideally, facilitators assigned to a thread would not be "close" to its subject area. For example, if a thread about streams is getting heated, the facilitator would be someone who does not primarily work on streams. Whenever one or more facilitators are assigned to a thread, all facilitation and moderation in that thread must be done by them. Input from other Node.js collaborators will happen in the moderation repo, as we more or less do today, but any facilitation/moderation actions in the thread itself should only be taken by the assigned facilitators (this is a key component for reducing dogpiling from Node.js collaborators).

I envision this framework extending beyond moderating Code of Conduct violations. The moderator's role would focus on creating a dialog with people and moderating only when necessary (moderation actions would be reserved for explicit Code of Conduct violations). For this reason, the terms "moderation" and "facilitation" are both used here, with the understanding that they are not identical.

Focus

The focus of this issue should be the framework for performing moderation. Specific moderation incidents can be discussed here, but only if they a) are brought up with the intent of informing how we moderate and b) do not disclose any private or sensitive information.

Some off-topic examples include, but are not limited to:

  • Moderation incidents that were not publicly disclosed
  • Whether or not public past moderation incidents should be overturned
  • Whether or not the Node.js Code of Conduct should be modified (please file a separate issue for that)
  • Whether or not we should be moderating at all
  • Whether or not moderation of comments and issues is a free speech issue
@scottgonzalez
Contributor

Whenever a moderator is assigned to a thread, then all moderation must be done by this person.

For very active threads, we'll need multiple people moderating in shifts based on their availability.

@rvagg
Member

rvagg commented May 13, 2016

I'd be very interested in seeing examples coupled with the proposal so we have a clear idea of what exactly we're trying to deal with. There's likely to be more trouble pushing through a framework that handwaves about problems than one grounded in actual problems we've experienced and how such a framework would have helped us deal with them better. We have a good collection of examples now, and I would think most people who need to buy in to this would have had enough experience to make a judgement on how effective any changes might be.

@nebrius
Contributor Author

nebrius commented May 13, 2016

Great idea @rvagg, I'll make sure it gets in there.

As a bit of a preview, the big example I've been using so far is the various Promise threads a while back. I find this one interesting because it wasn't nearly as black and white as say, nodejs/node#3721. We did a good job moderating CoC violations, but there was enough going on there that wasn't a CoC violation that nonetheless caused people to leave the conversation, or never join to begin with. What also makes it interesting is that some people who left the conversation because of dog-piling played a part in encouraging said dog-piling.

So this wasn't a case of "this person needs to be moderated for the sake of others" but rather "we need to redirect the conversation from where people are taking it to for the sake of those same people."

I can already say that the term "moderation" isn't a good one in this context, because moderation will only be a part of it. We just haven't come up with a better term yet. I want it to focus not just on moderation in the classic sense, but also getting people to step in and help "cool the room" when it's needed, and anything else that makes sense to prevent the next Promise thread from going off the rails.

(as an aside: there were some people who did a very admirable job trying to do basically what I'm envisioning in a very ad-hoc manner, and I want to study what and how they did it to help inform this process and to formalize it).

@Trott
Member

Trott commented May 13, 2016

@nebrius wrote:

I can already say that the term "moderation" isn't a good one in this context,

Likely still imperfect, but perhaps facilitation gets closer to what you're thinking about than moderation? Or maybe there are components of both?

I think the practical process for something like the Promises discussion will look very different from what we do when we're basically being attacked (as happened with the fallout from 3721). It's possible that both situations can be covered by a single broad-and-flexible process. Or it may be best to consider them separately as it will likely take a very different set of skills to handle the two different situations.

For things like the Promises discussion, key things we may wish to consider:

  • It's best (crucial?) that the facilitators are not participants in the conversation, but they do need to be reasonably well-versed in the subject. That can make a good facilitator difficult to find, but given the size and scope of the Node.js project and community, it should not be impossible.
  • The facilitator must be someone with broad trust from the participants. It would be ideal if the primary participants in a conversation can help select a facilitator, or at least endorse a facilitator when selected.

Those bullet points do not apply to the other type of issue, where we're dealing with abusive throw-away accounts.

@Trott
Member

Trott commented May 13, 2016

Since everything I've learned about facilitation I've learned from @groundwater, I'm going to just leave that @-mention there in case he has anything he'd like to add.

@Fishrock123
Member

See also: Rust-lang's moderation policy: https://www.rust-lang.org/conduct.html (Scroll down)

@nebrius nebrius changed the title Create a proposal for a project-wide moderation framework Create a proposal for a project-wide facilitation framework May 16, 2016
@nebrius
Contributor Author

nebrius commented May 17, 2016

Commenting here for posterity: after some discussions, I think it could be really useful as part of this framework to create a series of "runbooks" for handling certain common cases, especially ones that involve taking moderation action against users, such as banning. This way, there's less ambiguity and having to "figure things out" when these issues crop up, leading to more predictability and uniformity of moderation.

@MylesBorins
Member

I think the small module approach would make it easier to adopt and not become an overarching framework...

I am aware of the pun and I am delighted by the fact that I think the logic rings true in both instances

@nebrius
Contributor Author

nebrius commented May 17, 2016

Dropping another note for posterity: There is currently a fair amount of ambiguity around what communication channels should be used for dealing with these situations and who should be responsible for dealing with these issues given different types of people and permissions they may or may not have in the org. This should be fleshed out and defined.

@nebrius
Contributor Author

nebrius commented May 17, 2016

I think the small module approach would make it easier to adopt and not become an overarching framework...

I kinda like to think of this stage of work, and this issue, as a sort of umbrella issue to figure out the big picture details on this work, and the result will end up being a lot of dependencies and sub issues...

I am aware of the pun and I am delighted by the fact that I think the logic rings true in both instances

...kinda like package.json 😏

@MylesBorins
Member

MylesBorins commented May 17, 2016

@nebrius we literally could make each policy a module... and use semver to implement changes.

The overall policy of the org could be another repo, tying them together via a package.json.

edit: I'm going to abstain from taking this analogy any further... I got excited about it. If people are genuinely interested in exploring this, I'd be up for discussing it.
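[Editor's note: as a purely hypothetical sketch of the analogy above, an umbrella policy repo could pin individual policy "modules" via semver ranges in a package.json. All package names and versions below are invented for illustration and do not correspond to anything real in the Node.js org:]

```json
{
  "name": "nodejs-org-policy",
  "version": "1.0.0",
  "description": "Hypothetical umbrella repo tying individual policy modules together",
  "dependencies": {
    "policy-code-of-conduct": "^2.1.0",
    "policy-moderation": "^1.0.0",
    "policy-facilitation": "^0.3.0"
  }
}
```

Under such a scheme, a breaking change to a policy would bump that module's major version, and the umbrella repo would opt in explicitly by widening the range.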

@mcollina
Member

This proposal suggests we adopt a classical division of powers among three different groups: inclusivity, "facilitators/moderators", and the TSC.

The only catch is that the role of facilitator is not easy, and it might get unpleasant. I like the proposed approach, and I suggest that the number of "moderators" be relatively high, and that a quick decision-making process be put in place to pick the correct one for a given issue. All of this should be really transparent: how things are moderated, how an incident is assigned to a specific moderator, and so on.

What should also be clear is what the moderation repo is and is not for. Lately it has been used for different things, but it should be used only for specific incidents, not for general discussion/proposals. The process for "opening an issue" on the moderation repo should be strict.

Finally, I propose that the facilitators/inclusivity WG keep metrics on the various "moderation" activities, categorize them, and produce a (quarterly?) report on the blog. This might be a bit of a burden, but it will help in explaining how the whole system works.

@nebrius
Contributor Author

nebrius commented Jun 1, 2016

(sorry for letting this sit...too many notifications!)

The only catch is that the role of facilitator is not easy, and it might get unpleasant. I like the proposed approach, and I suggest that the number of "moderators" be relatively high, and that a quick decision-making process be put in place to pick the correct one for a given issue. All of this should be really transparent: how things are moderated, how an incident is assigned to a specific moderator, and so on.

+1 to all of this.

What should also be clear is what the moderation repo is and is not for. Lately it has been used for different things, but it should be used only for specific incidents, not for general discussion/proposals. The process for "opening an issue" on the moderation repo should be strict.

There's been some discussion around this, in some issue that I can't seem to find anymore. There doesn't currently seem to be consensus on what the focus of the repo is. I think it makes sense to clarify the role of that repo in this framework, but I do want to get more outside input.

Finally, I propose that the facilitators/inclusivity WG keep metrics on the various "moderation" activities, categorize them, and produce a (quarterly?) report on the blog. This might be a bit of a burden, but it will help in explaining how the whole system works.

I'm kinda skeptical about posting this information publicly. Last time we talked about moderation publicly, it led to a sustained harassment campaign from outsiders involving a certain plant-based emoji. There could be value, though, in posting this to the private moderation repo. I am concerned about the overhead though. 🤔 Let's think on this a while and get more input.

@mcollina
Member

mcollina commented Jun 3, 2016

I'm kinda skeptical about posting this information publicly. Last time we talked about moderation publicly, it led to a sustained harassment campaign from outsiders involving a certain plant-based emoji. There could be value, though, in posting this to the private moderation repo. I am concerned about the overhead though. 🤔 Let's think on this a while and get more input.

As with anything that limits free speech, publishing some data about how this activity is performed is important to ensure that there is trust in the mechanism: see https://www.google.com/transparencyreport/removals/government/ as an example.
I agree about the overhead, though.

@nebrius
Contributor Author

nebrius commented Jun 3, 2016

To be sure, it is a balancing act. I should point out though that we're not a content creation platform like Google, so those transparency reports don't really apply to what we're doing here.
