
Allow users to mute/block other users #168

Open
bgilbert opened this Issue Oct 8, 2015 · 26 comments

bgilbert (Contributor) commented Oct 8, 2015

Add a way for individuals to prevent specific people from communicating with them. This will be important in deployments with untrusted users. (E.g., a discussion site for an open-source project, open to anyone on the Internet.)

lfaraone added the enhancement label Oct 8, 2015

DataBranner commented Aug 17, 2016

Is the idea only for User A to be able to block PMs from User B, or would User B's posts to public streams also be blocked or hidden from User A?

bgilbert (Contributor, Author) commented Aug 18, 2016

I was thinking that A would see no public or private messages from B.

timabbott (Member) commented Aug 18, 2016

This deserves some thought -- it's possible we will want to render something for these messages to avoid making it very confusing to follow a conversation that the blocked person is participating in.

brainwane (Contributor) commented Oct 17, 2016

We chatted about this today in the developers' realm and @trueskawka will be writing up some notes about what v1 will look like.

trueskawka (Member) commented Oct 18, 2016

Version 1

It seems sensible to focus first on adding an option for blocking another user, as a user-side feature. The messages sent by the blocked user would simply not appear, which means they would be hidden on the front-end.

Further growth of the feature should be based on feedback gathered from the experience of using the basic feature and any problems it causes.

Use cases

There are several use cases for blocking and muting features:

  • blocking an abusive user by an individual (including private messages) - it's a very high value feature for users who have experienced abuse, allowing them to at least partially address the issue
  • muting a chatty user by an individual - it might be a feature fulfilling some of the purposes of blocking, especially in organizations where blocking another user could be unwelcome
  • blocking/muting between an organization and the individual

Further development

Possible issues:

  • when you've blocked user X, what happens when user X sends a message to a stream you're subscribed to? It might be confusing if user X's messages don't appear at all, because the blocking user could see replies to messages they can't see
    • the confusion might be worth it, since it's fulfilling the purpose of an anti-abuse feature
  • blocking might not work for all organizations, since there is sometimes a valid reason for keeping up the communication
    • the blocked person could be a manager
    • the organization requires proof of abuse to take action
  • the value of the feature depends on the organization:
    • it's an important feature for large open communities
    • for companies it's a marginal problem
    • in smaller communities the user would be identified and dealt with using existing tools (such as deactivating accounts)

Possible alternative approaches:

  • show blocked/muted messages as slightly transparent
  • hide the blocked/muted messages, but have an indication that the blocked/muted user posted something, e.g. substitute the blocked/muted messages with "message from blocked/muted user" placeholder
    • an optional feature would be allowing the user to reveal the hidden messages manually
    • a possible solution is automatically setting the "Collapse" flag on messages sent by that user
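The placeholder alternative above can be sketched roughly as follows. This is an illustration only, not Zulip's actual rendering code; the function name, message shape, and placeholder string are all hypothetical:

```python
# Sketch of the "placeholder" approach: hide a muted user's messages
# behind a stub that the reader can toggle open on demand.
def render_message(message, muted_user_ids, reveal=False):
    """Return the text to display for a single message."""
    if message["sender_id"] in muted_user_ids and not reveal:
        return "[message from muted user]"
    return message["content"]

messages = [
    {"sender_id": 1, "content": "hello"},
    {"sender_id": 2, "content": "spam spam spam"},
]
muted = {2}
rendered = [render_message(m, muted) for m in messages]
# rendered == ["hello", "[message from muted user]"]
```

Passing `reveal=True` models the manual "show anyway" toggle; the auto-collapse idea would amount to the client setting a collapsed flag instead of swapping the text.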
timabbott (Member) commented Oct 18, 2016

Nice writeup! I've retagged this issue as "help wanted" rather than "feedback wanted", since it sounds like we have an implementation plan for what to do next.

brainwane (Contributor) commented Nov 4, 2016

Evidently Slack refuses to implement a block feature (I heard about this via Cate Huston). I'm looking forward to being able to tell people "switch to Zulip and you'll be able to block/filter/mute". :)

kracekumar (Contributor) commented Dec 16, 2016

If no one is working on the feature, I am up for it.

timabbott (Member) commented Dec 16, 2016

It's all yours @kracekumar!

kracekumar (Contributor) commented Dec 19, 2016

Thank you @timabbott. I will discuss the high-level details in further comments and start working on the feature.

kracekumar (Contributor) commented Feb 2, 2017

It's time to start working on the issue!

My understanding:

  • Blocked one-to-one chat: let's say A blocks B. B can send a message, but A cannot see the message. When A types B's name in the To textbox, B's name shouldn't show up. Is this assumption right?
  • Stream messages: let's say A blocks B, and A and B are part of a common stream S. When A reads B's message in stream S, show filler text: "A message from blocked user B. Toggle to view the message". Is this assumption right?

Required Tech Changes:

  • Add BlockUser Model with blocker_user_id, blocked_user_id.
  • Add API to list, block, unblock a user.
  • Add a list view to show all the blocked users along with unblocking button.
  • Replace the blocked user's messages with filler text and a toggle button.

Tech Questions:

  • When should the client fetch the list of blocked users after login/reload?
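The proposed BlockUser model and the list/block/unblock operations could be sketched as below. In Zulip this would presumably be a Django model plus API endpoints; this is a framework-free illustration and every name here is hypothetical:

```python
from dataclasses import dataclass

# One row per block relationship, as proposed: who blocked whom.
@dataclass(frozen=True)
class BlockUser:
    blocker_user_id: int
    blocked_user_id: int

class BlockRegistry:
    """In-memory stand-in for the proposed model + list/block/unblock API."""

    def __init__(self):
        self._blocks = set()

    def block(self, blocker_id, blocked_id):
        self._blocks.add(BlockUser(blocker_id, blocked_id))

    def unblock(self, blocker_id, blocked_id):
        self._blocks.discard(BlockUser(blocker_id, blocked_id))

    def blocked_by(self, blocker_id):
        """IDs this user has blocked, e.g. to filter the To: autocomplete."""
        return {b.blocked_user_id for b in self._blocks
                if b.blocker_user_id == blocker_id}
```

The `blocked_by` query is the piece the client would need at login/reload time to hide blocked users from autocomplete and to substitute filler text.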
DataBranner commented Feb 2, 2017

I can't believe this is going forward. The step after making filter bubbles rigid is civil war. This is a terrible mistake.

kracekumar (Contributor) commented Feb 2, 2017

Oh! Looks like I set the discussion on fire with my understanding and steps.

DataBranner commented Feb 2, 2017

No, my fault for commenting here. The project has elected to move forward with this. My concerns have been heard, but will not be acted on. If you want to talk, we can do so privately.

kracekumar (Contributor) commented Feb 2, 2017

brainwane (Contributor) commented Feb 21, 2017

@kracekumar Here are my answers:

Blocked one-to-one chat: let's say A blocks B. B can send a message, but A cannot see the message. When A types B's name in the To textbox, B's name shouldn't show up. Is this assumption right?

Yes, I think that's right.

Stream messages: let's say A blocks B, and A and B are part of a common stream S. When A reads B's message in stream S, show filler text: "A message from blocked user B. Toggle to view the message". Is this assumption right?

This is where blocking and muting feel like they should have different behavior. If I'm muting a user or set of users then I am more likely to want to see "message from muted user(s) [names]; toggle to see muted messages", because the reminder of their participation in my community is more annoying than upsetting. But if I'm blocking to avoid having to see their harmful participation, then I'm probably okay with paying the price of having a little disjointedness in the conversations I see, so that I don't have to get reminded of them and the harm they've already done. When I've been in IRC or Twitter conversations that include people I've blocked/ignored, I've been able to cope just fine with the slight disjointedness, and I had a much better experience not seeing any metadata about the messages of people I blocked.

As a reference -- not because they are 100% doing what I think is right, but because it's useful to think through how another service designed their implementation -- here's a detailed description of how Twitter handles blocking and muting.

timabbott (Member) commented Feb 21, 2017

I don't think Twitter is a great comparable, because Twitter is pretty disjointed in the first place (it has a weak at best concept of a conversation). How do leading IRC clients handle this?

brainwane (Contributor) commented Feb 21, 2017

I use Twitter pretty often, including to read threaded conversations, and I'd be interested to hear more about why you think its concept of a conversation is weak at best; still, I agree that there are problems mapping Twitter usage onto group chat environments like Zulip.

In HexChat and Colloquy, if you /IGNORE someone without further specification, you do not see any messages from them and you don't see any notification or placeholder about the message you're filtering out. Colloquy lets you specify whether you want to only ignore them in certain rooms. HexChat lets you specify "types - types of data to ignore, one or all of: PRIV, CHAN, NOTI, CTCP, DCC, INVI, ALL" which means you can, for instance, ignore private messages but not channel messages.

I haven't checked what, for instance, irssi or other popular clients do, though, or how Usenet-type newsreader killfile implementations work.

rishig (Collaborator) commented Feb 21, 2017

@brainwane I agree with what you said above about blocking and muting feeling like pretty different things. I also agree that for muting having the toggle is what I would want.

For blocking, I think one potentially big difference between Zulip and twitter/etc is that on twitter A blocking B means that B also doesn't see A's messages (to first approximation). This impedes at least some natural attacks B can make against A (e.g., every time A says something, B responds aggressively with something that A now can't defend).

My opinion here would be to implement muting first, since both the policy and the UX are more straightforward, and basically all the work needed to implement muting will also be needed to implement blocking.

wohali commented Feb 23, 2017

Hi @brainwane. IRC has a few different ignore functions, both server-side and client-side.

Client-side is much as you state: you can /ignore someone client-side, and the client can offer any number of permutations of ignoring everything, just private messages, CTCP/DCC, invitations, etc. Typically clients silently drop this traffic and do not provide something like the proposed "Someone you are ignoring said something."

Server-side for IRC has some various different implementations. For instance, ircd-hybrid (the IRCD I am most familiar with) provides a user mode, +g, also known as an implicit ignore or callerid / whitelist mode. When you have this mode enabled, (private) messages from other users are blocked and the sender is notified the first time they try to contact them:

<nick> is in +g mode (server-side ignore).
<nick> has been informed that you messaged them.

The +g user mode user can then add people to the ACCEPT list, and their messages will be let through. There are some limitations to the ACCEPT command, primarily that only online users can be added, and disconnecting users are automatically removed; these are limitations of IRC not retaining most information about offline users after disconnection.

Another server-side implementation is called SILENCE. This is more of the blacklist approach, where an ignore list is managed server-side. Nicknames or user@host combinations can be added to this list. Again, this blocks people from sending you any private messages or invites.

So in summary:

  • Client-side ignores can be used to block messages from any user reaching you, either publicly or privately
  • Server-side ignores can be used in a whitelist or blacklist system (or both), but only affect private messages from other people to you. They do not affect traffic in a public channel.

I think this split approach is excellent and mirrors how people tend to work personally. They may request support from a server operator to block a specific person from contacting them privately - say, to block harassment in private. But for public utterances in a group chat, the only way to block someone is via a client-side block, which is unique to that user's installation.

Think of it this way: I could probably use a GreaseMonkey/TamperMonkey script or similar browser plugin to block a Zulip user's public chat if I really, really never wanted to see something they've said. (Similar monkey scripts already exist for e.g. blocking giphy Slack output). You might as well add support to the actual app for it, or someone's just going to patch it in anyway. :)

kamalmarhubi (Contributor) commented Apr 3, 2017

I hope this will be a realm-configurable option. In particular, I think it's a decision that the Recurse Center should make for itself.

timabbott (Member) commented Apr 3, 2017

This should definitely be realm-configurable. I expect some companies will want to have their own internal process for handling issues like this, one that likely involves an HR department, not a chat product feature.
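A realm-level toggle could gate the feature roughly as follows. This is a sketch only; the setting name and helper functions are hypothetical, not Zulip's real settings API:

```python
# Hypothetical gating of the block feature behind a per-realm setting,
# defaulting to disabled so organizations must opt in.
def blocking_enabled(realm_settings):
    return realm_settings.get("enable_user_blocking", False)

def block_user(realm_settings, blocks, blocker_id, blocked_id):
    """Record a block, but only if the realm has opted in."""
    if not blocking_enabled(realm_settings):
        raise PermissionError("User blocking is disabled in this organization")
    blocks.add((blocker_id, blocked_id))

blocks = set()
block_user({"enable_user_blocking": True}, blocks, 1, 2)
# blocks == {(1, 2)}
```

Defaulting the flag to off matches the point above: organizations that handle conflicts through HR simply never enable it.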

azurelunatic commented Apr 4, 2017

@brainwane Thank you for asking me to weigh in!

Even in the case where the HR department wants to handle all user vs. user conflicts (even the really really petty ones), the chat product should offer a feature to funnel reliable reports to HR (or the direct manager, or selectively either in case it's the direct manager that's the problem).

The chat product should not become a way for problem users to defy an HR mandate of no contact. Managers/HR should be able to use the chat product to enforce zero contact between two users, and (if warranted) report any attempts at contact to the appropriate part of the authority chain.

In general, trust users to tell the difference between something that's threatening and something that's obnoxious, and give them the tools to react accordingly. In a work context, threats would warrant block-and-report; obnoxiousness would warrant something like auto-collapse and possible report to a forum moderator (or manager, if those are different).

When modeling threats to a user, these are the general use cases I'm aware of.

  • Abusive close contact (authority figure or peer; parent, partner, manager, forum moderator, co-worker, fellow student)
  • Stalker (not necessarily a close contact, but a singular/small group, and often known, entity)
  • Swarm attack (the instigator may be a stalker, or it may be a loosely organized interest group with participants who are not above participating in a swarm attack; this is probably less of a problem in a small-ish organization with known users)

Non-threatening:

  • Overly chatty user
  • Obnoxious user
  • Reply-all storm/topic of extreme disinterest
  • 1:1 excessive chattiness or obnoxiousness

It should both streamline reporting for the user, and give reliable and un-tamper-able reports to the appropriate department. (In particular, it should not originate email that a savvy user could spoof.) Consider the possibility of bad reports as an attack vector.

It's been my experience that users (particularly a certain type of developer/engineer) tend to resist formal reporting, in the belief that such things can and should be worked out between them at the user level, and that some things are best left at using rules to send that guy who posts fifteen links a day to the social group to /dev/null. Removing the ability to send that guy's posts to /dev/null increases the general level of crankiness and unwillingness to use the chat product.

Making any reporting into a formal issue can result in people waiting until things get really bad to involve outside help. The choices should not be a binary between "put on your adult pants and choose not to respond" or "file a formal grievance with HR, consuming hours of company time".

Yes, it is theoretically possible to directly contact someone who is causing you a problem, and ask them to stop. Imagined conversation:

Alice: "Bob, please stop attaching large log files to every ticket you file. We will tell you if we need logs; otherwise they're just wasting disk space."
Bob: "Oh, okay! Thanks for letting me know!"

This typically only has good results if the other party considers you a superior or a peer, and/or if they're doing something technically incorrectly, like putting bug reports in the wrong channel, including unhelpful information, or not including necessary information.

When this goes wrong, it has escalated the situation from something merely obnoxious for one person, to the beginnings of an interpersonal feud. Alternate scenario:

Alice: "Bob, I don't think your fart jokes are necessary in the workplace. Please stop."
Bob: "Hahaha, what, you can't handle bodily functions? I don't think your lactation room is necessary in the workplace either! Hey, maybe I should start going in there when I need to fart!"
Alice: "..."

[This specific exchange has never happened, but is pieced together from actual conversations, including between people I consider friends.]

The risk of starting a feud from a disagreement is disproportionately larger for women and oppressed minorities, when someone who unconsciously considers themselves a social superior reacts badly to having their personal choices criticized by someone who they feel is not authorized to make that criticism. (This also happens across other thresholds than gender, race, and sexual orientation -- poor vs. not-poor, self-taught vs. formally educated, spoon-fed vs. bootstrapped, technical support vs. engineer, junior vs. senior, front end vs. back end -- anywhere that social classes and biases exist.) Or it can happen when someone is hyper-vigilant and sees an intentional class-based attack when no conscious bias was present, or when someone is tired of dealing with the unconscious biases of others. The instinct is to double down, not stop. It takes a lot of conscious examination of your own biases and a commitment to fix your reactions, in order to stop doing things like that, and not everyone has chosen to undertake that work. The barrier to action is also larger for people (often women) who have been deliberately groomed by society to not make waves and take emotional labor upon themselves rather than asking the other party to fix things.

A report would:

  • Allow the user to specify impact and severity (for example: distracting me from my work, affecting my ability to do my work, preventing me from doing work; annoying, harmful, threatening) and suggest routing accordingly (bring up with manager at the next scheduled 1:1, schedule a meeting with manager to discuss, file a routine complaint with HR; file a priority complaint with HR and notify security)
  • Allow the user to select multiple messages (across channels and users) for a single report. (Suggest that it may be separate reports, if it involves multiple channels and/or multiple users, but allow the reporting user to decide.)
  • Include a listing of all the users involved
  • Allow users to make reports on other users' behalf (either in general, or as part of a delegation feature)
  • Include an easy way for the HR folks to see context, so the user cannot cherry-pick only the messages that look worst while omitting possibly mitigating context
  • Reveal all content, such that an abusive user cannot "tweet-and-delete", or create an edit storm with an abusive initial message and subsequent similar but benign content (I know someone in particular who is fond of this attack format)
  • Let the HR department specify other (required or optional) fields for the reporting user to fill in
  • Let the HR department include text, such as policy, clarification, or what to include in a report

The product should be able to be made aware of management chains, in an automated way, from external sources.
A user should be able to file a confidential report against anyone in their management chain.
Unless there is a confidential report involving that management chain member, management should be aware of outright blocks. This could manifest as alerts when a block is made, a regular digest of block changes, and somewhere to look up existing blocks.
Allow the originators and/or people in charge of forums/channels to mark them as official/unofficial or business/pleasure; allow users in unofficial forums to block other users and topics without affecting what they receive in official forums, and (possibly) without alerting their management chain.
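The report object sketched in the bullets above might look roughly like this. Every field and name here is hypothetical, purely to make the proposed shape concrete:

```python
from dataclasses import dataclass, field

# Hypothetical shape of an abuse report per the bullets above:
# impact/severity drive routing, multiple messages and users can be
# attached, and HR-defined extra fields ride along with the report.
@dataclass
class AbuseReport:
    reporter_id: int
    impact: str    # e.g. "distracting", "affecting my work", "preventing work"
    severity: str  # e.g. "annoying", "harmful", "threatening"
    message_ids: list = field(default_factory=list)
    involved_user_ids: list = field(default_factory=list)
    extra_fields: dict = field(default_factory=dict)  # HR-defined fields

def suggested_route(report):
    # Map severity to a suggested destination, per the routing bullet.
    routes = {
        "annoying": "raise with manager at next scheduled 1:1",
        "harmful": "routine complaint with HR",
        "threatening": "priority complaint with HR; notify security",
    }
    return routes.get(report.severity, "routine complaint with HR")
```

The routing is only a suggestion to the reporting user, as proposed above; the user decides where the report actually goes.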

(If anyone is attending Open Source Bridge in Portland this June and would like to discuss this general topic in person, please look me up there; I'm Azure Lunatic, hanging out with the Dreamwidth crowd. 🍻 ☕️ 🍫 🍕)

kracekumar (Contributor) commented Jun 5, 2017

As I understand there are three things to get this done.

  1. Mute user - applies to messages in public streams.
  2. Block user - don't deliver/render private messages, but still show public messages.
  3. Realm configuration to enable blocking users.
timabbott (Member) commented Jun 6, 2017

Yep, I believe that's correct.

kracekumar (Contributor) commented Jun 9, 2017

Currently, I'm concentrating on Zulip translation; if anyone is interested in picking this up, please proceed.

ihsavru pushed a commit to ihsavru/zulip that referenced this issue Nov 11, 2017

Merge pull request zulip#168 from geeeeeeeeek/issue/refinements-on-multi-tab-view

Refinements on multi tab view