Allow users to mute/block other users #168
Comments
lfaraone added the enhancement label Oct 8, 2015
DataBranner commented Aug 17, 2016

Is the idea only for User A to be able to block PMs from User B, or would User B's posts to public streams also be blocked or hidden from User A?
I was thinking that A would see no public or private messages from B.
This deserves some thought -- it's possible we will want to render something for these messages, to avoid making it very confusing to follow a conversation that the blocked person is participating in.
timabbott added the area: misc and feedback wanted labels Oct 14, 2016
We chatted about this today in the developers' realm, and @trueskawka will be writing up some notes about what v1 will look like.
brainwane referenced this issue Oct 17, 2016: Community health analytics for realm admins #2052 (closed)
Version 1

It seems sensible to focus first on adding an option for blocking another user, as a user-side feature. The messages sent by the blocked user would simply not appear, which means they would be hidden on the front end. Further growth of the feature should be based on feedback gathered from the experience of using the basic feature and any problems it causes.

Use cases

There are several use cases for blocking and muting features:

Further development

Possible issues:

Possible alternative approaches:
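As a rough illustration of the v1 behavior described above (blocked users' messages simply hidden on the front end), a client could filter its message list against the viewer's block list. This is a minimal sketch; `Message`, `visible_messages`, and the field names are hypothetical, not Zulip's actual data model.

```python
from dataclasses import dataclass

@dataclass
class Message:
    sender_id: int
    content: str

def visible_messages(messages, blocked_user_ids):
    """Hide every message whose sender the viewer has blocked."""
    return [m for m in messages if m.sender_id not in blocked_user_ids]
```

The cost of this approach, as noted above, is that a conversation the blocked person participates in reads with gaps.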
timabbott added the help wanted label and removed the feedback wanted label Oct 18, 2016
Nice writeup! I've retagged this issue as "help wanted" rather than "feedback wanted", since it sounds like we have an implementation plan for what to do next.
Evidently Slack refuses to implement a block feature (I heard about this via Cate Huston). I'm looking forward to being able to tell people "switch to Zulip and you'll be able to block/filter/mute". :)
If no one is working on the feature, I am up for it.
It's all yours, @kracekumar!
Thank you @timabbott. I will discuss the high-level details in further comments and start working on the feature.
It's time to start working on the issue! My understanding:

Required Tech Changes:

Tech Questions:
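One plausible shape for the server-side state such a feature needs is a set of (blocker, blocked) pairs with add, remove, and lookup operations. This plain-Python sketch stands in for whatever Django model Zulip would actually use; the class and method names are invented for illustration.

```python
class BlockList:
    """In-memory stand-in for a per-user block table."""

    def __init__(self):
        self._pairs = set()  # (blocker_id, blocked_id) tuples

    def block(self, blocker_id, blocked_id):
        self._pairs.add((blocker_id, blocked_id))

    def unblock(self, blocker_id, blocked_id):
        self._pairs.discard((blocker_id, blocked_id))

    def is_blocked(self, blocker_id, blocked_id):
        """Has blocker_id blocked blocked_id? Note the relation is one-way."""
        return (blocker_id, blocked_id) in self._pairs
```

In a real implementation this would be a database table plus an API endpoint to mutate it, with the lookup consulted when rendering or delivering messages.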
DataBranner commented Feb 2, 2017

I can't believe this is going forward. The step after making filter bubbles rigid is civil war. This is a terrible mistake.
Oh! Looks like I set the discussion on fire with my understanding and steps.
DataBranner commented Feb 2, 2017

No, my fault for commenting here. The project has elected to move forward with this. My concerns have been heard, but will not be acted on. If you want to talk, we can do so privately.
Definitely I'd like to know your views privately.
@kracekumar Here are my answers:

Yes, I think that's right.

This is where blocking and muting feel like they should have different behavior. If I'm muting a user or set of users, then I am more likely to want to see "message from muted user(s) [names]; toggle to see muted messages", because the reminder of their participation in my community is more annoying than upsetting. But if I'm blocking to avoid having to see their harmful participation, then I'm probably okay with paying the price of a little disjointedness in the conversations I see, so that I don't have to get reminded of them and the harm they've already done. When I've been in IRC or Twitter conversations that include people I've blocked/ignored, I've been able to cope just fine with the slight disjointedness, and I had a much better experience not seeing any metadata about the messages of people I blocked.

As a reference -- not because they are 100% doing what I think is right, but because it's useful to think through how another service designed their implementation -- here's a detailed description of how Twitter handles blocking and muting.
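The mute-versus-block distinction described above could be sketched as client-side rendering logic: a muted sender's message collapses to a toggleable placeholder, while a blocked sender's message is dropped entirely. This is a hypothetical helper, not Zulip's actual code.

```python
def render_message(sender, content, muted, blocked, show_muted=False):
    """Return the text to display, a placeholder, or None (fully hidden)."""
    if sender in blocked:
        # Blocked: drop entirely, accepting some conversational disjointedness.
        return None
    if sender in muted and not show_muted:
        # Muted: collapse to a placeholder the viewer can toggle open.
        return f"[message from muted user {sender}; toggle to view]"
    return content
```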
I don't think Twitter is a great comparable, because Twitter is pretty disjointed in the first place (it has a weak-at-best concept of a conversation). How do leading IRC clients handle this?
I use Twitter pretty often, including to read threaded conversations, and I'd be interested to hear more about why you think its concept of a conversation is weak at best; still, I agree that there are problems mapping Twitter usage onto group-chat environments like Zulip.

In HexChat and Colloquy, if you [...] I haven't checked what, for instance, irssi or other popular clients do, though, or how Usenet-type newsreader killfile implementations work.
@brainwane I agree with what you said above about blocking and muting feeling like pretty different things. I also agree that for muting, having the toggle is what I would want.

For blocking, I think one potentially big difference between Zulip and Twitter etc. is that on Twitter, A blocking B means that B also doesn't see A's messages (to a first approximation). This impedes at least some natural attacks B can make against A (e.g., every time A says something, B responds aggressively with something that A now can't defend against).

My opinion here would be to implement muting first, since both the policy and the UX are more straightforward, and basically all the work needed to implement muting will also be needed to implement blocking.
wohali commented Feb 23, 2017

Hi @brainwane. IRC has a few different ignore functions, both server-side and client-side.

Client-side is much as you state: you can /ignore someone client-side, and the client can offer any number of permutations of ignoring everything: just private messages, CTCP/DCC, invitations, etc. Typically clients silently drop this traffic and do not provide something like the proposed "Someone you are ignoring said something."

Server-side, IRC has various different implementations. For instance, ircd-hybrid (the IRCd I am most familiar with) provides a user mode, +g, also known as an implicit ignore or callerid/whitelist mode. When you have this mode enabled, (private) messages from other users are blocked and the sender is notified the first time they try to contact them:

The +g user can then add people to the ACCEPT list, and their messages will be let through. There are some limitations to the ACCEPT command, primarily that only online users can be added, and disconnecting users are automatically removed; these are limitations of IRC not retaining most information about users after disconnection.

Another server-side implementation is called SILENCE. This is more of the blacklist approach, where an ignore list is managed server-side. Nicknames or user@host combinations can be added to this list. Again, this blocks people from sending you any private messages or invites.

So in summary:

I think this split approach is excellent and mirrors how people tend to work personally. They may request support from a server operator to block a specific person from contacting them privately -- say, to block harassment in private. But for public utterances in a group chat, the only way to block someone is via a client-side block, which is unique to that user's installation.

Think of it this way: I could probably use a GreaseMonkey/TamperMonkey script or similar browser plugin to block a Zulip user's public chat if I really, really never wanted to see something they've said. (Similar monkey scripts already exist for, e.g., blocking giphy Slack output.) You might as well add support to the actual app for it, or someone's just going to patch it in anyway. :)
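The +g (callerid) behavior described above -- reject private messages from anyone not on the ACCEPT list, notifying each sender only the first time -- can be modeled roughly like this. The class and method names are illustrative, not an actual ircd API.

```python
class CallerId:
    """Rough model of ircd-hybrid's +g user mode with an ACCEPT whitelist."""

    def __init__(self):
        self.plus_g = False       # is the +g user mode set?
        self.accept = set()       # senders whitelisted via ACCEPT
        self._notified = set()    # senders already told they were rejected

    def deliver_pm(self, sender, text, inbox, notices):
        """Deliver a PM, or reject it and notify the sender once."""
        if self.plus_g and sender not in self.accept:
            if sender not in self._notified:
                notices.append(f"{sender}: recipient is in +g mode; message not delivered")
                self._notified.add(sender)
            return False
        inbox.append((sender, text))
        return True
```

A SILENCE-style blacklist would invert the check: deliver unless the sender is on the list.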
I hope this will be a realm-configurable option. In particular, I think it's a decision that the Recurse Center should make for itself.
This should definitely be realm-configurable. I expect some companies will want to have their own internal process for handling issues like this, one that likely involves an HR department, not a chat product feature.
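A realm-level switch like the one suggested here might simply gate the user's block list: if the organization disables the feature, individual block lists have no effect. The setting name `allow_user_blocking` is invented for illustration.

```python
def effective_block_list(realm_settings, user_block_list):
    """Apply a user's block list only if the realm has enabled the feature."""
    if not realm_settings.get("allow_user_blocking", False):
        # Feature disabled realm-wide; the org's own process (e.g. HR) applies.
        return set()
    return user_block_list
```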
azurelunatic commented Apr 4, 2017
@brainwane Thank you for asking me to weigh in!

Even in the case where the HR department wants to handle all user-vs-user conflicts (even the really, really petty ones), the chat product should offer a feature to funnel reliable reports to HR (or the direct manager, or selectively either, in case it's the direct manager that's the problem). The chat product should not become a way for problem users to defy an HR mandate of no contact. Managers/HR should be able to use the chat product to enforce zero contact between two users, and (if warranted) report any attempts at contact to the appropriate part of the authority chain.

In general, trust users to tell the difference between something that's threatening and something that's obnoxious, and give them the tools to react accordingly. In a work context, threats would warrant block-and-report; obnoxiousness would warrant something like auto-collapse and a possible report to a forum moderator (or manager, if those are different).

When modeling threats to a user, these are the general use cases I'm aware of.

Non-threatening:

It should both streamline reporting for the user and give reliable, tamper-resistant reports to the appropriate department. (In particular, it should not originate email that a savvy user could spoof.) Consider the possibility of bad reports as an attack vector.

It's been my experience that users (particularly a certain type of developer/engineer) tend to resist formal reporting, in the belief that such things can and should be worked out between them at the user level, and that some things are best handled by using rules to send that guy who posts fifteen links a day to the social group to /dev/null. Removing the ability to send that guy's posts to /dev/null increases the general level of crankiness and unwillingness to use the chat product. Making any reporting into a formal issue can result in people waiting until things get really bad to involve outside help. The choices should not be a binary between "put on your adult pants and choose not to respond" and "file a formal grievance with HR, consuming hours of company time".

Yes, it is theoretically possible to directly contact someone who is causing you a problem and ask them to stop. Imagined conversation:

This typically only has good results if the other party considers you a superior or a peer, and/or if they're doing something technically incorrect, like putting bug reports in the wrong channel, including unhelpful information, or not including necessary information. When this goes wrong, it escalates the situation from something merely obnoxious for one person to the beginnings of an interpersonal feud. Alternate scenario:

[This specific exchange has never happened, but is pieced together from actual conversations, including between people I consider friends.]

The risk of starting a feud from a disagreement is disproportionately larger for women and oppressed minorities, when someone who unconsciously considers themselves a social superior reacts badly to having their personal choices criticized by someone they feel is not authorized to make that criticism. (This also happens across other thresholds than gender, race, and sexual orientation -- poor vs. not-poor, self-taught vs. formally educated, spoon-fed vs. bootstrapped, technical support vs. engineer, junior vs. senior, front end vs. back end -- anywhere that social classes and biases exist.) Or it can happen when someone is hyper-vigilant and sees an intentional class-based attack when no conscious bias was present, or when someone is tired of dealing with the unconscious biases of others. The instinct is to double down, not stop. It takes a lot of conscious examination of your own biases, and a commitment to fix your reactions, to stop doing things like that, and not everyone has chosen to undertake that work. The barrier to action is also larger for people (often women) who have been deliberately groomed by society not to make waves, and to take emotional labor upon themselves rather than asking the other party to fix things.

A report would:

The product should be able to be made aware of management chains, in an automated way, from external sources.

(If anyone is attending Open Source Bridge in Portland this June and would like to discuss this general topic in person, please look me up there; I'm Azure Lunatic, hanging out with the Dreamwidth crowd.)
As I understand it, there are three things needed to get this done:
Yep, I believe that's correct.
Currently I'm concentrating on Zulip translation; if anyone is interested in picking this up, please proceed.
bgilbert commented Oct 8, 2015

Add a way for individuals to prevent specific people from communicating with them. This will be important in deployments with untrusted users (e.g., a discussion site for an open-source project, open to anyone on the Internet).