Enable Twitter-style Reply Controls on a Per-Toot Basis #14762
Heartily agreed. This has the potential to eliminate "reply-guy"-ism in one fell swoop. In addition, one of Mastodon's selling points is that it has more privacy- and moderation-friendly features than Twitter. This is an actually useful feature Twitter has introduced which new adopters will find missing if they move to Mastodon. Mr. President, we must not allow a mineshaft gap! |
It's a valuable feature (although I'm slightly afraid this could worsen “echo chambers”), and something we are investigating. However, this requires major changes to the protocol, and there are a few caveats to consider:
Also, there's something that I'm not sure about: what is Twitter's behavior when replying to a reply? Is the reply policy locked to that of the original post, or can it be changed down the road? EDIT: also, note that we are investigating other ways to handle replies, but it's going to be a long road, and the caveats above do remain |
Thank you for your explanation of the technical issues involved! In my opinion, preventing harassment is more important than preventing "echo chambers", which users can choose to avoid by simply…following a wider range of people. |
No offence, but isn't that the purpose of the Block facility? |
No, because blocking happens after abuse. Limiting who can reply reduces the need to block. |
Exactly |
Let's keep +1 comments to a minimum. Personally, I think this introduces a brilliant vector for offensive posts. If this were to get implemented, I could say with confidence that someone is going to post some racist propaganda, then prevent people from responding to it. I'm mixed. This may be good for fixing your problem, but it's definitely going to lay the foundation for a lot more problems, which is being grossly underestimated. Following this, it may not be appropriate to add features that limit the social aspect of a social platform. This may be more trouble than it's worth. I'm not sure. |
The purpose of the feature is to let people control who responds to a post, such that a person does not have to make a post followers-only to avoid abusive responses. This is the equivalent of a blog post without comments. If a reader feels compelled to respond to a blog post without a comment facility, or a toot with replies off, they may do it from their own post on their own blog or social media account. If they disagree with the content, they can mute or block the account. If the content violates that instance's ToS, it can be reported. I'm still not clear on your objection. |
Do we have any evidence that people responding to abusive tweets/toots with criticism makes the person less likely to post abusive tweets/toots in future?
Do we have any evidence or experience of this from Twitter, where they have this feature? Is it being abused by racists/other nasty types to prevent people from responding? My instinct says that people posting abusive stuff will leave replies on because they want people to reply, to get them more attention and outrage. If someone is posting something abusive, they should be reported. I don't believe that allowing people to turn off replies will protect abusive posters. |
i would suppose if i find an offensive post on my TL, that i block them and report them instead of engaging with them. the point from what i gather from this enhancement request is that such a person is preventing anyone from responding to that specific offensive toot. sounds good to me, i just block and report. i dont want to engage with that person anyways, but it also prevents anyone else from engaging them unless specifically mentioned.
just my 2 cents |
I just realised that Mastodon doesn't really have an equivalent to this feature. 😕 |
I don't want to just +1 this so I'll try and elucidate my thoughts as much as possible. I want somewhere to go now that Musk has bought Twitter; not to get too political, but it's clear that his policies will result in a rise in hate speech. Further, I feel that Twitter will lose the feature to control which tweets are replied to, and by whom, in the near future, to further enable and realise these goals. I'm considering Mastodon for where I want to go, but it isn't a place I can land without this feature in place.

Why? I've suffered various forms of abuse; I had endured over a decade of physical abuse and psychological torture, and it's very difficult for me to interact with people. I know of others who're in the same boat, and it's very easy to silence us through numbers. If every conversation were one-to-one, it would be potentially possible to handle these volatile situations without them resulting in a traumatic attack (anxiety, PTSD, et cetera). However, one tactic that's often used to silence those who're not of a more healthy, neurotypical nature is numbers. If we dare speak anything that they disagree with, they'll use numbers to silence us. It's a valid tactic. It's quite impossible to deal with a number of people replying at the same time; it's... overloading, and it results in the desire to simply give up.

One of the unpopular topics I like to tackle is the intent of the player character in video games. As an empath, I tend to get very immersed in games and I find myself unable to do something I—personally—wouldn't do while I'm playing these games. For example, harming animals just gives me flashbacks to when I was doing voluntary veterinary and animal sanctuary work. I can't just rush in and murder foes for a variety of reasons. For one, as an abuse victim I have a sense of distrust when it comes to what people would consider to be normal and familiar narrators, so if I'm tasked with something, I'd want to know what my foe did wrong. I want to investigate.
I'm more likely to want to heal or incarcerate than murder anyway, because that's what I'd want to do. I mean, I don't consider the home invasion of a dragon's den, smashing her eggs, killing her kids, slaying her, and then stealing all of her loot a valid reaction just because she's mind-controlled. I'd want to rescue her. I often talk up games from the past as well that had other playstyles, such as avoidance, thievery, and so on. I enjoy talking about these games as I feel they've fallen by the wayside in lieu of how easy it is to make an open world game where the goal is to kill everything in sight. I don't begrudge people doing that, but as an empath who gets deeply immersed, I feel a strong disconnect when my character does something I wouldn't. So I just... I can't.

I often talk to game developers about this, as I know that as an empathetic abuse victim, I'm not alone. I tag them in and share my thoughts. I've found that since Twitter introduced the ability to control those able to reply to one's tweets, I've had more of a voice. I've been able to speak very candidly. I've never had a voice like that before, so it was refreshing. However, as you can probably guess, my intent doesn't really matter to the majority of gamers. It doesn't matter that I'm not targeting them, that I have no interest in ruining their fun, that I'm not at all trying to aggro them or take anything away from them. All I'm trying to do is raise awareness of other demographics who'd benefit from gameplay styles that either don't exist yet (an open world game featuring a parkour healer who strips afflictions from cursed creatures to save them), or have been long forgotten. There is a lot of... Well, I'm hesitant to say, but it's my intent to be as sincere as I can here, so... There's a lot of white privilege when it comes to video games. It's the way of the human to feel that if they're the majority, they're entitled to what they feel familiar with.
Anyone talking about anything dissimilar raises ire, as they see it as a threat to their entertainment resources; that they would have less is all they can think about. It's very selfish. So you can't really talk with most gamers about why an orc should or would be evil by their nature, as orcs typically are evil, and questioning that would be challenging their right to have games where they can kill evil orcs. They don't really think about anyone who's excluded from gaming via bio-essentialism, but I'm getting off track. I covered all of this to make the point that I have a valid topic that I want to raise awareness of. It's become my raison d'être in recent years, outside of climate activism and general hatred of billionaires. That, and following artists who draw art of dragons (I like dragons), is most of what I do on social media.

In prior days, I would've just gathered dragon art and not said so much, as... like I said, I didn't have a voice. If I dared to speak up about any of the unusual opinions I had regarding video games, the usual crowd of gamers would show up en masse to ensure that I was put in my place and that I daren't ever challenge their entitlement. (I don't really think that a small percentage of games that aren't of that homogeneous mass of open world murderhobo simulators would really affect them, but that doesn't matter to them.)

Twitter, rather than Mastodon (I'm sorry to say), gave me a voice. It allowed me to talk to game developers without being hassled, harassed, and put through the ritual and rites of online abuse. That was rad. I mean, it's really great being able to actually talk on the Internet. I hadn't felt so able to speak my mind since the days of Usenet; it really is a profoundly remarkable feeling. Now, Mastodon may never have this feature, but... I'd be sad about that. It's my favourite platform. I tried Mastodon once before, long ago, and I had exactly the experience I thought I would.
The same as I'd had on any other social media platform. Until Twitter devised those wonderfully genius features, I'd given up on social media, and even on trying to talk to people about the things that matter to me. I have traumas, I have PTSD; it's incredibly easy to shut me down and silence me. I just gave up. And now I'm going to lose my online voice again. I'd love to leave Twitter behind for what I know is a much better platform, but I need to be able to limit who can reply to my posts so that I can have a voice. So I'm putting my feelings here.

This is the perfect time to steal away everyone like me: those who have a unique message, but not the capacity with which to speak it unless under very specific circumstances. You could give us a voice, just as Twitter had, and it would be very much appreciated. It wouldn't even need to allow followers; if I could just set some posts so that they can't be replied to, that'd work too. Just so I can get certain thoughts out there without being bogged down and silenced by people who use that tactic to... well, silence others, like I said.

Perhaps I've given you something to consider, here? I really hope so. I like Mastodon. I like Mastodon a lot. It's where I want to go. Maybe soon I can? |
I had a lengthy discussion with @trwnh about this and other topics (about federation and UX implications of various features we are considering but have no clear path towards yet), and I think we made some progress on how this could be implemented. Several projects claim to solve this by drastically changing how posts are distributed (requiring every reply to go through the original author's server and have that server responsible for distributing the reply), but that's a very significant change, does not solve everything (discovery after the fact remains difficult), and has other issues (depending on how it's done, the person replying may lose agency over who is allowed to see their reply). Instead, I think we can work towards something like the following changes.

Rough protocol proposal

Step 1: the author signals that they don't want everyone to reply

Thanks to an additional property on the post object, users can announce who they allow to reply. Some existing projects have already proposed, or are already using, that kind of property. To the best of my knowledge, they are:

Step 2: client software interprets it to assess whether a user is allowed to reply

Based on the flags set in step 1, client applications can present the expected policy, as well as disable the reply button if the user is known not to match the policy. At this point, this is still purely advisory: it does not prevent software that is unaware of the reply policies, or willfully ignoring them, from posting a reply anyway. And in the case of policies like “only people I follow”, third-party servers cannot reject any reply, as they have no way of knowing with certainty whether the person replying is followed by the person being replied to.

Step 3: the server submits the reply to the remote server

This is pretty much what currently happens: when posting a reply, the server of the person replying would submit the reply to the person being replied to. However, being aware of a reply policy and an enforcement mechanism, the replying server could hold off sending the reply to anyone else, consider the message as pending, and wait for step 4. This is how implementations relying on the original poster's server distributing replies work, but it doesn't require forfeiting control over who you distribute the reply to.

Step 4: the original poster's server
|
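The four steps above could be sketched roughly as follows, from the replying server's point of view. Everything here is a hypothetical illustration: the property name `replyPolicy`, the policy values, and the `pending` state are placeholders, not part of any agreed vocabulary.

```python
# Sketch of the proposed reply-approval flow. All names ("replyPolicy",
# the policy values, the "pending" state) are hypothetical, not spec.

PUBLIC = "public"
FOLLOWERS_ONLY = "followers"
MENTIONED_ONLY = "mentioned"

def advisory_can_reply(post, replier, known_followers):
    """Step 2: a purely advisory client-side check. It can grey out the
    reply button, but it is not authoritative: a third-party server has
    no certain knowledge of who the original author follows."""
    policy = post.get("replyPolicy", PUBLIC)
    if policy == PUBLIC:
        return True
    if policy == MENTIONED_ONLY:
        return replier in post.get("mentions", [])
    if policy == FOLLOWERS_ONLY:
        # known_followers is only a local, possibly stale approximation
        return replier in known_followers
    return False

def submit_reply(post):
    """Steps 3-4: if the post carries a reply policy, hold the reply in
    a 'pending' state and distribute it more widely only once the
    original poster's server approves it (step 4)."""
    if post.get("replyPolicy", PUBLIC) == PUBLIC:
        return "distributed"
    return "pending"  # awaiting approval from the original poster's server
```

The key point of the sketch is that the replying server keeps control of distribution: it merely withholds wider delivery until approval, rather than handing the reply over to the original author's server.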
@ClearlyClaire's proposal looks really good to me. Is this something you'd start by implementing in glitchsoc? |
Not immediately so, I have other short-term priorities for Mastodon itself right now. I also don't want to rush an implementation before we get a chance to agree on vocabulary and so on with other projects. |
i've been thinking about the vocabulary for this proposal, and here's what I've got so far:

Step 1: the author signals that they don't want everyone to reply

we covered the existing properties, and honestly, while they could work, they each have semantic pitfalls in their naming:

however, given the mechanism of Accept Note that would be used, we can use Accept Follow and its related

Step 2: client software interprets it to assess whether a user is allowed to reply

per
here's a way we can simplify things greatly: just use Collections. from pixelfed (and i think originally litepub), we had

```json
"capabilities": {
  "announce": "https://www.w3.org/ns/activitystreams#Public",
  "like": "https://www.w3.org/ns/activitystreams#Public",
  "reply": null
}
```

now, there are several issues with this. first of all, the mapping of all these capabilities into a set is an unnecessary level of nesting that only really makes sense if you view it as a single ACL with various facets. secondly, it uses a null value, which would get stripped and is functionally the same as not including it. but there is one idea we can salvage from this, and that is to use an array of actors and/or collections to signal who is allowed to perform the action. this works similarly to

in terms of vocabulary, i am tentatively leaning toward an explicit

Step 5: a property that points to the Accept activity from step 4.

idk maybe

Step 0: Deciding which author gets to approve replies

this is a point of contention for implementations which have a concept of "comments" rather than "replies". in those systems, you generally have authority belonging not to the immediately-replied-to post, but rather to the first-class "post" object, underneath which are second-class "comment" objects. in other words, the comments exist in "context" of some other post. the applications of this are numerous:

ideally i think this would be expressed by

essentially, this step's vocabulary is left open-ended because there are two or three different semantic meanings here that aren't always clearly separated:

summary / practical flow

vocabulary (not final)
sample usage
if no |
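The "just use Collections" idea above could be illustrated like this. The property name `canReply` is a placeholder of my own, not the vocabulary actually being proposed, and the object shape is only a sketch:

```python
# Illustration of signalling who may reply with an array of actors
# and/or collection IRIs. "canReply" is a hypothetical property name.
AS_PUBLIC = "https://www.w3.org/ns/activitystreams#Public"

post = {
    "id": "https://example.social/notes/1",
    "attributedTo": "https://example.social/users/alice",
    "canReply": [
        "https://example.social/users/alice/followers",  # a collection
        "https://other.example/users/bob",               # a single actor
    ],
}

def reply_allowed(post, actor, collection_members):
    """collection_members maps collection IRI -> locally-known member set.
    Since followers collections are often private, this check can only
    ever be advisory on third-party servers."""
    allowed = post.get("canReply")
    if allowed is None or AS_PUBLIC in allowed:
        return True  # no restriction expressed
    for entry in allowed:
        if entry == actor or actor in collection_members.get(entry, set()):
            return True
    return False
```

This keeps the flat "array of actors and collections" shape without the nested `capabilities` map or the strippable `null` value.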
an open question remains: how do we respect the reply policies of more than one party? for example, say we want to respect both the top-level post and also the person we are immediately replying to. would we need two Accept activities? how do we handle a |
I'd like to see it more granular and robust than even this, with the following options instead:
@trwnh Out of an abundance of caution, I'd like to say that replies to posts with any setting less permissive than "Everyone" should automatically be restricted to one on one conversations between just the top level poster and the person replying. Otherwise, they could easily @ others or add hashtags to invite a dogpile. |
I'd like to add some rationale behind my support for this feature. There has been a lot of discussion in threads started by Black folks on the fediverse. Harassment and dogpiling is a serious issue, especially in big instances over 50K members where moderation is insufficient to handle that amount of abuse. Picture a Black person complaining about harassment and racism, and saying "White people PLEASE DON'T REPLY", and being met with the following replies by... sigh... 🙄 white people.
(And no, I'm not exaggerating; I wish this was an edge case, but it's depressingly common, especially on large instances) If a user can limit who can reply to a post (or at least suppress the replies from their home server), the number of unwanted replies like the above will be much easier to handle. So please, please, PLEASE add this feature soon! 🙏 |
I appreciate the thoughtful consideration above, both of the social and safety aspects of this feature, and the technical implementation / federated protocol considerations.

Moderating individual replies

In addition to a "mode" for replies (eg, "allow all" / "approve" / "none"), there is a use case for original authors to moderate replies to their posts. Extending @emceeaich's

Example moderation actions may include "pin reply", "hide reply", "hide reply and block user from replying for 1 week" - there's room for implementations to experiment with different social and safety factors. The minimum viable moderation action, as a safety feature, is "hide reply".

Scenario 1 - Hide abusive reply

An author A receives an abusive reply from B that doesn't individually violate the rules of either A's or B's instances. (There are many specific examples of such replies in the message immediately above.) A does not want to promote the reply to their followers - so they hide the reply. A may choose to additionally report the post to their instance moderators for further action, but A has hidden it from their audience while the report is pending, and regardless of the moderators' decision.

Scenario 2 - Pin helpful reply

An academic researcher posts asking a question for specialists in their field. They receive dozens of replies like "oh, I'm not sure!" and "that's a good question!" along with one reply that is well-researched and accurately answers the question. They want to be helpful for their followers, so they boost the answer reply. Additionally, for posterity for anyone who comes across their question thread in the future, the original poster marks the answer as a "pinned reply" so that clients know to feature it or give it special UI treatment.

(I'm happy to open a separate thread if the participants here think it's warranted - I'm new to the Mastodon dev community and still trying to get a feel for community norms for lumping vs splitting feature discussion) |
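A minimal sketch of per-reply moderation along the lines described above (hide an abusive reply, pin a helpful one). The data model and action names are illustrative only; nothing here is an existing Mastodon API:

```python
# Hypothetical per-reply moderation model. Action names ("hide", "pin",
# "show") are placeholders for whatever implementations experiment with.
from dataclasses import dataclass, field

@dataclass
class ThreadModeration:
    # reply_id -> one of "show" (default), "hide", "pin"
    actions: dict = field(default_factory=dict)

    def moderate(self, reply_id, action):
        if action not in {"show", "hide", "pin"}:
            raise ValueError(f"unknown moderation action: {action}")
        self.actions[reply_id] = action

    def visible(self, reply_ids):
        """Return replies for display: hidden ones dropped, pinned ones
        first, everything else kept in original order (stable sort)."""
        shown = [r for r in reply_ids if self.actions.get(r) != "hide"]
        return sorted(shown, key=lambda r: self.actions.get(r) != "pin")
```

Hiding only affects how the original author's thread is presented; it is independent of (and complementary to) reporting the reply to instance moderators.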
I'd like to expand on what I'd mentioned prior. I understand that limiting replies might invalidate moderation and blocking users but often those tools aren't enough. For anyone familiar with KiwiFarms, mobilisation has become a real issue, and they're getting really good at it. Furthermore, manipulators are excellent chameleons. If they weren't, they wouldn't be very successful and they'd be easy to spot. Unless a moderator has some background in psychology, they aren't going to understand what a manipulator looks like. They're very good at being charismatic, friendly, and looking like they're trying to help while saying exactly what they know will hurt the person they're trying to mess with the most. And they will do this for pleasure. As someone with experience with manipulators and abuse, monsters do exist. They look exactly like "us." An expert manipulator will be able to confuse a moderator with gaslighting, twisting truths, and faux plays on empathy using cognitive empathy to do so. They'll play on the concept of disorderly behaviour, acting as though they were trying to help and that the person they'd targeted was disturbing the peace. The "let me help you" that other ethnicities get from white trolls, as explained by yukimx2501, is commonplace and distressing, and it isn't the only kind. Differently-abled people, the neurodiverse, Trans, Otherkin, plural folk, and many others will be targeted in this way. And due to the herd effect, when some see trolls acting well-meaning and charismatic, they might then pile on their victims too. For understanding of why this effect occurs, I would advise learning about the just-world fallacy, which is a real problem that abuse victims regularly face. In an ideal world, there would be enough moderation with enough of a background in psychology to be able to respond to the trolls quickly. In a limited-reply scenario, all the trolls can do is report a user. 
At which point, this allows the reported user to explain to the moderation why they're likely being targeted, without the moderation dealing with an overloading scenario of troll mobilisation where the wrong decision might be made. As I mentioned in my own post, I've seen mobilisation used all too often to silence people. I'm glad that there's evidence out there now of this, where people are coming forward and talking about how much of a problem this is. And the only way to tackle it is to allow people to set their posts to limit replies. That way, a person who's often targeted can have a way to decide whether they want a particular toot to be replyable or not. In this way, they can feel ready to deal with what happens and flag that toot as no replies, or even delete it if necessary. A lot of those who're targeted will have severe social anxieties, and it'll be easy to force them off the platform. I've been... painfully aware of how Alt-Right mobilisations have led to the suicides of Trans and Otherkin youths. This is what you might be sensing in my tone, if you are. As I said, I've had experiences. This is why any modern social media platform needs this as a way to allow those who're vulnerable to just be, to control their own environment. And with Twitter circling the drain and Mastodon looking like the most viable alternative? It's become an imperative to consider this now more than ever. I wish moderation could be the solution. In an ideal world, it would be. But thinking that it and blocking users could ever be viable ignores the reality of mobilisations of excellent manipulators that vulnerable people have already endured.

Edited to Add: I also want to point out that, yes, they will create low-hanging-fruit accounts where they just make easy, friendly posts to use for trolling. Along with hacking existing, reliable accounts for that purpose. I've lurked on KiwiFarms enough to come to understand this. After you've been traumatised by the things I have...
You want to understand. And I've come to see how awful these monsters really are, how the vulnerable need to be protected, and how limiting replies is the only viable solution. |
Some feedback on #14762 (comment). I hope it's useful. There's a missing step. Step 2.5: The client submits the reply to their home server. What happens if the server of the person who submits the reply ignores steps 3 and 4, and just posts the reply anyway? From a client's perspective, are steps 2.5, 3, 4, and 5 synchronous, so the client can immediately tell the user if their reply has been accepted? Or asynchronous, and the client is going to need to periodically poll the server to find out if the reply has been accepted? If a reply has not been accepted, what's the expected behaviour?
This feels like it could be a DDoS vector. A bad actor could submit replies to many different Mastodon servers. Each one of those servers is then going to try and contact the original poster's server (per step 3). The bad actor has been able to take thousands of requests, fan them out one per server, and turn that into a fan-in of thousands of requests to a single server. At step 3, what happens if the original poster's server is unavailable? How long are other servers supposed to hold off on pending replies before they fail them as being uncheckable? The "If a reply has not been accepted, what's the expected behaviour?" question is relevant here too. Is there an expectation that the original poster can review pending replies, and decide on a reply-by-reply basis whether to allow each one? Or is the acceptance of a reply bounded by a set of limited rules ("only people I follow", "only people I follow, and people they follow") that can't be changed? If the original poster can review pending replies, what does the API surface for that look like? |
Then the server of the person who submits the reply considers it valid, and the person submitting the reply will see it as valid. But other participants will see that an approval is needed and that there is no approval, and treat the reply accordingly (e.g. drop it altogether, or detach it from what it is in reply to).
That is a good question. My idea was to have visible “pending” and “rejected” states, with rejected posts being automatically cleaned up after a while, but letting the person replying see that their post is rejected.
This is not really different from fetching the reply in the first place! A bad actor can already submit replies to many different Mastodon servers, and each one of those servers is going to try and fetch the thing it is supposedly in reply to.
The protocol proposal supports both, but my (at least shorter-term) goal was to only have a limited set of rules. |
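The visible "pending" and "rejected" states described above, with rejected replies cleaned up after a while, could be sketched like this. The state names and the retention period are assumptions for illustration, not anything specified:

```python
# Sketch of the pending/rejected reply lifecycle discussed above.
# State names and the grace period are assumed, not specified anywhere.

GRACE_SECONDS = 7 * 24 * 3600  # assumed retention for rejected replies

class ReplyApproval:
    def __init__(self):
        self.state = "pending"   # shown to the replier as awaiting approval
        self.rejected_at = None

    def accept(self):
        self.state = "accepted"  # safe to distribute / display normally

    def reject(self, now):
        self.state = "rejected"  # the replier can see it was rejected
        self.rejected_at = now

    def should_purge(self, now):
        """Automatically clean up rejected replies after a grace period,
        so the replier gets to see the rejection before it disappears."""
        return (self.state == "rejected"
                and now - self.rejected_at > GRACE_SECONDS)
```

The grace period is what lets the person replying learn their post was rejected, rather than having it silently vanish.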
Additional directly related feature request: the ability to exclude from notifications or home (or other) timelines posts which can not be replied to by the active user. Some are here for discussion, not comment-free blog posting. This, of course, is in no way commentary on the broad range of legitimate use-cases outlined in this issue so far. I'd also like to be able to specify on a per-post basis the demographic permitted to respond; mentioned, mutual follows, followers, &c. It'd be extremely useful. |
This may also be a duplicate of #8565 ? |
Is this possible with the current ActivityPub spec, and is there a way for the Mastodon API to tell clients this state so they can properly interact with toots? |
The title of this should be updated; it should not assume knowledge of features that may or may not be available on other apps, or it will age like milk. Probably should be closed as a duplicate of #8565 |
Not a duplicate. #8565 just wants the ability to disable replies wholesale, period. This issue requests the ability to control who, if anyone, can reply on a granular level. Also, what is this "assume knowledge" crap about? Nothing is assumed, we're referencing a known feature of an app Mastodon is clearly a clone of. Whether the Elongated Muskrat version of it (X) canned that or not contributes nothing to this discussion and just gets me annoyed that I got such a trivial email in my inbox. |
The (United States Constitution's) 2nd Amendment refers to "well regulated militias" and "firearms" You're thinking of the 1st Amendment, which limits government controls over private speech Nothing to do with ActivityPub devs, I'm sure |
I'm not sure where you've seen that. The reason it's not moving forwards is that it's a very complex feature with lots of moving parts. |
But that doesn't mean it's impossible; just difficult to implement. Rejecting a feature simply because it's hard would just be lazy, IMO. It would be very disappointing if that were the reason for the request to be marked as WONTDO. It doesn't need to be a fully fledged implementation in one go; it could be done in stages. There are various stages of implementation here:
As for the commenters arguing about free speech, we should add the clarification that harassing a user is not what free speech is about; users can perfectly well quote-post (even by manually linking the post in question) on their own timeline if they consider their speech takes precedence over the other user's wishes. Given individual instances' policies (e.g. quote tooting not allowed, dogpiling not allowed), that might result in reports, banning or defederating, which fits perfectly within the "rules of the game", so to speak. The intention of this feature request is to transfer the effort and headaches of dealing with / attempting unsolicited replies from the poster to the reply commenter, taking care of the low-hanging fruit. In other words, it's an anti-spam / anti-harassment feature, and one valid reason for requesting it is that some users are bad actors, are ill-intentioned, and never intended to respect the poster's boundaries in the first place. |
I could've sworn I read something about that when looking into it a few months ago, but you're right, I don't see it. I must be confusing it with another project or something. Idk, sorry for the inconvenience. |
FYI. This, or a version of this, is on the Mastodon Roadmap. |
Seems to me that there are two parts to this suggestion:
And I don't know, but it seems to me that the first part 'should' be relatively easy. The hard part is the 2nd part. Unless there's something I'm missing, might it make sense to split this request? FWIW, you could add: 1.1 Also allow replies from accounts on the same instance. |
A different way to break it down might be:
Enforcement seems to be the hardest part. A text based Reply Warning 'seems' simple enough. At least compared to the rest. A Do Not Reply Guide could easily get very complicated, but at least those complications could be added one at a time. Suggested options:
The reason I like "same site" is, it's a bit like a pub. You can hear our conversation from a different table, but if you want to join our conversation, you have to join our table. |
Relatedly, at GoToSocial we're trying out the concept of interaction policies, which are documented here: https://docs.gotosocial.org/en/latest/federation/posts/#interaction-policy and available in |
I don't think anyone has implemented my FEP so if your proposal makes more sense, I think it makes more sense to codify it as a FEP rather than have support for two different specifications. I cannot comment on whether I agree with the changes you made from the FEP, though, as I have not been able to find the time to review them yet. In any case, reply controls are still planned, they're just a lot of work, and we're busy wrapping up 4.3.0 at the moment. |
Pitch
Twitter's reply model has been extended with some LJ-like features.
Replies to a tweet can now be restricted to: everyone (the default), people the author follows, or only people the author mentions.
Something similar to this was proposed in #8565, from two years ago, but the Twitter implementation is more robust.
One of the objections to this was the fear that a user could @-mention a user, and disable replies. Twitter's implementation is aware of this and allows any account @-mentioned to reply to any tweet.
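The @-mention exemption described above could be sketched as follows. The setting names ("everyone", "following", "mentioned") are illustrative, not Twitter's or Mastodon's actual identifiers:

```python
# Sketch of the @-mention exemption: regardless of the reply setting,
# any account mentioned in the post may always reply. Hypothetical names.

def can_reply(author, replier, setting, mentions, author_follows):
    """setting: 'everyone' | 'following' | 'mentioned' (illustrative).
    author_follows: the set of accounts the author follows."""
    if replier == author or replier in mentions:
        return True  # the author and mentioned accounts may always reply
    if setting == "everyone":
        return True
    if setting == "following":
        return replier in author_follows
    return False  # "mentioned": nobody else may reply
```

Checking mentions before the setting is what closes the "mention someone, then lock replies" loophole: the mentioned account can always respond.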
Motivation
This allows no-comment toots: set the @-mention-only reply setting and don't mention any other accounts.
Communities, especially marginalized communities, need a way to have discoverable conversations, but limit posting access to members of the community (via following.)
This can be used to assist in moderation so that, for example, people involved in the conversation don't have to spend time explaining background (i.e. "101" explanations, "google the topic", etc.) or dealing with passive-aggressive "reply-guys".
This expands and empowers users so they don't have to use brute-force blocks or mutes.