
Enable Twitter-style Reply Controls on a Per-Toot Basis #14762

Open
emceeaich opened this issue Sep 8, 2020 · 40 comments

Comments

@emceeaich

Pitch

Twitter's reply model has been extended with some LJ-like features.

Replies to a tweet can now be restricted to:

  • Replies only from accounts @-mentioned in the tweet
  • Replies only from accounts followed by the sender of the tweet and those @-mentioned in the tweet
  • No restriction

Something similar was proposed two years ago in #8565, but the Twitter implementation is more robust.

One of the objections to this was the fear that a user could @-mention a user, and disable replies. Twitter's implementation is aware of this and allows any account @-mentioned to reply to any tweet.

Motivation

This enables no-reply toots: set the reply policy to @-mentioned accounts only and mention no other accounts.

Communities, especially marginalized communities, need a way to have discoverable conversations while limiting posting access to members of the community (via following).

This can assist moderation so that, for example, people involved in the conversation don't have to spend time explaining background (i.e. 101-level questions, "google the topic", etc.) or dealing with passive-aggressive "reply guys".

This expands and empowers users so they don't have to use brute-force blocks or mutes.

@nevillepark
Contributor

Heartily agreed. This has the potential to eliminate "reply-guy"-ism in one fell swoop.

In addition, one of Mastodon's selling points is that it has more privacy- and moderation-friendly features than Twitter. This is an actually useful feature Twitter has introduced which new adopters will find missing if they move to Mastodon. Mr. President, we must not allow a mineshaft gap!

@ClearlyClaire
Contributor

ClearlyClaire commented Sep 10, 2020

It's a valuable feature (although I'm slightly afraid this could worsen “echo chambers”), and something we are investigating. However, this requires major changes to the protocol, and there are a few caveats to consider:

  • implementations/servers which do not get updated to support whatever protocol changes we come up with (or maliciously ignore them) will still be able to reply to any toot. The best we can ever do is have those replies not appear on “well-behaved” servers implementing the protocol
  • this dramatically changes how threads can be resolved: so far, toots can be processed before processing their ancestry, but if we need to check whether a reply is allowed, we probably need to fetch the whole thread before saving anything to the database, which means we will probably have to limit the length of discoverable threads or something
  • the list of people who are blocked or follow someone isn't necessarily public, so the originating server would have to vet each toot independently. This means the originating server can apply different rules than those advertised or understood by other servers, which may lead to replies being dropped for no obvious reasons. One could make an exception that mentioned users do not need the originating server to vet the reply, I guess.

Also, there's something that I'm not sure about: what is Twitter's behavior when replying to a reply? Is the reply policy locked to that of the original post, or can it be changed down the road?

EDIT: also, note that we are investigating other ways to handle replies, but it's going to be a long road, and the caveats above do remain

@nevillepark
Contributor

Thank you for your explanation of the technical issues involved!

In my opinion, preventing harassment is more important than preventing "echo chambers", which users can choose to avoid by simply…following a wider range of people.

@resynth1943

In my opinion, preventing harassment

No offence, but isn't that the purpose of the Block facility?

@emceeaich
Author

No, because blocking happens after abuse. Limiting who can reply reduces the need to block.

@nevillepark
Contributor

Exactly

@resynth1943

resynth1943 commented Sep 19, 2020

Let's keep +1 comments to a minimum.

Personally, I think this introduces a brilliant vector for offensive posts. If this were to get implemented, I could say with confidence that someone is going to post some racist propaganda, then prevent people from responding to it.

I'm mixed. This may be good for fixing your problem, but it's definitely going to lay the foundation for a lot more problems, a risk that is being grossly underestimated.

Following this, it may not be appropriate to add features that limit the social aspect of a social platform.

This may be more trouble than it's worth. I'm not sure.

@emceeaich
Author

The purpose of the feature is to let people control who responds to a post, such that a person does not have to make a post followers-only to avoid abusive responses.

This is the equivalent to a blog post without comments.

If a reader feels compelled to respond to a blog post without a comment facility, or toot with replies off, they may do it from their own post on their own blog, or social media account.

If they disagree with the content, they can mute or block the account. If the content violates that instance's ToS, it can be reported.

I'm still not clear on your objection.

@Cassolotl

Cassolotl commented Sep 19, 2020

If this were to get implemented, I could say with confidence that someone is going to post some racist propaganda, then prevent people from responding to it.

Do we have any evidence that people responding to abusive tweets/toots with criticism makes the person less likely to post abusive tweets/toots in future?

This may be good for fixing your problem, but it's definitely going to lay the foundation for a lot more problems, which is being grossly underestimated.

Do we have any evidence or experience of this from Twitter, where they have this feature? Is it being abused by racists/other nasty types to prevent people from responding?

My instinct says that people posting abusive stuff will leave replies on because they want people to reply, to get them more attention and outrage.

If someone is posting something abusive, they should be reported. I don't believe that allowing people to turn off replies will protect abusive posters.

@test2a

test2a commented Oct 3, 2020

Personally, I think this introduces a brilliant vector for offensive posts. If this were to get implemented, I could say with confidence that someone is going to post some racist propaganda, then prevent people from responding to it.

i would suppose that if i find an offensive post on my TL, i block them and report them instead of engaging with them. the point, from what i gather from this enhancement request, is that such a person is preventing anyone from responding to that specific offensive toot. sounds good to me; i just block and report. i don't want to engage with that person anyway, and it also prevents anyone else from engaging them unless specifically mentioned.

  1. "observers" wouldn't get trigger-happy and respond to the offender; they can shout, but no one would respond to them until they're reported and probably banned.
  2. brigading would be reduced, as people would not have a way to organize against someone except by reporting them.

just my 2 cents

@Liquidream

I just realised that Mastodon doesn't really have an equivalent to this feature. 😕
It's not often that I use it myself, but the odd times that I do need it, it seems there's currently no way to achieve this, sadly.
Hoping that this is still under consideration?
Thx

@LupinePariah

I don't want to just +1 this so I'll try and elucidate my thoughts as much as possible.

I want somewhere to go now that Musk has bought Twitter, not to get too political but it's clear that his policies will result in a rise in hate speech. Further, I feel that Twitter will lose the feature to control which tweets are replied to and by whom in the near future to further enable and realise these goals.

I'm considering Mastodon for where I want to go, but it isn't a place I can land without this feature in place. Why? I've suffered various forms of abuse; I endured over a decade of physical abuse and psychological torture, and it's very difficult for me to interact with people. I know of others who're in the same boat, and it's very easy to silence us through numbers. If everything was a one-to-one conversation, it would be potentially possible to handle these volatile situations without it resulting in a traumatic attack (anxiety, PTSD, et cetera). However, one tactic that's often used to silence those who're not of a more healthy, neurotypical nature is numbers. If we dare speak anything that they disagree with, they'll use numbers to silence us. It's a valid tactic. It's quite impossible to deal with a number of people replying at the same time; it's... overloading, and it results in the desire to simply give up.

One of the unpopular topics I like to tackle is the intent of the player character in video games. As an empath, I tend to get very immersed in games and I find myself unable to do something I—personally—wouldn't do while I'm playing these games. For example, harming animals just gives me flashbacks to when I was doing voluntary veterinary and animal sanctuary work. I can't just rush in and murder foes for a variety of reasons. For one, as an abuse victim I have a sense of distrust when it comes to what people would consider to be normal and familiar narrators, so if I'm tasked with something I'd want to know what my foe did wrong. I want to investigate. I'm more likely to want to heal or incarcerate than murder anyway because that's what I'd want to do. I mean, I don't consider the home invasion of a dragon's den, smashing her eggs, killing her kids, slaying her, and then stealing all of her loot a valid reaction just because she's mind-controlled. I'd want to rescue her.

I often talk up games from the past as well that had other playstyles, such as avoidance, thievery, and so on. I enjoy talking about these games as I feel they've fallen by the wayside given how easy it is to make an open world game where the goal is to kill everything in sight. I don't begrudge people doing that, but as an empath who gets deeply immersed I feel a strong disconnect when my character does something I wouldn't. So I just... I can't.

I often talk to game developers about this as I know that as an empathetic abuse victim, I'm not alone. I tag them in and share my thoughts. I've found that since Twitter introduced the ability to control those able to reply to one's tweets, I've had more of a voice. I've been able to speak very candidly. I've never had a voice like that before, so it was refreshing.

However, as you can probably guess, my intent doesn't really matter to the majority of gamers. It doesn't matter that I'm not targeting them, that I have no interest in ruining their fun, that I'm not at all trying to aggro them or take anything away from them. All I'm trying to do is raise awareness of other demographics who'd benefit from gameplay styles that either don't exist yet (open world game featuring a parkour healer who strips afflictions from cursed creatures to save them), or have been long forgotten. There is a lot of... Well, I'm hesitant to say but it's my intent to be as sincere as I can here, so... There's a lot of white privilege when it comes to video games.

It's the way of the human to feel that if they're the majority, they're entitled to what they feel familiar with. Anyone talking about anything dissimilar raises ire as they see it as a threat to their entertainment resources, that they would have less and it's all they can think about. It's very selfish. So you can't really talk with most gamers about why an orc should or would be evil by their nature, as orcs typically are evil and questioning that would be challenging their right to have games where they can kill evil orcs. They don't really think about anyone who's excluded from gaming via bio-essentialism, but I'm getting off track.

I covered all of this to make a point that I have a valid topic that I want to raise awareness of. It's become my raison d'etre in recent years, outside of climate activism and general hatred of billionaires. That and following artists who draw art of dragons (I like dragons) is most of what I do on social media. In prior days, I would've just gathered dragon art and not said so much, as... like I said, I didn't have a voice. If I dared to speak up about any of the unusual opinions I had regarding video games, the usual crowd of gamers would show up en masse to ensure that I was put in my place and that I daren't ever challenge their entitlement. (I don't really think that a small percentage of games that aren't part of that homogeneous mass of open world murderhobo simulators would really affect them, but that doesn't matter to them.)

Twitter, rather than Mastodon (I'm sorry to say), gave me a voice. It allowed me to talk to game developers without being hassled, harassed, and put through the ritual and rites of online abuse. That was rad. I mean, it's really great being able to actually talk on the Internet. I hadn't felt so able to speak my mind since the days of Usenet, it really is a profoundly remarkable feeling. Now, Mastodon may never have this feature, but... I'd be sad about that. It's my favourite platform. I tried Mastodon once before, long ago, and I had exactly the experience I thought I would. The same as I'd had on any other social media platform. Until Twitter devised those wonderfully genius features, I'd given up on social media and even trying to talk to people about the things that matter to me. I have traumas, I have PTSD, it's incredibly easy to shut me down and silence me. I just gave up.

And now I'm going to lose my online voice again.

I'd love to leave Twitter behind for what I know is a much better platform, but I need to be able to limit who can reply to my tweets so that I can have a voice.

So I'm putting my feelings here.

This is the perfect time to steal away everyone like me, those who have a unique message but not the capacity with which to speak it unless under very specific circumstances. You could give us a voice, just as Twitter had, and it would be very much appreciated. It wouldn't even need to allow followers, if I could just set some tweets so that they can't be replied to, that'd work too. Just so I can get certain thoughts out there without being bogged down and silenced by people who use that tactic to... Well, silence others, like I said.

Perhaps I've given you something to consider, here? I really hope so. I like Mastodon. I like Mastodon a lot. It's where I want to go. Maybe soon I can?

@ClearlyClaire
Contributor

ClearlyClaire commented Jul 27, 2022

I had a lengthy discussion with @trwnh about this and other topics (about federation and UX implications of various features we are considering but have no clear path towards yet), and I think we made some progress on how this could be implemented.

Several projects claim to solve this by drastically changing how posts are distributed (requiring every reply to go through the original author's server and having that server be responsible for distributing the reply), but that's a very significant change, does not solve everything (discovery after-the-fact remains difficult), and has other issues (depending on how it's done, the person replying may lose agency over who is allowed to see their reply).

Instead, I think we can work towards something like the following changes.

Rough protocol proposal

Step 1: the author signals that they don't want everyone to reply

Thanks to an additional property on the post object, users can announce who they allow to reply.
This is purely informative and does not help with enforcement; it only signals that enforcement is wanted, and can be used to provide appropriate hints in the user interface.

Some existing projects have already proposed, or are already using, that kind of property. To the best of my knowledge, they are:

  • PixelFed: commentsEnabled provides a simple on/off switch, while capabilities provides a collection (e.g. “as:Public”, or a followers collection) to signal who is allowed. I could not find documentation on this aside from unrelated examples, and I think the capabilities property isn't actually used.
  • Zap: a commentPolicy property is used to set various complex properties. I don't really like the representation nor its complexity, but it exists (see https://codeberg.org/zot/zap/src/branch/release/FEDERATION.md)
  • GoToSocial has a simple replyable toggle similar to PixelFed's commentsEnabled, see https://docs.gotosocial.org/en/latest/user_guide/posts/#replyable
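
As a rough illustration, the on/off-style properties described above might look like this on the wire. This is a sketch using Python dicts in place of JSON-LD; property shapes are approximated from this thread's descriptions, not taken from normative documentation.

```python
# Sketch of the existing advisory signaling properties described above.
# Shapes are approximations based on this thread, not normative examples.
pixelfed_style_note = {
    "type": "Note",
    "content": "No comments, please.",
    "commentsEnabled": False,  # PixelFed: simple on/off switch
}

gotosocial_style_note = {
    "type": "Note",
    "content": "Replies are off.",
    "replyable": False,  # GoToSocial-style toggle, as described above
}
```

Either flag only announces intent; it does nothing to stop a non-compliant server from delivering a reply anyway.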

Step 2: client software interprets it to assess whether a user is allowed to reply

Based on the flags set in step 1, client applications can present the expected policy, as well as disable the reply button if the user is known to not match the policy.

At this point, this is still purely advisory: it does not prevent software that is unaware of the reply policies, or willfully ignoring them, from posting a reply anyway. And in the case of policies like “only people I follow”, third-party servers cannot reject any reply, as they have no way of knowing with certainty whether the person replying was followed by the person being replied to.
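
A minimal sketch of how a client might act on such an advisory policy. The `canReply` property name and the collection-membership heuristic are assumptions for illustration, not an agreed-upon vocabulary.

```python
# Hedged sketch of the step 2 client-side check. "canReply" is a
# hypothetical property name used for illustration; the check is
# purely advisory.
AS_PUBLIC = "https://www.w3.org/ns/activitystreams#Public"

def reply_button_enabled(post: dict, viewer: str, viewer_collections: set) -> bool:
    """Return True if the viewer appears allowed to reply.

    This is only a UI hint (e.g. for greying out the reply button):
    software unaware of the policy can still deliver a reply, which
    is why steps 3-6 add server-side approval.
    """
    policy = post.get("canReply")
    if policy is None:
        return True  # no policy advertised: assume anyone may reply
    allowed = set(policy)
    if AS_PUBLIC in allowed or viewer in allowed:
        return True
    # Collections (e.g. the author's followers) the viewer is known
    # to belong to; membership often can't be verified remotely.
    return bool(allowed & viewer_collections)
```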

Step 3: the server submits the reply to the remote server

This is pretty much what currently happens: when posting a reply, the server of the person replying submits the reply to the server of the person being replied to.

However, being aware of a reply policy and an enforcement mechanism, the server replying could hold off sending the reply to anyone else, consider the message as pending, and wait for step 4.

This is how implementations relying on the original poster's server distributing replies work, but it doesn't require forfeiting control over who you distribute the reply to.

Step 4: the original poster's server Accepts the reply

An Accept activity is sent to the server of the person replying, signaling that the original poster is OK with the reply (or a Reject is sent otherwise).

This is similar, but not identical, to what Zap does (Zap does not seem to send an Accept activity or similar), according to the documentation at https://codeberg.org/zot/zap/src/branch/release/FEDERATION.md

In addition, that activity, referencing both the reply and the post it is in reply to, can be dereferenced, either publicly, or only by people allowed to see the post. That Accept may be cached by the reverse-proxy.
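
For illustration, the Accept from this step could look roughly like the following (a Python dict standing in for JSON-LD). Using `target` to carry the post being replied to is my assumption; the exact shape is not settled in this proposal.

```python
# Sketch of a step 4 Accept activity. It references both the reply
# ("object") and, by assumption here, the post being replied to
# ("target"), so third parties can verify it later.
accept = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Accept",
    "id": "https://example.social/users/alice/accepts/1",
    "actor": "https://example.social/users/alice",            # original poster
    "object": "https://other.example/users/bob/statuses/9",   # the reply
    "target": "https://example.social/users/alice/statuses/1",  # assumed property
}
```

Because this activity is dereferenceable at a stable IRI, a reverse proxy can cache the response for the many servers that will fetch it in step 6.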

Step 5: the replying server distributes the approved reply

The server of the person replying takes note of the Accept and saves it to the reply. It marks the reply as posted, as opposed to pending, and optionally sends it to followers.

Whenever rendering the reply, it renders a property that points to the Accept activity from step 4.

Step 6: on the receiving end

On the receiving end, if the reply is received directly from the actor responsible for the original post, it is considered accepted.

If the reply is by someone who was mentioned in the original post, it is considered accepted.

Otherwise, the receiving end checks for the property mentioned in step 5, fetches the activity, and considers the reply accepted if it references both the reply and the original post as expected.
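
A sketch of these step 6 checks, assuming a hypothetical `replyApproval` property on the reply pointing at the Accept from step 4. The property name and the `fetch_json` helper are illustrative only; none of this is settled vocabulary.

```python
# Sketch of the step 6 verification on the receiving end.
# fetch_json stands in for dereferencing an IRI over HTTP.
def reply_accepted(reply: dict, original: dict, fetch_json) -> bool:
    # Mentioned accounts may always reply (mentions live in AS2 "tag").
    mentioned = {
        t.get("href")
        for t in original.get("tag", [])
        if t.get("type") == "Mention"
    }
    if reply["attributedTo"] in mentioned:
        return True
    # (A reply delivered directly by the original post's author would
    # also be trusted; that case depends on the delivery path and is
    # not modeled here.)
    approval_uri = reply.get("replyApproval")
    if not approval_uri:
        return False
    approval = fetch_json(approval_uri)
    return (
        approval.get("type") == "Accept"
        and approval.get("actor") == original["attributedTo"]
        and approval.get("object") == reply["id"]
    )
```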

Implementation-wise, in Mastodon, this could all happen in ThreadResolveWorker and keep things minimal by just refusing to attach the parent if the verification fails, though more aggressive handling (like deleting the reply altogether) could be considered.

Step 7 (optional): revocation

Revocation of an accepted reply could work by issuing a Reject via best-effort routes the same way Delete or Update activities are sent. Additionally, the URI in the property cited in Step 5 would raise a 404 or redirect to that Reject activity.

Upgrade path considerations

There could be gradual enforcement of the above proposal. For instance, it seems safe to start with steps 1 to 5 but skip step 6 entirely, or alter it to only consider the case where the property defined in step 5 is actually set. Of course, this means the feature will remain advisory rather than enforced for now, but it would give implementations time to catch up.

If this gets enough traction, step 6 could then be fully enforced. This means that people using software that willfully ignores these properties would still be able to reply to your posts regardless of reply policy, but those replies would not be visible to software that respects and enforces those policies.

Performance considerations

There is a performance hit, but it is limited to:

  • an extra request from the original post's server to the replying server (step 5)
  • up to an extra request per recipient of the reply (in step 6), but unlike other methods, the reply to this request can safely be cached by the reverse-proxy

Security considerations

By not adding a hash or copy of the reply in the Accept activity, malicious actors could exploit this in a split horizon setting, sending different versions of the same activity to different actors. This is, however, already a concern in pretty much all contexts in ActivityPub, and enshrining that information in the Accept activity would have many drawbacks:

  • significantly more complex implementation
  • inability to change the JSON-LD representation after the fact
  • possibly leaking private information if the Accept activity isn't properly secured

@tsmethurst

@ClearlyClaire's proposal looks really good to me. Is this something you'd start by implementing in glitchsoc?

@ClearlyClaire
Contributor

@ClearlyClaire's proposal looks really good to me. Is this something you'd start by implementing in glitchsoc?

Not immediately so, I have other short-term priorities for Mastodon itself right now. I also don't want to rush an implementation before we get a chance to agree on vocabulary and so on with other projects.

@trwnh
Member

trwnh commented Jul 28, 2022

i've been thinking about the vocabulary for this proposal, and here's what I've got so far:

Step 1: the author signals that they don't want everyone to reply

we covered the existing properties and honestly while they could work, they each have semantic pitfalls in their naming:

  • commentsEnabled assumes that replies are "comments", and it assumes an on-off system
  • replyable also assumes an on-off system
  • commentPolicy is more technically powerful but the naming used for the values is confusing and unclear -- things like having space-separated values for multiple policies, but one of the policies has a space in it.

however, given the mechanism of Accept Note that would be used, we can use Accept Follow and its related manuallyApprovesFollowers property as guidance. perhaps approvesReplies is best? it would be an explicit flag to any implementation that their reply will require approval before being displayed at the origin, irrespective of any actual reply policy.

Step 2: client software interprets it to assess whether a user is allowed to reply

per approvesReplies, we now know that the origin will be approving their replies. the approval may be manually done by a human, but if it wishes, the origin can also signal ahead of time which replies may be accepted automatically. we can reuse commentPolicy if we wish to have compatibility with Zap and Streams, but it might be worth reworking it into a different property with simpler values. commentPolicy accepts the following values:

  • public = "matches anybody at all, may require moderation if the network isn't known", which i interpret as allowing out-of-band comments (see below discussion for authenticated)
  • authenticated = "matches the typical activitypub permissions", which i interpret as "any actor can reply". this is more technically accurate than calling it "public", but "public" already basically means "all actors", so unless you plan on allowing anonymous out-of-band comments, this distinction is pointless -- and if you did, you wouldn't need to signal it anyway since it would be out-of-band.
  • any connections = "matches followers regardless of approval". this won't work for mastodon because there is no special state for having a pending follow request. also, it is awkward to use such a state because activitypub requires approval for all follows, even if it is done automatically and with no restrictions. simply put, a follow is not valid until it is accepted, and thus this will not match anything in practice for most projects.
  • contacts = "matches approved followers". this is a bit too tied-down to its project's semantics imo, and it would be better to be explicit and signal "followers" specifically, because not every project has the concept of "contacts".
  • self = "matches the activity author only". i'm not sure if this useful to signal because at this point, you're dealing with local side effects only and this is no different than rejecting everyone else remotely.
  • site: foobar.com = "matches any actor or clone instance from foobar.com". this is useful but imagine you want to allow some dozens of domains. that would get really messy...
  • until=2001-01-01T00:00Z = "comments are closed after the date given". i am not sure that this should be shoehorned into a policy signal; it is more useful as its own property and perhaps using standard AS2 vocab such as endTime or a more fit-to-purpose extension dealing specifically with comment expiry.

here's a way we can simplify things greatly: just use Collections.

from pixelfed (and i think originally litepub), we had capabilities as a (not really used yet) property, which was a mapping of arrays for who is allowed to perform which action. in plain json, it looked/looks like this:

  "capabilities": {
    "announce": "https://www.w3.org/ns/activitystreams#Public",
    "like": "https://www.w3.org/ns/activitystreams#Public",
    "reply": null
  }

now, there are several issues with this. first of all, the mapping of all these capabilities into a set is an unnecessary level of nesting that only really makes sense if you view it as a single ACL with various facets. secondly, it uses a null value which would get stripped and is functionally the same as not including it. but there is one idea we can salvage from this, and that is to use an array of actors and/or collections to signal who is allowed to perform the action. this works similarly to to/cc addressing.

in terms of vocabulary, i am tentatively leaning toward an explicit canReply property rather than calling it something like replyPolicy. i am not sure if it should be nested in a set like capabilities or not. the value would be an array. for any collections in the array, it could be useful to have some mechanism to know whether an actor is in the collection without knowing the entirety of the collection, but we could also do some heuristics such as determining if the collection matches followers or following on the actor (even though that would be a bit worse than having explicit knowledge of inclusion within the collection).
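
as a rough sketch in this same tentative vocabulary (a python dict standing in for JSON-LD; none of these names are final, and whether canReply sits inside capabilities is deliberately left open above):

```python
# sketch of the tentative canReply idea: an array of actors and/or
# collections, addressed like to/cc. all property names here are
# proposals from this thread, not an adopted vocabulary.
note = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "attributedTo": "https://example.social/users/alice",
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "approvesReplies": True,  # replies need an Accept to be displayed at origin
    "canReply": [             # whose replies will likely be approved
        "https://example.social/users/alice/followers",
        "https://example.social/users/bob",
    ],
}
```

an empty canReply array would then signal "no one can reply / comments are currently closed", while omitting the property leaves the policy unstated.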

Step 5: a property that points to the Accept activity from step 4.

idk maybe replyApproval? this one i have no real leanings on

Step 0: Deciding which author gets to approve replies

this is a point of contention for implementations which have a concept of "comments" rather than "replies". in those systems, you generally have authority belonging not to the immediately-replied-to post, but rather to the first-class "post" object, underneath which are second-class "comment" objects. in other words, the comments exist in "context" of some other post. the applications of this are numerous:

  • posts sent to custom audiences, commonly known as "circles"/"aspects", are typically visible to an audience determined by the authoritative context and not by individual replies in the chain.
  • in other implementations the concept of a "comments section" can be modeled as a discussion space or context under the ownership of some author. this can be for some article on a blogging platform like wordpress or plume, for a page on a link aggregator such as reddit or lemmy, and so on.
  • in forum-like settings, there is the concept of a "thread" (sometimes also called a topic). all posts in the thread exist within the context of the thread, which might not have explicit root post semantics, but rather serves as an ordered collection of posts, with replies being purely metadata.

ideally i think this would be expressed by context. the issue is that this property has been used by many different implementations for slightly different things, such as keeping track of conversations, or serving a collection of activities representing interactions on a common object. for such use cases i would propose using an extension property like conversation, because it is much more useful to use context for determining authoritative context. also it just makes sense to use context for (authoritative) context and conversation for the more conversational sense of "context". or maybe there's some other word that expresses the same semantics as the first definition, but if there is, i'm having trouble coming up with one. i do recognize a point of contention here, though, because projects like Zap et al are not particularly "wrong" in serving up a Collection of activities via context.

essentially, this step's vocabulary is left open-ended because there are two or three different semantic meanings here that aren't always clearly separated:

  • existing in "context" of another object (authoritativity)
  • grouping together activities related to a common originating "context" (commonality)
  • grouping together objects within a conversational "context" (topicality)

summary / practical flow

vocabulary (not final)

  • approvesReplies (like manuallyApprovesFollowers) = if true, signals that replies require approval to be displayed at the origin
  • canReply (may or may not be nested within capabilities) = optional. an array of Actor or Collection similar to to/cc whose replies will likely be approved. may be an empty array, indicating no one can reply / comments are currently closed?
  • replyApproval = points to an Accept Note activity. should be validated based on some authoritative context.
  • context (maybe parentContext to be more explicit? idk if parent works clearly here) = the authoritative context in which your object exists. if set, should/must be copied over when replying (unsure which)

sample usage

  1. check for context and copy it over
  2. check for context.approvesReplies and if true, go to step 3
  3. check for context.canReply and show to your user a hint on whether they can reply (e.g. by disabling the reply button if they are not included)
  4. put the IRI of the Accept Note in replyApproval
  5. validate the replyApproval by checking that replyApproval.actor = context.attributedTo, replyApproval.type = Accept, replyApproval.object = the reply id

if no context was found, then maybe defer to checking if the object you are replying to has approvesReplies etc. validate a reply without context by checking that replyApproval.actor = inReplyTo.attributedTo, replyApproval.type = Accept, replyApproval.object = the reply id
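
a sketch of the validation flow above (the fetch helper and all property names are the tentative vocabulary from this comment, not adopted spec):

```python
# sketch of validating a reply per the sample usage above: the
# authoritative party is the context's author when a context exists,
# otherwise the author of the post being replied to. fetch stands in
# for dereferencing an IRI.
def validate_reply(reply: dict, fetch) -> bool:
    approval_uri = reply.get("replyApproval")
    if not approval_uri:
        return False
    approval = fetch(approval_uri)
    ctx = reply.get("context")
    authority = fetch(ctx) if ctx else fetch(reply["inReplyTo"])
    return (
        approval.get("type") == "Accept"
        and approval.get("actor") == authority.get("attributedTo")
        and approval.get("object") == reply["id"]
    )
```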

@trwnh
Member

trwnh commented Jul 30, 2022

an open question remains: how do we respect the reply policies of more than one party? for example, say we want to respect both the top-level post and also the person we are immediately replying to. would we need two Accept activities? how do we handle a replyApproval that is an array? how do we handle the case where only one of the two parties has Accepted?

@SapphireDrew
Copy link

SapphireDrew commented Dec 9, 2022

Replies to a tweet can now be restricted to:

  • Replies only from accounts @-mentioned in the tweet
  • Replies only from accounts followed by the sender of the tweet and those @-mentioned in the tweet
  • No restriction

Something similar to this was proposed in #8565, from two years ago, but the Twitter implementation is more robust.

I'd like to see it more granular and robust than even this, with the following options instead:

  • Everyone (No restrictions)
  • Local (Only members of the same instance as the poster can reply)
  • Follows of Follows (Obviously the poster trusts the people they follow; this option expands that to the people those people follow. I specifically chose the term "Follow" instead of "Follower", since it's too easy for anyone to just follow and parachute in)
  • Followed (Only people the poster has followed may reply)
  • Mentions (Only the poster and the people @ mentioned may reply. If no one is mentioned, this effectively makes it so only the poster may reply - good for venting without fear of unsolicited advice / criticism, and multi-post long announcements)
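
For illustration, the five options above could be checked roughly as follows. The enum names, the `user@instance` account-string format, and the follow-graph dict are all assumptions made for this sketch (not an existing Mastodon API), and the thread's "mentioned accounts can always reply" rule is applied unconditionally.

```python
# Hypothetical sketch of the five proposed reply-policy options.
from enum import Enum

class ReplyPolicy(Enum):
    EVERYONE = "everyone"
    LOCAL = "local"
    FOLLOWS_OF_FOLLOWS = "follows_of_follows"
    FOLLOWED = "followed"
    MENTIONS = "mentions"

def may_reply(policy, replier, poster, mentions, follows):
    """`follows` maps an account to the set of accounts it follows."""
    if replier == poster:
        return True                      # the poster can always self-reply
    if replier in mentions:
        return True                      # mentioned accounts can always reply
    if policy is ReplyPolicy.EVERYONE:
        return True
    if policy is ReplyPolicy.LOCAL:
        # same instance = same part after the "@"
        return replier.split("@")[-1] == poster.split("@")[-1]
    if policy is ReplyPolicy.FOLLOWED:
        return replier in follows.get(poster, set())
    if policy is ReplyPolicy.FOLLOWS_OF_FOLLOWS:
        direct = follows.get(poster, set())
        return replier in direct or any(
            replier in follows.get(f, set()) for f in direct)
    return False                         # MENTIONS: mentions/self only
```

With MENTIONS and an empty mention list, only the poster passes — the "venting without fear of unsolicited advice" case described above.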

an open question remains: how do we respect the reply policies of more than one party? for example, say we want to respect both the top-level post and also the person we are immediately replying to. would we need two Accept activities? how do we handle a replyApproval that is an array? how do we handle the case where only one of the two parties has Accepted?

@trwnh Out of an abundance of caution, I'd like to say that replies to posts with any setting less permissive than "Everyone" should automatically be restricted to one on one conversations between just the top level poster and the person replying. Otherwise, they could easily @ others or add hashtags to invite a dogpile.

@yukimx2501
Copy link

I'd like to add some rationale behind my support for this feature.

There has been a lot of discussion in threads started by Black folks on the fediverse. Harassment and dogpiling are a serious issue, especially on big instances of over 50K members, where moderation is insufficient to handle that amount of abuse.

Picture a Black person complaining about harassment and racism, and saying "White people PLEASE DON'T REPLY", and being met with the following replies by... sigh... 🙄 white people.

  • "I know you said white people shouldn't reply to this, but..."
  • "Why are you trying to limit free speech? This is a free country ..."
  • "You posted public, you're practically inviting us to reply. If you don't like it..."
  • "Have you considered the block feature?"
  • "If you don't like it here, move to another server."
  • "Not a black person, but I'd really like to point out..."
  • "To be fair..."
  • "Before anything, you should also consider OUR point of view."
  • "I think you're a bit biased about this, let me explain..."
  • "I'm going to play the devil's advocate in here, and you may not like to hear this, but..."
  • "I'm so sorry to hear that, but I'd just like to point out that not all white people..."
  • "Actually, if we pay attention to the dictionary definition..."
  • "You're just a cry baby, grow up..."
  • "This is the type of posts that makes me want to legalize bullying again."

(And no, I'm not exaggerating; I wish this was an edge case, but it's depressingly common, especially on large instances)

If a user can limit who can reply to a post (or at least suppress the replies from their home server), the number of unwanted replies like the above will be much easier to handle.

So please, please, PLEASE add this feature soon! 🙏

@junosuarez
Copy link

I appreciate the thoughtful consideration above, both of the social and safety aspects of this feature, and the technical implementation / federated protocol considerations.

Moderating individual replies

In addition to a "mode" for replies (eg, "allow all" / "approve" / "none"), there is a use case for original authors to moderate replies to their posts.

Extending @emceeaich's "This is the equivalent to a blog post without comments", this would be like a blog author moderating comments on a post.

Example moderation actions may include "pin reply", "hide reply", "hide reply and block user from replying for 1 week" - there's room for implementations to experiment with different social and safety factors. The minimum viable moderation action, as a safety feature, is "hide reply".

Scenario 1 - Hide abusive reply

An author A receives an abusive reply from B that doesn't individually violate the rules from either A or B's instances. (There are many specific examples of such replies in the message immediately above). A does not want to promote the reply to their followers - so they hide the reply. A may choose to additionally report the post to their instance moderators for further action, but A has hidden it from their audience while the report is pending and regardless of the moderators' decision.

Scenario 2 - Pin helpful reply

An academic researcher posts asking a question for specialists in their field. They receive dozens of replies like "oh, I'm not sure!" and "that's a good question!" along with one reply that is well-researched and accurately answers the question. They want to be helpful for their followers, so they boost the answer reply. Additionally, for posterity for anyone who comes across their question thread in the future, the original poster marks the answer as a "pinned reply" so that clients know to feature it or give it special UI treatment.

(I'm happy to open a separate thread if the participants here think it's warranted - I'm new to the Mastodon dev community and still trying to get a feel for community norms for lumping vs splitting feature discussion)

@LupinePariah
Copy link

LupinePariah commented Dec 20, 2022

I'd like to expand on what I'd mentioned prior.

I understand the argument that limiting replies might make moderation and blocking redundant, but often those tools aren't enough. For anyone familiar with KiwiFarms, mobilisation has become a real issue, and they're getting really good at it.

Furthermore, manipulators are excellent chameleons. If they weren't, they wouldn't be very successful and they'd be easy to spot. Unless a moderator has some background in psychology, they aren't going to understand what a manipulator looks like. Manipulators are very good at being charismatic, friendly, and looking like they're trying to help while saying exactly what they know will hurt the person they're targeting the most. And they will do this for pleasure. As someone with experience of manipulators and abuse, I can tell you monsters do exist. They look exactly like "us."

An expert manipulator will be able to confuse a moderator with gaslighting, twisting truths, and faux plays on empathy using cognitive empathy to do so. They'll play on the concept of disorderly behaviour, acting as though they were trying to help and that the person they'd targeted was disturbing the peace. The "let me help you" that other ethnicities get from white trolls, as explained by yukimx2501, is commonplace and distressing, and it isn't the only kind.

Differently-abled people, the neurodiverse, Trans, Otherkin, plural folk, and many others will be targeted in this way. And due to the herd effect, when some see trolls acting well-meaning and charismatic, they might then pile on their victims too. For understanding of why this effect occurs, I would advise learning about the just-world fallacy, which is a real problem that abuse victims regularly face.

In an ideal world, there would be enough moderation with enough of a background in psychology to be able to respond to the trolls quickly. In a limited-reply scenario, all the trolls can do is report a user. At which point this allows the user reported to explain why they're likely being targeted to the moderation, without the moderation dealing with an overloading scenario of troll mobilisation where the wrong decision might be made.

As I mentioned in my own post, I've seen mobilisation used all too often to silence people. I'm glad that there's evidence out there now of this, where people are coming forward and talking about how much of a problem it is. And the only way to tackle it is to allow people to set their posts to limit replies. That way, a person who's often targeted can decide whether they want a particular toot to be replyable or not. In this way, they can feel ready to deal with what happens and flag that toot as no replies, or even delete it if necessary.

A lot of those who're targeted will have severe social anxieties and it'll be easy to force them off the platform. I've been... painfully aware of how Alt-Right mobilisations have led to the suicides of Trans and Otherkin youths. This is what you might be sensing in my tone, if you are. As I said, I've had experiences. This is why any modern social media platform needs this as a way to allow those who're vulnerable to just be, to control their own environment. And with Twitter circling the drain and Mastodon looking like the most viable alternative? It's become an imperative to consider this now more than ever.

I wish moderation could be the solution. In an ideal world, it would be. But thinking that it and blocking users could ever be viable ignores the reality of mobilisations of excellent manipulators that vulnerable people have already endured.

Edited to Add: I also want to point out that, yes, they will create low-hanging fruit accounts where they just make easy, friendly posts to use for trolling. Along with hacking existing, reliable accounts for that purpose. I've lurked on KiwiFarms enough to come to understand this. After you've been traumatised by the things I have... You want to understand. And I've come to see how awful these monsters really are, how the vulnerable need to be protected, and how limiting replies is the only viable solution.

@nikclayton
Copy link

Some feedback on #14762 (comment). I hope it's useful.


There's a missing step.

Step 2.5: The client submits the reply to their home server.


What happens if the server of the person who submits the reply ignores steps 3 and 4, and just posts the reply anyway?


From a client's perspective, are steps 2.5, 3, 4, and 5 synchronous, so the client can immediately tell the user if their reply has been accepted?

Or asynchronous, and the client is going to need to periodically poll the server to find out if the reply has been accepted?

If a reply has not been accepted, what's the expected behaviour?

  • The content is dropped completely?
  • The content is retained, and the reply author has the option of posting it as a status at the root of a new thread?
  • Something else?

This feels like it could be a DDoS vector.

A bad actor could submit replies to many different Mastodon servers. Each one of those servers is then going to try and contact the original poster's server (per step 3).

The bad actor has been able to take thousands of requests, fan them out one per server, and turn that in to a fan-in of thousands of requests to a single server.


At step 3, what happens if the original poster's server is unavailable? How long are other servers supposed to hold off on pending replies for, before they fail them as being uncheckable?

The "If a reply has not been accepted, what's the expected behaviour?" question is relevant here too.


Is there an expectation that the original poster can review pending replies, and decide on a reply-by-reply basis whether to allow it?

Or is the acceptance of a reply bounded by a set of limited rules ("only people I follow", "only people I follow, and people they follow") that can't be changed?

If the original poster can review pending replies what does the API surface for that look like?

@ClearlyClaire
Copy link
Contributor

What happens if the server of the person who submits the reply ignores steps 3 and 4, and just posts the reply anyway?

Then the server of the person who submits the reply considers it valid, and the person submitting the reply will see it as valid. But other participants will see that an approval is needed and that there is no approval, and treat the reply accordingly (e.g. drop it altogether, or detach it from what it is in reply to).

From a client's perspective, are steps 2.5, 3, 4, and 5 synchronous, so the client can immediately tell the user if their reply has been accepted?

Or asynchronous, and the client is going to need to periodically poll the server to find out if the reply has been accepted?

That is a good question. My idea was to have visible “pending” and “rejected” states, with rejected posts being automatically cleaned up after a while, but letting the person replying see that their post is rejected.

This feels like it could be a DDoS vector.

A bad actor could submit replies to many different Mastodon servers. Each one of those servers is then going to try and contact the original poster's server (per step 3).

This is not really different from fetching the reply in the first place! A bad actor can already submit replies to many different Mastodon servers, and each one of those servers is going to try and fetch the thing it is supposedly in reply to.

Is there an expectation that the original poster can review pending replies, and decide on a reply-by-reply basis whether to allow it?

The protocol proposal supports both, but the (at least shorter-term) goal was to only have a limited set of rules.

@amcgregor
Copy link

Additional directly related feature request: the ability to exclude from notifications or home (or other) timelines posts which can not be replied to by the active user. Some are here for discussion, not comment-free blog posting.

This, of course, is in no way commentary on the broad range of legitimate use-cases outlined in this issue so far. I'd also like to be able to specify on a per-post basis the demographic permitted to respond; mentioned, mutual follows, followers, &c. It'd be extremely useful.

@kkarhan
Copy link

kkarhan commented Jul 10, 2023

This may also be a duplicate of #8565 ?

@rkingett
Copy link

Is this possible with the current ActivityPub spec, and is there a way for the Mastodon API to tell clients this state so they can properly interact with toots?

@hollerith
Copy link

The title of this should be updated, should not assume knowledge of features may or may not be available on other apps or it will age like milk. Probably should be closed as duplicate #8565

@emceeaich
Copy link
Author

The title of this should be updated, should not assume knowledge of features may or may not be available on other apps or it will age like milk. Probably should be closed as duplicate #8565

Looking at #8565, it appears to be a subset of what I had requested.

@SapphireDrew
Copy link

SapphireDrew commented Nov 27, 2023

The title of this should be updated, should not assume knowledge of features may or may not be available on other apps or it will age like milk. Probably should be closed as duplicate #8565

Not a duplicate. #8565 just wants the ability to disable replies wholesale, period. This issue requests the ability to control who, if anyone, can reply on a granular level.

Also, what is this "assume knowledge" crap about? Nothing is assumed, we're referencing a known feature of an app Mastodon is clearly a clone of. Whether the Elongated Muskrat version of it (X) canned that or not contributes nothing to this discussion and just gets me annoyed that I got such a trivial email in my inbox.

@MadokVaur
Copy link

The (United States Constitution's) 2nd Amendment refers to "well regulated militias" and "firearms"

You're thinking of the 1st Amendment, which limits government controls over private speech

Nothing to do with ActivityPub devs, I'm sure

@ClearlyClaire
Copy link
Contributor

Honestly though, this seems like a difficult feature to cobble together given this should be done at the protocol level and the resistance to it upstream from ActivityPub devs cause "free speech"

I'm not sure where you've seen that. The reason it's not moving forwards is that it's a very complex feature with lots of moving parts.

@yukimx2501
Copy link

yukimx2501 commented Nov 28, 2023

But that doesn't mean it's impossible; just difficult to implement. Rejecting a feature simply because it's hard would just be lazy, IMO. It would be very disappointing if that were the reason for the request to be marked as WONTDO.

It doesn't need to be a fully fledged implementation on one go; it could be done in stages.

There are various stages of implementation here:

  1. Add metadata in the message specifying the reply permissions and filters (something like "who-can-reply: everyone/followers/mutuals/friends/nobody")
  2. Make the poster's instance reject replies when they arrive if they do not comply with the reply permissions.
  3. Make other instances respect the reply permissions in the post when attempting to submit a reply.
  4. Disable reply link in the front end if the current user does not match the reply permissions in the post they're reading.
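
Stage 2 above — the poster's instance rejecting non-compliant incoming replies — could be sketched as below. The "who-can-reply" field name and its values come from stage 1 of the comment; the function, its arguments, and the fail-closed handling of unknown values are hypothetical, not part of Mastodon.

```python
# Illustrative server-side gate for stages 1-2: check the post's
# "who-can-reply" metadata before accepting an incoming reply.

def accept_incoming_reply(original, reply_author, followers, mutuals):
    """Return True if `reply_author` may reply to the dict `original`.

    `followers` and `mutuals` are sets of account identifiers known to
    the poster's instance.
    """
    policy = original.get("who-can-reply", "everyone")
    poster = original["author"]
    if reply_author == poster:
        return True                       # the poster may always self-reply
    if policy == "everyone":
        return True
    if policy == "followers":
        return reply_author in followers
    if policy in ("mutuals", "friends"):
        return reply_author in mutuals
    return False                          # "nobody" or unknown values fail closed


post = {"author": "alice", "who-can-reply": "followers"}
print(accept_incoming_reply(post, "bob", {"bob"}, set()))  # True
print(accept_incoming_reply(post, "eve", {"bob"}, set()))  # False
```

Stages 3 and 4 would then reuse the same check on the sending instance and in the front end, so well-behaved servers never even submit a doomed reply.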

As for the commenters arguing about free speech, we should add the clarification that harassing a user is not what free speech is about; users can perfectly well quote-post (even by manually linking the post in question) on their own timeline if they consider that their speech takes precedence over the other user's wishes. Given individual instances' policies (e.g. quote-tooting not allowed, dogpiling not allowed), that might result in reports, banning, or defederating, which fits perfectly within the "rules of the game", so to speak.

The intention of this feature request is to transfer the effort and headaches of dealing with unsolicited replies from the poster to the would-be commenter, taking care of the low-hanging fruit. In other words, it's an anti-spam / anti-harassment feature, and one valid reason for requesting it is that some users are bad actors: ill-intentioned, and never intending to respect the poster's boundaries in the first place.

@SapphireDrew
Copy link

Honestly though, this seems like a difficult feature to cobble together given this should be done at the protocol level and the resistance to it upstream from ActivityPub devs cause "free speech"

I'm not sure where you've seen that. The reason it's not moving forwards is that it's a very complex feature with lots of moving parts.

I could've sworn I read something about that when looking into it a few months ago, but you're right, I don't see it. I must be confusing it with another project or something. Idk, sorry for the inconvenience.

@brendanjones
Copy link

Just here to add the Bluesky version of this as an example, which has a nice simple modal UI:

[Screenshot of Bluesky's reply-controls modal, 2024-02-15]


@BenAveling
Copy link

FYI. This, or a version of this, is on the Mastodon Roadmap.
"MAS-37 Restrict who can reply to a post"
Current status: Exploring.
See:
https://joinmastodon.org/roadmap

@BenAveling
Copy link

BenAveling commented Jun 19, 2024

Seems to me that there are two parts to this suggestion:

  1. Allow replies only from accounts @-mentioned in the tweet (No mentions = No Replies)
  2. Also allow Replies from accounts followed by the sender of the tweet

And I don't know, but it seems to me that the first part 'should' be relatively easy. The hard part is the 2nd part.

Unless there's something I'm missing, might it make sense to split this request?

FWIW, you could add: 1.1 Also allow replies from accounts on the same instance.

@BenAveling
Copy link

A different way to break it down might be:

  1. Add a text based Reply Warning, like a Content Warning, but displayed when someone starts writing a reply
  2. Add a Do Not Reply Guide
  3. Enforcement of the Do Not Reply Guide

Enforcement seems to be the hardest part.

A text based Reply Warning 'seems' simple enough. At least compared to the rest.

A Do Not Reply Guide could easily get very complicated, but at least those complications could be added one at a time.

Suggested options:

  1. mentioned people only <- should always be allowed
  2. followed <- could be tricky to implement?
  3. same site <- 'seems' simple enough
  4. contact server for permission <- client and server negotiate - could get complicated, but complications would be added one at a time, so that might be OK.

The reason I like "same site" is that it's a bit like a pub. You can hear our conversation from a different table, but if you want to join our conversation, you have to join our table.

@tsmethurst
Copy link

Relatedly, at GoToSocial we're trying out the concept of interaction policies, which are documented here: https://docs.gotosocial.org/en/latest/federation/posts/#interaction-policy and available in @context form here: https://gotosocial.org/ns

They're very similar in substance to the FEP here, but with some changes to allow more granular controls, including controls over other types of interaction like boosting and liking. We've heard from Pixelfed that they're looking at implementing the same thing, though I expect they're waiting for us to iron out any bugs we find, as we're still in the "work-in-progress" stage.

We're going to also try to implement support for the abovementioned FEP, so when Mastodon implements reply controls, we should be compatible :)

@ClearlyClaire
Copy link
Contributor

I don't think anyone has implemented my FEP, so if your proposal is better, it makes more sense to codify it as a FEP than to have support for two different specifications. I cannot comment on whether I agree with the changes you made from the FEP, though, as I have not been able to find the time to review them yet.

In any case, reply controls are still planned, they're just a lot of work, and we're busy wrapping up 4.3.0 at the moment.
