
Support for subscribing to communal block lists #116

Open
AdaRoseCannon opened this issue Nov 1, 2016 · 35 comments
Labels
suggestion Feature suggestion

Comments

@AdaRoseCannon

Although this is not a fun issue to think about, part of the problem with Twitter was that it was hard to deal with trolls and harassment at the community level. I am no expert in this field, but one solution I have seen suggested is communal block lists.

i.e. a user could subscribe to lists of users, managed by one or more people, which would invisibly filter the listed users out for everyone who subscribes to that list. Perhaps this could work as a filter or as an auto-block.

Thus not driving users away but allowing potentially vulnerable communities to protect themselves from harassers.

By building these protections in early, they can be much more effective than if they are tacked on later.
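The subscription model described above could be sketched roughly like this. This is a minimal illustration only; all class and field names are hypothetical and not Mastodon's actual internals:

```python
from dataclasses import dataclass, field


@dataclass
class BlockList:
    """A block list curated by one or more maintainers."""
    name: str
    maintainers: set[str]
    blocked: set[str] = field(default_factory=set)


@dataclass
class User:
    handle: str
    subscriptions: list[BlockList] = field(default_factory=list)

    def is_filtered(self, author: str) -> bool:
        # A post is hidden if its author appears on any subscribed list.
        return any(author in bl.blocked for bl in self.subscriptions)

    def filter_timeline(self, posts: list[tuple[str, str]]) -> list[tuple[str, str]]:
        # posts are (author, text) pairs; listed authors are invisibly dropped.
        return [p for p in posts if not self.is_filtered(p[0])]
```

The key point of the proposal is that the list is maintained separately from any one subscriber, so subscribing is a single opt-in action rather than importing another user's personal blocks.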

@Gargron
Member

Gargron commented Nov 1, 2016

I agree with this, see also: https://github.com/Gargron/mastodon/issues/62

@hach-que

My thoughts are to implement lists generally, and then to allow people to use them for either following or blocking (so a list one person follows might be a list another person blocks)?

@JMendyk
Contributor

JMendyk commented Sep 10, 2018

@wxcafe Could this be closed, based on your #1092 (comment)?

@Gargron Gargron added suggestion Feature suggestion and removed enhancement labels Oct 20, 2018
@Cassolotl

Other people have opened issues asking for a feature a bit like BlockTogether, where User B subscribes to User A and automatically blocks everyone User A blocks. But if everyone is following each other, that can become a huge mess: with a big enough network and enough people subscribing to each other, you can end up blocking people without knowing why, or without even knowing that you've done it.

So I think this issue is the one I'd like to actually happen, because it doesn't try to mimic that. It just says "blocklists that are maintained by one or more people", and it doesn't say anything about automatically adding people to the blocklist based on whether they've been blocked by some other personal account.

Here's my recent comment on another issue:

I would like to be able to maintain block lists that I can subscribe to and that others can subscribe to, each with a particular name and purpose.

So instead of it saying "subscribe and automatically block the people that Cassolotl blocks", it might say "subscribe to this blocklist, maintained by Cassolotl. This blocklist only contains trans-exclusionary radical feminists." Or something.

Ideally there would be a private UI where I could type a little note to say why they're on the blocklist, like "said that trans women are men in dresses" or something, or links to particular posts of theirs that made them block-worthy, so that if they ask to be unblocked or if someone asks why they are blocked I can refer to it.

@Cassolotl

My thoughts are to implement lists generally, and then to allow people to use them for either following or blocking (so a list one person follows might be a list another person blocks)?

I don't know if that works. Lists on Mastodon require you to follow someone in order to add them, right? So it wouldn't be possible for me to maintain a blocklist for others to subscribe to, while also subscribing myself.

@Laurelai

Block lists are exploitable and create perverse incentives no matter how well you think you have them set up. I've seen ones that had all sorts of checks and balances devolve into a source of conflict, and eventually those checks and balances were removed and the list became someone's personal power tool. Very few blocklists on Twitter, if any, do only what they say they do.

They don't create any real safety. They just use the promise of safety to give a single person inordinate, unaccountable power.

@trwnh
Member

trwnh commented Mar 18, 2019

i see "communal block lists" as something that can be handled similar to relays. if someone hosts a single-user site, then that means they also have a single-user mod team. if they subscribe to a public relay in order to populate their federated TL, then they will instantly be overwhelmed in potential moderation load. so having a way to separate the moderation from the administration would make it easier to self-host or to choose generalist instances by availability rather than moderation policy.

in fact, i would propose that moderation could instead happen at the existing relay level, so that subscribing to a relay is in effect the same as subscribing to that relay's moderation! in other words, instead of forwarding just the public Create activities, mastodon should also forward Flag activities to the relay. if a relay is running at relay.joinmastodon.org then this relay can be moderated according to the code of conduct of the mastodon project (e.g. no nazis, etc.). reports should include the summaries and attached posts, and users should be able to audit the log of reports and accept or reject reports they agree with.

this might require some rework of relays, and it would probably have the effect of turning relays into meta-instances. of course, communities can simply continue operating as they already do, without relays, if they wish to have local-only moderation.
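For reference, a report shaped like the ActivityStreams `Flag` type (the vocabulary type Mastodon uses for reports) forwarded to a relay might look roughly like this. The relay address and accounts below are made-up placeholders, not real endpoints:

```python
import json

# A hypothetical report, shaped like the ActivityStreams "Flag" type.
# The domains, actors, and relay inbox here are illustrative only.
flag = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Flag",
    "actor": "https://example.social/users/reporter",
    "content": "Spam links in replies",           # the report summary
    "object": [                                    # the account plus attached posts
        "https://bad.example/users/spammer",
        "https://bad.example/users/spammer/statuses/1",
    ],
    "to": "https://relay.example/inbox",
}

print(json.dumps(flag, indent=2))
```

The proposal above amounts to forwarding activities of this shape to the relay inbox alongside the public `Create` activities that relays already receive.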

@Laurelai

If a single-user instance subscribes to a relay and gets their blocklist too, then why even have a single-user instance? Why not just join their instance? You are exporting everything to that instance anyway.

Many single-user instances exist so that the single user can make their own moderation decisions. So they have the choice: use the relay and sign over their mod choices, or don't, and lose out on an active feed.

This gives the larger instance a lot of power over smaller ones as well. "Do what we say or we will add your instance/admins/users etc. to the federated blocklist" — and this is the best-case scenario. The worst case is that people become even more hesitant to ban or block anyone on any instance until tensions build up to the point of severe conflict.

There is no technological way to prevent this from being an abusable tool of power.

@trwnh
Member

trwnh commented Mar 18, 2019

If a single-user instance subscribes to a relay and gets their blocklist too, then why even have a single-user instance? Why not just join their instance? You are exporting everything to that instance anyway.

the idea would be that multiple sites export their public posts to a relay, so there is no singular "their instance" to join.

Many single-user instances exist so that the single user can make their own moderation decisions. So they have the choice: use the relay and sign over their mod choices, or don't, and lose out on an active feed.

and they would continue to be able to make their own moderation decisions. i propose allowing users to accept or reject publicly-auditable forwarded reports. the relay is meant to solve the problem of discoverability, after all.

This gives the larger instance a lot of power over smaller ones as well. "Do what we say or we will add your instance/admins/users etc. to the federated blocklist" — and this is the best-case scenario. The worst case is that people become even more hesitant to ban or block anyone on any instance until tensions build up to the point of severe conflict.

the "federated blocklist" wouldn't be this massive tool, and in fact, i am personally against having a "blocklist" in its current conception. no one instance should have power over the relay. the relay exists as a communal structure, to separate the community layer from the site layer, so that single-user sites are not single-user communities.

There is no technological way to prevent this from being an abusable tool of power.

and thus the technology should only aid the social infrastructure. right now, reports can only be sent to your own admin and optionally federated to the originating instance. this increases moderation load on everyone, as now a bad actor must be reported on each individual instance before mods are made aware of their existence. if E is a bad actor, then users must report E on the instances of A, B, C, D, F, and G, because if only A reports E, then B/C/D/F/G are unaware that E exists until E starts causing problems.

what i am worried about is that doing nothing will cause others to take action in a more naive and un-auditable way. community efforts already exist, e.g. dzuk's blocklist, which helpfully include documentation and screenshots of why certain bad actors were added to the list. however, other community efforts exist that do not provide any logs whatsoever, e.g. consisting solely of someone posting a toot CW'd "recommended block" and then providing little-to-no context, causing blocks to propagate solely on the social capital of the person making the declaration. the latter is what i think should be pre-empted by a much better solution.

@Laurelai

what i am worried about is that doing nothing will cause others to take action in a more naive and un-auditable way. community efforts already exist, e.g. dzuk's blocklist, which helpfully include documentation and screenshots of why certain bad actors were added to the list. however, other community efforts exist that do not provide any logs whatsoever, e.g. consisting solely of someone posting a toot CW'd "recommended block" and then providing little-to-no context, causing blocks to propagate solely on the social capital of the person making the declaration. the latter is what i think should be pre-empted by a much better solution.

And any kind of organized, structural system of sharing blocks will be exploited by those very same people with social capital; it will benefit them the most. The fact that reports have to be sent individually is what limits them to just directly telling others to block people. dzuk's blocklist isn't maintained much anymore because the effort of maintaining it became too great, which is great because it shows the attempt doesn't scale, and you shouldn't worry too much about it.

And if it's only reports that get relayed, then you are making single- and low-user instance mods do the same amount of work as larger instances. In that case it will just discourage relaying altogether, or teach people to ignore the mod queue. Both of these are bad outcomes and will discourage the creation and use of smaller instances.

Sharing the blocks means power concentration. Sharing the reports means more work for everyone involved. Neither one is a good idea.

@Cassolotl

@Laurelai

They don't create any real safety. They just use the promise of safety to give a single person inordinate, unaccountable power.

If it was as transparent as possible, would that help? Maybe some things like:

  • Anyone can view the list and see reasons why someone is on the list.
  • When someone is added to the list there could be the option to approve a block rather than just automatically add. (This person has been added to the blocklist for [reason]. Would you like to block them?)
  • A built-in way to contact the maintainer and ask to be removed from the blocklist (or ask for someone else to be removed).
  • When unsubscribing from the blocklist, "do you want to delete all the blocks you got from this blocklist y/n?"
  • Each blocklist has a description laying out the exact criteria for why someone would be added to the list. Obviously it's going to be pretty subjective, but if I was considering subscribing and wanted to make sure I wasn't going to accidentally block people who are okay actually, that's what I'd look for. Like looking for a code of conduct when picking an instance.

Obviously people can post like "this blocklist is trash and here's why", and anyone could look at the names on the blocklist and decide to unsubscribe.

I don't know, are there any things you can think of that would put your mind at ease about this? Since we're starting from scratch, anything starting with "I would only be okay with this feature if..." is a good idea to mention, and it's probably hard to go overboard!
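The transparency options listed above could be sketched as an entry record that carries its own audit trail. This is a hypothetical data model; every field and function name here is illustrative, not an existing Mastodon feature:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class BlockEntry:
    """One account on a communal blocklist, with its public justification."""
    account: str
    reason: str                                         # why they are on the list
    evidence: list[str] = field(default_factory=list)   # links to block-worthy posts
    added_by: str = ""                                  # which maintainer added them
    added_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


def pending_prompt(entry: BlockEntry) -> str:
    # Subscribers confirm each block instead of importing it automatically,
    # matching the "approve rather than auto-add" option above.
    return (f"{entry.account} has been added to the blocklist for: "
            f"{entry.reason}. Would you like to block them? [y/n]")
```

Keeping the reason and evidence on the entry itself is what would let anyone audit the list and let the maintainer answer "why am I on this?" questions later.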

@Laurelai

There is nothing that would put my mind at ease, because nothing would make this a good idea. It's at its core a bad idea, and trying to turn it into a good one is trying to put lipstick on a pig. On BlockTogether lists you can see who is blocked; that's how we knew Randi Harper's blocklists were full of trans activists. It didn't help, because she had more social capital, and anyone who would complain was already blocked by those who used it.

Anyone can view the list and see reasons why someone is on the list.

The bigger the list gets over time, the more people won't check it, because it will be unreasonable to do so. Once this state is reached, it's easy to arbitrarily add people who don't belong on it.

Transparency won't help, because I used to help run one with said transparency and accountability. It took multiple people to actually add someone, and it devolved into one person slowly eliminating the others and gaining power while the rest lost interest. I even caught one person adding bad blocks; they had gotten away with it for 6 months, and I only caught them because I was hyper-attentive at the time. And if the person who runs it is using it for good, you wind up with dzuk's blocklist, which isn't really maintained anymore because it became too much work, and it doesn't really scale well.

A built-in way to contact the maintainer and ask to be removed from the blocklist (or ask for someone else to be removed).

If they added you to a blocklist, a way to contact them so they can tell you no again isn't going to help. Plus, someone has to reply to all of those appeals, and if the blocks were for good reasons, then that's exposing someone to the abusive messages of others for little gain. And there will be a lot of abusive messages sent to whoever that is.

Each blocklist has a description laying out the exact criteria for why someone would be added to the list. Obviously it's going to be pretty subjective, but if I was considering subscribing and wanted to make sure I wasn't going to accidentally block people who are okay actually, that's what I'd look for. Like looking for a code of conduct when picking an instance

It's super easy to lie on the internet and to fabricate screenshots and the like. You are just begging for the alt-right and bad actors to game you. And they will.

Like, this is fundamentally a bad idea. I keep saying there's no right way to do one. I keep saying I've seen this all happen before.

Here's another issue you haven't thought of: factional fighting among people who otherwise have similar ideologies. Do you all really want to be caught in the middle of that, with screaming people on both sides demanding you add their enemies to the blocklists and violently retaliating if you don't give them what they want?

Because that's happened too.

A core question developers need to start asking themselves is "Does this create power that people can fight over?" Because if it creates power, people will fight over it and act in Machiavellian ways to try to game it, and they will find a way to game it; people always do. The repercussions of this being fought over or gamed are very, very bad.

@Cassolotl

The bigger the list gets over time, the more people won't check it, because it will be unreasonable to do so. Once this state is reached, it's easy to arbitrarily add people who don't belong on it.

Maybe a maximum number of people on the list? 🤔

(I did read the other stuff you wrote, @Laurelai, I just don't have anything to say about them right now, so yeah, don't think I'm not listening or anything!)

@coreyreichle

coreyreichle commented Mar 18, 2019

I think perhaps some will not be satisfied by anything that allows third parties to curate a list, in any way, shape, or form, regardless of the possible benefits, because it may impact some edge cases.

A perfect example is email blacklists. They work pretty well, except for the edge cases where they prevent a legitimate server from sending mail.

@trwnh
Member

trwnh commented Mar 18, 2019

It's at its core a bad idea, and trying to turn it into a good one is trying to put lipstick on a pig.

it's a bad idea and someone will end up doing it in a really bad way, if it's not pre-empted by something that addresses the need of delegation of power.

On BlockTogether lists you can see who is blocked; that's how we knew Randi Harper's blocklists were full of trans activists. It didn't help, because she had more social capital, and anyone who would complain was already blocked by those who used it [...] Transparency won't help, because I used to help run one with said transparency and accountability. It took multiple people to actually add someone, and it devolved into one person slowly eliminating the others and gaining power while the rest lost interest. I even caught one person adding bad blocks; they had gotten away with it for 6 months

this is a really good argument for why blocklists should not be blindly propagated a la blocktogether. but that still means that there has to be enough done to prevent something similar from being built independently. e.g. by allowing auditing and establishing manual accept/reject rather than automatic imports. i fear not doing this will simply cause the worse solution to proliferate. even if nothing gets implemented, at least the discussion needs to happen.

The bigger the list gets over time, the more people won't check it, because it will be unreasonable to do so. Once this state is reached, it's easy to arbitrarily add people who don't belong on it [...] It's super easy to lie on the internet and to fabricate screenshots and the like. You are just begging for the alt-right and bad actors to game you. And they will.

so don't use screenshots or other circumstantial evidence. reports have a summary and can select multiple toots as attachments, and this info can be forwarded. basically allow single-user instances to receive forwarded stuff from a relay, which is a decent opt-in way to discover bad actors before they harass you. you argued above that this would cause people to stop checking the mod queue but i'd instead argue that the mod queue is basically zero if you're not subscribing to a relay. if you do subscribe to a relay, then you are opting in to being flooded by unmoderated content, which is placing disproportionate moderation on you. i don't know how much more i can emphasize that all of this stuff should be 100% opt-in and manual human-reviewed stuff, but having nothing is creating a vacuum that allows a worse thing to be built. the role of technology is not to make decisions for people, but to instead ease their burden.
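The opt-in, human-reviewed flow described here could be sketched as a review queue where nothing takes effect until an admin acts on it. Again, this is a hypothetical sketch under the assumptions in the comment above, not Mastodon code:

```python
from dataclasses import dataclass, field
from enum import Enum


class Decision(Enum):
    PENDING = "pending"
    ACCEPTED = "accepted"
    REJECTED = "rejected"


@dataclass
class ForwardedReport:
    """A report forwarded from a relay, awaiting local human review."""
    target: str                                         # the reported account
    summary: str                                        # the report text
    attachments: list[str] = field(default_factory=list)  # links to reported posts
    decision: Decision = Decision.PENDING


class ReviewQueue:
    """Forwarded reports sit here until a human accepts or rejects them."""

    def __init__(self) -> None:
        self.reports: list[ForwardedReport] = []
        self.blocked: set[str] = set()

    def receive(self, report: ForwardedReport) -> None:
        # Receiving a report takes no automatic action whatsoever.
        self.reports.append(report)

    def accept(self, report: ForwardedReport) -> None:
        report.decision = Decision.ACCEPTED
        self.blocked.add(report.target)

    def reject(self, report: ForwardedReport) -> None:
        report.decision = Decision.REJECTED
```

The design choice being argued for is that `receive` mutates nothing but the queue: the block only exists after an explicit `accept`, which keeps every decision local and auditable.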

@Laurelai

it's a bad idea and someone will end up doing it in a really bad way, if it's not pre-empted by something that addresses the need of delegation of power.

The proper response is to tell them why the idea is bad and why they shouldn't do it, not to give them the means to enact their bad idea. If they don't listen, well, that's on them; stand back at a safe distance and let them self-destruct.

this is a really good argument for why blocklists should not be blindly propagated a la blocktogether. but that still means that there has to be enough done to prevent something similar from being built independently. e.g. by allowing auditing and establishing manual accept/reject rather than automatic imports. i fear not doing this will simply cause the worse solution to proliferate. even if nothing gets implemented, at least the discussion needs to happen.

Then you are just propagating reports instead of blocks, which makes the moderation queue of the biggest instance in this system into the moderation queue of everyone who participates. A large workload is just copied to many places, making many people have to do all of it. Either they will, over time, just start ignoring reports, including the ones from their own instance, because the workload is too much, or they will start approving all the bans without looking, which is effectively the same as an automated block list.

but i'd instead argue that the mod queue is basically zero if you're not subscribing to a relay.

Yes, and going from zero to a lot is a shock. Let's say someone subscribed to .social's relay, which gets enough mod work that they have to actually pay another human being to handle it, and it visibly causes that person distress to do the job.

Now you will make sure many people see that garbage instead of a few. If I were a malicious actor who wanted to get a bunch of admins to close up shop, I'd just pick the biggest instance in the relay system, flood it with shocking text and images, and then start reporting my own posts using multiple accounts until the admins of all of those instances either ignored reports or were miserable. The instances that just ignore reports are ones I can make accounts on, knowing that the admins won't act against me quickly, and the misery is also an acceptable win condition for a hypothetical bad actor.

Or if I wanted to spread lies about someone, I'd use this report relay to do it the same way: by making posts with convincing but false information, then reporting my own posts. That lovely captive audience of mods would see it all, and some of them would believe it, enough to cause problems.

Never give people the tools to mass-message mods; they will abuse it in the most vile ways.

Also, I really don't get the logic of "someone will do this bad idea eventually, so we should do it first". That just means you are the one doing the bad idea. Not good.

@trwnh
Member

trwnh commented Mar 18, 2019

propagating reports instead of blocks, which makes the moderation queue of the biggest instance in this system into the moderation queue of everyone who participates

only assuming everyone subscribes to the same giant relay, while you manage your own small instance.

going from zero to a lot is a shock. Let's say someone subscribed to .social's relay, which gets enough mod work that they have to actually pay another human being to handle it, and it visibly causes that person distress to do the job.

this sounds like an argument not to join a relay. but instead, what you are saying is that people who subscribe to relays are basically 100% on their own, and they would indeed be "better off" simply joining the biggest instance that is part of the relay, which means that there is no incentive to self-host unless you already have an existing contact address book. i'm not comfortable with that conclusion. all this ensures is that moderation and service providers are tied together instead of decoupled.

tell them why the idea is bad and why they shouldn't do it. Not give them the means to enact their bad idea [...] I really don't get the logic of "someone will do this bad idea eventually, so we should do it first". That just means you are the one doing the bad idea. Not good.

i am not proposing giving anyone the means to enact the bad idea. i am proposing that we find a way to prevent the bad idea from ever being palatable or deemed socially necessary at any significant scale. doubtless that blocktogether was written to address a real need -- but one specifically rooted in the failure of twitter's governance. when analyzing mastodon's current setup of "instances" as the moderation center, e.g. site-wide or domain-wide rules, the issue i see is not necessarily one of governance but rather of locality. if a spammer starts making throwaway accounts that keep linking to antifeminist screeds about baby boomers, then either the originating instance has to take action, or every other instance in existence has to take their own action.

with that said: there are still some points i haven't really addressed:

  • i would be fine if potentially the only Flag activities that were relayed were the ones made after the subscription; that would prevent previous baggage from overloading you as soon as you subscribed to the relay
  • i am also not necessarily proposing that the current "instance" system should continue to exist as-is; it is feasible to instead consider relays as "instances" (i.e. communities) made up of selfhosted servers rather than just one server. essentially the relay would be the community layer, similar to a group, basically a bona fide federal structure rather than a loudspeaker that amplifies everything fed into it.
  • not only am i against automatic blocklists, but i am also not comfortable with instance admins holding all the power and making admin decisions based on their allegiances or what they hear from other instance admins. it shouldn't be acceptable to worry about pissing off the wrong instance admin in the exact same way that it shouldn't be acceptable to worry about pissing off the wrong blocktogether list owner. you can consider the instance-wide blocklist to be essentially the same as a shared blocklist that users can't opt out of.
  • there's even a larger point to be made that mute/block/silence/suspend is entirely too limited of a framework to fit all of moderation into; moderation should be more granular and accountable to something other than how a certain person is feeling on a certain day, with a very imprecise scope that is basically a binary scale between being ignored and being nuked.
  • i did say that governance wasn't really the main issue with the fediverse right now but it is still kind of relevant; one analogy that can be made is that the current instance-based community system is like establishing a bunch of nation-states that can establish trade relations rather arbitrarily based on their individual governance models (bdfl, co-op, etc). i would like to see a more accountable and transparent federal level so that membership in a nation-state is not necessary if you would like basic amenities like "having a moderator" or "being able to participate in public discussion". lower levels of governance could apply their own rules on top of that, but there should be a solid foundation/baseline on stuff where there's clear consensus, e.g. "no nazis/spammers"

@Laurelai

only assuming everyone subscribes to the same giant relay, while you manage your own small instance.

That's what's likely to happen. 80/20 rule. We have to operate with how people actually act, not how we want them to act.

this sounds like an argument not to join a relay. but instead, what you are saying is that people who subscribe to relays are basically 100% on their own, and they would indeed be "better off" simply joining the biggest instance that is part of the relay, which means that there is no incentive to self-host unless you already have an existing contact address book. i'm not comfortable with that conclusion. all this ensures is that moderation and service providers are tied together instead of decoupled.

There's a reason why most Mastodon users are on big instances, and you just nailed why. If you want to fix that, I'd be happy to support it, and in fact I encourage you to create systems that break up big instances. Good luck getting the people who run the big instances, one of whom happens to control the Mastodon codebase, to support that, though.

not only am i against automatic blocklists, but i am also not comfortable with instance admins holding all the power and making admin decisions based on their allegiances or what they hear from other instance admins. it shouldn't be acceptable to worry about pissing off the wrong instance admin in the exact same way that it shouldn't be acceptable to worry about pissing off the wrong blocktogether list owner. you can consider the instance-wide blocklist to be essentially the same as a shared blocklist that users can't opt out of.

What you are looking for is called democracy. In principle I'm all for it. But that's not what Mastodon is. Mastodon is a federation of fiefdoms. Again, if you want to change that, I'd support it; I wish you the best of luck convincing the people with the most power right now to give that up. I'm an anarchist; I don't like hierarchical structures, even ones that mask themselves as horizontal ones. Block sharing inevitably becomes this, because people tend to just go along with the loudest voice.

You have to build willful, specific structures to prevent this social-capital building and leverage. Mastodon doesn't even have the foundation yet to make this happen, and virtually no incentive to start.

There are fundamental core problems that the Mastodon system has, and it seems there is no political will to fix them. There are governance and scaling problems. This issue is just a symptom of these greater unsolved problems.

I mean, I do run a Mastodon instance; I have users on it. I care about its future. Fix the core issues, and problems like this become much easier to solve.

you can consider the instance-wide blocklist to be essentially the same as a shared blocklist that users can't opt out of.

Except it's just one instance instead of, say, twenty, and users can opt out of it by making a new account on another instance. This becomes harder with shared lists. While it's not ideal when one instance blocks another for petty reasons, it's better than twenty doing it.

This is a core governance and scaling issue, and this idea is at best a band-aid that will cause more problems than it solves.

@trwnh
Member

trwnh commented Mar 19, 2019

Mastodon is a federation of fiefdoms.

this really is the biggest issue but i see the fact that moderation is tied up with the domain as part of that issue. i.e. you can't "break up" the biggest instances because the moderation load is a big part of the sell with joining someone else's instance. thus people will gravitate to the instance that provides the best service, with moderation and service being a single package. you need to extricate moderation out of the service provider, and separate the community layer from the service layer. otherwise, you see economies of scale being applied at the service level and the community layer, because they are the same.

maybe that's a separate issue to this one, although it is orthogonal -- there needs to be infrastructure for the community-level moderation in order to prevent users from relying on individual-level moderation, and doubly so if you run the software at the individual level.

put another way, i am more in support of this issue #116 as a "communal" solution, as opposed to #10304 as the user-level solution. maybe not necessarily in the form of a "block list", and as i've said several times i would oppose any mechanism that was automated (disrespecting the value of human judgement), contextless (containing no moderation notes or evidence, and ideally auditable), or flattened propagation (as opposed to a web of trust that took distance into account). i've got plenty of trauma from being put on nearly a dozen blocklists and finding myself blocked by at least half of twitter myself, too.

users can opt out of it via making a new account on another instance

which is the same as saying that if you don't like the laws of your nation-state you can just uproot yourself and move somewhere else. not ideal. sure, this can be made easier with better migration support or minimized with location-independent profiles, but we're not there yet because no one wants to fend for themselves. and again, a big part of that is because instances provide both service and moderation. the fiefdoms/nation-states largely exist because there is no meta-federation to the fediverse. i'd like to see the community layer being done with relays as effectively large groups, and ideally managed by consensus -- it should still be possible to run your own sub-community layer with traditional instances that apply their own rules on top of the relay-level meta-rules, just like state-level laws can be applied on top of federal laws. but users should not be required or coerced into joining the nation-states if they want to instead participate in the federal level directly (and to still benefit from the federal regulation).

@Laurelai

which is the same as saying that if you don't like the laws of your nation-state you can just uproot yourself and move somewhere else. not ideal. sure, this can be made easier with better migration support or minimized with location-independent profiles

What we need is the mastodon equivalent of open borders.

@DarckCrystale

I had today a very unpleasant experience on the Fediverse.

I have a lot of friends on this social media and they are from different instances. A lot of admins know each other and work together to create nice places for their communities.

Today, a member of this community was verbally assaulted. I reported the user assaulting them and made a public post about it. I took the time to take screenshots on my phone, to edit them to anonymize the victim so they could not be recognized and harassed more than they already were, to post them, and to write an understandable message with convenient CWs. It took a lot of my time and energy.

Then I poked some admins I know on my post to warn them. It took again some time and energy. Those admins told me that they had already muted and blocked the user I was reporting.

Then, people started to harass me under my own post. I received a few notifications from them, which I quickly removed by reporting then silencing & blocking them. It also took a bit of my time and energy. You can have a look at the whole shitstorm here: https://freespeechextremist.com/notice/9lbzOI1jIQbFtXL9XM (also, they think I'm the one who interacted with the first harasser, but I'm not, so the whole whiny thing about them being victims of harassment is ridiculous).

And none of it would have happened if admins, moderators and users could federate their blocklists when they work together:

  • the very first person harassed would not have been,
  • I wouldn't have used my very limited time and energy to report the user,
  • admins wouldn't have had to check whether the user was already blocked,
  • I wouldn't have been harassed (please read the shitstorm to see what I'm talking about),
  • I wouldn't be here looking for existing issues about federating blocklists between instances.

@Gargron you and only you have the last word on every feature Mastodon implements. You need to decide quickly how you want to manage this. Right now, with all the users coming from the freespeechextremist platform, we, the minorities, are in danger. You shouldn't spend time implementing new features. Your job now is to stabilize your platform and your community. Do you want your community to be composed of awful people like the ones harassing us? Because if you do nothing, that's what will happen. You NEED to talk with your team about this BIG issue and you need to find sustainable solutions.

I will copy-paste this on every issue talking about blocking if I think it's relevant; feel free to delete it if you want to keep it in a single place.

@stale

stale bot commented Oct 26, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the status/wontfix This will not be worked on label Oct 26, 2019
@progval
Contributor

progval commented May 26, 2020

I don't think this is fixed

@stale stale bot removed the status/wontfix This will not be worked on label May 26, 2020
abcang added a commit to CrossGate-Pawoo/mastodon that referenced this issue Aug 25, 2020
Use Status.group instead of Status.distinct in HashQueryService
@eloquence

What's the current thinking on this? Is this, or a similar feature to share blocklists, still planned?

@trwnh
Member

trwnh commented Nov 23, 2022

i think the idea is to allow forwarding reports or otherwise announce blocks to trusted instance actors or relays -- the primary concerns are that any solution should preserve context (reason, attached statuses, etc), but not directly fetch the offending material and convert it into a status (so using the feature shouldn't force users to be exposed to objectionable material)
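The two constraints described above (preserve context, but don't force receivers to fetch or view the offending material) can be illustrated with a small sketch. The field names and functions below are purely hypothetical, not an actual Mastodon or ActivityPub schema:

```python
# Hypothetical sketch of a block announcement that preserves context
# (reason, evidence URIs) without embedding the offending content.
# Field names are illustrative only.

REQUIRED = ("target", "severity", "reason", "evidence_uris")

def make_announcement(target, severity, reason, evidence_uris):
    """Build a context-carrying block announcement. Evidence is kept
    as URIs only, so receivers are not forced to view the material."""
    return {
        "target": target,
        "severity": severity,          # e.g. "silence" or "suspend"
        "reason": reason,              # human-written moderation note
        "evidence_uris": list(evidence_uris),
    }

def is_actionable(announcement):
    """Receivers reject contextless announcements outright."""
    return all(announcement.get(k) for k in REQUIRED)

a = make_announcement(
    "spam.example", "suspend",
    "coordinated spam, see attached reports",
    ["https://spam.example/@bot/1"],
)
assert is_actionable(a)
assert not is_actionable({"target": "spam.example"})  # no reason/evidence
```

The design choice being sketched is that an announcement without a reason and evidence is simply not actionable, which bakes the "no contextless propagation" requirement into the protocol rather than leaving it to moderator discipline.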

@kkarhan

kkarhan commented Jul 10, 2023

Q: Can't Mastodon just automatically import blocklists from a feed URL?
Here's an example of such a text-based feed...

This may be useful, as instances could collaborate and transparently share their blocklists in an organized manner.

Also, #11510 is more or less a duplicate of this issue...
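The feed-import idea above can be sketched in a few lines. The feed format (one domain per line, `#` for comments) and the sample data are assumptions for illustration, not an established blocklist format:

```python
# Hypothetical sketch of the feed-import idea: parse a plain-text
# blocklist (one domain per line, '#' starts a comment) and report
# which entries are new relative to the instance's current blocks.

def parse_feed(text):
    """Parse a text blocklist: strip comments and blanks, lowercase."""
    domains = set()
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip().lower()
        if line:
            domains.add(line)
    return domains

def new_entries(feed_text, current_blocks):
    """Domains in the feed that are not yet blocked locally."""
    return parse_feed(feed_text) - set(current_blocks)

feed = """
# shared blocklist, updated daily
spam.example
Troll.example   # mixed case in the feed
already-blocked.example
"""
print(sorted(new_entries(feed, {"already-blocked.example"})))
# → ['spam.example', 'troll.example']
```

In practice an importer would fetch the feed over HTTPS on a schedule and, per the concerns elsewhere in this thread, probably queue the diff for admin review rather than apply it automatically.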

@kkarhan

kkarhan commented Jan 5, 2024

I think being able to just import a blocklist from a feed URL would provide the same functionality, as admins could publish their blocklists or share a link to them privately with collaborating instances.

See also:

@kkarhan

kkarhan commented Feb 23, 2024

@trwnh accidentally marked #28605 as a duplicate of #116...

please fix & reopen

@moodler

moodler commented Feb 28, 2024

This is the oldest issue on the topic, so it makes perfect sense for it to be the main issue and for the others (#28605, #29022, #29256, #11510 etc.) to be closed as duplicates, but then this issue should be where the activity is happening (and it seems quiet here).

@LWFlouisa

LWFlouisa commented Jul 30, 2024

I've had blocklists on Twitter used to target LGBT people like myself, to try to intimidate me out of the comics industry. While I've silenced the particular instance that does this, this is exactly the opposite of the direction I want the fediverse more generally to go.

At the very least, we would need a way to prevent block list abuse and its misuse as a tool for targeted harassment.

@BenAveling

Note FWIW that MAS-139 is showing as "Exploring" at https://joinmastodon.org/roadmap

@BenAveling

Question 1: is this issue talking about giving users the ability to 'follow' a blocklist? or instances? or both? If both, aren't those separate issues?

Question 2: there are plenty of standalone applications that will maintain a shared blocklist, at least for instances. Is there anything wrong with them? What problem is solved by having this functionality built in?

@BenAveling

We could go further.

Under threats, we have:

  • disposable attack accounts, created on otherwise legitimate instances
  • disposable attack instances
  • broadly legitimate accounts with really bad takes on some specific topic

Under targets, we have

  • 'highly attacked' people (typically high profile minorities)
  • highly attacked instances (typically instances with a high percentage of highly attacked people)
  • everyone else

And I could be wrong, but I don't think there's a single solution that addresses all of these combinations.

Highly targeted people and instances need a highly reactive response, probably even a proactive response, where only 'trusted' accounts/instances can interact. In practice, that possibly means allow-lists, rather than block-lists.

For the rest of us, I'm not so sure. Many of us don't have a problem, but that doesn't mean we won't have to make changes in order to be part of the solution.

Another complication: Badly designed block-lists can cause problems, in part because there is overlap between 'highly attacked accounts' and 'broadly legitimate accounts with really bad takes on some specific topic'.

@BenAveling

One option might be to provide built-in support for integrating with sites like:

Someone smarter than I am would have to decide whether there should be a default list of blocklists to choose from, and if so, what should be on it, let alone whether any of those should be active by default.

That said, an argument for having some dynamic blocklist on by default might be that providing a mechanism but not activating it out of the box is asking for trouble - bad actors would be only too keen to be 'helpful' to new admins.

@Masked-Witch

I haven't read the entire thread, so I'm not sure if my suggestions are duplicates of what others have posted here, but I have some suggestions here
