Support for subscribing to communal block lists #116
I agree with this, see also: https://github.com/Gargron/mastodon/issues/62
My thoughts are to implement lists generally, and then to allow people to use them for either following or blocking (so a list one person follows might be a list another person blocks)?
@wxcafe Could this be closed, based on your #1092 (comment)?
Other people have started issues asking for a feature that's a bit like BlockTogether - where User B subscribes to User A and automatically blocks everyone User A blocks, but if everyone is following each other that can be a huge mess - with a big enough network and with enough people subscribing to each other, you can end up blocking people and not knowing why, or not even knowing that you've done it, etc. So I think this issue is the one I'd like to actually happen, because it doesn't try to mimic that. It just says "blocklists that are maintained by one or more people", and it doesn't say anything about automatically adding people to the blocklist based on whether they've been blocked by some other personal account. Here's my recent comment on another issue:
I don't know if that works. Lists on Mastodon require you to follow someone in order to add them, right? So it wouldn't be possible for me to maintain a blocklist for others to subscribe to, while also subscribing myself.
Block lists are exploitable and create perverse incentives no matter how well you think you have it set up. I've seen ones that had all sorts of checks and balances devolve into a source of conflict, and eventually those checks and balances were removed and it became someone's personal power tool. Very few if any blocklists on Twitter do only what they say they do. They don't create any real safety. They just use the promise of safety to give a single person inordinate, unaccountable power.
i see "communal block lists" as something that can be handled similarly to relays. if someone hosts a single-user site, then that means they also have a single-user mod team. if they subscribe to a public relay in order to populate their federated TL, then they will instantly be overwhelmed by the potential moderation load. so having a way to separate the moderation from the administration would make it easier to self-host, or to choose generalist instances by availability rather than moderation policy. in fact, i would propose that moderation could instead happen at the existing relay level, so that subscribing to a relay is in effect the same as subscribing to that relay's moderation! in other words, relays would forward moderation decisions instead of forwarding just the public posts. this might require some rework of relays, and it would probably have the effect of turning relays into meta-instances. of course, communities can simply continue operating as they already do, without relays, if they wish to have local-only moderation.
If a single-user instance subscribes to a relay and gets their blocklist too, then why even have a single-user instance? Why not just join their instance? You are exporting everything to that instance anyway. Many single-user instances exist so that said single user can make their own moderation decisions. So they have the choice to use the relay and sign over their mod choices, or don't and lose out on an active feed. This gives the larger instance a lot of power over smaller ones as well. "Do what we say or we will add your instance/admins/users etc. to the federated blocklist" will be the best-case scenario; the worst-case scenario is that people become even more hesitant to ban or block anyone on any instance until the tensions build up to the point of severe conflict. There is no technological way to prevent this from being an abusable tool of power.
the idea would be that multiple sites export their public posts to a relay, so there is no singular "their instance" to join.
and they would continue to be able to make their own moderation decisions. i propose allowing users to accept or reject publicly-auditable forwarded reports. the relay is meant to solve the problem of discoverability, after all.
the "federated blocklist" wouldn't be this massive tool, and in fact, i am personally against having a "blocklist" in its current conception. no one instance should have power over the relay. the relay exists as a communal structure, to separate the community layer from the site layer, so that single-user sites are not single-user communities.
and thus the technology should only aid the social infrastructure. right now, reports can only be sent to your own admin and optionally federated to the originating instance. this increases moderation load on everyone, as now a bad actor must be reported on each individual instance before mods are made aware of their existence. if E is a bad actor, then users must report E on the instances of A, B, C, D, F, and G, because if only A reports E, then B/C/D/F/G are unaware that E exists until E starts causing problems. what i am worried about is that doing nothing will cause others to take action in a more naive and un-auditable way. community efforts already exist, e.g. dzuk's blocklist, which helpfully includes documentation and screenshots of why certain bad actors were added to the list. however, other community efforts exist that do not provide any logs whatsoever, e.g. consisting solely of someone posting a toot CW'd "recommended block" and then providing little-to-no context, causing blocks to propagate solely on the social capital of the person making the declaration. the latter is what i think should be pre-empted by a much better solution.
And any kind of organized structural system of sharing blocks will be exploited by those very same people with social capital; it will benefit them the most. The fact that reports have to be sent individually is what limits them to just directly telling others to block people. dzuk's blocklist isn't maintained much anymore due to the effort of maintaining it becoming too great, which is great because it shows the attempt doesn't scale and you shouldn't worry too much about it. And if it's only reports that get relayed, then you are making single- and low-user instance mods do the same amount of work as larger instances. In that case it will just discourage relaying altogether, or encourage learning to ignore the mod queue. Both of these are bad outcomes and will discourage the creation and use of smaller instances. Sharing the blocks means power concentration. Sharing the reports means more work for everyone involved. Neither one is a good idea.
If it were as transparent as possible, would that help? Maybe some things like:
Obviously people can post like "this blocklist is trash and here's why", and anyone could look at the names on the blocklist and decide to unsubscribe. I don't know, are there any things you can think of that would put your mind at ease about this? Since we're starting from scratch, anything starting with "I would only be okay with this feature if..." is a good idea to mention, and it's probably hard to go overboard!
There is nothing that would put my mind at ease, because nothing would make this a good idea. It's at its core a bad idea, and trying to turn it into a good one is trying to put lipstick on a pig. On BlockTogether lists you can see who is blocked; that's how we knew Randi Harper's blocklist was full of trans activists. It didn't help, because she had more social capital and anyone who would complain was already blocked by those who used it.
The bigger the list gets over time, the more people won't check the list, because it will be unreasonable to do so. Once this state is reached it's easy to arbitrarily add people who don't belong on it. Transparency won't help, because I used to help run one with said transparency and accountability: it took multiple people to actually add someone, and it devolved into one person slowly eliminating the others and gaining power while the rest lost interest. I even caught one person adding bad blocks, and they had gotten away with it for 6 months; I only caught them because I was hyper-attentive at the time. And if the person who runs it is using it for good, you wind up with dzuk's blocklist, which isn't really maintained anymore because it became too much work, and it doesn't really scale well.
If they added you to a blocklist, a way to contact them so they can tell you no again isn't going to help. Plus, someone has to reply to all of those appeals, and if the blocks were for good reasons, then that's exposing someone to the abusive messages of others for little gain. And there will be a lot of abusive messages sent to whoever that is.
It's super easy to lie on the internet and fabricate screenshots and the like. You are just begging for the alt-right and bad actors to game you. And they will. Like, this is fundamentally a bad idea. I keep saying there's no right way to do one. I keep saying I've seen this all happen before. Here's another issue you haven't thought of: factional fighting among people who otherwise have similar ideologies. Do you all really want to be caught in the middle of that, with screaming people on both sides demanding you add their enemies to the blocklists and violently retaliating if you don't give them what they want? Because that's happened too. A core question developers need to start asking themselves is "Does this create power that people can fight over?" Because if it creates power, people will fight over it and act in Machiavellian ways to try to game it, and they will find a way to game it; people always do. The repercussions of this being fought over or gamed are very, very bad.
Maybe a maximum number of people on the list? 🤔 (I did read the other stuff you wrote, @Laurelai, I just don't have anything to say about it right now, so yeah, don't think I'm not listening or anything!)
I think perhaps some will not be satisfied by anything that allows third parties to curate a list, in any way, shape, or form, regardless of the possible benefits, because it may impact some edge cases. A perfect example is email blacklists. They work pretty well, except for the edge cases where they prevent a server from sending out mail.
it's a bad idea and someone will end up doing it in a really bad way, if it's not pre-empted by something that addresses the need for delegation of power.
this is a really good argument for why blocklists should not be blindly propagated a la blocktogether. but that still means that there has to be enough done to prevent something similar from being built independently. e.g. by allowing auditing and establishing manual accept/reject rather than automatic imports. i fear not doing this will simply cause the worse solution to proliferate. even if nothing gets implemented, at least the discussion needs to happen.
so don't use screenshots or other circumstantial evidence. reports have a summary and can select multiple toots as attachments, and this info can be forwarded. basically allow single-user instances to receive forwarded stuff from a relay, which is a decent opt-in way to discover bad actors before they harass you. you argued above that this would cause people to stop checking the mod queue but i'd instead argue that the mod queue is basically zero if you're not subscribing to a relay. if you do subscribe to a relay, then you are opting in to being flooded by unmoderated content, which is placing disproportionate moderation on you. i don't know how much more i can emphasize that all of this stuff should be 100% opt-in and manual human-reviewed stuff, but having nothing is creating a vacuum that allows a worse thing to be built. the role of technology is not to make decisions for people, but to instead ease their burden. |
The proper response is to tell them why the idea is bad and why they shouldn't do it, not give them the means to enact their bad idea. If they don't listen, well, that's on them; stand back at a safe distance and let them self-destruct.
Then you are just propagating reports instead of blocks, which makes the moderation queue of the biggest instance in this system the moderation queue of everyone who participates in it. A large workload is just copied to many places, making many people have to do it all. Either they will over time just start ignoring reports, including the reports on their own instance, because the workload is too much, or they will just start approving all the bans without looking, which is effectively the same as an automated block list.
Yes, and going from zero to a lot is a shock. Let's say someone subscribed to .social's relay, which gets enough mod work that they have to actually pay another human being to handle it, and it visibly causes them distress to do the job. Now you will make sure many people see that garbage instead of a few. If I were a malicious actor who wanted to get a bunch of admins to close up shop, I'd just pick the biggest instance in the relay system, flood it with shocking text and images, and then start reporting my own posts using multiple accounts until the admins of all of those instances either ignored reports or were miserable. The instances that just ignore reports are ones I can make accounts on, knowing that the admins won't act against me quickly, and the misery is also an acceptable win condition for a hypothetical bad actor. Or if I wanted to spread lies about someone, I'd use this report relay to do it the same way: by making posts with convincing but false information, then reporting my own posts. That lovely captive audience of mods would see it all, and some of them would believe it, enough to cause problems. Never give people the tools to mass-message mods; they will abuse it in the most vile ways. Also, I really don't get the logic of "someone will do this bad idea eventually so we should do it first". That just means you are the one doing the bad idea. Not good.
only assuming everyone subscribes to the same giant relay, while you manage your own small instance.
this sounds like an argument not to join a relay. but instead, what you are saying is that people who subscribe to relays are basically 100% on their own, and they would indeed be "better off" simply joining the biggest instance that is part of the relay, which means that there is no incentive to self-host unless you already have an existing contact address book. i'm not comfortable with that conclusion. all this ensures is that moderation and service providers are tied together instead of decoupled.
i am not proposing giving anyone the means to enact the bad idea. i am proposing that we find a way to prevent the bad idea from ever being palatable or deemed socially necessary at any significant scale. doubtless blocktogether was written to address a real need -- but one specifically rooted in the failure of twitter's governance. when analyzing mastodon's current setup of "instances" as the moderation center, e.g. site-wide or domain-wide rules, the issue i see is not necessarily one of governance but rather of locality. if a spammer starts making throwaway accounts that keep linking to antifeminist screeds about baby boomers, then either the originating instance has to take action, or every other instance in existence has to take their own action. with that said: there are still some points i haven't really addressed:
That's what's likely to happen. 80/20 rule. We have to operate with how people act, not how we want them to act.
There's a reason why most Mastodon users are on big instances, and you just nailed it as to why. If you want to fix that, I'd be happy to support it, and in fact I encourage you to create systems that break up big instances. Good luck getting the people who run the big instances, one of whom happens to control the Mastodon codebase, to support that though.
What you are looking for is called democracy. In principle I'm all for it. But that's not what Mastodon is. Mastodon is a federation of fiefdoms. Again, if you want to change that, I'd support it. I wish you the best of luck convincing the people with the most power right now to give that up. I'm an anarchist; I don't like hierarchical structures, even ones that mask themselves as horizontal ones. Block sharing inevitably becomes this, because people tend to just go along with the loudest voice. You have to make willful, specific structures to prevent this social-capital building and leverage. Mastodon doesn't even have the foundation yet to make this happen, and virtually no incentive to start. There are fundamental core problems that the Mastodon system has, and it seems there is no political will to fix them. There are governance and scaling problems. This issue is just a symptom of these greater unsolved problems. I mean, I do run a Mastodon instance, and I have users on it. I care about its future. Fix the core issues, and problems like this become much easier to solve.
Except it's just one instance instead of, say, twenty, and users can opt out of it by making a new account on another instance. This becomes harder with shared lists. While it's not ideal when one instance blocks another for petty reasons, it's better than twenty doing it. This is a core governance and scaling issue, and this idea is at best a band-aid that will cause more problems than it solves.
this really is the biggest issue but i see the fact that moderation is tied up with the domain as part of that issue. i.e. you can't "break up" the biggest instances because the moderation load is a big part of the sell with joining someone else's instance. thus people will gravitate to the instance that provides the best service, with moderation and service being a single package. you need to extricate moderation out of the service provider, and separate the community layer from the service layer. otherwise, you see economies of scale being applied at the service level and the community layer, because they are the same. maybe that's a separate issue to this one, although it is orthogonal -- there needs to be infrastructure for community-level moderation in order to prevent users from relying on individual-level moderation, and doubly so if you run the software at the individual level. put another way, i am more in support of this issue #116 as a "communal" solution, as opposed to #10304 as the user-level solution. maybe not necessarily in the form of a "block list", and as i've said several times i would oppose any mechanism that was automated (disrespecting the value of human judgement), contextless (containing no moderation notes or evidence, and ideally auditable), or flattened in propagation (as opposed to a web of trust that took distance into account). i've got plenty of trauma from being put on nearly a dozen blocklists and finding myself blocked by at least half of twitter, too.
which is the same as saying that if you don't like the laws of your nation-state you can just uproot yourself and move somewhere else. not ideal. sure, this can be made easier with better migration support or minimized with location-independent profiles, but we're not there yet because no one wants to fend for themselves. and again, a big part of that is because instances provide both service and moderation. the fiefdoms/nation-states largely exist because there is no meta-federation to the fediverse. i'd like to see the community layer being done with relays as effectively large groups, and ideally managed by consensus -- it should still be possible to run your own sub-community layer with traditional instances that apply their own rules on top of the relay-level meta-rules, just like state-level laws can be applied on top of federal laws. but users should not be required or coerced into joining the nation-states if they want to instead participate in the federal level directly (and to still benefit from the federal regulation). |
What we need is the mastodon equivalent of open borders.
I had a very unpleasant experience on the Fediverse today. I have a lot of friends on this social media and they are from different instances. A lot of admins know each other and work together to create nice places for their communities. Today, a member of this community was verbally assaulted. I reported the user assaulting them and made a public post about it. I took the time to take the screenshots on my phone, to edit them to anonymize them so the victim could not be recognized and harassed more than they already were, to post them, and to write an understandable message with convenient CWs. It took a lot of my time and energy.

Then I poked some admins I know on my post to warn them. It took more time and energy. Those admins told me that they had already muted and blocked the user I was reporting. Then, people started to harass me under my own post. I received a few notifications from them, which I quickly dealt with by reporting and then silencing & blocking them. It also took a bit of my time and energy. You can have a look at the whole shitstorm here: https://freespeechextremist.com/notice/9lbzOI1jIQbFtXL9XM (also, they think I'm the one who interacted with the first harasser, but I'm not, so the whole whiny thing about them being victims of harassment is ridiculous).

And none of it would have happened if admins, moderators and users could federate their blocklists when they work together:
@Gargron you and only you have the last word on every feature Mastodon implements. You need to make a decision quickly about how you want to manage it. Right now, with all the users coming from the freespeechextremist platform, we, the minorities, are in danger. You shouldn't spend time on implementing new features. Your job now is to stabilize your platform and your community. Do you want your community to be composed of awful people like the ones harassing us? Because if you do nothing, that's what will happen. You NEED to talk with your team about this BIG issue and you need to find sustainable solutions. I will copy-paste this on every issue talking about blocking if I think it's relevant; feel free to delete it if you want to keep it in a single place.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I don't think this is fixed |
What's the current thinking on this? Is this, or a similar feature to share blocklists, still planned? |
i think the idea is to allow forwarding reports or otherwise announcing blocks to trusted instance actors or relays -- the primary concerns are that any solution should preserve context (reason, attached statuses, etc.), but not directly fetch the offending material and convert it into a status (so using the feature shouldn't force users to be exposed to objectionable material)
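a rough sketch of the kind of payload described above: a report forwarded between instances with its context preserved. it's modeled loosely on the ActivityPub "Flag" activity that Mastodon uses to federate reports, but the exact fields a real implementation emits may differ, and all URLs here are made up for illustration.

```python
# Hedged sketch of a federated report, loosely following the
# ActivityPub "Flag" activity shape. Field choices are illustrative.

def build_report_activity(reporter_actor, offender, status_urls, comment):
    """Bundle the reason and the offending statuses into one activity,
    so receiving moderators see why the report was filed without the
    software auto-rendering the objectionable material."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Flag",
        "actor": reporter_actor,  # typically the instance actor, not the reporting user
        "content": comment,       # the human-written reason, preserved end to end
        "object": [offender] + list(status_urls),  # account URI plus evidence statuses
    }

activity = build_report_activity(
    "https://example.social/actor",
    "https://bad.example/users/spammer",
    ["https://bad.example/users/spammer/statuses/1"],
    "Spam: repeated unsolicited links",
)
```

the point of keeping the statuses as URIs rather than inlined content is exactly the concern above: moderators can choose to look, but nothing is fetched and rendered automatically.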
Q: Can't Mastodon just automatically import blocklists from a feed URL? This could be useful, as instances could collaborate and transparently share their blocklists in an organized manner. Also, #11510 duplicates more or less the same issue...
I think being able to just import a blocklist from a feed URL should provide the same functionality, as admins could just publish their blocklists or share a link to them privately with collaborating instances. See also:
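A minimal sketch of that feed-import idea. The CSV column names (`#domain`, `#severity`, `#public_comment`) are an assumption loosely based on Mastodon's domain-block export format, and given the objections earlier in the thread, this version only queues new entries for human review rather than applying them automatically:

```python
import csv
import io

def parse_blocklist_csv(text, already_blocked):
    """Return (domain, severity, comment) entries not yet blocked locally."""
    reader = csv.DictReader(io.StringIO(text))
    pending = []
    for row in reader:
        domain = (row.get("#domain") or "").strip().lower()
        if domain and domain not in already_blocked:
            pending.append((
                domain,
                row.get("#severity", "silence"),
                row.get("#public_comment", ""),
            ))
    return pending

# A published feed would be fetched over HTTP; inlined here for the sketch.
feed = """#domain,#severity,#public_comment
spam.example,suspend,Spam instance
edgy.example,silence,Frequent harassment reports
"""

# Domains already blocked locally are skipped; the rest await admin review.
pending = parse_blocklist_csv(feed, already_blocked={"spam.example"})
```

Keeping the final apply step manual preserves the per-instance moderation autonomy that several commenters above insist on.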
I've had blocklists on Twitter used to target LGBT people like myself, to try to intimidate me out of the comics industry. While I've silenced the particular instance that does this, this is exactly the opposite of the direction I want the fediverse more generally to go. At the very least, we would need a way to prevent block list abuse and its misuse as a tool for targeted harassment.
Note FWIW that MAS-139 is showing as "Exploring" at https://joinmastodon.org/roadmap
Question 1: is this issue talking about giving users the ability to 'follow' a blocklist? or instances? or both? If both, aren't those separate issues? Question 2: there are plenty of standalone applications that will maintain a shared blocklist, at least for instances. Is there anything wrong with them? What problem is solved by having this functionality built in? |
We could go further. Under threats, we have:
Under targets, we have:
And I could be wrong, but I don't think there's a single solution that addresses all of these combinations. Highly targeted people and instances need a highly reactive response, probably even a proactive response, where only 'trusted' accounts/instances can interact. In practice, that possibly means allow-lists, rather than block-lists. For the rest of us, I'm not so sure. For many of us, we don't have a problem, but that doesn't mean we might not have to make changes in order to be part of the solution. Another complication: Badly designed block-lists can cause problems, in part because there is overlap between 'highly attacked accounts' and 'broadly legitimate accounts with really bad takes on some specific topic'. |
One option might be to provide built-in support for integrating with sites like:
Someone smarter than I am would have to decide whether there should be a default list of blocklists to choose from, and if so, what should be on it, let alone whether any of those should be active by default. That said, an argument for having some dynamic blocklist on by default might be that providing a mechanism but not activating it out of the box is asking for trouble - bad actors would be only too keen to be "helpful" to new admins.
I haven't read the entire thread, so I'm not sure if my suggestions are duplicates of what others have posted here, but I have some suggestions here
Although it is not a fun issue to think about, part of the problem with Twitter was that it was hard to deal with trolls and harassment at a community level. I am no expert in this field, but one solution I have seen suggested is communal block lists.
i.e. a user could subscribe to lists of users which, managed by one or more people, would invisibly filter out the listed users for everyone who subscribes to that list. Perhaps this could be a filter or an auto-block.
Thus not driving users away, but allowing potentially vulnerable communities to protect themselves from harassers.
By having these protections in early, they can be much more effective than ones tacked on later.
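The subscription model proposed above could be sketched roughly as follows. All names are illustrative and not Mastodon's actual data model; it just shows the core operation of filtering a timeline against one or more communally maintained lists:

```python
# Illustrative sketch: a user subscribes to communally maintained lists
# of accounts, and statuses by listed accounts are invisibly filtered
# from their timeline.

def visible_statuses(timeline, subscribed_blocklists):
    """Filter out statuses whose author appears on any subscribed list."""
    blocked = set().union(*subscribed_blocklists) if subscribed_blocklists else set()
    return [s for s in timeline if s["author"] not in blocked]

timeline = [
    {"author": "alice@a.example", "text": "hello"},
    {"author": "troll@bad.example", "text": "spam"},
]
community_list = {"troll@bad.example"}  # maintained by one or more people

# Subscribers see the timeline with listed accounts removed.
filtered = visible_statuses(timeline, [community_list])
```

Whether this acts as a soft filter or a hard auto-block, as the issue asks, is a policy decision layered on top of the same lookup.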