Global and Local timelines open to sensitive images by bad actors #9487
Comments
I'm against this. I think moderators have enough tools at the moment: adding such a restriction will only make life harder for users, which is a problem Mastodon is trying to solve, not add to. As for image filters: understanding meaning in 2D images will never be accurately possible using binary code. Those filters are very costly, and expecting them to actually work right is a fantasy not rooted in the realities of how today's computers work. They would also add an extra dependency on external services: we want Mastodon to be decentralized, not embedded with tools linking to the databases of other tech firms. Those services might also be able to spy on every image posted by a user and know exactly who is uploading what (with their IP address).
Another issue with this idea: the verification you're suggesting would do little to stop trolls anyway. Any bad actor could pretend to be a normal person by first posting a few perfectly normal images in order to get approved; then, once the restriction is lifted, they could proceed to post anything. If you want a safeguard that works, allow banning by IP address to prevent known bad actors from making new accounts on the instance. I imagine Mastodon already has this feature.
I'm in favor of a light verification process. I don't think unverified images should be completely blocked, simply hidden from the local and global timelines, and perhaps given a specialized content warning noting that the user hasn't been verified. @MirceaKitsune While a user could easily impersonate normal people until they get verified, this would absolutely weed out some of the less dedicated trolls. Anything that could even somewhat minimize the effects of a raid seems worth considering. This feature could even be useful on instances that leave it off most of the time; a mod could enable it only in the event of a raid to prevent new users from contributing to it further.
I think you are right about the moderation review feature being a bad idea, and it sounds like running an NSFW filter on every image would be difficult. I think either dnaf's idea, perhaps with the text "This is a new user", or a single-day grace period before uploading would be useful.
What I do support: an optional feature instances can use, which requires moderators to review each account before its posts can appear in any public timeline. This shouldn't apply exclusively to images but to all posts: mods would simply have a section of new accounts, click on each, browse through their profiles and posts, then approve or ban based on what they see. Unfortunately, this will become a huge hurdle for large instances with hundreds of registrations per day. What I'm fully against: any kind of automated filter. Those are expensive, technologically unreliable, and set a bad precedent for an already dangerous and controversial form of censorship.
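To make the idea concrete, here is a minimal sketch of how gating public-timeline visibility on moderator approval could look. All class and method names (`Account`, `PublicTimeline`, `approved`) are hypothetical illustrations, not Mastodon's actual API:

```ruby
# Hypothetical sketch: posts from unreviewed accounts are filtered out of
# the public timeline but are not deleted or blocked outright.
class Account
  attr_reader :username
  attr_accessor :approved

  def initialize(username)
    @username = username
    @approved = false # new accounts start unreviewed
  end
end

class PublicTimeline
  def initialize(statuses)
    @statuses = statuses # array of [account, text] pairs
  end

  # Only statuses from moderator-approved accounts appear publicly;
  # an unapproved account's posts would still reach its own followers.
  def visible
    @statuses.select { |account, _text| account.approved }
  end
end

alice = Account.new("alice")
troll = Account.new("troll")
alice.approved = true # a mod reviewed alice's profile and approved it

timeline = PublicTimeline.new([[alice, "hello"], [troll, "spam"]])
puts timeline.visible.map { |_account, text| text } # prints "hello"
```

The point of this shape is that approval is a per-account flag checked at display time, so flipping it never destroys content; the moderation burden noted above (hundreds of registrations per day) is the real cost.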
Pitch
I think that some kind of review feature that requires the first photo attachment by a new account to be confirmed OK by a mod would help reduce the danger of users being exposed to untagged gore. An alternative could be running Yahoo's sensitive image filter on the first non-CW'd post by an account.
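The alternative above could work roughly as follows. This is a sketch under stated assumptions: `nsfw_score` is a stand-in for a call to a classifier such as a self-hosted copy of Yahoo's open-sourced NSFW model (stubbed out here), and the threshold value is illustrative:

```ruby
# Hypothetical sketch: flag a new account's first image for review when
# an NSFW classifier scores it above a threshold.
NSFW_THRESHOLD = 0.8

def nsfw_score(_image_path)
  # Stand-in for a real classifier returning a probability in [0, 1];
  # hard-coded here because no model is bundled with this sketch.
  0.93
end

def needs_review?(account_post_count, image_path)
  # Only the first attachment by a brand-new account is checked.
  return false if account_post_count > 0

  nsfw_score(image_path) >= NSFW_THRESHOLD
end

puts needs_review?(0, "upload.png") # prints "true"
puts needs_review?(3, "upload.png") # prints "false"
```

Restricting the check to the first upload keeps the cost bounded, but as commenters note below, the classifier itself adds an external dependency and will produce false positives and negatives.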
Motivation
Mastodon is growing in popularity, and as it does, so will the number of people who target it and wish to disturb those browsing it by "raiding" it with sensitive content.
This is my first issue; feel free to tell me if I've made a mistake.