
Global and Local timelines open to sensitive images by bad actors #9487

Open
bepvte opened this issue Dec 10, 2018 · 5 comments
Labels
"partially a bug" (Architecture or design-imposed shortcomings)

Comments

@bepvte commented Dec 10, 2018

Pitch

I think that some kind of review feature requiring a new account's first photo attachment to be approved by a moderator would help reduce the danger of users being exposed to untagged gore. An alternative could be running Yahoo's sensitive-image filter on the first non-CW'd post by an account.
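As a very rough sketch of the gate I have in mind (every name and the threshold below are made up for illustration; this is not Mastodon's actual data model):

```python
# Illustrative sketch only -- Account, Status, and the threshold are
# invented names, not Mastodon's real models.
from dataclasses import dataclass

NSFW_THRESHOLD = 0.8  # assumed classifier cutoff; would be tuned per instance

@dataclass
class Account:
    has_approved_media: bool = False  # set once a mod approves a first upload

@dataclass
class Status:
    account: Account
    has_media: bool = False
    content_warning: str | None = None

def needs_review(status: Status, nsfw_score: float) -> bool:
    """Hold a post for moderator review if it is the account's first
    media upload, or if an un-CW'd image scores high on an NSFW filter."""
    if status.has_media and not status.account.has_approved_media:
        return True
    if status.content_warning is None and nsfw_score >= NSFW_THRESHOLD:
        return True
    return False
```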

Motivation

Mastodon is growing in popularity, and as it does, so will the number of bad actors who target it and seek to disturb those browsing it by "raiding" it with sensitive content.

This is my first issue, so feel free to tell me if I've made a mistake.

@MirceaKitsune

I'm against this. I think moderators have enough tools at the moment: adding such a restriction would only make life harder for users, which is a problem Mastodon is trying to solve, not add to.

As for image filters: accurately understanding the meaning of 2D images will never be possible with binary code. Those filters are very costly, and expecting them to actually work correctly is a fantasy not rooted in the realities of how today's computers work. They would also add an extra dependency on external services... we want Mastodon to be decentralized, not embedded with tools linking into the databases of other tech firms. Those services might also be able to spy on every image posted by a user and know exactly who is uploading what (along with their IP address).

@MirceaKitsune

Another issue with this idea: the verification you're suggesting would do little to stop trolls anyway. Any bad actor could pretend to be a normal person by first posting a few perfectly normal images in order to get approved. Then, once the restriction is lifted, they could proceed to post anything.

If you want a safeguard that works, allow banning by IP address to prevent known bad actors from making new accounts on the instance. I imagine Mastodon already has this feature.

@k80w commented Dec 10, 2018

I'm in favor of a light verification process. I don't think unverified images should be completely blocked, just hidden from the local and global timelines, and perhaps given a specialized content warning noting that the user hasn't been verified yet.

@MirceaKitsune While a user could easily pose as a normal person until they get verified, this would absolutely weed out some of the less-dedicated trolls. Anything that could even somewhat minimize the effects of a raid seems worth considering.

This feature could even be useful on instances that leave it off most of the time: a mod could enable it only in the event of a raid, to prevent new users from contributing further to it.
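A minimal sketch of the content-warning variant, assuming an invented Post/Account model rather than Mastodon's real one:

```python
# Hypothetical model; every field name here is an assumption for illustration.
from dataclasses import dataclass

@dataclass
class Account:
    verified: bool = False  # flipped once a mod has reviewed the account

@dataclass
class Post:
    account: Account
    has_media: bool = False
    content_warning: str | None = None

def prepare_for_public_timeline(post: Post) -> Post:
    """Instead of blocking unverified media outright, surface it behind
    a specialized content warning on the local/global timelines."""
    if post.has_media and not post.account.verified:
        if post.content_warning is None:
            post.content_warning = "Media from an unverified new user"
    return post
```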

@bepvte (Author) commented Dec 10, 2018

I think you are right about the moderation review feature being a bad idea, and it sounds like running an NSFW filter on every image would be difficult. I think either dnaf's idea, maybe with the text "This is a new user", or a single-day grace period before uploading would be useful.
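The grace-period check itself would be tiny; a hypothetical sketch, where the period length and the account field are assumptions:

```python
# Hypothetical sketch; GRACE_PERIOD and the created_at argument are
# assumptions, not Mastodon's real configuration or schema.
from datetime import datetime, timedelta, timezone

GRACE_PERIOD = timedelta(days=1)  # assumed single-day grace period

def may_upload_media(created_at: datetime) -> bool:
    """Allow media uploads only once the account (with a timezone-aware
    creation timestamp) is older than the grace period."""
    return datetime.now(timezone.utc) - created_at >= GRACE_PERIOD
```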

@MirceaKitsune commented Dec 10, 2018

What I do support: an optional feature instances can enable, which requires moderators to review each account before its posts can appear in any public timeline. This shouldn't apply exclusively to images but to all posts: mods would simply have a section listing new accounts, click on each, browse a bit through their profiles and posts, then approve or ban based on what they see. Unfortunately, this would become a huge hurdle for large instances with hundreds of registrations per day.
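Roughly what I picture, purely as an illustration (the queue and all of its names are invented here, not an existing Mastodon interface):

```python
# Invented sketch of the opt-in review queue; nothing here is real
# Mastodon code.
from dataclasses import dataclass, field

@dataclass
class Account:
    username: str
    approved: bool = False  # posts reach public timelines only when True
    banned: bool = False

@dataclass
class ReviewQueue:
    pending: list[Account] = field(default_factory=list)

    def enqueue(self, account: Account) -> None:
        """New registrations land here when the instance opts in."""
        self.pending.append(account)

    def approve(self, account: Account) -> None:
        account.approved = True
        self.pending.remove(account)

    def ban(self, account: Account) -> None:
        account.banned = True
        self.pending.remove(account)
```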

What I'm fully against: any kind of automated filter. Those are expensive and technologically unreliable, and they set a bad precedent for an already dangerous and controversial form of censorship.

@Gargron added the "partially a bug" (Architecture or design-imposed shortcomings) label on May 1, 2019