
CSAM #5

Open
mattwynne opened this issue Dec 7, 2022 · 1 comment

@mattwynne (Member)
No description provided.

@tillkruss tillkruss added the code New feature or request label Dec 8, 2022
@blaine commented Dec 8, 2022

I'm hopeful that we'll never see this. One way to prevent it is for each nelson.social user to be individually vetted; if they post CSAM, they're going to jail.

We may still need to deal with federated CSAM, but hopefully RBLs (real-time blocklists) can help with that, and, honestly, neural networks as well. Apple scanning every image is one thing, but I would be extremely upset with the world if we had to scan all incoming federated media with AI (or require that it be vetted by some so-far imaginary external moderation collective) in order to protect our users from CSAM attacks or spam. (A side effect is that if any of our users intentionally subscribe to a CSAM account, they're probably going to jail.)

(I am an abolitionist, so I use the term "jail" in a complicated way; that said, I have very little sympathy for anyone who would create or possess CSAM.)
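
For the federated side, the RBL idea could look something like the DNS-based check below. This is a minimal sketch, not anything Mastodon or nelson.social ships today: the blocklist zone `csam.rbl.example` is hypothetical, and `domain_is_listed` is an illustrative helper, not an existing API. DNSBLs conventionally answer with an A record for listed names and NXDOMAIN for unlisted ones, so the check is a single DNS lookup.

```python
# Sketch: check a federating instance's domain against a DNS-based
# blocklist (RBL) before fetching its media. The zone name below is
# hypothetical; no real public list is implied.
import socket

RBL_ZONE = "csam.rbl.example"  # hypothetical DNSBL zone


def domain_is_listed(domain: str) -> bool:
    """Return True if `domain` appears in the blocklist zone.

    DNSBLs answer with an A record for listed names and NXDOMAIN
    (a resolution failure) for unlisted ones.
    """
    query = f"{domain}.{RBL_ZONE}"
    try:
        socket.gethostbyname(query)
        return True  # any answer means the domain is listed
    except socket.gaierror:
        return False  # NXDOMAIN / no answer: not listed


if __name__ == "__main__":
    for origin in ("good.example", "bad.example"):
        if domain_is_listed(origin):
            print(f"rejecting media from {origin}")
        else:
            print(f"accepting media from {origin}")
```

Note that this only filters by originating instance, which is the point: it lets us reject media from known-bad servers without scanning the content of every incoming image.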
