I'm hopeful that we'll never see this. One way to prevent it is for each nelson.social user to be individually vetted; if they post CSAM, they're going to jail.
We may need to deal with federated CSAM, but hopefully RBLs can help with that, and, honestly, so can neural networks. Apple scanning every image is one thing, but I would be extremely upset with the world if we had to scan all incoming federated media with AI (or require that it be vetted by some so-far imaginary external moderation collective) in order to protect our users from CSAM attacks or spam. (A side effect: if any of our users intentionally subscribe to a CSAM account, they're probably going to jail.)
(I am an abolitionist, so I use the term "jail" in a complicated way, but I also have very little sympathy for anyone who would create or possess CSAM.)