NSFW account and instance declaration and NSFW mode #9468
Comments
Fully agree!
I like this idea. Can I expand it further? Twitter has a setting that allows users to flag their profile as sensitive; when it's enabled, visitors have to confirm they wish to see the content. Perhaps it would be useful to add this flag here as well, so that other instances can easily filter these accounts out, as per the OP's suggestion. Then we can have a setting for instance admins to default all new accounts to being marked as sensitive, if they so wish. I'm on the fence about flagging a whole instance as NSFW, because there are definitely some safe profiles among, say, humblr.social and my own. I feel like some flexibility is preferable, but I'm fine either way. I have some time off next week, so I can definitely help with coding once we have a clear decision on implementation.
@Humblr fair enough, I take that back. On second thought, flagging the whole instance as NSFW actually does make sense, though I still think profile-level flagging is important, particularly for instances that don't have an NSFW focus but do have NSFW profiles.
The federated timeline will often pick up freshly made pornographic instances, leading to friction with established communities, especially those where the use of Content Warnings is entrenched. In particular, this will help admins who operate family-friendly instances with users under the age of 18. There's a demonstrated use case for Mastodon in sharing pornographic content, and I don't feel that banning these instances entirely from the federated timeline is the only way to protect minors. Allowing users to declare in their profiles that they are over 18 would be a good way to solve this.
These two features already exist. Here's what it looks like to a user: And the admin can configure the default setting for this instance in this file: https://github.com/tootsuite/mastodon/blob/master/config/settings.yml |
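For reference, the user-level defaults in the linked `config/settings.yml` look roughly like the excerpt below. The key name `default_sensitive` is recalled from that file, but the exact contents vary between Mastodon versions, so treat this as approximate and check the linked file:

```yaml
# config/settings.yml (excerpt, approximate)
defaults: &defaults
  # When true, media uploaded by users on this instance is marked
  # sensitive (hidden behind a click-through) by default.
  default_sensitive: false
```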
I already suggest that be done on my instance, although I wasn't aware it could be set by default; thanks for the tip :) It still doesn't address @Humblr's suggestion regarding NSFW avatars, though, or a safe browsing mode.
I'm pretty sure this is tangential to the original topic but since it was brought up here, I may as well add a response in this thread: @nightpool linked to https://github.com/tootsuite/mastodon/blob/master/config/settings.yml, which states:
I have actually searched my Admin UI and couldn't find where to change these. Perhaps it's just my instance, or that isn't in the code yet? The documentation it refers to also has no mention of changing these fields: https://github.com/tootsuite/documentation/blob/master/Running-Mastodon/Administration-guide.md#administration-web-interface Nonetheless, this is great advice. I'll fiddle with this file to set |
@lsmag Some of those settings are instance settings and some are default user settings. The default user settings are not normally configurable through the UI, mostly because there are a lot of them.
It is not just photos and video though. It is avatars and profile headers that are causing the most issues. |
What would we show for a sensitive avatar in the federated timeline: an identicon, or a default image selected by the instance admin? I think we should also consider what to do for sensitive text posts. Ideally, they wouldn't show up on the federated timeline except for users who want to see them, in my opinion; at that point it would seem safe to show those users the sensitive avatar too.
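One way to sketch that avatar decision: if an avatar is flagged sensitive and the viewer hasn't opted in, substitute an instance-configured placeholder (an identicon service could be slotted in here instead). All class names, fields, and the placeholder path are hypothetical, not existing Mastodon fields:

```python
from dataclasses import dataclass

@dataclass
class Account:
    avatar_url: str
    sensitive_avatar: bool  # hypothetical per-account flag

def avatar_to_render(account: Account, viewer_wants_sensitive: bool,
                     placeholder_url: str = "/avatars/missing.png") -> str:
    """Pick which avatar URL a timeline should show.

    A sensitive avatar is replaced with an instance-configured
    placeholder unless the viewer has opted in to sensitive content.
    """
    if account.sensitive_avatar and not viewer_wants_sensitive:
        return placeholder_url
    return account.avatar_url
```

The same check could cover profile headers, which the thread notes cause similar problems.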
Just don't show any content tagged as #NSFW (post, account or instance) if the user is in SFW mode. |
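As a minimal sketch of that rule, assuming hypothetical NSFW flags at the post, account, and instance level (none of these fields exist in Mastodon as-is):

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    nsfw: bool = False           # post-level flag (e.g. marked sensitive)
    account_nsfw: bool = False   # hypothetical account-level flag
    instance_nsfw: bool = False  # hypothetical instance-level flag

def sfw_timeline(posts, sfw_mode: bool):
    """In SFW mode, drop anything flagged NSFW at any level,
    rather than rendering a wall of hidden-content placeholders."""
    if not sfw_mode:
        return list(posts)
    return [p for p in posts
            if not (p.nsfw or p.account_nsfw or p.instance_nsfw)]
```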
That sounds, effectively, like a self-suspend. I'm not necessarily against that if one instance is decidedly NSFW/sensitive and another is determinedly not, but it still seems a little harsh. At an instance-wide level, though, I guess it makes sense.
When I first started on Mastodon, on the general instance, I posted adult content with tags on it. After a while, an admin or mod contacted me, telling me that there are minors on that platform and that I should hide the tags I was using, as they were descriptive of the content I was posting. I later switched over to humblr.social and added my tags publicly again; on humblr.social that is not a problem.

The strange thing happens when someone from the general instance, or any other, reblogs my posts. By doing so, the tags become visible to everyone again, including minors and "normal" users. While I post my images as sensitive content, many people on humblr do not, and I am sure we will not get everyone to use that function. If someone from another instance reblogs that content, everyone can see it, no matter where they are.

That is the reason there should be a workaround for those cases: either a global flag, where an instance can be flagged as adult (NSFW) and its content will only be visible on that instance even if reblogged from somewhere else, or a switch on the profile which works globally. I have the feeling that this was not thought through. With all due respect!
@HumblrUser The responsibility for the content of a boosted post lies with the booster.
Flagging an instance as NSFW would greatly enhance usability. Currently, the only options for dealing with NSFW instances are to activate reject_media or to silence them. Neither really solves the problem of hiding sensitive content while still being able to expand it when desired: reject_media is far too harsh, and silencing doesn't affect the home timeline and hides non-media content as well. So, in conclusion, I think it should go both ways: instance admins should be able to flag their own instances and users as NSFW as a precaution, but also, and more importantly, they should be able to enforce those flags for foreign instances.
I fully agree with the idea. This is especially relevant for federated timelines; I get a lot of complaints about NSFW content on my instance's federated timeline. However, I'm not in favor of blocking instances solely on the premise of them being NSFW. I also fully agree that when the user is in SFW mode, NSFW content shouldn't be visible at all; nobody wants a timeline consisting of black squares. However, the NSFW mode switch shouldn't be buried: it should be visible and accessible. I think this should be a three-way switch, then:
And then let the user decide what experience do they really want. |
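A three-way switch like the one described could be modeled along these lines. The position names are illustrative guesses, not a setting Mastodon actually exposes:

```python
from enum import Enum

class NsfwMode(Enum):
    HIDE = "hide"  # NSFW content removed from timelines entirely
    BLUR = "blur"  # shown behind a click-through sensitive overlay
    SHOW = "show"  # shown expanded, with no overlay

def presentation(post_is_nsfw: bool, mode: NsfwMode) -> str:
    """Decide how a single post is rendered for a viewer's chosen mode."""
    if not post_is_nsfw:
        return "visible"
    if mode is NsfwMode.HIDE:
        return "removed"
    if mode is NsfwMode.BLUR:
        return "overlay"
    return "visible"
```

The HIDE position matches the "no black squares" point above: in SFW mode the post is dropped, not blurred.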
I think... the solution is adding |
Right. The difference is, though, that the user doesn't have to make a list of sensitive accounts themselves. |
Something worth bringing up: Gravatar allows you to set an avatar rating, and also to upload alternative avatars that will be shown to different audiences. It might be worth exploring letting users set a "safe" avatar and a "sensitive" avatar at minimum; there's no explicit need for G/PG/R/X ratings per se, and two levels ought to be enough for most use cases. It might also be interesting to look into whether it makes sense to adopt Gravatar (or at least remodel the avatar/banner spec after their prior work).
So the various problems, summarized, are:
Unfortunately, we must also look at how Twitter handles things: Twitter hides detected NSFW accounts from search results and hashtag timelines (they don't have other public timelines), and hides the entire profile behind a "sensitive content" spoiler when you open it. People call this shadowbanning and are not happy to be on the receiving end of it. I don't know how to resolve this situation. I thought that granular NSFW toggles per-post would satisfy everyone, but I was wrong, as people whose primary work output must be put behind a NSFW spoiler (like sex workers and adult artists) are not happy that their posts are disadvantaged like that, and many want to have avatars and headers reflecting their work. |
Is there any progress on this? |
+1. I would like to point out a method used by another decentralized service, ZeroNet. This way users not only get to choose whether or not to see NSFW content, but also who censors it for them. Long term, this could be built into a community-led database, and instances could then craft their own individual blocklists. Just a thought :)
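A ZeroNet-style, community-maintained filter could be consumed as a simple subscribed list. The one-domain-per-line format with `#` comments below is made up for illustration; it is not a format ZeroNet or Mastodon actually defines:

```python
def load_blocklist(lines):
    """Parse a subscribed blocklist: one domain per line,
    '#' starts a comment, blank lines ignored."""
    domains = set()
    for line in lines:
        line = line.split("#", 1)[0].strip()
        if line:
            domains.add(line.lower())
    return domains

def allowed(post_domain: str, blocklists) -> bool:
    """A post passes if its origin domain is on none of the
    blocklists the user has subscribed to."""
    return not any(post_domain.lower() in bl for bl in blocklists)
```

Letting users pick which lists to subscribe to is what makes this "choosing who censors it for them" rather than a single instance-wide policy.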
Strongly agree with this, although I'd prefer not to use "NSFW"-specific tagging, and I wouldn't want the setting to be limited to instance admins. I can think of at least two cases where a user might want to mark all media as sensitive/not auto-expanded on the client side:
@Gargron has there been any progress on this? I don't have any bandwidth for implementation but I'd be happy to help with design and scope. |
Thinking about this a little recently, @Gargron, I think there are a few ways to look at handling content that might not be desirable to show to different audiences:
Also, is it worth asking: should we enable age-based segregation? E.g., allow my content to be viewable by anyone over a certain age, or limit my content to those below a certain age (for instance, letting teenagers prevent interactions from adults, or at least flag such interactions for review; or letting adult creators say "yeah, no, this content isn't for anyone under 18/21"). From there, it would certainly be possible to let instances decide how they wish to mark an account as having a certain age (the user's own choice, a moderator setting a flag on the account, or a third-party service performing age verification), which would then influence both the content that account sees and the reach its content has.
Absolutely nothing having to do with segregation and censorship has any place on the Fediverse in my book, apart from the existing standard, voluntary tools, which I think do the job fine (when not themselves abused). The Victorian superstition that children shouldn't see certain things because they cause "spooky mind effects" has already damaged what was left of the internet by failing to disappear in the '90s when it should have. The last thing we need is defaulting this sort of thing beyond general-purpose tools, bringing Mastodon a step into the past and closer to becoming Twitter/Tumblr 2.0. Nowadays I've come to believe the content warning system is good enough as is: most instances will require you to use it for anything the majority may find objectionable. While better would technically be possible, in a world like today's, where any pretext for control is a slippery slope bound to run rampant, I'd say shelve such ideas forever if possible. At most we could add a feature to pick certain icons and categories for the CW field so it's clearer.
Hello, I found an article that relates to this issue and also generally describes the experience of minors on the Fediverse: https://blog.hellbeast.eu.org/hellbeast.eu.org/Published/Fedi%20fucking%20sucks%20(as%20a%20minor)
Author here, article is now at https://blog.hellbeast.eu.org/Fedi%20fucking%20sucks%20(as%20a%20minor) although I think Cloudflare might be caching the page for now. I've since stopped using Fedi because I've had too many incidents of Afterdark accounts interacting with me. |
As adult instances become more popular and their reach through the fediverse grows, content from those instances is making its way to people who do not want to see it, or who are at work with the federated timeline open.
I think it would be a good idea to implement an NSFW mode that allows safe browsing of the federated timeline for those not wishing to see such content.
I see that tagging toots with #nsfw already stops that content from showing, and we, along with others, have implemented Twitter's CW fix so that federated instances seeing our content do not show it expanded.
The problem is, this only fixes some of the content. It does not take into account the account avatar, for example, which on our instance is adult-themed 90% of the time.
My suggestion is to add a safe browsing mode which users can trigger whenever they wish, or which is on by default for new users.
Accounts could then add a #NSFW hashtag to their posts and account profile to stop their content from even coming up on the timelines of SFW users.
It might also be worth allowing instances to mark themselves as NSFW, and allowing other instances to block NSFW content from showing up on their federated timeline.
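Putting the proposal together, a rough sketch of a #NSFW-tag-based safe browsing mode might look like this. None of these names correspond to real Mastodon code; the tag detection and post shape are assumptions for illustration:

```python
import re

# Case-insensitive match for the #NSFW marker the proposal describes.
NSFW_TAG = re.compile(r"#nsfw\b", re.IGNORECASE)

def is_nsfw(text: str, bio: str = "", instance_nsfw: bool = False) -> bool:
    """A post counts as NSFW if it, its author's profile, or its
    whole instance carries the #NSFW marker/flag."""
    return (instance_nsfw
            or bool(NSFW_TAG.search(text))
            or bool(NSFW_TAG.search(bio)))

def federated_timeline(posts, safe_browsing: bool):
    """With safe browsing on, NSFW posts never reach the timeline."""
    if not safe_browsing:
        return posts
    return [p for p in posts
            if not is_nsfw(p["text"], p.get("bio", ""),
                           p.get("instance_nsfw", False))]
```

Keying off a hashtag keeps the mechanism opt-in for posters, which is the trade-off the earlier comments debate: it only works if NSFW accounts actually tag themselves.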
The reason I am suggesting this is that without it, we will likely be blocked by many instances, to the point where we (Humblr) might as well just be selective about which instances we federate with, as every 5 minutes we get a report of NSFW content.
It would also be nice for users to be able to show all content expanded, even when a CW is applied to a post.
Humblr
Humblr.social