
NSFW account and instance declaration and NSFW mode #9468

Open
Humblr opened this issue Dec 8, 2018 · 29 comments
Labels
suggestion Feature suggestion

Comments

@Humblr

Humblr commented Dec 8, 2018

As adult instances become more popular and their reach through the fediverse grows, content from those instances is making its way into the timelines of people who do not want to see it, or who are at work with the federated timeline open.

I think it would be a good idea to implement an NSFW mode which would allow safe browsing of the federated timeline for those not wishing to see such content.

I see that tagging toots with #nsfw already stops that content from showing. And we, along with others, have implemented Twitter's CW fix so that instances seeing our content do not show it unspoilered.

Problem is, this only fixes some of the content. It does not take into account the account avatar, for example, which on our instance is adult-themed 90% of the time.

My suggestion is to add a safe browsing mode which can be triggered by the user whenever they wish, or enabled by default for new users.
Accounts can then add a #NSFW hashtag to their posts and account profile to stop their content from ever coming up on the timelines of SFW users.

It might also be worth allowing instances to mark themselves as NSFW, and allowing them to block NSFW content from showing up on their federated timeline.

The reason I am suggesting this is that without it, we will likely be blocked by many instances, to the point where we might as well just be selective about which instances we (Humblr) federate with, as every 5 minutes we get a report of NSFW content.

It would also be nice to let users show all content, even if a CW is applied to the post.

Humblr
Humblr.social

@rx65m

rx65m commented Dec 8, 2018

Fully agree!
How can we help?

@lsmag

lsmag commented Dec 8, 2018

I like this idea. Can I expand it further?

Twitter has a setting that allows users to flag their profile as sensitive. When it's enabled, visitors have to confirm they wish to see the content. Perhaps it'll be useful to add this flag here as well, so that other instances can easily filter these out as per OP's suggestion.

Then we can have a setting for instance admins to default all new accounts to be marked as sensitive, if they so wish.

I'm on the fence about flagging a whole instance as NSFW because there are definitely some safe profiles among, say, humblr.social and my own. I feel like some flexibility is preferred, but I'm fine with either way.

I have some time off next week, so I can definitely help with coding, once we have a clear decision on implementation.

@Humblr
Author

Humblr commented Dec 8, 2018

@lsmag
You find a SFW profile on Humblr and I'll eat my hat.

@lsmag

lsmag commented Dec 8, 2018

@Humblr fair enough, I take that back. On second thought, flagging the whole instance as NSFW actually does make sense, though I still think profile-level flagging is important, particularly for instances that don't have an NSFW focus but do have NSFW profiles.

@Humblr
Author

Humblr commented Dec 8, 2018

@lsmag
Both options allow for greater control for the end user. I think profile tagging is the most important, but instance-wide tagging, while selective in terms of who uses it, is also needed.

@earfolds

earfolds commented Dec 8, 2018

The federated timeline will often pick up freshly-made pornographic instances, leading to friction with established communities, especially those where the use of Content Warnings is entrenched. In particular, this will help admins who operate family-friendly instances with users under the age of 18. There's been a demonstrated use case for Mastodon in sharing pornographic content, and I don't feel that banning these instances entirely from the federated timeline is the only way to protect minors.

Allowing users to declare they are over 18 in their profiles would be a good way to solve this.

@nightpool
Member

> Twitter has a setting that allows users to flag their profile as sensitive. When it's enabled, visitors have to confirm they wish to see the content. Perhaps it'll be useful to add this flag here as well? So that other instances' can easily filter these out as per OP's suggestion.
>
> Then we can have a setting for instance admins to default all new accounts to be marked as sensitive, if so they wish.

These two features already exist. Here's what it looks like to a user:
[screenshot of the sensitive media setting as it appears to a user]

And the admin can configure the default setting for this instance in this file: https://github.com/tootsuite/mastodon/blob/master/config/settings.yml

@lsmag

lsmag commented Dec 8, 2018

I already suggest that be done on my instance, although I wasn't aware it could be set by default; thanks for the tip :)

It still doesn't solve @Humblr's suggestion regarding NSFW avatars, though, nor the safe browsing mode.

@lsmag

lsmag commented Dec 8, 2018

I'm pretty sure this is tangential to the original topic but since it was brought up here, I may as well add a response in this thread: @nightpool linked to https://github.com/tootsuite/mastodon/blob/master/config/settings.yml, which states:

# This file contains default values, and does not need to be edited
# when configuring an instance.  These settings may be changed by an
# Administrator using the Web UI.
#
# For more information, see docs/Running-Mastodon/Administration-guide.md

I have actually searched my Admin UI and couldn't find where to change these. Perhaps it's just my instance, or that isn't in the code yet? The documentation it refers to also has no mention of changing these fields: https://github.com/tootsuite/documentation/blob/master/Running-Mastodon/Administration-guide.md#administration-web-interface

Nonetheless, this is great advice. I'll fiddle with this file to set default_sensitive: true for all new accounts. Thanks :)
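For anyone else looking, the excerpt I'm changing looks roughly like this (assuming the current layout of settings.yml; surrounding keys omitted):

```yaml
# config/settings.yml (excerpt)
defaults: &defaults
  # When true, media attached to new users' posts is marked sensitive by default.
  default_sensitive: false
```

Flipping that value to true should make every new account on the instance mark its media as sensitive unless the user opts out.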

@nightpool
Member

@lsmag Some of those settings are instance settings and some are default user settings. The default user settings are not normally configurable (basically, because there are a lot of them).

@Humblr
Author

Humblr commented Dec 8, 2018

It is not just photos and videos, though. It is avatars and profile headers that are causing the most issues.
This would be solved with NSFW tagging of profiles and instances.

@Esteth
Contributor

Esteth commented Dec 9, 2018

What would we show for a sensitive avatar in the federated timeline? Show an identicon instead, or a default image selected by the instance admin?

I think we should consider what to do for sensitive text posts too: ideally they don't show up on the federated timeline except for users who want to see them, IMO. At that point it would seem safe to show them the sensitive avatar too.

@Humblr
Author

Humblr commented Dec 9, 2018

Just don't show any content tagged as #NSFW (post, account, or instance) if the user is in SFW mode.
Showing it blurred or as a placeholder would be a waste of screen real estate.

@thraeryn

thraeryn commented Dec 9, 2018

That sounds, effectively, like a self-suspend.

I'm not necessarily against that if one instance is decidedly NSFW/sensitive and another is determinedly not, though it still seems a little harsh. At an instance-wide level, I guess it makes sense.

@HumblrUser

When I first started on Mastodon, on the general instance, I posted adult content with tags on it. After a while, an admin or mod contacted me, telling me that there are minors on that platform and that I should hide the tags I was using, as they were descriptive of the content I was posting.

I later switched over to humblr.social and added my tags publicly again. On humblr.social that is not a problem. The strange thing happens if someone from the general instance, or any other, reblogs my posts: by doing so, the tags become visible again to everyone, including minors and "normal" users.

While I post my images as sensitive content, many people on humblr do not, and I am sure we will not get everyone to use that function. If someone from another instance reblogs that content, everyone can see it, no matter where they are.

That is the reason there should be a workaround for those cases: either a global flag where an instance can be marked as adult (NSFW) and its content is only visible on that instance, even when reblogged from somewhere else, or a switch on the profile that works globally. I have the feeling this was not thought through, with all due respect!

@sparr

sparr commented Dec 31, 2018

@HumblrUser The responsibility for the content of a boosted post is on the booster.

@Gargron added the "suggestion (Feature suggestion)" label Jan 20, 2019
@buckket

buckket commented Feb 10, 2019

Flagging an instance as NSFW would greatly enhance usability. Currently the only options for dealing with NSFW instances are to either activate reject_media or silence them. Neither really solves the problem of hiding sensitive content while still letting users expand it when desired. The reject_media feature is way too harsh, and silencing doesn't affect the home timeline and hides non-media content as well.

So in conclusion, I think it should go both ways: allowing instance admins to flag their own instances and their users as NSFW as a precaution, but also, and more importantly, being able to enforce those flags for foreign instances.

@drequivalent

I fully agree with the idea. This is especially relevant for federated timelines; I get a lot of complaints about NSFW content on my instance's federated timeline. However, I'm not in favor of blocking instances solely on the premise of them being NSFW.

I also fully agree that when the user is in SFW mode, NSFW content shouldn't be visible at all. Nobody wants a timeline consisting of black squares.

However, the NSFW mode switch shouldn't be buried; it should be visible and accessible.
Then there's the problem of CWs. When you have NSFW mode enabled, do you want to disable CWs or not? Do you want this extra step?

I think, this should be a three-way switch then:

  • Safe mode (do not show NSFW-tagged content and content from NSFW instances)
  • Unfiltered mode (show everything, but respect the CWs)
  • NSFW mode (show everything always)

And then let the user decide what experience they really want.
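In code, the three-way switch might look something like this sketch (plain Ruby; all names hypothetical, not actual Mastodon code):

```ruby
# Sketch of the proposed three-way content filter. Returns how a status
# should be presented to the user:
#   :hidden    - not shown at all
#   :collapsed - shown behind its content warning
#   :shown     - fully visible
Status = Struct.new(:nsfw, :cw, keyword_init: true)

def presentation(status, mode)
  case mode
  when :safe       # hide NSFW entirely; respect CWs on everything else
    status.nsfw ? :hidden : (status.cw ? :collapsed : :shown)
  when :unfiltered # show everything, but keep CWs collapsed
    status.cw ? :collapsed : :shown
  when :nsfw       # show everything, always
    :shown
  end
end
```

In a real implementation, the nsfw flag would also be true for any post originating from an instance marked NSFW.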

@Gargron
Member

Gargron commented Feb 19, 2019

I think the solution is adding a sensitive column on the accounts table, filtering public timelines on it just like on silenced, and exposing that sensitive boolean in profile settings. This reminds me that silenced was originally a user-controlled field to opt out of the public timelines, which then became the function of the unlisted status visibility.
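Roughly, the filtering idea sketched in plain Ruby (hypothetical names; in Mastodon proper this would be a migration plus an ActiveRecord scope, not a plain array filter):

```ruby
# Hypothetical sketch of filtering public timelines on a per-account
# "sensitive" flag, mirroring how the existing "silenced" flag works.
Account = Struct.new(:username, :silenced, :sensitive, keyword_init: true)

# Accounts excluded here remain visible to their followers; they just
# don't appear on the public/federated timelines.
def public_timeline_accounts(accounts)
  accounts.reject { |a| a.silenced || a.sensitive }
end
```

The point of reusing the silenced code path is that the visibility semantics (followers still see everything) are already well understood.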

@drequivalent

drequivalent commented Feb 19, 2019

Right. The difference is, though, that the user doesn't have to make a list of sensitive accounts themselves.

@trwnh
Member

trwnh commented Feb 20, 2019

Something worth bringing up: Gravatar allows you to set an avatar rating, and also to upload alternative avatars that will be shown to different audiences.

[screenshot of Gravatar's avatar rating options]

It might be worth exploring allowing users to set a "safe" avatar and a "sensitive" avatar, at minimum; there's no explicit need to set G/PG/R/X per se, and 2 levels ought to be enough for most use cases. Although it might be interesting to look into whether it makes sense to adopt Gravatar (or at least remodel the avatar/banner spec after their prior work).

@Gargron
Member

Gargron commented Feb 20, 2019

So the various problems, summarized, are:

  • Avatar and header images of sensitive character (possibly also display name/bio itself)
  • People who always post NSFW stuff and don't want to always mark it as such
  • Even when posts are CW'd, some people don't like lots of NSFW posts in public timelines

Unfortunately, we must also look at how Twitter handles things: Twitter hides detected NSFW accounts from search results and hashtag timelines (they don't have other public timelines), and hides the entire profile behind a "sensitive content" spoiler when you open it. People call this shadowbanning and are not happy to be on the receiving end of it.

I don't know how to resolve this situation. I thought that granular NSFW toggles per-post would satisfy everyone, but I was wrong, as people whose primary work output must be put behind a NSFW spoiler (like sex workers and adult artists) are not happy that their posts are disadvantaged like that, and many want to have avatars and headers reflecting their work.

@jamescallumyoung

Is there any progress on this?

@asddsaz

asddsaz commented May 7, 2019

+1 I would like to point out a method used by another decentralized service, ZeroNet.
They enable any user to create a blocklist and share it: https://zero.acelewis.com/#138R53t3ZW7KDfSfxVpWUsMXgwUnsDNXLP/?Page:blocklists

This way users not only get to choose whether or not to see NSFW content, but also who censors it for them. Long term, this could be built into a community-led database.

Instances could also then craft their own individual blocklists.

just a thought :)

@ehashman

> Flagging an instance as NSFW would greatly enhance usability. Currently the only options there are to deal with NSFW instances is to either activate reject_media or silence them. Neither of those really solve the problem of hiding sensitive content but being able to expand when desired. The reject_media feature is way too harsh and silencing doesn’t affect the home timeline and hides non-media content as well.
>
> So in conclusion I think it should go both ways: Allowing instance admins to flag their own instances and their users precautionally as NSFW, but also, and more importantly, being able to enforce those flags for foreign instances.

Strongly agree with this, although I'd prefer not using "NSFW" specific tagging, and I wouldn't want the setting to be limited to instance admins. I can think of at least two cases where a user might want to mark all media as sensitive/not auto-expand on the client side:

  • Food-themed instances. There is a social convention on fedi that food should be CW'd/marked as sensitive to accommodate folks with eating disorders. This is an excellent norm that improves accessibility, but on kith.kitchen or other similar food-themed instances, I don't want to have to CW every single one of my posts as "food". I have seen some admins set an automatic outbound CW on all posts from the instance like "could contain food", but this is clunky, and then doesn't provide followers with the choice as to whether or not they want a CW applied.
  • I follow some accounts on some instances with different lewd content policies than mine. My instance requires lewd content to have a CW, theirs doesn't. I'd prefer for the lewds to be hidden behind a CW so I can enjoy the non-lewd content without getting surprised by lewd content.

@Gargron has there been any progress on this? I don't have any bandwidth for implementation but I'd be happy to help with design and scope.

@ThisIsMissEm
Contributor

Thinking about this a little recently @Gargron, and I think there's a few ways to look at dealing with content that might not be desired to be shown to different audiences:

  1. Can a user disclose that their content may be explicit or undesired by an audience? Yes, via sensitive media + CWs.
  2. Can an instance mark all content coming from it as sensitive by default? Yes, via the default settings.
  3. Can a receiving instance mark all content coming from a given instance as sensitive, or add a CW? Currently no; they can only limit the instance entirely.
  4. Can a user opt to show CW'd or sensitive media only if it matches a given hashtag? Currently no; it's all sensitive content or nothing. (This would be good for where I want to see adult content but don't want to see violence or gore.)
  5. Can a user apply rules to specific instances or users to always mark their media as sensitive or CW'd? No, but this is a lot trickier to implement.

Also, is it worth asking: should we enable age based segregation?

E.g., allow my content to be viewable by anyone over a certain age or limit my content to those below a certain age (e.g., for teenagers, allowing them to prevent interactions from adults or at least flag interactions for review; or for allowing adult creators to say "yeah, no, this content isn't for anyone under the age of 18/21").

From there, it'd certainly be possible to let instances decide how they wish to mark an account as having a certain age (whether that's the user's choice, a moderator setting a flag on the account, or a third-party service performing age verification), which would then influence the content they see and the reach that it has.
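For item 4 above, a sketch of the idea (plain Ruby; all names hypothetical):

```ruby
# Hypothetical sketch of item 4: auto-expand sensitive media only when
# the post carries a hashtag the user has explicitly allowed.
Status = Struct.new(:sensitive, :tags, keyword_init: true)

def expand_media?(status, allowed_tags)
  return true unless status.sensitive  # non-sensitive media always shows
  (status.tags & allowed_tags).any?    # sensitive: only if a tag is allowed
end
```

A user could then allow, say, #nsfw but not #gore, and only matching sensitive media would be expanded automatically.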

@MirceaKitsune

Absolutely nothing to do with segregation and censorship has any place on the Fediverse in my book, apart from the existing standard, voluntary tools, which I think do the job fine (when not themselves abused). The Victorian superstition that children shouldn't watch certain things because they cause "spooky mind effects" has already destroyed what was left of the internet by not disappearing in the 90s when it should have. The last thing we need is defaulting this sort of stuff beyond generic-use tools, taking Mastodon a step into the past and closer to becoming Twitter / Tumblr 2.0.

Nowadays I've come to believe the content warning system is good enough as is: most instances will require you to use it for anything the majority may find objectionable. While better would technically be possible, in a world like today's, where any pretext for control is a slippery slope bound to run rampant, I'd say shelve such ideas forever if possible. At most we could add a feature to pick certain icons and categories for the CW field so it's clearer.

@asdreemurr844

Hello, I found an article that relates to this issue and also generally describes the experience of minors on the Fediverse: https://blog.hellbeast.eu.org/hellbeast.eu.org/Published/Fedi%20fucking%20sucks%20(as%20a%20minor)

@regalialong

Author here; the article is now at https://blog.hellbeast.eu.org/Fedi%20fucking%20sucks%20(as%20a%20minor) although I think Cloudflare might be caching the old page for now.

I've since stopped using Fedi because I've had too many incidents of Afterdark accounts interacting with me.
