
Appropriately defined properties for Content Warnings and Content Labels #583

Closed
1 of 5 tasks
ThisIsMissEm opened this issue Feb 16, 2024 · 14 comments
Labels
needs-fep Needs a FEP

Comments

@ThisIsMissEm
Contributor

Please Indicate One:

  • Editorial
  • Question
  • Feedback
  • Blocking Issue
  • Non-Blocking Issue

Description

Currently, Mastodon (and inspired/compatible fediverse software) uses the summary and summaryMap properties for content warnings, which isn't ideal since these are also used for actual summaries of Articles, Documents, and other object types.

We need to define a new property or set of properties specifically for the purposes of doing content warnings and content labeling.

Content Warnings can largely be just a duplication of what we have for summary/summaryMap (i.e., contentWarning/contentWarningMap), as it's just a formalisation of an existing practice.

Content Labeling may be more controversial, so it may be necessary to split it into its own issue, but the two are certainly related. Content Labeling is inspired by the ideas in the Bluesky proposal; obviously this shouldn't replace good moderation, but it can aid in moderating incoming and outgoing content.

I would propose that content labeling should be an array of IRIs pointing to predefined labels, so that any label provider can exist, instead of limiting ourselves to an explicit set of labels. Using IRIs here also gives the ability to fetch additional information about the label (e.g., title, description, application logic, sameAs, age-appropriate range, etc.).
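As a sketch of what that could look like on an object (the contentWarning and contentLabel properties and the example.org namespace here are hypothetical, not part of AS2):

```json
{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    "https://example.org/ns/content-labels"
  ],
  "type": "Note",
  "content": "A post about a sensitive topic.",
  "contentWarning": "Discussion of alcohol",
  "contentLabel": [
    "https://labels.example.org/labels/alcohol",
    "https://labels.example.org/labels/adult-themes"
  ]
}
```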

Example

You run a server that is child-friendly. You do not wish to ingest content which is adult in nature, graphically violent, or related to alcohol, drugs, or gambling. A mechanism through which federating servers can send you "content labels" that you could then choose to block / mute / limit / restrict would enable that, without overloading hashtags.
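In that scenario, the receiving server could dereference each label IRI to fetch the additional information mentioned above and decide how to handle the content. A hypothetical label document (all names and the ContentLabel type are illustrative assumptions):

```json
{
  "@context": "https://example.org/ns/content-labels",
  "id": "https://labels.example.org/labels/alcohol",
  "type": "ContentLabel",
  "title": "Alcohol",
  "description": "Content depicting or discussing alcohol consumption.",
  "sameAs": "https://www.wikidata.org/wiki/Q154",
  "minimumAge": 18
}
```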

@ThisIsMissEm ThisIsMissEm changed the title Appropriately defined property/properties for Content Labels / Content Warnings Appropriately defined property/properties for Content Warnings and Content Labels Feb 16, 2024
@ThisIsMissEm ThisIsMissEm changed the title Appropriately defined property/properties for Content Warnings and Content Labels Appropriately defined properties for Content Warnings and Content Labels Feb 16, 2024
@ThisIsMissEm
Contributor Author

See also, this thread: https://hachyderm.io/@thisismissem/111862183487194716

@bleonard252

This sounds a bit like Tumblr's Community Labels, too.

@ThisIsMissEm
Contributor Author

This sounds a bit like Tumblr's Community Labels, too.

Yeah, there's definitely overlap. (I think Tumblr's implementation is a little silly, especially not being able to mark the entire blog / account as having a community label, or the UI rotating for the labels selector, but not for the composer; or the fact that community labels don't apply to their advertisers).

@jenniferplusplus

I would very much like to have this capability. But I think this runs into the verifiability problems that come up a lot with attaching info from or about third parties. How would the recipient know they can trust that these labels (and, by extension, the lack of other labels) properly reflect the rest of the object's content?

@ThisIsMissEm
Contributor Author

But I think this runs into the verifiability problems that come up a lot with attaching info from or about third parties. How would the recipient know they can trust that these labels (and, by extension, the lack of other labels) properly reflect the rest of the object's content?

I'm not sure what you mean here? I think maybe we can take an approach similar to FIRES or Bluesky's Content Labellers, where third-party labels are applied out-of-band of the message; the in-band labels are applied either directly by the user or by the instance administrator / moderator, and hence carry some degree of trust.

@jenniferplusplus

Except there is no out-of-band in activitypub. If the object is sent to an inbox (or dereferenced from the ID), then the object is all there is. Any labels it includes are self-asserted by the sender. If there's supposed to be some trusted third party attestation, then it needs to be verifiable.

On the other hand, if the recipient is supposed to just get that third party labeling on their own, then that doesn't seem like an activitypub concern. That's just a third party api that mastodon et al can integrate with.

@ThisIsMissEm
Contributor Author

I'm more thinking you could Follow an actor that publishes Label activities or something, if integrated into activitypub, but otherwise yes, I think it'd need to be a tangential API spec.
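A hypothetical shape for such an activity (Label is not a defined AS2 activity type, and the contentLabel property and example.org namespace are assumptions; this is illustrative only):

```json
{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    "https://example.org/ns/content-labels"
  ],
  "type": "Label",
  "actor": "https://labels.example.org/actors/moderation-coop",
  "object": "https://social.example/users/alice/statuses/123",
  "contentLabel": [
    "https://labels.example.org/labels/graphic-violence"
  ]
}
```

A server could Follow the labeling actor and apply these labels to objects it has already ingested, without the labels ever appearing in the original object.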

@jenniferplusplus

Oh I see. A labeling actor is an interesting idea. Especially since the same basic semantics could facilitate a fact checking or community notes-like feature.

@TallTed
Member

TallTed commented Feb 28, 2024

Things to consider -- Content Type a/k/a Trigger Warnings; Age Requirements a/k/a Maturity Level...

Challenges include different requirements in different regions.

Useful examples might come from Tumblr, LiveJournal, DreamWidth, various fanfic sites/communities, where the sites are very general purpose with smaller interest groups within.

Labeling/tagging by author is one thing; by community is another; by moderator is still another...

@evanp
Collaborator

evanp commented Feb 28, 2024

@ThisIsMissEm I like the idea of adding a property for content warnings explicitly. I think the way that we add properties to AS2 right now is:

  1. Create an extension with its own namespace and context document, usually as a FEP.
  2. Get it widely supported.
  3. Use the extension policy to get the terms added directly to the AS2 context.

So, I think that would be the next step here. If you'd like to get started on that FEP, and link it here, I'd be happy to jump in.
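For step 1 above, the extension's context document would typically just map the new terms into a namespace. A minimal hypothetical sketch, assuming the property names from this issue and a placeholder example.org namespace:

```json
{
  "@context": {
    "cl": "https://example.org/ns/content-labels#",
    "contentWarning": "cl:contentWarning",
    "contentWarningMap": {
      "@id": "cl:contentWarning",
      "@container": "@language"
    },
    "contentLabel": {
      "@id": "cl:contentLabel",
      "@type": "@id"
    }
  }
}
```

The `@container: @language` mapping mirrors how AS2 defines summaryMap relative to summary, and `@type: @id` marks contentLabel values as IRIs rather than strings.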

One complication is how we shift from using summary for content warnings to using contentWarning (or whatever property you come up with), i.e. deprecating the use of summary for CWs. The problem is that if clients don't support the new property, there's some potential for a bad experience for the user -- seeing something they don't want to see. We'll need to design that transition carefully; I don't have a clear picture of it right now.
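One possible transition strategy (a sketch, not a settled design) is to publish both properties for a while, so that clients that only understand summary still hide the content behind the warning:

```json
{
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    "https://example.org/ns/content-labels"
  ],
  "type": "Note",
  "summary": "CW: spiders",
  "contentWarning": "CW: spiders",
  "content": "A photo of a spider."
}
```

The cost is that upgraded clients would need to suppress the duplicated summary when contentWarning is present, otherwise objects with a genuine summary plus a CW become ambiguous.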

With respect to content labels, I think there are a few extra processes happening here. One is that there may be an additional activity, or a reuse of something like Flag, to label an object -- and maybe it's not only the author who can label an object.

Regardless, this cluster of features is probably best handled in a FEP.

@bumblefudge

Labeling/tagging by author is one thing; by community is another; by moderator is still another...

Totally agree. Bluesky's architecture has this idea of "subscribing" to multiple labeler/tagger actors, rather than having those be locked into the "home server's" long list of powers/obligations. I think labels/warnings/metadata/tags that originate with the author make sense to add to the original activity (AS?), while labels/warnings/metadata/tags/moderation-events/etc. that originate elsewhere are distinct activities (Annotate? a new usage/data shape for Flag? etc.); they're pretty different beasts.

@ludrol

ludrol commented Jul 22, 2024

Lemmy tags solve two different issues at once: hashtag-like content filtering and discovery, and content-warning semantics.

It took its shape as a compromise, since not all parties involved want both features with equal priority.

For the misapplied-labels problem (malicious or not), I wouldn't go for special trusted labelling actors, but instead for user content removal or defederation of a server.

@evanp
Collaborator

evanp commented Sep 13, 2024

This issue is a great candidate for a FEP; I am closing it, but it will still be listed as one of our needs-FEP tickets.

@evanp evanp closed this as completed Sep 13, 2024
@ThisIsMissEm
Contributor Author

This might be better moved to the new Trust & Safety taskforce.
