Troll Mode (like +m on IRC) #1587

Closed
gabek opened this issue Dec 12, 2021 · 15 comments · Fixed by #1681
Assignees
gabek
Labels
chat — Issues dealing with the web chat client and server

Comments

@gabek
Member

gabek commented Dec 12, 2021

I was discussing moderation ideas with @YarmoM and crew on his stream today, and this one came out of some brainstorming. I think it's a neat idea.

The idea is something akin to an "allow list" of chatters for when a troll comes in, reversing the moderation flow: a "Troll Mode", if you will. Instead of banning the bad people, you temporarily allow only the "good" ones. And while having to manually mark those you trust in the chat is additional work, it would persist and be a one-time thing per person. Moderators could do this as well.

It would be seen as a temporary measure, not something expected to be on all the time, in the hope that a few minutes of not being able to send messages would discourage a troll from wanting to spend more time there.

gabek added the chat and backlog labels Dec 12, 2021
@Semisol

Semisol commented Dec 12, 2021

An option could be to automatically allow existing viewers (rather than requiring per-person approval), so the normal people don't get impacted as easily.

@Semisol

Semisol commented Dec 13, 2021

> An option could be to automatically allow existing viewers (rather than requiring per-person approval), so the normal people don't get impacted as easily.

For clarification: when enabled, if the creator wants, everyone who is online at the time of enabling gets the "allow" flag.
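A minimal sketch of what that bulk "allow" flag could look like; the types and function names below are hypothetical, not Owncast's actual code:

```go
// Hypothetical sketch: flag everyone connected at the moment the mode is
// enabled, then gate new messages on that flag while the mode is active.
package main

import "fmt"

type ChatUser struct {
	Name    string
	Allowed bool // persistent "allow" flag on the chat account
}

// enableTrollMode marks everyone currently connected as allowed.
func enableTrollMode(connected []*ChatUser) {
	for _, u := range connected {
		u.Allowed = true
	}
}

// canChat reports whether a user may send messages while the mode is active.
func canChat(u *ChatUser, trollMode bool) bool {
	return !trollMode || u.Allowed
}

func main() {
	online := []*ChatUser{{Name: "regular"}, {Name: "lurker"}}
	enableTrollMode(online) // streamer flips the switch; everyone online is kept

	troll := &ChatUser{Name: "driveby"}   // joins after the mode was enabled
	fmt.Println(canChat(online[0], true)) // true
	fmt.Println(canChat(troll, true))     // false
}
```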

@surgediverter

surgediverter commented Dec 13, 2021

I do like that idea :)
Maybe this could even work on an access-token basis only (no IP recording / account creation needed).
Additionally, one could still send the messages users try to send to chat to moderators only instead, and let them individually decide whether to let a message pass that filter and/or add new users to the whitelist while it is active (see the sketch below).
(ignoring other aspects besides trolling, e.g. #464)
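As a rough illustration of that hold-for-moderators idea, here is a hypothetical sketch (none of these types reflect Owncast's real message pipeline): messages from non-whitelisted users are queued until a moderator approves them, optionally whitelisting the sender at the same time.

```go
package main

import "fmt"

type Message struct {
	UserID string
	Body   string
}

// ModerationQueue holds messages from non-whitelisted users until a moderator
// decides whether to let them through.
type ModerationQueue struct {
	pending   []Message
	whitelist map[string]bool
}

// Submit shows the message immediately for whitelisted senders, otherwise
// holds it for moderator review. It reports whether the message was shown.
func (q *ModerationQueue) Submit(m Message) bool {
	if q.whitelist[m.UserID] {
		return true
	}
	q.pending = append(q.pending, m)
	return false
}

// Approve releases a held message and optionally whitelists its sender so
// their later messages skip the queue.
func (q *ModerationQueue) Approve(i int, alsoWhitelist bool) Message {
	m := q.pending[i]
	q.pending = append(q.pending[:i], q.pending[i+1:]...)
	if alsoWhitelist {
		q.whitelist[m.UserID] = true
	}
	return m
}

func main() {
	q := &ModerationQueue{whitelist: map[string]bool{"regular": true}}

	fmt.Println(q.Submit(Message{UserID: "newcomer", Body: "hi"}))  // false: held for review
	fmt.Println(q.Approve(0, true).Body)                            // "hi" is released
	fmt.Println(q.Submit(Message{UserID: "newcomer", Body: "hi2"})) // true: whitelisted now
}
```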

@gabek
Member Author

gabek commented Dec 13, 2021

> Maybe this could even work on an access-token basis only (no IP recording / account creation needed).

Yup, this would have nothing to do with IP addresses, but would use the existing chat accounts.

@hollunder

I wonder how persistent that will be. I see a fair share of users on Hatnix's server who have to set their name every day; I think that means they would have to be whitelisted every time as well.
It definitely does not scale, however; at some point you simply can't whitelist everyone. This is of course compounded by the above problem.
Impersonation is also possible but I doubt that's a serious problem in this particular case.

@gabek
Member Author

gabek commented Dec 13, 2021

It's designed to be persistent. However, people who block access to local storage or only open the page in incognito windows obviously need to reset their chat identity every time, because they are purposely blocking it.

If there are other scenarios where people are losing their identity without doing it on purpose, I'd love to hear the details behind it.

@hollunder

I guess it is due to people's browser configuration or plugins, but obviously I don't know.
When you say identity, does it come down to the name alone?
Could the spammer grab a list of whitelisted names and just use those?

@YarmoM
Contributor

YarmoM commented Dec 13, 2021

Each visitor stores an access token in their browser's local storage, and I'm pretty sure that is the actual identity. So a troll could pick the name of an allowed person, but that won't be enough to impersonate them: they would need the access token.
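A hedged sketch of that point, with purely illustrative data structures: if the server keys users by access token, copying an allowed person's display name does not transfer their "allow" flag.

```go
package main

import "fmt"

type User struct {
	DisplayName string
	Allowed     bool
}

// usersByToken is keyed by access token; renaming yourself does not move you
// to a different entry, and copying someone's name does not give you their token.
var usersByToken = map[string]*User{
	"tok-regular-123": {DisplayName: "friendly_regular", Allowed: true},
}

func identify(token string) (*User, bool) {
	u, ok := usersByToken[token]
	return u, ok
}

func main() {
	// A troll calling themselves "friendly_regular" still presents an unknown
	// token, so they don't inherit the allowed user's flag.
	if _, ok := identify("tok-troll-999"); !ok {
		fmt.Println("unknown token: treated as a new, un-allowed chatter")
	}
	if u, ok := identify("tok-regular-123"); ok {
		fmt.Printf("%s allowed=%v\n", u.DisplayName, u.Allowed)
	}
}
```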

@YarmoM
Contributor

YarmoM commented Dec 13, 2021

Ideally, identity is something "external" and could be as simple as a text file containing an access token or a cryptographic keypair (generated by the Owncast instance, for example). People could either start typing in chat "unauthenticated", or first upload that text file (the contents of which are then stored in local storage) and be "authenticated" in chat. Chat messages are signed on the fly, letting the Owncast instance know "yes, this really is X". Now whitelisting becomes easy: people can block local storage, clear it, open in incognito or whatever else, and they still keep their identity.

An additional benefit would be that the identity could then be used by third-party chat clients, such as a terminal one (wink wink).

It probably sounds more complicated than it actually is! It requires:

  • adding a "key fingerprint" field to each user,
  • adding a bit of client-side code that allows uploading a text file, checks whether it's a keypair, and puts it in local storage,
  • adding a bit of client-side code that signs chat messages when a keypair is found in local storage,
  • adding a bit of server-side code that verifies the signature before showing the message in chat.

My point: troll mode may become more effective with minimal cryptographic message signing.
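A minimal sketch of what that signing step could look like, assuming Ed25519 keypairs and a SHA-256-based fingerprint; these choices and names are assumptions for illustration, not what Owncast would necessarily use:

```go
package main

import (
	"crypto/ed25519"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// fingerprint is the per-user "key fingerprint" field mentioned above
// (assumed here to be a truncated SHA-256 of the public key).
func fingerprint(pub ed25519.PublicKey) string {
	sum := sha256.Sum256(pub)
	return hex.EncodeToString(sum[:8])
}

// verifyMessage is the server-side check: the message is only shown if the
// presented public key matches the fingerprint on file and the signature is valid.
func verifyMessage(pub ed25519.PublicKey, knownFingerprint, body string, sig []byte) bool {
	return fingerprint(pub) == knownFingerprint && ed25519.Verify(pub, []byte(body), sig)
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(nil) // the "text file" identity
	storedFingerprint := fingerprint(pub)    // what the server keeps per user

	msg := "hello chat"
	sig := ed25519.Sign(priv, []byte(msg)) // the client signs on the fly

	fmt.Println(verifyMessage(pub, storedFingerprint, msg, sig))           // true: "yes, this really is X"
	fmt.Println(verifyMessage(pub, storedFingerprint, "forged text", sig)) // false: rejected
}
```

In this sketch the public key travels with the message and is checked against the stored fingerprint before the signature itself is verified; the private key never leaves the client.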

---edit---

To make it even simpler and avoid cryptographic keys, add buttons to copy and paste the access token so we can store it in password managers. Then, when needed, we can enter the access token and the identity is restored across sessions. An Owncast mobile app one day? Add a button to turn the access token into a QR code and let the app scan it: identity shared across devices.

@surgediverter

> I wonder how persistent that will be. I see a fair share of users on Hatnix's server who have to set their name every day; I think that means they would have to be whitelisted every time as well. It definitely does not scale, however; at some point you simply can't whitelist everyone. This is of course compounded by the above problem. Impersonation is also possible, but I doubt that's a serious problem in this particular case.

If I understood it correctly, it's meant to be a temporary action.

I don't think that whitelisting everyone is necessary; I see it more as a regulars/followers-only concept.

@Semisol

Semisol commented Dec 13, 2021

> If I understood it correctly, it's meant to be a temporary action.
>
> I don't think that whitelisting everyone is necessary; I see it more as a regulars/followers-only concept.

Yeah, meant for some really abusive times, nothing else.

@hollunder

> If I understood it correctly, it's meant to be a temporary action.
>
> I don't think that whitelisting everyone is necessary; I see it more as a regulars/followers-only concept.

Yes, it is supposed to be a temporary measure, but you would want the whitelist to be persistent. Imagine a spammer who returns every stream. As the streamer (moderators are not possible yet), you would not want to whitelist your regular viewers every stream, but ideally only once. For that to work, the whitelist needs to be persistent and the users need to be identified reliably.
Especially since this system does not scale well with viewer numbers: how many viewers do you want to whitelist, 10, 100, 1000? This is not an immediate problem, as the channels typically do not have many viewers yet.

@gabek
Member Author

gabek commented Dec 13, 2021

I've mentioned a couple of times that user accounts are indeed persistent. If there is a larger issue where persistence is not working for you, then please let me know the specifics behind it, but as far as I'm aware this is not a problem.

@gabek
Member Author

gabek commented Dec 14, 2021

> For clarification: when enabled, if the creator wants, everyone who is online at the time of enabling gets the "allow" flag.

But that would also enable the troll to keep trolling.

Another option, instead of whitelisting individuals, is to implicitly allow people who have had a chat account on that server for longer than X amount of time, with the assumption that a troll hasn't been hanging around for long before causing trouble. I don't know what that amount of time would be; you could argue 10 minutes might be long enough, maybe an hour, maybe a day. It's just an idea that would require less manual work from the streamer and moderators, and also fewer pieces of UI.
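A sketch of that "established account" check; the one-hour threshold and the field names below are placeholder assumptions, not what was actually shipped:

```go
package main

import (
	"fmt"
	"time"
)

type ChatUser struct {
	DisplayName string
	CreatedAt   time.Time
}

// establishedAfter stands in for the "X amount of time" above.
const establishedAfter = 1 * time.Hour

// isEstablished reports whether the account has existed long enough to be
// implicitly allowed while the restricted mode is active.
func isEstablished(u ChatUser, now time.Time) bool {
	return now.Sub(u.CreatedAt) >= establishedAfter
}

func main() {
	now := time.Now()
	regular := ChatUser{DisplayName: "regular", CreatedAt: now.Add(-48 * time.Hour)}
	driveBy := ChatUser{DisplayName: "driveby", CreatedAt: now.Add(-5 * time.Minute)}

	fmt.Println(isEstablished(regular, now)) // true: implicitly allowed
	fmt.Println(isEstablished(driveBy, now)) // false: silenced while the mode is on
}
```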

@aaronpk
Contributor

aaronpk commented Dec 14, 2021

YouTube has a similar feature:

[screenshot of YouTube's live chat setting that restricts chat to viewers subscribed for at least a set amount of time]

You can make it so only people who have been subscribed for X minutes can chat. The idea is to prevent drive-by spamming of the chat. Of course a dedicated troll can subscribe and wait 5 minutes to get around it, but I have a feeling this prevents most drive-by spam.

gabek removed the backlog label Jan 14, 2022
gabek self-assigned this Jan 19, 2022
gabek linked a pull request Jan 19, 2022 that will close this issue
gabek added a commit that referenced this issue Mar 7, 2022
* Add support for established user mode. #1587

* Tweak tests

* Tweak tests

* Update test

* Fix test.

6 participants