Troll Mode (like +m on IRC) #1587
Comments
An option could be to automatically allow-unless-approved for existing viewers, so the normal people don't get impacted as easily.
For clarification: when enabled, if the creator wants, everyone who is online at the time it's enabled gets the "allow" flag.
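The bulk-allow behavior described here can be sketched in a few lines. This is a hypothetical illustration, not Owncast's actual code; the names `allowList`, `enableTrollMode`, and the user objects are all assumptions.

```javascript
// Hypothetical sketch: when "troll mode" is switched on, snapshot everyone
// currently connected and grant each of them the allow flag, so existing
// viewers can keep chatting uninterrupted.
const allowList = new Set();

function enableTrollMode(onlineUsers) {
  for (const user of onlineUsers) {
    allowList.add(user.id); // existing viewers are grandfathered in
  }
}

enableTrollMode([{ id: "viewer-1" }, { id: "viewer-2" }]);
```

Anyone who joins after the snapshot would need to be allowed manually, which matches the "allow-unless-approved for existing viewers" suggestion above.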
I do like that idea :)
Yup, this would have nothing to do with IP addresses, but would use the existing chat accounts.
I wonder how persistent that will be. I see a fair share of users on Hatnix's server who have to set their name every day. I think that means they would have to be whitelisted every time as well.
It's designed to be persistent. However, people who block access to local storage, or who only open the page in incognito windows, obviously need to reset their chat identity every time, because they are purposefully blocking it. If there are other scenarios where people lose their identity without doing it on purpose, I'd love to hear the details.
I guess it's due to people's browser configuration or plugins, but obviously I don't know.
Each visitor stores an access token in their browser's local storage, and I'm pretty sure that is the actual identity. So a troll could pick the name of an allowed person, but that won't be enough to impersonate them: they would need the access token.
Ideally, identity is something "external": it could be as simple as a text file containing an access token or a cryptographic keypair (generated by the owncast instance, for example). People could either start typing in chat "unauthenticated", or first upload that text file (the contents of which are then stored in local storage) and be "authenticated" in chat. Chat messages are signed on the fly, letting the owncast instance know "yes, this really is X". Whitelisting then becomes easy: people can block local storage, clear it, open the page in incognito or whatever else, and they still keep their identity. An additional benefit is that the identity could be used by third-party chat clients, such as a terminal one (wink wink). It probably sounds more complicated than it actually is! It requires:
My point: troll mode may become more effective with minimal cryptographic message signing. ---edit--- To make it even simpler and avoid cryptographic keys, add buttons to copy and paste the access token so we can store it in password managers. Then, when needed, we enter the access token and the identity is restored across sessions. An owncast mobile app one day? Add a button that turns the access token into a QR code and let the app scan it: identity shared across devices.
If I understood it correctly, it's meant to be a temporary action. I don't think whitelisting everyone is necessary; I see that more as a regulars/followers-only concept.
Yeah, meant for some really abusive times, nothing else.
Yes, it is supposed to be a temporary measure, but you would want the whitelist to be persistent. Imagine a spammer who returns every stream. You as the streamer (moderators are not possible yet) would not want to whitelist your regular viewers every stream, but ideally only once. For that to work, the whitelist needs to be persistent and the users need to be identified reliably.
I've mentioned a couple of times that user accounts are indeed persistent. If there is a larger issue where persistence is not working for you, then please let me know the specifics behind it, but as far as I'm aware this is not a problem.
But that would also enable the troll to keep trolling. Another option, instead of the idea of whitelisting individuals, is to implicitly allow people who have had a chat account on that server for longer than X amount of time with the assumption that a troll hasn't been hanging around for long before causing trouble. I don't know what that amount of time would be. You could argue 10 minutes might be long enough. Maybe an hour. Maybe it should be a day. Just an idea that would require less manual work from the streamer and moderators, and also fewer pieces of UI. |
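The implicit-allow idea above (allow accounts older than some threshold) can be sketched like this. The threshold and all names here are placeholders, since the comment itself leaves the right duration open.

```javascript
// Hypothetical sketch of age-based implicit allowing: accounts older than a
// threshold keep chatting during troll mode; newer ones are muted.
const MIN_ACCOUNT_AGE_MS = 60 * 60 * 1000; // 1 hour, a guess per the comment

function mayChat(user, now = Date.now()) {
  return now - user.createdAt >= MIN_ACCOUNT_AGE_MS;
}

const now = Date.now();
const regular = mayChat({ createdAt: now - 2 * 60 * 60 * 1000 }, now); // 2h old
const newcomer = mayChat({ createdAt: now - 5 * 60 * 1000 }, now);     // 5m old
```

The appeal of this variant is exactly what the comment notes: no per-person clicking by the streamer and almost no extra UI, at the cost of also muting genuinely new viewers for a while.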
Was discussing with @YarmoM and crew on his stream today about moderation ideas, and this one came out of some brainstorming that I think is a neat idea.
The idea is something akin to an "allow list" of chatters for when a troll comes in, reversing the moderation flow: a "Troll Mode", if you will. Instead of banning the bad people, you temporarily allow only the "good" ones. And while having to manually mark those you trust in chat is additional work, the marking would persist and be a one-time thing per person. This could also be done by moderators.
It would be a temporary measure, not something expected to be on all the time, in the hope that a few minutes of not being able to send messages would discourage a troll from wanting to spend more time there.
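The reversed moderation flow proposed here boils down to one gate in the message path. A minimal sketch, assuming hypothetical names (`state`, `canSend`) rather than anything from the Owncast codebase:

```javascript
// Hypothetical sketch of the "Troll Mode" gate: while the mode is on,
// only users on the persistent allow list may send chat messages.
const state = { trollMode: false, allowed: new Set(["regular-1"]) };

function canSend(userId) {
  if (!state.trollMode) return true;  // normal operation: everyone may chat
  return state.allowed.has(userId);   // troll mode: allow-list only
}

const before = canSend("stranger-7"); // troll mode off: permitted
state.trollMode = true;
const during = canSend("stranger-7"); // troll mode on: muted
const trusted = canSend("regular-1"); // on the allow list: still permitted
```

Because the allow set persists across streams, marking a trusted viewer is a one-time action, which is the property the proposal relies on.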