Admin Option to Disable Trending Feature #7702

Closed
lawremipsum opened this Issue Jun 1, 2018 · 46 comments

@lawremipsum

lawremipsum commented Jun 1, 2018

Instance admins should be able to easily disable the trending hashtag feature.


  • [x] I searched or browsed the repo's other issues to ensure this is not a duplicate.
@nightpool


Collaborator

nightpool commented Jun 1, 2018

why?

@mal0ki


mal0ki commented Jun 1, 2018

Why it would be good to be allowed to turn off trending hashtags on a server:

  • It's federated, so anyone outside of your server can affect what your people see (to the extent that you federate with them) by gaming the trending algorithm.
  • It will affect discourse, and instance admins may not want that and would therefore want to turn it off.
  • Even if it doesn't directly mess up the chronological timelines more than boosts already do, it will still shape conversation: people talk about what people talk about, and then talk about it more.

All of these are valid reasons for an instance admin not to want this feature enabled on their instance.

@lawremipsum


lawremipsum commented Jun 1, 2018

  • multiple admins want to
  • it [the trending feature] is of arguable value
  • it [the trending feature] alters the feel of a Mastodon instance and emphasizes/implements values that admins may disagree with; e.g., algorithmic trending topics is an anti-pattern that promotes things in a way that negatively influences communities
@Laurelai


Laurelai commented Jun 1, 2018

A lot of admins do not want this feature, as it contributes to the gamification of social media, something we left Twitter to avoid. So turning this off should be an option. In fact, if you absolutely have to add trending tags (and frankly, you shouldn't), it should be off by default.

@jh4c


jh4c commented Jun 1, 2018

Trending topics have never been well received on any network that's tried to implement them. There is no user-friendly way to do so; it's an inherently user-antagonistic system that creates echo chambers of opinion and offers nothing new. I thought the point of Mastodon was to leave out all the bullshit from other networks?

@nightpool


Collaborator

nightpool commented Jun 1, 2018

@Gargron


Member

Gargron commented Jun 1, 2018

I thought the point of Mastodon was to leave out all the bullshit from other networks?

That's kind of a loaded way of arguing. The point of Mastodon is being a decentralized, standards-based social network with a focus on user experience and anti-abuse tools. What is and isn't "bullshit from other networks" is far from clear and has no direct relation to the mission statement. The feature request over at #271 has gained a lot of support so clearly it's a feature that some people consider useful.

Now, developing Mastodon is always about compromises between groups of people who call for opposite things: a lot of people want to be able to search toots, but searching toots leads to stuff like having to say goobergrate to not get dogpiled; so the compromise was to let you search stuff you've already seen anyway (mentions, favourites, your own toots).

In this particular case, I think that trending hashtags are a compromise. One extreme is not having any trends, the other extreme is using NLP and personalization to extract trending topics from raw text like Twitter does, which would be more useful since hashtags are only a minor part of the total volume of toots. Trending hashtags uses markup specifically designed for being discoverable, with an option to not participate (unlisted toots don't touch hashtags they contain).

So hashtags are for finding toots, but how do you find hashtags? Unless you have a wide and established social group, you would have to watch the federated timeline like a hawk to catch them. For example, as today is a Friday, #ff is trending. I knew about follow fridays, and a lot of seasoned users do too, but there are plenty of folks who never heard of the tradition and now had a chance to join in, making the hashtag a lot more useful in turn.

Similarly, it's nice to see #rubykaigi and #pride in trends, as I might have otherwise totally missed that those events are going on. Again, especially when it comes to new users, this stuff can do a lot of work in helping them get started and stick around.

I've added the ability to collapse the widget so you don't have to see trends if you don't care. Also, admins will have an interface to monitor and moderate trends. Bots and silenced users and non-public toots don't play a role in trends.
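The exclusions described above (bots, silenced users, non-public toots) can be pictured as filters applied before tags are tallied. The sketch below is purely illustrative; the `Post` struct, its field names, and the scoring-by-raw-count are assumptions for the example, not Mastodon's actual implementation:

```ruby
# Hypothetical trend tally: count hashtag use in recent public posts,
# skipping the categories that "don't play a role in trends".
Post = Struct.new(:tags, :visibility, :bot, :silenced, :created_at,
                  keyword_init: true)

def trending_tags(posts, window_hours: 24, limit: 5, now: Time.now)
  cutoff = now - window_hours * 3600
  counts = Hash.new(0)
  posts.each do |post|
    next unless post.visibility == :public # non-public toots don't count
    next if post.bot || post.silenced      # neither do bots or silenced users
    next if post.created_at < cutoff       # only recent activity counts
    post.tags.uniq.each { |tag| counts[tag.downcase] += 1 }
  end
  counts.sort_by { |_tag, n| -n }.first(limit).map(&:first)
end
```

The moderation interface mentioned above would then filter this list further before anything is displayed.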

Overall, I fail to see how this feature negatively impacts Mastodon. Things like "gamification" are broad concepts. Receiving responses, favourites and boosts from other people might give you dopamine, and having people who care about what you're posting might give you dopamine, so that's all gamification too, but removing those things would certainly kill the platform. (There are some people who genuinely believe that the social network equivalent of writing to /dev/null is the way to go, but I highly doubt that a service like that would actually survive; people want to have fun.)

TL;DR: This issue isn't actually about the validity of trends as a feature; that would be #271.

@Laurelai


Laurelai commented Jun 1, 2018

In regards to gamification: if you have federation-wide trending tags, you create incentives for people to game that system to force a tag into trending status. It can and it will happen, and it will cause whatever posts they are gaming to gain visibility. It's a way to spread propaganda, lies, illegal content and other malicious postings. You will also incentivize tag wars, where competing ideological factions battle for supremacy over a tag. These are all things that happen on Twitter because it has a trending system. The same goes for Facebook, which is in fact removing its trending feature.

@Gargron


Member

Gargron commented Jun 1, 2018

If admins are given tools to moderate trends (and as I said that's planned) then I actually don't see how that problem space is at all different to simply having local/federated timelines at all, as such wars, lies, illegal content and other malicious postings can gain visibility there easily. OTOH sifting through the timelines is harder than looking at trends, but it's a case where risk & reward rise equally since you are also making discovery of good content easier.

There is no user-friendly way to do so; it's an inherently user-antagonistic system that creates echo chambers of opinion and offers nothing new.

I kind of missed this sentence and I want to add that if trending tags did not offer anything new I obviously wouldn't waste my time on them. The point is that it lets you see outside of your echo chamber (the people you follow) to see what others are talking about. Also, the server culture of Mastodon is already heavily skewed towards echo chambers, so it's odd to bring that up now. Not that echo chambers are a bad thing, insofar as it's, you know, your friends and your support system.

@Laurelai


Laurelai commented Jun 1, 2018

If you are going to add this, I cannot stop you; you have root on the repo. However, please leave it off by default and let admins turn it on and off as needed. I just cannot see any way that this ends well.

@lawremipsum


lawremipsum commented Jun 1, 2018

I agree that this issue is distinct from #271, and for my part concede that some people find the feature valuable. And I appreciate the time Gargron puts into new features, even the ones I disagree with or don't personally value.

My request is simply that I, as an admin, be able to turn off visibility of the feature for all users on my instance. I don't particularly care if other instances compute trends based on what federates to them, so I'm not asking for a feature that somehow firewalls my instance's federated toots from the algorithm, or for any change that complicates the algorithm at all. Just a checkbox in the admin pane.
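The "checkbox in the admin pane" being requested is a display-side toggle only. A minimal sketch of that idea, assuming a hypothetical `trends_enabled` setting (this name is illustrative; Mastodon defines no such option):

```ruby
# Hypothetical per-instance setting: trends are still computed and
# federation is untouched, but the widget is hidden from local users
# when the admin unchecks the box.
class InstanceSettings
  def initialize(trends_enabled: true)
    @trends_enabled = trends_enabled
  end

  def visible_trends(computed_trends)
    @trends_enabled ? computed_trends : []
  end
end
```

Because nothing upstream of display changes, this would not complicate the trend algorithm itself.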

This toggle could be viewed as an anti-abuse feature, as it would be a decisive means for an admin to stop or prevent any of the unwanted behaviors that might arise, which have at this point been well identified.

If users want to stay tapped into trends they can sign up on an instance that hasn't turned them off.

@Laurelai


Laurelai commented Jun 1, 2018

If you turn it off on your instance you should also be able to opt out of your instance being indexed for trending tags elsewhere.

@lawremipsum


lawremipsum commented Jun 1, 2018

I would welcome this as well.

@Gargron


Member

Gargron commented Jun 1, 2018

I am not a magician and I cannot change the laws of physics. The more impossible things like this you ask for, the more situations we will get where some server software forgets or refuses to implement the courtesy features of Mastodon that can only work on good faith and we get a massive discourse about privacy expectations on the internet. There's nuance and fine lines and "I do what I can" solutions but you've gotta draw the line somewhere and toots cannot be both publicly accessible to everyone and locked away from particular uses. I mean that's not even a decentralization thing, Twitter is a completely centralized platform with complete control over content access and all tweets are routinely used for data analysis by independent researchers.

Making a public post with a hashtag means, under any reasonable interpretation, that you want the post to be associated with that hashtag and found through it. Whether there's a list of most used hashtags, an A-Z list of hashtags, or indeed a sorted set based on fluctuations in frequency of use, I don't see how that is incompatible with the intent of discovery. Let's not bring the whole opt out of indexing thing into this. Your opt out of indexing (based on courtesy) is "unlisted", and your opt out of indexing based on server trust is "private".

@Laurelai


Laurelai commented Jun 1, 2018

If you can't implement it safely then don't implement it at all. Frankly, you are just making the argument that this feature should not exist.

@mal0ki


mal0ki commented Jun 2, 2018

The point of Mastodon is [having] anti-abuse tools.

You can't have anti-abuse tools by actively creating tools that are very likely to be used for abuse, without consulting people who know and understand anti-abuse.

If you really want to make good on this anti-abuse promise, that you just made now right here in this thread, it would be nice if you acted like it when people are putting up MASSIVE warning signs about this being prime for abuse against individual people.

@rtucker


Contributor

rtucker commented Jun 2, 2018

Okay, so, hypothetical scenario: bots and/or humans scatter out and make #ReyIsAFox a trending topic. Clearly a lie, intended to slander my character.

What is the plan for dealing with that?

@trwnh


Contributor

trwnh commented Jun 2, 2018

If you can't implement it safely then don't implement it at all.

I guess by that reasoning we shouldn't have followers-only posts (a la recent discourse on "servers respecting implementations"). Which, to be fair, is a valid and defensible position, but Mastodon as a project doesn't seem to be aiming to build completely secure/private software; it's aiming to build public communications (with privacy frankly not being truly supported unless you limit all private features to a Mastodon-only federation somehow).


Anyway, what I have to say about the feature itself is that trends are used only for putting something in front of your eyeballs. If you make something artificially trend (say, by botnets of multiple accounts), then you get a lot of exposure that you wouldn't have otherwise gotten if your posts just whizzed by on the public timelines with no one paying attention.

Sure, you can ban the botnets, but then that leads to sockpuppets, and now the admin has to decide whether everyone is genuine or fake. And you can't control what trends on other instances even if you disable trends on your own, so abusive information can rise quickly in trends on servers that you have no control over.

At the same time, there is still some value in being aware of what's going on, sure. It's just worrying that the only considerations for abuse right now seem to be reactive instead of preventative. You disable trends if you don't want them, you ban spambots or suspend abusive users, you blacklist certain hashtags... none of those really address the problem at its root, which is the vehicle by which abusers get information in front of you.

@sssdddccc


sssdddccc commented Jun 2, 2018

So...I'm seeing a bunch of people tell another person planning to do a bad thing that it's a bad thing, and in response, I'm seeing the other person come up with any half-baked excuse they can for why they're just going to do it anyway, because it's fun to pull wings off insects to see what happens or whatever is behind this obtuse, downright unethical type of thinking.

There's a ton of other stuff users have actually been asking for, for a YEAR in some cases, and this "will please the heck out of marketers someday/probably already while enabling the heck out of harassment" non-feature is the thing that's important to do right now?

Eugen, you didn't start out this way. I was around, and you sold a lot of us on "Hey, come here, we're going to build tools that will secure this network from harassment and stand tall against harassers", but now it's "No, I won't protect instance privacy even when asked over and over again, but hey, if people start using a tag a bunch, let's see what happens with that!" What happened? I know development, especially while dealing publicly with the userbase, is a highly frustrating thing at times, but did it really straight-up poison you like this?

And yeah, if you and whoever's on the project now are just going to do whatever you want with the software, anyway, can you maybe stop pretending that input from the userbase holds any water whatsoever with you, or at least stop claiming that you're against abuse, when your "features" can totally be used for it, and the ones we straight-up beg for, based upon, in some cases, multiple decades of real-life experience watching things go south without them, get categorically ignored in the interest of this almighty Utopian concept of FEDERATION? Like, just say "I want my platform, because it's clearly mine, and not the userbase's, to work this way because it's the only way it'd make sense to me, and I do not care which users it harms, alienates or disenfranchises, no matter how hard some of them have been working to popularize said platform even while I've been acting in a hostile manner toward them using it." Behaving in a genuinely disappointing way would be more worthy of respect than all of this doublespeak I've seen for months and months now. This is Facebook engineer behavior ("We know what's good for you better than you do"), not the kind I saw from you in the earlier days of the platform.

I know, this is a "loaded", counterproductive, flamey, probably violating some general principles of how GitHub is supposed to work, whatever you wanna call it way of trying to get a point across, but people have been asking nicely for a long time, and frankly, you, the creator of this software and this protocol, have been either ignoring them or treating them like garbage. I closed my own instance because I didn't want to help popularize a platform whose creator would do what you've been doing (and unfortunately, doing that impacted another person who's pretty vital to your community financially, but I couldn't see a point to inviting friends, colleagues and people I respect to a hostile platform), but mine is a privileged position, and a lot of people using Mastodon really don't have much of anywhere else to go. I'm going to ask you one last time, even if I feel like it's a waste of breath, to respect these people who are just trying to survive on your platform, some of whom, again, are still trying to get people to come to it even with you acting in ways that endanger them like this latest round of foolishness, and maybe start listening to them without being as condescending as you've been here, on the previous thread about this "feature", and elsewhere for months now.

@Gargron


Member

Gargron commented Jun 2, 2018

It's just worrying that the only considerations for abuse right now seem to be reactive instead of preventative. You disable trends if you don't want them, you ban spambots or suspend abusive users, you blacklist certain hashtags... none of those really address the problem at its root, which is the vehicle by which abusers get information in front of you.

Can you have it any other way, really? Perhaps if you have Big Data you could use predictive algorithms to decide if someone is likely to become an abuser. Or you could manually pre-screen all messages sent by users. Those do not seem like desired methods to me. Our reactive methods are to my knowledge the only possible methods, and they are tangentially preventative (when an abuser gets silenced, it prevents the next victim from being harmed, even though the original victim was still harmed). Do you have any other suggestions?

Okay, so, hypothetical scenario: bots and/or humans scatter out and make #ReyIsAFox a trending topic. Clearly a lie, intended to slander my character.

What is the plan for dealing with that?

What is the plan for dealing with a large number of bots or humans posting slander in general? Trends or no trends that doesn't sound like a good thing, especially considering that with a variety of TrendingBots and 3rd party websites tracking hashtag use across the fediverse, the djinn is out of the bottle on that type of issue, and it's much harder to guarantee that such 3rd party systems would remove a harmful tag. I mean it's kind of why we have a report system and moderation tools...

@Gargron


Member

Gargron commented Jun 2, 2018

because it's fun to pull wings off insects to see what happens

What's wrong with you? Just wanted to highlight this.

Eugen, you didn't start out this way. I was around, and you sold a lot of us on "Hey, come here, we're going to build tools that will secure this network from harassment and stand tall against harassers", but now it's "No, I won't protect instance privacy even when asked over and over again, but hey, if people start using a tag a bunch, let's see what happens with that!" What happened? I know development, especially while dealing publicly with the userbase, is a highly frustrating thing at times, but did it really straight-up poison you like this?

I never promise anything until it's done. I've built first, announced second, and when you came here, you came because those things were already here, and the way you have phrased your misquotes is designed to elicit an idea that I've somehow gone back on my promises or ideals. I've built this software from the ground up, done my research, shaped it according to user feedback and my vision and judgement, and you liked the result enough to come here.

can you maybe stop pretending that input from the userbase holds any water whatsoever with you,

So because I implement a feature that was requested by a large number of users, but you don't like it, that means input from the userbase does not hold any water for me, did I understand that correctly?

or at least stop claiming that you're against abuse, when your "features" can totally be used for it

Spoken language can be used for abuse. All sorts of tools can be used for abuse. I'm interested in preventing abuse but not at the cost of basic functionality.

get categorically ignored in the interest of this almighty Utopian concept of FEDERATION

I'm sorry to break it to you but you are, in fact, posting on the GitHub page of a federated social network, based on the principles of servers of varying sizes exchanging information to allow people to talk to each other. If that's what you don't like, you are in the wrong place. There is absolutely 0 chance that this software will pivot to being a centralized commercial service for any reason. If something is not possible to do in a decentralized fashion, that's just how it is.


I am interested in improving Mastodon, and that means keeping user safety in mind, but also increasing its utility to its users, addressing demands of the various groups of people who use it, and ensuring that it remains friendly to new users.

@sssdddccc


sssdddccc commented Jun 2, 2018

Again with the empty public relations-speak, but I'll respond one more time, for some reason. You clearly only hear what you want to hear, but perhaps other people who aren't as set in their ways will see you as I do, and maybe they'll have better luck getting through to you than I have in previous attempts or than I will in this one. Doubt it, but it's this or Netflix, and this might actually matter.

Seriously, when your users (and not just me, clearly; I was late to this party, in fact...more of them would tell you, too, but when they speak up in any of the official channels, they're pretty mercilessly targeted for harassment by the usual suspects, and ignored by the people who handle the decisions on the software) are telling you "No, don't do something, there is strong historical precedent of this being a problem everywhere it's implemented, and a problem that tangibly hurts users", and you're still stanning for the idea, you are demonstrating that you "just want to see what happens", which, yes, makes you just like someone who breaks or kills something because "you wanted to see how it worked". Disagree if you will, but you're demonstrating a clear lack of both compassion and ethics by pushing so hard for this feature. And no, don't even give me this "a large number of users requested this" nonsense, not for a second. Even if you've got some raw numbers, "a large number of users" have requested a lot of truly terrible, hurtful things in their software and on their social networks many, many times. The greater good isn't a popularity contest.

As for you not promising things, you were chasing high-profile harassment targets all over Twitter, trying to get them to come here, because "We don't allow Nazis!", etc., and yet, you've federated with, encouraged federation with, and even pointed those "new users" who you love so much to throw in with a whole mess of instances that have harbored them (and somewhat defiantly, in more than a few cases). Other people are better at maintaining receipts on this stuff than I am, but I can assure anyone reading this that they're out there, screenshotted for all to see.

And sure, of course I liked things well enough when I first arrived a year and a half ago, because none of your principles had really been tested yet at that point. The userbase was still very small, and the software was very, very early. When we'd all had a chance to kick the tires, we started saying things like "You know, it may be a good time to implement things like whitelists, instance-only posting, an ability to switch off federation globally", and you started pushing back on that, because at that point, your vision for the software became something that was at odds with the needs of the community. All three of those things, just as examples, are entirely technically possible. If you can block instances, you can do the inverse or, in extreme cases, you can shut off federation. If you can do followers-only posts, you can do instance-only posts. People across the fediverse asked for these features over and over again, but you did nothing (and don't even try and claim that your "research" backs up your reasoning; I've been on social networks of some sort or another for decades now, from IRC channels on down, and I've seen every single one of those social networks that's gone south fall prey to people getting sick of not being protected by the networks' creators and/or admins or, worse, people getting sick of actively being endangered and harassed by said creators/admins).

Spoken language can be used for abuse. All sorts of tools can be used for abuse. I'm interested in preventing abuse but not at the cost of basic functionality.

Yeah, you're right. Why build in any of your other features, like the block button, the mute button, content warnings, or the ability to delete accounts if you decide an admin is hostile (I finally got around to using this one on your instance earlier, in fact), when people are just going to find ways around them and continue abusing people? Why have laws? Why have civilization? Eventually, someone's just going to do something bad, and we're powerless to stop it.

People who actually have experience dealing with these things, because they've been marginalized off half the wider Internet, at minimum, are telling you that this particular feature you're so jazzed about can be weaponized, and has been weaponized, and what, because a paper clip can kill people, you're just gonna get all existentialist about things and do nothing? Do you have any idea exactly how devoid of concern and empathy you appear by leaning in on this particular line of defense? "Oh well, we're all going to die someday anyway. Eat at Arby's." It's lazy, garbage thinking, and I hope you can see that it's garbage now.

And while you're being obtuse again with this utter rubbish...

I'm sorry to break it to you but you are, in fact, posting on the GitHub page of a federated social network, based on the principles of servers of varying sizes exchanging information to allow people to talk to each other. If that's what you don't like, you are in the wrong place. There is absolutely 0 chance that this software will pivot to being a centralized commercial service for any reason. If something is not possible to do in a decentralized fashion, that's just how it is.

You have the ability, and, I'd argue, a greater responsibility to not only your userbase but that concept of civilization that you seem to think is pointless because someone's just going to abuse someone else with spoken language anyway, to take this federated network and allow the users as well as the administrators of your instances to decide what level of engagement they would like to have with the network (in some cases, because these choices directly impact their personal safety), on perhaps a scale that's granular enough for you and your "vision" to be uncomfortable with it, because otherwise, you're no better than the "centralized commercial services" (and, dude, you have a Patreon that's generating almost 40 grand a year now despite a relatively small userbase, so while that's still very small potatoes compared to the billionaires, don't even try to say that your development of this software stands up to some anti-capitalist purity test), as you, like them, will still be failing the people who need your network most. If you can't hear out and, most importantly, compassionately address the needs of the most marginalized members of your network, it's no good for anyone, be it people who were here a few weeks after you went live like me, new users, any of the celebrities (or notable victims of harassment, which, hey, makes you kind of a predator in my book or at least a really creepy opportunist, but your mileage may vary) you tried chasing on other social networks, not anyone. And, yeah, you will fail just like your predecessors have failed for decades now. A network where knowing if people have suddenly experienced an uptick in hashtagging about Elon Musk or whoever is more important to the people running it than whether or not those people are safe is no network worth using.

@sssdddccc


sssdddccc commented Jun 2, 2018

Oh, and I just looked at something. Am I missing something here (it's admittedly possible, as I am not privy to all of your strange GitHub customs), or is "a lot of support" really a whole 31 people posting a thumbs up emoji (between 2 GitHub threads) over the course of a year and a half, out of a network of...it is millions now, isn't it?

Do you have any idea how dangerously reckless that is, if so?

@Laurelai


Laurelai commented Jun 2, 2018

I really wouldn't have said it like this, because frankly I don't do that anymore, but you aren't wrong. Structurally, the Mastodon project is functionally an autocracy. The income and attention Gargron and mastodon.social receive are forms of power, and I've spoken at great length about how these kinds of power structures create real material benefits for behavior that most see as bad. I think that Mastodon as a project needs more democratic oversight and division of power, with a voice given to victims of harassment. Otherwise it's going to be a constant uphill battle from now on to get you to listen and care about how your code is affecting other human beings.

@Cassolotl


Cassolotl commented Jun 2, 2018

I hate to interrupt, but yeah, when 31 people go out of their way to engage with the issue list and click the thumbs-up, that's usually a sign that a lot of people want it. The original issue is pretty old and 31 is still a lot! But Gargron has frequently told us that this isn't a democracy and people don't get bad, abusable features just because a lot of people want them, so let's just stop talking about how many people want this feature. If ultimately Gargron makes the call based on whether or not it seems like a good idea, then desirability is only tangentially relevant.

As for everything else, I feel like giving admins the power to customise their instance is a good thing. It's not like anyone is asking Gargron to turn off the federated timeline, but even if they were that's okay, right? Admins should be able to make a totally isolated mini-network if they want to, without having to break anything. We have to trust them, and even if they get it wrong they can change their minds and turn federation back on or whatever.

I guess, though, that you've said no to that because it is against your principles, and the worry I have then is that this HUGE project with hundreds of thousands of users and many many passionate contributors is still just Gargron's Pet Project: he can do what he wants, and if anyone else disagrees they can fork it. At what point is the ultimate decision-maker responsible for users' wellbeing? As soon as there is a conflict, it's clear that the answer is "never, as long as someone can theoretically fork it; I never asked for this."

Whitelist federation and the ability to turn off the federation aspect and trending tags will lead to more instances and more users. Growth is obviously very important but not at the expense of people.

I like the trending tags feature a lot! I'm probably one of the 31 thumbs-ups.

@Cassolotl

Cassolotl commented Jun 2, 2018

How does abuse via trending tag work? If someone wanted to use a trending tag to abuse someone, how would they go about that?

The reason I ask is because right now, with my very little understanding, if someone were to scour hashtags to find people to target, an admin turning off trending topics would be a protective act mainly for other instances. An admin wouldn't be able to protect their own members from abuse from other instances, and since they are a nice admin they wouldn't let their members target people on other instances anyway - they would pay attention to reports and such. I don't think being able to turn it off would reduce the risk for this kind of abuse, or reduce the work that an admin does to protect their members.

I can see that people might try to get a topic to trend using bots in order to bring it to the attention of more people - and in that situation we will STILL do better than Twitter or Facebook because our good admins on good instances will mute particular hashtags, block instances that have a lot of bots that post only hashtags, ban members on their own instances that are harassing people, etc.

So if someone could explain to me other ways that trending topics are abused on Twitter, I would appreciate it! It's not an area I know much about, and so far I can't see how being able to turn this off will make any difference, but folks are very passionate about it all so I must be missing something.

@lawremipsum

lawremipsum commented Jun 2, 2018

Multiple examples of the potential for abuse have been raised in this thread and elsewhere.

Rather than continue to offer them ad hoc, I just want to again make the broader point from a systems design perspective that failing to carefully and fully consider a proposed feature's potential for abuse or harm before implementing it is exactly how other social platforms managed to design themselves into abusive/harmful hellscapes. Successfully being an anti-abuse platform can't just mean "some tools," it must also mean a commitment to robust consideration of the abusive potential of new features—which doesn't mean waiting to see if people concerned about abuse manage to find where new features are being considered and tested and do that work on an ad hoc basis in a github issue thread.

It's neat, some people want it, and arguably it adds some value. At issue is whether the added risk of harm has been fully considered. I don't see evidence that it has been thought about much at all beyond the level of "we have blocking and moderation, clear to proceed," and I think as a project that prides itself on its anti-abuse features, Mastodon needs to embed that commitment into the design process or it's just bolted on for show (just like the other platforms) and not a genuine commitment.

@Cassolotl

Cassolotl commented Jun 2, 2018

Multiple examples of the potential for abuse have been raised in this thread and elsewhere.

From this thread:

It's federated, so anyone outside of your server could affect what your people see (to the extent that you federate with them), by gaming the trending.

Admins can mute hashtags and block instances that harbour people who game trending topics.

It will affect discourse, and instance admins may not want that and would therefore want to turn it off.

Not clear how? Also, it's not abuse.

Even if it doesn't directly mess up the chronological timelines more than boosts already do, it will affect it in the way of "people talk about what people talk about and talk more about it"

People can mute hashtags. In fact, with trending topics people are more likely to use the hashtags for their toots when they talk about what everyone is talking about, making them easier to mute than they currently are. (Provided we get a good keyword/hashtag muter, which is on the roadmap.) It's not abuse.

multiple admins want to

This can't be argued with. :D

it [the trending feature] is of arguable value

People here on other issues have argued that it would be valuable to them. Just because it isn't valuable to you, doesn't mean it's not valuable. It is a reason to make it a user toggle though! But it's not abuse.

it [the trending feature] alters the feel of a Mastodon instance and emphasizes/implements values that admins may disagree with; e.g., algorithmic trending topics is an anti-pattern that promotes things in a way that negatively influences communities

That's a reason to add a toggle, but it's not abuse.

A lot of admins do not want this feature as it contributes to the gamification of social media, something we left twitter to avoid.

This assumes that everyone on Mastodon dislikes Twitter and is not on Twitter, which is not true. It's also not abuse. There is this constant argument between "Mastodon is like Twitter (implied: good) but without the nazis!" and "we left Twitter to get away from this thing that Mastodon is becoming" and it's like, everyone has a line between "like Twitter (good)" and "like Twitter (bad)", and that line is in a different place for everyone. There is no objective line location! It's not even linear, it's like... a dynamic 3D matrix, or something.

Trending topics have never been well received on any network that's tried to implement them. There is no user-friendly way to do so, it's an inherently user-antagonistic system, it created echo chambers of opinion and offers nothing new.

This is presented as if it's self-evident rather than being a personal account. I don't know how to deal with that, because I'll bet there are no publicly available statistics on how well-received the trending topics on Twitter were, or how much people use them, or whether people like them, or whether people on Twitter feel antagonised by them, etc.

In regards to gamification if you have federation wide trending tags you create incentives for people to game that system to force a tag into trending status, it can and it will happen, which will cause whatever posts they are gaming to gain visibility.

This is not inherently negative, nor is it abuse.

It's a way to spread propaganda, lies, illegal content and other malicious postings. You will also incentivize tag wars where competing ideological factions will battle for supremacy over a tag. These are all things that happen on Twitter because they have a trending system. Same with Facebook; Facebook is in fact removing its trending feature.

We have FAR more admins per user than Twitter or Facebook, many of whom care about their communities and protect users from instances who don't. (We have 55 users per mod, and Facebook has 250,286 users per mod.) Facebook have calculated that the trending news feature causes more financial loss (in terms of business lost through governments getting pissed off, money spent on hiring moderators, etc) than it is willing to handle; we are not a company so we don't have that problem.

If you can't implement it safely then don't implement it at all.

You can't have anti-abuse tools by actively creating tools that are very likely to be used for abuse, without consulting people who know and understand anti-abuse.

I haven't seen anything that suggests this feature would be unsafe, considering the user and admin tools that we have to prevent abuse. I would want to see personal accounts of abuse that happened on Twitter or Facebook, and discussion of how that would be prevented or dealt with on Mastodon between those victims and their admins, in order to approach this argument. But no one has given a personal account of their abuse on Twitter or Mastodon here. The closest we get is...

Okay, so, hypothetical scenario: bots and/or humans scatter out and make #ReyIsAFox a trending topic. Clearly a lie, intended to slander my character. What is the plan for dealing with that?

Admins can mute hashtags in trending topics, can ban users who use it, can block users who use it and instances who harbour those users. Rey's account can be on an instance with a good admin.

It's just worrying that the only considerations for abuse right now seem to be reactive instead of preventative.

There is no way to predict abuse without using algorithms that everyone here would object to because they could be used with malicious or corporate intent far worse than anything trending tags would do. But as I said above, we've got 55 users per admin/mod AT LEAST, and they care about the users much of the time, and on top of that we are lucky enough to be able to choose admins/mods who will protect us.

"No, don't do something, there is strong historical precedent of this being a problem everywhere it's implemented, and a problem that tangibly hurts users"

When it goes unchecked on Twitter or Facebook, yes. But here? Again, we have more mods and better tools. This is a very different situation.

People who actually have experience dealing with these things, because they've been marginalized off half the wider Internet, at minimum, are telling you that this particular feature you're so jazzed about can be weaponized, and has been weaponized

I can absolutely see why people who have been victim to this on Twitter and Facebook are scared, and we should listen to their accounts of their abuse very closely, and respond with how Mastodon will prevent that happening again. But so far no one has come here and said "this abuse happened to me, and it happened like this. How would this go down on Mastodon?"

I think that mastodon as a project needs more democratic oversight and division of power, with a voice given to victims of harassment.

I strongly agree, but I also feel that those victims of harassment need to give criticism that is more constructive.


So okay, I'm going to try to think of a situation in which trending tags could be bad.

Some TERFs get a pro-trans-sounding hashtag trending, to lure in victims. I post in that tag, and because my post is at the top and gives off the right vulnerability vibes, TERFs dogpile me. When I push back someone makes a bot that makes new accounts on several instances that send me vicious abuse.

  1. I put my account in Bunker Mode, which is on the road map - it makes it look like my account is deleted, so my toots are not visible to non-followers.

(I can remove bad notifications and DMs that have already happened by blocking and reporting, but does it block new notifications and DMs from non-followers?)

  2. I tell my admin about the abuse. I report the worst toots but don't have the energy to look at the rest, I just clear my notifications. I block everyone obvious.

(Can my admin view all public, unlisted and followers-only replies to me without me having to report everyone? I understand that admins can't see DMs unless I report them, and I agree with that, but can I turn off DMs from non-followers if I'm too tired to report or block them all? Can I delete individual DMs and notifications if I am too exhausted to block or report? Can I mass-report or mass-block?)


I think the answer to too many of my questions here is probably no. But the need for these tools is not unique to the manipulation of trending tags. This situation will happen without trending tags. Gaming trending tags would be harder for TERFs and misogynists to use to target vulnerable people, because TERFs can find and lurk in hashtags that trans people created themselves.

@Cassolotl

Cassolotl commented Jun 2, 2018

From #271 (comment) onwards:

My fear is that it will lead to bots trending (#bbc).

Admins can mute hashtags.

I saw that we also have this number: "total times the hashtag has been used during each day, and by how many unique people" and I'm thinking: wouldn't the number of unique people using the hashtag be a better measure of trending?

This has been implemented.
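
As an aside on that unique-people metric: here is a minimal sketch of what such per-day counting could look like (hypothetical helper name, not Mastodon's actual implementation), which shows why it blunts single-account flooding:

```python
def daily_tag_stats(posts):
    """Per-hashtag totals: (times used, unique authors) from (author, tags) pairs."""
    stats = {}
    for author, tags in posts:
        for tag in set(tags):  # a tag repeated within one post counts once
            rec = stats.setdefault(tag, [0, set()])
            rec[0] += 1
            rec[1].add(author)
    return {tag: (uses, len(authors)) for tag, (uses, authors) in stats.items()}

posts = [
    ("alice", ["art"]),
    ("alice", ["art"]),
    ("bob", ["art"]),
    ("spambot", ["bbc", "bbc"]),  # one bot spamming a tag
]
stats = daily_tag_stats(posts)
# → {"art": (3, 2), "bbc": (1, 1)}
```

Counting a tag once per post and tracking distinct authors means one bot posting #bbc over and over still registers as a single unique person, while a tag used by many distinct people stands out.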

when would we ever want bot tags to trend?

I said: "If their hashtags varied over time, maybe? I know there are newsbots that automatically include tags from articles, and there's usually a few, and they vary from article to article. I'd probably want to see those articles posted by bots in the hashtag searches and trending topics."

Since we're being asked to comment here on the validity of the feature itself, I think people failed to realize what an exploitable vehicle for harassment this is.

Examples of the kinds of abuse that can happen with it are rampant on Facebook and Twitter, and I'm not sure why anyone thought this would be a good idea.

Again, until you give us examples we cannot analyse this and decide whether trending tags is a bad idea.

Admin control over tags is not enough as then you leave it up to each instance admin to moderate out things that incite targeted harassment ...

I cannot stress this enough, in a system with 55 users per mod, it is entirely possible (and very important) to choose an admin you trust.

... and if they are willing, by the time it's sorted it might be too late

That's why I am very much looking forward to "Bunker mode" #7132. Perhaps with the addition of trending tags, if Gargron doesn't deactivate it (and again, this is a release candidate), Bunker Mode should be the top priority so that it's finished and excellent before trending tags goes into a proper release.

they can also abuse those tools to pick and choose things that aren't harassment/evil content just to manipulate the userbase.

I say again, you must choose your admin. If it becomes apparent that an admin is manipulating your trending topics for their own personal gain and agenda, due to the nature of federation you can leave your instance and take your follows with you. This is a strong argument in favour of Support account migration #177.

Plus the admins might be party to the harassment, or indifferent.

Admins have the tools to block entire instances. If they are indifferent then the user can leave. See Support account migration #177.

There's entirely too many "free speech" instances that would use this feature to harm others while the admins stood by and did nothing.

I've been a victim of cyber harassment mobs on and off for years, and this is the kind of feature they love to see on websites because they can use it to hurt people.

Trending topics are abused on every platform they have been implemented on

How would they do that? How have they done that? I am not being rhetorical; we have to discuss actual tactics that are possible and easy if you put the work in, so that we can know for sure it's not possible or practical to mitigate it, before removing this feature. As a victim of past harassment by mobs your testimony is vital.

It's all fun and games until some psychopath forces your dox into the trending tags.

This is a good example of the kind of behaviour we need to expect and that has happened before. @Gargron, how would admins be expected to deal with a very dangerous dox of a vulnerable person in trending tags? Once we hear back, victims should respond with whether or not that's good enough, and why.

As an admin I don't want to babysit federated trends for abuse, propaganda, or other unwanted content, which is among the reasons I asked for an off switch in issue #7702.

Legit! It's a reason to implement an off-switch to prevent outgoing abuse, but an off-switch would not prevent incoming abuse. So concern from an admin who isn't willing or able to police that kind of abuse is a legit argument against implementing trending tags.

Importantly it doesn't help any user discover users or toots relevant to things they are already interested in. It only draws focus to things that are already (becoming) popular. That is, it doesn't solve a problem that needs to be solved.

This is not a reason to not implement trending tags; it only points out that trending tags don't serve a need they were never trying to address.

it should be off by default and opt-in on the instance-level to whatever degree that is feasible.

Evil admins will activate it very easily so I don't see how that is helpful.

People will use this for publicity and trying to build up their followers much more than they will use this for engagement.

This is not abuse. It is in the same category of unpleasantness that I've seen Gargron object to in the past, eg: he doesn't like when people crosspost politics and whatever other Twitter crap unCWed to unmonitored accounts.

We already can search hashtags and explore that way, why should we add this?

I like Trending on Twitter because if I want to play on Twitter but don't have anything to say and my timeline is slow I can bimble over to Trending and have a nosey at what other people are talking about. They're usually outside of my usual timeline topics too so that can be interesting. I have learned a lot about other perspectives and life experiences in Trending on Twitter. (Often it's corporate or bollocks though, but I don't see how that can be prevented, and the enjoyment I get outweighs the bollocks for me personally.)

People who don't like Trending on Twitter can simply not go there, and they can unfollow people whose tweets are too attention-grabby or current-events. The only difference with Mastodon is that that crap will end up in the federated timeline - but it will all be with the hashtag du jour, making it very easy to block with keyword filters. So this is an argument for Feature request: Mute by keyword #1158, which is marked high priority and is on the roadmap.

Plus as it stands switter will absolutely dominate the trending tags on most days. I like switter, but it might draw the wrong sort of attention to them.

Gargron says: This is not true on technical grounds. If a hashtag usage is constant, it won't trend, no matter how high the volume is. Only things that rise unexpectedly appear as trends.
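
Gargron's point here (flat volume never trends; only an unexpected rise does) can be illustrated with a simple baseline-ratio score. This is a hedged sketch of the general idea, not Mastodon's actual algorithm:

```python
def trending_score(history, current):
    """Ratio of current usage to the tag's recent baseline.

    `history` holds usage counts for past intervals; `current` is the
    count for the interval being scored. A tag with flat volume scores
    ~1.0 no matter how busy it is; only a sudden rise above its own
    baseline produces a high score.
    """
    baseline = max(sum(history) / len(history), 1.0)  # guard against division by zero
    return current / baseline

steady = trending_score([500, 500, 500, 500], 500)   # busy but flat → 1.0
spiking = trending_score([5, 5, 5, 5], 120)          # quiet tag spikes → 24.0
assert spiking > steady
```

Under this kind of scoring, a constantly busy instance like the Switter example scores about 1.0 no matter how high its volume, while a quiet tag that suddenly spikes scores far higher.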

it’s just going to add a way for people to try to manipulate the program into promoting their pet cause whether it’s targeted abuse or social engineering.

Again, this is a question for Gargron and the admins. How would this be prevented?

It’s not going to help people discover new things or users

That is the exact opposite of my experience of Trending stuff on Twitter.

it’s not going to make timelines easier to follow

It's not trying to solve that problem.

Sock puppet accounts are already easy to make on Mastodon, and this is going to amplify their power and create a mostly unblockable harassment vector.

Admins can block users and entire instances. Will people make sock puppet instances, in quantities that admins can't deal with? That needs to be discussed.

In my experience trending hashtags are only mentioned when they are used as advertising or harassment vectors.

That is not my experience, although sometimes I have to block a lot of spammers in consistently popular hashtags that are not trending on Twitter. Trending hashtags DO contain spammers, but certainly not only spammers.

Also people on mastodon don't see exactly how the trending tags are determined which hurts the transparency about how mastodon works which is one of the best features.

Transparency is easy to add. A link saying "how does the software choose trending tags?" somewhere, with an easy-to-understand description, would be great.

I'm imagining a scenario like the one Twitter got into in the 2016 election: sockpuppet accounts creating astroturf trends and influencing conversations, only this time it's distributed across federated instances and no one admin can plausibly ferret out that the manipulation is in fact coming from a single source.

If the single source is an instance then that's easy. If it's multiple instances, perhaps shared instance blocklists for admins is the natural solution?

The broader point is that it creates new incentives and rewards for unwanted behavior, some of which hasn't even yet been contemplated, much less considered how it might be recognized and countered.

We can't prepare for scenarios that we can't contemplate.

Heck, it was barely more than a week ago that the entire mastodon network was flooded with sockbots from who knows who, dozens of which are probably sleeping across the fediverse right now. This tool seems like it is perfectly designed for them to manipulate.

How would they manipulate it? And, @ Gargron and admins, how would this be prevented?

Gargron responded: "I've gotta say I have not heard of trending topics playing any influential role in the elections. There were accounts spreading misinformation to their followers, who were in turn spreading it to theirs; there were ads that pointed to fake news, and a large number of sockpuppets replying in threads. But the end goal was afaik influencing people through those direct means, not changing which trend appeared in a sidebar. That seems like a much more serious issue, because those people can pretend to be anybody, like Antifa or BLM activists, and gather real followers for subtle campaigns. At least if your problem is trending hashtags, it's super simple to see when one is politically motivated, and then you can ban it and see who's posting under it in one swoop."

A lot of this, then, is related to full-text search on Twitter. If sock puppets are targeting trending tags specifically then admins can block users and instances harbouring those users, but if the scale becomes very large then, again, shared blocklists for admins becomes more important.

I believe the fake trends issue was more prevalent on Facebook, or at least that's where I saw it most, where bots or motivated groups shared and liked false or misleading articles that then got propagated to the rest of the network via the "Trending" box on the right side of everyone's screen.

This is a reason to not factor likes and boosts into Trending.

How come I get a feeling that what tags are trending is controllable on an instance level, meaning people can choose certain tags to trend and silence other tags. Even if turning off trending for a short time. This is what happens on Twitter and Facebook, trends they don't like get silenced.

On Twitter there is no other option, but on Mastodon if it becomes apparent that an admin is doing this then users can switch to an instance with a trustworthy and reputable admin. But it must be easy to migrate: Support account migration #177.

From my habits on Twitter or Mastodon, I see what's trending when I see the people whom I am following speak about it, not because of a trends list.

This is one experience and one way of using it. I use it differently.

a harassing admin could put their hashtag promoting harassment

Users who see this happening can leave the instance and choose a more reputable admin. Other instances can instance-block that admin's instance.

@Cassolotl

Cassolotl commented Jun 2, 2018

PS: If anyone would like me to anonymously post their personal accounts of harassment and abuse experiences related to trending topics on Twitter, please feel free to DM me on Mastodon: @cassolotl@dev.glitch.social

@Cassolotl

Cassolotl commented Jun 2, 2018

Saw this toot and agreed with it:

getting someone on your team that is an expert on abuse/harassment and internet stalking tactics should be of high priority to you, imo. it would help with a lot of the issues you’re facing presently.

They could give a broad and contextful and impersonal presentation of a lot of useful and relevant information and the pros and cons of various solutions that have been tried. That'd be MUCH better than making victims of abuse and harassment go through the awfulness of recounting their bad experiences.

@emsenn

emsenn commented Jun 2, 2018

Another method of abuse - that I haven't seen mentioned, apologies if I missed it - that happened in the real world is government enforcement surveilling trending hashtags to locate dissidents. This is best evidenced happening on Twitter in 2009 during the Iranian "Green Revolution", and there is evidence suggesting it also occurred in several states during the Arab Spring. Additionally, American participants in the Occupy movement were targeted and subsequently doxxed by people scraping trending hashtag data.

@Cassolotl

Cassolotl commented Jun 2, 2018

@emsenn Wow that is all horrible. :( Was it specifically trending tags that got monitored, or just specific tags that may or may not be trending?

@emsenn

emsenn commented Jun 2, 2018

Relevant tags were discovered by looking at what was trending; i.e. the Iranian state looked at what was trending on Twitter to learn what hashtags were being used to discuss the Green Revolution, and then began working through those tagged posts to find identities.

@Laurelai

Laurelai commented Jun 2, 2018

Also nazis, terfs and other abusive people will lurk tags created by vulnerable people and use it to find targets. This has happened on Twitter repeatedly, any time trans women make a tag that becomes trending.

@Cassolotl

Cassolotl commented Jun 2, 2018

@emsenn I see! Thank you for clarifying. Also, yuck. :(

@SelfsameSynonym

SelfsameSynonym commented Jun 2, 2018

Since it looks like Gargron's made up his mind that he's just going to pay lip service to being against abuse but won't actually listen to anyone who knows from experience on other platforms that this is a bad idea, maybe he should work to really distinguish Mastodon from other platforms by streamlining the whole thing and just creating an "abuse this user" button that automates the process.

@Gargron

Member

Gargron commented Jun 2, 2018

Relevant tags were discovered by looking at what was trending; i.e. the Iranian state looked at what was trending on Twitter to learn what hashtags were being used to discuss the Green Revolution, and then began working through those tagged posts to find identities.

The feature being discussed doesn't use any data that a government couldn't collect from the stream of public posts, TrendingBots and "mastodon hashtag explorer" app already do this. In the past few days I have read a lot of research papers on trend detection in social networks, and the studies simply use raw tweets from the firehose, they do not rely on the Twitter trends API (obviously, since those papers are about different algorithms). So that issue is orthogonal to the presence or absence of trends in the Mastodon API/UI.

Also nazis, terfs and other abusive people will lurk tags created by vulnerable people and use it to find targets.

But I think this happens without trending tags, too? And mods are already supposed to deal with it. Trends create an additional incentive for trolls, but a lot of social features are double-edged like that. If this conversation was happening 2 years ago, the same arguments could have been used against adding a public timeline to Mastodon (in fact, I think I heard these arguments then...). But public timelines were also a massive success that allowed Mastodon to establish itself and grow, and the local timelines became the vehicle for interest-based communities to converge around specific servers.

On the other hand, 3 hashtags in a box doesn't seem like a hill worth dying on. There's a chance that it would massively increase user onboarding and retention by giving new folks an immediate view into "what's happening", and even for me as a seasoned user they were fun & illuminating so far, and there are definitely ways to tweak the implementation to address most of the abuse issues, but if everyone is determined to throw out the baby with the bathwater, so be it.

@rtucker

Contributor

rtucker commented Jun 2, 2018

There is a HUGE difference between the (public, hashtag'd) data being available for third-party clients to analyze, and the output of an algorithm being blessed with a front-and-center view in the official web UI.

Until you can understand this, and what this means to the community of people who left Twitter to get away from the louder-voices-win culture, maybe you should not be defining the roadmap based on Hacker News thinkpieces.

@Laurelai

Laurelai commented Jun 2, 2018

Gargron, you really aren't understanding, and I'm starting to think you don't want to understand. There's a huge effort difference between trending tags right in your face and having to expend a lot of effort manually to find what tags are trending. A lot of malicious people are stopped by such a simple effort barrier. This is reality. All of the hypothetical what-ifs you can think of do not matter, because this is how humans actually behave in the real world.

@Gargron Gargron closed this Jun 2, 2018

@rtucker

Contributor

rtucker commented Jun 2, 2018

What is the PR or commit ID addressing the requested feature, or the reasoning for not implementing it?

@Gargron

Member

Gargron commented Jun 2, 2018

so be it.

@Cassolotl

Cassolotl commented Jun 2, 2018

Just to be clear, because people seem confused about what you mean - does this mean you will be removing Trending Tags from Mastodon?

@MatejLach

MatejLach commented Jun 2, 2018

I don't normally participate in these kinds of discussions, but as a member of a vulnerable group myself, (disabled, which this would presumably affect), I feel like I have something to say.

I do see good intentions behind this issue, but when people like @sssdddccc say:

So... I'm seeing a bunch of people tell another person planning to do a bad thing that it's a bad thing, and in response, I'm seeing the other person come up with any half-baked excuse they can for why they're just going to do it anyway,

I lost interest in what they have to say. Look, the very fact that there is a desire from a part of the community that wants this feature means that it is not universally accepted to be a bad thing. The very fact that you assert it as such means that it is you, not @Gargron who is not open to a rational discussion and just wants to impose their vision in a quite authoritarian way on a project that isn't theirs.

In your desire to help vulnerable people, you're actually empowering labels such as the "authoritarian left" by behaving like this and employing emotional blackmail, such as:

because it's fun to pull wings off insects to see what happens" or whatever is behind this obtuse, downright unethical type of thinking

is really driving off the cliff here with the rhetoric. If you're trying to help people, this is not a way to do it. It burns out @Gargron, it stresses out the community, it empowers the extreme right and still, does not achieve what you want.

I do not agree with every decision ever made by @Gargron, but I recognise that it is still his project and he has the right to impose his vision over it. Moreover, I do recognise that what I want may not be what the whole community wants and I must therefore make solid arguments as to why my proposal is the way to go.

Some have been made here, but amidst the personal attacks and emotional blackmail employed on @Gargron, they lose credibility.

If you fail to understand that your personal opinion does not represent the objective view of morality and that you do not just deserve an audience, you have to win it over, then I am afraid you will not achieve your goals and will not win anybody over.

EDIT: @Gargron I just saw your pull request to remove trending hashtags. I do understand that this is mentally taxing on you, but if it does not represent what the vast majority of the community wants, or you feel strongly that it isn't right, do not be pressured into a position you do not believe in.
I'd suggest running a strawpoll on this, but I suspect that the most active people right now are these that feel strongly about not having this feature, which would skew the poll significantly.

@tootsuite tootsuite deleted a comment from sssdddccc Jun 2, 2018

@strypey

strypey commented Jun 2, 2018

I just want to remind everyone that you don't have to run @Gargron 's exact version of Mastodon on your instances. I'm not even saying fork the project (although you're free to do that if you don't like how @Gargron runs things), just fork each release version, and apply your own set of patches that adds (or removes) features as you like it. I haven't read this whole thread, but I've seen people verbally attacking @Gargron, including calling him a "predator" which, ironically, is exactly the kind of out-of-line verbal abuse they are claiming to oppose. This is no way to treat anyone, let alone someone who is giving away software for your free-of-charge use.

@Sixthhokage1

Sixthhokage1 commented Jun 3, 2018

Also nazis, terfs and other abusive people will lurk tags created by vulnerable people and use it to find targets. This has happened on twitter repeatedly any time trans women make a tag that becomes trending

Yes, content discovery features can be abused. This is why moderation is so important. But demanding that we not make things easier for people to find, just because bad people can also find things to abuse, is shooting yourself in the goddamn foot.

@tootsuite tootsuite locked as too heated and limited conversation to collaborators Jun 3, 2018
