Week 9 Readings/Viewings and Discussion #32
Replies: 15 comments 11 replies
-
Although I agree with Facebook that the issues discussed in The Social Dilemma are “difficult and complex societal problems” that the company is not entirely responsible for causing, I find many aspects of their argument flawed. Facebook argues that the documentary “buries substance in sensationalism” and that none of the people featured in the documentary currently work at these companies, and therefore do not know what changes have been made on the inside. However, I believe that one of the reasons none of the people featured still work at these companies is that the environment at big tech companies is cutthroat toward whistleblowers. Big tech companies are notorious for practicing ageism and predominantly employing adults under the age of 30. Young adults are typically less stable in their lives, with new homes and student loans to pay off, making it more difficult for them to take risks and speak out against their employer. Speaking out against these companies means risking their job and stability to point out flaws, and often the issues being raised are forgotten by those at the company, making it seem even less worthwhile to pursue. For example, Tristan Harris pointed out the addictive design at Google, and his concerns were quickly forgotten after he shared them with his coworkers. In addition, a recent example of big tech companies firing employees for calling out unethical products is the firing of Timnit Gebru and Margaret Mitchell at Google. Google hired Gebru to critique unethical AI at the company; however, she was fired for doing exactly that and was not given an opportunity to discuss the research she published.
The first point Facebook makes is that they “build [their] products to create value, not to be addictive”; however, I believe it does not take intent to make a product addictive for the product to actually be addictive or have a negative impact on people’s lives, just as it does not take racist intent for technology to perpetuate racism and amplify negative social biases. In addition, I have an issue with Facebook’s argument in their second point, “you are not the product.” I find it interesting that they did not deny the accusation from ex-Facebook employee Tim Kendall when he discussed “dialing up” Facebook advertising and marketing at the command of executives such as Mark Zuckerberg. I also find it interesting that their argument states, “we don’t share information that personally identifies you unless you give us permission.” Giving Facebook permission to share personal information and data is the default selected option when an account is created. Although I agree that people have more of a choice to opt out of data collection than The Social Dilemma portrays, being opted in is still the default. The lack of technological literacy in society is a problem; however, Facebook is taking advantage of people’s lack of education and awareness about managing their user settings. In response to the third point Facebook makes on “algorithms,” I find it interesting that the statement also did not specifically deny the black-box nature of their algorithms and AI, which only a few people at big tech companies actually understand. In addition, I also have an issue with the line stating, “Portraying algorithms as ‘mad’ may make good fodder for conspiracy documentaries, but the reality is a lot less entertaining.” The statement exaggerates the intent of The Social Dilemma by indirectly labeling it a “conspiracy documentary.”
Facebook’s statement on the documentary’s portrayal of algorithms fails to address the real conspiracies that Facebook has actually played a major role in amplifying. For example, Facebook helped amplify the “Pizzagate” conspiracy discussed in the documentary, and the company does not deny its role in significantly spreading it. Facebook is trying to reap the rewards and acquire praise from the public when they manage to make the right choices; however, they are also downplaying responsibility when things go wrong. I believe that in the cases where they are wrong, “lack of intention” is not a viable alibi. However, I found their arguments on polarization and elections more convincing. I was also surprised to see they are addressing misinformation with 70 fact-checking partners globally, more than other social media platforms. I enjoyed the documentary. Something I found interesting in it was how companies in Myanmar preload Facebook onto smartphones. The app is the first app that many people learn how to use; however, it gave the military and bad actors a tool to incite violence against the Rohingya Muslims and helped enable crimes against humanity. I believe that people have been influenced by outside factors since the beginning of humanity; however, technology and social media platforms have amplified the ability of big tech companies to influence us at an unprecedented, out-of-control scale, and society is currently not equipped to handle it.
-
Something I found interesting in the documentary The Social Dilemma was how one of the designers at Facebook responded when asked about the initial intentions behind his designs. He said that when they first created the “like” button, their goal was to spread love and appreciation for others around the world; never in a million years had he expected that like button to destroy teens’ self-esteem and produce the competitive society we live in today. This remark really resonated with me because it connected to our discussion on neutral designs, and whether technological products can actually be neutral or will always be biased. This is a great example: the like button started out as a positive product designed to spread love, but ultimately became toxic, negatively impacting the mental health of so many people around the world. How does a design flip from such positivity to such negativity? Is it the people who use it that ultimately lead the product to a certain bias, or was it an underlying bias this entire time that no one could foresee? Continuing with the idea that users lead the product to a certain bias, another Facebook developer brought up the interesting point that he does not let his kids use social media until they reach high school, or at least the age of 16. It is well known that pre-adult minds are the most malleable, meaning whatever they are using, learning, and playing with will alter their minds in different ways; with social media, it alters their minds in a negative way. I think it is a smart idea to prohibit the use of social media before this age so that pre-adolescent minds are able to form their own opinions and self-confidence prior to being deconstructed by social media.
If adolescent minds can be affected by the use of social media (i.e., higher levels of depression, fewer communication skills, lower levels of self-confidence, etc.), social media is then also affected by these repercussions, and the cycle never ends. Adolescents look to social media to raise their dopamine levels and feel the high that comes with receiving one more like on their Instagram post, causing social media to become an addictive drug. Proving which of these two events (the effect on these impressionable minds or social media becoming negatively biased) came first will be difficult, as they are now interwoven into a complex downward spiral. The only thing that I believe will be able to slow down this spiral would be to follow this developer’s idea of preventing those under the age of 16 from using social media. In all, I thought the documentary The Social Dilemma was incredibly interesting and eye-opening. It is fascinating to hear from so many ex-Facebook or ex-Google developers and upper-management employees about how they are now fully aware of the negative impacts their companies have created in society, and that they know this is a problem that needs to be solved sooner rather than later. As they say in the documentary, our world will only continue to become more divided and polarized, with algorithms and artificial intelligence spreading individualized fake news to every person until no one knows what is right or wrong anymore. I think it is the job of our current generation to begin solving this issue so that these negative impacts do not continue on to future generations.
-
To answer question 3, Facebook's 7-point response to the film convinces me that Facebook has done and is doing something to try to address the issue, in contrast to the documentary's portrayal of the company not doing anything about it. However, it fails to convince me that Facebook really cares about doing it, since it comes across more as an "I have to do it because otherwise I'll get into trouble" type of attitude. The response feels more like a defense of the company than an attempt to address the issues at hand, since most of it is about having some service/page that people can use if they really care about it (more passive/neutral solutions rather than acting actively against the problems). Additionally, the documentary is much more convincing because it comes from individuals who held some sort of status/role at Facebook or other big tech companies, while the 7-point response comes from the company as a whole rather than from specific executive-level members of Facebook. The article also wasn't written with in-depth facts or detailed evidence to back up its arguments, making most of it feel like unsupported claims to me. I will say, I have never seen this "Ad Library" Facebook mentions and have no idea how to access it. I also fail to see Facebook's enthusiasm for properly addressing this issue, given that the Facebook CEO hasn't even watched "The Social Dilemma," as noted in an article shared by the professor a couple of days ago. I would personally feel like I could believe more in Facebook's potential to address the issue if executive-level people were actively treating these problems as an issue.
-
Honestly, reading the quote “if you’re not paying for the product, you are the product” makes me feel many different ways. On one hand, I am not surprised by it and have learned to either accept it or not pay attention to it. On the other hand, it makes me feel violated, upset, and somewhat scared. I am not surprised since social media companies, in my mind, do not put the needs of their users first; rather, they put their own needs first, which include money, power, and prestige. Rather than creating a caring relationship between themselves and the user, they create a one-sided (almost parasitic) relationship that favors the social media platform. I think growing up in this “new-age” digital world, which has transformed from the information age to the disinformation age, has made me (and others) more complacent when it comes to the negatives of social media. Some people are unaware of the potential negative aspects discussed in the documentary; however, those who are aware have, I believe, chosen to ignore them. Many of my peers and friends know the negatives of social media and the risks of using it, yet they continue to use it. If I ask myself why this is the case, I am left puzzled. However, if I had to give an answer, I would say people continue to use social media, knowing the negative things associated with it, because as a society, if you are not using social media, you are immediately looked down upon. People who do not use social media are seen as less intelligent, friendless, lumped into the conspiracy group, or old-fashioned (which is also seen as negative). Social media platforms have created a metaphorical vacuum that keeps people suspended in the social media bubble, unable to escape due to social pressures. Being classified as the product makes me feel violated since I did not agree to be used for someone else's gain.
I feel unvalued since the goal of the company is not strictly to be a service, but rather to mask its other goals behind being a service. I understand that a free-to-use company has to have some business model to make money and afford to provide the service free to its users. However, Facebook specifically profits well beyond the threshold needed to provide its service for free, and this is where I have a problem. This is where the issue of users being the product comes into play; in other words, the company has satisfied the monetary needs to provide the service free but continues to ramp up its money-making models, using users for its own gain. I have no quarrel with a company that requires users to pay to use the service since, most likely, it will not use its users for such money-making schemes. There are other ways a free-to-use website can make money. The two that come to my mind are accepting donations from users and creating exclusive content that requires payment to access. Additionally, I believe that if Facebook spent the same amount of time, energy, and resources developing a way to make money without ads as they do improving their ad revenue system, they would come up with something just as profitable. You might ask why they don’t do this; well, I would argue they do not because it is more work, less efficient, and less profitable for them.
-
I have long believed in the quote "if you are not paying for the product, you are the product," since it is a natural extension of another quote that I hold to be a first principle: "there ain't no such thing as a free lunch." There is a cost to everything; it's just a matter of whether that cost is perceivable or worth it to you. While I think parts of the documentary are overly sensationalist, just as Facebook argues, the narrative did correctly present the root cause of the problem as the advertisement-driven business model. The ad-driven model allows the existence of "free" websites/digital resources through a privacy/data trade-off that many consider trivial, because the average user does not understand just how valuable data has become to the ad-driven business model. A viable alternative business model that I support is the subscription model, which I think is an improvement over the advertisement model since it realigns the product's financial incentive toward maximizing its long-term value to the user, so that the product is worth staying subscribed to past its first week/month/year. I remember people bemoaning the rise of the software subscription model as the reason the good old days of owning an app seem to be fading away, but I see it as a fitting solution to the ugly ad-driven model that threatens to turn everything into a data collection scheme. The cost of high-quality software is rising as software continues to eat the world, so it only makes sense to compensate software makers appropriately so that the industry has a financial incentive to improve the product for paying users. Another potential business model for free websites to remain free is public/private funding, similar to how Wikipedia is funded by a non-profit that gathers donations from everyday folks as well as from big tech companies that have become dependent on it.
-
One quote from The Social Dilemma really stuck out to me - Tristan Harris, a former Google design ethicist, argued that “we’ve moved away from having a tools-based technology environment to an addiction- and manipulation-based technology environment.” I agree with this notion that we pay for the technology we use with our attention and time, and because platforms’ profits are driven by these things, there is no financial incentive to limit user engagement. It’s frustrating that the technology we use always seems to have an ulterior motive or consequence, whether intended by the developers or not. When thinking about what caused the change from a tools-based technology environment, the best answer I could come up with was that we used to have to pay (and still often do) for tools that aren’t actively trying to keep us engaged. This raises the question of whether it’s better to pay with money or pay with our time and personal data. On one hand, the fact that we have so many powerful platforms and applications for “free” is highly convenient, allows us to easily connect with one another, and provides access to endless amounts of information. However, as outlined in The Social Dilemma, it’s hard to ignore how much of an impact these platforms and technologies have on our society, especially regarding their addictiveness and ability to rapidly spread misinformation. Several developers and managers in the documentary pushed for more regulation as a potential solution, although I wish they had expanded more on their ideas. At the very end, one individual proposed a tax on data collection to discourage companies from gathering every bit of information they could on their users. I like this idea because it seems to give users some semblance of ownership over their personal data, and I would be interested in hearing others’ perspectives on it.
-
I believe there is a fine line between using algorithms to improve your experience and using algorithms to manipulate you. As we've talked about many times in class, some technologies, while potentially harmful, can also be used for great good. For example, doctors examining a patient's DNA to see if they would respond to a certain treatment could easily turn into insurance companies exploiting that technology to charge higher premiums to those they regard as more "unhealthy." Understanding the context in which a technology is used is necessary. Likewise, the recommendation system, when it was first developed, was used to help people find the next thing to watch. This idea isn't new; before recommendation engines, people would read book reviews or see what their local librarians thought was going to be the next big read. There is so much content out there that sometimes half the battle is getting people to know your content exists. While I initially didn't consider the fact that Netflix also developed and used an algorithm to recommend shows based on your past viewing, I don't believe it is fair to place YouTube and Netflix in the same category in regard to how they use their recommendation engines. I also believe Netflix should have disclosed that they had a hand in the development of recommendation systems, because good journalism should disclose potential conflicts of interest in its reporting (as a frequent listener of NPR, I hear that happen a lot). But in The Social Dilemma, the main arguments it brings up concern the "unintended consequences" of the technology. After all, the person who made the like button for Facebook didn't realize that it would create the issues it has. However, as far as I know, Netflix's recommendation system isn't causing massive issues the way YouTube's recommendations have. On the other hand, anyone can post essentially anything on YouTube.
Thus the content is loosely moderated (mainly due to the sheer number of videos being uploaded), and the quality is substantially lower (on average). As a result, it is really easy to fall down a rabbit hole of a certain genre of video, which has radicalized many people toward various points of view. While yes, both platforms aim to keep you using them, Netflix relies on a monthly subscription, doesn't need to be run on ads, and relies on you seeing all the stuff you could watch so that you keep paying them every month. YouTube, meanwhile, needs eyeballs on advertisements to make money (for the most part), so it must constantly show you something that will ensure you click on it. So yes, Netflix should have disclosed their use and invention of recommendation algorithms, but I don't think there are unintended consequences from Netflix's recommendations, which is very different from YouTube.
-
While I found The Social Dilemma to be an interesting watch, and it certainly hit accurately on topics of social media addiction and the division online speech can cause, I also believe it dramatized our current use of social media and relied more heavily than it should have on a handful of former tech employees' testimonies in order to make a case against social media. Facebook's 7-point rebuttal mentions this small sample size of testimonies to try to combat the accuracy of The Social Dilemma, and while I agree that the film could've used more research, I don't necessarily agree with Facebook that the group of ex-employees was misrepresenting the tech companies they were talking about. In my opinion, the way the film was produced – using a few former employees with homogenous opinions, acting out a whole side story imagining devastating consequences from using social media, and generally tailoring the film to be entertaining rather than informative – is a sign of the very issue it's warning about. The film warned about the downsides of predictive algorithms and addictive design to scare viewers, all while the film itself was being produced by a massive corporation that engages in the same tactics to draw in its own users. Just like the social media companies The Social Dilemma is railing against, Netflix designed its production to be less objective and informative on the topic and more sensational and scary in order to draw viewers in (much like real news versus fake news, the latter spreading much better on social media platforms). Netflix also gathers user data to predict what shows a person will like (just like YouTube promoting more and more radicalized videos in users' suggested feeds), and its interface is designed to auto-play trailers and next episodes of shows to encourage users to watch more (again, just like endless social media feeds).
All this considered, it seems like this film is a really effective way for Netflix to promote itself as being on the "right side" of the fight for privacy and Big Tech regulation while pointing the finger at other companies and avoiding its own ethical failings. Facebook's 7-point rebuttal is the same way. All of the increased privacy measures they insist on describing in their rebuttal aren't as straightforward as they seem. For one thing, Facebook fails to mention how their data "transparency" doesn't translate to the other platforms they've acquired. In fact, when chatting with a friend I know who does software development for Instagram's advertisement department, they explained that data from your Instagram account automatically gets sent over to Facebook to inform their advertising on you, although Facebook data does not get sent to your Instagram account. They explained that this was due to a discrepancy between the two platforms' user agreements. So, even if you think you've taken all these protective data measures on your Facebook account, they are still sourcing data on you in other, more secretive ways. Overall, it makes me distrust the privacy measures Facebook does point to in their 7-point rebuttal, because they seem to be a crutch the company uses to argue that they've changed and to distract from the remaining privacy violations they engage in.
-
One thing I find particularly interesting in the Facebook article is the way they crafted their arguments. Facebook noted that they added time-spent features to their applications; however, this was months after Apple had already released this feature with iOS 12. As Apple products are used by many, this update was not as revolutionary as they make it out to be. Additionally, they say that they do not “benefit” from misinformation, despite the fact that any content users create or any interactions users have with the website generate more revenue than reduced interaction would. If misinformation increases user participation or user interaction, it results in profit for Facebook. The article states that the algorithms encourage “meaningful” interactions, but this does not reflect how users actually interact with the platform. If people are spreading misinformation within groups, the news feed algorithm becomes less impactful, as users can sort posts by time. Over the past year it has become clearer just how much social media has served as a breeding ground for conspiracy theories, and Facebook has also proved itself to be a safe space for conspiracy groups. Additionally, the argument surrounding algorithms was particularly interesting because it both minimizes the impact that Facebook is having and underestimates the reach and ability of artificial intelligence. It almost seems to describe the algorithm as some basic combination of if-else statements rather than a complicated, black-box decision process that guides users through the platform. The concern arises from the extent of the information collected to make that decision and the manipulative intent behind the decision to begin with. Additionally, Facebook underestimates the impact of the data they have collected on their users.
I’d argue that while Facebook may claim they collect less data than the pessimistic interviewees of The Social Dilemma suggest, the manipulative end goal of the platform is more detrimental to society than beneficial.
-
The internet has brought us products and services at our fingertips, and sometimes we completely turn a blind eye to what is happening behind the scenes. The quote, “if you’re not paying for the product, you are the product,” is very accurate. Anything considered free must actually have something else paying for it. Businesses providing free internet products and services make money by selling their users’ data in some format. Companies like Facebook and Zoom have become very successful with this model of driving up user engagement with their software and making money from aggregating all the data they can get from the user. I think it is still important that websites are free and everyone has access to everything on the internet, because this model has allowed information to spread and has connected the world. We currently enjoy free services from internet companies like email, GPS, and text messaging, among many more. Imagine if these services were not free. Companies and users can both take part in resolving the problems that free websites cause. Companies should be responsible for being transparent about any detriments to society their software causes and about what they are doing to fix them. In addition, users should inform themselves about the detriments of the software they are using and know the risks before using it. This would go a long way toward hindering the negative effects of the free internet and allowing it to continue. I don’t believe Facebook when they say they are more focused on showing valuable content to the user than on having users engage with their platform as much as possible. Facebook’s business model, centered around selling advertisements targeted to customer segments, forces them to create software that increases user engagement and collects as much data as possible.
They need as much data as possible to run their models and build a better advertising business. Although Facebook did make a convincing point that their advertising business allows small businesses to reach an audience and compete with bigger businesses, I don’t think this outweighs the consequences of collecting users’ data. Facebook in general did not have enough in their response to make a better argument than that of The Social Dilemma. It is interesting that Netflix was not included in the documentary it released, even though it is a point of discussion on the topic of user engagement. However, Netflix does not do as much wrong as Facebook does in this realm. Users sign up for Netflix to be recommended shows and specifically like the service for getting them hooked on shows they otherwise would not have found. Also importantly, Netflix does not sell its users’ data.
-
"If you are not paying for the product, then you are the product" If any quote could capture how today's information economy works, then this is it. I personally agree 100% with this, and there are a few things that I would like to talk about: the concept of caveat emptor, the data paycheck, and the other toxic aspects of social media. The first, the Latin phrase caveat emptor, is used mainly as a "buyer beware" sort of answer to things that a purchaser should have known. Shady used car purchases that need more than it was worth in maintenance fixes, back alley Rolex purchases, and MLM type- wellness products all come to mind, in the sense that they buyer is responsible for doing research about the product, within limits. If the company is lying or blindly fraudulent, then there is nothing a consumer can do. However, in the case of social media like FB, they purposely obscure the terms in an extremely long document of user conditions, and that itself is an ethical issue, as it obstructs the consumer's ability to judge in entirely what is being done. Perhaps what needs to be implemented is a "nutrition label-style" disclaimer in the terms and conditions, that explicitly outlines what the company is doing to your data in easy to read, digestible, and bold face font. Second, I think the concept that Andrew Yang proposed (Yang Gang 2020) of the data paycheck is very applicable and worth more discussion. It is in essence making the companies pay you if they sell your data to other places. I think there is a fine line between a (relatively) fair trade of "free site access and feature usage" in exchange for "FB keeping your data" and them selling it and profiting off of it. In essence, I think it would be a good idea to have these companies pay the user a portion of their selling profit, as who "owns" that data in order to sell it? As soon as it is on FB, does my face image no longer belong to me? 
I would argue not, as I still have control over my image and whatnot; therefore, I argue it is unethical to sell likenesses of me without reimbursement. This could be compared to the NCAA and the changing zeitgeist about paying student athletes, as I think there are a lot of similarities. Lastly, data privacy, as The Social Dilemma has made clear, is not the only issue with social media - it is merely one. False images of happiness, skyrocketing depression rates, toxic posts, and extremism come to mind. Each of these issues, as the last decade of social media development has again made clear, is not going to be solved by the free market - the government needs to regulate aspects of social media, especially given the role of Twitter and FB in the spread of QAnon and the Capitol riot.
-
In response to prompt 4, one thing I found interesting in The Social Dilemma is the notion that many of the adverse effects of these complex social media platforms have no single source to bear all the blame. In fact, the documentary claims that every person working in tech at the beginning of the social media revolution had incredibly positive intentions for their products. No one in the industry at the time expected a simple "like" button or follower count to have such detrimental effects on mental health. It is worrying to see how closely anxiety and depression among our youth are correlated with the growing prevalence of social media. Be that as it may, the influence of these social media platforms on this outcome is still largely viewed as “indirect.” Whether it is the spread of fake news, the political and social polarization of our society, or the diminishing mental health of our youth, it is problematic that there seems to be no specific source to hold accountable for any of these drastic phenomena. Furthermore, with the necessity of maintaining funding from stockholders and other investors, it seems there is no way for these huge companies to fix the deeply ingrained systems that make their platforms so dangerous and so profitable. This is very alarming and reveals that there needs to be a monetary incentive for these social media businesses to return to an ethical, productive norm. Despite the fact that no singular individual or group is to blame for the detrimental effects of social media, it seems the entire industry must be regulated to account for the mistakes that have been made, which have led to these drastic consequences.
-
I was surprised to learn that workers at these social media companies hold meetings in which they discuss the best ways to get users addicted to their features, and how effortlessly they draw us back to their platforms for more interactions. It was also surprising to make connections to things from the movie, such as notifications. These are toxic to us because receiving a notification or hearing the texting sound triggers an emotional response that keeps us too attached to our phones, and mainly because once you open a notification it leads you to stay on the phone longer than planned. This is why I took the advice, turned off notifications altogether, and now try to space out my interactions with social media as much as possible. Regarding ads, it was interesting to compare the view presented in the movie with Facebook's response. The movie frames ads the way I view them: with the ultimate goal of driving more engagement and then revenue. In Facebook's response, however, they portray advertising as something they need in order for the platform to exist (basically, they need advertising revenue to keep the social media free). But this is strange to me because in the past there were no ads on these platforms and they were doing just fine. Facebook makes it seem as if it depends entirely on ads to keep the website free for us to use, and frames advertising as a way to help businesses grow, which to me is not what the real intentions are. The seven-point response, in my opinion, is a stretch: it seems too biased in forming arguments against people who have actually worked at these big companies. The movie, although it had shocking moments and information, really opened my eyes to our interactions with social media.
-
The Social Dilemma really touches on many points discussed in meticulous detail in this course as well as in my Philosophy of Computing Ethics course. The quote about us being the "product" is highly accurate. A "tool" is something the user is in control of. Social media is not a tool, but rather something that uses the user as its tool. As users, we give these tech giants our data to some extent as we spend time on their applications and services. They collect information about us which they use to build a model of us, like a "voodoo doll," in their interface. We are categorized and then shown specifically catered advertisements that the companies profit from. Now, is this tradeoff between targeted ads and paid sites acceptable? It really depends on where the lines are drawn. Sometimes it is acceptable for companies to charge us to use a product. However, in a company's eyes it might be even more beneficial to give applications away for free, as that attracts more consumers. The consumer signs away rights in the terms and conditions in exchange for the company providing a free service. The issue is that these companies have become invested in making users hyperconnected and infatuated with social approval. From a profit-maximizing perspective, it is genius. Nevertheless, in terms of harming people's mental health, making users addicted, and general ethical concerns, these platforms are far too deregulated. How can a free website make revenue without catering ads? One way is to show the same ads to all users. The issue is that this will not maximize profits, since users will not be shown ads matching their niche preferences. Alternatively, companies can offer both a free, ad-supported version and a paid subscription version, as some already do. Ultimately, the issue is not only that they target ads, but that in this model there is no real regulation of the system.
If some entity, such as a board of design ethicists vetted by an unbiased source (hypothetically, the government), had regulatory authority, maybe that would help reform certain practices.
-
Week 9: Recommendations and Manipulations (The Social Dilemma)
Lead: Team 2
Blog: Team 5
Required Readings and Viewings (for everyone):
Optional Additional Readings/Viewings:
Response prompts:
Post a response to the readings that does at least one of these options by 10:59pm on Sunday, 28 March (Team 3 and Team 5) or 5:59pm on Monday, 29 March (Team 1 and Team 4):
What do you think of the quote, “if you’re not paying for the product, you are the product”? How do you feel about the tradeoff between having a “free” website that includes targeting ads versus an ad-free website where you would have to pay to view the site? Are there other ways for “free” websites to make revenue?
In Facebook’s response to The Social Dilemma’s portrayal of their recommendation system, they suggest that their platform “uses algorithms to improve the experience for people using [their] apps”, similar to other consumer-facing apps, including Netflix. There are many criticisms of Netflix’s release of the film, as Netflix is considered a pioneer of the modern recommendation system. What are the ethical issues involved with Netflix’s decision to release this documentary? What is your stance on Netflix’s decision not to address its own involvement in the development of these recommendation systems in the documentary?
What do you make of Facebook’s seven-point response to the film? Whose arguments do you find more convincing, the film’s or Facebook’s?
Respond to something in one of the readings that you found interesting or surprising.
Identify something in one of the viewings/readings that you disagree with, and explain why.
Respond constructively to something someone else posted.