CODE2018

2018 Conference on Digital Experimentation (CODE)

Exposure to Opposing Views can Increase Political Polarization on Social Media

There is a concern that social networks exacerbate political polarization due to the network homophily effect (users tend to form ties with other, similar users). The main goal of this research was to study whether exposure to opposing views decreases political polarization. In addition, the authors studied whether exposure to opposing views can create backfire effects that exacerbate political polarization. Finally, they tested the hypothesis that backfire effects are more likely to occur among conservatives than among liberals.

To answer the aforementioned hypotheses, they conducted experiments on Twitter: they asked self-identified Republican and self-identified Democrat users who visit Twitter at least three times each week to complete a 10-minute survey. These surveys measured the key outcome variable, the change in political ideology during the study. They also collected data on users' political attitudes, use of social media and conventional media sources, and a range of demographic indicators. Then they asked the Republican users to follow, in exchange for money, bots that retweeted Democrats' tweets, and they asked the self-identified Democrats to follow, in exchange for money, bots that retweeted Republicans' tweets.
At the end of the study (two months later) they asked the users to complete the survey again in order to determine whether their political views had changed. The results suggest that the views of both Democrat and Republican users not only did not moderate, but that conservatives became more conservative and liberals became more liberal (a backfire effect). This backfire effect was stronger among Republicans than among Democrats.

Pitfall: I remember reading a paper last year (probably in psychology) showing that when people discuss a topic, if one of them has expressed their opinion at the beginning, they are significantly less likely to change their opinion after the discussion/debate. Something missing from this research is whether there was any opinion change on topics the person had not tweeted about before the treatment. Another pitfall is that the bots retweeted politicians' tweets; most of those tweets are opinion-based rather than factual, and they attack the opposing party most of the time. It would be worth building a bot that only retweets tweets containing facts rather than attacks on politicians from the other party.

Does Government Surveillance Give Twitter the Chills?

The main goal of this research was to understand whether the Snowden revelations in 2013 changed user behavior on online social networks (Twitter). To study that, the authors obtained tweets containing one of 414 words that are subject to monitoring by government authorities and are considered sensitive. In particular, they studied whether users' propensity to post tweets containing monitored words decreased significantly after the revelations, compared to non-monitored words. Since the control group should be tweets containing no monitored words, the authors used tweets containing one of 434 food-related words. The study showed that the number of tweets containing the monitored words decreased after the Snowden revelations compared to before. They also studied the difference-in-differences (DiD) to find heterogeneity across keywords, locations, and time. To do so, the authors developed a new statistical machine learning method based on subset scanning, which detects the subpopulation of the data that shows the most significant treatment effects in the context of a spatio-temporal difference-in-differences. Their results show that users are 0.8% less likely to use monitored words, and this effect is much stronger in the US, especially in US states that showed a Democratic majority in the 2012 election.
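As a rough illustration of the underlying before/after, monitored/control comparison, here is a minimal two-by-two DiD sketch. It is not the paper's dataset or its subset-scanning method, and the numbers are made up.

```python
# Minimal two-by-two difference-in-differences sketch.
# Illustrative numbers only; not the paper's data or its subset-scanning method.
import pandas as pd

# Hypothetical mean daily tweet counts per keyword group, before and after the revelations.
df = pd.DataFrame({
    "group":  ["monitored", "monitored", "control", "control"],
    "period": ["pre",       "post",      "pre",     "post"],
    "mean_tweets_per_day": [1000.0, 950.0, 800.0, 810.0],
})

means = df.set_index(["group", "period"])["mean_tweets_per_day"]
change_monitored = means["monitored", "post"] - means["monitored", "pre"]  # -50
change_control   = means["control", "post"]   - means["control", "pre"]    # +10

# DiD: the change in monitored-word usage beyond the change seen in the control words.
did_estimate = change_monitored - change_control                           # -60
print(f"DiD estimate for monitored words: {did_estimate:+.1f} tweets/day")
```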

Computer Algorithms prefer headless women

The main goal of this research was to study whether ad algorithms show any discriminatory biases in targeting. To do that, the authors ran an ad campaign experiment on Snapchat; the experiment included four treatment groups with four different photos of t-shirts to be shown to high school students. The photos differed by the message written on the back of the person's t-shirt and whether or not the person's head was included. The photos are shown below:

The study shows that, on average, the ads with the male photos (both headless and with head) were more likely to be shown, and the photo with the female head was significantly less likely to be shown to users than all other photos.

Experimentation, variations, and p-hacking:

Is Time Our Friend or Enemy? The Impact of Timing on Online Experimentation

The main goal of this research is to study whether experimental results can be affected by when data collection is conducted for experiments run in the MTurk environment. More specifically, the authors studied the effect of timing in multiple dimensions, both hourly and monthly, to see whether results differ across times. To do that, they ran multiple important economic and cognitive experiments on MTurk at different times of day (in four time slots: 2-3am, 8-9am, 2-3pm, 5-6pm) in April and May and investigated the hourly variation. Months later, they replicated the exact same experiments on MTurk (in August). They found very small differences in experimental results obtained at different times of day. On the other hand, the results suggest that there are significant monthly variations in results and behavior, for example in workers' risk preferences and degree of extraversion. The results suggest that, to have confidence in the results, experiments should be run at different times of the year.

On the detection of p-hacking in experimental meta-analysis:
A non-parametric procedure for analyzing discontinuities in empirical density functions

This research studies one of the p-hacking methods in A/B testing: continuous monitoring, in which experimenters repeatedly check the results of an experiment and either stop the experiment when it shows no statistically significant result, or keep it running until a statistically significant result with a p-value less than 0.05 appears and then stop. The authors showed that this monitoring method inflates the false discovery rate of experiments.
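To see why, here is a minimal A/A simulation (not from the paper): both arms draw from the same distribution, so every "significant" result is a false positive. The sample sizes, number of simulated experiments, and peeking interval are arbitrary choices for illustration.

```python
# A/A simulation: the null is true, so any "significant" result is a false positive.
# Compares a fixed-horizon test against continuous monitoring (peeking).
# All parameters below are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 2000
n_per_arm = 1000
peek_every = 50      # check the p-value after every 50 observations per arm
alpha = 0.05

fixed_fp = 0
peeking_fp = 0
for _ in range(n_experiments):
    a = rng.normal(0, 1, n_per_arm)
    b = rng.normal(0, 1, n_per_arm)  # same distribution as arm a

    # Fixed-horizon test: look at the data only once, at the end.
    if stats.ttest_ind(a, b).pvalue < alpha:
        fixed_fp += 1

    # Continuous monitoring: stop as soon as any interim look is "significant".
    for n in range(peek_every, n_per_arm + 1, peek_every):
        if stats.ttest_ind(a[:n], b[:n]).pvalue < alpha:
            peeking_fp += 1
            break

print(f"fixed-horizon false positive rate: {fixed_fp / n_experiments:.3f}")   # close to 0.05
print(f"peeking false positive rate:       {peeking_fp / n_experiments:.3f}")  # well above 0.05
```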

Not Registered? Please Sign-up Now: A Randomized Field Experiment on the Optimal Timing of Registration Request
