Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital

The rise of bots on social media platforms, designed to automate disinformation and disruption, has led to a kind of moral panic. The authors of this study sought to quantify the actual impact of bots on political conversations, and to answer the question “will public policy decisions be distorted by public opinion corrupted by bots?” The project was designed by an interdisciplinary team of scholars in political communications, sociology, computer science, and technology studies, and conducted by deploying bots to write tweets and participate in Twitter discussions of three high-stakes political events in the United Kingdom during 2016. The bots were programmed to follow and retweet specific hashtags. A network analysis was then performed to determine the influence of the bots over the course of the experiment.
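The paper doesn’t publish its bot code, but the behavior described here — watch a hashtag, retweet what turns up, follow the authors — is simple to sketch. Something like the following, using the third-party tweepy library against Twitter’s v2 API, captures the idea; the credentials, hashtag, and pacing are placeholders of my own, not the authors’ actual setup:

```python
# Hypothetical sketch of a hashtag-amplifying bot of the kind the study describes.
# Not the authors' code; assumes approved Twitter API v2 access and the tweepy library.
import time
import tweepy

# Placeholder credentials -- a real deployment needs a developer account.
client = tweepy.Client(
    bearer_token="BEARER_TOKEN",
    consumer_key="CONSUMER_KEY",
    consumer_secret="CONSUMER_SECRET",
    access_token="ACCESS_TOKEN",
    access_token_secret="ACCESS_TOKEN_SECRET",
)

HASHTAGS = ["#ExampleEvent"]  # hypothetical; the study tracked three 2016 UK events


def amplify(hashtag: str, max_results: int = 10) -> None:
    """Retweet recent tweets containing the hashtag and follow their authors."""
    response = client.search_recent_tweets(
        query=f"{hashtag} -is:retweet",
        max_results=max_results,
        tweet_fields=["author_id"],
    )
    for tweet in response.data or []:
        client.retweet(tweet.id)             # amplify the tweet
        client.follow_user(tweet.author_id)  # grow the bot's follow graph
        time.sleep(5)                        # crude pacing to respect rate limits


if __name__ == "__main__":
    for tag in HASHTAGS:
        amplify(tag)
```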

The most interesting outcome of the study is that it failed to show any significant effect of the bots on the Twitter conversations surrounding the three political events. The interpretation of that outcome is the focus of the authors’ primary conclusion, in which they identify specific challenges researchers face in studying the influence of bots:

  • The experiment relied on a number of student volunteers who set up new Twitter accounts and were asked to use specific hashtags while tweeting about certain events. The researchers then linked bots to some of the accounts to comment on and retweet the students’ tweets. But the new accounts lacked the “social capital” of a high follower count, and thus their tweets had limited reach even when amplified by the bots.
  • The researchers used two methods to deploy the bots. The first was to build their own bots from scratch; the second was to purchase bots from MonsterSocial, a commercial marketing agency that bills itself as “the #1 automation bot for Facebook, Instagram, Pinterest, Tumblr and Twitter.” MonsterSocial provides a user interface for setting up a number of Twitter accounts to automatically retweet, favorite, and follow other accounts. It is not illegal to create bots in this way, and, depending on the bots’ behavior, doing so does not violate Twitter’s terms of service.
  • The authors conclude that another type of bot would likely have been more effective: those created by hacking and hijacking dormant Twitter accounts that were set up and then abandoned by human users. Such accounts may have already established considerable social capital in the form of followers, likes, and retweets, and thus have greater reach on Twitter. But the use of hijacked accounts violates Twitter’s terms of service, may be illegal, and would never be approved by university ethics authorities. The authors say these are the types of bots used to spread disinformation during political campaigns and to disrupt protests and social movements.

The experiment indicates that small-scale deployment of bots created by legally acceptable methods lacks the social capital needed to exert influence on Twitter. The authors were also hampered by a lack of the financial resources needed to create and purchase bots at scale, and by legal and ethical concerns.
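The “social capital” at issue here is, in part, simply network position: how often an account gets retweeted and followed relative to others. A toy version of the kind of network analysis the authors describe might compare a fresh bot account with an established one; the graph, account names, and scores below are invented for illustration and are not the study’s data:

```python
# Toy illustration of measuring account influence in a retweet network.
# The graph and account names are invented; the study's actual analysis is not reproduced.
import networkx as nx

# Directed edge A -> B means "A retweeted B".
G = nx.DiGraph()
G.add_edges_from([
    ("user1", "established_account"),
    ("user2", "established_account"),
    ("user3", "established_account"),
    ("user4", "user1"),
    ("new_bot", "established_account"),  # the bot amplifies others...
    ("user1", "new_bot"),                # ...but is rarely amplified itself
])

# In-degree centrality: how often an account is retweeted, normalized by graph size.
centrality = nx.in_degree_centrality(G)
pagerank = nx.pagerank(G)

for account in ("established_account", "new_bot"):
    print(account,
          "in-degree:", round(centrality[account], 3),
          "pagerank:", round(pagerank[account], 3))
```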

The authors expected their bots to be more successful in swaying the political dialogue on Twitter, but came to understand that “social influence, even over technologies that allow bots, is a product of capital,” including the kind of social capital that can be acquired by cheating. They conclude that “the most effective bots may be the ones we cannot study.”

Reference

Murthy, Dhiraj, Alison B. Powell, Ramine Tinati, Nick Anstead, Leslie Carr, Susan J. Halford, and Mark Weal. 2016. “Automation, Algorithms, and Politics| Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital.” International Journal of Communication 10 (0): 20. https://ijoc.org/index.php/ijoc/article/view/6271.

The Market of Disinformation

This report was produced by Oxford Information Labs to explicate the problem of disinformation on social media and to make actionable recommendations for the UK Electoral Commission. The authors do an admirable job of describing disinformation strategies deployed by political campaigns, with specific examples from recent events, including the inevitable reference to Cambridge Analytica.

The report seems to be written for an audience that may not know what an algorithm is…although the initial explanation of algorithms as “calculations coded in computer software” and “opinions embedded in mathematics” is unlikely to be of much help. From there, the report gets to the heart of the matter, which is that the bias of social media algorithms is to keep people “engaged.” This is a lovely word, but in the context of, say, Facebook and Twitter it means “trigger people’s emotions to keep them scrolling, clicking, liking, and sharing for as long as humanly possible without literally dying of dehydration” (my wording), preferably over many sessions per person per day.

So this is “optimization” in social media, and the platforms can afford many thousands of engineers and experience designers to do it. The authors don’t let Google off the hook, and they do a reasonable job of explaining web crawling, relevance algorithms, and SEO. They outline recent changes to Facebook’s algorithm and explain why different Facebook users see different things, which leads into an explanation of psychological profiling, personal data aggregation, and microtargeting.
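The report doesn’t give a formula, and real platform ranking systems are large machine-learned pipelines rather than hand-written rules, but the “optimize for engagement” bias can be caricatured in a few lines. The signals and weights below are my own invention for illustration only:

```python
# Illustrative caricature of engagement-optimized feed ranking.
# Signals and weights are invented; real ranking systems are far larger ML models.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    p_click: float    # predicted probability the user clicks
    p_like: float     # predicted probability the user likes
    p_share: float    # predicted probability the user shares
    p_comment: float  # predicted probability the user comments


# Hypothetical weights: sharing and commenting keep people on the platform longest,
# so they are rewarded most -- the bias toward emotionally "engaging" content.
WEIGHTS = {"p_click": 1.0, "p_like": 2.0, "p_share": 6.0, "p_comment": 8.0}


def engagement_score(post: Post) -> float:
    return sum(weight * getattr(post, name) for name, weight in WEIGHTS.items())


def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by predicted engagement, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)


if __name__ == "__main__":
    feed = [
        Post("calm-news", p_click=0.20, p_like=0.05, p_share=0.01, p_comment=0.01),
        Post("outrage-bait", p_click=0.30, p_like=0.10, p_share=0.08, p_comment=0.12),
    ]
    for post in rank_feed(feed):
        print(post.post_id, round(engagement_score(post), 2))
```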

I think the most important point they make is that “(f)uture electoral policy and oversight should be informed by the fact that online and offline actions are necessarily linked, with the offline elements being key enablers of online uses and abuses.” In other words, the older tricks by political propagandists haven’t been replaced by social media; they’ve been augmented by it.

The authors recommend specific measures the UK Electoral Commission could try to put in place. As with many ideas for regulating social media, they seem worthy of consideration but might be totally impractical. For example, digitally imprinting campaign material with the source of the information could improve transparency. Location verification of messages could help even more. Campaigns could be penalized for violations with financial sanctions that actually hurt. And finally, transparency in the financing of organizations and people behind political messages might limit the activities of truly bad actors. The objection in the West is likely to be “but Free Speech and Free Markets!” (Here in the U.S. we have the Supreme Court decision in Citizens United v. FEC, which basically says money is speech, so you can’t stop money.)

The measures suggested in this report aim to “future-proof” election policies. Elections are special cases, where (in theory) the outcome supports democratic governance. Elections are too important to just say “oh well, free speech and free markets, I guess we can’t do anything about political disinformation.” Some of these recommendations might make a difference in reducing disinformation in political campaigns today. As for future-proofing future elections, I suspect we’re going to need more future reports.

Reference

Hoffmann, Stacie, Emily Taylor, and Samantha Bradshaw. 2019. “The Market of Disinformation.” Oxford Technology and Elections Commission (OxTEC). https://comprop.oii.ox.ac.uk/research/oxtec-disinfo-market/.