Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital

The rise of bots on social media platforms, designed to automate disinformation and disruption, has led to a kind of moral panic. The authors of this study sought to quantify the actual impact of bots on political conversations and to answer the question “will public policy decisions be distorted by public opinion corrupted by bots?” The project was designed by an interdisciplinary team of scholars in political communication, sociology, computer science, and technology studies, and was conducted by deploying bots to write tweets and participate in Twitter discussions of three high-stakes political events in the United Kingdom during 2016. The bots were programmed to follow and retweet specific hashtags. A network analysis was then performed to determine the influence of the bots over the course of the experiment.
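The article does not publish the authors’ code, but the two mechanisms it describes, a bot that follows and retweets a target hashtag and a network analysis of the resulting conversation, can be sketched in a few lines. The sketch below is a hypothetical illustration using the Tweepy and networkx Python libraries; the credentials, the example hashtag, and the PageRank-based influence measure are assumptions for illustration, not details taken from the study.

```python
# A minimal, hypothetical sketch (not the authors' code) of the two mechanisms
# described above: a bot that follows and retweets a target hashtag, and a
# simple retweet-network analysis of the kind used to estimate influence.
# Assumes Tweepy v4 and networkx; credentials and the hashtag are placeholders.

import tweepy
import networkx as nx

# --- Bot behavior: amplify a hashtag ----------------------------------------
auth = tweepy.OAuth1UserHandler(
    "CONSUMER_KEY", "CONSUMER_SECRET",      # placeholder credentials
    "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET",
)
api = tweepy.API(auth, wait_on_rate_limit=True)

HASHTAG = "#Brexit"  # stand-in for one of the study's event hashtags

for status in api.search_tweets(q=HASHTAG, count=50, result_type="recent"):
    try:
        api.retweet(status.id)                          # amplify the tweet
        api.create_friendship(user_id=status.user.id)   # follow its author
    except tweepy.TweepyException:
        pass  # already retweeted/followed, protected account, or rate limit

# --- Influence measurement: retweet-network centrality ----------------------
# Build a directed graph with an edge retweeter -> original author; accounts
# that score highly on a centrality measure such as PageRank are the
# conversation's influential nodes.
graph = nx.DiGraph()
for status in api.search_tweets(q=HASHTAG, count=100):
    if hasattr(status, "retweeted_status"):  # only retweets carry this field
        graph.add_edge(status.user.screen_name,
                       status.retweeted_status.user.screen_name)

for account, score in sorted(nx.pagerank(graph).items(),
                             key=lambda kv: -kv[1])[:10]:
    print(f"{account}: {score:.4f}")
```

In the study’s terms, a new account with few followers sits at the periphery of such a graph, which helps explain why amplification by the researchers’ bots barely moved the centrality scores.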

The most interesting outcome of the study is that it failed to show any significant effect of the bots on the Twitter conversations surrounding the three political events. Interpreting that outcome is the focus of the authors’ primary conclusion, in which they identify specific challenges faced by researchers studying the influence of bots:

  • The experiment relied on student volunteers who set up new Twitter accounts and were asked to use specific hashtags while tweeting about certain events. The researchers then linked bots to some of the accounts to comment on and retweet the students’ tweets. But the new accounts lacked the “social capital” of a high follower count, so their tweets had limited reach even when amplified by the bots.
  • The researchers used two methods to deploy the bots. The first was to build their own bots from scratch; the second was to purchase bots from MonsterSocial, a commercial marketing agency that bills itself as “the #1 automation bot for Facebook, Instagram, Pinterest, Tumblr and Twitter.” MonsterSocial provides a user interface for setting up a number of Twitter accounts to automatically retweet, favorite, and follow other accounts. Creating bots in this way is not illegal and, depending on how the bots behave, does not violate Twitter’s terms of service.
  • The authors conclude that another type of bot would likely have been more effective: those created by hacking and hijacking dormant Twitter accounts that were set up and then abandoned by human users. Such accounts may already have established considerable social capital in the form of followers, likes, and retweets, and thus have greater reach on Twitter. But the use of hijacked accounts violates Twitter’s terms of service, may be illegal, and would never be approved by university ethics authorities. The authors say these are the types of bots used to spread disinformation during political campaigns and to disrupt protests and social movements.

The experiment indicates that a small-scale deployment of bots created by legally acceptable methods lacks the social capital to exert influence on Twitter. The authors were also hampered by the lack of financial resources needed to create and purchase bots at scale, and by legal and ethical constraints.

The authors expected their bots to be more successful in swaying the political dialogue on Twitter, but came to understand that “social influence, even over technologies that allow bots, is a product of capital,” including the kind of social capital that can be acquired by cheating. They conclude that “the most effective bots may be the ones we cannot study.”

Reference

Murthy, Dhiraj, Alison B. Powell, Ramine Tinati, Nick Anstead, Leslie Carr, Susan J. Halford, and Mark Weal. 2016. “Automation, Algorithms, and Politics| Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital.” International Journal of Communication 10 (0): 20. https://ijoc.org/index.php/ijoc/article/view/6271.