Defining Digital Literacy in the Age of Computational Propaganda and Hate Spin Politics

Like much of the rest of the world, Indonesia is facing a crisis of fake news and bot-network infiltration on social media, leading to rampant propaganda, widespread belief in disinformation, and effects on voters that are not fully understood but may be strong enough to alter election results. Salma (2019) describes this crisis and identifies the solution as critical digital literacy: essentially, educating people about the nature of fake news, the algorithmic gaming of social media platforms, and how to identify bot networks.

Salma consolidates the issue into two problems: computational propaganda and hate spin politics. She defines computational propaganda as “the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks” (p. 328). This includes fake news created and spread on social media, bot networks driving attention to and changing conversation around particular issues, and the groups who organize these campaigns of disinformation. Her definition of computational propaganda encompasses much of the fake news crisis currently rattling the United States, as well as other countries.

The other primary issue she identifies is hate spin politics, which is less easily defined. She describes it as “exploit[ing] freedom in democracy by reinforcing group identities and attempt[ing] to manipulate the genuine emotional reactions of citizens as resources in collective actions whose goals are not pro-democracy” (p. 329). Hate spin politics seems to be the weaponization of identity politics and emotion in the digital political sphere, using religion, nationality, sexuality, and other identity markers to turn people against each other. It not only aims to segregate people based on their identities, but also to inspire people to self-select into identity groups to wage political warfare.

Computational propaganda and hate spin politics are carried out by several groups in Indonesia. Salma identifies Saracen and the Muslim Cyber Army as responsible for various fake news campaigns, and there have been notable suggestions of similar political interference from Russia (Sapiie & Anya, 2019). These tactics have proven successful on a large scale and with dire consequences in the case of Basuki Tjahaja Purnama, also known as Ahok, the politician who was imprisoned for blasphemy based largely on an edited video that went viral on social media.

Indonesian government officials are keenly aware of the problem computational propaganda presents, and they have taken significant steps to counter the spread of fake news. In 2018, they began weekly fake news briefings intended to address false stories that have gained traction (Handley, 2018). Salma suggests an increased focus on critical digital literacy, which goes beyond teaching people to “evaluat[e] online content or digital skills” to helping them “understand the internet’s production, management and consumption processes, as well as its democratizing potential and its structural constraints” (p. 333). Essentially, critical digital literacy is to computer or technical literacy what reading comprehension is to basic literacy. It’s not enough for users to be able to use a computer and navigate the Internet; there needs to be a solid understanding of what they’re seeing and why, including who might have produced content and how it came to be presented to that user.

Who could argue with that? Of course increased education about the creation and spread of fake news and algorithmic manipulation would be useful to nearly all Internet users, and it might help counter the spread and impact of computational propaganda. However, Salma offers no explanation of how digital literacy would counter hate spin, which seems to be a larger social issue that’s just as likely to occur offline as online. Hate spin politics also traffics in emotional responses, meaning strictly logical literacy training might not be enough to equip people to grapple with emotional manipulation.

Paper:

Salma, A. N. (2019). Defining Digital Literacy in the Age of Computational Propaganda and Hate Spin Politics. KnE Social Sciences & Humanities, 2019, 323–338.

Additional Resources:

Sapiie, M. A., & Anya, A. (2019, February 4). Jokowi accuses Prabowo camp of enlisting foreign propaganda help. The Jakarta Post. https://www.thejakartapost.com/news/2019/02/04/jokowi-accuses-prabowo-camp-of-enlisting-foreign-propaganda-help.html

Handley, L. (2018, September 27). Indonesia’s government is to hold public fake news briefings every week. CNBC. https://www.cnbc.com/2018/09/27/indonesias-government-is-to-hold-public-fake-news-briefings-each-week.html

Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital

The rise of bots on social media platforms, designed to automate disinformation and disruption, has led to a kind of moral panic. The authors of this study sought to quantify the actual impact of bots on political conversations and to answer the question “will public policy decisions be distorted by public opinion corrupted by bots?” The project was designed by an interdisciplinary team of scholars in political communications, sociology, computer science, and technology studies, and conducted by deploying bots to write tweets and participate in Twitter discussions of three high-stakes political events in the United Kingdom during 2016. The bots were programmed to follow and retweet specific hashtags. A network analysis was then performed to determine the influence of the bots during the course of the experiment.
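To make the method concrete, here is a minimal sketch of what a retweet-network influence analysis can look like, assuming a simple edge list of “who retweeted whom.” The library choice (networkx), the account names, and the use of PageRank as an influence proxy are my own illustration, not the authors’ actual pipeline.

```python
# Sketch of a retweet-network influence analysis (illustrative, not the
# study's actual method): build a directed graph where an edge A -> B means
# "account A retweeted account B", then score accounts by PageRank.
# The edge list and account names are hypothetical placeholders.
import networkx as nx

retweets = [
    ("student_01", "bot_alpha"),
    ("student_02", "bot_alpha"),
    ("student_02", "journalist_x"),
    ("bot_alpha", "student_01"),
    ("casual_user", "journalist_x"),
]

g = nx.DiGraph()
g.add_edges_from(retweets)

# PageRank is one common proxy for influence in a retweet graph:
# accounts retweeted by well-retweeted accounts score higher.
scores = nx.pagerank(g, alpha=0.85)

for account, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{account:15s} {score:.3f}")
```

In a setup like this, bot accounts with few followers tend to sit on the periphery of the graph, which is consistent with the study’s finding that low-capital accounts had little measurable influence.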

The most interesting outcome of the study is that it failed to show any significant effect of the bots on Twitter conversations surrounding the three political events. The interpretation of that outcome is the focus of the authors’ primary conclusion, where they identify specific challenges faced by researchers in studying the influence of bots:

  • The experiment rested on a number of student volunteers who set up new Twitter accounts and were asked to use specific hashtags while tweeting about certain events. The researchers then linked bots to some of the accounts to comment on and retweet the students’ tweets. But the new accounts lacked the “social capital” of a high follower count, and thus their tweets had limited reach even when amplified by the bots.
  • The researchers used two methods to deploy the bots. The first was to build their own bots from scratch; the second was to purchase bots from MonsterSocial, a commercial marketing agency that bills itself as “the #1 automation bot for Facebook, Instagram, Pinterest, Tumblr and Twitter.” MonsterSocial provides a user interface to set up a number of Twitter accounts to automatically retweet, favorite, and follow other accounts. It is not illegal to create bots in this way, and, depending on the behavior of the bots, doing so does not violate Twitter’s terms of service.
  • The authors conclude that another type of bot would likely have been more effective: those created by hacking and hijacking dormant Twitter accounts, set up and abandoned by human users. In this case the accounts may have already established considerable social capital in the form of followers, likes, and retweets, and thus have greater reach on Twitter. But the use of hijacked accounts violates Twitter’s terms of service, may be illegal, and would never be approved by university ethics authorities. The authors say these are the types of bots used to spread disinformation during political campaigns, and to disrupt protests and social movements.

The experiment indicates that small-scale deployment of bots created by legally acceptable methods lacks the social capital to exert influence on Twitter. The authors were also hampered by a lack of financial resources needed to create and purchase bots at great scale, and by legal and ethical concerns.

The authors expected their bots to be more successful in swaying the political dialog on Twitter, but came to understand that “social influence, even over technologies that allow bots, is a product of capital,” including the kind of social capital that can be acquired by cheating. They conclude that “the most effective bots may be the ones we cannot study.”

Reference

Murthy, Dhiraj, Alison B. Powell, Ramine Tinati, Nick Anstead, Leslie Carr, Susan J. Halford, and Mark Weal. 2016. “Automation, Algorithms, and Politics| Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital.” International Journal of Communication 10 (0): 20. https://ijoc.org/index.php/ijoc/article/view/6271.

Social Media and Politics in Indonesia

Johansson (2016) gives a solid background on the state of media in Indonesia, both traditional and digital. He explains how narrowly controlled traditional media in a democracy as young as Indonesia created favorable conditions for social media to break through and disrupt the spread of information, focusing largely on the potential for positive change.

He describes issues with Indonesia’s print and television media, starting with their vulnerability to being completely controlled by just a few elite members of society, i.e., elite capture or media capture. He elaborates on how much of Indonesia’s media is owned or influenced by figures tied to politics, including direct family members of politicians (p. 17). He also describes the rise of media conglomerates. In short, he describes a media ecosystem in which power is held by very few people with ties to other powerful people, trending toward less and less competition, all of which can contribute to increased media bias.

Next, he explains the culture of social media in Indonesia, and the effect it’s had on political messaging and campaigning. Social media is wildly popular in Indonesia, with users spending an average of 2.9 hours on social media each day, compared to just 1.7 hours of use in the United States (p. 25). Social media is an attractive place for political messaging not only because of its popularity, but also due to “the cost of campaigning on a steady increase, limited political financing, problems with money politics and the limits of traditional media” (p. 25). Johansson also touches on social media strategies from the 2014 presidential election, explaining that Jokowi’s use of a massive volunteer network coordinating and posting on social media ultimately won out over Prabowo’s smaller and more professional social media team.

Although Johansson mentions propaganda only sparingly, his paper works as a useful, fairly comprehensive account of the media landscape in Indonesia, both past and present. His few words on propaganda are also useful, explaining how, viewed through the lens of framing theory among others, media exists primarily as a mode of disseminating propaganda. Finally, he warns of how effective political messaging on social media may be dangerous, and how it can “result in an ever-increasing difficulty for citizens to differentiate between news, propaganda, and opinions” (p. 37).

Paper:

Johansson, Anders C., 2016. “Social Media and Politics in Indonesia,” Stockholm School of Economics Asia Working Paper Series 2016-42, Stockholm School of Economics, Stockholm China Economic Research Institute.

The Disinformation Order: Disruptive Communication and the Decline of Democratic Institutions

In this 2018 study published in the European Journal of Communication, W. Lance Bennett and Steven Livingston trace the roots of online political disinformation affecting democratic nations. They argue that declining confidence in democratic institutions makes citizens more inclined to believe false information and narratives, and to spread them more broadly on social media. They identify the radical right, enabled and supported by Russia, as the predominant source of disinformation, and cite as examples the UK Brexit campaign, disruptions affecting democracies across Europe, and the U.S. election of Donald Trump.

Many articles on social media and political communications provide differing definitions of disinformation, misinformation, and fake news. Bennett and Livingston offer their own provisional definition: “intentional falsehoods spread as news stories or simulated documentary formats to advance political goals” (Bennett & Livingston, p. 124). In addition to those who share disinformation originating on the Internet, they identify legacy media as an important factor in spreading it further. They say that when news organizations report on false claims and narratives, the effect is to amplify the disinformation. Even fact-checking can strengthen this amplifier effect, because the message is exposed and repeated to more people. As traditional news institutions are attacked as “fake news,” journalistic attempts to correct the record can be cited by propagandists and their supporters as proof of an elite conspiracy to hide the truth. The authors refer to this dynamic as the “disinformation-amplification-reverberation (DAR) cycle.”

It’s interesting that both the political left and right increasingly share contempt for neoliberal policies that benefit elites. But instead of coming together to address political and economic problems, they are being driven further apart by “strategic disinformation.” This hollowing out of the center produces a growing legitimacy crisis, and political processes that are increasingly superficial. The authors term this post-democracy: “(t)he breakdown of core processes of political representation, along with declining authority of institutions and public officials” (p. 127).

The authors identify Russia as the primary source of disinformation and disruptive hacking in an increasing number of western democratic and semi-democratic nations: Germany, the UK, The Netherlands, Norway, Sweden, Austria, Hungary, Poland, Turkey, and most of the Balkans. They say Russia has learned to coordinate a variety of hybrid warfare tactics that reinforce their impact, such as troll factories, hackers, bots, and the seeding of false information and narratives by state-owned media channels. As other researchers have argued, Bennett and Livingston say Russia’s disinformation activities are geostrategic, aimed at undermining NATO and the cohesiveness of democratic nations who oppose the expansion of Russia’s power.

In response to the scale of disinformation and the disruption of democratic institutions, Bennett and Livingston call for comparative research on the characteristics of disinformation in different societies, to identify similarities and differences as well as the contextual factors that provide either fertile ground for or resistance to disinformation. They also recommend that the current operations of trolls, hackers, and bots be made more central to political communications studies.

Reference

Bennett, W. Lance, and Steven Livingston. 2018. “The Disinformation Order: Disruptive Communication and the Decline of Democratic Institutions.” European Journal of Communication 33 (2): 122–39. https://doi.org/10.1177/0267323118760317.

The Market of Disinformation

This report was produced by Oxford Information Labs to explicate the problem of disinformation on social media and to make actionable recommendations to the UK Electoral Commission. The authors do an admirable job of describing disinformation strategies deployed by political campaigns, with specific examples from recent events, including the inevitable reference to Cambridge Analytica.

The report seems to be written for an audience that may not know what an algorithm is, although the initial explanation of algorithms as “calculations coded in computer software” and “opinions embedded in mathematics” is unlikely to be of much help. From there, the report gets to the heart of the matter, which is that the bias of social media algorithms is to keep people “engaged.” This is a lovely word, but in the context of, say, Facebook and Twitter it means “trigger the emotions of people to keep them scrolling, clicking, liking, and sharing for as long as humanly possible without literally dying of dehydration” (my wording), preferably over many sessions per person per day.
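As a toy illustration of what “opinions embedded in mathematics” means in practice, here is a sketch of an engagement-style ranking function. The features, weights, and example posts are entirely invented and do not reflect any platform’s real formula; the structural point is simply that whoever picks the weights decides what stays at the top of a feed.

```python
# Toy feed-ranking function: a hand-picked weighted sum of engagement
# signals. All features and weights are invented for illustration; no
# platform publishes its actual ranking formula.
from dataclasses import dataclass

@dataclass
class Post:
    author_is_followed: bool
    predicted_click_prob: float   # 0.0 - 1.0
    predicted_share_prob: float   # 0.0 - 1.0
    outrage_score: float          # 0.0 - 1.0, proxy for emotional charge

def engagement_score(post: Post) -> float:
    return (
        1.0 * post.predicted_click_prob
        + 3.0 * post.predicted_share_prob   # shares spread content, so weight them heavily
        + 2.0 * post.outrage_score          # emotionally charged posts keep people scrolling
        + (0.5 if post.author_is_followed else 0.0)
    )

posts = [
    Post(True, 0.20, 0.05, 0.10),   # a friend's vacation photo
    Post(False, 0.35, 0.30, 0.90),  # an inflammatory political rumor
]

# Sort the feed so the highest-scoring post appears first.
for p in sorted(posts, key=engagement_score, reverse=True):
    print(round(engagement_score(p), 2), p)
```

Even in this crude sketch, the inflammatory rumor outranks the friend’s photo, which is exactly the kind of bias the report is describing.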

So this is “optimization” in social media, and the platforms can afford many thousands of engineers and experience designers to do it. The authors don’t let Google off the hook, and they do a reasonable job of explaining web crawling, relevance algorithms, and SEO. They outline recent changes to Facebook’s algorithm and explain why different Facebook users see different things, which leads into an explanation of psychological profiling, personal data aggregation, and microtargeting.
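To put the microtargeting step in concrete terms, here is a small sketch of audience segmentation over aggregated profile data. The field names, records, and thresholds are hypothetical, meant only to show how narrowly a message can be aimed once personal data has been collected and combined.

```python
# Illustrative sketch of microtargeting as audience segmentation: filter an
# aggregated profile database down to a narrow segment, which a campaign
# could then pair with its own tailored message. All fields and records
# below are hypothetical.
profiles = [
    {"id": 1, "age": 62, "region": "Midlands", "interests": {"fishing", "pensions"}},
    {"id": 2, "age": 24, "region": "London",   "interests": {"housing", "climate"}},
    {"id": 3, "age": 58, "region": "Midlands", "interests": {"pensions", "immigration"}},
]

def segment(records, min_age, region, interest):
    """Return profiles matching a narrowly defined target segment."""
    return [
        p for p in records
        if p["age"] >= min_age and p["region"] == region and interest in p["interests"]
    ]

# Different segments can be shown entirely different messages from the same
# source, which is what makes microtargeted political ads hard to oversee.
target = segment(profiles, min_age=55, region="Midlands", interest="pensions")
print([p["id"] for p in target])  # -> [1, 3]
```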

I think the most important point they make is that “(f)uture electoral policy and oversight should be informed by the fact that online and offline actions are necessarily linked, with the offline elements being key enablers of online uses and abuses.” In other words, the older tricks by political propagandists haven’t been replaced by social media; they’ve been augmented by it.

The authors recommend specific measures the UK Election commission could try to put in place. As with many ideas for regulating social media, they seem worthy of consideration, but they might be totally impractical. For example, digitally imprinting campaign material with the source of the information could improve transparency. Location verification of messages could help even more. Campaigns could be penalized for violations with financial sanctions that actually hurt. And finally, transparency in the financing of organizations and people behind political messages might limit the activities of truly bad actors. The objection in the West is likely to be “but Free Speech and Free Markets!” (Here in the U.S. we have the Supreme Court decision in Citizens United v. FEC, which basically says money is speech so you can’t stop money.)

The measures suggested in this report aim to “future-proof” election policies. Elections are special cases, where (in theory) the outcome supports democratic governance. Elections are too important to just say “oh well, free speech and free markets, I guess we can’t do anything about political disinformation.” Some of these recommendations might make a difference in reducing disinformation in political campaigns today. As for future-proofing future elections, I suspect we’re going to need more future reports.

Reference

Hoffmann, Stacie, Emily Taylor, and Samantha Bradshaw. 2019. “The Market of Disinformation.” https://comprop.oii.ox.ac.uk/research/oxtec-disinfo-market/.

Digital Media and the Surge of Political Outsiders: Explaining the Success of Political Challengers in the United States, Germany, and China

The U.S. has a long history of outsiders running for president or gunning for power in general. Labor leader Eugene Debs ran five times as a Socialist candidate for president, and won 6 percent of the popular vote in 1912. Ronald Reagan ran as a “Washington outsider,” though he had already established political credentials as the governor of California. Ross Perot, a billionaire businessman, ran as an independent candidate in 1992, winning about 19 percent of the popular vote. But in most bids for the presidency and other high offices, outsiders have faced insurmountable obstacles in gaining the media coverage, financial support, and voter constituencies generated by established parties running traditional campaigns. That is, until the internet.

Jungherr, Schroeder, and Stier say the advent of digital media fundamentally changed the political playing field by allowing outsiders to bypass traditional gatekeepers and established institutions. By “digital media” they mean “the set of institutions and infrastructures allowing the production, distribution, and searching of information online” (Jungherr, Schroeder, & Stier, p. 2). In other words, the internet and especially social media. This seems intuitively true, and Jungherr, Schroeder, and Stier provide concrete examples from three very different political scenarios: the 2016 U.S. presidential election of Donald Trump; the rise of the left-leaning Pirate Party and the far-right AfD in Germany; and ultranationalist activists in China.

In each case, the outsider campaigners used an online presence to attract attention and support, often while inciting controversy and making outrageous claims about the political establishment and status quo. As the outsiders’ social media audience grew, they gained coverage in traditional media which served to raise their visibility and further broadcast and amplify their messages. In many cases the controversial rhetoric of the outsiders, inserted into digital media space and amplified by traditional media, has shifted the Overton Window of tolerable political discourse (Mackinac Center for Public Policy), leading to policies and actions that were previously anathema. Digital media thus allows outsiders to mount ongoing campaigns that challenge the very legitimacy of the institutions that served as gatekeepers of political language and power in the pre-internet world.

It’s interesting that the authors cite Barack Obama’s use of digital media in the 2008 U.S. presidential election, but fail to mention Howard Dean’s innovations in 2004. Dean, the early frontrunner, raised most of his campaign funding from small donors through the internet (CNN 2003), built a massive email list used by his campaign to communicate with supporters, and was one of the first presidential candidates to establish a strong online presence through a sophisticated campaign website (Howard Dean Campaign). His bid for the White House began to slump after the Iowa caucuses, when his performance of what became known as the “I Have a Scream” speech ignited a media feeding frenzy that quickly spread far and wide online (BBC News 2004). As the internet giveth, so too it can taketh away.

The authors say their study provides “a novel explanation that systematically accounts for the political consequences of digital media” (Jungherr, Schroeder, & Stier, p. 1). The clarity with which they present evidence and the range of examples they cite strongly support this argument. Notably, they say the effect of digital media in politics is not deterministic; it simply provides an opportunity not available before the internet. They argue that this opportunity can be used by outsiders across the political and ideological spectrum.

But the examples cited focus on the rise of right-wing and would-be authoritarian outsiders, even in China where the authors say the government largely tolerates online activities of Chinese ultranationalists. Beyond this paper, further research might document and analyze examples of the “digital media effect” (my term, not the authors’) on the rise of progressive outsiders whose concerns include things like energy and environmental policy sanity, economic and social equity, and universal human rights.

Paper

Jungherr, Andreas, Ralph Schroeder, and Sebastian Stier. 2019. “Digital Media and the Surge of Political Outsiders: Explaining the Success of Political Challengers in the United States, Germany, and China.” Social Media + Society 5 (3): 2056305119875439. https://doi.org/10.1177/2056305119875439.

Additional References

CNN. 2003. “CNN.Com – Dean to Let Supporters Decide Whether to Abandon Public Financing – Nov. 5, 2003.” November 5, 2003. http://edition.cnn.com/2003/ALLPOLITICS/11/05/elec04.prez.dean.financing/index.html.

BBC News. 2004. “‘Dean Scream’ Becomes Online Hit,” January 23, 2004. http://news.bbc.co.uk/2/hi/americas/3422809.stm.

Howard Dean Campaign. 2004. “Wayback Machine: Howard Dean for America.” January 29, 2004. https://web.archive.org/web/20040129143845/http://howarddean.com/.

Mackinac Center for Public Policy. n.d. “The Overton Window.” Accessed October 7, 2019. http://www.mackinac.org/OvertonWindow.