Political Propaganda & Social Media: A Project Summary and Critique

We began this project with somewhat lofty goals. We wanted to develop a comparative analysis of how social media influence affects the behavior and governance of people in the regions examined; to understand how similar forces manifest differently in different cultures and political conditions; and to contribute to the existing literature on social media disinformation and make it more accessible. The scale of the topic meant we could attain these goals only by adding the words “scratch the surface” to each of them. But even though we fell short of our original ambition, we achieved some unexpected things.

First, in searching for useful primary sources on social media and political disinformation, we became much more aware of existing research by scholars, government bodies, think tanks, and NGOs. Just a few short years ago, it was common to assume social media would liberate people from the tyranny of one-way mass media controlled by large corporations, governments, and oligarchs. It is now darkly amusing to read popular and scholarly literature on social media written just five years ago. Today there are thousands of seemingly credible sources of research exploring the current disinformation environment and its impact on politics.

Given the wealth of available research materials, almost all of which are accessible online, we tried to identify the most relevant examples. It’s likely that we failed in that as well, but the primary sources we selected are generally representative of the existing body of research. We also chose to narrow our initial focus from “the world” to certain areas of the world, specifically Indonesia and Europe. Some of the most interesting examples of social media propaganda are now occurring in Africa, South America, and of course China (and by “interesting” we mean horrific), but we had to put aside those regions at least for now.

The second unexpected thing was a clear correlation between propagandistic messages sponsored by state actors and changes in the political rhetoric of those targeted. As if we didn’t know this: propaganda can work. For example, there is a preponderance of evidence that Russia’s disinformation campaign to position Ukraine as a corrupt state and perpetrator of various conspiracies is not only influencing opinions among populations in Europe but is also being loudly echoed by the President of the United States and members of his political party.

But propaganda doesn’t always work. For example, in Anke Schmidt-Felzmann’s account of Russian disinformation operations in the Nordic countries of Europe, attempts to undermine support for the EU and NATO are gaining very little traction (Schmidt-Felzmann 2017). In contrast, the same messages are resonating broadly in Central and East European countries, whose populations and political leaders are more friendly to Russia and more suspicious of the United States, the EU, and NATO (Wierzejski et al. 2017).

A third surprise dawned on us over the months of working on this project: The use of social media for political propaganda is rapidly evolving, and we are merely capturing a screenshot (so to speak) of this moment. While use of the Internet for strategic disinformation predates the 2016 U.S. presidential election, the disruption of that election, along with elections in Africa and India and the Brexit referendum, brought into sharp relief the scale at which online political propaganda is now being deployed. As the actors behind it acquire more resources and learn from their successes and failures, and as more “innovation” is piled on our current systems of ubiquitous information, we are likely to see a continuing evolution of disinformation strategies and tactics.

Comparing Indonesia and Russia: State Roles in the Spread of Propaganda

Any attempt to analyze the use of propaganda in two different countries and contexts might be a fool’s errand. It’s difficult to shrink entire countries into stories small enough to compare neatly, and doing so puts the analyst at risk of reducing each country to a single convenient narrative. However, for argument’s sake, let’s try it out:

Russia might be seen as the puppet master, controlling armies of bots and trolls to create havoc in many target countries, and sowing the seeds of discord, distrust, and disinformation to weaken democracies worldwide. Indonesia could be cast as a relatively blameless victim country, a young democracy subjected to propaganda and fake news attacks from religious groups, and possibly from Russia itself (Sapiie and Anya, 2019). The takeaway might be that Russia, a nuclear power with imperialistic ambitions, has the motivation and resources to spread its propaganda across the globe, while countries like Indonesia do their best to overcome the propaganda threatening their democracy.

Obviously it isn’t that simple. Russia isn’t the only country sponsoring propaganda or attempting to influence the political activity of other countries. The Indonesian government isn’t completely innocent of sponsoring its own propaganda. It would be naïve to regard states as monolithic actors, particularly when it comes to their presence on social media. Finally, attempting to compare propaganda activities in very different countries runs the risk of perpetuating our own received colonial narratives, casting some as the villain and others as the innocent victim. In the world of social media disinformation, it may not be obvious who is colonizing whom.

Theoretical Frameworks

Is there a theory of social media that sheds light on current phenomena, and allows us to confidently make predictions? Or are the pieces moving too fast to do more than merely describe? We explore here the application of two prominent theories in communications research: Framing and Media Ecology.

Framing Theory

Framing Theory fits neatly into the conversation of propaganda on social media. As defined by Entman, framing means to “select some aspects of a perceived reality and make them more salient in a communicating text, in such a way as to promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation” (Entman 1993). In contrast to agenda setting or priming, framing sets not only the topic of discussion but also its terms.

Broadly stated, the effect of framing is to construct a social reality people will use to interpret information and events. Similar to pre-Internet media, social media can provide “a central organizing idea or story line that provides meaning to an unfolding strip of events . . . The frame suggests what the controversy is about, the essence of the issue” (Gamson & Modigliani 1987).

In traditional print and broadcast media, the power of framing is in the hands of journalists, editors, publishers, producers, networks, etc., and there is a clear division between framers and audiences. Social media dissolves this division as “the people formerly known as the audience” become involved in the framing (Rosen 2012). With social media platforms it is often unclear what is being framed or who has the power to do the framing. Twitter and Facebook don’t create the content users see, but their timeline algorithms determine what information we are exposed to. The power to set frames on social media platforms therefore belongs to anyone with the ability to leverage the algorithms. This can be good; it allows people other than those traditionally in power to present frames of their own, potentially making audiences aware of a wider range of viewpoints, influences, problems, and solutions.

But as we see in the research presented here, social media also increases the potential for deception and manipulation. When propagandistic content floods our newsfeeds, it is increasingly difficult to identify the true authors (is this a real individual or a bot?), the audience reach (is everyone seeing this, or has it been algorithmically selected for your tastes?), and the purpose of the content. Clearly, framing theory is a useful lens for evaluating disinformation on social media. Research might identify the original source of information attempting to “promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation,” and attempt to follow the acceptance of the frame by audiences (Entman 1993).

This approach to analyzing disinformation on social media makes use of framing as “a theory of media effects” (Scheufele 1999). Goffman’s concept of “social frameworks” seems particularly well-suited to examining the effects of social media. We are social animals, and social media platforms have become an important site for our social connections. Our interpretations of information and events are influenced by our social connections, whether or not we are conscious of that influence (Goffman 1974).

Media Ecology Theory

We are aware there is considerable disagreement in the academic world about Marshall McLuhan, but the Media Ecology framework seems particularly well suited for analyzing the technological, social, and political complexities of this particular epoch of the information age.

McLuhan wrote about media as “extensions of some human faculty” (McLuhan & Fiore 1967), and “the new scale that is introduced into our affairs by each extension of ourselves, or by any new technology” (McLuhan 1964). Media ecology theory frames the Internet and social media as hyperextensions of every human sense. And on the Internet those extensions are interconnected by a global network of devices that can send and receive information literally at the speed of light, “extending our central nervous system itself in a global embrace, abolishing both space and time as far as our planet is concerned” (McLuhan 1964).

But media ecology theory “is not exhausted by the thought of Herbert Marshall McLuhan” (Islas & Bernal 2016). Some of the post-McLuhan scholarship directly addresses the social and political effects of digital media. Robert K. Logan, a former colleague of McLuhan, suggests that in a flip reversal of media as extensions of man, “the human users of digital media become an extension of those digital media as these media scoop up their data and use them to the advantage of those that control these media…The feedback of the users of digital media become the feedforward for those media” (Logan, 2019).

Logan is primarily concerned with the abuse of personal data for persuasive communications by digital media monopolies such as Google, Facebook, Amazon, and Twitter. But the same kinds of personal data and persuasive technologies are being used by the propagandistic actors in the scenarios described in this project. They aren’t the owners of the technologies, but they don’t have to be. In today’s neoliberal, unregulated “free market,” social media networks are open to use or abuse at scale by anyone with enough resources. As suggested in the study Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital, the resources required for effective social media propaganda operations are beyond the means of anyone but large institutional actors like governments (Murthy et al. 2016). And as is clear in Emilio Iasiello’s article Russia’s Improved Information Operations: From Georgia to Crimea, governments are now budgeting for disinformation campaigns aimed at national and global audiences as a vital part of their geopolitical and military strategies (Iasiello 2017). As applied to the Internet age, McLuhan’s frame is still relevant: the medium is the message, and the user is the content (McLuhan 1964).

Conclusion

During this project we chose to primarily use printed resources from academic or government studies. In some cases we reviewed reports from non-profit organizations focused on digital disinformation and security studies. While news reports could have been helpful in providing the most recent accounts of political disinformation, we decided to avoid possible issues of journalistic story framing. We did our best to vet all sources for credibility, and to weed out resources showing signs of ideological and political bias. Our methodology included an examination of the authors, their body of research, and their institutional affiliations. We believe our choices are justifiable, but our inclusion of these sources does not imply wholesale endorsement of the authors or the information and views they express.

Due to rapid changes in technologies used for disinformation and the circumstances of its use, it is likely that much of today’s research will soon be obsolete. An obvious response to this is more research, and it’s clear from our work on this project that more research is coming. A variety of new institutions and initiatives are beginning to systematically study and counter digital disinformation. Which also raises a caution: Will we begin to see disinformation in disinformation research? All the more reason for us to be critical of our sources, and select only those we can reasonably identify as credible.

Coda

Any analysis of the actions and attitudes of governments and other informational actors will inevitably be shaped by the values and views of the authors. Because a discussion of the authors’ perspectives is rarely included in their published works, audiences may assume that the analysis is intended to be “objective,” and that the author occupies “the view from nowhere” (Rosen 2003). We wish to make our values and views explicit so as to avoid any ambiguity about our perspectives and motivations.

As librarians we understand that “the values and ethics of librarianship are a firm foundation for understanding human rights and the importance of human rights education,” and that “human rights education is decidedly not neutral” (Hinchliffe 2016, p.81). While there can be different arguments about the merits and flaws of different political and economic systems, the role of corporations and governments, and the obligations of citizens, we are strongly in favor of free expression, self-determination, and social justice. We believe all people have an absolute right to knowledge, and we regard influence operations designed to deceive, confuse, or divide people and nations as violations of their human rights and dangerous to the future of world peace. The Internet has become a medium for influencing the thoughts and behavior of people across the globe. Disinformation is not new, but its potential for disruption has never been greater.

We view social media as potentially a net positive for human welfare and civic life. For now, let’s just say it’s a work in progress.


References

Entman, Robert M. 1993. “Framing: Toward Clarification of a Fractured Paradigm.” Journal of Communication 43 (4): 51–58. https://doi.org/10.1111/j.1460-2466.1993.tb01304.x.

Gamson, William, and Andre Modigliani. 1987. “The Changing Culture of Affirmative Action.” In Research in Political Sociology, edited by Richard Braungart, 137–177. Greenwich, CT: JAI Press.

Goffman, Erving. 1974. Frame Analysis: An Essay on the Organization of Experience. Cambridge, MA: Harvard University Press.

Hinchliffe, Lisa Janicke. 2016. “Loading Examples to Further Human Rights Education.” https://www.ideals.illinois.edu/handle/2142/91636.

Iasiello, Emilio. 2017. “Russia’s Improved Information Operations: From Georgia to Crimea.” US Army War College Quarterly: Parameters 47 (2): 51–63. https://www.hsdl.org/?abstract&did=803998.

Islas, Octavio, and Juan Bernal Suárez. 2016. “Media Ecology: A Complex and Systemic Metadiscipline.” Philosophies 1 (October): 190–98. https://doi.org/10.3390/philosophies1030190.

Logan, Robert K. 2019. “Understanding Humans: The Extensions of Digital Media.” Information 10 (10): 304. https://doi.org/10.3390/info10100304.

McLuhan, Marshall. 1964. Understanding Media: The Extensions of Man. New York: McGraw-Hill.

McLuhan, Marshall, Quentin Fiore, and Jerome Agel. 1967. The Medium Is the Massage. New York: Bantam Books.

Murthy, Dhiraj, Alison B. Powell, Ramine Tinati, Nick Anstead, Leslie Carr, Susan J. Halford, and Mark Weal. 2016. “Automation, Algorithms, and Politics| Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital.” International Journal of Communication 10 (0): 20. https://ijoc.org/index.php/ijoc/article/view/6271.

Rosen, Jay. 2003. “PressThink: The View from Nowhere.” http://archive.pressthink.org/2003/09/18/jennings.html.

Rosen, Jay. 2012. “The People Formerly Known as the Audience.” In The Social Media Reader, edited by Michael Mandiberg. New York: NYU Press. www.jstor.org/stable/j.ctt16gzq5m.

Sapiie, M. A., and A. Anya. 2019. “Jokowi Accuses Prabowo Camp of Enlisting Foreign Propaganda Help.” The Jakarta Post, February 4. https://www.thejakartapost.com/news/2019/02/04/jokowi-accuses-prabowo-camp-of-enlisting-foreign-propaganda-help.html.

Scheufele, Dietram A. 1999. “Framing as a Theory of Media Effects.” Journal of Communication 49 (1): 103–22. https://doi.org/10.1111/j.1460-2466.1999.tb02784.x.

Schmidt-Felzmann, Anke. 2017. “More than ‘Just’ Disinformation: Russia’s Information Operations in the Nordic Region.” In Information Warfare – New Security Challenge for Europe, 32–67. Centre for European and North Atlantic Affairs.

Wierzejski, Antoni, Jonáš Syrovatka, Daniel Bartha, Botond Feledy, András Rácz, Petru Macovei, Dušan Fischer, and Margo Gontar. 2017. “Information Warfare in the Internet: Countering Pro-Kremlin Disinformation in the CEE Countries.” Centre for International Relations. https://www.academia.edu/34620712/Information_warfare_in_the_Internet_COUNTERING_PRO-KREMLIN_DISINFORMATION_IN_THE_CEE_COUNTRIES_Centre_for_International_Relations_and_Partners.

Defining Digital Literacy in the Age of Computational Propaganda and Hate Spin Politics

Just like much of the rest of the world, Indonesia is facing a crisis of fake news and bot network infiltration on social media, leading to rampant propaganda, mass belief in disinformation, and effects on voters that are not fully understood but may run deep enough to alter election results. Salma (2019) describes this crisis and identifies the solution as critical digital literacy: essentially, educating people about the nature of fake news, the algorithmic gaming of social media platforms, and the identification of bot networks.

Salma consolidates the issue into two problems: computational propaganda and hate spin politics. She defines computational propaganda as “the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks” (p. 328). This includes fake news created and spread on social media, bot networks driving attention to and changing conversation around particular issues, and the groups who organize these campaigns of disinformation. Her definition of computational propaganda encompasses much of the fake news crisis currently rattling the United States, as well as other countries.

The other primary issue she identifies is hate spin politics, which is less easily defined. She describes it as “exploit[ing] freedom in democracy by reinforcing group identities and attempt[ing] to manipulate the genuine emotional reactions of citizens as resources in collective actions whose goals are not pro-democracy” (p. 329). Hate spin politics seems to be the weaponization of identity politics and emotion in the digital political sphere, using religion, nationality, sexuality, and other identity markers to turn people against each other. It not only aims to segregate people based on their identities, but to inspire people to self-select into identity groups, creating political warfare.

Computational propaganda and hate spin politics are carried out by several groups in Indonesia. Salma identifies Saracen and Muslim Cyber Army as responsible for various fake news campaigns, and there have been notable suggestions of similar political interference from Russia (Sapiie and Anya, 2019). These tactics have proven successful on a large scale and with dire consequences, as in the case of Basuki Tjahaja Purnama, also known as Ahok, the politician who was imprisoned for blasphemy based largely on an edited video that went viral on social media.

Indonesian government officials are keenly aware of the problem computational propaganda presents, and have taken significant steps to counter the spread of fake news. In 2018, they began weekly fake news briefings intended to address false stories that have gained traction (Handley, 2018). Salma suggests an increased focus on critical digital literacy, or teaching people to “evaluat[e] online content or digital skills but also to understand the internet’s production, management and consumption processes, as well as its democratizing potential and its structural constraints” (p. 333). Essentially, critical digital literacy is to computer or technical literacy what reading comprehension is to literacy. It’s not enough for users to be able to use a computer and navigate the Internet; there needs to be a solid understanding of what they’re seeing and why, including who might have produced content and how it came to be presented to that user.

Who could argue with that? Of course increased education about the creation and spread of fake news and algorithmic manipulation would be useful to nearly all Internet users, and it might help counter the spread and impact of computational propaganda. However, Salma offers no explanation of how digital literacy would counter hate spin, which seems to be a larger social issue that’s just as likely to occur offline as on. Hate spin politics also traffics in emotional responses, meaning strictly logical literacy training might not be enough to equip people to grapple with emotional manipulation.

Paper:

Salma, A. N. (2019). Defining Digital Literacy in the Age of Computational Propaganda and Hate Spin Politics. KnE Social Sciences & Humanities, 2019, 323–338.

Additional Resources:

Sapiie, M.A. & Anya, A. (2019, February 4). Jokowi accuses Prabowo camp of enlisting foreign propaganda help. Retrieved from https://www.thejakartapost.com/news/2019/02/04/jokowi-accuses-prabowo-camp-of-enlisting-foreign-propaganda-help.html

Handley, L. (2018, September 27). Indonesia’s government is to hold public fake news briefings every week. Retrieved from https://www.cnbc.com/2018/09/27/indonesias-government-is-to-hold-public-fake-news-briefings-each-week.html

More than ‘Just’ Disinformation: Russia’s Information Operations in the Nordic Region

This book chapter by Anke Schmidt-Felzmann from the Swedish Institute of International Affairs provides an overview of Russian disinformation tactics and messages in the Nordic countries of Finland, Sweden, Denmark, Norway, and Iceland.

The author begins with the context of why Russia would be interested in social-political influence in the region: Norway and Finland share borders with Russia; Norway is a net exporter of oil and gas, and thus in competition with Russia on world energy markets; Denmark and Russia have an ongoing dispute over resource-rich territories along the continental shelf; Sweden has a visible role in political reform in Ukraine, and in the EU’s Eastern Partnership initiative.

Perhaps more importantly, the five Nordic countries have strong relationships with both NATO and the EU – although only Iceland, Denmark, and Norway are NATO members. All five countries condemned Russia’s annexation of Crimea and joined in implementing sanctions.

Schmidt-Felzmann discusses Russia’s use of different channels, social media, and IT tools for “socio-psychological manipulation” in the Nordic region. Interestingly, she singles out the manipulation of individual human beings, including journalists and politicians, as both targets and tools of misinformation. Tactics cited by the author include intimidation and disinformation campaigns against individuals critical of Russian policies, and the use of trolls and bots on social media. The case of Finnish journalist Jessikka Aro is an interesting example. In 2015, while investigating online trolling, she identified the building in St. Petersburg housing the now-infamous Internet Research Agency. She soon became the target of personal attacks and harassment on social media by the same trolls.

According to Schmidt-Felzmann, Russian information operations in the Nordic region seem to be aimed at discrediting NATO and the EU, and positioning Russia as an innocent victim of “Russophobia” promulgated by the West. Accusations of anti-Russian bias are leveled against journalists and politicians on Russian-sponsored media, online forums, and social media platforms.

Interestingly, Schmidt-Felzmann says, Russian attempts to establish Nordic platforms for its news operations Sputnik and RT failed less than a year after their launch in 2015.

In summary, the Nordic nations appear to have shown considerable resistance to Russian information operations, and are engaged with the EU and NATO in developing multinational research centers and countermeasures such as identifying and responding to disinformation, training on how to identify malicious information, coordinating the exchange of information between agencies, and developing their own influence operations.

Reference

Schmidt-Felzmann, Anke. 2017. “More than ‘Just’ Disinformation: Russia’s Information Operations in the Nordic Region.” In Information Warfare – New Security Challenge for Europe, 32–67. Centre for European and North Atlantic Affairs.


Aksi Bela Islam: Islamic Clicktivism and the New Authority of Religious Propaganda in the Millennial Age in Indonesia

Ahyar and Alfitri (2019) examine the way social media has reshaped the landscape of propaganda, and how it’s being used to challenge dominant religious authorities. Propaganda used to be a tool wielded almost exclusively by government bodies or other massive organizations. Ahyar and Alfitri say, “In previous eras – especially in authoritarian regimes prior to the reformation era in Indonesia – the state was an authoritative source for social campaigning” (p. 14). The resources needed to create and effectively spread propaganda were simply too great for small groups or individuals to harness.

Social media has completely changed this; the Internet has effectively allowed nearly anyone to create and spread their own propaganda for their own purposes, with the potential for massive virality and impact. Governments no longer have a monopoly on spreading mass information (or disinformation). Ahyar and Alfitri explain that alternative groups have come to harness propaganda: “In the Reformation era in Indonesia, propaganda is also often done not only by the government, but also by social movements that echo multiple identities; be it a primordial identity of ethnicity, religion, political ideology and profession” (pp. 12–13).

They go on to explain how social media has also revolutionized social movements and activism, again to disruptive effect. Because movements can be planned and executed more easily, they need less hierarchical structure to form and continue. They say, “…Social movements appear massively outside the established political or institutional channels within a country. Social movement is closely related to a shared ideal and responds to a political power” (p. 9). Social movements need less planning, promotion, and organization to be successful. All they really need is a powerful motivating factor to spark mobilization. Propaganda can easily fill this role: “The pattern begins with an action of propaganda through the sails of technological devices, which is followed by supportive comments on the propaganda, and ends in mass mobilization for a real social movement for a purpose” (p. 4).

Although there is obvious good in breaking the government’s former monopoly on propaganda and in tools like social media making organizing and protesting easier than ever, there’s also the possibility for increased disinformation, chaos, and abuse. Ahyar and Alfitri consider the example of Basuki Tjahaja Purnama (also called Ahok), the former Jakarta governor who was imprisoned for blasphemy after a misleadingly edited video of one of his speeches went viral, causing controversy among Islamic communities in Indonesia. The doctored video functioned as propaganda, perfectly matching Ahyar and Alfitri’s definition of propaganda as “attempts to shape influence, change, and control the attitudes and opinions of a person or group for a particular purpose or to be brought in a particular direction” (p. 11). That propaganda spread rapidly through social media, acting as the spark that mobilized thousands of people to take to the streets in protests that were easily and spontaneously planned with improved technology and communication. Ahok’s imprisonment serves as testimony to the power and changed nature of propaganda and social movements, and to the danger these powerful tools pose when they are used rapidly and with little opportunity for oversight, consideration, and fact-checking.

Paper:

Ahyar, M., & Alfitri. (2019). ‘Aksi Bela Islam: Islamic clicktivism and the new authority of religious propaganda in the millennial age in Indonesia’, Indonesian Journal of Islam and Muslim Societies, 9(1), pp. 1–29.

Information Warfare in the Internet: Countering Pro-Kremlin Disinformation in the CEE Countries

In this report, published in 2017 by the Poland-based Centre for International Relations and funded by the International Visegrad Fund, authors from seven Central and East European countries analyze disinformation tactics, channels, and messaging currently used by Russia against their respective nations. The data were drawn from the period between July and October 2017, although general trends were also assessed. The authors find that while Russia’s propaganda tactics are similar throughout the CEE countries, messages are often tailored to maximize impact based on the politics of each country.

The presentation of country-specific data in the report follows a similar format. For example, the section on the Czech Republic, written by Jonáš Syrovatka from the Prague Security Studies Institute, describes the channels used to spread propagandistic messages, and the reach of each channel. The main channels are conspiracy websites, alternative media, semi-legitimate “bridge media,” Facebook, and YouTube. Messaging and language vary depending on the normative style and tone of each channel. Syrovatka identifies the combination of channels and messaging used as key to propagandistic influence. Subjects of the messages in the Czech Republic include the danger of Islamization, violence by refugees as a strategy by global elites including Hillary Clinton and Emmanuel Macron, corruption and incompetence of the Ukrainian government, and positive narratives concerning Vladimir Putin and Russia.

Among the country-specific differences:

  • In Hungary, pro-Kremlin narratives are often promoted by mainstream newspapers and broadcast channels, including the state-owned news agency MTI. Much of the messaging content skirts the line between authentic Hungarian pro-Eastern sentiment and Kremlin-sponsored propaganda, rather than being pure disinformation. Chain email is also used to spread pro-Kremlin narratives, reaching Hungary’s three million retirees. Facebook is the predominant social media platform used, due to Hungary’s lack of “Twitter culture.” Propagandistic messages aim to undermine trust in the U.S., NATO, and the EU, encourage anti-immigration and anti-refugee views, discredit liberal ideas about human rights and NGOs, and discredit Ukraine as corrupt, fascist, and failing.
  • In Moldova, Petru Macovei, Executive Director of the Association of Independent Press at the Chisinau School of Advanced Journalism, reports that Russian influence is powerfully exerted through mass media outlets created by Russia, including a Moldovan edition of Komsomolskaya Pravda and Sputnik.md. Twitter and Facebook are also used as channels. Narratives include conspiracy theories that NATO is preparing for nuclear war against Russia with Moldova as a battlefield, that Moldova is ruled by an outsider network connected with George Soros, that the U.S., NATO, and NGOs are conspiring against Moldovan interests and promoting homosexuality, and that the U.S. is defending the Islamic State in Syria.
  • In Poland, tactics and propaganda messages are much the same: that NATO is a tool of America and is acting against Poland, and that Russia is the only counter to American influence. Poland-specific narratives include disinformation targeting Ukraine, wherein Ukrainians are portrayed as “wild and cruel beasts mindlessly slaughtering Poles.” Polish mass media, websites, and social media are leveraged for these narratives, along with an interesting twist: fake interviews with top Polish generals that invariably position the West as anti-Poland and as promoting homosexuality. According to journalist Antoni Wierzejski, author of the section on Poland, hackers and trolls are very active in Poland.
  • The section on Ukraine, authored by Margo Gontar, journalist and co-founder of the Ukrainian organization StopFake, summarizes four disinformation themes: Ukraine is a failed state, Ukrainians are dangerous, Ukraine is breaking the Minsk Agreement to stop the war in the Donbass region, and everyone loves Russia. At the same time, according to other research identified in this project, various actors in Ukraine are mounting somewhat effective information campaigns to counter Russian propaganda. It is notable, though, that anti-Ukraine disinformation features so prominently in the other CEE nations.

As we see in other research, social media is an important channel for the spread of disinformation and participatory propaganda. But the authors emphasize it is the combined impact of traditional media, state-sponsored news organizations, conspiracy websites, trolls and hackers, and social media that is the basis for Russia’s propaganda strategy.

The report concludes with a number of recommendations to counter Russian disinformation, including more research on its authors and target audiences, education of the public on information ethics, and encouraging Internet companies to deploy tools against fake news. All are worth attempting, though possibly inadequate unless done at very large scale.

Reference

Wierzejski, Antoni, Jonáš Syrovatka, Daniel Bartha, Botond Feledy, András Rácz, Petru Macovei, Dušan Fischer, and Margo Gontar. 2017. “Information Warfare in the Internet: Countering Pro-Kremlin Disinformation in the CEE Countries.” Centre for International Relations. https://www.academia.edu/34620712/Information_warfare_in_the_Internet_COUNTERING_PRO-KREMLIN_DISINFORMATION_IN_THE_CEE_COUNTRIES_Centre_for_International_Relations_and_Partners.

Countering Terrorist Narratives: Winning the Hearts and Minds of Indonesian Millennials

Narratives are powerful because they’re easy to follow. Factual information and research might provide someone with all of the pieces, but a well-crafted narrative presents itself as an already completed puzzle. Anis (2018) discusses the narratives that terrorists and extremists use to recruit new members, and how those narratives can be shaped into convincing propaganda that is easily disseminated through social media, focusing primarily on the recruitment of young Indonesians and responses from the Indonesian government. Islamic extremist narratives give followers a consistent worldview, as well as a clearly defined role and purpose within that worldview. Once a follower has accepted extremist narratives, it’s difficult to counter them.

Islamic extremist groups build their narratives on social media the same way many users do: consistent branding and plenty of selfies. Anis says, “Many of the selfie photos of young jihadists express their happiness. They smile and carry weapons. The jihadists use this strategy to give a picture that they are powerful and own many weapons” (p. 196). Again following standard social media manipulation tactics, extremists can deceive followers. Anis continues, “They may only have a few weapons and ask the jihadists to take turns taking selfie photos carrying the gun” (p. 196). They also use catchphrases. Anis identifies the phrase “You Only Die Once” or “YODO” (p. 193), a clear derivative of the popular hashtag #yolo.

Anis’s examples of jihadist recruiting, specifically her analysis of the film Jihad Selfie, reveal the targeted nature of these recruiting efforts. Extremists’ success isn’t from pouring money into Facebook advertisements; it’s from using social media to talk to vulnerable individuals. There seems to be more to gain from putting significant resources toward the small number of individuals who can be flipped than from mass recruitment tactics that will fall largely on deaf ears. Again, using social media for this kind of targeted outreach isn’t exclusive to jihadist groups. Cambridge Analytica’s use of highly targeted advertising has caused outrage worldwide.

Indonesia has taken several steps to attempt to counter extremist propaganda online, largely in the form of websites offering counter-narratives and promoting peacefulness (p. 202). However, it’s unclear how effective this approach can be. Anis describes how jihadists’ use of social media makes them look “cool,” according to former recruits, because of their handling of weapons and the interactions their posts get from Muslim women (p. 197). If the appeal of jihadist propaganda comes down to a cool factor, it’s difficult to imagine the government successfully creating something that will actually read as cool to young people.

The weakest point of Anis’s analysis is her failure to interrogate the term “lone wolf” terrorist. She points out, “Unlike in the past when a terrorist was defined as someone who completed a long process of training and indoctrination through a terrorist group, the lone wolf terrorists are not tied to any terrorist network and have gotten inspiration through the internet” (p. 195), yet fails to note that this inspiration through the Internet often comes from interacting with content produced by terrorist networks.

Paper:

Anis, E. Z. (2018). Countering Terrorist Narratives: Winning the Hearts and Minds of Indonesian Millennials. KnE Social Sciences & Humanities, 2018, 189.

Stoking the Flames: Russian Information Operations in Turkey

It can be argued that Russia scored a major goal on October 7, 2019, when U.S. President Trump tweeted that he would withdraw American troops from the war zone in northeastern Syria, and that “Turkey, Europe, Syria, Iran, Iraq, Russia and the Kurds will now have to…figure the situation out.” This cleared the way for Turkey to launch a large-scale military operation against America’s Kurdish allies in the Syrian Democratic Forces. The sudden change in U.S. policy caught just about everyone off-guard – the Kurds, NATO, the U.S. State Department and members of Congress, and even the U.S. military commanders in Syria.

In the 2018 article “Stoking the Flames: Russian Information Operations in Turkey,” published in the journal Ukraine Analytica, University of Copenhagen political scientist Balkan Devlen details Russia’s shifting propaganda narrative targeting Turkish audiences. During and after its 2014 invasion of Crimea, Russia sought to portray Ukraine as a corrupt ally of the “imperialist West,” and Russia as an anti-imperialist friend to Turkey. A variety of media outlets were used to spread this message, including the Turkish language service of Russia’s Sputnik News, and a range of Turkish media sources known to be suspicious of Western and American meddling in the region. As shown by other research on Russian disinformation strategies, a variety of social media outlets were also used.

After the downing of a Russian jet by the Turkish air force in 2015, Russia’s propaganda messaging in Turkey did a 180-degree turn and began targeting the Turkish government and its foreign policy, claiming that Turkey was supporting ISIS, violating international law, and committing war crimes. Devlen notes that Russia’s anti-Turkey propaganda campaign was immediate, robust, and agile, suggesting that Russia is well-prepared to launch disinformation campaigns against even friendly nations, with messaging developed in advance should the need arise.

In 2016 relations between Russia and Turkey warmed, and the torrent of anti-Turkish disinformation quickly ceased. A new phase of propaganda sought to increase suspicion and animosity toward the U.S. and NATO, and to once again portray Russia as a true friend. As anti-American sentiment has increased among the Turkish population, this narrative has been picked up by Turkey’s major media and amplified by Eurasianist “fellow travellers” through various channels.

Devlen concludes that as relations between Turkey, the U.S., and NATO fray, “Russia gets closer to its goal of weakening and undermining the liberal international order.”

While it is possible to read Devlen’s article as a polemic, much of his argument is echoed by other research annotated in this Political Propaganda and Social Media project. It might also be worth noting that some of the propaganda messages deployed by Russia in Turkey, such as the message that Ukraine is a corrupt nation, are mirrored in tweets by the U.S. president.

Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital

The rise of bots on social media platforms, designed to automate disinformation and disruption, has led to a kind of moral panic. The authors of this study sought to quantify the actual impact of bots on political conversations, and to answer the question “will public policy decisions be distorted by public opinion corrupted by bots?” The project was designed by an interdisciplinary team of scholars in political communications, sociology, computer science, and technology studies, and conducted by deploying bots to write tweets and participate in Twitter discussions of three high-stakes political events in the United Kingdom during 2016. The bots were programmed to follow and retweet specific hashtags. A network analysis was then performed to determine the influence of the bots during the course of the experiment.
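To make the mechanics concrete, here is a minimal sketch of the kind of hashtag-following retweet bot the study describes. This is not the authors’ code: the credentials and hashtags are placeholders, and it assumes access to the Twitter API through the tweepy library.

```python
# Hypothetical sketch of a hashtag-following retweet bot, similar in spirit
# to the bots deployed in the study. Credentials and hashtags are placeholders.
import time

import tweepy

client = tweepy.Client(
    consumer_key="CONSUMER_KEY",          # placeholder API credentials
    consumer_secret="CONSUMER_SECRET",
    access_token="ACCESS_TOKEN",
    access_token_secret="ACCESS_TOKEN_SECRET",
)

HASHTAGS = ["#event_one", "#event_two"]   # hypothetical campaign hashtags
seen = set()                              # tweet IDs already amplified

while True:
    for tag in HASHTAGS:
        # Find recent original tweets (not retweets) carrying the hashtag.
        result = client.search_recent_tweets(
            query=f"{tag} -is:retweet", max_results=10
        )
        for tweet in result.data or []:
            if tweet.id not in seen:
                client.retweet(tweet.id)  # amplify the tweet
                seen.add(tweet.id)
    time.sleep(300)                       # throttle to respect rate limits
```

The striking thing is how little machinery this takes; as the study found, the hard part is not the automation but the social capital behind the accounts.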

The most interesting outcome of the study is that it failed to show any significant effect of the bots on Twitter conversations surrounding the three political events. The interpretation of that outcome is the focus of the authors’ primary conclusion, where they identify specific challenges faced by researchers in studying the influence of bots:

  • The experiment rested on a number of student volunteers who set up new Twitter accounts, and were asked to use specific hashtags while tweeting about certain events. The researchers then linked bots to some of the accounts to comment on and retweet the students’ tweets. But the new accounts lacked the “social capital” of a high follower count, and thus their tweets had limited reach even when amplified by the bots.
  • The researchers used two methods to deploy the bots. The first was to create their own bots from scratch; the second was to purchase bots from MonsterSocial, a commercial marketing agency that bills itself as “the #1 automation bot for Facebook, Instagram, Pinterest, Tumblr and Twitter.” MonsterSocial provides a user interface to set up a number of Twitter accounts to automatically retweet, favorite, and follow other accounts. Creating bots in this way is not illegal and, depending on the bots’ behavior, does not violate Twitter’s terms of service.
  • The authors conclude that another type of bot would likely have been more effective: those created by hacking and hijacking dormant Twitter accounts, set up and abandoned by human users. In this case the accounts may have already established considerable social capital in the form of followers, likes, and retweets, and thus have greater reach on Twitter. But the use of hijacked accounts violates Twitter’s terms of service, may be illegal, and would never be approved by university ethics authorities. The authors say these are the types of bots used to spread disinformation during political campaigns, and to disrupt protests and social movements.

The experiment indicates that small-scale deployment of bots created by legally acceptable methods lacks the social capital to exert influence on Twitter. The authors were also hampered by a lack of financial resources needed to create and purchase bots at great scale, and by legal and ethical concerns.
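The social capital point can be illustrated with a toy version of the network analysis the authors performed. The graph and follower counts below are invented, not the study’s data; the point is simply that retweets from follower-less bots add almost nothing to an account’s reach.

```python
# Toy retweet graph: edge (a, b) means account a retweeted account b.
# All accounts, edges, and follower counts are invented for illustration.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("bot1", "new_account"), ("bot2", "new_account"),  # bot amplification
    ("fan1", "legacy_account"), ("fan2", "legacy_account"),
    ("fan3", "legacy_account"),                        # organic amplification
])

followers = {"bot1": 2, "bot2": 4, "fan1": 400, "fan2": 250, "fan3": 900}

# Crude proxy for reach: the total audience of the accounts doing the retweeting.
for target in ("new_account", "legacy_account"):
    reach = sum(followers[src] for src, _ in G.in_edges(target))
    print(f"{target}: retweeted by {G.in_degree(target)} accounts, reach ≈ {reach}")
```

Running this prints a reach of 6 for the bot-amplified account against 1,550 for the established one: the study’s finding in miniature.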

The authors expected their bots to be more successful in swaying the political dialog on Twitter, but came to understand that “social influence, even over technologies that allow bots, is a product of capital,” including the kind of social capital that can be acquired by cheating. They conclude that “the most effective bots may be the ones we cannot study.”

Reference

Murthy, Dhiraj, Alison B. Powell, Ramine Tinati, Nick Anstead, Leslie Carr, Susan J. Halford, and Mark Weal. 2016. “Automation, Algorithms, and Politics| Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital.” International Journal of Communication 10 (0): 20. https://ijoc.org/index.php/ijoc/article/view/6271.

Government Social Media in Indonesia: Just Another Information Dissemination Tool

No matter how much Mark Zuckerberg promises that the goal of Facebook has always been to “connect” the world, it’s increasingly clear that social media might not actually be the most effective tool for accomplishing that goal. Though social media sites like Facebook and Twitter can make two-way communication between entities easier from a logistical standpoint, scholars remain divided on whether social media has lived up to its possibilities in the political realm.

Idris (2018) examines this possibility for two-way communication between government entities and individuals in Indonesia using social media, finding that two-way communication is much more of a social media ideal than a reality. She specifically looks at two Indonesian government agencies’ social media presences, using social network analysis to determine how and when they interacted with other social media users. For the most part, it turns out they don’t. She says, “…the Indonesian government mostly used social media to disseminate governmental information, to dominate social media conversation, and to amplify governmental messages… Thus, advanced communication technology was not used to transform the government to become more engaging and transparent” (p. 352). Basically, just because social media creates the opportunity for dialogue between governments and citizens doesn’t ensure that governments read, consider, or acknowledge citizens’ responses.
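As a rough illustration of what such a social network analysis can reveal (the accounts and edges below are invented, not Idris’s data), one can build a directed interaction graph and check whether agency accounts ever direct replies or mentions at ordinary users rather than only at each other:

```python
# Invented interaction graph: edge (a, b) means account a replied to or
# mentioned account b. A one-way pattern suggests dissemination, not dialogue.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("citizen1", "agency"), ("citizen2", "agency"), ("citizen3", "agency"),
    ("agency", "ministry"), ("ministry", "agency"),  # agencies amplify each other
])

outbound = list(G.successors("agency"))
citizen_replies = [a for a in outbound if a.startswith("citizen")]
print(f"agency received {G.in_degree('agency')} inbound interactions")
print(f"agency replied to {len(citizen_replies)} citizens "
      f"out of {len(outbound)} outbound interactions")
```

In this toy graph the agency answers the ministry but none of the three citizens who contacted it, which is the broadcast-only pattern Idris documents.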

Without two-way communication, there is little or no difference between propaganda and the government information and PR campaigns disseminated on social media (p. 338). However, the use of social media allows governments to maintain the illusion of increased communication with citizens while actually perfecting their propagandistic techniques. When communicating directly on social media, a government can effectively bypass traditional media, releasing content exactly as it sees fit and keeping journalistic scrutiny out of the initial message. Governments can also manipulate social media algorithms to amplify their own content, using nothing more than networks of government social media accounts. Idris describes the objective of President Widodo’s network of governmental social media accounts as “to counter negative opinions about the government and at the same time make government information go viral” (p. 350). Though downright measly compared to something like Russian bot networks, these networks of official government accounts can be enough to spread information and shape conversation. Governments using social media for information dissemination also have the opportunity to test and reshape their messages in real time. Both the Obama and Trump campaigns in the U.S. saw impressive results using methods like A/B testing to craft and recraft their social media advertisements with incredible precision (Bashyakarla, 2019).
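For readers unfamiliar with the mechanics, the sketch below shows the basic arithmetic behind an A/B message test: two ad variants are shown to comparable audiences, and a two-proportion z-test indicates whether the difference in click-through is more than noise. The counts are invented for illustration.

```python
# Minimal A/B test evaluation: compare click-through rates of two message
# variants with a two-proportion z-test. All counts are made up.
from math import sqrt
from statistics import NormalDist

clicks_a, views_a = 120, 5000   # variant A (hypothetical)
clicks_b, views_b = 165, 5000   # variant B (hypothetical)

p_a, p_b = clicks_a / views_a, clicks_b / views_b
p_pool = (clicks_a + clicks_b) / (views_a + views_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"CTR A = {p_a:.2%}, CTR B = {p_b:.2%}, z = {z:.2f}, p = {p_value:.4f}")
```

Campaigns run comparisons like this continuously across many variants, keeping whichever message wins and mutating it again.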

Social media makes a lot of things possible that were not before. This includes both increased transparency and easier back-and-forth communication between governments and citizens, but also easier dissemination of perfectly crafted propaganda. Idris makes it clear which of these aims the Indonesian government is pursuing.

Paper

Idris, I. K. (2018). Government social media in Indonesia: Just another information dissemination tool. Jurnal Komunikasi: Malaysian Journal of Communication, 34(4), 337–356. https://doi.org/10.17576/JKMJC-2018-3404-20

Additional References

Bashyakarla, V. (2019). A/B Testing: Experiments in campaign messaging. Retrieved from https://ourdataourselves.tacticaltech.org/posts/ab-testing

Social Media and Politics in Indonesia

Johansson (2016) gives a solid background on the state of media in Indonesia, both traditional and digital. He explains how a narrowly controlled traditional media in a democracy as new as Indonesia created favorable conditions for social media to break through and disrupt the spread of information, focusing largely on the potential for positive change.

He describes issues with Indonesia’s print and television media, starting with their vulnerability to being completely controlled by just a few elite members of society, i.e. elite capture or media capture. He elaborates on how much of Indonesia’s media is owned or influenced by figures tied to politics, including direct family members of politicians (p. 17). He also describes the rise of media conglomerates. In short, he describes a media ecosystem in which power is held by very few people with ties to other powerful people, working towards a future with less and less competition, all of which can contribute to increased media bias.

Next, he explains the culture of social media in Indonesia, and the effect it’s had on political messaging and campaigning. Social media is wildly popular in Indonesia, with users spending an average of 2.9 hours on social media each day, compared to just 1.7 hours of use in the United States (p. 25). Social media is an attractive place for political messaging not only because of its popularity, but also due to “the cost of campaigning on a steady increase, limited political financing, problems with money politics and the limits of traditional media” (p. 25). Johansson also touches on social media strategies from the 2014 presidential election, explaining that Jokowi’s use of a massive volunteer network coordinating and posting on social media ultimately won out over Prabowo’s smaller and more professional social media team.

Although Johansson mentions propaganda only sparingly, his paper works as a useful, fairly comprehensive account of the media landscape in Indonesia, both present and historical. His few words on propaganda are also useful, explaining how, viewed through the lens of framing theory among others, media can function primarily as a vehicle for disseminating propaganda. Finally, he warns that effective political messaging on social media may be dangerous, as it can “result in an ever-increasing difficulty for citizens to differentiate between news, propaganda, and opinions” (p. 37).

Paper:

Johansson, Anders C., 2016. “Social Media and Politics in Indonesia,” Stockholm School of Economics Asia Working Paper Series 2016-42, Stockholm School of Economics, Stockholm China Economic Research Institute.

The Disinformation Order: Disruptive Communication and the Decline of Democratic Institutions

In this 2018 study published in the European Journal of Communication, W. Lance Bennett and Steven Livingston trace the roots of online political disinformation affecting democratic nations.  They argue that declining confidence in democratic institutions makes citizens more inclined to believe false information and narratives, and spread them more broadly on social media. They identify the radical right, enabled and supported by Russia, as the predominant source of disinformation, and cite examples in the UK Brexit campaign, disruptions affecting democracies in Europe, and the U.S. election of Donald Trump.

Many articles on social media and political communications provide differing definitions of disinformation, misinformation, and fake news. Bennett and Livingston offer their own provisional definition: “intentional falsehoods spread as news stories or simulated documentary formats to advance political goals” (p. 124). In addition to those who share disinformation originating on the Internet, they identify legacy media as an important factor in spreading it further. They say that when news organizations report on false claims and narratives, the effect is to amplify the disinformation. Even fact-checking can strengthen this amplifier effect, because the message is exposed and repeated to more people. As traditional news institutions are attacked as “fake news,” journalistic attempts to correct the record can be cited by propagandists and their supporters as proof of an elite conspiracy to hide the truth. The authors refer to this dynamic as the “disinformation-amplification-reverberation (DAR) cycle.”

It’s interesting that both the political left and right increasingly share contempt for neoliberal policies that benefit elites. But instead of coming together to address political and economic problems, they are being driven further apart by “strategic disinformation.” This hollowing out of the center produces a growing legitimacy crisis, and political processes that are increasingly superficial. The authors term this post-democracy: “(t)he breakdown of core processes of political representation, along with declining authority of institutions and public officials” (p. 127).

The authors identify Russia as the primary source of disinformation and disruptive hacking in an increasing number of western democratic and semi-democratic nations: Germany, the UK, The Netherlands, Norway, Sweden, Austria, Hungary, Poland, Turkey, and most of the Balkans. They say Russia has learned to coordinate a variety of hybrid warfare tactics that reinforce their impact, such as troll factories, hackers, bots, and the seeding of false information and narratives by state-owned media channels. As other researchers have argued, Bennett and Livingston say Russia’s disinformation activities are geostrategic, aimed at undermining NATO and the cohesiveness of democratic nations who oppose the expansion of Russia’s power.

In response to the scale of disinformation and disruptions in democratic institutions, Bennett and Livingston suggest comparative research on the characteristics of disinformation in different societies, so as to identify similarities and differences, and the identification of contextual factors that provide either fertile ground for or resistance to disinformation. They also recommend that the current operations of trolls, hackers, and bots should be more central to political communications studies.

Reference

Bennett, W. Lance, and Steven Livingston. 2018. “The Disinformation Order: Disruptive Communication and the Decline of Democratic Institutions.” European Journal of Communication 33 (2): 122–39. https://doi.org/10.1177/0267323118760317.

Russia’s Improved Information Operations: From Georgia to Crimea

In the Western press, much attention has been focused on Russia’s interference in the 2016 U.S. election by spreading disinformation broadly on the Internet and social media platforms. Russia (and of course the United States) has long used propaganda as a psychological weapon in hot wars, cold wars, and even times of relative peace. In an article published in the US Army War College journal Parameters, Emilio Iasiello, a cyberintelligence advisor to Fortune 100 clients, says “nonkinetic options” are now a core part of Russia’s military and geopolitical strategy: using information and deception to disrupt opponents and influence internal and global audiences.

But propaganda hasn’t always prevailed. Iasiello reviews Russia’s information operations in its 2008 invasion of Georgia, and finds that Georgia ultimately won the information war. Russia relied on pre-Internet propaganda tactics such as using traditional media to deliver key messaging to the international community, and trying to position Georgia as the aggressor and Russia as merely defending its citizens. But Georgia fought back with its own extensive counterinformation campaign, and ultimately won the battle for international support.

Iasiello says Russia may have lost the Georgian conflict, but it learned that the Internet could be used as a weapon, and it began revising and expanding its information war strategy. In its 2014 annexation of the Crimea region of Ukraine, Russia applied the lessons of the Georgian conflict to orchestrate a rapid and nearly bloodless victory. Russian state actors directed cyberattacks to shut down Crimea’s telecommunications and websites and to jam the mobile phones of key Ukrainian officials. Russian hackers intercepted documents on Ukrainian military strategy, launched distributed denial-of-service (DDoS) attacks on Ukrainian and NATO websites, disrupted the Ukrainian Central Election Commission network, planted “fake news” on fake websites and in Russian media, and employed a cadre of trolls to comment on news and social media for the purpose of distorting reality and confusing Ukraine’s allies.

According to Iasiello, the 2014 Crimean annexation was a case study in the use of social media to control messaging and sow discord among the Ukrainian population and the international community. Thus was born Russia’s new strategy of “hybrid” warfare: using trolls, fake websites, social media, and the international news media to spread disinformation and confusion about a conflict at massive scale.

Iasiello says Russia is vastly outpacing the United States in information war tactics and is using its experience to refine its strategies for different conflicts. In essence, Russia is playing a long game of discord and division to weaken Western alliances. His recommendations include developing a U.S. counterinformation center, using analytics and artificial intelligence to identify online disinformation, and increasing international cooperation to combat the various forms of Russian propaganda. He concludes that the Internet and social media are now an international battleground, and that Russia is currently winning the information war.

Reference

Iasiello, Emilio. 2017. “Russia’s Improved Information Operations: From Georgia to Crimea.” US Army War College Quarterly: Parameters 47 (2): 51–63. https://www.hsdl.org/?abstract&did=803998.

The Market of Disinformation

This report was produced by Oxford Information Labs to explicate the problem of disinformation on social media and to make actionable recommendations for the UK Electoral Commission. The authors do an admirable job of describing disinformation strategies deployed by political campaigns, with specific examples from recent events, including the inevitable reference to Cambridge Analytica.

The report seems to be written for an audience that may not know what an algorithm is… although the initial explanation of algorithms as “calculations coded in computer software” and “opinions embedded in mathematics” is unlikely to be of much help. From there, the report gets to the heart of the matter: the bias of social media algorithms is to keep people “engaged.” This is a lovely word, but in the context of Facebook and Twitter it means “trigger people’s emotions to keep them scrolling, clicking, liking, and sharing for as long as humanly possible without literally dying of dehydration” (my wording), preferably in many sessions per person per day.

This is what “optimization” means in social media, and the platforms can afford many thousands of engineers and experience designers to do it. The authors don’t let Google off the hook, either, and they do a reasonable job of explaining web crawling, relevance algorithms, and SEO. They outline recent changes to Facebook’s algorithm and explain why different Facebook users see different things, which leads into an explanation of psychological profiling, personal data aggregation, and microtargeting.
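To make the engagement bias concrete, here is a minimal sketch (my illustration, not from the report) of how an engagement-optimized feed ranker might work; every feature name and weight below is hypothetical. The point to notice is that nothing in the objective rewards accuracy:

    from dataclasses import dataclass

    @dataclass
    class Post:
        post_id: str
        p_click: float          # predicted probability of a click
        p_share: float          # predicted probability of a share
        p_comment: float        # predicted probability of a comment
        predicted_dwell: float  # predicted seconds of attention

    # Hypothetical weights: shares and comments keep people on the
    # platform longer than passive clicks, so they count for more.
    WEIGHTS = {"p_click": 1.0, "p_share": 4.0, "p_comment": 3.0,
               "predicted_dwell": 0.1}

    def engagement_score(post: Post) -> float:
        """Score a post purely by predicted engagement."""
        return (WEIGHTS["p_click"] * post.p_click
                + WEIGHTS["p_share"] * post.p_share
                + WEIGHTS["p_comment"] * post.p_comment
                + WEIGHTS["predicted_dwell"] * post.predicted_dwell)

    def rank_feed(candidates: list[Post]) -> list[Post]:
        """Order a user's candidate posts, most "engaging" first."""
        return sorted(candidates, key=engagement_score, reverse=True)

Anything that reliably triggers shares and comments, outrage included, rises to the top of the feed; that is precisely the bias the report describes.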

I think the most important point they make is that “(f)uture electoral policy and oversight should be informed by the fact that online and offline actions are necessarily linked, with the offline elements being key enablers of online uses and abuses.” In other words, the older tricks of political propagandists haven’t been replaced by social media; they’ve been augmented by it.

The authors recommend specific measures the UK Electoral Commission could try to put in place. As with many ideas for regulating social media, they seem worthy of consideration but may prove impractical. For example, digitally imprinting campaign material with the source of the information could improve transparency. Location verification of messages could help even more. Campaigns could be penalized for violations with financial sanctions that actually hurt. And finally, transparency in the financing of organizations and people behind political messages might limit the activities of truly bad actors. The objection in the West is likely to be “but Free Speech and Free Markets!” (Here in the U.S. we have the Supreme Court decision in Citizens United v. FEC, which basically says money is speech, so you can’t stop money.)
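To make the first recommendation concrete, here is a rough sketch (mine, not the report’s) of what a digital imprint might look like: a provenance record bound to the exact content by a signature a regulator could verify. The record fields, the key handling, and the signing scheme are all assumptions for illustration:

    import hashlib
    import hmac
    import json

    # Placeholder key; in practice a registered campaign would hold a
    # signing key verifiable by the Electoral Commission.
    SIGNING_KEY = b"registered-campaign-secret"

    def make_imprint(material: bytes, sponsor: str, location: str) -> dict:
        """Build a provenance record: who sponsored the material, where
        it was declared to originate, and a hash of the exact content."""
        record = {
            "sponsor": sponsor,
            "declared_location": location,
            "content_sha256": hashlib.sha256(material).hexdigest(),
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(SIGNING_KEY, payload,
                                       hashlib.sha256).hexdigest()
        return record

    def verify_imprint(material: bytes, record: dict) -> bool:
        """Check that neither the content nor the record was altered."""
        claimed = dict(record)
        signature = claimed.pop("signature")
        payload = json.dumps(claimed, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(signature, expected)
                and claimed["content_sha256"]
                    == hashlib.sha256(material).hexdigest())

Even this toy version shows where the hard part lies: the cryptography is trivial, while registering sponsors, distributing keys, and enforcing penalties are offline problems, which is exactly the report’s point about online and offline being linked.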

The measures suggested in this report aim to “future-proof” election policies. Elections are special cases, where (in theory) the outcome supports democratic governance. Elections are too important to just say “oh well, free speech and free markets, I guess we can’t do anything about political disinformation.” Some of these recommendations might make a difference in reducing disinformation in political campaigns today. As for future-proofing future elections, I suspect we’re going to need more future reports.

Reference

Hoffmann, Stacie, Emily Taylor, and Samantha Bradshaw. 2019. “The Market of Disinformation.” https://comprop.oii.ox.ac.uk/research/oxtec-disinfo-market/.

Digital Media and the Surge of Political Outsiders: Explaining the Success of Political Challengers in the United States, Germany, and China

The U.S. has a long history of outsiders running for president or gunning for power in general. Labor leader Eugene Debs ran five times as a Socialist candidate for president and won 6 percent of the popular vote in 1912. Ronald Reagan ran as a “Washington outsider,” though he had already established political credentials as the governor of California. Ross Perot, a billionaire businessman, ran as an independent in 1992, winning about 19 percent of the popular vote. But in most bids for the presidency and other high offices, outsiders have faced insurmountable obstacles in gaining the media coverage, financial support, and voter constituencies generated by established parties running traditional campaigns. That is, until the internet.

Jungherr, Schroeder, and Stier say the advent of digital media fundamentally changed the political playing field by allowing outsiders to bypass traditional gatekeepers and established institutions. By “digital media” they mean “the set of institutions and infrastructures allowing the production, distribution, and searching of information online” (Jungherr, Schroeder, & Stier, p.2). In other words, the internet and especially social media. This seems intuitively true, and the authors provide concrete examples from three very different political scenarios: the 2016 U.S. presidential election of Donald Trump; the rise of the left-leaning Pirate Party and the far-right AfD in Germany; and ultranationalist activists in China.

In each case, the outsider campaigners used an online presence to attract attention and support, often while inciting controversy and making outrageous claims about the political establishment and status quo. As the outsiders’ social media audience grew, they gained coverage in traditional media which served to raise their visibility and further broadcast and amplify their messages. In many cases the controversial rhetoric of the outsiders, inserted into digital media space and amplified by traditional media, has shifted the Overton Window of tolerable political discourse (Mackinac Center for Public Policy), leading to policies and actions that were previously anathema. Digital media thus allows outsiders to mount ongoing campaigns that challenge the very legitimacy of the institutions that served as gatekeepers of political language and power in the pre-internet world.

It’s interesting that the authors cite Barack Obama’s use of digital media in the 2008 U.S. presidential election but fail to mention Howard Dean’s innovations in 2004. Dean, the early frontrunner, raised most of his campaign funding from small donors through the internet (CNN 2003), built a massive email list his campaign used to communicate with supporters, and was one of the first presidential candidates to establish a strong online presence through a sophisticated campaign website (Howard Dean Campaign 2004). His bid for the White House began to slump after the Iowa caucuses, when his performance of what became known as the “I Have a Scream” speech ignited a media feeding frenzy that quickly spread far and wide online (BBC News 2004). As the internet giveth, so too it can taketh away.

The authors say their study provides “a novel explanation that systematically accounts for the political consequences of digital media” (Jungherr, Schroeder, & Stier, p.1). The clarity with which they present evidence and the range of examples they cite strongly support this argument. Notably, they say the effect of digital media in politics is not deterministic; it simply provides an opportunity not available before the internet. They argue that this opportunity can be used by outsiders across the political and ideological spectrum.

But the examples cited focus on the rise of right-wing and would-be authoritarian outsiders, even in China where the authors say the government largely tolerates online activities of Chinese ultranationalists. Beyond this paper, further research might document and analyze examples of the “digital media effect” (my term, not the authors’) on the rise of progressive outsiders whose concerns include things like energy and environmental policy sanity, economic and social equity, and universal human rights.

Reference

Jungherr, Andreas, Ralph Schroeder, and Sebastian Stier. 2019. “Digital Media and the Surge of Political Outsiders: Explaining the Success of Political Challengers in the United States, Germany, and China.” Social Media + Society 5 (3): 2056305119875439. https://doi.org/10.1177/2056305119875439.

Additional References

CNN. 2003. “Dean to Let Supporters Decide Whether to Abandon Public Financing.” November 5, 2003. http://edition.cnn.com/2003/ALLPOLITICS/11/05/elec04.prez.dean.financing/index.html.

BBC News. 2004. “‘Dean Scream’ Becomes Online Hit.” January 23, 2004. http://news.bbc.co.uk/2/hi/americas/3422809.stm.

Howard Dean Campaign. 2004. “Howard Dean for America.” Archived January 29, 2004. https://web.archive.org/web/20040129143845/http://howarddean.com/.

Mackinac Center for Public Policy. n.d. “The Overton Window.” Accessed October 7, 2019. http://www.mackinac.org/OvertonWindow.