Political Propaganda & Social Media: A Project Summary and Critique

We began this project with somewhat lofty goals. We wanted to develop a comparative analysis of the impact of social media influence on the behavior and governance of the people in the regions examined; to understand how similar forces manifest in different ways in different cultures and political conditions; and to contribute to existing literature on social media disinformation and make it more accessible. The scale of the topic meant we could attain these goals only by adding the words “scratch the surface” to each of them. But despite falling short of our original ambitions, we achieved some unexpected things.

Firstly, in searching for useful primary sources on social media and political disinformation, we became much more aware of existing research by scholars, government bodies, think tanks, and NGOs. Just a few short years ago, it was common to assume social media would liberate people from the tyranny of one-way mass media controlled by large corporations, governments, and oligarchs. It is now darkly amusing to read popular and scholarly literature on social media written just five years ago. Today there are thousands of seemingly credible sources of research exploring the current disinformation environment and its impact on politics.

Given the wealth of available research materials, almost all of which are accessible online, we tried to identify the most relevant examples. It’s likely that we failed in that as well, but the primary sources we selected are generally representative of the existing body of research. We also chose to narrow our initial focus from “the world” to certain areas of the world, specifically Indonesia and Europe. Some of the most interesting examples of social media propaganda are now occurring in Africa, South America, and of course China (and by “interesting” we mean horrific), but we had to set those regions aside, at least for now.

The second unexpected thing was a clear correlation between propagandistic messages sponsored by state actors and changes in the political rhetoric of those targeted. As if we didn’t know this: propaganda can work. For example, there is a preponderance of evidence that Russia’s disinformation campaign to position Ukraine as a corrupt state and perpetrator of various conspiracies is not only influencing opinions among populations in Europe but is also being loudly echoed by the President of the United States and members of his political party.

But propaganda doesn’t always work. For example, in Anke Schmidt-Felzmann’s account of Russian disinformation operations in the Nordic countries of Europe, attempts to undermine support for the EU and NATO are gaining very little traction (Schmidt-Felzmann 2017). In contrast, the same messages are resonating broadly in Central and East European countries, whose populations and political leaders are friendlier to Russia and more suspicious of the United States, the EU, and NATO (Wierzejski et al. 2017).

A third surprise dawned on us over the months of working on this project: the use of social media for political propaganda is rapidly evolving, and we are merely capturing a screenshot (so to speak) of this moment. While use of the Internet for strategic disinformation predates the 2016 U.S. presidential election, the disruption of that election, along with elections in Africa and India and the Brexit referendum, brought into sharp relief the scale at which online political propaganda is now being deployed. As the actors behind it acquire more resources and learn from their successes and failures, and as more “innovation” is piled onto our current systems of ubiquitous information, we are likely to see a continuing evolution of disinformation strategies and tactics.

Comparing Indonesia and Russia: State Roles in the Spread of Propaganda

Any attempt to analyze the use of propaganda in two different countries and contexts might be a fool’s errand. It’s difficult to shrink entire countries into stories small enough to compare neatly, and doing so puts the analyst at risk of reducing each country to a single convenient narrative. However, for argument’s sake, let’s try it out:

Russia might be seen as the puppet master, controlling armies of bots and trolls to create havoc in many target countries, and sowing the seeds of discord, distrust, and disinformation to weaken democracies worldwide. Indonesia could be cast as a relatively blameless victim country, a young democracy subjected to propaganda and fake-news attacks from religious groups, and possibly from Russia itself (Sapiie and Anya 2019). The takeaway might be that Russia, a nuclear power with imperialistic ambitions, has the motivation and resources to spread its propaganda across the globe, while countries like Indonesia do their best to overcome the propaganda threatening their democracy.

Obviously it isn’t that simple. Russia isn’t the only country sponsoring propaganda or attempting to influence the political activity of other countries. The Indonesian government isn’t completely innocent of sponsoring its own propaganda. It would be naïve to regard states as monolithic actors, particularly when it comes to their presence on social media. Finally, attempting to compare propaganda activities in very different countries runs the risk of perpetuating our own received colonial narratives, casting some as the villain and others as the innocent victim. In the world of social media disinformation, it may not be obvious who is colonizing whom.

Theoretical Frameworks

Is there a theory of social media that sheds light on current phenomena, and allows us to confidently make predictions? Or are the pieces moving too fast to do more than merely describe? We explore here the application of two prominent theories in communications research: Framing and Media Ecology.

Framing Theory

Framing Theory fits neatly into the conversation about propaganda on social media. As defined by Entman, framing means to “select some aspects of a perceived reality and make them more salient in a communicating text, in such a way as to promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation” (Entman 1993). In contrast to agenda setting or priming, framing sets not only the topic of discussion but also its terms.

Broadly stated, the effect of framing is to construct a social reality people will use to interpret information and events. Like pre-Internet media, social media can provide “a central organizing idea or story line that provides meaning to an unfolding strip of events . . . The frame suggests what the controversy is about, the essence of the issue” (Gamson and Modigliani 1987).

In traditional print and broadcast media, the power of framing is in the hands of journalists, editors, publishers, producers, networks, and the like, and there is a clear division between framers and audiences. Social media dissolves this division as “the people formerly known as the audience” become involved in the framing (Rosen 2012). On social media platforms it is often unclear what is being framed or who has the power to do the framing. Twitter and Facebook don’t create the content users see, but the algorithms that control our timelines determine what information we are exposed to. The power to set frames thus belongs to anyone with the ability to leverage the algorithms. This can be good; it allows people other than those traditionally in power to present frames of their own, potentially making audiences aware of a wider range of viewpoints, influences, problems, and solutions.

But as we see in the research presented here, social media also increases the potential for deception and manipulation. When propagandistic content floods our newsfeeds, it is increasingly difficult to identify the true authors (is this a real individual or a bot?), the audience reach (is everyone seeing this, or has it been algorithmically selected for our tastes?), and the purpose of the content. Clearly, framing theory is a useful lens for evaluating disinformation on social media. Research might identify the original source of information attempting to “promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation,” and attempt to follow the acceptance of the frame by audiences (Entman 1993).

This approach to analyzing disinformation on social media makes use of framing as “a theory of media effects” (Scheufele 1999). Goffman’s concept of “social frameworks” seems particularly well-suited to examining the effects of social media. We are social animals, and social media platforms have become an important site for our social connections. Our interpretations of information and events are influenced by our social connections, whether or not we are conscious of that influence (Goffman 1974).

Media Ecology Theory

We are aware there is considerable disagreement in the academic world about Marshall McLuhan, but the Media Ecology framework seems particularly well suited for analyzing the technological, social, and political complexities of this particular epoch of the information age.

McLuhan wrote about media as “extensions of some human faculty” (McLuhan and Fiore 1967), and about “the new scale that is introduced into our affairs by each extension of ourselves, or by any new technology” (McLuhan 1964). Media ecology theory frames the Internet and social media as hyperextensions of every human sense. And on the Internet those extensions are interconnected by a global network of devices that can send and receive information literally at the speed of light, “extending our central nervous system itself in a global embrace, abolishing both space and time as far as our planet is concerned” (McLuhan 1964).

But media ecology theory “is not exhausted by the thought of Herbert Marshall McLuhan” (Islas and Bernal Suárez 2016). Some of the post-McLuhan scholarship directly addresses the social and political effects of digital media. Robert K. Logan, a former colleague of McLuhan, suggests that in a flip reversal of media as extensions of man, “the human users of digital media become an extension of those digital media as these media scoop up their data and use them to the advantage of those that control these media…The feedback of the users of digital media become the feedforward for those media” (Logan 2019).

Logan is primarily concerned with the abuse of personal data for persuasive communications by digital media monopolies such as Google, Facebook, Amazon, and Twitter. But the same kinds of personal data and persuasive technologies are being used by the propagandistic actors in the scenarios described in this project. They aren’t the owners of the technologies, but they don’t have to be. In today’s neoliberal, unregulated “free market,” social media networks are open to use or abuse at scale by anyone with enough resources. As suggested in the study Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital, the resources required for effective social media propaganda operations are beyond the means of anyone but large institutional actors like governments (Murthy et al. 2016). And as is clear in Emilio Iasiello’s article Russia’s Improved Information Operations: From Georgia to Crimea, governments are now budgeting for disinformation campaigns aimed at national and global audiences as a vital part of their geopolitical and military strategies (Iasiello 2017). As applied to the Internet age, McLuhan’s frame is still relevant: the medium is the message, and the user is the content (McLuhan 1964).

Conclusion

During this project we chose to rely primarily on published academic and government studies. In some cases we reviewed reports from non-profit organizations focused on digital disinformation and security studies. While news reports could have been helpful in providing the most recent accounts of political disinformation, we decided to avoid possible issues of journalistic story framing. We did our best to vet all sources for credibility, and to weed out resources showing signs of ideological and political bias. Our methodology included an examination of the authors, their body of research, and their institutional affiliations. We believe our choices are justifiable, but our inclusion of these sources does not imply wholesale endorsement of the authors or the information and views they express.

Due to rapid changes in technologies used for disinformation and the circumstances of its use, it is likely that much of today’s research will soon be obsolete. An obvious response to this is more research, and it’s clear from our work on this project that more research is coming. A variety of new institutions and initiatives are beginning to systematically study and counter digital disinformation. Which also raises a caution: Will we begin to see disinformation in disinformation research? All the more reason for us to be critical of our sources, and select only those we can reasonably identify as credible.

Coda

Any analysis of the actions and attitudes of governments and other informational actors will inevitably be shaped by the values and views of the authors. Because a discussion of the authors’ perspectives is rarely included in published works, audiences may assume that the analysis is intended to be “objective,” and that the author occupies “the view from nowhere” (Rosen 2003). We wish to make our values and views explicit so as to avoid any ambiguity about our perspectives and motivations.

As librarians we understand that “the values and ethics of librarianship are a firm foundation for understanding human rights and the importance of human rights education,” and that “human rights education is decidedly not neutral” (Hinchliffe 2016, p.81). While there can be different arguments about the merits and flaws of different political and economic systems, the role of corporations and governments, and the obligations of citizens, we are strongly in favor of free expression, self-determination, and social justice. We believe all people have an absolute right to knowledge, and we regard influence operations designed to deceive, confuse, or divide people and nations as violations of their human rights and dangerous to the future of world peace. The Internet has become a medium for influencing the thoughts and behavior of people across the globe. Disinformation is not new, but its potential for disruption has never been greater.

We view social media as potentially a net positive for human welfare and civic life. For now, let’s just say it’s a work in progress.


References

Entman, Robert M. 1993. “Framing: Toward Clarification of a Fractured Paradigm.” Journal of Communication 43 (4): 51–58. https://doi.org/10.1111/j.1460-2466.1993.tb01304.x.

Gamson, William, and Andre Modigliani. 1987. “The Changing Culture of Affirmative Action.” In Research in Political Sociology, edited by Richard Braungart, 137–177. Greenwich, CT: JAI Press.

Goffman, Erving. 1974. Frame Analysis: An Essay on the Organization of Experience. Cambridge, MA: Harvard University Press.

Hinchliffe, Lisa Janicke. 2016. “Loading Examples to Further Human Rights Education.” https://www.ideals.illinois.edu/handle/2142/91636.

Iasiello, Emilio. 2017. “Russia’s Improved Information Operations: From Georgia to Crimea.” Parameters 47 (2): 51–63. https://www.hsdl.org/?abstract&did=803998.

Islas, Octavio, and Juan Bernal Suárez. 2016. “Media Ecology: A Complex and Systemic Metadiscipline.” Philosophies 1 (October): 190–98. https://doi.org/10.3390/philosophies1030190.

Logan, Robert K. 2019. “Understanding Humans: The Extensions of Digital Media.” Information 10 (10): 304. https://doi.org/10.3390/info10100304.

McLuhan, Marshall. 1964. Understanding Media: The Extensions of Man. New York: McGraw-Hill.

McLuhan, Marshall, Quentin Fiore, and Jerome Agel. 1967. The Medium Is the Massage. New York: Bantam Books.

Murthy, Dhiraj, Alison B. Powell, Ramine Tinati, Nick Anstead, Leslie Carr, Susan J. Halford, and Mark Weal. 2016. “Automation, Algorithms, and Politics| Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital.” International Journal of Communication 10 (0): 20. https://ijoc.org/index.php/ijoc/article/view/6271.

Rosen, Jay. 2003. “PressThink: The View from Nowhere.” http://archive.pressthink.org/2003/09/18/jennings.html.

Rosen, Jay. 2012. “The People Formerly Known as the Audience.” In The Social Media Reader, edited by Michael Mandiberg. New York: NYU Press. www.jstor.org/stable/j.ctt16gzq5m.

Sapiie, M. A., and A. Anya. 2019. “Jokowi Accuses Prabowo Camp of Enlisting Foreign Propaganda Help.” The Jakarta Post, February 4, 2019. https://www.thejakartapost.com/news/2019/02/04/jokowi-accuses-prabowo-camp-of-enlisting-foreign-propaganda-help.html.

Scheufele, Dietram A. 1999. “Framing as a Theory of Media Effects.” Journal of Communication 49 (1): 103–22. https://doi.org/10.1111/j.1460-2466.1999.tb02784.x.

Schmidt-Felzmann, Anke. 2017. “More than ‘Just’ Disinformation: Russia’s Information Operations in the Nordic Region.” In Information Warfare: New Security Challenge for Europe, 32–67. Centre for European and North Atlantic Affairs.

Wierzejski, Antoni, Jonáš Syrovatka, Daniel Bartha, Botond Feledy, András Rácz, Petru Macovei, Dušan Fischer, and Margo Gontar. 2017. “Information Warfare in the Internet: Countering Pro-Kremlin Disinformation in the CEE Countries.” Centre for International Relations. https://www.academia.edu/34620712/Information_warfare_in_the_Internet_COUNTERING_PRO-KREMLIN_DISINFORMATION_IN_THE_CEE_COUNTRIES_Centre_for_International_Relations_and_Partners.

More than ‘Just’ Disinformation: Russia’s Information Operations in the Nordic Region

This book chapter by Anke Schmidt-Felzmann of the Swedish Institute of International Affairs provides an overview of Russian disinformation tactics and messages in the Nordic countries: Finland, Sweden, Denmark, Norway, and Iceland.

The author begins with the context of why Russia would be interested in social-political influence in the region: Norway and Finland share borders with Russia; Norway is a net exporter of oil and gas, and thus in competition with Russia on world energy markets; Denmark and Russia have an ongoing dispute over resource-rich territories along the continental shelf; Sweden has a visible role in political reform in Ukraine, and in the EU’s Eastern Partnership initiative.

Perhaps more importantly, the five Nordic countries have strong relationships with both NATO and the EU – although only Iceland, Denmark, and Norway are NATO members. All five countries condemned Russia’s annexation of Crimea and implemented sanctions.

Schmidt-Felzmann discusses Russia’s use of different channels, social media, and IT tools for “socio-psychological manipulation” in the Nordic region. Interestingly, she singles out the manipulation of individual human beings, including journalists and politicians, as both targets and tools of disinformation. Tactics cited by the author include intimidation and disinformation campaigns against individuals critical of Russian policies, and the use of trolls and bots on social media. The case of Finnish journalist Jessikka Aro is an instructive example. In 2015, her reporting on online trolling identified the building in St. Petersburg housing the now-infamous Internet Research Agency. She soon became the target of personal attacks and harassment on social media by the same trolls.

According to Schmidt-Felzmann, Russian information operations in the Nordic region seem to be aimed at discrediting NATO and the EU, and at positioning Russia as an innocent victim of “Russophobia” promulgated by the West. Accusations of anti-Russian bias are leveled against journalists and politicians on Russian-sponsored media, online forums, and social media platforms.

Interestingly, Schmidt-Felzmann says, Russian attempts to establish Nordic platforms for its news operations Sputnik and RT failed less than a year after their launch in 2015.

In summary, the Nordic nations appear to have shown considerable resistance to Russian information operations, and are engaged with the EU and NATO in developing multinational research centers and countermeasures: identifying and responding to disinformation, training people to recognize malicious information, coordinating the exchange of information between agencies, and even developing their own influence operations.

Reference

Schmidt-Felzmann, Anke. 2017. “More than ‘Just’ Disinformation: Russia’s Information Operations in the Nordic Region.” In Information Warfare: New Security Challenge for Europe, 32–67. Centre for European and North Atlantic Affairs.

 

Information Warfare in the Internet: Countering Pro-Kremlin Disinformation in the CEE Countries

In this report, published in 2017 by the Poland-based Centre for International Relations and funded by the International Visegrad Fund, authors from seven Central and East European (CEE) countries analyze the disinformation tactics, channels, and messaging Russia currently uses to target their respective nations. The data were drawn from the period between July and October 2017, although general trends were also assessed. The authors find that while Russia’s propaganda tactics are similar throughout the CEE countries, messages are often tailored to maximize impact based on the politics of each country.

The presentation of country-specific data in the report follows a similar format. For example, the section on the Czech Republic, written by Jonáš Syrovatka of the Prague Security Studies Institute, describes the channels used to spread propagandistic messages, and the reach of each channel. The main channels are conspiracy websites, alternative media, semi-legitimate “bridge media,” Facebook, and YouTube. Messaging and language vary depending on the normative style and tone of each channel. Syrovatka identifies the combination of channels and messaging as key to propagandistic influence. Subjects of the messages in the Czech Republic include the danger of Islamization, violence by refugees as a strategy of global elites including Hillary Clinton and Emmanuel Macron, corruption and incompetence of the Ukrainian government, and positive narratives concerning Vladimir Putin and Russia.

Among the country-specific differences:

  • In Hungary, pro-Kremlin narratives are often promoted by mainstream newspapers and broadcast channels, including the state-owned news agency MTI. Much of the messaging skirts the line between authentic Hungarian pro-Eastern sentiment and Kremlin-sponsored propaganda, rather than being pure disinformation. Chain email is also used to spread pro-Kremlin narratives among Hungary’s three million retirees. Facebook is the predominant social media platform, due to Hungary’s lack of a “Twitter culture.” Propagandistic messages aim to undermine trust in the U.S., NATO, and the EU, encourage anti-immigration and anti-refugee views, discredit liberal ideas about human rights and NGOs, and discredit Ukraine as corrupt, fascist, and failing.
  • In Moldova, Petru Macovei, Executive Director of the Association of Independent Press at the Chisinau School of Advanced Journalism, reports that Russian influence is powerfully exerted through mass media outlets created by Russia, including a Moldovan edition of Komsomolskaya Pravda and Sputnik.md. Twitter and Facebook are also used as channels. Narratives include conspiracy theories that NATO is preparing for nuclear war against Russia with Moldova as a battlefield, that Moldova is ruled by an outsider network connected with George Soros, that the U.S., NATO, and NGOs are conspiring against Moldovan interests and promoting homosexuality, and that the U.S. is defending the Islamic State in Syria.
  • In Poland, tactics and propaganda messages are much the same: NATO is a tool of America and is acting against Poland, and Russia is the only counter to American influence. Poland-specific narratives include disinformation targeting Ukraine, wherein Ukrainians are portrayed as “wild and cruel beasts mindlessly slaughtering Poles.” Polish mass media, websites, and social media are leveraged for these narratives, along with an interesting twist: fake interviews with top Polish generals that invariably position the West as anti-Poland and as promoting homosexuality. According to journalist Antoni Wierzejski, author of the section on Poland, hackers and trolls are very active in Poland.
  • The section on Ukraine, authored by Margo Gontar, journalist and co-founder of the Ukrainian organization StopFake, summarizes four disinformation themes: Ukraine is a failed state, Ukrainians are dangerous, Ukraine is breaking the Minsk Agreement to stop the war in the Donbass region, and everyone loves Russia. At the same time, according to other research identified in this project, various actors in Ukraine are mounting somewhat effective information campaigns to counter Russian propaganda. It is notable, though, that anti-Ukraine disinformation features so prominently in the other CEE nations.

As we see in other research, social media is an important channel for the spread of disinformation and participatory propaganda. But the authors emphasize it is the combined impact of traditional media, state-sponsored news organizations, conspiracy websites, trolls and hackers, and social media that is the basis for Russia’s propaganda strategy.

The report concludes with a number of recommendations to counter Russian disinformation, including more research on its authors and target audiences, public education on information ethics, and encouraging Internet companies to deploy tools against fake news. All are worth attempting, though they may prove inadequate unless done at very large scale.

Reference

Wierzejski, Antoni, Jonáš Syrovatka, Daniel Bartha, Botond Feledy, András Rácz, Petru Macovei, Dušan Fischer, and Margo Gontar. 2017. “Information Warfare in the Internet: Countering Pro-Kremlin Disinformation in the CEE Countries.” Centre for International Relations. https://www.academia.edu/34620712/Information_warfare_in_the_Internet_COUNTERING_PRO-KREMLIN_DISINFORMATION_IN_THE_CEE_COUNTRIES_Centre_for_International_Relations_and_Partners.

Stoking the Flames: Russian Information Operations in Turkey

It can be argued that Russia scored a major win on October 7, 2019, when U.S. President Trump tweeted that he would withdraw American troops from the war zone in northeastern Syria, and that “Turkey, Europe, Syria, Iran, Iraq, Russia and the Kurds will now have to…figure the situation out.” This cleared the way for Turkey to launch a large-scale military operation against America’s Kurdish allies in the Syrian Democratic Forces. The sudden change in U.S. policy caught just about everyone off guard: the Kurds, NATO, the U.S. State Department and members of Congress, and even the U.S. military commanders in Syria.

In the 2018 article “Stoking the Flames: Russian Information Operations in Turkey,” published in the journal Ukraine Analytica, University of Copenhagen political scientist Balkan Devlen details Russia’s shifting propaganda narratives targeting Turkish audiences. During and after its 2014 invasion of Crimea, Russia sought to portray Ukraine as a corrupt ally of the “imperialist West,” and Russia as an anti-imperialist friend to Turkey. A variety of media outlets were used to spread this message, including the Turkish-language service of Russia’s Sputnik News and a range of Turkish media sources known to be suspicious of Western and American meddling in the region. As shown by other research on Russian disinformation strategies, a variety of social media outlets were also used.

After the downing of a Russian jet by the Turkish air force in 2015, Russia’s propaganda messaging in Turkey did a 180-degree turn and began targeting the Turkish government and its foreign policy, claiming that Turkey was supporting ISIS, violating international law, and committing war crimes. Devlen notes that Russia’s anti-Turkey propaganda campaign was immediate, robust, and agile, suggesting that Russia is well prepared to launch disinformation campaigns against even friendly nations, with messaging developed in advance should the need arise.

In 2016 relations between Russia and Turkey warmed again, and the torrent of anti-Turkish disinformation quickly ceased. A new phase of propaganda sought to increase suspicion and animosity toward the U.S. and NATO, and to once again portray Russia as a true friend. As anti-American sentiment increases among the Turkish population, this narrative has been picked up by Turkey’s major media and amplified by Eurasianist “fellow travellers” through various channels.

Devlen concludes that as relations between Turkey, the U.S., and NATO fray, “Russia gets closer to its goal of weakening and undermining the liberal international order.”

While it is possible to read Devlen’s article as a polemic, much of his argument is echoed by other research annotated in this Political Propaganda and Social Media project. It might also be worth noting that some of the propaganda messages deployed by Russia in Turkey, such as the message that Ukraine is a corrupt nation, are mirrored in tweets by the U.S. president.

The Disinformation Order: Disruptive Communication and the Decline of Democratic Institutions

In this 2018 study published in the European Journal of Communication, W. Lance Bennett and Steven Livingston trace the roots of online political disinformation affecting democratic nations. They argue that declining confidence in democratic institutions makes citizens more inclined to believe false information and narratives, and to spread them more broadly on social media. They identify the radical right, enabled and supported by Russia, as the predominant source of disinformation, and cite examples in the UK Brexit campaign, disruptions affecting democracies in Europe, and the U.S. election of Donald Trump.

Many articles on social media and political communications offer differing definitions of disinformation, misinformation, and fake news. Bennett and Livingston offer their own provisional definition: “intentional falsehoods spread as news stories or simulated documentary formats to advance political goals” (Bennett and Livingston 2018, 124). In addition to those who share disinformation originating on the Internet, they identify legacy media as an important factor in spreading it further: when news organizations report on false claims and narratives, the effect is to amplify the disinformation. Even fact-checking can strengthen this amplifier effect, because the message is exposed and repeated to more people. As traditional news institutions are attacked as “fake news,” journalistic attempts to correct the record can be cited by propagandists and their supporters as proof of an elite conspiracy to hide the truth. The authors refer to this dynamic as the “disinformation-amplification-reverberation (DAR) cycle.”

It’s interesting that both the political left and right increasingly share contempt for neoliberal policies that benefit elites. But instead of coming together to address political and economic problems, they are being driven further apart by “strategic disinformation.” This hollowing out of the center produces a growing legitimacy crisis, and political processes that are increasingly superficial. The authors term this post-democracy: “(t)he breakdown of core processes of political representation, along with declining authority of institutions and public officials” (p. 127).

The authors identify Russia as the primary source of disinformation and disruptive hacking in an increasing number of Western democratic and semi-democratic nations: Germany, the UK, the Netherlands, Norway, Sweden, Austria, Hungary, Poland, Turkey, and most of the Balkans. They say Russia has learned to coordinate a variety of hybrid warfare tactics that reinforce one another, such as troll factories, hackers, bots, and the seeding of false information and narratives by state-owned media channels. As other researchers have argued, Bennett and Livingston say Russia’s disinformation activities are geostrategic, aimed at undermining NATO and the cohesiveness of the democratic nations that oppose the expansion of Russia’s power.

In response to the scale of disinformation and the disruption of democratic institutions, Bennett and Livingston call for comparative research on the characteristics of disinformation in different societies, to identify similarities and differences as well as the contextual factors that provide either fertile ground for or resistance to disinformation. They also recommend that the current operations of trolls, hackers, and bots be made more central to political communications studies.

Reference

Bennett, W. Lance, and Steven Livingston. 2018. “The Disinformation Order: Disruptive Communication and the Decline of Democratic Institutions.” European Journal of Communication 33 (2): 122–39. https://doi.org/10.1177/0267323118760317.

Russia’s Improved Information Operations: From Georgia to Crimea

In the Western press, much attention has been focused on Russia’s interference in the 2016 U.S. election through disinformation spread broadly on the Internet and social media platforms. Russia (and of course the United States) has long used propaganda as a psychological weapon in hot wars, cold wars, and even times of relative peace. In an article published in the US Army War College journal Parameters, Emilio Iasiello, a cyberintelligence advisor to Fortune 100 clients, says “nonkinetic options” are now a core part of Russia’s military and geopolitical strategy: using information and deception to disrupt opponents and influence internal and global audiences.

But propaganda hasn’t always prevailed. Iasiello reviews Russia’s information operations in its 2008 invasion of Georgia, and finds that Georgia ultimately won the information war. Russia relied on pre-Internet propaganda tactics, using traditional media to deliver key messaging to the international community and trying to position Georgia as the aggressor and Russia as merely defending its citizens. But Georgia fought back with its own extensive counterinformation campaign, winning the battle for international support.

Iasiello says Russia may have lost the Georgian conflict, but it learned that the Internet could be used as a weapon, and it began revising and expanding its information war strategy. In its 2014 annexation of the Crimean region of Ukraine, Russia applied the lessons of the Georgian conflict to orchestrate a rapid and nearly bloodless victory. Russian state actors directed cyberattacks to shut down Crimea’s telecommunications and websites and to jam the mobile phones of key Ukrainian officials. Russian hackers intercepted documents on Ukrainian military strategy, launched DDoS attacks on Ukrainian and NATO websites, disrupted the Ukrainian Central Election Commission network, planted “fake news” on fake websites and Russian media, and employed a cadre of trolls to comment on news and social media for the purpose of distorting reality and confusing Ukraine’s allies.

According to Iasiello, the 2014 Crimean annexation was a case study in the use of social media to control messaging and sow discord among the Ukrainian population and the international community. Thus the birth of Russia’s new strategy for “hybrid” warfare, using trolls, fake websites, social media, and the international news media to massively spread disinformation and confusion about the conflict.

Iasiello says Russia is vastly outpacing the United States on information war tactics, and using its experience to refine its strategies for different conflicts. In essence, Russia is playing the long game to sow discord and division, so as to weaken Western alliances. His recommendations include developing a U.S. counterinformation center, using analytics and artificial intelligence to identify online disinformation, and increasing international cooperation to combat various forms of Russian propaganda. He concludes that the Internet and social media are now an international battleground, and Russia is currently winning the information war.

Reference

Iasiello, Emilio. 2017. “Russia’s Improved Information Operations: From Georgia to Crimea.” Parameters 47 (2): 51–63. https://www.hsdl.org/?abstract&did=803998.