Political Propaganda & Social Media: A Project Summary and Critique

We began this project with somewhat lofty goals. We wanted to develop a comparative analysis of how social media influence affects the behavior and governance of people in the regions examined; to understand how similar forces manifest in different ways across cultures and political conditions; and to contribute to the existing literature on social media disinformation while making it more accessible. The scale of the topic meant we could attain these goals only by appending the words “scratch the surface” to each of them. But despite falling short of our original ambition, we achieved some unexpected things.

First, in searching for useful primary sources on social media and political disinformation, we became much more aware of existing research by scholars, government bodies, think tanks, and NGOs. Only a few years ago, it was common to assume social media would liberate people from the tyranny of one-way mass media controlled by large corporations, governments, and oligarchs. It is now darkly amusing to read popular and scholarly literature on social media written just five years ago. Today there are thousands of seemingly credible sources of research exploring the current disinformation environment and its impact on politics.

Given the wealth of available research materials, almost all of which are accessible online, we tried to identify the most relevant examples. It’s likely that we failed in that as well, but the primary sources we selected are generally representative of the existing body of research. We also chose to narrow our initial focus from “the world” to certain areas of it, specifically Indonesia and Europe. Some of the most interesting examples of social media propaganda are now occurring in Africa, South America, and of course China (and by “interesting” we mean horrific), but we had to set those regions aside, at least for now.

The second unexpected thing was a clear correlation between propagandistic messages sponsored by state actors and changes in the political rhetoric of those targeted. As if we didn’t know this already: propaganda can work. For example, there is a preponderance of evidence that Russia’s disinformation campaign to position Ukraine as a corrupt state and perpetrator of various conspiracies is not only influencing opinions among populations in Europe but is also being loudly echoed by the President of the United States and members of his political party.

But propaganda doesn’t always work. For example, in Anke Schmidt-Felzmann’s account of Russian disinformation operations in the Nordic countries, attempts to undermine support for the EU and NATO have gained very little traction (Schmidt-Felzmann 2017). In contrast, the same messages are resonating broadly in Central and Eastern European countries, whose populations and political leaders are friendlier toward Russia and more suspicious of the United States, the EU, and NATO (Wierzejski, Syrovatka et al. 2017).

A third surprise dawned on us over the months of working on this project: the use of social media for political propaganda is rapidly evolving, and we are merely capturing a screenshot (so to speak) of this moment. While use of the Internet for strategic disinformation predates the 2016 U.S. presidential election, the disruption of that election, along with elections in Africa and India and the Brexit referendum, brought into sharp relief the scale at which online political propaganda is now being deployed. As the actors behind it acquire more resources and learn from their successes and failures, and as more “innovation” is piled onto our current systems of ubiquitous information, we are likely to see a continuing evolution of disinformation strategies and tactics.

Comparing Indonesia and Russia: State Roles in the Spread of Propaganda

Any attempt to analyze the use of propaganda in two different countries and contexts might be a fool’s errand. It is difficult to shrink entire countries into narratives small enough to compare neatly, and doing so puts the analyst at risk of reducing each country to a single convenient story. However, for argument’s sake, let’s try it out:

Russia might be seen as the puppet master, controlling armies of bots and trolls to create havoc in many target countries and sowing the seeds of discord, distrust, and disinformation to weaken democracies worldwide. Indonesia could be cast as a relatively blameless victim: a young democracy subjected to propaganda and fake news from religious groups, and possibly from Russia itself (Sapiie and Anya 2019). The takeaway might be that Russia, a nuclear power with imperialistic ambitions, has the motivation and resources to spread its propaganda across the globe, while countries like Indonesia do their best to overcome the propaganda threatening their democracies.

Obviously it isn’t that simple. Russia isn’t the only country sponsoring propaganda or attempting to influence the political activity of other countries, and the Indonesian government isn’t entirely innocent of sponsoring propaganda of its own. It would be naïve to regard states as monolithic actors, particularly when it comes to their presence on social media. Finally, attempting to compare propaganda activities in very different countries runs the risk of perpetuating our own received colonial narratives, casting some as villains and others as innocent victims. In the world of social media disinformation, it may not be obvious who is colonizing whom.

Theoretical Frameworks

Is there a theory of social media that sheds light on current phenomena, and allows us to confidently make predictions? Or are the pieces moving too fast to do more than merely describe? We explore here the application of two prominent theories in communications research: Framing and Media Ecology.

Framing Theory

Framing Theory fits neatly into the conversation about propaganda on social media. As defined by Entman, framing means to “select some aspects of a perceived reality and make them more salient in a communicating text, in such a way as to promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation” (Entman 1993). In contrast to agenda setting or priming, framing sets not only the topic of discussion but also its terms.

Broadly stated, the effect of framing is to construct a social reality that people use to interpret information and events. Like pre-Internet media, social media can provide “a central organizing idea or story line that provides meaning to an unfolding strip of events . . . The frame suggests what the controversy is about, the essence of the issue” (Gamson & Modigliani 1987).

In traditional print and broadcast media, the power of framing is in the hands of journalists, editors, publishers, producers, networks, and so on, and there is a clear division between framers and audiences. Social media dissolves this division as “the people formerly known as the audience” become involved in the framing (Rosen 2012). On social media platforms it is often unclear what is being framed or who has the power to do the framing. Twitter and Facebook don’t create the content users see, but the algorithms that control our timelines determine what information we are exposed to, and the power to set frames belongs to anyone with the ability to leverage those algorithms. This can be good: it allows people other than those traditionally in power to present frames of their own, potentially making audiences aware of a wider range of viewpoints, influences, problems, and solutions.

But as we see in the research presented here, social media also increases the potential for deception and manipulation. When propagandistic content floods our newsfeeds, it is increasingly difficult to identify the true authors (is this a real individual or a bot?), the audience reach (is everyone seeing this, or has it been algorithmically selected for our tastes?), and the purpose of the content. Clearly, framing theory is a useful lens for evaluating disinformation on social media. Research might identify the original source of information attempting to “promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation,” and then follow audiences’ acceptance of that frame (Entman 1993).

This approach to analyzing disinformation on social media makes use of framing as “a theory of media effects” (Scheufele 1999). Goffman’s concept of “social frameworks” seems particularly well-suited to examining the effects of social media. We are social animals, and social media platforms have become an important site for our social connections. Our interpretations of information and events are influenced by our social connections, whether or not we are conscious of that influence (Goffman 1974).

Media Ecology Theory

We are aware there is considerable disagreement in the academic world about Marshall McLuhan, but the Media Ecology framework seems particularly well suited to analyzing the technological, social, and political complexities of this epoch of the information age.

McLuhan wrote about media as “extensions of some human faculty” (McLuhan & Fiore 1967), and about “the new scale that is introduced into our affairs by each extension of ourselves, or by any new technology” (McLuhan 1964). Media ecology theory frames the Internet and social media as hyperextensions of every human sense. And on the Internet those extensions are interconnected by a global network of devices that can send and receive information literally at the speed of light, “extending our central nervous system itself in a global embrace, abolishing both space and time as far as our planet is concerned” (McLuhan 1964).

But media ecology theory “is not exhausted by the thought of Herbert Marshall McLuhan” (Islas & Bernal 2016). Some of the post-McLuhan scholarship directly addresses the social and political effects of digital media. Robert K. Logan, a former colleague of McLuhan, suggests that in a reversal of media as extensions of man, “the human users of digital media become an extension of those digital media as these media scoop up their data and use them to the advantage of those that control these media…The feedback of the users of digital media become the feedforward for those media” (Logan 2019).

Logan is primarily concerned with the abuse of personal data for persuasive communications by digital media monopolies such as Google, Facebook, Amazon, and Twitter. But the same kinds of personal data and persuasive technologies are being used by the propagandistic actors in the scenarios described in this project. They aren’t the owners of the technologies, but they don’t have to be. In today’s neoliberal, unregulated “free market,” social media networks are open to use or abuse at scale by anyone with enough resources. As suggested in the study Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital, the resources required for effective social media propaganda operations are beyond the means of anyone but large institutional actors such as governments (Murthy, Powell, et al. 2016). And as Emilio Iasiello’s article Russia’s Improved Information Operations: From Georgia to Crimea makes clear, governments are now budgeting for disinformation campaigns aimed at national and global audiences as a vital part of their geopolitical and military strategies (Iasiello 2017). As applied to the Internet age, McLuhan’s frame is still relevant: the medium is the message, and the user is the content (McLuhan 1964).

Conclusion

During this project we chose to rely primarily on published resources from academic or government studies. In some cases we reviewed reports from non-profit organizations focused on digital disinformation and security studies. While news reports could have provided the most recent accounts of political disinformation, we decided to avoid possible issues of journalistic story framing. We did our best to vet all sources for credibility and to weed out resources showing signs of ideological or political bias. Our methodology included an examination of the authors, their bodies of research, and their institutional affiliations. We believe our choices are justifiable, but our inclusion of these sources does not imply wholesale endorsement of the authors or the information and views they express.

Due to rapid changes in the technologies used for disinformation and in the circumstances of their use, it is likely that much of today’s research will soon be obsolete. The obvious response is more research, and it’s clear from our work on this project that more research is coming: a variety of new institutions and initiatives are beginning to systematically study and counter digital disinformation. That prospect also raises a caution: will we begin to see disinformation in disinformation research? All the more reason for us to be critical of our sources and to select only those we can reasonably identify as credible.

Coda

Any analysis of the actions and attitudes of governments and other informational actors will inevitably be shaped by the values and views of its authors. Because a discussion of the authors’ perspective is rarely included in published works, audiences may assume that the analysis is intended to be “objective,” and that the author occupies “the view from nowhere” (Rosen 2003). We wish to make our values and views explicit so as to avoid any ambiguity about our perspectives and motivations.

As librarians we understand that “the values and ethics of librarianship are a firm foundation for understanding human rights and the importance of human rights education,” and that “human rights education is decidedly not neutral” (Hinchliffe 2016, 81). While there are legitimate arguments about the merits and flaws of various political and economic systems, the role of corporations and governments, and the obligations of citizens, we are strongly in favor of free expression, self-determination, and social justice. We believe all people have an absolute right to knowledge, and we regard influence operations designed to deceive, confuse, or divide people and nations as violations of their human rights and as dangers to the future of world peace. The Internet has become a medium for influencing the thoughts and behavior of people across the globe. Disinformation is not new, but its potential for disruption has never been greater.

We view social media as potentially a net positive for human welfare and civic life. For now, let’s just say it’s a work in progress.


References

Entman, Robert M. 1993. “Framing: Toward Clarification of a Fractured Paradigm.” Journal of Communication 43 (4): 51–58. https://doi.org/10.1111/j.1460-2466.1993.tb01304.x.

Gamson, William, and Andre Modigliani. 1987. “The Changing Culture of Affirmative Action.” In Research in Political Sociology, edited by Richard Braungart, 137–177. Greenwich, CT: JAI Press.

Goffman, Erving. 1974. Frame Analysis: An Essay on the Organization of Experience. Cambridge, MA: Harvard University Press.

Hinchliffe, Lisa Janicke. 2016. “Loading Examples to Further Human Rights Education.” https://www.ideals.illinois.edu/handle/2142/91636.

Iasiello, Emilio. 2017. “Russia’s Improved Information Operations: From Georgia to Crimea.” Parameters 47 (2): 51–63. https://www.hsdl.org/?abstract&did=803998.

Islas, Octavio, and Juan Bernal Suárez. 2016. “Media Ecology: A Complex and Systemic Metadiscipline.” Philosophies 1 (3): 190–98. https://doi.org/10.3390/philosophies1030190.

Logan, Robert K. 2019. “Understanding Humans: The Extensions of Digital Media.” Information 10 (10): 304. https://doi.org/10.3390/info10100304.

McLuhan, Marshall. 1964. Understanding Media: The Extensions of Man. New York: McGraw-Hill.

McLuhan, Marshall, Quentin Fiore, and Jerome Agel. 1967. The Medium Is the Massage: An Inventory of Effects. New York: Bantam Books.

Murthy, Dhiraj, Alison B. Powell, Ramine Tinati, Nick Anstead, Leslie Carr, Susan J. Halford, and Mark Weal. 2016. “Automation, Algorithms, and Politics| Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital.” International Journal of Communication 10 (0): 20. https://ijoc.org/index.php/ijoc/article/view/6271.

Rosen, Jay. 2003. “PressThink: The View from Nowhere.” http://archive.pressthink.org/2003/09/18/jennings.html.

Rosen, Jay. 2012. “The People Formerly Known as the Audience.” In The Social Media Reader, edited by Michael Mandiberg. New York: NYU Press. https://www.jstor.org/stable/j.ctt16gzq5m.

Sapiie, M. A., and A. Anya. 2019. “Jokowi Accuses Prabowo Camp of Enlisting Foreign Propaganda Help.” The Jakarta Post, February 4, 2019. https://www.thejakartapost.com/news/2019/02/04/jokowi-accuses-prabowo-camp-of-enlisting-foreign-propaganda-help.html.

Scheufele, Dietram A. 1999. “Framing as a Theory of Media Effects.” Journal of Communication 49 (1): 103–22. https://doi.org/10.1111/j.1460-2466.1999.tb02784.x.

Schmidt-Felzmann, Anke. 2017. “More than ‘Just’ Disinformation: Russia’s Information Operations in the Nordic Region.” In Information Warfare – New Security Challenge for Europe, 32–67. Centre for European and North Atlantic Affairs.

Wierzejski, Antoni, Jonáš Syrovatka, Daniel Bartha, Botond Feledy, András Rácz, Petru Macovei, Dušan Fischer, and Margo Gontar. 2017. “Information Warfare in the Internet: Countering Pro-Kremlin Disinformation in the CEE Countries.” Centre for International Relations. https://www.academia.edu/34620712/Information_warfare_in_the_Internet_COUNTERING_PRO-KREMLIN_DISINFORMATION_IN_THE_CEE_COUNTRIES_Centre_for_International_Relations_and_Partners.