Political Propaganda & Social Media: A Project Summary and Critique

We began this project with somewhat lofty goals. We wanted to develop a comparative analysis of the impact of social media influence on the behavior and governance of the people in the regions examined; to understand how similar forces manifest in different ways in different cultures and political conditions; and to contribute to existing literature on social media disinformation and make it more accessible. The scale of the topic meant we could attain these goals only by adding the words “scratch the surface” to the above. But despite falling short of our original ambitions, we achieved some unexpected things.

Firstly, in searching for useful primary sources on social media and political disinformation, we became much more aware of existing research by scholars, government bodies, think tanks, and NGOs. Not long ago, it was common to assume social media would liberate people from the tyranny of one-way mass media controlled by large corporations, governments, and oligarchs. It is now darkly amusing to read popular and scholarly literature on social media written just five years ago. Today there are thousands of seemingly credible sources of research exploring the current disinformation environment and its impact on politics.

Given the wealth of available research materials, almost all of which are accessible online, we tried to identify the most relevant examples. It’s likely that we failed in that as well, but the primary sources we selected are generally representative of the existing body of research. We also chose to narrow our initial focus from “the world” to certain regions, specifically Indonesia and Europe. Some of the most interesting examples of social media propaganda are now occurring in Africa, South America, and of course China (and by “interesting” we mean horrific), but we had to set those regions aside, at least for now.

The second unexpected thing was a clear correlation between propagandistic messages sponsored by state actors and changes in the political rhetoric of those targeted. As if we didn’t know this: propaganda can work. For example, there is a preponderance of evidence that Russia’s disinformation campaign to position Ukraine as a corrupt state and perpetrator of various conspiracies is not only influencing opinions among populations in Europe but also being loudly echoed by the President of the United States and members of his political party.

But propaganda doesn’t always work. For example, in Anke Schmidt-Felzmann’s account of Russian disinformation operations in the Nordic countries of Europe, attempts to undermine support for the EU and NATO have gained very little traction (Schmidt-Felzmann 2017). In contrast, the same messages are resonating broadly in Central and East European countries, whose populations and political leaders are friendlier to Russia and more suspicious of the United States, the EU, and NATO (Wierzejski, Syrovatka, et al. 2017).

A third surprise dawned on us over the months of working on this project: the use of social media for political propaganda is rapidly evolving, and we are merely capturing a screenshot (so to speak) of this moment. While the use of the Internet for strategic disinformation predates the 2016 U.S. presidential election, the disruption of that election, along with elections in Africa and India and the Brexit referendum, brought into sharp relief the scale at which online political propaganda is now being deployed. As the actors behind it acquire more resources and learn from their successes and failures, and as more “innovation” is piled onto our current systems of ubiquitous information, we are likely to see a continuing evolution of disinformation strategies and tactics.

Comparing Indonesia and Russia: State Roles in the Spread of Propaganda

Any attempt to analyze the use of propaganda in two different countries and contexts might be a fool’s errand. It’s difficult to shrink entire countries into stories small enough to compare neatly, and doing so puts the analyst at risk of reducing each country to a single convenient narrative. However, for argument’s sake, let’s try it out:

Russia might be seen as the puppet master, controlling armies of bots and trolls to create havoc in many target countries and sowing the seeds of discord, distrust, and disinformation to weaken democracies worldwide. Indonesia could be cast as a relatively blameless victim: a young democracy subjected to propaganda attacks and fake news from religious groups, and possibly from Russia itself (Sapiie and Anya 2019). The takeaway might be that Russia, a nuclear power with imperialistic ambitions, has the motivation and resources to spread its propaganda across the globe, while countries like Indonesia do their best to overcome the propaganda threatening their democracies.

Obviously it isn’t that simple. Russia isn’t the only country sponsoring propaganda or attempting to influence the political activity of other countries. The Indonesian government isn’t entirely innocent of sponsoring propaganda of its own. It would be naïve to regard states as monolithic actors, particularly when it comes to their presence on social media. Finally, attempting to compare propaganda activities in very different countries runs the risk of perpetuating our own received colonial narratives, casting some as villains and others as innocent victims. In the world of social media disinformation, it may not be obvious who is colonizing whom.

Theoretical Frameworks

Is there a theory of social media that sheds light on current phenomena and allows us to make predictions with confidence? Or are the pieces moving too fast to do more than merely describe? We explore here the application of two prominent theories in communications research: Framing and Media Ecology.

Framing Theory

Framing Theory fits neatly into the conversation about propaganda on social media. As defined by Entman, framing means to “select some aspects of a perceived reality and make them more salient in a communicating text, in such a way as to promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation” (Entman 1993). In contrast to agenda setting or priming, framing sets not only the topic of discussion but also its terms.

Broadly stated, the effect of framing is to construct a social reality people will use to interpret information and events. Like pre-Internet media, social media can provide “a central organizing idea or story line that provides meaning to an unfolding strip of events . . . The frame suggests what the controversy is about, the essence of the issue” (Gamson & Modigliani 1987).

In traditional print and broadcast media, the power of framing is in the hands of journalists, editors, publishers, producers, networks, and so on, and there is a clear division between framers and audiences. Social media dissolves this division as “the people formerly known as the audience” become involved in the framing (Rosen 2012). On social media platforms it is often unclear what is being framed or who has the power to do the framing. Twitter and Facebook don’t create the content users see, but their algorithms control our timelines and determine what information we are exposed to. The power to set frames on social media thus belongs to anyone with the ability to leverage those algorithms. This can be good; it allows people other than those traditionally in power to present frames of their own, potentially making audiences aware of a wider range of viewpoints, influences, problems, and solutions.

But as we see in the research presented here, social media also increases the potential for deception and manipulation. When propagandistic content floods our newsfeeds, it is increasingly difficult to identify the true authors (is this a real individual or a bot?), the audience reach (is everyone seeing this, or has it been algorithmically selected for our tastes?), and the purpose of the content. Clearly, framing theory is a useful lens for evaluating disinformation on social media. Research might identify the original source of information attempting to “promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation,” and attempt to follow the acceptance of the frame by audiences (Entman 1993).

This approach to analyzing disinformation on social media makes use of framing as “a theory of media effects” (Scheufele 1999). Goffman’s concept of “social frameworks” seems particularly well-suited to examining the effects of social media. We are social animals, and social media platforms have become an important site for our social connections. Our interpretations of information and events are influenced by our social connections, whether or not we are conscious of that influence (Goffman 1974).

Media Ecology Theory

We are aware there is considerable disagreement in the academic world about Marshall McLuhan, but the Media Ecology framework seems particularly well suited for analyzing the technological, social, and political complexities of this particular epoch of the information age.

McLuhan wrote about media as “extensions of some human faculty” (McLuhan & Fiore 1967), and about “the new scale that is introduced into our affairs by each extension of ourselves, or by any new technology” (McLuhan 1964). Media ecology theory frames the Internet and social media as hyperextensions of every human sense. And on the Internet those extensions are interconnected by a global network of devices that can send and receive information literally at the speed of light, “extending our central nervous system itself in a global embrace, abolishing both space and time as far as our planet is concerned” (McLuhan 1964).

But media ecology theory “is not exhausted by the thought of Herbert Marshall McLuhan” (Islas & Bernal Suárez 2016). Some of the post-McLuhan scholarship directly addresses the social and political effects of digital media. Robert K. Logan, a former colleague of McLuhan, suggests that in a flip reversal of media as extensions of man, “the human users of digital media become an extension of those digital media as these media scoop up their data and use them to the advantage of those that control these media…The feedback of the users of digital media become the feedforward for those media” (Logan 2019).

Logan is primarily concerned with the abuse of personal data for persuasive communications by digital media monopolies such as Google, Facebook, Amazon, and Twitter. But the same kinds of personal data and persuasive technologies are being used by the propagandistic actors in the scenarios described in this project. They aren’t the owners of the technologies, but they don’t have to be. In today’s neoliberal, unregulated “free market,” social media networks are open to use or abuse at scale by anyone with enough resources. As suggested in the study “Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital,” the resources required for effective social media propaganda operations are beyond the means of anyone but large institutional actors like governments (Murthy, Powell, et al. 2016). And as is clear in Emilio Iasiello’s article “Russia’s Improved Information Operations: From Georgia to Crimea,” governments are now budgeting for disinformation campaigns aimed at national and global audiences as a vital part of their geopolitical and military strategies (Iasiello 2017). As applied to the Internet age, McLuhan’s frame is still relevant: the medium is the message, and the user is the content (McLuhan 1964).

Conclusion

During this project we chose to primarily use printed resources from academic or government studies. In some cases we reviewed reports from non-profit organizations focused on digital disinformation and security studies. While news reports could have been helpful in providing the most recent accounts of political disinformation, we decided to avoid possible issues of journalistic story framing. We did our best to vet all sources for credibility, and to weed out resources showing signs of ideological and political bias. Our methodology included an examination of the authors, their body of research, and their institutional affiliations. We believe our choices are justifiable, but our inclusion of these sources does not imply wholesale endorsement of the authors or the information and views they express.

Due to rapid changes in the technologies used for disinformation and the circumstances of its use, it is likely that much of today’s research will soon be obsolete. An obvious response is more research, and it’s clear from our work on this project that more research is coming. A variety of new institutions and initiatives are beginning to systematically study and counter digital disinformation. That also raises a caution: will we begin to see disinformation in disinformation research? All the more reason to be critical of our sources and select only those we can reasonably identify as credible.

Coda

Any analysis of the actions and attitudes of governments and other informational actors will inevitably be shaped by the values and views of the authors. Because a discussion of an author’s perspective is rarely included in their published work, audiences may assume that the analysis is intended to be “objective,” and that the author occupies “the view from nowhere” (Rosen 2003). We wish to make our values and views explicit so as to avoid any ambiguity about our perspectives and motivations.

As librarians we understand that “the values and ethics of librarianship are a firm foundation for understanding human rights and the importance of human rights education,” and that “human rights education is decidedly not neutral” (Hinchliffe 2016, p. 81). While there are legitimate arguments about the merits and flaws of various political and economic systems, the role of corporations and governments, and the obligations of citizens, we are strongly in favor of free expression, self-determination, and social justice. We believe all people have an absolute right to knowledge, and we regard influence operations designed to deceive, confuse, or divide people and nations as violations of their human rights and as dangers to the future of world peace. The Internet has become a medium for influencing the thoughts and behavior of people across the globe. Disinformation is not new, but its potential for disruption has never been greater.

We view social media as potentially a net positive for human welfare and civic life. For now, let’s just say it’s a work in progress.


References

Entman, Robert M. 1993. “Framing: Toward Clarification of a Fractured Paradigm.” Journal of Communication 43 (4): 51–58. https://doi.org/10.1111/j.1460-2466.1993.tb01304.x.

Gamson, William, and Andre Modigliani. 1987. “The Changing Culture of Affirmative Action.” In Research in Political Sociology, edited by Richard Braungart, 137–177. Greenwich, CT: JAI Press.

Goffman, Erving. 1974. Frame Analysis: An Essay on the Organization of Experience. Cambridge, MA: Harvard University Press.

Hinchliffe, Lisa Janicke. 2016. “Loading Examples to Further Human Rights Education.” https://www.ideals.illinois.edu/handle/2142/91636.

Iasiello, Emilio. 2017. “Russia’s Improved Information Operations: From Georgia to Crimea.” Parameters 47 (2): 51–63. https://www.hsdl.org/?abstract&did=803998.

Islas, Octavio, and Juan Bernal Suárez. 2016. “Media Ecology: A Complex and Systemic Metadiscipline.” Philosophies 1 (October): 190–98. https://doi.org/10.3390/philosophies1030190.

Logan, Robert K. 2019. “Understanding Humans: The Extensions of Digital Media.” Information 10 (10): 304. https://doi.org/10.3390/info10100304.

McLuhan, Marshall. 1964. Understanding Media: The Extensions of Man. New York: McGraw-Hill.

McLuhan, Marshall, Quentin Fiore, and Jerome Agel. 1967. The Medium Is the Massage. New York: Bantam Books.

Murthy, Dhiraj, Alison B. Powell, Ramine Tinati, Nick Anstead, Leslie Carr, Susan J. Halford, and Mark Weal. 2016. “Automation, Algorithms, and Politics| Bots and Political Influence: A Sociotechnical Investigation of Social Network Capital.” International Journal of Communication 10 (0): 20. https://ijoc.org/index.php/ijoc/article/view/6271.

Rosen, Jay. 2003. “PressThink: The View from Nowhere.” http://archive.pressthink.org/2003/09/18/jennings.html.

Rosen, Jay. 2012. “The People Formerly Known as the Audience.” In The Social Media Reader, edited by Michael Mandiberg. New York: NYU Press. www.jstor.org/stable/j.ctt16gzq5m.

Sapiie, M. A., and A. Anya. 2019. “Jokowi Accuses Prabowo Camp of Enlisting Foreign Propaganda Help.” The Jakarta Post, February 4, 2019. https://www.thejakartapost.com/news/2019/02/04/jokowi-accuses-prabowo-camp-of-enlisting-foreign-propaganda-help.html.

Scheufele, Dietram A. 1999. “Framing as a Theory of Media Effects.” Journal of Communication 49 (1): 103–22. https://doi.org/10.1111/j.1460-2466.1999.tb02784.x.

Schmidt-Felzmann, Anke. 2017. “More than ‘Just’ Disinformation: Russia’s Information Operations in the Nordic Region.” In Information Warfare – New Security Challenge for Europe, 32–67. Centre for European and North Atlantic Affairs.

Wierzejski, Antoni, Jonáš Syrovatka, Daniel Bartha, Botond Feledy, András Rácz, Petru Macovei, Dušan Fischer, and Margo Gontar. 2017. “Information Warfare in the Internet: Countering Pro-Kremlin Disinformation in the CEE Countries.” Centre for International Relations. https://www.academia.edu/34620712/Information_warfare_in_the_Internet_COUNTERING_PRO-KREMLIN_DISINFORMATION_IN_THE_CEE_COUNTRIES_Centre_for_International_Relations_and_Partners.

Defining Digital Literacy in the Age of Computational Propaganda and Hate Spin Politics

Like much of the rest of the world, Indonesia is facing a crisis of fake news and bot-network infiltration on social media, leading to rampant propaganda, widespread belief in disinformation, and effects on voters that are not fully understood but may be profound enough to alter election results. Salma (2019) describes this crisis and identifies the solution as critical digital literacy: essentially, educating people about the nature of fake news, the algorithmic gaming of social media platforms, and how to identify bot networks.

Salma consolidates the issue into two problems: computational propaganda and hate spin politics. She defines computational propaganda as “the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks” (p. 328). This includes fake news created and spread on social media, bot networks driving attention to and changing conversation around particular issues, and the groups who organize these campaigns of disinformation. Her definition of computational propaganda encompasses much of the fake news crisis currently rattling the United States, as well as other countries.

The other primary issue she identifies is hate spin politics, which is less easily defined. She describes it as “exploit[ing] freedom in democracy by reinforcing group identities and attempt[ing] to manipulate the genuine emotional reactions of citizens as resources in collective actions whose goals are not pro-democracy” (p. 329). Hate spin politics appears to be the weaponization of identity politics and emotion in the digital political sphere, using religion, nationality, sexuality, and other identity markers to turn people against each other. It aims not only to segregate people based on their identities but to inspire them to self-select into identity groups, creating political warfare.

Computational propaganda and hate spin politics are carried out by several groups in Indonesia. Salma identifies Saracen and the Muslim Cyber Army as responsible for various fake news campaigns, and there have been notable suggestions of similar political interference from Russia (Sapiie and Anya, 2019). These tactics have proven successful on a large scale and with dire consequences in the case of Basuki Tjahaja Purnama, also known as Ahok, the politician who was imprisoned for blasphemy based largely on an edited video that went viral on social media.

Indonesian government officials are keenly aware of the problem computational propaganda presents and have taken significant steps to counter the spread of fake news. In 2018, they began weekly fake news briefings intended to address false stories that have gained traction (Handley, 2018). Salma suggests an increased focus on critical digital literacy, which involves teaching people not just to “evaluat[e] online content or digital skills but also to understand the internet’s production, management and consumption processes, as well as its democratizing potential and its structural constraints” (p. 333). Essentially, critical digital literacy is to computer or technical literacy what reading comprehension is to basic literacy. It’s not enough for users to be able to operate a computer and navigate the Internet; they need a solid understanding of what they’re seeing and why, including who might have produced content and how it came to be presented to them.

Who could argue with that? Of course increased education about the creation and spread of fake news and algorithmic manipulation would be useful to nearly all Internet users, and it might help counter the spread and impact of computational propaganda. However, Salma offers no explanation of how digital literacy would mitigate hate spin, which seems to be a larger social issue that’s just as likely to occur offline as on. Hate spin politics also traffics in emotional responses, meaning strictly logical literacy training might not be enough to equip people to grapple with emotional manipulation.

Paper:

Salma, A. N. (2019). Defining digital literacy in the age of computational propaganda and hate spin politics. KnE Social Sciences & Humanities, 2019, 323–338.

Additional Resources:

Sapiie, M. A., & Anya, A. (2019, February 4). Jokowi accuses Prabowo camp of enlisting foreign propaganda help. Retrieved from https://www.thejakartapost.com/news/2019/02/04/jokowi-accuses-prabowo-camp-of-enlisting-foreign-propaganda-help.html

Handley, L. (2018, September 27). Indonesia’s government is to hold public fake news briefings every week. Retrieved from https://www.cnbc.com/2018/09/27/indonesias-government-is-to-hold-public-fake-news-briefings-each-week.html

Aksi Bela Islam: Islamic Clicktivism and the New Authority of Religious Propaganda in the Millennial Age in Indonesia

Ahyar and Alfitri (2019) examine the way social media has reshaped the landscape of propaganda and how it’s being used to challenge dominant religious authorities. Propaganda used to be a tool wielded almost exclusively by government bodies or other massive organizations. As Ahyar and Alfitri put it, “In previous eras – especially in authoritarian regimes prior to the reformation era in Indonesia – the state was an authoritative source for social campaigning” (p. 14). The resources needed to create and effectively spread propaganda were simply too great for small groups or individuals to harness.

Social media has completely changed this; the Internet has effectively allowed nearly anyone to create and spread their own propaganda for their own purposes, with the potential for massive virality and impact. Governments no longer have a monopoly on the mass spread of information (or disinformation). Ahyar and Alfitri explain that alternative groups have come to harness propaganda: “In the Reformation era in Indonesia, propaganda is also often done not only by the government, but also by social movements that echo multiple identities; be it a primordial identity of ethnicity, religion, political ideology and profession” (pp. 12–13).

They go on to explain how social media has also revolutionized social movements and activism, again through disruption. Because movements can be planned and executed more easily, they need less hierarchical structure to form and continue. They write, “…Social movements appear massively outside the established political or institutional channels within a country. Social movement is closely related to a shared ideal and responds to a political power” (p. 9). Social movements need less planning, promotion, and organization to succeed; all they really need is a powerful motivating factor to spark mobilization. Propaganda can easily fill this role: “The pattern begins with an action of propaganda through the sails of technological devices, which is followed by supportive comments on the propaganda, and ends in mass mobilization for a real social movement for a purpose” (p. 4).

Although there is obvious good in breaking the government’s former monopoly on propaganda, and in tools like social media making organizing and protesting easier than ever, there is also the possibility of increased disinformation, chaos, and abuse. Ahyar and Alfitri consider the example of Basuki Tjahaja Purnama (also called Ahok), the former Jakarta governor who was imprisoned for blasphemy after a misleadingly edited video of one of his speeches went viral, causing controversy among Islamic communities in Indonesia. The doctored video functioned as propaganda, perfectly matching Ahyar and Alfitri’s definition of propaganda as “attempts to shape influence, change, and control the attitudes and opinions of a person or group for a particular purpose or to be brought in a particular direction” (p. 11). That propaganda spread rapidly through social media, acting as the spark that mobilized thousands of people to take to the streets in protests that were easily and spontaneously planned with improved technology and communication. Ahok’s imprisonment serves as testimony to the power and changed nature of propaganda and social movements, and to the danger these powerful tools pose when they are used rapidly and with little opportunity for oversight, consideration, and fact-checking.

Paper:

Ahyar, M., & Alfitri. (2019). Aksi Bela Islam: Islamic clicktivism and the new authority of religious propaganda in the millennial age in Indonesia. Indonesian Journal of Islam and Muslim Societies, 9(1), 1–29.

Countering Terrorist Narratives: Winning the Hearts and Minds of Indonesian Millennials

Narratives are powerful because they’re easy to follow. Factual information and research might provide someone with all of the pieces, but a well-crafted narrative presents itself as an already completed puzzle. Anis (2018) discusses the narratives that terrorists and extremists use to recruit new members, and how those narratives can be shaped into convincing propaganda that is easily disseminated through social media, focusing primarily on the recruitment of young Indonesians and responses from the Indonesian government. Islamic extremist narratives give followers a consistent worldview, as well as a clearly defined role and purpose within that worldview. Once a follower has accepted extremist narratives, it’s difficult to counter them.

Islamic extremist groups build their narratives on social media the same way many users do: consistent branding and plenty of selfies. Anis says, “Many of the selfie photos of young jihadists express their happiness. They smile and carry weapons. The jihadists use this strategy to give a picture that they are powerful and own many weapons” (p. 196). Again following standard social media manipulation tactics, extremists can deceive followers. Anis continues, “They may only have a few weapons and ask the jihadists to take turns taking selfie photos carrying the gun” (p. 196). They also use catchphrases. Anis identifies the phrase “You Only Die Once,” or “YODO” (p. 193), a clear derivative of the popular hashtag #yolo.

Anis’s examples of jihadist recruiting, specifically her analysis of the film Jihad Selfie, reveal the targeted nature of these efforts. Extremists’ success doesn’t come from pouring money into Facebook advertisements; it comes from using social media to talk to vulnerable individuals. There seems to be more to gain from putting significant resources toward the small number of individuals who can be flipped than from mass recruitment tactics that will fall largely on deaf ears. Again, using social media for this kind of targeted persuasion isn’t exclusive to jihadist groups: Cambridge Analytica’s use of highly targeted advertising has caused outrage worldwide.

Indonesia has taken several steps to attempt to counter extremist propaganda online, largely in the form of websites offering counter-narratives and promoting peacefulness (p. 202). However, it’s unclear how effective this approach can be. Anis describes how jihadists’ use of social media makes them look “cool,” according to former recruits, because of their handling of weapons and the interactions their posts get from Muslim women (p. 197). If the appeal of jihadist propaganda comes down to a cool factor, it’s difficult to imagine the government successfully creating something that will actually read as cool to young people.

The weakest point of Anis’s analysis is her failure to interrogate the term “lone wolf” terrorist. She points out, “Unlike in the past when a terrorist was defined as someone who completed a long process of training and indoctrination through a terrorist group, the lone wolf terrorists are not tied to any terrorist network and have gotten inspiration through the internet” (p. 195), yet fails to note that this online inspiration often comes from interacting with content produced by those very terrorist networks.

Paper:

Anis, E. Z. (2018). Countering Terrorist Narratives: Winning the Hearts and Minds of Indonesian Millennials. KnE Social Sciences & Humanities, 2018, 189.

Government Social Media in Indonesia: Just Another Information Dissemination Tool

No matter how much Mark Zuckerberg promises that the goal of Facebook has always been to “connect” the world, it’s increasingly clear that social media might not be the most effective tool for accomplishing that goal. Though social media sites like Facebook and Twitter can make two-way communication between entities easier from a logistical standpoint, scholars remain divided on whether social media has lived up to its possibilities in the political realm.

Idris (2018) examines this possibility for two-way communication between government entities and individuals in Indonesia using social media, finding that two-way communication is much more of a social media ideal than a reality. She looks specifically at the social media presences of two Indonesian government agencies, using social network analysis to determine how and when they interacted with other social media users. For the most part, it turns out they don’t. She says, “…the Indonesian government mostly used social media to disseminate governmental information, to dominate social media conversation, and to amplify governmental messages… Thus, advanced communication technology was not used to transform the government to become more engaging and transparent” (p. 352). Basically, the fact that social media creates the opportunity for dialogue between governments and citizens doesn’t ensure that a government reads, considers, or acknowledges citizens’ responses.
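
To make this kind of analysis concrete, here is a minimal sketch of one way interaction patterns like these can be examined as a directed graph. This is our own illustration, not Idris’s code or data: the account names and interaction records are hypothetical, and we use the Python networkx library simply to count messages sent versus messages received.

```python
# Minimal sketch: gauging whether an account broadcasts or converses
# by modeling replies/mentions as a directed graph (hypothetical data).
import networkx as nx

# Each tuple is (source, target) for one reply or mention.
interactions = [
    ("gov_agency", "gov_agency_2"),  # agency amplifying another official account
    ("gov_agency", "gov_agency_3"),
    ("citizen_a", "gov_agency"),     # citizens replying to the agency
    ("citizen_b", "gov_agency"),
    ("citizen_c", "gov_agency"),
]

G = nx.DiGraph()
G.add_edges_from(interactions)

# Outgoing edges: messages the agency directs at others.
# Incoming edges: messages others direct at the agency.
sent = G.out_degree("gov_agency")
received = G.in_degree("gov_agency")

# Sending only to other official accounts while citizen replies go
# unanswered is the one-way dissemination pattern described above.
print(f"sent: {sent}, received: {received}")
print("replies to citizens:",
      sum(1 for _, t in G.out_edges("gov_agency") if t.startswith("citizen")))
```

On this toy data the agency messages only other official accounts while citizens’ replies go unanswered, the one-way pattern Idris documents at much larger scale.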

Without two-way communication, there is little or no difference between government information and PR campaigns disseminated on social media and outright propaganda (p. 338). However, the use of social media allows governments to maintain the illusion of increased communication with citizens while actually perfecting their propagandistic techniques. When communicating directly on social media, a government can effectively bypass traditional media, releasing its content exactly as it sees fit and keeping journalistic scrutiny out of the initial message. It can also manipulate social media algorithms to amplify its own content, using nothing more than networks of government social media accounts. Idris describes the objective of President Widodo’s network of governmental social media accounts as “to counter negative opinions about the government and at the same time make government information go viral” (p. 350). Though downright measly compared to something like Russian bot networks, these networks of official government accounts can be enough to spread information and shape conversation. Governments using social media for information dissemination also have the opportunity to test and reshape their messages in real time: both the Obama and Trump campaigns in the U.S. saw impressive results using methods like A/B testing to craft and recraft their social media advertisements with incredible precision (Bashyakarla, 2019).
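
For readers unfamiliar with the mechanics of A/B testing, here is a minimal sketch of the statistics behind it, using made-up numbers; it is not drawn from Bashyakarla’s piece or from any campaign’s actual tooling. Two ad variants are shown to comparable audiences, and a two-proportion z-test indicates whether the difference in click-through rates is likely real or just noise.

```python
# Minimal sketch of message A/B testing (illustrative, hypothetical counts).
from statistics import NormalDist

# Impressions and clicks observed for two ad variants.
n_a, clicks_a = 10_000, 210   # variant A
n_b, clicks_b = 10_000, 264   # variant B

p_a, p_b = clicks_a / n_a, clicks_b / n_b
p_pool = (clicks_a + clicks_b) / (n_a + n_b)

# Two-proportion z-test: is variant B's click-through rate genuinely higher?
se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
z = (p_b - p_a) / se
p_value = 1 - NormalDist().cdf(z)  # one-sided

print(f"CTR A = {p_a:.2%}, CTR B = {p_b:.2%}, z = {z:.2f}, p = {p_value:.4f}")
# A small p-value means the campaign keeps variant B, drafts a new
# challenger variant, and repeats the cycle.
```

Run continuously over dozens of message variants, this simple loop of test, keep the winner, and test again is what allows campaigns to refine their advertising with such precision.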

Social media makes many things possible that were not possible before, including both increased transparency and easier back-and-forth communication between governments and citizens, but also easier dissemination of perfectly crafted propaganda. Idris makes it clear which of these aims the Indonesian government is pursuing.

Paper

Idris, I. K. (2018). Government social media in Indonesia: Just another information dissemination tool. Jurnal Komunikasi: Malaysian Journal of Communication, 34(4), 337–356. https://doi.org/10.17576/JKMJC-2018-3404-20

Additional References

Bashyakarla, V. (2019). A/B Testing: Experiments in campaign messaging. Retrieved from https://ourdataourselves.tacticaltech.org/posts/ab-testing

Social Media and Politics in Indonesia

Johansson (2016) provides solid background on the state of media in Indonesia, both traditional and digital. He explains how narrowly controlled traditional media in a democracy as young as Indonesia’s created favorable conditions for social media to break through and disrupt the spread of information, focusing largely on the potential for positive change.

He describes issues with Indonesia’s print and television media, starting with their vulnerability to being controlled by just a few elite members of society, i.e., elite capture or media capture. He elaborates on how much of Indonesia’s media is owned or influenced by figures tied to politics, including direct family members of politicians (p. 17). He also describes the rise of media conglomerates. In short, he describes a media ecosystem in which power is held by very few people with ties to other powerful people, trending toward less and less competition, all of which can contribute to increased media bias.

Next, he explains the culture of social media in Indonesia and the effect it has had on political messaging and campaigning. Social media is wildly popular in Indonesia, with users spending an average of 2.9 hours on social media each day, compared to just 1.7 hours in the United States (p. 25). Social media is an attractive place for political messaging not only because of its popularity but also due to “the cost of campaigning on a steady increase, limited political financing, problems with money politics and the limits of traditional media” (p. 25). Johansson also touches on social media strategies from the 2014 presidential election, explaining that Jokowi’s massive volunteer network coordinating and posting on social media ultimately won out over Prabowo’s smaller, more professional social media team.

Although Johansson mentions propaganda only sparingly, his paper works as a useful, fairly comprehensive account of the media landscape in Indonesia, both present and historical. His few words on propaganda are also useful, explaining how, viewed through the lens of framing theory among others, media exists primarily as a mode of disseminating propaganda. Finally, he warns that effective political messaging on social media may be dangerous, as it can “result in an ever-increasing difficulty for citizens to differentiate between news, propaganda, and opinions” (p. 37).

Paper:

Johansson, Anders C. 2016. “Social Media and Politics in Indonesia.” Stockholm School of Economics Asia Working Paper Series 2016-42, Stockholm School of Economics, Stockholm China Economic Research Institute.