The Market of Disinformation

This report was produced by Oxford Information Labs to explicate the problem of disinformation on social media and to make actionable recommendations for the UK Electoral Commission. The authors do an admirable job of describing disinformation strategies deployed by political campaigns, with specific examples from recent events, including the inevitable reference to Cambridge Analytica.

The report seems to be written for an audience that may not know what an algorithm is, although the initial explanation of algorithms as “calculations coded in computer software” and “opinions embedded in mathematics” is unlikely to be of much help. From there, the report gets to the heart of the matter, which is that the bias of social media algorithms is to keep people “engaged.” This is a lovely word, but in the context of, say, Facebook and Twitter it means “trigger people’s emotions to keep them scrolling, clicking, liking, and sharing for as long as humanly possible without literally dying of dehydration” (my wording), preferably across many sessions per person per day.
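To make that concrete, here is a toy sketch of what an engagement-optimizing feed ranker looks like in spirit. This is my own illustration, not anything from the report, and certainly not any platform’s actual code; every field name and weight is invented:

```python
from dataclasses import dataclass

@dataclass
class Post:
    predicted_click: float      # model's estimate the user will click
    predicted_share: float      # estimate the user will share
    predicted_comment: float    # estimate the user will comment
    emotional_intensity: float  # how strongly the content triggers emotion

def engagement_score(post: Post) -> float:
    # Hypothetical weights: actions that keep people scrolling and sharing
    # count for more than a passive click. Note that nothing in this
    # objective rewards accuracy, while emotionally triggering content
    # scores higher by construction.
    return (1.0 * post.predicted_click
            + 3.0 * post.predicted_share
            + 2.0 * post.predicted_comment
            + 1.5 * post.emotional_intensity)

def rank_feed(candidates: list[Post]) -> list[Post]:
    # The feed is just the candidate posts, sorted by predicted engagement.
    return sorted(candidates, key=engagement_score, reverse=True)
```

The point is the objective function: whatever maximizes the score gets shown, and disinformation that triggers emotion scores well.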

So this is “optimization” in social media, and the platforms can afford many thousands of engineers and experience designers to pursue it. The authors don’t let Google off the hook, either: they do a reasonable job of explaining web crawling, relevance algorithms, and SEO. They outline recent changes to Facebook’s algorithm and explain why different Facebook users see different things, which leads into an explanation of psychological profiling, personal data aggregation, and microtargeting.
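Microtargeting, stripped of the marketing language, is just filtering: infer traits about people from aggregated personal data, then select the slice of the electorate most susceptible to a given message. A caricature follows; it is mine, not the report’s, and every field, value, and threshold is invented:

```python
from dataclasses import dataclass, field

@dataclass
class VoterProfile:
    age: int
    constituency: str
    inferred_interests: set[str] = field(default_factory=set)
    inferred_anxiety: float = 0.0  # 0..1, from a psychological profiling model

def select_audience(profiles: list[VoterProfile]) -> list[VoterProfile]:
    # Pick out, say, anxious middle-aged voters in a marginal seat who have
    # engaged with immigration content, and show the tailored ad only to
    # them, invisibly to everyone else.
    return [p for p in profiles
            if 45 <= p.age <= 65
            and p.constituency == "Marginal-on-Thames"
            and "immigration" in p.inferred_interests
            and p.inferred_anxiety > 0.7]
```

Different users see different things because each sees the message crafted for their slice, which is exactly what makes the practice hard to observe and regulate.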

I think the most important point they make is that “[f]uture electoral policy and oversight should be informed by the fact that online and offline actions are necessarily linked, with the offline elements being key enablers of online uses and abuses.” In other words, the old tricks of political propagandists haven’t been replaced by social media; they’ve been augmented by it.

The authors recommend specific measures the UK Electoral Commission could try to put in place. As with many ideas for regulating social media, they seem worthy of consideration but might prove impractical. For example, digitally imprinting campaign material with the source of the information could improve transparency. Location verification of messages could help even more. Campaigns could be penalized for violations with financial sanctions that actually hurt. And finally, transparency in the financing of the organizations and people behind political messages might limit the activities of truly bad actors. The objection in the West is likely to be “but Free Speech and Free Markets!” (Here in the U.S. we have the Supreme Court’s decision in Citizens United v. FEC, which basically says money is speech, so you can’t stop money.)
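The digital imprint is the most mechanically obvious of these measures, so here is a minimal sketch of what a machine-verifiable imprint could mean. The report doesn’t specify an implementation; this is my guess at the simplest possible version, using an HMAC over the material and its declared source with a key registered with the regulator. A real scheme would more likely use public-key signatures so anyone, not just the regulator, could verify:

```python
import hashlib
import hmac
import json

def make_imprint(material: bytes, source: str, registered_key: bytes) -> dict:
    # Bind a declared source to the exact bytes of the campaign material.
    record = {
        "source": source,
        "content_sha256": hashlib.sha256(material).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["tag"] = hmac.new(registered_key, payload, hashlib.sha256).hexdigest()
    return record

def verify_imprint(material: bytes, record: dict, registered_key: bytes) -> bool:
    # Recompute the tag from the claimed source and the actual content;
    # tampering with either one makes verification fail.
    expected = make_imprint(material, record["source"], registered_key)
    return hmac.compare_digest(expected["tag"], record["tag"])
```

Even this toy version shows where the hard part lives: the imprint only helps if platforms refuse to run unimprinted material and the regulator can actually sanction forgeries, which is enforcement, not cryptography.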

The measures suggested in this report aim to “future-proof” election policies. Elections are special cases, where (in theory) the outcome supports democratic governance. Elections are too important to just say “oh well, free speech and free markets, I guess we can’t do anything about political disinformation.” Some of these recommendations might make a difference in reducing disinformation in political campaigns today. As for future-proofing future elections, I suspect we’re going to need more future reports.

Reference

Hoffmann, Stacie, Emily Taylor, and Samantha Bradshaw. 2019. “The Market of Disinformation.” https://comprop.oii.ox.ac.uk/research/oxtec-disinfo-market/.