Computational Propaganda: Modern Threat to Democracies?

Laras Kineta
Jan 16, 2022 · 4 min read


(Image source: The Commons)

This essay was written as a final exam assignment for an International Security Studies course.

As an extension of the public sphere, social media plays a significant role in the flow of information, and of disinformation. Its promise of wider reach and interconnectedness can help build better democracies, since it is easier to get messages across, but it also enables practices that are malign to democracy (Woolley & Howard, 2018). Computational propaganda, a set of digital misinformation and manipulation practices understood through both its technical and social dimensions, describes the use of algorithms, automation, and human curation to purposefully manage and distribute misleading information over social media networks. It manifests as dubious political practices ranging from spreading hoaxes to deploying bots as "buzzers," all with the end goal of manipulating information to change people's opinions and behavior (Woolley & Howard, 2018).
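To make the mechanics concrete, here is a deliberately simple toy sketch (my own illustration, not drawn from Woolley & Howard; all numbers are invented) of how a small share of high-volume automated accounts can manufacture the appearance of consensus in a message stream:

```python
import random

random.seed(42)

# Hypothetical parameters, chosen only for illustration.
N_ORGANIC = 950      # organic users: one post each, genuinely mixed views
N_BOTS = 50          # coordinated "buzzer" accounts pushing one narrative
POSTS_PER_BOT = 40   # each bot posts at far higher volume than a person

# Organic users split roughly evenly; bots flood the feed with one message.
feed = [random.choice(["support", "oppose"]) for _ in range(N_ORGANIC)]
feed += ["support"] * (N_BOTS * POSTS_PER_BOT)

bot_account_share = N_BOTS / (N_ORGANIC + N_BOTS)
support_post_share = feed.count("support") / len(feed)

print(f"Bots: {bot_account_share:.0%} of accounts")
print(f"'support': {support_post_share:.0%} of all posts")
# Roughly: bots are 5% of accounts, yet ~84% of posts say 'support',
# manufacturing the appearance of consensus.
```

The point is not the invented numbers but the asymmetry they illustrate: automation makes message volume, and therefore perceived public opinion, cheap to manufacture.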

Computational propaganda's clearest threats to democracy appear in many democracies' political dynamics in the lead-up to elections (Bradshaw & Howard, 2019), the main pillar of democracy itself, since elections manifest the "voice of the people." Waves of disinformation and morally gray campaigns have ridden the computational propaganda tide, igniting saturated discourses about the candidates, with so many competing narratives that thoroughly fact-checking them all is impossible. Powerful (and often anonymous) political actors have used computational propaganda techniques to perpetrate political attacks, spread disinformation, censor and attack journalists, and create fake trends (Woolley & Howard, 2018). Take the 2016 US presidential election. Leaked campaign emails were spun into "Pizzagate," a conspiracy theory that falsely tied candidate Clinton to a secret child-trafficking network run out of a Washington pizzeria (Kang, 2016). Trump, who went on to win the presidency, amplified these rumors by constantly tweeting about "Hillary and her emails" to undermine his opponent and energize his supporters. Or, domestically, take the 2017 DKI Jakarta gubernatorial election, where both candidates' social media teams claimed to focus on positive (and factual) narratives, yet in practice both sides employed buzzers paid to tweet and ran websites dedicated to disseminating one-sided news (Lim, 2017). The discourse became so polarized that other voices were obscured, as any opinion that was complex or nuanced, or simply did not adhere to either camp, was rarely welcomed (Lim, 2017).
As social media companies exercise a great deal of power and influence, they must also accept a commensurate degree of scrutiny and transparency (Nyhan, quoted in Meyer, 2018).

Social media platforms have, to some extent, taken measures against the proliferation of computational propaganda and disinformation, and some have helped. During the 2020 US election, Twitter enforced its Civic Integrity policy, a process that culminated in the controversial permanent suspension of Trump's account after the US Capitol was stormed by supporters who believed the election was rigged (Conger & Isaac, 2021). Platforms like Facebook and YouTube imposed similar policies, taking down content and accounts deemed to be spreading dangerous information during the election season. But these measures still look like a band-aid on a gaping wound. To approach the problem properly, we first have to settle the debate over content moderation policy: the blurred lines of freedom of expression, the standardization of violations and penalties, and the regulatory framework.

The issue of resources also remains prominent, as the right balance between automated and human-run systems has yet to be found. Automated, machine-based systems have greater potential to cope with the sheer volume of content, but they carry a high risk of false positives and false negatives because they cannot read content in its social context (see the sketch below). Human-based processes, on the other hand, are better at judging whether content crosses the line, but the number of moderators needed to keep pace with the speed and volume of content would be enormous. Sinnreich (in Gillespie et al., 2020) also explained why the question of scale must be considered: laws, and the cultural values that shape both statute and jurisprudence, are local, regional, and national in scale, whereas platforms are global. If regulatory enforcement is delegated to these platforms, we undermine national sovereignty and self-determination and create the conditions for corporatocracy and monoculture. Platforms and corporations also bear no duty to uphold democratic values. With these considerations in mind, the measures currently in place are clearly not enough, and there is still a long way to go in tackling misinformation and the damage it does to democracy.
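To make the moderation tradeoff above concrete, consider a deliberately naive, keyword-based filter (a hypothetical sketch of my own, not any platform's actual system). It shows how context-blind automation produces both error types at once:

```python
# A naive, context-free filter: flag any post containing a banned phrase.
# The phrases and example posts are invented for illustration.
BANNED_PHRASES = {"rigged election", "storm the capitol"}

def is_flagged(post: str) -> bool:
    """Return True if the post contains any banned phrase verbatim."""
    text = post.lower()
    return any(phrase in text for phrase in BANNED_PHRASES)

posts = [
    # False positive: a benign historical comment trips the filter.
    "New book asks whether the 1927 vote was truly a rigged election.",
    # False negative: a veiled call to action sails through untouched.
    "Patriots, you know where to be tomorrow. Make them hear you.",
]

for post in posts:
    print(is_flagged(post), "|", post)
# Output: True for the benign post, False for the dangerous one; exactly
# the contextual judgment that still demands human review.
```

Real moderation systems are far more sophisticated than string matching, but the underlying dilemma scales with them: precision requires context, and context is what automation struggles to supply.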

BIBLIOGRAPHY

Bradshaw, S., & Howard, P. N. (2019). The global disinformation order: 2019 global inventory of organised social media manipulation. Project on Computational Propaganda.

Conger, K., & Isaac, M. (2021, January 8). Twitter Permanently Suspends Trump, Capping Online Revolt. The New York Times. Retrieved June 27, 2021, from https://www.nytimes.com/2021/01/08/technology/twitter-trump-suspended.html

Gillespie, T., Aufderheide, P., Carmi, E., Gerrard, Y., Gorwa, R., Matamoros-Fernández, A., Roberts, S. T., Sinnreich, A., & Myers West, S. (2020). Expanding the debate about content moderation: Scholarly research agendas for the coming policy debates. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1512

Kang, C. (2016, November 21). Fake News Onslaught Targets Pizzeria as Nest of Child-Trafficking. The New York Times. Retrieved June 27, 2021, from https://www.nytimes.com/2016/11/21/technology/fact-check-this-pizzeria-is-not-a-child-trafficking-site.html

Lim, M. (2017, September 5). Beyond fake news: social media and market-driven political campaigns. The Conversation. Retrieved June 27, 2021, from https://theconversation.com/beyond-fake-news-social-media-and-market-driven-political-campaigns-78346

Meyer, R. (2018, March 9). The Grim Conclusions of the Largest-Ever Study of Fake News. The Atlantic. Retrieved June 26, 2021, from https://www.theatlantic.com/technology/archive/2018/03/largest-study-ever-fake-news-mit-twitter/555104/

Woolley, S. C., & Howard, P. N. (Eds.). (2018). Computational propaganda: political parties, politicians, and political manipulation on social media. Oxford University Press.
