Bracing for Black Swans: Artificial Intelligence and Elections in 2024
Authors
Nusrat Farooq
April 30, 2024

Nusrat Farooq is an international security and policy expert whose research is at the intersection of emerging technology, international relations, and trust and safety. She is a 2022 alumna of the Atlantic Dialogues Emerging Leaders program. She can be reached at nusratfatooq.fnu@gmail.com.

Summary: With more than half of the global population across 78 countries participating in elections in 2024, and with artificial intelligence (AI)-derived misinformation and disinformation identified as the foremost global risk factor for election outcomes, multiple black-swan events (high-impact, difficult to predict, but inevitable) can be anticipated. A few tech companies and governments have initiated coalitions and regulatory measures to combat AI misinformation in the 2024 elections. Despite such initiatives, however, the fight remains disproportionately challenging. The primary antidote to these potential AI-driven misinformation black swans is not a handful of AI governance or tech-solution measures; rather, it is ‘critical thinking.’

------

The year 2024 marks a pivotal moment in international affairs. With more than half of the global population across 78 countries participating in elections, and with artificial intelligence (AI)-derived misinformation and disinformation identified as the foremost global risk factor for election outcomes, multiple black-swan events (high-impact, difficult to predict, but inevitable) can be anticipated this year. This apprehension is not solely about our understanding of AI’s role in elections but rather about the unknown factors and fallout from the misuse of AI. While we comprehend what is visible, it is the unseen force of AI during this election year that provokes concern. The primary antidote to these potential AI-driven misinformation black swans in 2024 is not AI governance or tech solutions; rather, it is ‘critical thinking.’

Both AI governance regulations and tech solutions to counter AI misinformation require time, multiple iterations, and feedback from various stakeholders. These stakeholders include staff at tech companies, users from different age groups across the globe, academics, policymakers, civil society, and others. To be effective, regulations should ideally be designed with long-term planning in mind, spanning five to ten years. Similarly, tech solutions initially conceived as quick fixes may not address the broad spectrum of AI-related election content issues, particularly those that are not yet known, a gap that has become apparent over the last year.

One such issue arose in Slovakia during the September 2023 elections. During the 48-hour moratorium preceding the opening of polls, when media outlets and politicians are expected to remain silent, an audio recording was posted on Facebook. In the recording, Michal Šimečka, the leader of the liberal Progressive Slovakia party, and Monika Tódová, a journalist with the daily newspaper Denník N, were purportedly heard discussing how to rig the election by purchasing votes from the country’s marginalized Roma minority. Although both promptly denounced the audio as fake, it proved hard to debunk within the 48-hour moratorium. Progressive Slovakia went on to lose the election to SMER-SSD (Direction-Slovak Social Democracy), a party known for its populist views that campaigned for the withdrawal of military support for neighboring Ukraine.

Whether the deepfake audio benefited one party over the other in the Slovakian elections, and by what margin, remains contested. What is noteworthy, however, is that the quick-fix technology created to address such misinformation could not be applied in this case: Meta’s Manipulated-Media policy covered only videos. The Slovakian case exploited a loophole in that policy, which did not extend to audio content. This observation is not a criticism of Meta; it underscores a broader technological limitation shared by all tech companies, namely their current inability to effectively contain or control AI-driven election black swans. In fact, no single stakeholder possesses the capability to grapple with the spread of misinformation on its own. Addressing AI-election-related black swan events in 2024 therefore requires a collective effort and is not solely incumbent upon tech companies.

Jacinda Ardern, the former Prime Minister of New Zealand, is one of the foremost proponents of large-scale collaboration, exemplified by initiatives such as the Christchurch Call, which she co-founded and for which she now serves as the New Zealand Prime Minister’s special envoy. In an article published in the Washington Post in June 2023, she advocated for “collaboration on AI as the only option,” emphasizing that “technology is evolving too quickly for any single regulatory fix.” She further asserted that “government alone can’t do the job; the responsibility is everyone’s, including those who develop AI in the first place.”

A few tech companies and governments have initiated tech coalitions and regulatory measures to combat AI misinformation in the 2024 elections. For example, on February 16, 2024, at the Munich Security Conference (MSC), a group of 20 leading tech companies, including Microsoft, Meta, Google, Amazon, IBM, Adobe, OpenAI, Anthropic, Stability AI, TikTok, and X, announced an accord to counter deceptive video, audio, and image deepfakes in this year’s elections. Additionally, on March 13, 2024, the European Union passed its Artificial Intelligence Act, which places safeguards on general-purpose AI, prohibits the use of AI to exploit user vulnerabilities, and grants consumers the right to lodge complaints and receive meaningful explanations.

Despite such initiatives, the fight against AI election misinformation remains disproportionately challenging. According to data from Clarity, a machine-learning firm, the number of deepfakes created is increasing by 900% year over year. Collaboration between a handful of governments and tech companies alone cannot solve this issue. Solid and robust solutions to combat AI election misinformation in 2024 and beyond will require time. Building resilience against AI-induced black swans is paramount, and critical thinking—with individuals learning to discern fake from real—is key. This collective fight emphasizes the crucial role of every single voter’s critical thinking capacity in this election year, which could be marked by multiple AI-derived black swans.

Practicing and deepening critical thinking involves approaching suspicious content with skepticism, avoiding immediate belief, and investigating further by asking questions and cross-referencing information from multiple reputable sources. While organizations such as NewsGuard, Demagog, and Alt News assist in discerning misinformation, their efforts are limited. Ultimately, it falls on individual users to actively educate themselves and remain vigilant. Governments and tech companies, while important players, are not fully equipped to counter these black swans.

Predicting the exact form of AI-derived election misinformation black swan events is challenging, but their inevitability in this crucial election year is apparent. In hindsight, when we ask what could have countered these black swans, the answer will primarily come down to individual critical thinking. So why not apply it today rather than after the damage is done?
