Bracing for Black Swans: Artificial Intelligence and Elections in 2024
Authors
Nusrat Farooq
April 30, 2024

The author of this piece, Nusrat Farooq, is a 2022 alumna of the Atlantic Dialogues Emerging Leaders program. Learn more about her here.

Summary: With more than half of the global population across 78 countries participating in elections in 2024, and with artificial intelligence (AI)-derived misinformation and disinformation identified as the foremost global risk factor for election outcomes, multiple black-swan events (high-impact, difficult to predict, yet inevitable) can be anticipated. A few tech companies and governments have initiated coalitions and regulatory measures to combat AI misinformation in the 2024 elections. Despite such initiatives, however, the fight remains disproportionately challenging. The primary antidote to these potential AI-driven misinformation black swans is not a handful of AI governance or tech-solution measures; rather, it is ‘critical thinking.’

------

The year 2024 marks a pivotal moment in international affairs. With more than half of the global population across 78 countries participating in elections, and with artificial intelligence (AI)-derived misinformation and disinformation identified as the foremost global risk factor for election outcomes, multiple black-swan events (high-impact, difficult to predict, yet inevitable) can be anticipated this year. This apprehension is not solely about our understanding of AI’s role in elections, but rather about the unknown factors and fallout from the misuse of AI. While we comprehend what is visible, it is the unseen force of AI during this election year that provokes concern. The primary antidote to these potential AI-driven misinformation black swans in 2024 is not AI governance or tech solutions; rather, it is ‘critical thinking.’

Both AI governance regulations and tech solutions to counter AI misinformation require time, multiple iterations, and feedback from various stakeholders. These stakeholders include staff working at tech companies, users from different age groups across the globe, academics, policymakers, civil society, and others. For regulations to be effective, they should ideally be designed with long-term planning in mind, spanning five to ten years. Similarly, tech solutions initially conceived as quick fixes may not address the broad spectrum of AI-related election content issues, particularly those that are still unknown, a class of problem that has only become apparent over the past year.

One such issue arose in Slovakia during the September 2023 elections. During the 48-hour moratorium preceding the opening of voting, when media outlets and politicians are expected to remain silent, an audio recording was posted on Facebook. In the recording, Michal Šimečka, the leader of the liberal Progressive Slovakia party, and Monika Tódová, a journalist with the daily newspaper Denník N, were purportedly heard discussing how to rig the election by purchasing votes from the country’s marginalized Roma minority. Although both Šimečka and Tódová promptly denounced the audio as fake, it proved difficult to debunk within the 48-hour moratorium period. The election resulted in Progressive Slovakia losing to SMER-SSD (Direction-Slovak Social Democracy), a party known for its populist views, which campaigned for the withdrawal of military support for neighboring Ukraine.

Whether the deepfake audio benefited one party over the other in the Slovak elections, and by what margin, remains contested. What is noteworthy, however, is that the quick-fix technology created to address such misinformation could not be applied in this case: Meta’s Manipulated-Media policy covered only videos, and the Slovak case exploited that loophole, since the policy did not extend to audio content. This observation is not a criticism directed at Meta; it underscores a broader technological limitation shared by all tech companies, namely their current inability to effectively contain or control AI-driven election black swans. In fact, no single stakeholder possesses the capability to grapple with the spread of misinformation alone. Addressing AI-driven election black-swan events in 2024 therefore requires a collective effort and is not solely incumbent upon tech companies.

Jacinda Ardern, the former Prime Minister of New Zealand, is one of the foremost proponents of large-scale collaboration, exemplified by initiatives such as the Christchurch Call, which she co-founded; she now serves as the New Zealand Prime Minister’s special envoy for the initiative. In an article published in the Washington Post in June 2023, she advocated for “collaboration on AI as the only option,” emphasizing that “technology is evolving too quickly for any single regulatory fix.” She further asserted that “government alone can’t do the job; the responsibility is everyone’s, including those who develop AI in the first place.”

A few tech companies and governments have initiated tech coalitions and regulatory measures to combat AI misinformation in the 2024 elections. For example, on February 16, 2024, at the Munich Security Conference (MSC), a group of 20 leading tech companies—including Microsoft, Meta, Google, Amazon, IBM, Adobe, OpenAI, Anthropic, Stability AI, TikTok, and X—announced an accord to counter video, audio, and image deepfakes during the upcoming elections. Additionally, on March 13, 2024, the European Union passed its Artificial Intelligence Act, aimed at regulating general-purpose AI, prohibiting the use of AI to exploit user vulnerabilities, and granting consumers the right to lodge complaints and receive meaningful explanations.

Despite such initiatives, the fight against AI election misinformation remains disproportionately challenging. According to data from Clarity, a machine-learning firm, the number of deepfakes created is increasing by 900% year over year. Collaboration between a handful of governments and tech companies alone cannot solve this issue. Solid and robust solutions to combat AI election misinformation in 2024 and beyond will require time. Building resilience against AI-induced black swans is paramount, and critical thinking—with individuals learning to discern fake from real—is key. This collective fight emphasizes the crucial role of every single voter’s critical thinking capacity in this election year, which could be marked by multiple AI-derived black swans.

Practicing and deepening critical thinking involves approaching suspicious content with skepticism, avoiding immediate belief, and investigating further by asking questions and cross-referencing information from multiple reputable sources. While organizations such as Newsguard, Demagog, and Alt-News assist in discerning misinformation, their efforts are limited. Ultimately, it falls on individual users to actively educate themselves and remain vigilant. Governments and tech companies, while important players, are not fully equipped to counter these black swans.

Predicting the exact form of AI-derived election misinformation black-swan events is challenging, but their inevitability in this crucial election year is apparent. When, in hindsight, we ask what we should have done to counter these black swans, the answer will primarily come down to individual critical thinking. So why not apply it today, rather than ponder it after the damage is done?
