AI race is deepening existing inequalities across the globe | tackling-disinformation-learning-guide | DW | 21.03.2024
  1. Inhalt
  2. Navigation
  3. Weitere Inhalte
  4. Metanavigation
  5. Suche
  6. Choose from 30 Languages

Tackling disinformation: A learning guide

AI race is deepening existing inequalities across the globe

Generative artificial intelligence is developing in leaps and bounds, and it's deepening the digital divide, writes Heather Dannyelle Thompson from Democracy Reporting International.

Logos of different AI apps on a smartphone screen

AI Chat, Chat AI, Aski AI, ChatGpT are just some of the AI apps available now

If you've ever wanted to know what it looks like when two pirate ships battle it out in a life-sized cup of coffee, you're in luck. Thanks to OpenAI's newest text-to-video model, Sora, highly photorealistic renditions powered by your imagination are now possible without the need for a team of CGI artists or a camera crew.

Sora showcases the rapid pace of generative artificial intelligence (gAI) development: it's leaps and bounds ahead of other text-to-video models, such as Google's Imagen Video, or Meta's Make-a-Video. And all three were released over the past 18 months.

An AI image of three wooly mamoths walking across a snow-covered meadow.

The Sora text to video app generated a video (a still of which is show here) using the prompt: "Several giant wooly mammoths approach treading through a snowy meadow, their long wooly fur lightly blows in the wind as they walk, snow covered trees and dramatic snowcapped mountains in the distance, mid-afternoon light with wispy clouds and a sun high in the distance creates a warm glow, the low camera view is stunning capturing the large furry mammal with beautiful photography, depth of field"

Such gAI tools are here to stay, and they're going to keep improving at a rapid pace. The next few years will see not only more sophisticated tools, but also wider availability, and greater penetration into our daily lives. With the increasing pace and placement of AI in society, the full impact on our digital information ecosystems remains untested. But we can already predict that the protection of those at the margins of society will slip through the cracks.

New capabilities and new threats

Any internet user can generate increasingly realistic synthetic images, videos, text, and voices for their own means with just a handful of monthly subscriptions. And the bad actors among them are already testing the waters.

A graphic showing a large oval representing AI and machine learing. Inside this oval are other ovals, GenAI, Synthentic Media and Fully Synthenic / AI Generated Images, Videos, Text and Audio.

Artificial intelligence and machine learning cover a whole suite of synthetic media including images, video, text, and audio

Voice clones, for example, are on the rise. The most recent models can produce less robotic results and capture the cadence of a speaker's voice more fluidly. They are also difficult to detect. Social media is already full of synthetic voice clones, and with growing investment in the sector, they will soon be even more advanced.

We're already seeing voice clones used for political means. In October 2023, the leader of the UK Labour Party, Kier Starmer, was 'exposed' in an audio clip full of profanity. It was later debunked as a fake. More recently, in February 2024, a voice clone of US President Joe Biden urged voters in New Hampshire to "save their vote" for the general election, a ploy to reduce voter turnout for the primary. A similar voice hoax was disseminated in 2023 before the Slovakian election, when a fake audio recording of Michal Simecka, leader of the Progressive party, circulated online claiming that he wanted to "drastically" raise beer prices.

Images or voice clones could stoke existing fault lines in the electorate, as attempted recently in Argentina, dubbed the world's first AI election. Both Javier Milei's and his opponent Sergio Massa's campaign used highly stylized synthetic media to promote its candidate and detract from its opponent.

People walk past electoral propaganda of Economy Minister and presidential candidate of the Union por la Patria party, Sergio Massa, made with AI.

The placards show Sergio Massa, a presidential candidate in Argentina's 2023 elections, made with AI. Massa is depicted as standing firmly and exuding authority in a Soviet style

Inequality in detection

As it becomes harder to detect synthetic media from real-life depictions, late-campaign scandals (known in US-American parlance as "October surprises") undergirded by synthetic media will be harder for news outlets to debunk and could lead to AI detection inequity across the globe.

Take the example of voice clones. Companies such as Reality Defender say they use their own AI models to detect AI-generated media based on signatures like missing frequencies, the change in pitch of the speaker's voice, or how the speaker breathes. It requires feeding their machine massive amounts of clearly labelled (and expensive) real and synthetic data, creating a barrier to entry for any company hoping to develop similar solutions. Even with this advantage, they admit detection will always be one step behind production.

Many of the best AI detection tools require expensive subscriptions that smaller organizations will seldom be able to afford. That means a small local news outlet covering disinformation during, say, the Tunisian election may not have access to the tools it needs to detect and fact-check AI disinformation during its election cycle.

AI doomerism or business as usual?

Still, some experts dispute claims that AI will fundamentally change our information ecosystems. Voting behaviour, they say, is complex and determined by multiple factors, including background and identity, with each individual's choice usually decided long before election day. Given the complex array of sociological factors that goes into politics, it is yet unproven that a single piece of disinformation has tipped an election one way or the other.

But what happens with massive covert operations aimed at manipulating public perception long before votes are cast? In 2019, the Stanford Internet Observatory found evidence of fake journalist personas created by a Russian manipulation team, the Internet Research Agency (IRA). The IRA-linked think pieces made their way into Western publications in an attempt to create cynicism or confusion "about what is real and who is saying what." The same lab later found evidence of similar operations in Africa, where strategies differed by country, but were found to promote Russian-based political actors and beliefs.

At the time, the operation used the words and photos of real people. With newly available gAI technologies, we are already seeing similar operations with fully synthetic personas and writing, making them harder to trace and detect. In February, Microsoft announced it had found state-backed hackers from Russia, China, and Iran using their AI tools to improve their disinformation campaigns.

An unequal playing field

In the English-dominant, US-based tech industry, this means that non-English, non-Western democratic contexts, where lower access to detection tools, less democratic governance, and less attention from Big Tech, could lead to serious consequences. We already saw how unregulated social media platforms exacerbated Myanmar's Rohingya crisis.

But today, a lack of robust data protection and AI policies are also likely to contribute to a greater global divide between those with the capital to join the AI race, and those without. Many already point to the rise of AI and its impact on old colonialist pathways, dubbing it "digital colonialism": Big Tech wipes out local competition with its software and harvests user data to monetize it for business and consumer services falling along historic lines of inequity.

A screenshot showing Richard Mathenge talking on a computer screen.

Richard Mathenge, a leader of the African Content Moderation Union, speaks at 2023 DisinfoCon about the hidden costs of AI in East Africa, where content moderators are exposed to harmful material and develop PTSD without labour protection

Evading responsibility will become even easier with the penetration of gAI. Politicians and media personalities are ready to abuse the so-called "liar's dividend," as evidenced not only in recent elections, but other high-stakes contexts, such as the Israel-Gaza war. A mere hint of fabricated evidence at play can be enough for politicians to dismiss real scandals as fake.

The average user has difficulty understanding the sheer mass of information floating online and has limited time to do their own research. The forecasted 'infocalypse' will likely continue to erode citizen trust in media and institutions. A recent poll, for instance, showed that 53% percent of Americans think that AI-spread misinformation will affect the results of the presidential election. This rise of artificial intelligence has come at an inopportune time: 2024 is a super election year with about 2 billion voters headed to the polls, including the highly consequential American Presidential election, elections in India, and the European Parliament elections.

The problem of self-regulation

In the meantime, efforts and policies to protect our democracies against the threat of AI are uneven at best. While some efforts to safeguard are increasing (see the upcoming EU AI Act and the White House Executive Order on AI), many of the most promising efforts are still self-regulated.

Just last week, Meta announced that Instagram, Facebook, and Threads will no longer recommend political content to their users via their algorithm. On its face, the move seems sensible to platforms that have grown wary and cautious about their impact on politics. But some experts warn that the gap in supply of political information caused by this policy will incentivize users to find less reliable information elsewhere.

At the same time, platforms have been cutting back on trust and safety teams, especially those in the Global South, and those dealing with non-English content, most often citing the cost and challenges of monitoring their platforms for disinformation. And of course, many of the upcoming solutions and legislation in Europe and the United States will not apply outside of the West, leaving many countries vulnerable and left behind in an increasingly fast-paced information ecosystem.

Still, more tech companies are embracing solutions such as watermarking and provenance technology or nutrition labels to increase users' ability to understand the media they encounter. Meta, Google, and OpenAI, for example, recently announced various commitments to watermarking and labelling in their products. Still, companies fall short of an outright ban of AI election materials.

A graphic showing how watermarking works.

Watermarking is an important tool in the fight against AI-powered disinformation, helping authenticate the source and integrity of media

There is also the issue of open source LLMs, most of which do not have the same safeguards (say to prevent users from using the likeness of a politician). At Democracy Reporting International, we tested three widely available open source LLMs and found that they regularly generated the malicious content we requested. Open-source LLMs cannot be discounted in the rise of AI, as they will likely continue to have fewer safeguards than do models from large, US-based AI firms.

Graphic showing the testing of three LLMS under Accessibility, Direct command racism and conspiracy theory, Speficiation of narrator and context, Suggestive question.

Democracy Reporting International tested three widely available open-source LLMs to see how reliably they returned problematic content

Overall, we have yet to see the true effect of gAI on the information ecosystem. It's likely that AI will have some impact by increasing the persuasiveness and pervasiveness of disinformation, but it is yet to be seen how much that will actually impact online information. What is already occurring is our perception of AI's impact on our news ecosystem and the doubt it sows in our politics. We will likely continue to see sustained scepticism towards information online, the difficulty of media outlets to quickly fact check and contain disinformation, and users retreating into safe spaces.

This article was a guest contribution by Heather Dannyelle Thompson, the Manager of Digital Democracy for Democracy Reporting International.

Her team focuses on digital threats to democracy globally, with a focus on creating actionable evidence for regulators and policymakers through social media monitoring of online political discourse and advocating in the EU and beyond for democratically infused principles in tech policy.

Heather holds a bachelor's degree in engineering from Carnegie Mellon University and a master's in public policy from The Hertie School in Berlin.

This article is part of Tackling Disinformation: A Learning Guide produced by DW Akademie.

The Learning Guide includes explainers, videos and articles aimed at helping those already working in the field or directly impacted by the issues, such as media professionals, civil society actors, DW Akademie partners and experts.

It offers insights for evaluating media development activities and rethinking approaches to disinformation, alongside practical solutions and expert advice, with a focus on the Global South and Eastern Europe.

DW recommends

Audios and videos on the topic