Democracies Are Dangerously Unprepared for Deepfakes

Disinformation, foreign interference, fraud and conspiracy will worsen as digital forgeries become indistinguishable from reality.

April 27, 2022
Illustration by Paul Lachine.

Ukrainian President Volodymyr Zelenskyy’s speeches and videos have made him a global icon for democracy. By embodying his country’s defiance of Vladimir Putin’s attempt to annihilate Ukraine, he has rallied governments and citizens worldwide behind the principle of self-determination. On March 16, however, a different kind of Zelenskyy video emerged. Rather than projecting his trademark assurance, a defeated president told Ukrainian forces to lay down their weapons and go home.

But the Zelenskyy surrender video wasn’t real. It was a deepfake floated online, a scenario Ukrainian officials had foreseen two weeks earlier. It also echoed a February 2021 prediction by Estonian intelligence that deepfakes — digital forgeries of audiovisual content created using artificial intelligence (AI) — would soon become a preferred Russian tactic for provoking rifts within the societies of foreign rivals.

Nina Schick, the author of Deepfakes: The Coming Infocalypse, told Reuters that the Zelenskyy fake was “very crude” and “an absolutely terrible face-swap.” But Schick is also among a chorus of experts warning that deepfakes may soon reach a point where they are undetectable to the average eye — all while democratic societies are already struggling with the corrosive effects of far cruder forms of disinformation.

As deepfake technologies increase in sophistication and accessibility, they will unleash new deceptive capabilities that some will leverage for personal, professional or political gain. Whether democratic societies are at all prepared to deal with the consequences is an open question.

Hastening the Disintegration of Shared Reality

Deepfakes are created by a machine-learning model that uses facial recognition and other algorithms to first develop two complementary data sets. One set is built by analyzing the target subject’s appearance and voice in isolation, to capture and encode their unique characteristics. The other set comes from scanning other faces, voices and images — hundreds, if not thousands — to accrue a deeper understanding of facial features, inflections and general lighting and shadow dynamics.

Those two data sets are then used to train an AI neural network to combine the subject’s unique characteristics with the acquired knowledge of general human expression. After enough repetition, the network learns to digitally graft the subject’s face onto someone else’s body. The new replicant can then be programmed to say anything at all.
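
To make those mechanics concrete, here is a minimal, hypothetical sketch in Python (using the PyTorch library) of the shared-encoder, per-person-decoder design popularized by open-source face-swap tools. It is illustrative only, not any specific product’s pipeline: random tensors stand in for the thousands of real face crops a working system would train on, and the tiny networks stand in for production-scale architectures.

```python
import torch
import torch.nn as nn

FACE_DIM = 64 * 64  # flattened 64x64 face crop (placeholder resolution)

# One shared encoder learns general facial structure (the second data set);
# one decoder per person learns that individual's unique appearance (the first).
encoder = nn.Sequential(nn.Linear(FACE_DIM, 128), nn.ReLU())
decode_subject = nn.Sequential(nn.Linear(128, FACE_DIM), nn.Tanh())  # e.g., a politician
decode_actor = nn.Sequential(nn.Linear(128, FACE_DIM), nn.Tanh())    # the body actor

params = (list(encoder.parameters()) +
          list(decode_subject.parameters()) +
          list(decode_actor.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
mse = nn.MSELoss()

for step in range(1_000):
    subject_faces = torch.randn(32, FACE_DIM)  # placeholders for real face crops
    actor_faces = torch.randn(32, FACE_DIM)

    # Each decoder learns to reconstruct its own person from the shared encoding.
    loss = (mse(decode_subject(encoder(subject_faces)), subject_faces) +
            mse(decode_actor(encoder(actor_faces)), actor_faces))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The swap itself: encode a frame of the body actor, then decode it with the
# subject's decoder, grafting the subject's face onto the actor's performance.
swapped_frame = decode_subject(encoder(torch.randn(1, FACE_DIM)))
```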

The result is quality-tested using what is called a generative adversarial network. In essence, an opposing AI neural network tries to determine whether the first network’s output is real. If the first network can trick the second into believing the content it has produced is genuine, the final product is a serviceable “deepfake” — a term named after the Reddit user who first drew mainstream attention to the process in 2017 by posting fake celebrity pornography. Proponents prefer the term “synthetic media” to deepfake.
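
Continuing the sketch above, the adversarial quality test can be pictured as a second network grading the first. Below is a minimal, hypothetical generator-versus-discriminator training loop in PyTorch, again with random tensors standing in for real images; once the generator’s forgeries routinely fool the discriminator, the output is considered serviceable.

```python
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM, BATCH = 64 * 64, 100, 32

# Generator: maps random noise to a fake image it hopes will pass as real.
G = nn.Sequential(nn.Linear(NOISE_DIM, 256), nn.ReLU(),
                  nn.Linear(256, IMG_DIM), nn.Tanh())

# Discriminator: the opposing network, scoring images as real (1) or fake (0).
D = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(1_000):
    real = torch.randn(BATCH, IMG_DIM)       # placeholder for genuine images
    fake = G(torch.randn(BATCH, NOISE_DIM))  # the generator's current forgeries

    # 1) Teach the discriminator to separate genuine images from forgeries.
    d_loss = (bce(D(real), torch.ones(BATCH, 1)) +
              bce(D(fake.detach()), torch.zeros(BATCH, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Teach the generator to produce forgeries the discriminator rates as real.
    g_loss = bce(D(fake), torch.ones(BATCH, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```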

Various industries are keen on the technology’s potential commercial benefits. Since 2020, television networks in South Korea have been testing digital news anchors to deliver breaking news more quickly. Hollywood is using similar technology in “de-aging” older actors to play younger characters onscreen. The fashion industry sees potential for deepfakes to allow customers to try on clothing virtually, or to slash advertising production costs by up to 75 percent by eliminating the need for human models. London-based software firm Synthesia has helped thousands of companies develop corporate training and communications programs in at least 60 languages, using more than 45 different deepfake avatars.

Yet no matter the business upside for some industries, the broader risks for social cohesion are enormous. As deepfake content evolves and proliferates, it will bring a frightening new tangibility to mis- and disinformation and greater velocity to their spread, especially when combined with the profit motive of surveillance capitalism and underlying advances in AI itself.

Platforms’ AI-powered content algorithms already pitch facts and falsehoods as competing commodities online, presenting radically different realities from one user to another. Meanwhile, deepfake generator apps and voice emulators are improving at a rapid pace and now require little to no coding or programming ability for users. The Belgian visual effects artist behind the DeepTomCruise TikTok account — which posts deepfakes of the A-list star doing banal things like eating cereal or cleaning a mop bucket — has predicted that the most advanced deepfake capabilities today will be outdone by a Snapchat filter by 2025.

Computer scientist Stuart Russell, founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley, has warned that “more advanced AI will enable malicious actors to do what they already do, at a far greater scale.” Russell underscores this by highlighting a common pattern among social media users at opposite ends of the political spectrum: the more extreme an individual’s views, the more predictable the content they consume and the emotional triggers that prompt them to engage.

Deepfakes will be manna from heaven for algorithms that prioritize reinforcing the visceral beliefs and suspicions of predictable users — thereby generating more engagement, data and revenue for platforms. In the lead-up to the United States’ 2020 presidential elections, Peter Donolo, the communications director for former Canadian prime minister Jean Chrétien, identified these same dynamics as the long-standing business model of America’s cable news networks. Politics as entertainment and outrage, particularly on the right, Donolo wrote in a piece for the Globe and Mail, “is an endless cycle that radicalizes viewers and monetizes that radicalization.”

Fake Content, Real Consequences

The possible applications of deepfakes are almost endless.

Cybersecurity software company Trend Micro claims the technology will be the next frontier of enterprise fraud. In one notorious incident from 2019, criminals duped an executive of a British energy company into wiring them US$243,000 by synthetically replicating the voice of the executive’s German boss. In April 2021, senior parliamentarians in the Baltic states and the United Kingdom — including two chairs of parliamentary foreign affairs committees — briefly joined video calls with a troll masquerading as an ally of jailed Russian opposition figure Alexei Navalny.

With geopolitics in a multipolar world becoming more hostile and complex, deepfakes could prove to be a useful tool for foreign interference and subversion as well.

Ruthless actors will latch onto how deepfakes can enhance methods of state-sponsored blackmail of politicians and bureaucrats, enabling spy agencies to secure valuable human intelligence in an era when potential moles are more reluctant to be recruited because of the omnipresence of digital surveillance. Counterintelligence possibilities exist too. The head of automation within the National Geospatial-Intelligence Agency, a support unit of the US Department of Defense, told a technology conference in March 2019 that China has developed expertise in using deepfake technology to create fake satellite imagery. These forgeries are being released and promoted as supposedly benign open-source material in the hopes of tricking Western military planners and analysts into using them.

In 2020, an investigation by The Daily Beast uncovered that conservative media and news outlets across the world had been hoodwinked into publishing fake analysis from “Raphael Badani” — a fictional international affairs expert. Badani’s online profile was just one among a network of 19 made-up personas, each with a credible online presence that, over the span of one year, had close to 100 articles published across dozens of publications in the United States, the Middle East and Asia. All these pieces parroted the foreign policy agenda of the United Arab Emirates. In future, deepfakes of similar bogus experts could appear on foreign news talk shows or radio programs to strategically drip-feed propaganda to key audiences abroad.

From the perspective of an aggressor, deepfakes will lend themselves to the production of more believable false-flag operations and a grim escalation of “whataboutism” — the practice of minimizing war crimes by manufacturing evidence showing opponents committing similar atrocities. On March 27, for example, pro-Russia accounts on social media started sharing an unconfirmed video of Ukrainian soldiers torturing Russian prisoners of war by shooting them in the legs during interrogations.

And as historian Margaret MacMillan has recently said, Vladimir Putin is selling his merciless invasion of Ukraine on the basis of correcting a purported historical injustice — highlighting how history itself is becoming an instrument of war. Author Naomi Klein describes this same phenomenon as toxic nostalgia — “a violent clinging to a toxic past and a refusal to face a more entangled and interrelational future.”

Deepfakes will allow aggrieved actors to rationalize their agendas by bringing their alternative histories vividly into the present, as demonstrated by a project launched in 2020 by the Massachusetts Institute of Technology’s Center for Advanced Virtuality. To educate the public about deepfakes, researchers worked with various AI companies to build a website hosting a convincing seven-minute deepfake of former US president Richard Nixon announcing, in a televised address from the White House in 1969, that the Apollo 11 moon landing mission had ended in deadly failure.

Domestically, deepfakes will aggravate political and social tribalism by making computational propaganda more effective at influencing electoral outcomes. Conspiracy theories could gain new palpability as well. From White Replacement Theory to Donald Trump’s Big Lie and the Great Reset, those who benefit politically or economically from conspiratorial movements will be able to breathe new life into these wayward causes by producing the audiovisual “proof” that adherents crave. The same goes for QAnon, which Alex Kaplan, a researcher at left-leaning watchdog group Media Matters for America, calls an “anti-reality online distribution network.”

Authoritarian populist groups have also been adept at using some form of conspiracy theory as a political tactic, mostly to discredit the expertise and institutions that serve as a check on power in mature democracies.

In her book How to Lose a Country: The Seven Steps from Democracy to Dictatorship, Turkish writer and political commentator Ece Temelkuran outlines the template that Turkey’s President Recep Tayyip Erdoğan and his ruling Justice and Development Party — together the pioneers of twenty-first-century authoritarian populism — first developed in the late 2000s to control media narratives and influence public thought. Mistruths are born within internal fora, such as private chat groups or political party communications. These mistruths are then spread virally through social media by trusted operatives and fringe media allies using a legion of automated bot accounts, before being picked up by sympathetic channels within mainstream or state-owned media that profess to follow certain editorial codes of conduct. Here, invented conspiracy theory is sanitized as it is wrapped in a veneer of corporate or state-backed credibility.

This same playbook was used in Israel in 2020, when a Facebook page called Zionist Spring began posting stories from “leftists for Bibi” — made-up voters professing disillusionment with the Israeli left because of protests rising up against then prime minister Benjamin Netanyahu after he was indicted on corruption and bribery charges. The fabricated testimonies were picked up by far-right channels friendly to Netanyahu and shared thousands of times, even after they were known to be untrue.

These instances illustrate how deepfakes can be expected to inflate the so-called liar’s dividend. Falsehoods will gain exposure and new life through attempts to debunk them, or through users clicking on them out of curiosity to see whether they can spot the fake for themselves. At the same time, the sheer volume of synthetic content in circulation will make it easier to cast doubt on legitimate material. In reaction to the Zelenskyy deepfake, Sam Gregory, a technologist and program director at New York-based rights group Witness, told NPR, “it’s easy to claim a true video is falsified and place the onus on people to prove it’s authentic.”

Fledgling democracies in the developing world will be hit hardest. Governments in the Global South often lack the resources and expertise to combat the spread of disinformation in all its dimensions. In some cases — such as instances of African governments benefiting from Russian influence campaigns — doing so would harm a ruling party’s self-interest.

And thanks to Meta’s Free Basics initiative, Facebook and its related apps are the de facto internet for tens of millions of people in dozens of developing countries who can’t afford mobile data. Yet despite this massive uptake of its services, documents provided to the Wall Street Journal as part of the Facebook Files show that in 2020 the platform’s employees and contractors spent only 13 percent of their 3.2 million working hours addressing false or misleading material posted outside the United States, even though American users make up just six percent of Facebook’s 2.9 billion active users worldwide. Facebook’s automated AI-monitoring systems, which are supposed to flag disinformation and abusive content, are hobbled by language barriers in non-English-speaking markets. These include countries such as Ethiopia, India, Myanmar and Nigeria, where Facebook, WhatsApp and Instagram have all been openly used to incite political violence and broadcast online hate campaigns against rival ethnic groups or government critics.

So long as social media platforms and their content-selection algorithms remain opaque and unregulated — and growing numbers of citizens in democratic societies rely on social media as their main gateway to information — deepfakes are bound to be used to indulge biases and reinforce unfounded beliefs. But there are ways to push back.

The Best Defence Is a Good Offence

Mainstream awareness of deepfakes arguably emerged in early 2018 — first through media reporting on the synthetic celebrity porn fad on Reddit, but also because of a public service announcement (PSA) by BuzzFeed News featuring a deepfake video of Barack Obama. In the PSA, a likeness of Obama, revealed partway through to be voiced and performed by filmmaker Jordan Peele, warns that “moving forward, we need to be more vigilant with what we trust from the internet,” and urges viewers to rely even more on “trusted news sources.”

A vigilant, critical press will remain a key factor in flagging digital hoaxes. Help is also coming from steady progress in deepfake detection software, alongside a growing number of online practice programs that teach any internet user how to spot deepfake content. Doing so typically involves looking for unevenness in a subject’s complexion or voice, an unnaturally slow or rapid rate of blinking, an absence of light reflections where some should be — or simply a computer-generated eeriness to the whole scene.

Ukraine’s swift denunciation of the Zelenskyy deepfake also underscores the necessity for governments, politicians, corporations and other organizations — including established media outlets, which could suffer deepfake attempts to discredit their work — to proactively warn about the threat of deepfakes and prepare responses in advance.

Deepfakes are a boon for autocracies because authoritarians gain strength from technology that obfuscates and lends itself to social control. Democracies, on the other hand, rest on a foundation of freedom of expression, alternative viewpoints, uncomfortable truths and the open exchange of ideas — no matter how fractious that exchange often is. The rapid evolution of disinformation in recent years has revealed that this same foundation is also a vulnerability. With deepfake technology on the cusp of rapid growth, the problem is likely to get much worse.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Kyle Hiebert is a researcher and analyst formerly based in Cape Town and Johannesburg, South Africa, as deputy editor of the Africa Conflict Monitor.