How can Deepfakes Impact the 2020 US Elections?

Fake news, and specifically deepfakes, might turn out to be a crucial challenge during the 2020 US elections. Deepfake technologies can be used to make video and audio clips of individuals doing and saying things they never did or said. In a paper in the SSRN eLibrary, ViSmedia researchers Deborah G. Johnson and Nicholas Diakopoulos address the ethical implications of deepfakes in an election context.

Vote illustration. Photo: ViSmedia

Deepfakes have significant implications for the integrity of many social domains, including elections, the authors point out. You can synthesize a particular individual’s voice, swap one person’s face onto another person’s body in a video, or synthesize an entirely new video. The Face2Face system enables real-time facial puppeteering by taking an input video of an actor’s face and transferring the mouth shape and expressions onto a synthesized target face. This technology can be paired with the target person’s real voice as input, the voice of an impersonating actor, or a fully synthesized voice mimicking the target person.
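
To make the mechanics a little more concrete, the sketch below (in Python) outlines the general shape of a face-reenactment pipeline of the kind described above. It is not the Face2Face system or any real implementation: the functions extract_expression, render_target and reenact are hypothetical placeholders standing in for the learned face models and neural renderers an actual system relies on.

import numpy as np

def extract_expression(actor_frame: np.ndarray) -> np.ndarray:
    """Estimate expression parameters (e.g. mouth shape) from one frame of the actor.
    Placeholder: a real pipeline fits a parametric face model to the image."""
    return np.zeros(64)  # hypothetical 64-dimensional expression vector

def render_target(target_identity: np.ndarray, expression: np.ndarray) -> np.ndarray:
    """Re-render the target person's face with the actor's expression.
    Placeholder: a real pipeline uses a 3D morphable model or a neural renderer."""
    return np.zeros((256, 256, 3), dtype=np.uint8)  # hypothetical synthesized frame

def reenact(actor_frames, target_identity):
    """Transfer the actor's expressions, frame by frame, onto the target face."""
    return [render_target(target_identity, extract_expression(f)) for f in actor_frames]

# Toy usage: ten blank "frames" driving a synthesized target face.
actor_frames = [np.zeros((256, 256, 3), dtype=np.uint8) for _ in range(10)]
target_identity = np.zeros(128)  # hypothetical identity embedding of the target person
fake_frames = reenact(actor_frames, target_identity)
print(len(fake_frames), fake_frames[0].shape)  # 10 (256, 256, 3)

The point of the sketch is simply that the actor supplies only the expressions; everything the viewer sees of the target person is synthesized.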

Less technology-intensive “cheapfakes”, such as the widely circulated clip of House Speaker Nancy Pelosi that was slowed to make her appear intoxicated, have already demonstrated the potential for even low-tech manipulated video to have an impact.

In the groundbreaking article, Johnson and Diakopoulos not only anticipate and identify the ethical problems with deepfakes; they also present strategies to cope with the rapidly evolving technologies that enable such manipulation.

Deepfake technology offers a host of potential benefits in entertainment and education, while at the same time it challenges aural and visual authenticity and enables the production of disinformation by bad actors. Deepfakes have the potential to wreak havoc in contexts such as news, where audio or video is treated as a form of evidence that something actually happened. Deepfakes also have the potential to undermine democratic elections.

The harm of deepfakes
The authors found that deepfakes can cause harm not only to the targeted subjects of deepfakes, but also to the viewers and to social institutions. In the election context, this means that deepfakes can cause harm to the voters, to the candidates and to the democratic election system. 

Deception harms voters by inducing them to believe something about a candidate that is not true, undermining their ability to make informed decisions in their own best interests. While this is true of any false information distributed during an election, deepfakes are unusual insofar as they appear more authentic.

Intimidation
The use of deepfakes to intimidate people has already emerged as an ethical issue related to deepfake pornography. In the election context, deepfakes can be used to pressure targeted voters into not voting: the pressure is exerted by means of a threat to release a defamatory pornographic deepfake of the voter if he or she votes.

Harms to Subjects
In the election context, the primary targets of deepfakes are campaigns and candidates. Reputational harm can be caused by videos depicting a candidate saying something that suggests the candidate is not who he or she purports to be, for instance by making the candidate seem hypocritical.

Legally, deepfakes of this kind could fall under the concept of defamation (the act of damaging a person’s good reputation). However, the courts have been reluctant to interfere with campaign speech, and U.S. courts have been divided on the constitutionality of laws that prohibit false campaign speech. This reluctance to ban false speech has more to do with the dangers and challenges of regulating false claims than with any denial that false speech is harmful.

Manipulation of a person’s image, as in deepfakes, can violate the subject’s ownership rights over their own image. Such manipulation is not problematic when the subject has granted permission for their image to be used, and it is less problematic in contexts in which the falsity is clearly evident (e.g. parody).

What to do
Johnson and Diakopoulos identified four intervention strategies: education and media literacy, subject defence, verification, and publicity modulation.

The authors suggest that individuals should develop awareness of the capabilities of technology so as to be able to spot characteristic flaws in deepfakes, or more generally acquire skills in how to verify, fact check, and do research to assess online sources of information. An unintended consequence of increased public education about deepfakes is the possibility of what Chesney and Citron (2019) have termed the “liar’s dividend.” This is the idea that as individuals learn to be more critical and skeptical of media, they may begin to doubt real video and audio evidence. This in turn makes it easier for some to deny and cast doubt on the occurrence of real events.

In the election context, campaigns could adopt a strategy of monitoring platforms where deepfakes circulate so that any manipulated representations of the candidate can be quickly debunked. The problem with reactive defence interventions is that they do not stop the deepfake from being seen; many of the harms, such as intimidation and the undermining of trust, are done regardless.

Deborah G. Johnson, Professor, University of Virginia. Photo: Private

Proactive defence
In terms of proactive defence, the authors find that options are more limited. Since deepfakes require audio-visual material as training data, campaigns could try to restrict the availability of a candidate’s audio-visual material in a way that would prevent the creation of deepfakes.

A potentially powerful antidote to deepfakes is the use of verification techniques and technologies that help reveal how the audio and/or video was constructed. These provide evidence about whether a piece of media was synthesized entirely or in part, and include automated algorithms as well as semi-automated forensic procedures for detection and for determining provenance.

Nicholas Diakopoulos, Assistant Professor at Northwestern University School of Communication. Photo: Private

Verification techniques increase the likelihood that fakes will be identified as such and, therefore, minimize harm to viewers. Verification can also provide support to other intervention strategies. Since automated techniques are unable to detect the intent of a deepfake’s author, parody and satire may be weeded out along with deceptive content. Hence, in practice, verification techniques will need to be paired with expert human moderators capable of assessing intent and appropriateness, the authors conclude.
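
As a rough illustration of what such an automated check might look like, the toy Python sketch below scores each frame of a video against the video’s own baseline and flags outliers for human review. It is not any specific forensic tool, and the blur-consistency heuristic it uses (assumed here purely for illustration) is far simpler than the trained detectors and provenance analyses the authors have in mind.

import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Crude sharpness proxy: variance of a discrete Laplacian of a grayscale frame."""
    lap = (
        -4 * frame[1:-1, 1:-1]
        + frame[:-2, 1:-1] + frame[2:, 1:-1]
        + frame[1:-1, :-2] + frame[1:-1, 2:]
    )
    return float(lap.var())

def flag_suspicious_frames(frames, z_threshold=3.0):
    """Flag frames whose sharpness deviates strongly from the video's own baseline."""
    scores = np.array([sharpness(f) for f in frames])
    z = (scores - scores.mean()) / (scores.std() + 1e-9)
    return [i for i, zi in enumerate(z) if abs(zi) > z_threshold]

# Toy usage with synthetic grayscale frames; frame 5 is made unnaturally smooth
# to mimic the kind of local inconsistency a splice or face swap can leave behind.
rng = np.random.default_rng(0)
frames = [rng.random((64, 64)) for _ in range(30)]
frames[5] = np.full((64, 64), 0.5)
print(flag_suspicious_frames(frames))  # should flag frame index 5

Even such a crude signal illustrates the division of labour the authors describe: automated scoring narrows attention to suspect material, while human moderators make the final judgement about intent and appropriateness.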

Paper reference
Diakopoulos, Nicholas and Johnson, Deborah, Anticipating and Addressing the Ethical Implications of Deepfakes in the Context of Elections (October 23, 2019).

Available at SSRN