AUG 27 2020
Imagine a few days before an election, a video of a candidate is released showing them using hate speech, racial slurs and epithets that undercut their pro-minority image. Imagine a teenager seeing an explicit video of themselves on social media. Imagine a CEO on the road to raise money to take their company public when an audio clip of them stating their fears and anxieties about the product is sent to investors. These scenarios are examples of the malicious use of AI-generated synthetic media known as deepfakes[1].
Recent advancements in Artificial Intelligence (AI) have led to rapid developments in audio, video, and image manipulation techniques. Access to commodity cloud computing, publicly available AI algorithms, and abundant data has created a perfect storm, democratising the creation of deepfakes for distribution at scale via social platforms.
Although deepfakes can be used in positive ways, such as in art, expression, accessibility, and business, they have mainly been weaponised for malicious purposes. Deepfakes can harm individuals, businesses, society, and democracy, and can accelerate the already declining trust in the media. Such an erosion of trust will promote a culture of factual relativism, unravelling the increasingly strained fabric of democracy and civil society. Additionally, deepfakes can enable undemocratic and authoritarian leaders to thrive, as they can leverage the ‘liar’s dividend’, where any inconvenient truth is quickly discounted as ‘fake news’[2].
Creating a false narrative using deepfakes is dangerous and can cause harm, intentional and unintentional, to individuals and society at large. Deepfakes could worsen the global post-truth crisis as they are not just fake but are so realistic that they betray our most innate senses of sight and sound. Putting words into someone else’s mouth, swapping someone’s face with another, and creating synthetic images and digital puppets of public personas to systematise deceit are ethically questionable actions, and those who take them should be held responsible for the potential harm to individuals and institutions.
Deepfakes can be used by non-state actors, such as insurgent groups and terrorist organisations, to represent their adversaries as making inflammatory speeches or engaging in provocative actions to stir up anti-state sentiments among people. For instance, a terrorist organisation can easily create a deepfake video showing western soldiers dishonouring a religious place to inflame existing anti-West sentiment and cause further discord. States can use similar tactics to spread computational propaganda against a minority community or another country, for instance, a fake video showing a police officer shouting anti-religious slurs or a political activist calling for violence. All this can be achieved with minimal resources, at internet scale and speed, and can even be microtargeted to galvanise support.
Deepfakes created to intimidate, humiliate, or blackmail an individual are unambiguously unethical, and their impact on the democratic process must be analysed.
Types of Deepfakes
Celebrity and revenge pornography were among the early malicious uses of deepfakes. Deepfake pornography sits within the macro-context of gender inequality and exclusively targets and harms women, inflicting emotional and reputational harm. About 96 percent of deepfakes are pornographic videos, with over 134 million views on the top four deepfake pornographic websites[3].
Pornographic deepfakes can threaten, intimidate, and inflict psychological harm on an individual. They reduce women to sexual objects and torment them, causing emotional distress, reputational harm, abuse and, in some cases, even financial or employment loss.
Deepfake pornography, often non-consensual, is disturbing and immoral, and several sites have pre-emptively banned such content[4][5][6].
The ethical issue is far more convoluted where consensual synthetic pornography is concerned. While some may argue that this is equivalent to the morally acceptable practice of sexual fantasy, consensual deepfakes could normalise the idea of artificial pornography, which could further exacerbate concerns about the negative impact of pornography on psychological and sexual development. Realistic virtual avatars could also lead to negative outcomes: even if acting adversely towards a virtual avatar is deemed morally acceptable, how will this shape behaviour towards real people?
Another area of concern is synthetic resurrection. Individuals have the right to control the commercial use of their likenesses. In a few US states, like Massachusetts and New York, this right extends to the afterlife as well. But this can be a different and more complex process in other countries[7].
The main question concerning public personalities is who owns their face and voice once they die. Can these be used for publicity, propaganda, and commercial gain? There are moral and ethical concerns about how deepfakes can be used to misrepresent political leaders posthumously to achieve political and policy motives. Although there are some legal protections against using the voice and face of a deceased person for commercial gain, heirs who hold the legal rights to these features can use them for their own commercial benefit.
Another potential ethical concern is creating deepfake audio or video of a loved one after they have passed. There are voice technology companies that will create a synthetic voice as a new kind of bereavement therapy, or to help people remember the deceased and remain connected with them[8]. Although some may argue that this is akin to keeping pictures and videos of the deceased, there is a moral ambiguity about using vocal and facial features to create synthetic digital versions of a person.
Although voice assistants like Alexa, Cortana and Siri are increasingly sounding more realistic, people can still identify them as synthetic voices. Improvements in speech technology allow voice assistants to imitate human and social elements of speech, such as pauses and verbal cues. Efforts are underway, such as Google’s Duplex, to develop voice assistant features that can make calls on behalf of a person in a way that is indistinguishable from a human voice[9].
A human-sounding synthetic voice raises several ethical concerns. Since deepfake voice technology is designed to mimic the human voice, it could undermine real social interaction. Racial and cultural bias could also arise from prejudice in the training datasets for these tools.
Synthetic voice deepfakes can also be used to deceive people for monetary and commercial gain. Automated call centres, fraudsters impersonating public personalities through deepfake audio, and phone scammers can all use synthetic voice tools maliciously for their benefit.
Deepfake technology can create entirely artificial faces, people, or objects. Creating and enhancing fake digital identities for fraud, espionage or infiltration purposes is unethical[10].
A synthetic face is generated by training a deep-learning algorithm on a large set of real face images. But it is unethical to train a model using real faces unless proper consent for such use has been granted.
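Concretely, such face-synthesis systems are typically generative adversarial networks (GANs): a generator learns to produce faces while a discriminator learns to distinguish them from real photographs, and each improves by competing with the other. The sketch below, in Python with PyTorch, is a minimal illustration of that adversarial loop; the layer sizes, the 32x32 output resolution and the training_step helper are simplifying assumptions for brevity, not the design of any actual deepfake tool.

```python
# Minimal GAN sketch for face synthesis (illustrative only; layer sizes and
# helper names are assumptions, not any specific deepfake tool's design).
import torch
import torch.nn as nn

LATENT_DIM = 100  # length of the random noise vector fed to the generator

class Generator(nn.Module):
    """Upsamples a noise vector into a 32x32 RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(LATENT_DIM, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores an image: high logit = judged real, low = judged fake."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 8, 1, 0),  # collapse 8x8 map to a single logit
        )

    def forward(self, x):
        return self.net(x).view(-1)

def training_step(gen, disc, real_faces, opt_g, opt_d):
    """One adversarial round: the discriminator learns to separate real photos
    from generated ones; the generator learns to fool the discriminator."""
    bce = nn.BCEWithLogitsLoss()
    n = real_faces.size(0)
    fake_faces = gen(torch.randn(n, LATENT_DIM, 1, 1))

    # Discriminator update: real images labelled 1, generated images labelled 0.
    opt_d.zero_grad()
    loss_d = bce(disc(real_faces), torch.ones(n)) + \
             bce(disc(fake_faces.detach()), torch.zeros(n))
    loss_d.backward()
    opt_d.step()

    # Generator update: push the discriminator to call the fakes "real".
    opt_g.zero_grad()
    loss_g = bce(disc(fake_faces), torch.ones(n))
    loss_g.backward()
    opt_g.step()

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
# Stand-in batch; in reality `real_faces` would come from a (consented) dataset.
training_step(gen, disc, torch.randn(8, 3, 32, 32), opt_g, opt_d)
```

The ethical point above maps directly onto real_faces: the model’s realism comes entirely from the real photographs it is trained on, which is why consent over that training data matters.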
Democratic Discourse and Process
In politics, stretching the truth, overrepresenting a policy position and presenting alternate facts are normal tactics. They help mobilise, influence and persuade people to gain votes and donors. Political opportunism is unethical but is now the norm.
Deepfakes and synthetic media may have a profound impact on the outcome of an election if political parties choose to use them. Deception harms individuals because it impedes their ability to make informed decisions in their own best interests. Intentionally distributing false information about the opposition, or presenting an alternate truth about a candidate, manipulates voters into serving the interests of the deceiver[11]. These practices are unethical and offer limited legal recourse. Similarly, a deepfake used to intimidate voters into not voting is immoral as well.
Deepfakes may also be used for misattribution: telling a lie about a candidate, falsely amplifying their contributions, or inflicting reputational harm. A deepfake with an intent to deceive, intimidate, misattribute, and inflict reputational harm to perpetuate disinformation is unambiguously unethical. It is also unethical to invoke the liar’s dividend.
A Moral Obligation
The creators and distributors of deepfakes must ensure that synthetic media is employed ethically. Big technology platforms like Microsoft, Google, and Amazon, which provide tooling and cloud computing to create deepfakes with speed and at scale, have a moral obligation[12]. Social media platforms like Facebook, Twitter, LinkedIn and TikTok, which offer the ability to distribute a deepfake at scale, must show an ethical and social responsibility towards the use of deepfakes, as must news media organisations and journalists, legislators and policymakers, and civil society.
The ethical obligation of social and technology platforms is to prevent harm. While users on these platforms have a responsibility for how they share and consume content, structural and informational asymmetries make it hard to expect them to play the primary role in responding effectively to malicious deepfakes. Shifting this burden to users might be ethically defensible; still, platforms must do the right thing and bear the primary responsibility for identifying and preventing the spread of misleading and manipulated media.
Most technology and social platforms have policies for disinformation and malicious synthetic media, but these must be aligned with ethical principles. For instance, if a deepfake can cause significant harm (reputational or otherwise), the platforms must remove such content. Platforms should also apply dissemination controls or differential promotion tactics, such as limited sharing or downranking, to stop the spread of deepfakes on their networks. Labelling content is another effective tool, and it should be deployed objectively and transparently, without political bias or business-model considerations.
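To make these graded responses concrete, the sketch below (Python) shows one way such a policy tier could be encoded: an estimated probability that a piece of media is synthetic, combined with a harm assessment, maps to remove, downrank, label, or allow. The thresholds, field names and Action categories here are hypothetical illustrations, not any platform’s actual policy engine.

```python
# Illustrative moderation-triage sketch (hypothetical thresholds and actions;
# not any platform's actual policy engine).
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    REMOVE = "remove"      # take the content down
    DOWNRANK = "downrank"  # limit algorithmic distribution
    LABEL = "label"        # attach a 'manipulated media' notice
    ALLOW = "allow"        # e.g. clearly disclosed satire or parody

@dataclass
class Assessment:
    fake_probability: float  # detector confidence that the media is synthetic
    harm_score: float        # 0..1 editorial estimate of potential harm
    disclosed_satire: bool   # creator labelled it as parody or satire

def triage(a: Assessment) -> Action:
    """Map an assessment to a graded response, mirroring the tiers above."""
    if a.disclosed_satire:
        return Action.ALLOW
    if a.fake_probability > 0.9 and a.harm_score > 0.7:
        return Action.REMOVE    # likely fake and likely to cause harm
    if a.fake_probability > 0.9:
        return Action.LABEL     # likely fake, limited harm: inform users
    if a.fake_probability > 0.6:
        return Action.DOWNRANK  # uncertain: slow the spread pending review
    return Action.ALLOW

# Example: a convincing fake of a public figure with high harm potential
print(triage(Assessment(fake_probability=0.95, harm_score=0.8, disclosed_satire=False)))
```

In practice the inputs themselves are the hard part: detection scores are noisy and harm is contested, which is why the surrounding norms and transparency matter as much as any decision rule.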
Platforms bear ethical obligations to create and maintain the dissemination norms of their user community. Framing of community standards, community identity and user submission constraints can have a real impact on content producers. Norms and community guidelines, including examples of desirable behaviour and positive expectations of users as community participants, can reinforce behaviour consistent with those expectations. Terms of use and platform policies play a meaningful role in preventing the spread of harmful fabricated media.
Institutions interested in combating problems related to manipulated media have an ethical obligation to ensure access to media literacy programmes. Platforms must empower users with knowledge and critical media literacy skills to build resiliency and engage intelligently to consume, process, and share information. Practical media knowledge can enable users to think critically about the context of media and become more engaged citizens, while still appreciating satire and parody.
Conclusion
Deepfakes make it possible to fabricate media, often without consent, and can cause psychological harm, political instability, and business disruption. The weaponisation of deepfakes can have a massive impact on the economy, personal freedom, and national security. The ethical implications of deepfakes are enormous. Deepfake threat models, harm frameworks, ethical AI principles and commonsense regulations must be developed through partnerships and civil society oversight to promote awareness and encourage advancement and innovation.
Endnotes
[1] Robert Chesney and Danielle Keats Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security”, California Law Review 107 (2019): 1753.
[2] Ibid.
[3] Giorgio Patrini, Henry Ajder, Francesco Cavalli, and Laurence Cullen, “The State of Deepfakes”, Deeptrace, October 7, 2019.
[4] LinkedIn, “LinkedIn Professional Community Policies”, LinkedIn, 2020.
[5] Reddit, “Updates to Our Policy Around Impersonation”, Reddit, January 9, 2020.
[6] Twitter, “Building rules in public: Our approach to synthetic & manipulated media”, Twitter, February 4, 2020.
[7] Tim Sharp, “Right to Privacy: Constitutional Rights & Privacy Laws”, Live Science, 2013.
[8] Henry Ajder, “The ethics of deepfakes aren’t always black and white”, The Next Web, June 16, 2019.
[9] Natasha Lomas, “Duplex shows Google failing at ethical and creative AI design”, Tech Crunch, May 10, 2018.
[10] “The State of Deepfakes”
[11] Nicholas Diakopoulos and Deborah Johnson, “Anticipating and Addressing the Ethical Implications of Deepfakes in the Context of Elections”, New Media & Society, 2020.
[12] Mira Lane, “Responsible Innovation: The Next Wave of Design Thinking”, Microsoft Design on Medium, May 19, 2020.
What are the ethics surrounding deepfakes? What are the health implications of deepfakes? What should be the laws governing deepfakes?
If these articles have been helpful to you and yours, give a donation to Shidonna Raven Garden and Cook Ezine today. All Rights Reserved – Shidonna Raven (c) 2025 – Garden & Cook.