AI and Deepfakes in the Courtroom
Last Updated on April 22, 2024 by Editorial Team
Author(s): Nimit Bhardwaj
Originally published on Towards AI.
From seamlessly swapping audio/visual elements to fabricating entirely false material, the impact of deepfakes on public trust and society looms large, especially in the wake of 2016 political events such as Trump's election in the US and the Brexit vote in the UK[1].
Once considered a distant threat, deepfakes now pose genuine, immediate dangers, with advancements like OpenAI's Sora pushing the boundaries of deception.
Deepfakes (a portmanteau of "deep learning" and "fake") are a form of advanced artificial intelligence that blurs reality by enabling the creation of synthetic yet realistic image, audio, and video hoaxes[2].
In this article, we discuss how deepfakes jeopardize public trust in the legal system, how the technology undermines the authenticity of evidence, and what this can mean for wider society.
A Brief History of Technology and Court Evidence Types
Evidence plays a crucial role in any courtroom, but it is only ever as good as everyone's ability to trust it. We started with witness testimony and physical artefacts, which have since been trumped by voice recordings, photographs, and videos such as CCTV footage.
Technology-based evidence creates more concrete arguments: a video of someone committing a murder is a bit more convincing than the murder weapon being found in their possession. Hence, "historically, audio and video evidence are considered [the] gold standard"[3].
But with all types of evidence, there is room for manipulation. An eyewitness could lie, or misinterpret something they see. A murder weapon can be planted, and fingerprints can be tampered with.
Once technology is introduced, these manipulations scale in complexity. Voice-editing software and photo-editing tools such as Photoshop can cast doubt on the authenticity of audio and visual evidence.
Whereas traditional dishonesty often leaves a physical trail of evidence to follow, technological manipulation requires a higher level of skill to trace, e.g. spotting inconsistent timestamps, audio splicing, and minor discrepancies indicative of tampering[4]. As technology has evolved, the legal system has therefore had to adapt its best-practice procedures over time.
With regard to evidence gathering and use, this has meant more robust and stringent measures to test authenticity and reliability, as well as a new direction of forensic sourcing and testing (digital forensics[5]) and chain-of-custody checks[6].
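One common building block of such chain-of-custody checks is cryptographic hashing: a digest of the file is recorded when evidence is collected, and any later edit to the file changes the digest. The following is a minimal sketch in Python; the file name and contents are illustrative assumptions, not details from any real case.

```python
import hashlib
from pathlib import Path

def file_sha256(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical workflow: hash the recording at collection time,
# then re-hash it before trial. Any tampering changes the digest.
recording = Path("recording.wav")
recording.write_bytes(b"original audio bytes")
digest_at_collection = file_sha256("recording.wav")

# Simulate the file being doctored between collection and trial.
recording.write_bytes(b"doctored audio bytes")
digest_at_trial = file_sha256("recording.wav")

print(digest_at_collection == digest_at_trial)  # prints False: tampering detected
```

A hash proves only that the bytes changed, not who changed them or how; in practice digital forensics pairs hashing with access logs, metadata analysis, and documented handover procedures.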
However, deepfakes now present an even more significant leap in evidence manipulation capabilities. These sophisticated AI-generated media can convincingly depict individuals saying or doing things they never actually did. Unlike previous forms of manipulation, deepfakes go so far as to blur the line between reality and fabrication, posing unprecedented challenges for courts in assessing the authenticity and reliability of audio-visual evidence.
Case Studies
The most obvious ways deepfakes can be used to circumvent the law are:
- The fabrication of evidence to provide alibis for activities and prove innocence.
- The fabrication of evidence showing someone's guilt and involvement in a crime.
- Using the possibility of deepfakes to contest the legitimacy of authentic yet incriminating material[7].
Our first case study looks at fabricating evidence to incriminate someone wrongfully.
In 2019, a deepfaked recording was used in a UK custody battle to discredit the father's suitability for shared custody. According to the father's lawyer, Byron James, a "heavily doctored recording" had been presented to the court in which the father is heard making "direct and violent" threats to his wife[8].
However, after further examination, it was found that the recording presented in court had been manipulated to include words which had not been used by the father. In fact, demonstrative of how increasingly accessible editing technologies have become, the mother had used "software and online tutorials to put together a plausible file"[8].
While James had not previously encountered AI-doctored evidence in the courtroom, he has since commented on how it calls into question what kind of evidence can actually be trusted these days. Had they not discovered the tampering and obtained the original file and its metadata, the mother might have persuaded the court of the father's fictitiously violent character.
While the outcome of the hearing and the identities of those involved are confidential, the case highlights the dangers of taking audio/visual evidence at face value. Moreover, with deepfake technologies becoming ever easier to access while many judges belong to an older generation, the necessary wariness of deep-learning technologies may not be understood.
Our second case study looks at people claiming real videos are deepfakes. In a more high-profile case, Elon Musk's lawyers attempted to get a lawsuit against Tesla dismissed by claiming Musk had been the victim of deepfake videos.
In 2016, Musk was filmed at a tech conference speaking about the extreme safety of the self-driving autonomous features of Tesla's Model S and Model X. The videos from this interview have been on YouTube for seven years. In 2023, however, they resurfaced after a man died when his Tesla crashed while in self-driving mode, and the man's family and lawyers cited those 2016 claims. Musk's lawyers tried to deny their authenticity[7].
Being such a public figure, Musk is indeed the subject of deepfakes, which is what his lawyers tried to argue in this instance. The video in question was, of course, real, and the courts did not buy his lawyers' claims. Still, the episode highlights how, in the age of deepfakes, people can not only fabricate reality but also deny it.
"As people become more aware of how easy it is to fake audio and video, bad actors can weaponize that scepticism"[7]. Especially for public figures, deepfakes offer a layer of protection and immunity behind which they can hide and avoid taking ownership of reality. Even for non-public figures, such instances have appeared in courts before and are becoming increasingly common.
Deepfakes and International Regulation
Recent regulations targeting deepfakes have emerged globally, with notable initiatives like California's disclosure requirement for AI-generated political ads[9] and the UK's Online Safety Act[10], which mandates that social media platforms crack down on harmful content, explicitly including deepfakes. That said, most current deepfake regulation targets deepfakes made with pornographic intent rather than those used to circumvent justice systems.
While these regulations offer frameworks for accountability and deterrence, their effectiveness hinges on enforcement, resource allocation, skill, and adaptability.
Furthermore, although the need to regulate deepfakes across many applications will only grow, governments and policymakers will have to consider the potential for unintended negative consequences, including impacts on freedom of speech and expression. Vaguely worded laws could lead to censorship or self-censorship, reducing participation in public dialogue and harming democracy.
Balancing the objective of combatting deceptive deepfake media with safeguarding human rights will remain a nuanced challenge, reflecting the complexity of the ethical considerations surrounding deepfakes, society, and public trust.
References
[1] A. M. Guess, B. Nyhan and J. Reifler, Exposure to untrustworthy websites in the 2016 U.S. election (2020), Nature Human Behaviour
[2] N. Barney, deepfake AI (deep fake) (2023), TechTarget
[3] R. Curry, AI deepfakes are poised to enter court proceedings at time of low trust in legal system (2024), CNBC
[4] Audio Forensic Expert, Authentication of Digital Audio Recordings (2014), Audio Forensic Expert
[5] BlueVoyant, Understanding Digital Forensics: Process, Techniques, and Tools (n.d.), BlueVoyant
[6] Champlain College Online, Digital Forensics and the Chain of Custody: How Is Electronic Evidence Collected and Safeguarded? (2024), Champlain College Online
[7] S. Bond, People are trying to claim real videos are deepfakes. The courts are not amused (2023), NPR
[8] P. Ryan, 'Deepfake' audio evidence used in UK court to discredit Dubai dad (2020), The National
[9] T. Wu, California Looks to Boost Deepfake Protections Before Elections (2023), Bloomberg
[10] Ministry of Justice, New laws to better protect victims from abuse of intimate images (2022), GOV.UK