
AI and Deepfakes in the Courtroom

Last Updated on April 22, 2024 by Editorial Team

Author(s): Nimit Bhardwaj

Originally published on Towards AI.

Image generated by DALL·E

From seamlessly swapping audio/visual elements to fabricating entirely false material, the impact of deepfakes on public trust and society looms large, especially in the wake of the 2016 political events, such as Trump's election in the US and the Brexit vote in the UK[1].

Once considered a distant threat, deepfakes now pose genuine, immediate dangers, with advances like OpenAI's Sora pushing the boundaries of deception.

Deepfakes, a combination of 'deep learning' and 'fake', refer to advanced artificial intelligence techniques that blur reality by enabling the creation of synthetic yet realistic images, audio, and video hoaxes[2].

In this article, we discuss how deepfakes jeopardize public trust in the legal system, exploring how the technology undermines the authenticity of evidence and what that can mean for wider society.

A Brief History of Technology and Court Evidence Types

Evidence plays a crucial role in any courtroom, but it is only ever as good as everyone's ability to trust it. We started with witness testimony and physical artefacts, which are now often trumped by voice recordings, photographs, and video footage such as CCTV.

Technology-based evidence creates more concrete arguments: a video of someone committing a murder is a bit more convincing than the murder weapon being found in their possession. Hence, "historically, audio and video evidence are considered [the] gold standard"[3].

But with all types of evidence, there is room for manipulation. An eyewitness could lie, or misinterpret something they see. A murder weapon can be planted, and fingerprints can be tampered with.

Once technology is introduced, these manipulations scale in complexity. Voice-editing software and photo-editing tools, e.g., Photoshop, can cast doubt on the authenticity of audio and visual evidence.

Whereas traditional dishonesty often leaves a physical trail of evidence to follow, technological manipulation demands a higher level of skill to trace, e.g., inconsistent timestamps, audio splicing, and minor discrepancies indicative of tampering[4]. With the evolution of technology, the legal system has therefore had to adapt its best practice procedures over time.
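
As a first intuition for what such checks involve, here is a minimal sketch in Python (standard library only) that prints a file's filesystem timestamps, which an examiner might compare against the date a recording was claimed to have been made. The filename is a hypothetical placeholder, and filesystem times are easy to forge, so real forensic work goes much deeper than this.

```python
import os
from datetime import datetime, timezone

def report_file_times(path: str) -> None:
    """Print filesystem timestamps that can be compared against the date
    a recording was claimed to have been made. These times are trivial
    to alter, so this is a first-pass sanity check, not proof."""
    st = os.stat(path)
    for label, ts in (("last modified", st.st_mtime),
                      ("metadata changed", st.st_ctime)):
        stamp = datetime.fromtimestamp(ts, tz=timezone.utc)
        print(f"{label}: {stamp.isoformat()}")

# Hypothetical exhibit file; compare the output against the claimed date.
report_file_times("court_exhibit.wav")
```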

With regard to evidence gathering and use, that adaptation has meant more robust and stringent measures to test authenticity and reliability, as well as a new direction of forensic sourcing and testing (digital forensics[5]) and chain-of-custody checks[6].
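
One core building block of those chain-of-custody checks is cryptographic hashing: a digest of the file is recorded when the evidence is seized, and anyone handling it later can verify the exhibit is still bit-for-bit identical. Below is a minimal sketch in Python using the standard library's hashlib; the filename is a hypothetical placeholder.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large media files don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Recorded once at seizure and logged in the chain-of-custody record...
baseline = sha256_of("seized_recording.wav")

# ...then recomputed whenever the exhibit changes hands. Changing a single
# byte anywhere in the file yields a completely different digest.
assert sha256_of("seized_recording.wav") == baseline, "exhibit was altered"
```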

However, deepfakes now present an even more significant leap in evidence manipulation capabilities. These sophisticated AI-generated media can convincingly depict individuals saying or doing things they never actually did. Unlike previous forms of manipulation, deepfakes go so far as to blur the line between reality and fabrication, posing unprecedented challenges for courts in assessing the authenticity and reliability of audio-visual evidence.

Case Studies

The most obvious ways deepfakes can be used to circumvent the law are:

  • The fabrication of evidence to provide alibis for activities and prove innocence.
  • The fabrication of evidence showing someone’s guilt and involvement in a crime.
  • Using the possibility of deepfakes to contest the legitimacy of authentic yet incriminating material[7].

Our first case study looks at fabricating evidence to incriminate someone wrongfully.

In 2019, a deepfaked recording was used in a UK custody battle to discredit the father's suitability for shared custody. According to the father's lawyer, Byron James, a "heavily doctored recording" had been presented to the court in which the father is heard making "direct and violent" threats to his wife[8].

However, after further examination, it was found that the recording presented in court had been manipulated to include words the father had never used. In fact, in a demonstration of how accessible editing technologies have become, the mother had used "software and online tutorials to put together a plausible file"[8].

While James had not previously encountered AI-doctored evidence in the courtroom, he has since commented on how it calls into question what kind of evidence can actually be trusted these days. Had the tampering not been discovered, and the original file and its metadata not been obtained, the mother might well have persuaded the court of the father's fictitious violent character.
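
Purely for intuition about what examiners look for, the toy sketch below scans a WAV file for abrupt sample-to-sample jumps, one crude artefact a careless cut-and-splice can leave behind. The threshold and filename are illustrative assumptions, and real forensic audio analysis relies on far more robust techniques, so treat this as a sketch rather than a detection tool.

```python
import struct
import wave

def abrupt_jumps(path: str, threshold: float = 0.5) -> list[int]:
    """Return sample indices where the waveform jumps sharply between
    consecutive samples -- a crude hint of a careless splice. Assumes
    16-bit mono PCM; a toy heuristic, not a forensic method."""
    with wave.open(path, "rb") as wf:
        assert wf.getsampwidth() == 2 and wf.getnchannels() == 1
        raw = wf.readframes(wf.getnframes())
    samples = struct.unpack(f"<{len(raw) // 2}h", raw)
    full_scale = 2 ** 15  # maximum magnitude of a 16-bit sample
    return [i for i in range(1, len(samples))
            if abs(samples[i] - samples[i - 1]) / full_scale > threshold]

suspects = abrupt_jumps("court_exhibit.wav")  # hypothetical file
print(f"{len(suspects)} suspicious discontinuities found")
```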

While the outcome of the hearing and the identities of those involved are confidential, the case highlights the dangers of taking audio/visual evidence at face value. Not only this, but given the increasingly widespread ease of access to deepfake technologies, set against the relatively older age of judges, the necessary wariness of deep learning technologies may not be fully appreciated in court.

Our second case study looks at people claiming real videos are deepfakes. In a higher-profile case, Elon Musk's lawyers attempted to have a lawsuit against Tesla dismissed by claiming Musk had been the victim of deepfake videos.

In 2016, Musk was videoed at a tech conference speaking about the extreme safety of Tesla's Model S and Model X self-driving features. The videos from this interview have been on YouTube for seven years. However, in 2023, the videos resurfaced when a man died after his Tesla crashed while in self-driving mode, and the man's family and lawyers cited those 2016 claims. Musk's lawyers tried to deny their authenticity[7].

Being such a public figure, Musk is indeed the subject of deepfakes, which is what his lawyers tried to argue in this instance. The video in question was, of course, real, and the court did not buy his lawyers' claims. However, the episode highlights how, in the age of deepfakes, people can not only fabricate reality but also deny it.

"As people become more aware of how easy it is to fake audio and video, bad actors can weaponize that scepticism"[7]. Especially for public figures, deepfakes offer a layer of protection and immunity behind which they can hide and avoid taking ownership of reality. Even for non-public figures, instances like this have appeared in court before and continue to become more common.

Deepfakes and International Regulation

Recent regulations targeting deepfakes have emerged globally, with notable initiatives like California's disclosure requirement for AI-generated political ads[9] and the UK's Online Safety Act[10], which mandates that social media platforms crack down on harmful content, explicitly including deepfakes. That said, most current deepfake regulation targets deepfakes created with pornographic intent rather than those used to circumvent justice systems.

While these regulations offer frameworks for accountability and deterrence, their effectiveness hinges on enforcement, resource allocation, skill, and adaptability.

Furthermore, although the need to regulate deepfakes will only grow across many applications, governments and policymakers will have to consider the potential for negative unintended consequences, including impacts on freedom of speech and expression. Vaguely worded laws could lead to censorship or self-censorship, reducing participation in public dialogue and harming democracy.

Balancing the objective of combating deceptive deepfake media with safeguarding human rights will remain a nuanced challenge, underscoring the complexities of the ethical considerations surrounding deepfakes, society, and public trust.

References

[1] A. Guess, B. Nyhan, and J. Reifler, Exposure to untrustworthy websites in the 2016 U.S. election (2020), National Library of Medicine

[2] N. Barney, deepfake AI (deep fake) (2023), TechTarget

[3] R. Curry, AI deepfakes are poised to enter court proceedings at time of low trust in legal system (2024), CNBC

[4] Audio Forensic Expert, Authentication of Digital Audio Recordings (2014), Audio Forensic Expert

[5] BlueVoyant, Understanding Digital Forensics: Process, Techniques, and Tools (n.d.), BlueVoyant

[6] Champlain College Online, Digital Forensics and the Chain of Custody: How Is Electronic Evidence Collected and Safeguarded? (2024), Champlain College Online

[7] S. Bond, People are trying to claim real videos are deepfakes. The courts are not amused (2023), NPR

[8] P. Ryan, 'Deepfake' audio evidence used in UK court to discredit Dubai dad (2020), The National

[9] T. Wu, California Looks to Boost Deepfake Protections Before Elections (2023), Bloomberg

[10] Ministry of Justice, New laws to better protect victims from abuse of intimate images (2022), GOV.UK


Published via Towards AI
