Chuck Kellner is strategic discovery advisor at Everlaw. Views are the author’s own.
AI-generated video and audio deepfakes give bad actors the ability to create convincing propaganda and disinformation. Scarlett Johansson's complaint earlier this year against OpenAI signals the kinds of copyright, personality-rights and deepfake issues we'll have to deal with. Will an avalanche of deepfake-related litigation overwhelm us?
Navigating the deepfake landscape
Realistic deepfakes can be created in minutes and submitted as evidence through discovery or deposition. Moreover, companies risk both reputational and financial losses over deepfakes. The financial services industry is already under attack, with deepfake incidents in the fintech sector increasing 700% in 2023 over the previous year. Meanwhile, the appearance of deepfake evidence in litigation is all but certain, not least because the tools used to detect deepfakes lag behind those used to create them.
The good news is that lawyers will not need to scrutinize, or outsource the examination of, every piece of evidence for deepfakes. Instead, in-house legal departments can save time by following a series of steps to filter for suspect evidence that is germane to the impending deposition or trial.
Is it too good, or too weird, to be true?
Lawyerly intuition can provide an effective first-level screen for deepfake evidence: is it simply too good, or too weird, to be true? Metaphorically, is there rubbish in the bin, and is that what you're smelling? To prepare for this contingency, legal departments can establish procedures to verify whether an audio or video recording is a deepfake.
First, identify whether the recording in question was produced in discovery or proffered as evidence. Then focus on tools and expertise that should be available in any litigation or investigation, and on your or your opponent's obligations to ensure that the original source data is preserved for forensic inspection.
Launch a progressive investigation
To help surface content that may be suspect, legal teams should make sure that all audio and video evidence is searchable. Machine transcription offers an early look that allows straightforward evaluation for content relevance, including that which is too good or too weird to be true.
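By way of illustration only, a minimal Python sketch along the following lines, assuming the open-source Whisper speech-to-text package is installed and using hypothetical file names and keywords, could turn a folder of recordings into searchable transcripts and flag potentially relevant hits:

```python
# Illustrative sketch: transcribe recordings so their contents can be keyword-searched.
# Assumes the open-source "openai-whisper" package and ffmpeg are installed;
# file paths and keywords are hypothetical placeholders.
from pathlib import Path

import whisper

model = whisper.load_model("base")  # a larger model trades speed for accuracy
keywords = ["wire transfer", "approve the payment"]  # terms germane to the matter

for media_file in Path("evidence").glob("*.mp4"):
    result = model.transcribe(str(media_file))
    transcript = result["text"]
    hits = [kw for kw in keywords if kw.lower() in transcript.lower()]
    if hits:
        print(f"{media_file.name}: possibly relevant, matched {hits}")
    # Keep the transcript alongside the recording for loading into a review platform
    media_file.with_suffix(".txt").write_text(transcript, encoding="utf-8")
```

A transcript produced this way is only a first pass at relevance; it says nothing about authenticity and is no substitute for forensic examination.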
To manage the risk of reputational damage, companies might look to automating the search and collection of their mentions on social media (think Google Social Searcher or Social-Searcher.com). Posts can be evaluated for source and accuracy, then loaded into an ediscovery application for further search and analysis.
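As a rough sketch, assuming mentions have already been exported from a monitoring tool as a JSON file (the field names and brand terms below are hypothetical), a short script can flag posts that reference the company and produce a load file for an ediscovery platform:

```python
# Illustrative sketch: filter exported social media posts for company mentions.
# Assumes a monitoring tool has exported posts to JSON; the field names
# ("posted_at", "author", "url", "text") and brand terms are hypothetical.
import csv
import json

BRAND_TERMS = ["Acme Corp", "@acmecorp"]

with open("social_export.json", encoding="utf-8") as f:
    posts = json.load(f)

mentions = [
    post for post in posts
    if any(term.lower() in post.get("text", "").lower() for term in BRAND_TERMS)
]

# Write a simple load file that an ediscovery application can ingest for review
fields = ["posted_at", "author", "url", "text"]
with open("mentions_loadfile.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=fields)
    writer.writeheader()
    for post in mentions:
        writer.writerow({field: post.get(field, "") for field in fields})
```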
If a deepfake is suspected, engage computer forensics experts. Their expertise, and the advanced analytic tools at their disposal, bring you closer to parity with the techniques used by deepfake criminals. Some of the discrepancies forensic experts look for in deepfake video, audio and images include the following (an illustrative sketch follows the list):
- Metadata detection. Files that have been manipulated into deepfakes can show discrepancies in metadata. For example, OpenAI recently added tamper-resistant metadata, along with a detection tool, that can reveal whether a photo was made with its DALL-E 3 generative AI image application. When OpenAI's Sora video tool is released to the public, videos created with it will carry similar metadata.
- Artifacts with temporal and spatial discrepancies. The heart rate in deepfake videos is often irregular, and pixels in faces can be too smooth. Advanced methods allow remote estimation of the heart rate from face videos, exposing rate fluctuations that indicate tampering. Other approaches can detect pixelation artifacts and the spatial traces of smoothing left behind when fake faces are generated.
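The real work here belongs to qualified examiners, but the flavor of two of these checks can be sketched in a few lines. The example below, assuming the Pillow and OpenCV libraries are installed and using hypothetical file names and thresholds, reads whatever camera metadata an image carries and flags sampled video frames whose overall sharpness (Laplacian variance) is suspiciously low, a rough proxy for unnatural smoothing:

```python
# Illustrative first-pass checks only; they do not establish fabrication and are
# no substitute for a forensic expert. Assumes Pillow and OpenCV are installed;
# file names and the smoothness threshold are hypothetical placeholders.
import cv2
from PIL import Image
from PIL.ExifTags import TAGS


def exif_summary(image_path):
    """Return readable EXIF tags; manipulated files often lack or contradict camera metadata."""
    exif = Image.open(image_path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}


def overly_smooth_frames(video_path, threshold=50.0, step=30):
    """Flag sampled frames with very low Laplacian variance, a crude stand-in for
    the unnatural smoothing sometimes left behind by face-generation pipelines."""
    suspects = []
    cap = cv2.VideoCapture(video_path)
    frame_index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_index % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
            if sharpness < threshold:
                suspects.append((frame_index, sharpness))
        frame_index += 1
    cap.release()
    return suspects


print(exif_summary("exhibit_photo.jpg"))
print(overly_smooth_frames("exhibit_video.mp4"))
```

A hit from a heuristic like this justifies escalation to an examiner, nothing more.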
Keep abreast of rules
As fast as deepfake detection tools are advancing, the tools to deceive remain at least one step ahead. It is therefore critical to build in extra time to evaluate exhibit lists, and to research procedures to challenge evidence prior to trial.
Rules of evidence around the products of generative AI, including deepfake video, are a topic of intense debate across the legal industry. The U.S. Judicial Conference Advisory Committee on Evidence Rules is considering proposals, including a proposed Rule 901(c) authored by retired federal judge and Duke University School of Law professor Paul Grimm and professor Maura Grossman, who teaches at the University of Waterloo and Osgoode Hall Law School in Canada.
Rule 901(c) would govern "potentially fabricated or altered electronic evidence," reading, "If a party challenging the authenticity of computer-generated or other electronic evidence demonstrates to the court that it is more likely than not either fabricated, or altered in whole or in part, the evidence is admissible only if the proponent demonstrates that its probative value outweighs its prejudicial effect on the party challenging the evidence."
Some experts believe that the Judiciary will take a wait-and-see approach, with courts using existing rules to make decisions which will then be appealed.
Expect the unexpected
Legitimate evidence has also been portrayed in court as deepfaked, such as Elon Musk's 2016 videos touting the safety of Tesla's Autopilot self-driving feature. Those videos were cited in a 2019 lawsuit filed by the family of a man killed while using Autopilot, and Musk's attorneys claimed they were deepfakes. The case was recently settled, a day before the trial was set to begin.