The last year has seen an explosion of interest in, and applications of, generative AI; use cases run the gamut from ad generation to social media writing and research. In fact, a Fishbowl survey of working professionals from early 2023 showed that 43% had already used AI tools like ChatGPT to complete tasks.
Without a doubt, many of us are using tools like ChatGPT for weeknight meal ideas and assistance writing difficult emails.
But in the context of high-stakes litigation and investigations, the simple novelty of a shiny new AI toy isn’t enough to coax risk-averse legal teams to dive head-first into new workflows.
The Risk of Not-So-Smart AI in Legal
The process of moving data from the onset of a lawsuit to its ultimate presentation in a courtroom, known as e-discovery, is a critical period in any legal matter. This is when lawyers, associates, in-house stakeholders, and opposing parties are able to dig into potential evidence—when they’ll uncover the insights that will make or break their strategy for winning a case.
In large matters, this might mean parsing through millions of pages of information to find the thread of a conversation between two individuals that kicked off the timeline of the events under examination.
If that sounds taxing, time-consuming, and expensive, that’s because it is. e-Discovery is an $11 billion global industry in 2023, according to Fortune Business Insights. Legal teams are always under pressure to get more done faster in an effort to minimize costs and get down to the real business of legal strategizing.
Rushing isn’t a viable option when you’re searching for keywords and key phrases among billions of lines of text. That’s where AI can help: by using computational power to surface relevant materials sooner and get useful insights into attorneys’ hands faster.
However, given these high stakes and narrow margins for error, the accuracy, defensibility, and reliability of that AI is subject to intense professional and judicial scrutiny. And for good reason: anything less than the thoughtful, intentional development of specifically fit-for-purpose AI applications in this realm can contribute to the loss of millions of dollars, the livelihoods of countless individuals, and deeply damaging miscarriages of justice.
The Evolving Standards of an Evolving User Base
The risks are real, they’re expensive, and they’re stressful. But the rapid innovation of AI applications for the legal industry has progressed anyway, and the lack of guidance and regulation on its development has left some open questions for this software category’s creators and customers alike.
In an effort to provide some such guidance, the recent “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” underlines several key considerations for developers of AI technologies across sectors—and those in the legal world should take note. The directive emphasizes the importance of thorough safety testing, intentional data privacy, transparency around AI-generated work product, and mindfulness around discrete use cases.
Legal tech companies can hit almost all of those marks by hiring and developing the right talent and technology in-house. But at least one—the use cases—simply cannot be discerned as accurately without input from users in the field. These real-world professionals, who will ultimately benefit from the tools under development, must have input on how those tools are developed.
It’s a symbiotic relationship: The software users will enjoy a more purpose-built piece of software that better suits their needs, workflows, and restrictions. The software developers will invest their time and energy into more market-ready products that are far more likely to land well with their user base.
Consumer-facing brands have already leaned hard into the value of customer experience as simply good business. With the potential benefits and risks of AI applications in B2B settings standing so high, software developers in this realm greatly benefit from doing the same.
Relativity—which makes software to help users organize data, discover the truth, and act on it during litigation and investigations—has taken this approach in its development of AI applications for its SaaS platform, RelativityOne. Its newest product, Relativity aiR for Review, is an excellent study of this collaborative workflow. Read more about it here.