Jitendra Gupta is director of Ops Decision Science at Wolters Kluwer. Views are the author’s own.
It’s no secret that artificial intelligence has swiftly woven itself into everyday life. ChatGPT gained one million users in a mere five days, fast food chains are using AI to take drive-thru orders and Reddit co-founder Alexis Ohanian called the coming AI revolution “bigger than the smartphone.” While the buzz around AI has grown tremendously in recent months, AI-enabled solutions have been around far longer — including many that target corporate legal departments.
A lot has already been written about how companies can build, mature and benefit from artificial intelligence. But once a corporate legal department has implemented AI — a process that includes getting buy-in from the top and building trust from below — its work isn’t done. Instead, legal teams need to be able to ensure the AI they’ve implemented is effective. With that in mind, let’s dive into three ways to test your AI’s effectiveness.
1. Gamify adoption
Because artificial intelligence improves with more input, incentivizing adoption is key to creating effective AI. Gamifying the task can spur employees to squeeze the most out of a new tool. Framing the rollout as a friendly competition brings out people's competitive spirit, and adoption is likely to be embraced with far more enthusiasm.
For example, a legal team could test the effectiveness of its AI by giving a large set of invoices to a team with access to the tool — and a comparable set of invoices to a team without it. At first, many users may aim to beat the AI. But as time goes on, it’s likely that more people will want to be on the review team that is being supported by the technology. By gamifying the experience, in-house teams can create buzz around the tool, get people excited about trying it and ultimately spur both buy-in and adoption.
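As a rough illustration, the head-to-head test above boils down to a few metrics per team. The sketch below is hypothetical (invoice counts, hours and error figures are invented for illustration), assuming each team reports how many invoices it processed, how long it took and how many billing errors it caught:

```python
# Hypothetical comparison of an AI-assisted review team versus a manual
# review team on comparable invoice batches. All figures are invented.

def review_metrics(invoices_reviewed, hours_spent, errors_caught):
    """Summarize a team's performance on a batch of invoices."""
    return {
        "throughput_per_hour": invoices_reviewed / hours_spent,
        "errors_caught": errors_caught,
        "errors_per_100_invoices": 100 * errors_caught / invoices_reviewed,
    }

ai_team = review_metrics(invoices_reviewed=500, hours_spent=20, errors_caught=42)
manual_team = review_metrics(invoices_reviewed=500, hours_spent=50, errors_caught=35)

for name, metrics in [("AI-assisted", ai_team), ("Manual", manual_team)]:
    print(f"{name}: {metrics['throughput_per_hour']:.1f} invoices/hour, "
          f"{metrics['errors_per_100_invoices']:.1f} errors caught per 100 invoices")
```

Publishing a simple scoreboard like this after each round is one way to keep the competitive element visible to both teams.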
2. Run AI behind the scenes
Despite your best efforts, though, skepticism around AI-enabled software is likely to persist. This is particularly true because AI is probabilistic rather than deterministic: experts accustomed to software with black-and-white output tend to focus on what AI gets wrong. One way to avoid this problem is to run the AI behind the scenes first.
For instance, have AI review sets of law firm invoices that human reviewers have already gone over. Gauge the speed and accuracy of the AI-driven process versus the manual process. Chances are good that the AI will be quicker — and might even find billing errors humans did not initially notice.
This is an effective approach because it turns the tables: instead of humans checking the work of AI, AI can check the work of humans. Running AI in the background can give proof of the AI’s effectiveness, which can then be shared with the process owner. Thus, when AI is introduced to users, they’ll be able to see its value more quickly and may be more eager to jump on board.
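In practice, this shadow-mode check amounts to comparing the AI's flags against findings human reviewers already recorded. A minimal sketch, with invoice IDs and findings invented for illustration:

```python
# Hypothetical shadow-mode comparison: the AI reviews invoices humans have
# already signed off on, and we compare the two sets of flagged items.
# Invoice IDs below are invented for illustration.

human_flags = {"INV-003", "INV-017", "INV-042"}          # errors humans caught
ai_flags = {"INV-003", "INV-017", "INV-042", "INV-058"}  # errors the AI flagged

confirmed = ai_flags & human_flags   # AI agrees with the human reviewers
new_finds = ai_flags - human_flags   # candidate errors humans may have missed
missed = human_flags - ai_flags      # human findings the AI overlooked

print(f"AI confirmed {len(confirmed)} known errors, "
      f"surfaced {len(new_finds)} new candidates, missed {len(missed)}")
```

The `new_finds` set is the headline for the process owner: each item is a potential billing error that slipped past manual review, which is the kind of concrete evidence that wins over skeptics.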
3. Use process mining software
Finally, process mining software is extremely valuable for understanding how, and to what degree, an AI solution is improving efficiency. Think of it as an X-ray machine for people's workflows.
Process mining software is great for monitoring workflow efficiencies and identifying potential areas for improvement. For example, process mining can be used to ascertain how much time corporate lawyers are spending on core, operational platforms versus other solutions, like Excel. It can also be useful to see how much time it takes for legal teams to perform certain actions — reviewing and signing off on an invoice, for example.
If this seems intrusive, run it on a sample of users. The point is to identify, using clearly defined KPIs, how people's workflows look both pre- and post-implementation of the AI-enabled solution. Having data that demonstrates how and where the AI solution is improving efficiency can be useful in spurring adoption and, in turn, refining the tool.
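The pre- and post-implementation comparison could be grounded in a simple event log, the same raw material process mining tools work from. A hypothetical sketch (case IDs, activities and timestamps are all invented), measuring average cycle time from invoice receipt to sign-off:

```python
# Hypothetical KPI check on a process-mining-style event log: average cycle
# time from "invoice_received" to "invoice_approved", before vs. after an
# AI rollout. All case IDs and timestamps are invented for illustration.
from datetime import datetime

def avg_cycle_hours(event_log):
    """Average hours between the first and last event of each case."""
    cases = {}
    for case_id, _activity, ts in event_log:
        start, end = cases.get(case_id, (ts, ts))
        cases[case_id] = (min(start, ts), max(end, ts))
    durations = [(end - start).total_seconds() / 3600
                 for start, end in cases.values()]
    return sum(durations) / len(durations)

pre_rollout = [
    ("c1", "invoice_received", datetime(2023, 1, 2, 9)),
    ("c1", "invoice_approved", datetime(2023, 1, 4, 9)),   # 48h
    ("c2", "invoice_received", datetime(2023, 1, 3, 9)),
    ("c2", "invoice_approved", datetime(2023, 1, 4, 21)),  # 36h
]
post_rollout = [
    ("c3", "invoice_received", datetime(2023, 6, 1, 9)),
    ("c3", "invoice_approved", datetime(2023, 6, 1, 17)),  # 8h
    ("c4", "invoice_received", datetime(2023, 6, 2, 9)),
    ("c4", "invoice_2_approved", datetime(2023, 6, 2, 21)) if False else
    ("c4", "invoice_approved", datetime(2023, 6, 2, 21)),  # 12h
]

print(f"Avg cycle time pre-rollout:  {avg_cycle_hours(pre_rollout):.0f}h")
print(f"Avg cycle time post-rollout: {avg_cycle_hours(post_rollout):.0f}h")
```

A before/after number like this is exactly the kind of clearly defined KPI that makes the efficiency gain concrete rather than anecdotal.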
The bottom line
While there’s tremendous buzz around AI, don’t bring in an AI solution for the sake of saying you did so. Instead, be strategic and set expectations appropriately. Because AI is not deterministic, there will be a learning curve. During that time, it’s crucial to check in and assess whether the solution is trending in the direction you hoped.
Remember: there are many ways to test a new tool's effectiveness while also generating user enthusiasm and buy-in. Make it a game, measure KPIs and, above all, be honest about how things are going.