Dive Brief:
- 88% of in-house legal professionals are aware of generative AI in software tools, but only 36% have actually used it, according to a survey by the Association of Corporate Counsel - New Jersey and law firm Lowenstein Sandler.
- Almost half say they’re not aware of anyone in the legal department using it, and about a fifth say it’s possible someone in the department is. Another fifth say they’ve either built it into their work or are exploring how to use it.
- Legal departments will likely start using it more once they get a handle on data and other risks it creates and write policies around its use, says Mary Hildebrand of Lowenstein Sandler. “Prudent policies and thorough training … could prove to be the deciding factor for entities to successfully engage with AI,” she said.
Dive Insight:
For the most part, members of in-house legal teams haven’t seen signs that others in the broader organization are using genAI; where they have, it’s mostly the marketing (22%), research (20%) and IT (19%) teams.
Nor are legal teams getting a lot of questions about it from others, but when they do, it’s mostly from IT (16%), research (16%) and the company’s board members (13%).
The questions they field are mostly about legal risks (67%), ethics (57%), use restrictions (52%) and any policies the organization has on it (52%). A handful say they haven’t gotten any questions at all.
Those on the legal team who’ve used it themselves mostly say they’ve gotten some benefit out of it (42%), but almost as many say it’s too soon to tell (35%). A small percentage say it didn’t help at all, and about a quarter say it helped a lot.
Associate counsel are a notable exception: not a single respondent who identified as associate counsel said it helped a lot. That response came mainly from legal team leaders (30%), such as general counsel and chief legal officers.
Legal team members’ main concern over its use is accuracy; almost half rank it as their top worry, with privacy next. Despite heavy news coverage, risks over copyright infringement and plagiarism aren’t high on their list of concerns right now.
Human review is the main way they envision reducing risk; about half say so. Having the tools draw on data that exclude personally identifiable information is seen as the next best safeguard.
Half say they have no genAI training planned; the other half say they’ve either had training or have it in the works.