As political and corporate leaders grapple with governing artificial intelligence, lawyers are much less confident than their business colleagues about the effectiveness of current AI policy, according to a global survey of senior executives and legal counsel.
Only 40% of legal counsel are “highly confident” in their company’s ability to comply with current AI regulation and guidance, citing a lack of internal training and inadequate data management and security protocols as the primary reasons, according to a recent Global AI Regulation Report from Berkeley Research Group. Intellectual property (IP) and misinformation/deepfakes were cited as the least effective areas of current policy.
“More and more, we’re seeing a gap between what outside counsel recommends and what executives are open to when it comes to AI policies and procedures,” Amy Worley, a BRG managing director and associate general counsel, said in the report. “Good advisers can say yes, there is a lot of regulatory uncertainty, and where there is uncertainty there is also value.”
Another policy gap emerged along geographic lines: executives and lawyers in North America were far less confident in current AI policies than their counterparts in Asia or Europe. Nearly one-fifth (18%) of North Americans said current regulation is “not effective.” Overall, only about one-third of respondents considered current policies “very effective.”
While more than half (57%) expect effective AI policy within three years, only 36% feel strongly that future regulation will provide the necessary guardrails, the survey found. Lawyers were most concerned that policy be enforceable, while executives focused on it being adaptable/flexible and transparent/explainable, BRG said.
The U.S. has adopted a “decentralized innovation-friendly approach” to AI regulation, a contrast with the frameworks becoming common across Asia and the European Union, the report said. North America also has a robust private-litigation environment that is less common in other regions, BRG noted.
However, Worley said AI may shift that traditional paradigm in North America. “There is a sense that legislators and regulators missed the window of influence with big tech, and they are trying not to do that with AI,” Worley said. “This means regulatory efforts in North America are moving faster than usual.”
BRG analysts also expect that U.S. regulators will use “sector-specific rules and guardrails” in lieu of overarching federal laws. The report noted that the Federal Trade Commission adopted a resolution last year authorizing the use of compulsory process in nonpublic investigations involving products and services that leverage AI. The Food and Drug Administration released updated guidance earlier this year related to the development of medical products.
The Biden administration also issued an executive order in 2023 outlining new AI safety standards.
Lawyers are also concerned with risk, primarily in the areas of noncompliance and irresponsible AI use, according to the report, which also cited cybersecurity threats from AI. The EU’s landmark AI Act, approved in March, can impose fines of as much as 7% of annual global revenue.
“There is very little AI case law but that is about to change, and it will happen quickly,” Richard Finkelman, a BRG managing director, said in the report. Disputes over AI technology will generate new case law, while failures to properly identify and handle AI data will set new precedent, he said.
The inaugural BRG survey was conducted in March-April and included 214 corporate leaders: 43% of respondents were in the C-suite, 33% at the senior executive or director level, 13% in-house counsel and 11% external attorneys. Participants were split evenly among North America, Europe-Middle East-Africa and Asia-Pacific.
“Businesses that are looking to develop AI want it to be properly regulated,” said Byron Phillips, a Hogan Lovells partner based in Hong Kong. “They want it to operate in an environment where the human endeavor is advanced through AI, where safe innovation is the goal. I don’t worry that innovation will be stunted by ethics and governance; ethics and governance free them up to innovate within respectable parameters.”