General counsel tasked with developing generative AI policies for their organizations should start by working to identify “must-avoid outcomes,” according to recent guidance from Gartner.
These potential outcomes can be determined through risk-tolerance discussions with the organization’s senior management about the possible applications of generative AI models within the business.
Additionally, legal chiefs should discuss with other leaders which outcomes carry acceptable risks given the anticipated benefits of generative AI, such as improved efficiency.
“Guidance on using generative AI requires core components to minimize risks while also providing opportunities for employees to experiment with and use applications as they evolve,” said Laura Cohn, senior principal, research at Gartner, in a prepared statement.
Use cases and safeguards
Gartner recommends legal leaders organize a list of possible generative AI use cases by perceived risk. This includes determining the likelihood and severity of the risks.
Common risks associated with generative AI usage include cybersecurity, data privacy and IP issues.
General counsel can help determine what types of controls to implement for various use cases based on their risk levels.
For lower-risk use cases, such as translating regulations written in a foreign language, GCs could require human review.
Meanwhile, for higher-risk use cases, such as producing content for customer consumption, approval from an AI committee or manager could be required.
“General counsel should not be overly restrictive when crafting policy,” Cohn said. “Leaders can consider defining low risk, acceptable use cases directly into policy, as well as employee obligations and restrictions on certain uses, to provide more clarity and reduce the risk of misuse.”
Additionally, outright AI prohibition could be considered for the highest-risk cases, Gartner suggests.
GCs also have an important role to play in determining who has the authority to make decisions regarding various generative AI use cases.
These determinations should be made in tandem with executive leadership.
Once such decisions are made, in-house legal teams must work with other business units to review their AI duties and risk ownership.
This approach can include highlighting which department or departments employees should turn to when seeking the go-ahead for AI use cases requiring approval.
“Document the enterprise unit that governs the use of AI so that employees know to whom they should reach out with questions,” Cohn said.
Gartner recommends organizations have a policy of disclosing the use and monitoring of generative AI technologies to both internal and external stakeholders.
General counsel are advised to help companies decide what to disclose and to whom.
These disclosures could include labeling text that is machine generated and placing watermarks in AI-generated images to the extent possible.
“Consumers want to know if companies are using generative AI applications to craft corporate messages, whether the information appears on a public website, social channel, or app,” Cohn said.
Overall, she said generative AI policies and guardrails “will better prepare enterprises for possible future legal requirements.”