Seth P. Berman is a partner and leads the privacy and data security practice at Nutter McClennen & Fish. Views are the author’s own.
Government is playing catch-up when it comes to artificial intelligence, but even in the absence of AI-specific rules, organizations face a legal minefield as they deploy AI in their workflows.
Expect regulators and litigators to use existing laws to pursue what they see as corporate wrongdoing when it comes to AI. This is especially the case in heavily regulated industries such as finance and health care. But even outside these industries, consumer protection and privacy regulations will provide ammunition to those who believe a company has misused AI.
If an AI chatbot gives customers inaccurate information about a company’s products, for example, it won’t be a defense to say it was the AI and not a person who provided the bad information.
Similarly, if an organization’s use of AI invades someone’s privacy or violates anti-discrimination laws, the organization can expect to be held liable, even if none of its employees was aware of what the AI was doing.
Indeed, this is what we’ve long seen with new technology. To this day, the Federal Trade Commission and the Securities and Exchange Commission bring computer hacking-related cases against companies based on regulatory authority dating back to the early part of the last century.
In the case of AI, the legal playing field is crowded with state privacy laws, the EU’s General Data Protection Regulation and even Biden Administration executive orders. All of these legal efforts create guardrails around AI implementation.
And you can’t expect the companies that create the underlying technology (OpenAI, Anthropic, Google, Microsoft and others, in the case of generative AI) to shoulder the liability. If AI causes a violation, the company that implemented it will be on the hook for legal consequences at least as much as the company that created the technology.
Risk management steps
So, what can organizations do to protect themselves? Here are a few ideas.
- First, organizations can create an internal governance structure to review the legal implications of a new AI implementation. Simply put, decisions about AI implementation can’t be solely up to developers and product managers; there must be a defined role for compliance and legal in assessing these tools before they are implemented and in monitoring them after they are in production.
- Second, companies should consider how their use of AI implicates existing laws and regulations. For example, businesses may face liability for giving customers inaccurate or misleading information through automated chats or AI-generated marketing content, for generating false or offensive content through interactions with ChatGPT or other generative AI models, or for exposing customers’ and clients’ sensitive information to security risks by sharing it with an AI engine. They may also be liable if the AI produces content that is copyrighted by someone else. Which laws are implicated depends as much on the specific proposed use of the AI as on the AI’s underlying capabilities and limitations.
- Third, companies should consider how evolving AI regulations may impact their AI implementation and create a process to ensure that they are able to respond to the rapidly changing technical and legal landscape.
While these precautions may not necessarily prevent a lawsuit over AI-related misconduct, they will help create a favorable fact pattern showing that the company diligently sought to avoid AI-related problems and thereby acted reasonably (and not negligently).
History suggests we’ll look back at 2024 as a Wild West era for AI in the business world. But that same history suggests that Wild West legal eras are usually followed by periods of regulation by litigation, in which regulators use existing laws to sue companies and force best practices on the market. As a result, organizations must make sure they don’t let the technology of tomorrow get them in trouble with the laws of today.