If artificial intelligence makes a material contribution to something you’re seeking to patent, you must disclose that. If you use AI simply to fill out documents as part of a submission, you don’t need to disclose that, but you do need to make sure anything AI contributes to the submission is accurate, the U.S. Patent and Trademark Office says in guidance it released last week.
“Given the potential for generative AI systems to omit, misstate, or even ‘hallucinate’ or ‘confabulate’ information, the party or parties presenting the paper must ensure that all statements in the paper are true to their own knowledge and made based on information that is believed to be true,” the agency says.
The guidance is the latest on AI that the agency has released to align its policies with the Biden administration’s effort to get out ahead of the AI rush, fueled mainly by the use of generative AI. The agency’s earlier guidance was released in February.
Little in the guidance will be surprising to legal professionals working in the patent space, but it does clarify the responsibility companies have when submitting material to the agency.
“There is no prohibition against using these [AI] computer tools in drafting documents for submission to the USPTO,” the agency says. “Nor is there a general obligation to disclose to the USPTO the use of such tools.”
But because companies must ensure the accuracy of any signed documents they submit, they must, as part of the agency’s broader duties of candor and good faith, double-check every part of the submission that’s AI-generated.
“The party or parties should … perform an inquiry reasonable under the circumstances confirming all facts presented in the paper have or are likely to have evidentiary support and confirming the accuracy of all citations to case law and other references,” the guidance says. “This review must also ensure that all arguments and legal contentions are warranted by existing law, a nonfrivolous argument for the extension of existing law, or the establishment of new law.”
The agency’s requirement that AI use be disclosed when it is material to the patentability of something is intended in part to prevent inventors from getting credit when they leave the hard work to AI.
“Material information could include evidence that a named inventor did not significantly contribute to the invention because the person's purported contributions were made by an AI system,” the guidance says. “This could occur where an AI system assists in the drafting of the patent application and introduces alternative embodiments which the inventor(s) did not conceive and applicant seeks to patent.”
“Alternative embodiments” refers to variations of an invention that achieve the same results, a tactic that can help protect the patent from challenges. The agency is saying that, if the AI tool identifies these variations, that counts as a material contribution to the application and must be disclosed.
Other steps required under the guidance are what legal professionals would expect when it comes to AI: confidential information must be protected from leaking into the broader data corpus that large language models rely on for training, and companies must take steps to protect data if the AI platform is hosted outside the United States and could pose a security risk.
“Practitioners must be mindful of the possibility that AI tools may utilize servers located outside the United States, raising the likelihood that any data entered into such tools may be exported outside of the United States, potentially in violation of existing export administration and national security regulations or secrecy orders,” the guidance says.
Look for more AI guidance to come. “Today’s notice is part of our work shaping AI policy,” USPTO Director Kathi Vidal said in a statement announcing the guidance April 10. “We will continue to listen to stakeholders on this policy and on all our measures to use AI responsibly.”