
Regulatory Frameworks Recognize AI’s Value in High-Risk Industries Facing Compliance Concerns

The current state of artificial intelligence (AI) regulation looks too uncertain for patent attorneys to consider using AI-powered tools in their daily practice. However, government action in several countries, and the negotiations shaping regulatory frameworks for AI, strongly suggest that the legal industry will adopt AI-powered tools once it gains some clarity on the draft legislation and executive orders targeting this sector of the innovation economy.

National Governments Seek Clarity on Use of AI in High-Risk Industries

In the United States and the European Union, sweeping executive and legislative actions have painted consumer safety and data privacy concerns with very broad strokes. President Joe Biden’s executive order this past October demanded government action on developing standards around privacy-preserving techniques and addressing safety concerns related to critical infrastructure. Lost in the discussion, however, is President Biden’s firm commitment to catalyzing AI research and promoting a competitive ecosystem for AI development to establish U.S. leadership in AI innovation.

The European Union is similarly working on a broad regulatory framework, and the legal industry is at the center of some difficult negotiations. Recent reports indicate that the EU’s discussions on the AI Act have stalled over disagreements about the use of artificial intelligence by police and other law enforcement officials. The AI Act is being drafted specifically to regulate high-risk applications of AI, including those involving the operation of law, in the hopes of reducing the capacity of AI systems to cause harm.

Regulatory Clarity Often Uncovers a Simple Path Toward AI-Powered Value

AI Act negotiations in the EU also show that proactive self-regulation by AI companies will be looked upon very favorably while AI rules are being written. In late November, news reports indicated that several major EU member states were seeking fewer obligations for the training of foundation models, the massive models underpinning generative AI platforms like those from OpenAI. Those proposals focus on adherence to codes of conduct, much like those under which legal professionals operate daily, with penalties available only after several unchecked violations.

Other countries are deciding that artificial intelligence platforms can operate within existing legal frameworks, even in the face of strong opposition from the intellectual property world. In January 2023, Israel’s Ministry of Justice issued a legal opinion finding that the use of copyrighted content to train machine learning models falls within the scope of the fair use doctrine. Commentators have noted that Israel’s fair use provisions are modeled on the statutory language of the United States’ Copyright Act, which could presage legal opinions reducing liability concerns for AI models trained on expressive works.

What is the Capacity to Cause Harm in the Practice of Patent Law?

The regulatory environment surrounding the legal profession is immense, and ethical concerns cannot be eliminated by fair use arguments. Patent attorneys must also follow codes of conduct enforced by patent offices around the world, in addition to ethical rules on client representation. Client confidentiality obligations raise further questions about inadvertent disclosures, which in the patent world can invalidate patent claims of critical value to a small business.

AI developers who want to partner with the legal industry are going to have to address these concerns themselves. Some AI companies are already taking cues from government regulators and developing internal data privacy and cybersecurity practices that adhere to well-respected industry standards. Many of these same standards helped usher the legal industry toward greater adoption of cloud-based docket management and software-powered analytics tools.

For patent agents and attorneys, AI-powered platforms will also have to demonstrate their ability to answer questions specific to the operation of patent law. Obviousness, novelty, and written description requirements are easy to talk about in the abstract. However, the conversations happening at the highest levels of government show that even high-risk fields like patent law can at least begin conversations with companies that can demonstrate their seriousness in addressing the regulatory concerns faced by legal professionals.