
GenAI Solutions for Patent Drafting Can Meet High Security Standards Under Self-Regulatory Frameworks

Regulatory frameworks for artificial intelligence (AI) and generative AI platforms have developed rapidly in recent months. The flurry of political debate around legislative efforts to protect consumers has created confusion in many sectors, especially among patent attorneys, whose careers depend on their ability to keep invention disclosures and other client communications confidential.

India and California Legal Reforms Focus on Self-Regulation by Generative AI Providers

Recently, policymakers in two jurisdictions important to the adoption of new technologies have recognized that generative AI providers are capable of addressing the safety risks posed by artificial intelligence. In India in particular, lawmakers appear to be walking back restrictions that would have discouraged the legal profession from considering algorithm-powered drafting solutions for legal documents, including patent applications.

In mid-March, India’s Ministry of Electronics and Information Technology (MeitY) issued an advisory rescinding a previous mandate that required generative AI platforms to obtain government approval before offering their services to consumers. AI platforms in India can now operate more freely, provided their AI-generated output is appropriately labeled for possible unreliability or other issues inherent to the output. Under the new guidance, AI providers are directed to develop consent popup mechanisms that inform consumers of these risks.

In the United States, the state of California has been a driving force in the global tech world, thanks in large part to the incredible success of companies in Silicon Valley. On March 20, the California State Senate published an amended version of SB-1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. If enacted, the bill would direct companies operating the computing clusters used to power AI platforms to create policies governing the use of those clusters to train AI models that exceed a high computational threshold. Developers of such AI models would also be required to make a positive safety determination regarding any hazardous capabilities before training a model.

Proactive AI-Powered Platforms for Patent Drafting Will Exceed Security Expectations

Both India and California are taking an approach to AI regulation that gives generative AI platforms the opportunity to exceed safety and security expectations, even before market launch. In patent drafting, internal practices at generative AI companies that enforce both data encryption and zero data retention can meet the requirements of these laws, even if the AI models themselves don’t reach the computing threshold set by California’s legislative draft. At davinci, we’ve already built many such cybersecurity measures into our patent drafting platform to ensure compliance with both government regulations and the rules of legal ethics.

While these regulatory efforts are paving the way for increased adoption of generative AI across many industries, they don’t address the biggest problem most patent attorneys face when assessing a generative AI platform. Every AI company talks about how important security is to them. However, very few make the effort to develop internal procedures that meet international standards promising the highest level of cybersecurity. Patent attorneys should always ask whether a potential AI partner might put at risk any aspect of their ability to represent applicants before patent offices or patent owners in court.

These proposed regulatory frameworks are merely the first wave of legal reforms for generative AI expected this year. According to BSA – The Software Alliance, a total of 407 AI-related bills had been proposed across 44 U.S. states as of early February of this year. Many of these bills target high-risk applications of generative AI, including any uses of these platforms that might have legal ramifications. As many of these proposed bills are passed into law, they will ultimately support the operations and business prospects of generative AI firms offering legal drafting services, provided those firms are properly proactive about addressing cybersecurity and client confidentiality concerns.