In this bold new era of Large Language Model (LLM) generative pseudo-AI, people are looking for the next way to get rich quick or hack the usual limits with this new technology. LLMs can be prompted to generate free and unique images, plausibly realistic text in various styles, and even carry on conversations — so why not have it come up with patentable inventions, then patent them?
Well… there would be some problems.
Examining this topic as a professional patent law firm, we see two main questions here, at least:
- Can anything an LLM engine generates be novel, nonobvious, inventive, or useful?
- If so, how would that invention be handled by the law, and how should it be?
Regarding the first question, the adage about monkeys with typewriters comes to mind: even a random sentence generator might come up with ideas that a human could run with and develop into something patentable (e.g. “an ice cream machine being used as a carburetor”), let alone a more sophisticated, less random algorithm. What seems likely, as the technology currently stands, is that a prompted LLM would produce a response that seems coherent and maybe viable but is more likely to be gibberish; the human would still have to follow up the abstract idea with analysis and proof of concept. In that case, the LLM’s suggestion would be little more than a “creative writing prompt” in the field of coming up with new inventions: maybe helpful, but hardly revolutionary.
With that said, though, technology improves, and LLMs do have the strength of being very good at synthesis: pulling from data across a broad range of disciplines and making connections humans might never think of otherwise (“this obscure ice cream machine found on the internet could actually be used for an obscure carburetor found in a different corner of the internet”). Making such connections is generally considered patentable subject matter: inventions can be patentable when an idea taken from one area of technology is creatively applied to another. This dynamic seems especially fruitful for LLMs, which “know” vast areas of knowledge and rely heavily on contextualization. Yet clever prompting could quite possibly pull out old expressions that the LLM is merely guessing will seem credible to the prompter; the model’s fundamental drive to seem plausible could amount to a redirection of established knowledge, coaxed out by clever, decontextualized prompting. A human’s realization that something can be repurposed in a novel way is arguably patentable when the repurposing is sufficiently non-obvious. But what is non-obvious in the view of an LLM, or as coaxed from an LLM? Will mastery of prompting be accepted as legitimate inventive labor?
We may also have to accept, to some degree, that, like so many things once made only by hand, one at a time, by a skilled artisan, the era of intellectual property as a manufacturable commodity may be approaching, changing our very understanding of the concept in ways beyond prediction. If machines truly could someday improve our lives by coming up with new and useful inventions faster than a human could, and the only reason to forbid it is the legal and economic question of who would own that work, then the real concern is whether our society is ready for this technology and would use it appropriately.
This brings us to the second question: What are the legal implications of inventions by LLMs and AI, and how should such inventions be regulated or credited? We see several issues of concern that may need addressing soon, such as scope of prior art, inventorship, and mass production.
It’s possible that use of LLMs could expand the definition of what is considered Prior Art in patent examination. One of the key criteria for patentability in the United States is that the claimed invention is “novel and nonobvious”: if the patent Examiner can find something already invented that is close enough to the examined application, that may be a bar to registration. The advent of the internet, and the expansion of internet search capability, have already expanded the scope of what can be considered Prior Art, and LLM technology, with its possibilities of synthesizing everything together, threatens to expand the scope and definition of Prior Art further.
Inventorship status is slippery, and the definition of “who counts as an inventor” has changed over time. Currently, there is a trend toward larger groups of people being credited as co-inventors on a patent application, such as research teams and academic departments. Basic US law requires that an inventor contribute to the conception of the invention. That determination can be difficult: memories become confused over time, and one or more possible co-inventors may recall having had, even if never expressing, similar ideas about this or that novel element. The matter becomes murkier still when attempting to distinguish contributions of inventive matter from the fleshing out of enabling detail; the law is clear that enabling an invention is not inventing. These distinctions may also need clarification when there are many participants on a product development or research team. When dealing with inventions developed in group work environments, a wise patent attorney attempts to clarify inventorship early in the application drafting process. Few people will be dishonest in these matters, and it is very helpful to ask all members of a team to declare, early in the drafting process, whether or not they contributed to the inventive matter. How will patent law view allegedly inventive matter that was harvested from the prompted response of a large language model?
Permitting inventorship status for humans who prompt, operate, or facilitate LLMs also carries the potential danger of the USPTO being swamped with “spam applications”: nonsense generated by an LLM that nevertheless has to be entered and examined before it can be called nonsense. A particularly bad embodiment of this effect might be “patent farming”: imagine a large server rack churning out LLM-generated patent applications day and night, either with minimal input from one or more low-paid human operators or fully autonomously from randomly generated prompts, converting processing power into money far more efficiently than bitcoin mining.
So, in summary, we hold that lines will need to be drawn in the near future regarding inventorship that utilizes LLMs and other AI technology, and intellectual property law may soon need protection from being hacked for easy profit. Patents are protected by law because doing so serves the public good: inventors don’t have to be secretive in order to make a living by their creativity, and the publicly available patent database is both secure storage of our collective technical knowledge and a resource for the inventors of tomorrow. Intellectual property protection exists to incentivize creativity, not to make individual inventors rich, and we assert that it would be better if it stayed that way.
By Patrick Reilly, Esq. and Clio Mykland, Technical Writer