Artificial Inventors again

While there has been a lull in news stories about DABUS and the Artificial Inventor project, the appeal hearing at the Federal Court of Australia last month notwithstanding (see here for a summary), the debate has continued on social media. While we wait for the decision of the Federal Court of Australia, expected towards the end of the year, for the written decision of the EPO refusing the DABUS appeal, and to see whether the Supreme Court of England will grant permission to appeal further, here is a little thought-experiment to illustrate how I see the current debate sitting with technological reality.
 
First, here is what I gather to be the state of technology, which seems uncontroversial amongst the machine learning community. There is no such thing as machine consciousness (yet), and machines, even those programmed with AI technology, do not act autonomously. AI technologies, in particular deep learning, are good at generating answers to questions that involve vast numbers of interrelated variables. This happens by ingesting high-dimensional data into a system that produces outputs in accordance with a huge number of model parameters. The parameters are tweaked to optimise an objective that measures whether an output is the correct answer ("it is a cat") or whether it causes the system to meet a specified goal ("you win the game of chess"). This tweaking is called learning. A more mathematical way of looking at this is that good machine learning models perform a highly efficient search in a vast space of parameters that capture something about the data, so as to optimise the objective. To achieve this, machine learning engineers need to work hard to design the system architecture, how data is represented, how goals are set, and so on. All of this design activity is specific to the search space at hand. For example, one such search space might be the space of all texts completing a prompt, of all computer programs solving a stated problem, or of all the possible ways an amino acid sequence might fold itself into a protein. In short, machine learning systems do not build themselves, and they do not solve problems but search high-dimensional spaces for answers that optimise an objective.
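To make the "tweaking parameters to optimise an objective" point concrete, here is a deliberately toy sketch in Python: fitting a one-parameter model y = w·x by gradient descent. The data, learning rate and step count are invented for illustration; real deep learning systems do the same thing with billions of parameters, but the principle is identical.

```python
# Toy illustration of "learning": repeatedly tweak a parameter w so
# that an objective (here, mean squared error) gets smaller.

def objective(w, data):
    # Measures how wrong the model's outputs are on the data.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def learn(data, steps=200, lr=0.05):
    w = 0.0  # start somewhere in the parameter search space
    for _ in range(steps):
        # Gradient of the objective with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # the "tweak": move w to reduce the objective
    return w

data = [(1, 2.1), (2, 3.9), (3, 6.2)]  # roughly y = 2x
w = learn(data)  # ends up close to 2
```

The point of the sketch is that the system does nothing but follow a search procedure set up by its designer; the choice of model, objective and data, i.e. the definition of the search space, is entirely human work.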
 
Now to the thought-experiment, using second medical use inventions (or drug repurposing to you and me) as an example. Consider a hypothetical researcher, let’s call her Alice, who runs a screening program to discover a cure for cancer. She takes a free public database of organic compounds, picks compounds at random and tries them one by one on a mouse cancer model. There are a lot of compounds, some of them benign, some toxic, and so over the years, there are a lot of dead mice. After many years, Alice is surprised to come to the lab one morning and find that the mouse fed compound #123456789 is clear of cancer. Alice writes a patent application for the manufacture of a medicament for cancer treatment using compound #123456789 and files it at the patent office.
 
So, who is the inventor? It seems uncontroversial that Alice is the inventor, having made the discovery. Whether or not randomly picking compounds and testing them fits with the plain meaning of devising an (inventive) concept, Alice would be considered an inventor and deviser of an inventive concept. It would appear fanciful to propose that the animal house providing the mice is the inventor. Or that, had Alice used a high-throughput assay system to do the screening in vitro, this system would be the inventor.
 
Now along comes Berta. She sets out to do the same thing but knows about machine learning and has found an online machine learning system for predicting the cancer activity of organic compounds. The system has been made freely available online by a charitable fund. Berta takes a database like the one Alice used and runs it through the system. In an overnight computational run, the system predicts that compound #123456789999 has the highest likelihood of curing cancer. She tries this compound in a mouse model, which is cured of all cancer overnight. Berta files a patent application for the manufacture of a medicament for the treatment of cancer, this time using compound #123456789999.
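The two workflows can be sketched side by side. The compound database, the mouse assay and the activity-prediction model below are all hypothetical stand-ins invented for illustration; the point is only that Berta swaps random picking for a model-ranked search of the same space.

```python
import random

def alice_screen(compounds, assay):
    # Alice: pick compounds in random order and test each one in the
    # (slow, mouse-consuming) assay until one works.
    for compound in random.sample(compounds, len(compounds)):
        if assay(compound):
            return compound  # the surprising morning in the lab
    return None

def berta_screen(compounds, model, assay):
    # Berta: let a trained model rank every compound by predicted
    # cancer activity overnight, then test only the top candidate.
    best = max(compounds, key=model)
    return best if assay(best) else None
```

In both functions the human supplies the database, the test and the goal; the tool, whether a random number generator or a prediction model, merely orders the search.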
 
Assuming that Alice would indeed be the inventor of treating cancer using compound #123456789, there seems to be no reason to doubt that Berta is the inventor of treating cancer using compound #123456789999. There appears to be no reason why Alice should be an inventor but not Berta, just because Berta has used a modern tool to search the vast space of organic compounds for one having cancer-treating activity. Nor does it appear any more reasonable to name the machine learning system as inventor than Alice's animal house or high-throughput assay system. Fundamentally, Berta has done no more or less than Alice, except that she has used state-of-the-art technology to search organic compounds for cancer-curing activity in an automated, less manual way.
 
The above (and the current debate) is limited to the question of inventorship. Whether a patent should actually be granted, and whether the invention actually involves an inventive step (is non-obvious), is a different question. For that, suffice it to say here that this is an objective question, and so it would have to be answered in light of what formed part of the notional skilled person's mental toolkit when Alice and Berta filed their respective applications. The answer will always be fact-specific, but I already look forward to that debate in the future.
 
Paraphrasing a recent comment I came across on LinkedIn, if you picture innovation as an engine, AI acts like a turbocharger, but this does not change the fact that the engine itself remains human. If my thinking on the above holds water, it would seem that the question of AI inventorship, at least now and probably for some time to come, is an entirely hypothetical one. Is now the time to discuss it? Do leave a comment if you have some views on that!
 
* My thanks are due to Alyssa Telfer of Phillips Ormonde Fitzpatrick for bringing the Federal Court hearing and the blog post about it to my attention and to Tim Harris and Will James of Osborne Clarke for their input on my thought-experiment. Thanks also to Conrad Fritsche for the brilliant motor/turbocharger metaphor.

Follow Alexander Korenberg’s AI musings
