Now that the U.K. Supreme Court has handed down its decision in Thaler v The Comptroller, the long-running saga of whether Dr Thaler, creator of a machine called DABUS, could patent a concept allegedly generated by that machine has run its course in the U.K. The case raised many questions, some of which are answered by the decision and others which are not. Questions such as “can machines invent?” or “why are courts worldwide kept busy deciding on machine inventorship without evidence that machines can actually invent?”. Alas, I will not be able to provide you with answers to these questions, either, but I will indulge in some speculation on the concept of machine inventions, what it means to invent, and whether we are likely to need a change in policy. But first, the decision.
Given the proceedings at the earlier three instances (patent office, High Court, Court of Appeal) and the reasoning in these decisions, neither the outcome, nor the reasoning behind the findings, is surprising. I will therefore not delve deep into the details of the decision, which in any case is fairly brief and very clear. But here is a summary of what was held, confirming the lower instance decisions:
DABUS is not an inventor - DABUS is a machine, a machine is not a person, an inventor needs to be a person.
Dr Thaler has no right to secure a grant of a patent for anything described in the applications at issue on the factual assumptions, that is on the basis that any technical advances in the applications were generated autonomously by DABUS. There is no legal mechanism that would vest ownership of a concept created by a machine in the owner of the machine.
The UKIPO was correct to find that the applications were deemed withdrawn for failure to file a statement identifying the person or persons whom he believes to be the inventor or inventors and indicating the derivation of his right to be granted the patent, in short for failing to file a valid statement of inventorship, as required by the UK Patents Act.
Thus, the Supreme Court has affirmed the decisions of the lower courts and Dr Thaler's applications remain withdrawn for failure to file a valid statement of inventorship. The tale has run its course, at least in the U.K.
A more interesting question
A more interesting question arising from this case is: what does it mean to invent? And what does this mean for inventions that were created using a machine? Given that machines, and in particular AI, are becoming increasingly important in discovery, research and development processes, at least in the biotech and pharma fields, are we heading for a policy black hole regarding the protection of inventions in this field? Here are some brief thoughts, without any pretension of completeness or authority.
What does it mean to invent?
What does it mean to invent in terms of the policy decisions expressed in our patent law (rather than as a question of metaphysics)? The Supreme Court decision sensibly starts with the words of the Patents Act, which says that the inventor is the actual deviser of the invention. Put differently, in the words of Lord Hoffmann in the Yeda Research case in the House of Lords (the predecessor to the Supreme Court in the U.K.): the inventor is the natural person who came up with the inventive concept.
Coming up with an inventive concept can happen in many ways: thinking while contemplating a problem, making more or less directed experiments to answer a question (see here for a thought experiment along those lines), making a discovery and realising its practical uses and, more recently, using machines to extract information from large amounts of data to test hypotheses, discover trends or correlations, or predict properties in the real world.
It should not be controversial that whichever of these invention mechanisms is used, the inventor is the person who came up with the inventive concept. For example, in the case of using an AI machine to predict binding or activities of a class of molecules, this would be picking a molecule that is predicted to work well (under whichever criteria one has chosen), realising that it might work and then initiating some tests that might establish this not as a theoretical possibility but as a likely practical reality.
Inventions using AI
More generally but staying in the realm of AI, however powerful an algorithm one might be using, no machine will provide an output unprompted, or have anything approaching consciousness that would enable it to appreciate and realise that an output is an invention. While machines may automate more and more of the discovery process, they are not, and likely will not be for the foreseeable future, autonomous in the true sense of the word.
This distinction between automation and autonomy is an important one, as has been recognised by academics in the field (see here). While Dr Thaler certainly may believe that his machine is “sentient”, which would suggest notions of consciousness and perhaps autonomy, and maybe the ability to make choices and decisions of its own will, there is no shred of evidence that this is actually the case.
Even if one accepts the (as far as I am aware) unproven factual assumptions which underpin the DABUS cases worldwide, namely that the concepts in the applications were created by DABUS autonomously without human intervention, an invention would still require a natural person to realise the utility of the concept and take matters forward. In that sense, unless one endows a machine with the ability to realise the importance or utility of a concept and communicate this to a natural person who then blindly believes the machine and acts on its instructions, a human will always be in the loop of any inventive process, and there will always be an inventor.
In short, the absence of an inventor in the DABUS case arises merely from the assumed fact pattern, which is at best unproven.
Conception of an invention – recognition and appreciation
A look across the pond can be helpful here. In the U.S., the question of inventorship is under much more scrutiny than in the U.K. or elsewhere in Europe, because the consequences of getting inventorship wrong are much more significant and because the timing of an invention previously could decide patentability and who could get a patent if there were multiple independent inventors. As expressed in one line of US case law (summarised by the US patent office's Manual of Patent Examining Procedure here), conception of an invention requires both recognition and appreciation of the invention. It is not enough to happen on the subject matter of an invention by accident without appreciating what one found (“an accidental and unappreciated duplication of an invention does not defeat the patent right of one who, though later in time was the first to recognize that which constitutes the inventive subject matter”). This surely must be the same as “coming up with an inventive concept”, which is more than just happening on it. Machines do not recognise or appreciate the importance of anything – they may be extremely powerful in making predictions, but that is it.
The US case law on inventorship in some unpredictable technologies is even more instructive to consider. According to this line of case law (see link above), when there is significant uncertainty over whether what has been found actually works, conception of an invention does not occur until the invention has been shown to work (“reduced to practice”). “[I]n some unpredictable areas of chemistry and biology, there is no conception until the invention has been reduced to practice.” Given that the most advanced uses of AI in research and development are in precisely these fields, this bodes well for human inventors. What is more, given that even state-of-the-art AI technology is prone to hallucination and confabulation, and that in any case AI technology is based on predictions and likelihoods, it is arguable that the conception of an invention made using AI will never occur until a natural person has appreciated and recognised the invention as being one that can be put into practice.
A case for policy change?
Considering all this, a scenario in which an actual technical advance is made that is sufficiently important to be protected (rather than just a likely but nevertheless stochastic combination of components) appears at the moment and in the foreseeable future a theoretical hypothesis at best. There is at present no recognition or appreciation of a concept output by an AI machine as an invention without the involvement of a natural person to appreciate and recognise the invention.
There is, of course, the possibility that this may change in the future. However, to be clear, my view that a human inventor will always be involved in using AI as a tool to invent is not based on some technical limitation of current systems or the power of the predictions and outputs they can produce. It is based on the difference between automating discovery with ever more powerful algorithms and a qualitative change to true machine autonomy that would bestow on a machine the independent will to make choices, recognise and appreciate. There is no saying that this will never happen, but it does not seem to be on the cards just yet.
Thus, as it stands, the only reason that the Supreme Court found that there is no inventor in the DABUS case is, I would argue, a specific hypothetical fact pattern that was assumed but never tested. And, if that is so, there seems to be no policy imperative to change our laws for inventions made by AI without human involvement to be protectable. And in that sense, the Supreme Court decision is a welcome closure to a long-running saga, rather than the beginning of a missed opportunity, at least for the time being.