This past month the Boards of Appeal issued the (unpublished) decision in T 0755/18 (Semi-automatic answering/3M INNOVATIVE PROPERTIES). The catchwords issued with the decision suggest that this decision considers technical effects in the context of supervised learning, a type of machine learning where the system is trained using an error between a known desired output for a given input and the actual output (hence the machine is "supervised"). On the face of it, this could be highly relevant to the patentability of machine learning inventions. The full text of the decision can be found here and is worth a read as a good summary of many of the considerations relevant to patenting machine learning, but for those pressed for time, I will give you a quick overview of the decision before considering what it really means for patenting machine learning inventions.
The claimed invention
The invention the subject of the appealed decision relates to a method of generating billing codes for medical treatments from various data sources and verifying those codes. The billing codes are derived by applying a set of forward logic to first and second concept extraction components that extract concepts from a data source. A user reviews the billing codes and provides input representing a verification status of each billing code, that is, whether the code is correct or not. The method then involves applying inverse logic to identify the concept extraction components involved in generating the billing code and "applying negative reinforcement" to those components if the user input indicates that the billing code is inaccurate. This is said to help improve the accuracy of future billing codes.
In addition to this main request, 3M also filed claim sets adding, in general terms, "applying reinforcement … thereby to improve the accuracy of the billing codes", as well as reducing a reliability score of an extraction component if it is found to be inaccurate and requiring human review of extracted concepts if a concept extraction component is found not to be reliable.
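To make the claimed level of generality concrete, the procedure can be caricatured in a few lines of code. The sketch below is purely illustrative: the component names, penalty value and reliability threshold are my own assumptions and appear nowhere in the claim; it simply shows forward logic, inverse logic, "negative reinforcement" as a score reduction, and the human-review trigger of the auxiliary requests.

```python
# Illustrative sketch of the claimed administrative procedure.
# All names, numbers and thresholds are hypothetical assumptions.

RELIABILITY_THRESHOLD = 0.5  # assumed cut-off below which human review is required

# Each concept extraction component carries a reliability score.
components = {"extractor_a": 1.0, "extractor_b": 1.0}

# "Forward logic": which components contributed to which billing code.
forward_logic = {"code_123": ["extractor_a", "extractor_b"]}

def inverse_logic(billing_code):
    """Trace a billing code back to the components that produced it."""
    return forward_logic.get(billing_code, [])

def apply_negative_reinforcement(billing_code, penalty=0.2):
    """Reduce the reliability score of each contributing component."""
    for name in inverse_logic(billing_code):
        components[name] -= penalty

def process_verification(billing_code, user_says_accurate):
    """Handle one user verification and flag unreliable components."""
    if not user_says_accurate:
        apply_negative_reinforcement(billing_code)
    # Components below the threshold require human review of their concepts.
    return [n for n, score in components.items() if score < RELIABILITY_THRESHOLD]

# A user flags code_123 as inaccurate three times; both contributing
# components drop below the threshold and are flagged for review.
for _ in range(3):
    needs_review = process_verification("code_123", user_says_accurate=False)
```

Note that nothing in this sketch adapts or improves the extraction components themselves: scores are merely book-kept and a human is asked to intervene, which is the point the Board's reasoning turns on.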
The Board's decision
The Board found that the claimed subject matter of the main request differed from a general-purpose computer only in details of administrative considerations and mathematical methods that did not contribute to any technical character. Consequently, the claimed subject matter had no technical effect over and above that of a general-purpose computer and hence lacked an inventive step. The features added with the additional claim sets did not sway the Board either and did not alter the conclusion of lack of inventive step. On the assumption that the differences really amounted to no more than administrative considerations and mathematical methods that did not contribute to the technical character of the claimed subject matter, the Board's decision is uncontroversial and aligned with the now long-established case law of the EPO Boards of Appeal, for example as recently reviewed in G1/19, and with current examination practice as codified in the EPO Guidelines for Examination.
Taking a step back from the specific reasoning in the decision, the invention underlying the appealed decision relates to an administrative process (the generation of billing codes) implemented in terms of relatively abstract functional components (concept extraction components) that are somehow rated based on their contribution to the accuracy of the billing codes. This is done by means of the abstract concept of an inverse logic that, at the claimed level of generality, seems to find out which extraction components were involved in generating a billing code based on the logic used to generate that code, and by applying an abstract and initially unspecified "negative reinforcement" to any component that produced an inaccurate billing code. This could be summarised as the mere automation of an administrative procedure that scores concept extraction components based on the accuracy of the billing codes they contribute to. At this level of abstraction, it is not surprising that the Board found the claimed subject matter to lack technical character.
The additional claim sets added a general, abstract desideratum ("thereby to improve accuracy") and high-level mathematical detail in the form of a reliability score, as well as a request for human checking if an extraction component was found to be unreliable. It is also not surprising that the addition of abstract mathematical and administrative details did not alter the conclusion that the subject matter lacked technical character.
Administrative technical effects (Reason 3.2 of the decision)?
As part of its finding of lack of technical character, the Board considered whether an improvement in the accuracy of the billing codes could be considered a technical effect, so that the abstract non-technical features of the claim could nevertheless have technical character by contributing to achieving this technical effect. The Board found that the accuracy of the billing codes was an administrative measure and that, therefore, any improvement in the administrative accuracy of the billing codes was itself non-technical, so that there was no technical effect associated with improving billing code accuracy. One key point to note from this case, then, is that improvements in administrative criteria, such as the accuracy of a billing code, are unlikely to be capable of demonstrating a technical effect of the claimed subject matter. This seems like a useful clarification, if not an unexpected one in light of the established practice of the EPO's first instance and Boards of Appeal.
Technical contribution of a non-technical effect (Reason 3.3 of the decision)?
3M also argued that even if an improvement in the accuracy of the billing codes was not considered to be a technical effect, improving the accuracy of the billing codes had the advantage of avoiding wasted system resources. By using the inaccurate billing codes to improve the system, the codes were used to their maximum utility, and resources were saved by improving accuracy because fewer iterations were necessary to obtain the desired result. The Board was not convinced. It observed that, while the case law generally recognises that non-technical features can make a technical contribution if they are causally linked to a technical effect, such as reduced resource usage, not every such physical change qualifies as a technical effect. Rather, a physical change can be considered a technical effect only if the non-technical features are based on technical considerations aimed at controlling that physical change and are purposively used in the solution of a technical problem. In the present case, the Board found this not to be the case; rather, the computer program features of the invention were the result of non-technical administrative and programming considerations, being computer program components that mimic the administrative procedure of generating a billing code from input data.
In the Board's words: "[The computer program features] are the result of non-technical administrative considerations by the administrative expert about how to generate a billing code and non-technical programming considerations about how to program a computer to generate a billing code according to the chosen administrative procedure."
A computer engineer in the realm of machine learning (Reason 3.5 of the decision)?
A further argument put forward by 3M was that the choices made in providing the claimed system were not administrative but were technical choices made by a computer engineer in the realm of machine learning, or an expert in machine learning, as the Board put it. The Board did not find this decisive, since the work of an expert in machine learning includes non-technical computer programming tasks. Instead, the decisive question was whether the computer program features resulted from technical considerations beyond merely finding a computer algorithm to carry out some procedure. In this case, the Board found only non-technical administrative and programming considerations. In particular, the Board noted that the claim does not describe "any non-trivial technical characteristics" of the components, e.g. the concept extraction components, and that "the choice of the components is not driven by technical constraints".
"If neither the output of a machine-learning computer program nor the output's accuracy contribute to a technical effect, an improvement of the machine achieved automatically through supervised learning to generate a more accurate output is not in itself a technical effect"
This is the catchword of the decision and, reflecting Reason 3.2, would suggest that the decision is highly relevant to the patentability of machine learning technology, specifically supervised learning. However, on reading the decision it becomes clear that this is more due to the language employed by 3M in arguing for a technical effect of its invention than to the technical reality of what has been claimed. Looking at the claim, it is clear that it does not relate to machine learning at all, despite using terms like "reinforcement" (presumably alluding to the field of reinforcement learning, in which a machine learns a policy on which to act in order to maximise rewards). The crux of the matter is that the invention considered here does no more than implement administrative processes such as generating billing codes and human checking of unreliable components. This is set out by the Board in Reason 3.5, which notes that the components of the claim lack "any non-trivial technical characteristics", from which a refusal on the basis of lack of inventive step could be expected.
Seen in this light, the decision in fact breaks no new ground and is completely agnostic as to whether machine learning can be considered technical, because the invention in this case does not in fact relate to machine learning, at least not at the breadth claimed, but rather to a straightforward implementation of the administrative desire to check on unreliable components of an administrative system. The claimed subject matter is simply devoid of any characteristics of machine learning technology: the system does not actually learn to do anything but only identifies the sources of errors in the system. The "concept recognition components" are not improved or adapted in any way in what is claimed, and the system does not learn, other than to ask a human to intervene and make checks. There is no supervised learning in a technical sense (in that there is no desired outcome that is compared to the actual outcome to automatically adjust the system) and no reinforcement learning either, in spite of the claim referencing negative reinforcement.
As far as machine learning is concerned, the mention of relevant-sounding terms in this decision (and the claims) is a red herring. I will discuss in the next section why this is important and why it leaves some hope that the practice of the EPO in relation to patenting AI inventions, and in particular machine learning inventions, will be able to adapt to current technological realities.
Current examination practice and the future of patenting AI and machine learning at the EPO
The current examination practice for AI and machine learning inventions, as set out in the Guidelines for Examination, is that machine learning technology such as neural networks or other models is in essence no more than a mathematical concept and is therefore, as such, considered non-technical and unable to contribute to a technical effect. All is not lost, however: the Guidelines envisage that, like any mathematical method, machine learning technology can contribute to a technical effect in one of two ways: by limiting a claim to a specific application, such as image processing, some forms of low-level text processing, analysis of measured signals and the like; or by the mathematical method being specifically adapted to the specific hardware it runs on, for example distributing processing between CPUs and co-processors in a way that optimises processing. While this, in particular the first option, leaves many areas in which machine learning innovation can be patented, the most fundamental developments in machine learning technology are not limited to specific use cases or hardware implementations but enable enormous technological progress in many fields of application, for all kinds of inputs. Yet, by requiring specific applications or hardware adaptations, these fundamental developments cannot be patented to the full extent of their technological contribution, which is to enable machines to better self-adapt irrespective of any specific application or hardware.
In doing so, the EPO focuses on a very theoretical and limited understanding of machine learning as a mathematical method, ignoring the real, wide and fundamental impact machine learning has on technology and on our society as a whole. Recognising this would mean that self-adapting machines (perhaps a less anthropomorphic description of machine learning) are considered just as much a technological field as, for example, telecommunications or computer storage and security. These latter fields cover similarly wide ranges of application as self-adapting machines/machine learning, and yet they provide a precedent of highly abstract mathematical methods making a technical contribution and thus creating technical effects across broad fields of technology, such as in data compression, encryption, secure communication and storage, digital signatures, document authentication and so on. All of these technological advances, just like machine learning, employ highly complex and abstract mathematical methods to improve the respective technologies, for which the EPO routinely, and rightly, recognises technical effects such as better compression or security, or faster or more storage-efficient processing. The discrimination against machine learning technology, by not recognising machine learning as a field of technology just as much as data storage or telecommunications, does not appear to be in step with the times, with the enormous technological importance of the field, or with the need to incentivise and encourage the development and dissemination of fundamental advances in what may be the most groundbreaking technological development of this century. Not for nothing have some compared data and machine learning to the steam and heat engines of the first industrial revolution.
Not only is the treatment of AI at the EPO out of step with the times and technological reality, it appears to be entirely unnecessary. There appears to be no case law indicating that machine learning is not a technical field just as much as telecommunications, and, despite first impressions, the present case does not change this. This is because the invention here cannot objectively be described as machine learning, and the decision is no more than the consistent application of established practice and case law to a straightforward computer implementation of abstract administrative processes. Further, the treatment of mathematical inventions in the context of telecoms or the other mentioned technologies provides a clear precedent, one could say imperative, to recognise machine learning as the technical field it so clearly is. Happily, it therefore appears that the route remains open for the EPO to bring its practice up to date and accord mathematical innovations in machine learning an equal footing to that long recognised for mathematical innovation in telecommunications and other technical areas.
If you have any questions, please contact Alexander or your usual Kilburn & Strode advisor.