In its decision T 0702/20 the Board of Appeal considered an invention directed to a sparsely connected neural network, which the applicant tried to protect per se, independent of any specific technical implementation. The decision confirms that neural networks are, fundamentally, a class of mathematical functions specified by the structure of the network and its parameters. But unlike many recent decisions involving neural networks, the invention here goes beyond the “black box” view of neural networks and delves into the network’s structure. As such, the Board gets a chance to engage with the technical details. While the decision does not establish a positive data point for patenting machine learning inventions, it provides highly relevant guidance for practitioners on the types of facts and arguments that will not convince the EPO to acknowledge that a technical problem is solved. What is more, the Board, unusually, makes some comments going beyond the reasons for the decision. Reading the tea leaves, these comments, in my opinion, indicate that it is time for the right kind of machine learning cases to come before the Boards of Appeal.
Briefly, the invention relates to a sparsely connected neural network structure, which therefore has a reduced number of connections (and hence parameters) that need to be learned from data and is said to be less prone to overfitting (as is well known, overfitting, learning the noise in the data rather than the underlying structure, occurs when there is insufficient data for a given number of parameters). Unlike the acknowledged state of the art, the invention determines the set of connections before training, and hence independently of the training data. The decision discusses in detail the technical background of neural networks (points 7 and 8) and the relevant case law of the Boards of Appeal (points 9 to 11) before delving into the arguments brought forward by the applicant. For those wishing to read the case, this is a good starting point.
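To make the parameter-reduction point concrete, here is a minimal sketch (not the claimed method, and not taken from the application) of the general idea of fixing a sparse connectivity pattern before training. The layer dimensions, the 10% density and the use of NumPy are all illustrative assumptions of mine:

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 100, 50   # illustrative layer dimensions
density = 0.1           # keep roughly 10% of the possible connections

# Fixed connectivity mask chosen BEFORE training, independently of any
# training data: a connection exists only where the mask is True.
mask = rng.random((n_in, n_out)) < density

# Only the masked entries would be trainable parameters.
weights = rng.standard_normal((n_in, n_out)) * mask

dense_params = n_in * n_out
sparse_params = int(mask.sum())
print(f"fully connected: {dense_params} parameters, "
      f"sparse: {sparse_params} parameters")
```

The point of the invention, as summarised in the decision, is exactly this reduction in the number of parameters to be learned, with the connectivity decided up front rather than pruned based on the data.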
The Board then deals with the specific arguments brought forward by the applicant. While I would love some more positive examples (as discussed in this note) of how fundamental machine learning technology could be patented at the EPO, the next best thing is to know what does not work and why (see reasons 12-19 of this decision). Here is a summary of the applicant’s arguments and the Board’s reaction.
The neural network of the invention, with its sparse connectivity, reduces the resource requirements, in particular storage, compared to a fully connected network; this should be recognised as a technical effect in the computer (G 1/19). The Board did not buy it – while a network with fewer connections needs less storage, it is also not the same network anymore. A single-neuron neural network would have the lowest storage requirements of all. The argued comparison is incomplete, as it focuses only on the computational requirements while ignoring that the modified network will not learn in the same way. My take: when arguing reduced resource requirements, it is important not to forget the flip side: the argument needs to show, at least, that the invention uses fewer resources to achieve at least a comparable result; doing less with less is not enough.
An artificial neural network is an artificial brain, and artificial brains solve an automation problem; neural networks mimic the human brain and cannot be understood by their programmer. Not impressed, the Board saw no evidence that a neural network functions like a human brain – having a structure inspired by the brain does not imply it functions like one. Its parameters and results are fully determined given the training data and procedure. A neural network can be simple and understandable, or complex, giving it the appearance of unpredictability. Yet the fact that a learning system is complex does not mean that it replicates the functioning of the brain. In the Board’s own words: “The Appellant thus has not convinced the Board that neural networks in general function like a human brain or can replace the human in performing complex tasks. Even less so has the Appellant established that the claimed neural network solves the "brain" automation problem in general.” My take: I would suggest that this line of argument is not really worth trying in the future.
Implied further technical use. This was a point the Board considered of its own motion. In contrast to cryptography, which is equally maths-based, solving a classification or regression problem per se did not imply a technical use, as the use could be, for example, to forecast stock market evolution. There was thus no further technical use implied, in contrast to the encryption of digital messages, which implies an increase in system security. My take: an “implied further technical use” is unlikely to be helpful when trying to patent fundamental AI – here a focus on how the design of the method is based on technical considerations of the internal functioning of computers will be more important. Yet best practice is to make sure any technical uses or purposes are carefully discussed in the application as an alternative route to patentability.
In summary, then, while the presence of a computer meant that the invention was not excluded from patentability per se, no technical effect could be acknowledged and hence the invention lacked an inventive step. Therefore, the appeal was dismissed.
The Board could have left it there but proceeded to make some hopeful comments for those seeking effective protection of machine learning inventions per se. In the Board’s own words:
“The Board stresses that there can be no reasonable doubt that neural networks can provide technical tools useful for automating human tasks or solving technical problems. In most cases, however, this requires them to be sufficiently specified, in particular as regards the training data and the technical task addressed. What specificity is required will regularly depend on the problem being considered, as it must be established that the trained neural network solves a technical problem in the claimed generality.” (Emphasis added)
Perhaps even more interesting is that the Board finds it necessary in this case, in spite of already having found a lack of inventive step, to note that even if general methods for machine learning per se were considered not to fall under the excluded subject matter (i.e. not a mathematical method as such), it would remain questionable whether the proposed connectivity scheme would provide a benefit beyond the mere reduction of storage requirements, for instance “a ‘good’ trade-off between computational requirements and learning capability”. Thus, while the Board could not see, in this particular case, for which type of learning tasks the proposed structure might be of benefit, and to what extent, there may yet be prospects for broadly protecting machine learning techniques that implement a given learning task with a demonstrable benefit, such as trading off conflicting technical requirements in a “good” way. While in practice some cases that can demonstrate such a benefit, for example by enabling parallel processing, do get allowed in first instance proceedings, it would be good to have some positive examples from the Boards of Appeal that help to settle this aspect of practice at the EPO. This case is a hopeful sign that if applicants involved in this area of patenting are looking out for the “right” cases to appeal, there are prospects of positively developing the case law in this area.