Machine learning is obvious...

A recent Board of Appeal decision (discussed in this post) found that using a neural network, per se, is obvious and that a lack of any discussion of the training data for a neural network renders the disclosure of an application insufficient to enable the invention to be put into practice. In T1191/19, hot off the press, the same Board found that using a known machine learning technique without details of how it is adapted to a novel application is obvious, expanding on the previous decision. Additionally, the applicant again failed to describe the training data in their application and, unsurprisingly, the Board's view on this was consistent with its earlier decision: the disclosure was insufficient. The Board also had an interesting way of dealing with the lack of clarity found at first instance.

The application relates to applying meta-learning to model and guide processes related to brain plasticity. At the start, the Board had to grapple with claim terms found to be unclear at first instance, and it found an attractive way around the problem. The Board noted that a document newly cited by the applicant in support of the clarity of the claim terms used the same terminology as the claim. It therefore felt it could move straight to inventive step and leave the question of clarity open (the implicit reasoning being that, whatever the terms mean, they mean the same thing in the claim and in the cited document).

In its submissions, the appellant argued that the new document uses the same general meta-learning scheme as claimed in the application and that applying meta-learning to model and guide processes related to brain plasticity was a novel and inventive strategy. On that basis, the Board considered the submitted document the most relevant document for assessing inventive step. The Board held that the mere application of a known machine learning technique to a problem in a particular field is a general trend in technology and cannot be inventive in itself, citing the decision discussed in the post linked above. In the present case, the problem at hand was predicting personalised interventions for a patient in processes whose substrate is neuronal plasticity. The question, then, was whether the claimed method applied the meta-learning scheme of the new document to this specific problem in a manner that would not have been obvious to the skilled person. The Board found no non-obvious detail in how the meta-learning scheme was applied to the problem, beyond a mere reiteration of the disclosed technique at an abstract level, and the claims were found to lack inventive step. Beyond the specifics of this case, the conclusion is that a straightforward application of a disclosed machine learning method to a new problem, without any non-obvious detail of how the method is applied, is likely to be found obvious.

The Board also considered whether the disclosure of the application was sufficient to put the invention into practice. It found that the application did not disclose how the meta-learning scheme was applied to the problem at hand in a manner sufficiently clear and complete for it to be carried out by the person skilled in the art. Specifically, the application did not disclose any example set of training data and validation data, which the meta-learning scheme requires as input. It did not even disclose the minimum number of patients from which training data should be compiled to give a meaningful prediction, nor the set of relevant parameters. No details were disclosed of the heuristics mentioned in the claims for solving the problem at hand, nor of the structure of the artificial neural networks used as classifiers, their topology, activation functions, end conditions or learning mechanism. At the level of abstraction of the application, the available disclosure was an invitation to a research programme. Given the lack of disclosure in this specific case, it is difficult to know what level of disclosure would be sufficient in the Board's eyes. One thing, however, is clear: applicants must include at least some information about the training data and the structure of the models used, and at least some information on how the models are trained and any other rules associated with their training or use at inference.
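To make the Board's list of missing details more concrete, the short Python sketch below (using scikit-learn and entirely synthetic data; the dataset size, topology and hyperparameters are illustrative assumptions, not anything taken from the application or the decision) shows the kind of information the Board said was absent: an explicit training/validation split, the network topology, the activation function, the learning mechanism and the end conditions are all pinned down rather than left abstract.

```python
# Illustrative sketch only: synthetic data and hypothetical hyperparameters,
# spelling out the details the Board found missing (training/validation data,
# topology, activation function, learning mechanism, end conditions).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in for an example dataset: 200 "patients", each with 10 parameters.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Explicit training and validation data, the inputs the scheme requires.
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Classifier with its structure and training regime spelled out.
clf = MLPClassifier(
    hidden_layer_sizes=(32, 16),   # topology: two hidden layers
    activation="relu",             # activation function
    solver="adam",                 # learning mechanism
    learning_rate_init=1e-3,
    max_iter=500,                  # end condition: iteration cap
    early_stopping=True,           # end condition: stop when validation stalls
    n_iter_no_change=10,
    random_state=0,
)
clf.fit(X_train, y_train)
print(f"Validation accuracy: {clf.score(X_val, y_val):.2f}")
```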

As in the earlier case before the same Board, the appellant did not respond to the preliminary opinion the Board sent out before the appeal hearing, nor did they attend the hearing. Appellants are, of course, within their rights to do so. The consequence, however, is that in these cases the Board pronounced on points fundamental to the patenting of machine learning inventions without the benefit of testing those points in discussion with the appellant. What is more, the level of engagement in these decisions and on the appellant's side suggests that these may not have been the best cases to move the needle on our understanding of how to assess machine learning inventions. We will only advance that understanding when the Boards of Appeal get to rule on cases that test the limits of what can be patented and how: cases with promising subject matter that are vigorously argued. When that happens, you can be sure to read about it here.

Follow Alexander Korenberg’s AI musings
