EPO case law – when the context of a technical application is not enough 

The EPO guidelines for examination consider AI and ML to be abstract maths but acknowledge two routes to the all-important technical character for AI and ML features: technical implementation and technical application. You can read my thoughts about case law on the former, but there has been little modern case law on the latter. This may be because a technical application case often leads to a patent being granted. So, the recently issued decision T1635/19 should make interesting reading, and I will pick out the key points for you here.

The invention in question relates to the maintenance of rolling stock such as trucks and trains. Specifically, it concerns a method for adaptive remote maintenance of a truck or train that includes receiving diagnostic information, including sensor data, and identifying maintenance incidents based on rules applied to the sensor data. The invention differed from the cited state of the art in that it created and applied a new rule for identifying incidents from the sensor data by applying supervised learning to that data. The supervised learning uses, as ground truth information, indications of which of the generated incidents have been validated or discarded by human experts.

The difference was found to relate to an abstract scheme of learning a new classification rule on the basis of input data (the diagnostic information/sensor data) and expected output data (the “ground truth”) indicating whether events generated by the current rules are correct (validated) or not (discarded). This means that the scheme as such is non-technical and is therefore taken into account for inventive step only to the extent that it interacts with the technical subject matter to solve a technical problem.

Does it? The applicant argued, among other things, that the newly created rules could identify rolling stock issues with better accuracy and that, since the issues were measured by sensors, they were of a technical nature. On this view, the technical problem of improving the accuracy of identifying technical issues with rolling stock was solved.

The Board disagreed. Because the quality of the new rules depended entirely on the quality of the ground truth, which could be the result of cognitive processes, any improvement in accuracy would not be a technical effect achieved by the features of the claim. The learning scheme therefore made no technical contribution beyond being implemented in a generic manner on a processor and so did not contribute to an inventive step. The application was refused for lack of an inventive step.

In short, according to the Board, merely claiming an abstract learning scheme that is applied in a technical context (rolling stock maintenance) and/or takes a technical input (sensor signals) is not enough by itself to provide a contribution to inventive step. What is needed is that the learning interacts with the technical subject matter to solve a technical problem. If the solution to the technical problem depends entirely on the quality of some ground truth that may be provided by cognitive processes, a technical context or input will not be enough for a technical contribution of the learning scheme to be recognised.

Follow Alexander Korenberg’s AI musings