This article was first published by Law360; click here to view the original. A shorter review was published in Alexander's newsletter.
Artificial intelligence and machine learning have become ubiquitous, with rarely a day going by without an exciting new development or application. Together with the rise of AI and ML generally, the importance of patenting AI inventions has increased.
The European Patent Office, or EPO, treats AI methods per se as mathematical methods that are considered not to be technical and hence not to contribute to patentability.
However, AI can acquire technical character either by being used for a technical purpose or by itself, if its design is driven by technical considerations. The latter, ML technology per se that is not application dependent, is often referred to as core AI.
Guidance from the EPO Boards of Appeal on what this all means is still relatively rare, in particular for core AI. What is more, most recent cases before the boards have dealt with applications that have taken a black-box approach to describing and claiming the use of AI for a technical purpose, for example blood pressure estimation or gas turbine control.
Perhaps unsurprisingly, all we have learned from these decisions is that the EPO considers the use of a generic machine learning model or neural network to be commonplace in technology and hence obvious — see, for example, the previous Board of Appeal decisions in Neuronal plasticity/Institut Guttmann, Fan flutter/Raytheon Technologies Corp., Äquivalenter Aortendruck/ARC Seibersdorf Labor GmbH and Beschichtung eines Werkstücks/MTU Aero Engines GmbH.
Overview of the Mitsubishi Decision
The recent decision Sparsely connected neural network/Mitsubishi Electric Corp., issued by the EPO Boards of Appeal at the end of January 2023, is different in two important aspects.
First, the application at issue tried to claim an ML technology per se, i.e., core AI, rather than a specific use for a technical purpose.
Second, the application went beyond a mere black-box description and was concerned with the details of how the ML technology, specifically a sparsely connected neural network, was implemented.
The decision is therefore of interest as it is one of the rare cases to date in which the board considers the possibility of patenting AI and ML per se and deals in detail with technical arguments about improvements to ML technology itself.
The decision provides us with some guidance as to what is needed to be able to argue that a new ML technology has a technical effect per se, not dependent on any specific application of it.
While the board maintains that AI and ML per se are currently seen as nontechnical mathematical methods, the board, unusually, considers how the case would be decided if that were to change.
Before turning to how the board assessed the invention, a brief summary of the invention itself: it relates to a sparsely connected neural network structure, which has a reduced number of connections, and hence fewer parameters to be learned from data, and is said to be less prone to overfitting.
As is well known, overfitting — learning the noise in the data rather than the underlying structure — occurs when there is insufficient data for a given number of parameters.
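To make this concrete, here is a toy pure-Python sketch of my own (not taken from the decision or the application): a model with as many parameters as training points fits noisy samples of a straight line exactly, yet deviates from the underlying line between the points — it has learned the noise.

```python
# Toy illustration of overfitting: a degree-4 polynomial (one parameter
# per sample) fits five noisy points exactly but strays from the line
# they were drawn from. All values are fixed for reproducibility.

def f(x):
    """The underlying structure: a straight line y = 2x."""
    return 2.0 * x

# Five noisy samples of the line (noise values chosen by hand).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
noise = [0.3, -0.4, 0.5, -0.2, 0.4]
ys = [f(x) + n for x, n in zip(xs, noise)]

def interpolant(x):
    """Degree-4 Lagrange polynomial through all five points: with as
    many parameters as data points it reproduces the noise exactly."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Zero error on the training points...
train_err = max(abs(interpolant(x) - y) for x, y in zip(xs, ys))

# ...but a noticeable error between them, where the "learned" noise
# pulls the curve away from the true line.
test_err = abs(interpolant(3.5) - f(3.5))

print(train_err, test_err)
```

With more data points than parameters the fit would be forced to average the noise out; with exactly as many parameters as points, nothing constrains the model between the samples.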
Unlike the acknowledged state of the art, the invention determines the set of connections before training — hence independent of the training data.
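The general idea can be sketched in a few lines of Python. This is my own minimal illustration of the principle, not the patent's actual connectivity scheme: a sparse connection pattern is fixed before any training data is seen, so the reduction in learnable parameters is data-independent.

```python
# Minimal sketch (illustrative only, not the claimed scheme): fix a
# sparse connectivity mask for one layer before training, reducing the
# number of learnable parameters independently of the training data.
# Here a simple deterministic pattern keeps every other connection.

n_in, n_out = 8, 4  # illustrative layer sizes

# Connectivity mask chosen up front, before any training data is seen.
mask = [[1 if (i + j) % 2 == 0 else 0 for j in range(n_in)]
        for i in range(n_out)]

dense_params = n_in * n_out                    # fully connected layer
sparse_params = sum(sum(row) for row in mask)  # connections kept

# Only the masked-in weights would be stored and updated during
# training; the mask itself never changes, so the saving in stored
# parameters holds regardless of the data.
print(dense_params, sparse_params)  # → 32 16
```

The point of contention before the board was not whether such a mask reduces storage (it plainly does), but whether that reduction alone amounts to a technical effect when the resulting network no longer learns in the same way.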
The decision discusses in detail the technical background of neural networks (points 7 and 8) and the relevant case law of the Boards of Appeal (points 9 to 11) before delving into the arguments brought forward by the applicant. For those wishing to read the case, this is a good starting point.
Reasons for the Decision
Turning now to the substantive assessment of the arguments brought forward by the appellant: while the appeal ultimately fails, the decision is good news for those interested in patenting AI and ML inventions, as it indicates what sort of arguments do not work and why (see reasons 12-19 of the decision).
According to the applicant, the neural network of the invention, with its sparse connectivity, reduces the resource requirements, in particular storage, compared to a fully connected network. This, the applicant argued, should be recognized as a technical effect in the computer, following G 1/19.
The board did not buy it. While a network with fewer connections needs less storage, it is also no longer the same network; a single-neuron network would have the smallest storage requirements of all. The argued comparison is incomplete, as it focuses only on the computational requirements while ignoring that the modified network will not learn in the same way.
My take is that when arguing lesser resource requirements, it is important not to forget the flip side: the argument needs to show, at least, that the invention uses fewer resources to achieve at least a comparable result — doing less with less is not enough.
The applicant also argued that an artificial neural network is an artificial brain, and that artificial brains solve an automation problem: neural networks mimic the human brain and cannot be understood by their programmer. The board was not impressed, seeing no evidence that a neural network functions like a human brain; having a structure inspired by the brain does not imply that it functions like one. A network's parameters and results are fully determined given the training data and procedure.
A neural network can be simple and understandable or complex, giving it the appearance of unpredictability. Yet, that a learning system is complex does not mean that it replicates the functioning of the brain. In the board's own words:
"The Appellant thus has not convinced the Board that neural networks in general function like a human brain or can replace the human in performing complex tasks. Even less so has the Appellant established that the claimed neural network solves the 'brain' automation problem in general."
The board's response is not surprising. I would suggest that this line of argument is not worth pursuing in the future.
The board also considered if there was an implied technical use. This was a point the board considered of its own motion.
In contrast to cryptography, which is equally math-based, solving a classification or regression problem per se did not imply a technical use, since the use could equally be nontechnical, for example forecasting stock market evolution. There was thus no further technical use implied, unlike the encryption of digital messages, which implies an increase in system security.
My take is that "further technical use" is unlikely to be helpful when trying to patent fundamental AI — here a focus on how the design of the method is based on technical considerations of the internal functioning of computers will be more important.
In summary, then, while the presence of a computer meant that the invention was not excluded from patentability per se, no technical effect could be acknowledged and hence the invention lacked an inventive step. Therefore, the appeal was dismissed.
The Decision and Beyond
The board could have left it there but proceeded to make some hopeful comments for those seeking effective protection of machine learning inventions per se. In the board's own words:
"The Board stresses that there can be no reasonable doubt that neural networks can provide technical tools useful for automating human tasks or solving technical problems."
The board continues and provides us with guidance as to how neural networks should be claimed. Specifically, the board observes that the neural network must be sufficiently specified in terms of, in particular, the training data and the technical task at hand. What that means in any one case will depend on the problem being solved, as the applicant must establish that the trained neural network solves a technical problem across the whole scope of the claimed generality.
Perhaps even more interesting is that the board finds it necessary in this case, in spite of already finding a lack of inventive step, to note that even if general methods for machine learning per se were to be considered as technical per se — hence not a mathematical method per se and not part of the items of excluded subject matter — it would remain questionable whether the proposed connectivity scheme would provide a benefit beyond the mere reduction of storage requirements, for instance "a 'good' trade-off between computational requirements and learning capability."
Thus, while the board could not see, in this particular case, for which type of learning tasks the proposed structure may be of benefit, and to what extent, there may yet be prospects for broadly protecting machine learning techniques that implement a given learning task with a demonstrable benefit, such as trading off conflicting technical requirements in a "good" way.
This decision is at once remarkable and unsurprising. Board of Appeal decisions that deal with machine learning at a technical level are rare, and this decision may be the most technically detailed yet.
While the outcome is not surprising given the details of the case, the case is remarkable in that it seems to indicate a willingness by the boards to engage with the machine learning details.
Thus, while we are still waiting for confirmation from the boards on this point, it stands to reason that core AI inventions supported by a description demonstrating a technical benefit at the level of generality at which the invention is claimed should in principle be patentable at the EPO.
While in practice some cases that can demonstrate such a benefit, for example by enabling parallel processing, do get allowed in first instance proceedings, it would be good to have some positive examples from the Boards of Appeal that help to settle this aspect of practice at the EPO.
This case is a hopeful sign that if applicants involved in this area of patenting are looking out for the right cases to appeal, there are prospects of positively developing the case law in this area.
ECLI:EP:BA:2022:T119119.20220401 (T 1191/19).
ECLI:EP:BA:2021:T224618.20211022 (T 2246/18).
ECLI:EP:BA:2020:T016118.20200512 (T 0161/18).
ECLI:EP:BA:2010:T196808.20100810 (T 1968/08).
ECLI:EP:BA:2022:T070220.20221107 (T 0702/20), https://www.epo.org/law-practice/case-law-appeals/recent/t200702eu1.html.
Decision published 20 January 2023: https://www.epo.org/boards-of-appeal/decisions/pdf/t200702eu1.pdf.
https://www.ibm.com/topics/overfitting.
ECLI:EP:BA:2022:T070220.20221107 (T 0702/20), at 12-19, https://www.epo.org/law-practice/case-law-appeals/recent/t200702eu1.html.
ECLI:EP:BA:2021:G000119.20210310 (G 1/19), https://www.epo.org/law-practice/case-law-appeals/recent/g190001ex1.html.
ECLI:EP:BA:2022:T070220.20221107 (T 0702/20), at 20, https://www.epo.org/law-practice/case-law-appeals/recent/t200702eu1.html.