With the excitement of the Court of Appeal DABUS case (here and here) just settled, the UKIPO keeps AI-devised inventions in the limelight with this recently issued consultation. It has always struck me as remarkable that legal thinkers and policymakers should spend so much time and energy on an issue—inventorship in light of autonomous machine invention—that the AI/ML community widely believes to be non-existent, at least for the coming decades. Yours truly would much prefer to see a consultation on the economic and societal impact of the UKIPO's current stance on patenting fundamental AI innovation. That said, the creation of copyright works by AI is most definitely a technical reality today, and so the copyright part of the consultation, which asks whether the current provisions for digitally created works are sufficient, is indeed timely.
The consultation also covers questions of licensing and copyright exceptions for data mining, which are highly pertinent to AI research and development and to current real-world concerns (see, for example, here). This may turn out to be the most important and impactful part of the consultation.
For anyone who, like me, wonders how a hypothetical in the form of autonomous machine invention has come to capture the legal and policy-making agenda to this extent, this article crystallises the issue beautifully and suggests some useful categories to consider, in particular the distinction between automation and autonomy, and the fact that, in the current state of the art, computers are bound by the instructions with which they are programmed. Here are some choice quotes, although the article is well worth reading in its entirety:
“‘The real danger of artificial intelligence is not that computers are smarter than us, but that we think [they] are.’”
“The amount of legal writing highlighting the incompatibility of the existing patent system with ‘artificial inventions’ appears to be in stark contrast with the seeming non-existence of technical inquiries on the very source of concerns ‒ the phenomenon of ‘autonomous generation of inventions’ by computers.”
“[…] computational methods of problem solving ‒ including ANNs and EAs ‒ essentially rely on the instructions that determine how inputs are mapped into output through computation. Thus, as long as a computer is bound by an algorithm, there is no reason to assign to it ‘cognitive’ autonomy, irrespective of the complexity of the computational process. As an operative test to prove the decisive role of such instructions, it is suggested to run a counterfactual where a computer would need to solve a problem in their absence.”
“As we do not personify the laws of physics or chemistry, neither should we attribute a mystic personality to computational processes carried out according to the laws of mathematics and statistics. Even though it became common to use the language that anthropomorphises algorithms, such tendency was viewed as ‘an obstacle to properly conceptualizing’ legal and societal challenges posed by AI techniques, as well as misguiding the policy priorities. If computers only execute the problem-solving mechanism […] the distinction between human and ‘non-human’ (algorithmic) ingenuity is simply pointless.”
For those who cannot get enough of this debate, this recent panel I participated in with colleagues from Pinsent Masons and the industrial mathematician and Turing Fellow Prof. Chris Dent came to a similar conclusion and highlighted some urgent areas of AI regulation that we need to focus on. (The webinar recording is available at the link.)