Me, myself & AI – an interview with Professor Ryan Abbott

Partner Alex Korenberg and Associate Tom Hamer sit down at the virtual table with Artificial Intelligence expert and author Professor Ryan Abbott.
 
Fittingly, the interview was transcribed using an AI-powered audio-to-text tool (Ryan has confirmed that the tool needn’t be named as a co-author of this article).
 

About the author

Professor Abbott is the author of The Reasonable Robot: Artificial Intelligence and the Law and has published widely on topics associated with law and technology, health law, and intellectual property in leading legal, medical, and scientific books and journals. You can learn more about him and his work here.
 
Professor Abbott is also leading the team behind The Artificial Inventor Project, which has filed patent applications worldwide naming AI system DABUS as a sole inventor. Partner Alex Korenberg, software patent and AI expert, has commented on these applications here: UKIPO, EPO.


Alex: Where do you see AI being particularly disruptive, and in what legal areas? Will Intellectual Property be the most disrupted, or will it be another field?

Prof. Abbott: AI is going to be broadly disruptive to many areas of the law. In IP, AI autonomously making stuff is not the central concern that most companies have. But, what is a concern for mainstream companies is that the teams involved in creating new output are getting larger and more diverse, working together more collaboratively, and increasingly relying on digital technologies. All of this complicates things like determining who is an author or inventor for something that has commercial value.
 
I suspect, particularly in the copyright arena, we’re going to see a newfound commercial importance to AI-generated output as AI becomes capable of producing massive amounts of mediocre-quality works at almost no marginal cost. AI is also going to disrupt other aspects of IP like inventive step analyses, as machines start to make previously inventive leaps look more obvious, and enforcement, as it becomes cheaper and easier to detect infringement.
 
Alex: What about tort liability? You know, what happens if an AI-driven car runs somebody over? Where do you see the biggest problems?

Prof. Abbott: AI running someone over is certainly a problem, but I think it shouldn’t be our main focus. Because, ultimately, AI is going to be a lot less likely to run someone over than a human driver. That means we’re going to experience major safety gains by promoting the development and use of these technologies.
 
One of the things I argue for in the book (The Reasonable Robot) is that we should be careful about limiting people's freedom and autonomy. But while I don’t think we need to ban people from the road, maybe we should start holding people to a higher standard, i.e. the standard of a self-driving car. If someone causes an accident because they choose to drive rather than use a nearly perfectly safe vehicle, then at least they should be liable for the harm they cause.

I think the same reasoning applies to medicine, where already companies are saying that their AI can outperform doctors in very specific applications like diagnosing certain conditions, or extrapolating from radiology images or pathology slides. In ten or twenty years, we may have robots autonomously doing certain surgical procedures. At that point, I'm not sure we want human doctors doing procedures where we have good evidence that a robot can do the same thing only safer, cheaper, and with faster recovery times.
 
Tom: What are the main issues that private practice firms should be thinking of when advising clients on AI issues in the future? What should I have on my radar as an attorney wanting to advise clients on AI and IP?

Prof. Abbott: Clients and in-house counsel are busy and may not have spent much time thinking through some of the challenges and opportunities that will result from AI, which, to be sure, are highly dependent on the client and their business model. As outside counsel, one of the ways you can add value is by helping clients think through these issues, how they will be impacted, and what they can proactively do about AI-assisted or AI-generated inventions. That could be things like making sure everything’s done right with contractual and assignment obligations, and looking at R&D workflows to make sure that, if push comes to shove, systems have been designed so there is a person you could point to as having done something traditionally inventive.
 
Tom: Where there's a risk or question over inventorship, how can companies protect and prepare themselves?

Prof. Abbott: Providing good solutions requires really understanding the underlying problems in fact-specific contexts. Again, for inventorship, you need a thorough understanding of the R&D process, who is doing what, what is doing what, and ensuring that, for subsistence issues, you have someone who traditionally qualifies as an inventor.
 
But there are a lot of dimensions to these risks and not a single one-size-fits-all approach. For example, where R&D teams are large and collaborative between companies or academic-industry partnerships, it may be important to have alternative dispute resolution agreements in place to help resolve disputes before they escalate to litigation, through processes like pre-dispute mediation, project neutrals, and early neutral case assessment.
 
Tom: What are you hoping to achieve with the Artificial Inventor Project? 

Prof. Abbott: I became interested in the issue of AI-generated inventions in around 2013–2014 while I was teaching patent law, specifically the law related to inventorship, and at the same time working in the life sciences industry as a patent attorney. In that role, I was coming across companies that were advertising that they had AI doing contract research, or that AI was being applied in contract research, and doing the sorts of things that tend to make an individual an inventor. I began thinking through some of the challenges this might pose.
 
As I started speaking about these AI-generated works, I found that companies had concerns in the space but that there really was very little guidance on what to do and essentially no case law. I think there was also a sense that these issues were too difficult to deal with, because they pose some really fundamental challenges to IP systems, and so some people were reluctant to talk about it.
 
Certainly one of the reasons for the project was to start a conversation about AI and IP because I think it would be unfortunate to have another ten years of commercial investment and technological improvement only to have policymakers try to retroactively legislate. We’ve been very pleased to see agencies like WIPO, UKIPO, and USPTO launching multi-stakeholder dialogues about these issues, and I think it has been helpful to have actual cases to discuss rather than just hypotheticals.
 
Alex: It seems that we will have to do something about AI inventorship when autonomous inventing by machines becomes commonplace. But when that happens, it seems imaginable that using an AI to invent will be seen as routine, making its output obvious. So are we stuck between there being no inventor and there being no inventive step, either of which prevents AI-generated inventions being patented?

Prof. Abbott: The difference is the time frame. We are likely to have AI pull ahead of people at solving problems in certain areas where it has a natural advantage, such as repurposing existing drugs by looking at large databases of patient information from, for example, NHS databases. But, I think we’re unlikely to have those sorts of AIs render everything obvious because they’re not likely to be widely available. The inventive step analysis doesn't focus on a hypothetical inventor but rather on a ‘skilled person’ who is more of an average researcher. So, if a company is using an exceptional proprietary AI, that system shouldn’t set the baseline for whether something is obvious because it isn’t something society has access to.
 
At a minimum, we’re likely decades away from something like artificial general intelligence or artificial superintelligence. Once we have widespread access to machines that can invent with trivial ease, that sort of activity should change inventive step analysis, but I think it would be a great outcome if incentivising AI-generated inventions at this stage helped us to get to artificial general intelligence faster.
 
Alex: And when we do get to that stage, with artificial general intelligence able to solve lots of non-specialised problems?

Prof. Abbott: Hard for me to say, and probably hard for anyone to say with any degree of certainty. But AI has been progressively getting faster, cheaper, and better and I see no indication that it will stop doing so. We already have AI that can dramatically outcompete people at every traditional board game, which was once a key measure of machine intelligence.
 
Alex: Where does human creation and training of the AI stop, and where does the AI itself inventing begin? Will we ever be able to say which is which?

Prof. Abbott: To give the lawyer’s answer: it depends. For each invention, it is a fact-specific analysis of who did what, when, and how relevant each person’s contribution was to the inventive nature of the AI’s output. For example, if I create a software program to do something very narrow, like designing a specific industrial component, select the training data (if it’s an AI based on machine learning), give specific criteria I’m looking for, and then the AI gives me ten options and I evaluate these to pick the best one, I think it is very clear you have a human inventor and an AI being used as a tool. On the other end of the spectrum, suppose a team of people creates a program that can more broadly generate industrial designs in several contexts, another group provides the AI with training data, like existing schematics for a desired component, and another group actually uses the AI to generate output; if the problem being solved is well known and the AI outputs one option that is obviously fit for purpose, then I don’t see a clear human inventor. In between, well, that’s where things get more interesting. It may ultimately depend on differences in national laws on inventorship and who can qualify in this process.
 
Because patent offices generally won’t dispute reported inventorship, this is also not an issue that is likely to come up until a patent is challenged in litigation. Perhaps it comes up when deposing a group of scientists and one of them makes a statement to the effect of, “oh, the AI generated that design, the patent lawyers said we needed someone on the application and I said I’d do it.” Or, it may come up if a development partner or competitor had an inside view into a company’s R&D process. Or, it may come up because some companies have been making public statements to the effect that they have inventive AIs.
 
Tom: This is where I feel like a chat show host. Ryan, we simply must talk about your book. Could you tell us a little bit about the general thesis of The Reasonable Robot?

Prof. Abbott: Well, if you insist… The book looks at the phenomenon of AI stepping into the shoes of people and doing the sorts of things that only people used to do in the past. For example, functionally operating as a human artist or inventor, or even driving a car or operating a cash register at McDonald’s.
 
Current IP laws were not designed to accommodate this. We have two different legal standards for work done by a person and for work done by an AI, which tends to result in negative outcomes. We would be better off as a society if we had laws that encouraged AI automation where it is more efficient.
 
Or, if we look to torts… we have a different set of liability laws for accidents caused by AI and those caused by people. But, if you have AI and people doing the same basic thing, like driving an Uber, it's not clear to me why, as a passenger or pedestrian, I would want one or other to have a higher or lower standard of liability. I just don't want to be run over. We should have a standard of liability that doesn't artificially discriminate against a safer option. The book looks at similar issues in depth in the context of tax and criminal law, too.
 
Imagine a law firm using AI to write a patent application. Even if the result is not a patent application that you file directly, a first pass from the AI might mean it takes an associate five hours to complete the application rather than ten hours from scratch. If Kilburn & Strode started using it, it could effectively replace one associate with AI (not that the firm should, we are being strictly hypothetical here!). But the point is, if Kilburn & Strode did do this, they would receive artificial tax incentives. Among other things, the firm has to make National Insurance contributions for human associates, but not for an AI that is replacing an associate. Our tax policies encourage companies to automate even where it isn’t otherwise more efficient, and as a country we’re losing a substantial amount of tax revenue.
 
The book argues that a guiding principle for AI regulation should be “AI legal neutrality”: the idea that the law shouldn’t discriminate between AI and human behaviour without good reason.
 
Tom: To clarify, you are not advocating for private practice law firms to get rid of associates and replace them with AI, then?

Prof. Abbott: I'm definitely very against that, to be clear. (Tom looks visibly relieved). I’m even more against AI being used to replace law professors. Every other profession is in trouble, though.

​Please contact Tom, Alex or your usual Kilburn & Strode advisor if you have any questions.
