The EU AI Act is here: what is it, and how might it affect my IP?

 


The EU AI Act has entered a new phase in its implementation: as of 2 February 2025, AI systems that pose “unacceptable risks” are now banned in the EU.
 
While this milestone has been reached, the enforcement mechanisms and specific product bans are still being developed. The AI Act mandates the establishment of entities such as the AI Office and national competent authorities to oversee implementation and enforcement. As these bodies take shape and the regulatory framework becomes fully operational, more detailed guidance on compliance and potential AI product restrictions is expected.
 
In this article, we set out a reminder of some key things you should know about the AI Act: which AI systems are prohibited, the disclosure and transparency requirements that certain AI systems must meet, and how the Act might affect those both inside and outside the EU.

 

Recap: What is the EU AI Act?

The EU AI Act is a comprehensive legal framework which aims to ensure the safety and trustworthiness of AI systems and their uses. After its publication in the Official Journal of the European Union, the EU AI Act officially entered into force on 1 August 2024. The Act defines risk categories and sets out different rules for different risk levels, primarily setting its sights on AI systems that are considered to be “high risk” or of “unacceptable risk”.
 

What Is an Unacceptable Risk AI System?

Unacceptable-risk AI systems, which are prohibited as of 2 February 2025, are those that negatively affect safety or fundamental rights without providing an offsetting socio-economic benefit. Examples of such systems include:
 

  • AI systems designed for social scoring by governments or private entities;

  • Biometric categorization systems that infer sensitive characteristics such as political views or religious beliefs;

  • AI-based manipulative techniques that exploit vulnerabilities of individuals;

  • Real-time remote biometric identification in public spaces, except in limited law enforcement cases;

  • Predictive policing systems based solely on profiling.

 
Violation of these prohibitions will come at a heavy cost: non-compliers may be subject to fines of up to €35M or 7% of total worldwide annual turnover, whichever is higher.

 
What Is a High Risk AI System?

Under Article 6 of the Act, an AI system that is intended to be used as a safety component of a product, or is itself a product that is covered by certain pieces of existing EU legislation, is classed as high risk. Annex III explicitly lists other types of AI systems that are considered high risk, such as those that are intended to be used:

  • as safety components in the management and operation of critical digital or physical infrastructure;

  • in education, such as for determining admission, evaluating learning outcomes and detecting prohibited behaviour during tests; and

  • for the recruitment or selection of employees.

Notably, medical AI tools, such as diagnostic algorithms, AI-assisted treatment planning, and robotic surgery systems, fall under the Act’s “high-risk” category.
 

What requirements are prescribed for “high risk” AI systems?

High-risk AI systems are subject to a number of requirements surrounding safety, transparency and disclosure.
 
A risk management system, for example, should be put in place to identify and analyse known and reasonably foreseeable risks that the AI system can pose to health, safety or fundamental rights. Implementing such a risk management system is not a one-off task: the Act requires that it be run throughout the entire lifecycle of the AI system, with regular review and updating.
 
There’s also a requirement that before a high-risk AI system is placed on the market or put into service, technical documentation must be drawn up that demonstrates that the system complies with the requirements of the Act. It must also include information on the intended purpose of the AI system, and technical details such as the general logic of the AI system and its algorithms, and how the system interacts with hardware and software. The full list of requirements for technical documentation is set out in Annex IV of the Act.  
 
An interesting requirement given the recent explosion in popularity of generative AI tools and chatbots such as ChatGPT and Claude is the transparency requirement of Article 50. In particular, providers of AI systems which are intended to interact directly with users will have to ensure that they are designed and developed in such a way that the user is informed that they are interacting with an AI system. Notably, this requirement applies to providers outside of the EU whose AI systems produce output intended to be used in the EU.
 
Whilst not as hefty as the penalties for violating the prohibition on unacceptable-risk systems, fines for providers and deployers of non-compliant high-risk AI systems may reach €15M or 3% of total worldwide annual turnover, whichever is higher.
 

Who does the Act affect?

The requirements of the EU AI Act apply to providers and deployers of AI systems: a ‘provider’ is an entity that develops an AI system or has one developed, and a ‘deployer’ is an entity that puts an AI system to use.
 
As hinted above, the EU AI Act is extraterritorial in that it applies not only to providers and deployers inside the EU, but also to those outside the EU who produce or use AI systems the output of which is intended for use in the EU. Innovators outside of the EU that make extensive use of AI – increasingly in sectors such as MedTech – should therefore take heed of the EU AI Act’s extensive scope and its requirements!

 

Will the Act affect my patent strategy?

The EU AI Act has not yet been fully implemented: not all of its provisions will be in force until 2027, and it is still not completely clear how the Act might impact patent law and patent filing strategies.
 
For us as patent attorneys, some interesting questions remain:

  1. Is there a benefit to patenting AI systems where a product or service on the EU market would be classed as an “unacceptable risk”? Many such systems would face no obstacle to patentability before the EPO, but the commercial benefit of a granted European patent is likely to be significantly reduced.
     

  2. Could the novelty of an invention be jeopardised by the disclosure requirements for high-risk AI systems?
     

  3. Could details disclosed about an AI system in a patent application be taken into account when assessing its risk level?

 
Our team at Kilburn & Strode will be keeping a close eye on any developments in these directions. 
 
For any advice about patent filing strategies in the field of artificial intelligence and its applications, please contact Jake Horsfield, Tom Hamer, or your usual advisor at Kilburn & Strode LLP.

Let us keep you up to date. If you’d like to receive communications from us, ranging from breaking news to technical updates, thought leadership to event invitations, please let us know.

Connect with us