The AI Regulatory Framework

The European Union Artificial Intelligence Act (AI Act) introduces a pioneering, risk-based approach to regulating AI. The framework classifies AI systems into four risk categories, structured like a pyramid: the higher the risk, the stricter the regulation.
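
As an illustration of this pyramid, the sketch below models the four tiers as an ordered enumeration, with stricter obligations attached to higher tiers. The tier names follow the Act, but the one-line obligation summaries and the `obligations_for` helper are hypothetical simplifications for illustration only.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    # Ordered so that a higher value means stricter regulation.
    MINIMAL = 0       # no new obligations under the AI Act
    LIMITED = 1       # transparency and disclosure obligations
    HIGH = 2          # strict obligations and conformity assessment
    UNACCEPTABLE = 3  # prohibited outright

# Hypothetical one-line summaries; the Act itself is far more detailed
# and must be consulted for any real classification.
OBLIGATIONS = {
    RiskTier.MINIMAL: "no specific rules",
    RiskTier.LIMITED: "disclose AI use; make AI-generated content identifiable",
    RiskTier.HIGH: "strict obligations before and after market placement",
    RiskTier.UNACCEPTABLE: "prohibited",
}

def obligations_for(tier: RiskTier) -> str:
    """Return a one-line summary of the obligations for a risk tier."""
    return OBLIGATIONS[tier]

# Walk the pyramid from the top (strictest) down.
for tier in sorted(RiskTier, reverse=True):
    print(f"{tier.name}: {obligations_for(tier)}")
```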

Unacceptable Risk

All AI systems considered a clear threat to people's safety, health, and fundamental rights are banned outright. The AI Act prohibits eight practices, namely:

1. harmful AI-based manipulation and deception 

2. harmful AI-based exploitation of vulnerabilities 

3. social scoring 

4. individual criminal offence risk assessment or prediction

5. untargeted scraping of the internet or CCTV material to create or expand facial recognition databases 

6. emotion recognition in workplaces and educational institutions

7. biometric categorization to deduce certain protected characteristics 

8. real-time remote biometric identification for law enforcement purposes in publicly accessible spaces, with limited exceptions 

High Risk

High-risk AI systems pose a serious risk of causing harm if they malfunction or are misused. The EU AI Act therefore subjects them to strict obligations designed to safeguard health and safety, protect fundamental rights, and prevent adverse impacts on society and the environment.

The AI Act identifies two main categories of high-risk AI systems (a rough triage sketch follows the examples below):

1. AI systems that are part of, or themselves constitute, products already regulated under EU harmonization legislation and that require third-party conformity assessment. Examples include:

  • AI-enabled medical devices
  • AI features in toys or consumer products affecting safety
  • AI functions in machinery, radio equipment, or vehicles

2. AI systems deployed in sensitive or critical application areas. Examples include:

  • AI used in law enforcement, such as systems assisting in the assessment of criminal activity
  • AI used in migration, asylum, and border control
  • AI supporting decisions in education, employment, or worker management
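
As a rough illustration of this two-category test, the sketch below triages a hypothetical system description against simplified stand-ins for the Act's annex lists. Every name and list here is an illustrative placeholder, not the Act's actual legal scope. (Requires Python 3.10+ for the `str | None` annotations.)

```python
from dataclasses import dataclass

# Heavily simplified placeholders for the product and application areas
# actually enumerated in the AI Act's annexes.
REGULATED_PRODUCT_AREAS = {
    "medical device", "toy", "machinery", "radio equipment", "vehicle",
}
SENSITIVE_APPLICATION_AREAS = {
    "law enforcement", "migration", "education", "employment",
}

@dataclass
class AISystem:
    product_area: str | None        # e.g. "medical device", or None
    needs_third_party_check: bool   # third-party conformity assessment?
    application_area: str | None    # e.g. "employment", or None

def is_high_risk(system: AISystem) -> bool:
    """Rough triage against the two high-risk categories above."""
    # Category 1: part of a regulated product that requires
    # third-party conformity assessment.
    in_regulated_product = (
        system.product_area in REGULATED_PRODUCT_AREAS
        and system.needs_third_party_check
    )
    # Category 2: deployed in a sensitive or critical application area.
    in_sensitive_area = system.application_area in SENSITIVE_APPLICATION_AREAS
    return in_regulated_product or in_sensitive_area

print(is_high_risk(AISystem("medical device", True, None)))  # True
print(is_high_risk(AISystem(None, False, "employment")))     # True
print(is_high_risk(AISystem(None, False, "video game")))     # False
```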

Limited Risk

Limited risk refers to the risks arising from a lack of transparency in AI use. The AI Act introduces specific disclosure obligations to ensure that humans are informed when necessary, preserving trust. For instance, when interacting with AI systems such as chatbots, humans should be made aware that they are dealing with a machine so that they can make an informed decision.

Moreover, providers of generative AI must ensure that AI-generated content is identifiable. In addition, certain AI-generated content must be clearly and visibly labelled, namely deep fakes and text published with the purpose of informing the public on matters of public interest.
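
To make these disclosure obligations concrete, the sketch below shows one way a chatbot service might prepend a machine-interaction notice and tag its output as AI-generated. The function name, the wording, and the plain-text label are all hypothetical: the Act mandates the outcome (informed users, identifiable content), not any particular mechanism.

```python
AI_DISCLOSURE = "You are chatting with an AI system, not a human."

def wrap_chatbot_reply(generated_text: str, first_turn: bool) -> str:
    """Attach hypothetical transparency notices to a chatbot reply.

    The AI Act requires that users know they are interacting with a
    machine and that AI-generated content is identifiable; this exact
    mechanism is an illustrative assumption, not a legal requirement.
    """
    parts = []
    if first_turn:
        parts.append(AI_DISCLOSURE)          # inform the user up front
    parts.append(generated_text)
    parts.append("[AI-generated content]")   # visible content label
    return "\n".join(parts)

print(wrap_chatbot_reply("The museum opens at 9 a.m.", first_turn=True))
```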

Minimal Risk

The AI Act introduces no rules for AI deemed to pose minimal or no risk. The vast majority of AI systems currently used in the EU fall into this category, including applications such as AI-enabled video games and spam filters.