Classification Rules for High-Risk AI Systems

Article 6 outlines the criteria for classifying AI systems as high-risk. These systems are subject to stricter regulations because of their potential impact on safety and fundamental rights.

Classification Criteria

1. Reference to Annex I: Irrespective of whether an AI system is placed on the market or put into service independently of the products referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:

Condition a)

The AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonization legislation listed in Annex I.

Condition b)

The product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into service of that product pursuant to the Union harmonization legislation listed in Annex I.

2. Reference to Annex III: AI systems listed in Annex III are also classified as high-risk, subject to the exemptions below. Annex III covers systems used in areas such as:

1. Biometrics: remote biometric identification systems, biometric categorization, and emotion recognition
2. Critical infrastructure
3. Education and vocational training
4. Employment, workers' management, and access to self-employment
5. Access to and enjoyment of essential private services and essential public services and benefits
6. Law enforcement, in so far as their use is permitted under relevant Union or national law
7. Migration, asylum and border control management
8. Administration of justice and democratic processes

Exemptions

An AI system listed in Annex III is not considered high-risk if it does not pose a significant risk of harm to:

  • Health
  • Safety
  • Fundamental rights

Caution: AI systems that perform profiling of natural persons are always considered high-risk, regardless of the above exemptions.
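As a reading aid, the classification logic above can be sketched as a small decision function. This is an illustrative simplification, not a legal test; every flag and area name below is an assumption introduced for the example.

```python
# Illustrative sketch of the Article 6 classification logic summarized above.
# All field and area names are assumptions for illustration, not legal terms.
from dataclasses import dataclass
from typing import Optional

ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_asylum_border",
    "justice_and_democracy",
}

@dataclass
class AISystem:
    safety_component_or_product: bool   # condition (a): covered by Annex I legislation
    third_party_assessment: bool        # condition (b): third-party conformity assessment required
    annex_iii_area: Optional[str]       # Annex III use-case area, if any
    profiles_natural_persons: bool      # performs profiling of natural persons
    significant_risk: bool              # poses significant risk to health, safety, or rights

def is_high_risk(system: AISystem) -> bool:
    # Route 1 (Annex I): both conditions (a) and (b) must be fulfilled.
    if system.safety_component_or_product and system.third_party_assessment:
        return True
    # Route 2 (Annex III): high-risk unless the significant-risk exemption
    # applies; profiling of natural persons overrides the exemption.
    if system.annex_iii_area in ANNEX_III_AREAS:
        return system.profiles_natural_persons or system.significant_risk
    return False
```

For example, an Annex III system that performs no profiling and poses no significant risk would fall under the exemption, while the same system used for profiling would remain high-risk.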

Requirements for High-Risk AI Systems (Articles 8-15)

Article 8: Compliance with the Requirements

Providers must ensure that high-risk AI systems comply with the requirements set out in this section. Required testing, documentation, and procedures may be integrated with those under the relevant Union harmonization legislation (Annex I) to streamline compliance and avoid duplication.

Article 9: Risk Management System

Purpose: Identify and mitigate risks associated with high-risk AI systems.

Key Points:

  • Implement a comprehensive risk management system.
  • Continuously identify, analyze, and mitigate risks throughout the AI system's lifecycle.

Article 10: Data and Data Governance

Purpose: Ensure data quality and governance.

Key Points:

  • Data must be relevant, representative, and of high quality.
  • Implement robust governance measures for data used in training, validation, and testing.

Article 11: Technical Documentation

Purpose: Provide clear documentation for compliance assessment.

Key Points:

  • Maintain up-to-date technical documentation.
  • Documentation must be comprehensive enough for assessment by the competent authorities.

Article 12: Record Keeping

Purpose: Ensure traceability and accountability.

Key Points:

  • Keep logs automatically generated by the AI system.
  • Logs should support traceability and accountability.
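A minimal sketch of what automatically generated logs might look like in practice. The event schema below is an assumption for illustration; Article 12 as summarized here does not prescribe one.

```python
# Minimal sketch of automatic event logging to support traceability
# (Article 12). The event schema is an assumption for illustration.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system.audit")

def log_use_event(system_version: str, input_ref: str, output_ref: str) -> dict:
    """Automatically record one log entry each time the system is used."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # period of use
        "system_version": system_version,  # which version produced the output
        "input_ref": input_ref,            # reference to the input data
        "output_ref": output_ref,          # reference to the result
    }
    logger.info(json.dumps(event))         # append to the audit log
    return event
```
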

Article 13: Transparency and Provision of Information to Deployers

Purpose: Ensure user understanding and safe operation.

Key Points:

  • Provide clear instructions for use, including system capabilities and limitations.
  • Ensure users understand how to operate the AI system safely.

Article 14: Human Oversight

Purpose: Enable human control over AI systems.

Key Points:

  • Implement measures for human oversight.
  • Allow for intervention and control over the AI system.
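One way such oversight measures can be realized is a gate that requires a human decision before an output takes effect. The sketch below is an illustration under assumed names, not a mechanism prescribed by the Act.

```python
# Illustrative human-oversight gate (Article 14). A human reviewer can
# approve, override, or stop the AI system's output before it is acted on.
# All names and the Decision type are assumptions for illustration.
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"    # accept the AI output as-is
    OVERRIDE = "override"  # substitute a human-chosen output
    STOP = "stop"          # halt: do not act on the output

def apply_with_oversight(ai_output, decision, human_output=None):
    """Return the output that may actually be acted upon, if any."""
    if decision is Decision.APPROVE:
        return ai_output
    if decision is Decision.OVERRIDE:
        return human_output
    return None  # Decision.STOP: no output is used
```
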

Article 15: Accuracy, Robustness, and Cybersecurity

Purpose: Ensure system reliability and security.

Key Points:

  • AI systems must achieve an appropriate level of accuracy, robustness, and cybersecurity, and be resilient against attempts to manipulate or interfere with them.