For example, AI systems that allow “social scoring” by governments or companies are considered a clear threat to people's fundamental rights and are therefore banned.
High Risk
High-risk AI systems, such as AI-based medical software or AI systems used for recruitment, must comply with strict requirements, including risk-mitigation systems, high-quality data sets, clear user information, and human oversight.
Limited Risk
Systems like chatbots must clearly inform users that they are interacting with a machine, while certain AI-generated content must be labelled as such.
Minimal Risk
Most AI systems, such as spam filters and AI-enabled video games, face no obligations under the AI Act, but companies can voluntarily adopt additional codes of conduct.
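The four tiers above can be sketched as a simple lookup. This is an illustration only: the `RiskTier` enum, the example table, and the `classify()` helper are invented for this sketch and are not part of the Act's legal text or any official tool.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical labels for the AI Act's four risk tiers."""
    UNACCEPTABLE = "prohibited"           # e.g. social scoring -> banned
    HIGH = "strict requirements"          # e.g. medical software, recruitment
    LIMITED = "transparency obligations"  # e.g. chatbots, AI-generated content
    MINIMAL = "no obligations"            # e.g. spam filters, video games

# Hypothetical lookup table built from the examples mentioned in the text.
EXAMPLES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "medical software": RiskTier.HIGH,
    "recruitment": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(system: str) -> RiskTier:
    # Unknown systems default to the minimal-risk tier in this sketch;
    # real classification requires legal analysis of the Act's annexes.
    return EXAMPLES.get(system, RiskTier.MINIMAL)

print(classify("chatbot").value)  # transparency obligations
```

The point of the sketch is the structure, not the mapping itself: which tier a real system falls into is a legal determination, not a dictionary lookup.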
The guidelines on the AI system definition explain how the legal concept, as anchored in the AI Act, applies in practice.
By issuing these guidelines, the Commission aims to assist providers and other relevant persons in determining whether a software system constitutes an AI system, thereby facilitating the effective application of the rules.