For example, AI systems that allow “social scoring” by governments or companies are considered a clear threat to people's fundamental rights and are therefore banned.
High Risk
High-risk AI systems, such as AI-based medical software or AI systems used for recruitment, must comply with strict requirements, including risk-mitigation systems, high-quality data sets, clear information for users, and human oversight.
Limited Risk
Systems such as chatbots must clearly inform users that they are interacting with a machine, and certain AI-generated content must be labelled as such.
Minimal Risk
Most AI systems, such as spam filters and AI-enabled video games, face no obligations under the AI Act, although companies can voluntarily adopt additional codes of conduct.
The AI Act, which aims to promote innovation while ensuring a high level of protection for health, safety, and fundamental rights, classifies AI systems into different risk categories, including prohibited systems, high-risk systems, and systems subject to transparency obligations. These guidelines provide an overview of the AI practices deemed unacceptable because of their potential risks to European values and fundamental rights.