For example, AI systems that allow “social scoring” by governments or companies are considered a clear threat to people's fundamental rights and are therefore banned.
High Risk
High-risk AI systems, such as AI-based medical software or AI systems used for recruitment, must comply with strict requirements, including risk-mitigation systems, high-quality data sets, clear user information, and human oversight.
Limited Risk
Systems such as chatbots must clearly inform users that they are interacting with a machine, and certain AI-generated content must be labelled as such.
Minimal Risk
Most AI systems, such as spam filters and AI-enabled video games, face no obligations under the AI Act, but companies can voluntarily adopt additional codes of conduct.