The European Union's Artificial Intelligence Act (EU AI Act) introduces a comprehensive framework for the regulation of AI technologies, with specific provisions for general-purpose AI models. These models are defined and regulated to ensure they are safe, transparent, and compliant with existing laws.

The Act defines the term as follows:

‘General-purpose AI model’ means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.

Basic Requirements and Obligations for All General-Purpose AI Models

All providers of general-purpose AI models must adhere to several obligations to ensure responsible development and use (Article 53):

1. Maintain technical documentation:

  • Keep updated technical documentation on the model’s training, testing, and evaluation that includes all information listed in Annex XI.
  • Must be able to provide this documentation to the AI Office or national authorities upon request.
2. Share information with AI system developers:

Provide up-to-date documentation to providers who integrate the model into their own AI systems and respect intellectual property rights (IPR) and trade secrets while doing so.

Documentation must:

  • Explain the capabilities and limitations of the model so that integrators can meet legal obligations.
  • Contain at least the details listed in Annex XII.
3. Comply with EU copyright law:

  • Establish a copyright compliance policy.
  • Ensure identification and respect of copyright reservations (as per Article 4(3) of Directive (EU) 2019/790).
  • Use state-of-the-art technologies for this compliance.
4. Publish a summary of training data:

  • Make a public summary describing the content used to train the model.
  • Follow a template provided by the AI Office.

Exemptions

The documentation obligations in points 1 and 2 above do not apply if both of the following conditions are met:

  • The model is released under a free and open-source license that allows access, use, modification, and redistribution.
  • The model’s parameters, architecture, and usage information are publicly available.

Exception: This exemption does not apply if the model poses systemic risks.

High Risk: If a high-risk AI system uses a general-purpose AI model, then the provider of that system must comply with all obligations applicable to high-risk AI systems.

When Is a General-Purpose AI Model Considered to Pose Systemic Risk?

A general-purpose AI model is considered to pose systemic risk when it has the potential to cause significant adverse effects on public safety, security, or fundamental rights due to its scale or capabilities.

This classification applies when any of the following conditions are met:

  • The model demonstrates high-impact capabilities*, as evaluated using appropriate technical tools, indicators, and benchmarks

or

  • The European Commission, acting on its own or in response to a qualified alert from the scientific panel, determines that the model has equivalent capabilities or impact, based on the criteria set out in Annex XIII.

*A model is presumed to have high-impact capabilities if the total computation used for its training exceeds 10²⁵ floating point operations (FLOPs).

Note: The European Commission may revise these thresholds, benchmarks, and indicators through delegated acts to reflect technological advancements, such as improvements in algorithms or hardware efficiency.
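The 10²⁵ FLOP presumption can be illustrated with a rough back-of-the-envelope check. The sketch below uses the common heuristic that training compute is approximately 6 × (parameter count) × (training tokens); this heuristic and the example model sizes are illustrative assumptions, not part of the AI Act, which does not prescribe any estimation method.

```python
# Illustrative sketch: checking a training run against the AI Act's
# 10^25 FLOP presumption threshold for high-impact capabilities.
# The 6 * N * D approximation is a common industry rule of thumb,
# not a method specified by the Act.

THRESHOLD_FLOPS = 1e25  # presumption threshold stated in the Act


def estimated_training_flops(num_params: float, num_tokens: float) -> float:
    """Estimate total training compute via the 6 * N * D approximation."""
    return 6 * num_params * num_tokens


def presumed_high_impact(num_params: float, num_tokens: float) -> bool:
    """True if the compute estimate exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(num_params, num_tokens) > THRESHOLD_FLOPS


# Hypothetical example: a 70-billion-parameter model trained on 15 trillion
# tokens lands at roughly 6.3e24 FLOPs, below the presumption threshold.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated compute: {flops:.2e} FLOPs")
print("Presumed high-impact:", presumed_high_impact(70e9, 15e12))
```

Note that falling below the threshold only removes the automatic presumption; the Commission can still designate a model as posing systemic risk under the Annex XIII criteria.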

Obligations for General-Purpose AI Models with Systemic Risk

In addition to the obligations listed in Article 53, providers of general-purpose AI models with systemic risk shall:

1. Model evaluation:

Perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks.

2. Mitigate systemic risks:

Assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, the placing on the market, or the use of general-purpose AI models with systemic risk.

3. Incident tracking and reporting:

Keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent authorities, relevant information about serious incidents and possible corrective measures to address them.

4. Cybersecurity:

Ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the physical infrastructure of the model.