Artificial Intelligence Act

Status: In draft

  • Commission’s proposal published on 21 April 2021
  • Council adopted its position on 6 December 2022
  • Parliament adopted its position on 14 June 2023
  • Currently undergoing interinstitutional negotiations (‘trilogues’): the trilogue on 6 December 2023 is expected to be the last political trilogue, at which all open topics (in particular the regulation of foundation models/generative AI, the list of prohibited practices, governance and the definition of AI) are planned to be agreed
  • Final adoption expected in Q4 2023/Q1 2024
  • Application expected in Q1 2026

Summary

The AI Act introduces EU-wide minimum requirements for AI systems and proposes a sliding scale of rules based on risk: the higher the perceived risk, the stricter the rules. AI systems with an ‘unacceptable level of risk’ will be strictly prohibited, while those considered ‘high-risk’ will be permitted but subject to the most stringent obligations. It is envisaged that foundation models and generative AI systems will be included in the regulation, along with a specific set of obligations.

Scope

Applies in varying degrees to providers, users, end-product manufacturers, importers or distributors of AI systems, depending on the risk.

Key elements

  • Risk-based approach to AI systems: the higher the perceived risk, the stricter the rules. AI systems posing an ‘unacceptable level of risk’ to European fundamental rights, such as social scoring by governments, will be strictly prohibited. ‘High-risk’ systems, such as automated recruitment software, will be subject to the most stringent obligations, and limited-risk systems, such as chatbots and deep fakes, will be subject to transparency rules. Minimal-risk systems, such as AI-enabled video games or spam filters, may be used freely.
  • Specific regulation on foundation models and generative AI has been introduced by the Parliament.
  • Developers of high-risk AI systems must conduct a self-conformity assessment. High-risk AI systems and foundation models must be registered in an EU database.
  • Fines ranging from 2% to 7% of total worldwide annual turnover or from EUR 10m to EUR 40m, whichever is higher, are under discussion.

Challenges

  • Legal uncertainty from self-conformity assessment
  • High administrative burden from documentation obligations, including: 
    • Risk management system
    • Registration of stand-alone AI systems in EU database
    • Declaration of conformity needs to be signed
    • For generative AI: a sufficiently detailed summary of copyrighted material used as training data, and safeguards to ensure the legality of output
  • Overlap and redundancies with the GDPR
