Artificial Intelligence Act
Status: In force
  • In force since 1 August 2024

  • 2 August 2026 (two years after entry into force): general application of the AI Act's rules, including the obligations for high-risk systems defined in Annex III (list of high-risk use cases), with certain exceptions that apply earlier or later. The exceptions in detail:

    • 2 February 2025: the prohibitions on certain AI practices apply, and companies must comply with the AI literacy requirement;

    • 2 August 2025: obligations for new general purpose AI models become applicable;

    • 2 August 2027: obligations for high-risk systems defined in Annex I (list of Union harmonisation legislation) apply.
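Read as a compliance schedule, the staggered application dates above lend themselves to a simple date lookup. A minimal sketch, assuming hypothetical milestone names (the dates are those stated above; the helper itself is purely illustrative):

```python
from datetime import date

# Application dates of the AI Act's main obligation sets (per the timeline above).
MILESTONES = {
    "prohibitions_and_ai_literacy": date(2025, 2, 2),
    "gpai_model_obligations": date(2025, 8, 2),
    "general_application_incl_annex_iii_high_risk": date(2026, 8, 2),
    "annex_i_high_risk_obligations": date(2027, 8, 2),
}

def applicable_obligations(on: date) -> list[str]:
    """Return the obligation sets already applicable on a given date."""
    return sorted(name for name, start in MILESTONES.items() if on >= start)

# Example: in March 2025 only the prohibitions and AI literacy rules apply.
print(applicable_obligations(date(2025, 3, 1)))
# → ['prohibitions_and_ai_literacy']
```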

Level II legislation and guidance

Published/Expected:

  • Commission guidelines on the definition of ‘AI system’, published on 6 February 2025, and on prohibited practices, published 4 February 2025
  • Code of Practice for providers of General Purpose AI models (including models with systemic risks): first draft published 14 November 2024, second draft 16 December 2024, third draft 11 March 2025, final draft expected in May 2025

  • Commission template for training content summary of GPAI models: expected in July 2025

  • Commission guidance on serious incident reporting for providers of high-risk AI systems: expected on 2 August 2025

  • Harmonised standards covering the requirements for high-risk AI systems, to be published by the European standardisation organisations CEN and CENELEC: expected by end of 2025

  • Commission guidance on the classification of high-risk AI systems: expected in February 2026

No date yet:

  • Commission guidelines on:

    • obligations for high-risk AI systems and obligations along the AI value chain,

    • the transparency obligations for certain AI systems (AI systems directly interacting with natural persons, AI systems generating synthetic audio, image, video or text content, emotion recognition and biometric categorisation systems, deep fakes, and AI systems generating or manipulating public interest texts),

    • the provisions related to substantial modification, and

    • the interplay of the AI Act and the product safety legislation listed in Annex I of the AI Act.

  • Codes of Practice for providers and deployers of AI systems on the obligations regarding the detection and labelling of artificially generated or manipulated content.

  • Commission templates on post-market monitoring plan and fundamental rights impact assessment.

Summary

The AI Act introduces EU-wide minimum requirements for AI systems on a sliding scale of rules based on risk: the higher the perceived risk, the stricter the rules. AI systems posing an ‘unacceptable level of risk’ are strictly prohibited, and those considered ‘high-risk’ are permitted but subject to the most stringent obligations. The AI Act also regulates foundation models and generative AI systems under the label of ‘General Purpose AI’, with a specific set of obligations.

Scope

Applies in varying degrees to providers, users, end-product manufacturers, importers or distributors of AI systems, depending on the risk.

Key elements

  • Risk-based approach to AI systems: the higher the perceived risk, the stricter the rules. AI systems posing an ‘unacceptable level of risk’ to European fundamental rights, like social scoring by governments, are strictly prohibited. ‘High-risk’ systems, like automated recruitment software, are subject to the most stringent obligations, while limited-risk systems, like chatbots and deep fakes, are subject to transparency rules. Minimal-risk systems, like AI-enabled video games or spam filters, remain free to use.
  • Specific regulation of General Purpose AI (foundation models): a tiered approach with baseline obligations for all General Purpose AI (GPAI) systems and models, and additional obligations for GPAI models posing ‘systemic risks’.
  • Developers of high-risk AI systems must conduct a self-conformity assessment. High-risk AI systems and foundation models must be registered in an EU database.
  • Fines (for undertakings, whichever of the two amounts is higher):
    • up to €35m or 7% of global annual turnover for infringements of the prohibited practices or non-compliance with requirements on data;
    • up to €15m or 3% of global annual turnover for breaches of other requirements or obligations of the AI Act, including the rules on general-purpose AI models;
    • up to €7.5m or 1% of global annual turnover for supplying incorrect, incomplete or misleading information.
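Because each fine tier caps the penalty at a fixed amount or a share of global annual turnover, whichever is higher for undertakings, the applicable cap reduces to a simple maximum. A minimal sketch with illustrative tier names (not terms from the Act itself):

```python
# Fine caps under the AI Act: a fixed amount or a percentage of global
# annual turnover, whichever is higher (for undertakings).
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # €35m or 7%
    "other_obligations": (15_000_000, 0.03),      # €15m or 3%
    "incorrect_information": (7_500_000, 0.01),   # €7.5m or 1%
}

def max_fine(tier: str, annual_turnover_eur: float) -> float:
    """Upper bound of the fine for an undertaking in a given tier."""
    fixed_cap, turnover_share = FINE_TIERS[tier]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# A company with €1bn turnover: 7% (€70m) exceeds the €35m fixed cap.
print(max_fine("prohibited_practices", 1_000_000_000))  # → 70000000.0
```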

Challenges

  • Legal uncertainty from self-conformity assessment
  • High administrative burden from documentation obligations, including: 
    • Risk management system
    • Registration of stand-alone AI systems in EU database
    • Declaration of conformity needs to be signed
    • For generative AI: sufficiently detailed summary of the copyrighted material used as training data, and safeguards to ensure the legality of output
  • Overlap with GDPR / redundancies

The EU’s proposed AI Regulation

The AI Regulation has a wide reach:

Actors. The AI Regulation will apply to various participants across the AI value chain, covering both public and private actors inside and outside the EU, as long as the AI system is placed on the EU market or the output produced by the system (such as content, predictions, recommendations or decisions) is used in the EU. Strict requirements may apply, inter alia, to providers, users, end-product manufacturers, importers or distributors, depending on the risk associated with the AI system.

Broad-brush definition of AI. An AI system is defined as software developed with machine learning, logic- and knowledge-based, or statistical approaches which can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments it interacts with.

The remit of the AI Regulation goes beyond modern machine learning systems that learn to make decisions themselves, also capturing systems that operate according to hard coded rules, which have long been embedded in a wide variety of applications (from flight control to pacemakers to industrial settings). The Commission’s expansive approach means virtually all systems that currently do, or which may in future, use AI would fall within scope – from personalised pricing, advertising and feed algorithms, to connected IoT systems, self-driving cars, or applications used to support recruitment and other business processes.

Blogs
Blog
Mar 7 2025
EU AI Act unpacked #24: European Commission releases critical AI Act implementation guidelines (Part 2) - Prohibited AI Practices
On 4 February 2025, the European Commission published guidelines on prohibited practices under the EU’s AI Act (the Guidelines). These...
Blog
Mar 6 2025
EU AI Act unpacked #23: European Commission releases critical AI Act implementation guidelines (Part 1) - Definition of AI systems
The European Commission has taken a significant step forward in clarifying the EU AI Act by releasing new implementation guidelines in...
Blog
Feb 26 2025
EU AI Act unpacked #22: Key considerations for employers as deployers vs. providers under the EU AI Act
The EU AI Act introduces a new regulatory framework that distinguishes between, among others, deployers and providers. Employers will...
Blog
Feb 25 2025
The rise of audits as a regulatory tool for tech
As technology evolves, so do challenges in effectively regulating it. In an era where there is increasing focus on effective oversight of...
Blog
Feb 12 2025
Lights, Cameras, AI Action
Energised by our attendance at the recent Tortoise Responsible AI Forum, Paris AI Action Summit and City & Financial AI Regulation...
Blog
Feb 11 2025
The Responsible AI Forum 2025: Companies are facing growing regulatory and litigation risks over the use of AI, and not just from the new laws grabbing the headlines
Freshfields is a Knowledge Partner on the Responsible AI Forum 2025, hosted at Spencer House by Tortoise Media, in partnership with the...
Blog
Feb 5 2025
German Election #2: Digital Policies in the 2025 Election Campaign – How Germany’s Political Parties Want Germany to Catch Up on Digitalisation
On 23 February 2025, almost 60 million German voters will elect a new federal parliament in snap elections after the collapse of the...
Blog
Jan 31 2025
EU AI Act unpacked #21: The AI Act starts biting – AI literacy and prohibited practices rules now in effect
The first substantive parts of the EU’s AI Act are now binding on businesses in the EU – and beyond.  Although most of the new law’s...

Contacts
Theresa Ehlen, Partner (Düsseldorf, Frankfurt am Main)
Christoph Werkmeister, Partner (Düsseldorf)
Lutz Riede, Partner (Vienna, Düsseldorf)
Giles Pratt, Partner (London)
Rachael Annear, Partner (London)
Matthias Hofer, Principal Associate (Vienna)
Zofia Aszendorf, Senior Associate (London)
Tochukwu Egenti, Associate (London)
© 2025 Freshfields. Attorney Advertising: prior results do not guarantee a similar outcome