
Data trends 2024

Chapter 1: What to consider when adding data to the AI revolution

By Richard Bird, Julian Boatin, Brock Dahl, Theresa Ehlen, Beth George, Adam Gillert, Giles Pratt, Katie Sa, Max Smith, Satya Staes Polet and Christoph Werkmeister

IN BRIEF

Artificial intelligence (AI) is of growing importance to businesses, which are widely expected to explore the opportunities presented by Generative AI (GenAI) over the next few years. GenAI is capable of processing and analysing large amounts of data and generating new output based on it. Many companies have access to troves of data from which they may wish to extract additional value or efficiencies by using AI. In this article we highlight why businesses developing or implementing AI should:

  • give ample consideration to ensuring that any personal data is used and protected in accordance with applicable (and potentially conflicting) global privacy laws;
  • pay particular attention to emerging AI-specific regulation in various jurisdictions—which will often overlap with those privacy laws; and
  • develop guidelines and strong governance processes for dealing with AI.

GenAI and privacy

GenAI models are trained on large volumes of data, which may include personal data, and will also often rely on the processing of personal data as part of their operation.

In Europe, the EU General Data Protection Regulation (GDPR) and the UK GDPR apply to the use of GenAI to the extent that it involves the processing of personal data. For example:

  • The collection and use of personal data for training purposes is subject to heightened privacy requirements.
  • Organisations using personal data to train AI systems must ensure that the training data is accurate, in line with the EU and UK GDPR’s requirements.
  • Decision-making based solely on automated processing is prohibited in many cases under UK and EU privacy laws (with limited exceptions). Data subjects must also be given certain additional information about many types of automated decision-making, including meaningful information about the logic involved. As explained in this blog post, the UK government has proposed reforms that would liberalise the UK’s regime on automated decision-making, which may open up greater opportunities to use AI in the UK in the coming years. Nonetheless, automated decision-making that results in significant decisions for individuals will remain closely regulated.
  • Various trade-offs may arise in the development of AI, and it is important to strike the right balance between aspects such as accuracy, privacy and the responsibility to explain the AI and its output in ways that make sense to people (often called ‘explainability’).

An increasing number of countries have privacy laws that are similar to the EU’s GDPR or that impose other challenging requirements. In the US, companies must be conscious of the state data privacy laws that indirectly constrain how AI can be used. Such laws typically contain a range of requirements, including purpose limitations, data minimisation rules, disclosure limitations, notice and consent obligations, and specific provisions on automated decision-making. Companies must pay particular attention to these requirements when executing their own AI strategies and designing AI systems.

AI-specific regulation

Many jurisdictions are also in the process of developing laws that specifically target AI. Those laws often overlap with the requirements of privacy laws, as well as other legislation (such as that governing copyright, product liability or equality).

The rapid evolution of AI capabilities and applications, and the ever-expanding regulatory frameworks governing them, suggest the need to build adaptable compliance frameworks that can manage cross-border complexity.

Brock Dahl
Partner

The EU is seen as a leader in this regard and will set out various requirements for the use of AI in the AI Act and the AI Liability Directive. Once they enter into effect, those AI regulations may apply not only to providers but also to users of AI within the EU. The many obligations for providers include:

  • governance (eg, developing a risk management system);
  • transparency (eg, vis-à-vis users);
  • accountability (eg, generating technical documentation explaining the AI model);
  • fairness (eg, implementing safeguards for AI); and
  • self-certifying compliance.

Non-compliance may result in a fine of up to €40m or 7% of total worldwide annual turnover, whichever is higher. A final text of the AI Act is expected at the end of 2023 at the earliest, and its entry into force will likely be followed by an implementation period of around 24 months.

Given the extensive time and investment required to build an AI system, it is vital that AI providers and other affected businesses begin to consider the implications of the EU’s pending AI laws. Businesses should keep an eye on possible changes to the draft laws as they complete their legislative journeys, all the more so given that the EU, together with tech companies, is currently working on a so-called ‘AI Pact’ to bridge AI governance until the AI Act becomes effective.

Several other jurisdictions, including Canada, Brazil and China, have either introduced or are planning to introduce AI-specific laws.

Other countries are taking a less direct approach to AI regulation, but businesses will still need to keep abreast of emerging regulator-led initiatives, and potentially a more complex patchwork of applicable laws.

 

Unlike the EU, the UK is not planning to introduce any new AI-specific regulations or laws. Instead, the government has proposed a ‘pro-innovation’ framework based on five overarching principles to guide the development and use of AI: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. It is envisaged that existing regulators in the UK would be responsible for applying these five principles in practice across sectors. The idea is that the framework should be sufficiently flexible to keep pace with the fast-moving technology involved. The five overarching principles underpinning the UK AI White Paper are broadly aligned with the principles outlined in the UK and EU GDPRs.

The UK government is taking an agile and iterative approach to regulating the use and development of AI, so we advise clients to keep a watching brief on how this develops. Guidance published by UK regulators will be a key resource in the first instance for understanding how they intend to apply the five principles in practice.

Maxwell Smith
Associate

Like the UK, the US government (at the federal level) has taken a variety of steps to signal its interest in AI issues, but neither it nor the US Congress has yet pursued legislative requirements. For now, AI applications are typically governed indirectly through the proliferating state data privacy laws.

At the US federal level, the White House has issued an Executive Order that, if fully implemented, will establish a range of regulatory requirements pertaining to AI. These will include:

  • requiring the National Institute of Standards and Technology to set new market standards for AI safety and security;
  • requiring reporting to the government regarding dual-use foundation models;
  • starting the regulatory process for requiring reporting to the government regarding certain infrastructure-as-a-service transactions;
  • incorporating the AI risk management framework into critical infrastructure guidelines (and potentially making those formal regulatory requirements); and
  • establishing new content labelling and identification standards for the federal government.

Looking ahead

We look at legal risks along the cycle of an AI use case: input, operation of the model and output. That allows us to address the risks when and where they come up and find appropriate mitigation measures.

Theresa Ehlen
Partner

As explained above, many privacy principles and requirements will be pertinent when considering the development or deployment of AI where personal data is used. Privacy and AI-specific laws are just one piece of a legal jigsaw of issues that those developing or using AI should consider. Other matters may include:

  • Intellectual property (IP) rights in the inputs or outputs of the AI. For example, IP issues have arisen where copyright materials have been used to train an AI model.
  • The risk that the AI may cause some damage or harm to third parties, and related liability issues.
  • The risk that AI systems lacking appropriate safety mechanisms during training and deployment may behave in ways that create or heighten bias and toxicity issues. For example, AI models can learn existing biases from training data, potentially producing discriminatory or unfair outcomes.

The opportunities of using AI in the workplace are as fascinating as the challenges it may trigger, given the variety of legal areas that it involves.

Satya Staes Polet
Counsel

Further background on those broader matters is available in our blog post: Generative AI: Five things for lawyers to consider.

A business will often face difficult decisions about how to proceed with AI. Accordingly, it is important for companies using AI to implement strong governance arrangements, ensuring a robust process is in place for documenting key decisions and achieving appropriate outcomes wherever AI is developed, implemented or used.

In relation to privacy, companies should consider the implications of using AI as part of their existing data privacy and information security assessments. This may include addressing explainability, considering any novel security risks, and ensuring meaningful human review of decisions.
