

AI technology: a lawyer's guide

By Giles Pratt and Sam Hancock

As AI becomes more prevalent in our lives, the law will need to adapt. However, lawmakers have been struggling to draft a definition that covers exactly what AI is. This may be partly because AI itself is an imprecise concept, and partly because of the complex technical nature of the field.

As a result, AI is often used as an umbrella term for a variety of underlying computing technologies. In this post we examine the technologies that are usually grouped under ‘AI’, and look at how regulators have so far tried to capture ‘AI’ in words.

What is AI technology?

The problem of producing a legal definition of AI is perhaps unsurprising, given that even AI experts have differing views on what the technology is. In fact, the term ‘AI’ is often used to describe a basket of different computing methods, which are used in combination to produce a result but which aren’t necessarily AI by themselves. Five methods that are integral to current AI systems are explained in the sections below.

How can AI be legally defined?

The need to regulate AI is clear. Citizens need to know who will be liable if a driverless car knocks them down; and businesses need to know who owns the IP in products designed by their in-house robots. But to regulate AI we must first define it. Even trickier: that definition must be future-proofed, so as to cover any changes in AI technology. The attempts so far have been mixed.

In the UK, the House of Lords’ Select Committee on AI recently released a report that used this definition:

‘Technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation.’

This is a problematic definition because it tries to define AI by reference to human intelligence, which is itself notoriously hard to define. Also, this definition omits a key feature of many of AI’s most useful advances: applying the huge processing power of computers to achieve tasks that humans can’t.

Meanwhile, the EU Commission has suggested this definition of AI:

‘[S]ystems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals.’

And in the US, the Future of AI Act – which sets up a federal advisory committee on AI – defines AI as:

‘Any artificial system that performs tasks under varying and unpredictable circumstances, without significant human oversight, or that can learn from their experience and improve their performance… In general, the more human-like the system within the context of its tasks, the more it can be said to use artificial intelligence.’

The EU and US definitions have the same problem of defining AI by reference to human intelligence. The EU Commission’s wording introduces the concept of ‘autonomy’, which might be a useful approach for future legislation.

For now, we’re still some way off an agreed legal definition, and the better approach is probably to look at the context in which the law might intervene. For example, if we ask how AI should be regulated, our terminology will need to take into account the impact of the AI and the respective responsibilities of those who introduced it into the world. In particular, we can expect regulators to look beyond an AI system’s autonomy to the responsibilities of its creators. On balance, it at least feels like the EU has the right mindset, though these legislative debates would probably have made Alan Turing smile – as he put it: ‘We can only see a short distance ahead, but we can see plenty there that needs to be done.’

Machine learning

The basic premise of machine learning is a program that teaches itself. The algorithm begins by being unable to produce the desired output but, after enough ‘training’, learns to produce it. Training consists of supplying the algorithm with large datasets and using a mechanism that feeds back whether the algorithm has processed each data point correctly or incorrectly.

This training can be done manually in a process called supervised learning. This generally requires the training dataset to be manually ‘labelled’ by humans. For example, humans might manually label all the images within a database that contain a road sign. The machine learns the common features of those images and can then recognise when there is a road sign in a new image. The problem with this process is that it relies on the quality of the human trainers, because it’s their classifications that the computer tries to replicate. For more complex tasks, like classifying an obligation in a contract, this might be problematic.
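
To make this concrete, here is a minimal sketch of supervised learning, assuming scikit-learn is available. The feature values and labels are invented for illustration: they stand in for measurements extracted from images plus the human annotations.

```python
# A minimal supervised-learning sketch (assumes scikit-learn is installed).
# The 'features' stand in for measurements extracted from images;
# the labels are the classifications supplied by human trainers.
from sklearn.linear_model import LogisticRegression

features = [[0.9, 0.1], [0.8, 0.2], [0.2, 0.9], [0.1, 0.8]]  # invented example data
labels = [1, 1, 0, 0]  # 1 = "contains a road sign", 0 = "no road sign" (human-labelled)

model = LogisticRegression()
model.fit(features, labels)           # 'training': learn the pattern behind the human labels

print(model.predict([[0.85, 0.15]]))  # classify a new, unlabelled example -> [1]
```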

This training can also be done automatically, by either unsupervised learning or reinforcement learning.

In unsupervised learning, there is no human classification of the training dataset. Instead, given a large enough dataset, an algorithm can recognise ‘clusters’ of items that share certain features. If the algorithm is then given a new item, it can say which cluster that item most likely belongs to, based on the features it shares with the items already in that cluster.
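
As a minimal sketch of unsupervised learning (again assuming scikit-learn; the data points are invented), a clustering algorithm such as k-means is given no labels and finds the groupings itself:

```python
# Unsupervised-learning sketch: k-means groups unlabelled points into clusters.
from sklearn.cluster import KMeans

points = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9],   # one natural cluster...
          [5.0, 5.2], [5.1, 4.9], [4.9, 5.0]]   # ...and another, but no labels are given

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.labels_)                 # which cluster each training point fell into
print(kmeans.predict([[5.05, 5.1]]))  # a new item is assigned to the nearest cluster
```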

Reinforcement learning is where the algorithm improves how it processes data by trying new actions alongside actions that already perform well, measuring each attempt against a set target. Over many iterations it keeps whatever performs best, gradually converging on the best process for the target it has been given. Reinforcement learning has been the key to many recent advances in AI, in particular in complex decision-making, like Google DeepMind’s Go-playing AI.
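
The toy sketch below is a deliberately simplified ‘multi-armed bandit’ illustration of that loop, not DeepMind’s method; the reward probabilities are invented. The algorithm keeps trying actions, measures the reward, and gradually favours the actions that score best while still exploring alternatives.

```python
# Reinforcement-learning sketch: epsilon-greedy action selection.
# The agent balances exploiting the best-known action with exploring new ones.
import random

reward_probability = [0.2, 0.5, 0.8]   # hidden quality of 3 actions (invented numbers)
estimates = [0.0, 0.0, 0.0]            # the agent's running estimate of each action's value
counts = [0, 0, 0]
epsilon = 0.1                          # fraction of the time spent exploring

for step in range(1000):
    if random.random() < epsilon:
        action = random.randrange(3)                 # explore: try something new
    else:
        action = estimates.index(max(estimates))     # exploit: best action so far
    reward = 1 if random.random() < reward_probability[action] else 0
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]  # update estimate

print(estimates)  # after many iterations, the best action has the highest estimate
```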

Deep learning

The defining characteristic of deep learning is that processing happens in layers: each layer takes an input and produces an output, and that output is used as the input for the next layer of processing. It is ‘deep’ because of its many layers, each layer being a separate algorithmic function. On the one hand, this means that there can be ‘deep learning’ systems that don’t appear to be AI at all. On the other hand, this technique has allowed for advances in computing that produce ‘intelligent’ behaviours: behaviours like image classification and text recognition that previously were performed only by humans.

An advantage of deep learning is that it can structure and weight values appropriately. Each successive layer aggregates the outputs of the layer before it, and the weightings applied at each layer can be adjusted so that the desired result is reached. This is useful in conjunction with machine learning, as the system can use large datasets to adjust its own weights in a way that is beyond the ability of human operators. Deep learning also mimics human intelligence in that it replicates the way we make decisions: prioritising the most important factors over lesser ones.

As an example, image classification works by the first layer’s algorithm classifying the individual pixels in an image based on their colour. This itself is not particularly meaningful information; it does not allow a computer to determine whether there’s a particular object in that image. If the same object appeared in two pictures, but was placed slightly differently in one in comparison to the other, then the pixel values at each location in each image would be different and so the first layer would not by itself be able to classify the image.

Deep learning applies a second layer, so that a second algorithm recognises the relationships of certain pixels to each other. Together these two layers will be able to recognise certain features of an object – for example the ear of a cat.

Other layers are then used to recognise other features, with further layers used to recognise when these features are correctly positioned in relation to each other (i.e. for a cat you need two ears on the top of its head, rather than two ears anywhere). The final layer aggregates the outputs of all previous layers, to decide whether the combination of those outputs means the object is in the image.

Machine learning is usually used in conjunction with this deep learning process in order to train each layer to recognise when the desired feature is present.
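
As a rough sketch of that layered structure (assuming TensorFlow/Keras is installed; the image size and layer widths are invented for the example), each layer’s output becomes the next layer’s input, and training adjusts the weights in every layer:

```python
# Deep-learning sketch: a small layered image classifier (assumes TensorFlow).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),                 # a 64x64 colour image
    tf.keras.layers.Conv2D(16, 3, activation="relu"),  # early layer: local pixel patterns (edges)
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),  # later layer: combinations of patterns (e.g. an ear)
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # final layer aggregates: cat or not cat
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(images, labels, epochs=5)  # machine learning adjusts the weights in every layer
```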

Artificial neural networks

Artificial neural networks are computer systems that try to emulate certain characteristics of biological neural networks – in other words, a brain. In artificial neural networks, a piece of code will represent a ‘node’, which is fed a certain input and gives a certain output. Generally, decisions made by these nodes will be fairly simple, such as a binary outcome based on whether the input met a certain threshold value. These computer nodes are connected to multiple other nodes, so the output of one will be the input of others. It is this feature that makes the network mimic a brain; the nodes are equivalent to neurons, and the connections mimic the dendrites and axons in a human brain that connect neurons to other neurons.
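
A single node of the kind described here can be sketched in a few lines of toy code (a hand-rolled illustration, not a real neural-network library): it weights its inputs, sums them, and ‘fires’ only if the total crosses a threshold, and its output can then feed further nodes.

```python
# A toy artificial neuron: weighted inputs, a threshold, and a binary output.
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))  # combine the incoming signals
    return 1 if total >= threshold else 0                # 'fire' only above the threshold

layer_one = [neuron([0.6, 0.9], [1.0, 1.0], 1.2),   # each node reads the same raw inputs...
             neuron([0.6, 0.9], [2.0, -1.0], 0.0)]
output = neuron(layer_one, [0.7, 0.7], 1.0)          # ...and its outputs become inputs to the next node
print(layer_one, output)
```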

There are two other unique features of artificial neural networks. First, each artificial neuron can both store and process information at the same time. In contrast, a traditional computer’s central processing unit (CPU) can only process data. It therefore has to send the results of that processing to the computer’s data storage if the result is to be kept, so storage and processing are separate.

Second, because storage and processing are done at the same time in each artificial neuron, all the neurons can be processing information simultaneously. This increases the processing power immensely. In contrast, in a traditional computer system each piece of information that is input into the CPU has to be processed sequentially.

Because of these features, artificial neural networks are generally run on graphics processing units (GPUs), which have the physical architecture to run multiple processes simultaneously. Quantum computing may also be well suited to running artificial neural networks because of its particular ability to process large amounts of data in parallel. Further development is needed, though, before practical use cases become available.  

Search algorithms

Search algorithms retrieve information from a defined search space that meets set criteria. In AI they are used to find the best path from a start state to a defined end state, moving only via certain allowed intermediate states.

This is well illustrated by the use of this AI technique in a board game: the start state is the initial configuration of pieces on the board, and the end state is a configuration of pieces that is considered victorious under the rules of the game. A search algorithm will map out all possible board states, with each transition between states corresponding to a legal move, and then find the quickest path from the start state to the victorious end state.
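
A toy sketch of that idea, before any shortcuts are applied (the ‘board states’ here are just labelled nodes in a made-up game graph): a breadth-first search explores states one move at a time until it reaches the winning state, returning the quickest path.

```python
# Search-algorithm sketch: breadth-first search for the shortest path from a
# start state to a goal state, moving only along allowed transitions.
from collections import deque

moves = {                       # invented game: each state lists the states one legal move away
    "start": ["a", "b"],
    "a": ["c"],
    "b": ["c", "win"],
    "c": ["win"],
    "win": [],
}

def shortest_path(start, goal):
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in moves[path[-1]]:
            if nxt not in seen:            # never revisit a state already explored
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path("start", "win"))  # ['start', 'b', 'win'] -> the quickest route to victory
```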

This method is simple in principle, but can soon run into problems: chess has a potential number of board states that is estimated to be around 10¹²⁰, whereas the number of particles in the observable universe is a comparatively tiny 10⁸⁰. So it’s well beyond the processing power of computers to evaluate every single board state to find the best path to victory. As a result, search algorithms are used in conjunction with the other AI techniques. This combination of techniques allows the program to take effective shortcuts, like using probability or machine learning to approximate the best move at any time, considering only a limited number of board states ahead.

Games are a good illustration of search algorithms, but this AI technique is also used in more significant real-world cases. A simple example is searching a database – perhaps not something we usually consider to be AI. Search algorithms also underpin more complex programs, like Google Maps.  

Natural language processing

Natural language processing broadly covers the ability of computers to understand human speech and interact in response. It’s often considered a core component of AI. The ability of a computer to successfully process natural language is the foundation of the Turing test, which was an early test for whether a computer exhibited intelligent behaviour.

Advances in this area have produced impressive results: there are a number of chatbots that can hold a conversation with a human, and artificial assistants like Siri understand human speech and act in response. However, these results don’t necessarily indicate intelligent behaviour. The original attempts at natural language processing involved humans exhaustively writing specific rules for recognising the meaning of language in each sentence.

More recent attempts have instead used supervised machine learning, where computers analyse vast amounts of pre-labelled text to derive their own rules. Combining machine learning with deep learning has produced better results still, with the training concept of machine learning enhanced by the layered weighting of rules that deep learning allows.
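
A minimal sketch of that supervised approach, assuming scikit-learn; the four training sentences and their labels are invented and vastly smaller than a real training set:

```python
# NLP sketch: learn to classify short texts from pre-labelled examples,
# rather than hand-writing rules for every sentence.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["please terminate the agreement",              # invented training examples
         "we wish to end this contract",
         "we are happy to continue the contract",
         "please renew the agreement for another year"]
labels = ["termination", "termination", "renewal", "renewal"]  # human labels

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)                                  # the model derives its own word-level rules
print(model.predict(["they want to terminate this contract"]))  # -> ['termination']
```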


Giles Pratt, Partner

Giles heads our intellectual property and technology group. He also leads our data practice in London, and co-heads the firm's digitization initiatives including our Freshfields Digital platform.

Sam Hancock, Trainee Solicitor

Sam is a trainee and has spent time working with our IP, Commercial, Corporate M&A and Disputes, litigation and arbitration teams, retaining a focus on technology throughout.  He has Bachelor’s degrees in Law and Chemistry.
