Artificial intelligence

Can we trust machines that ‘think’?

By Jessica Steele

The European Commission is consulting on a framework designed to ensure artificial intelligence (AI) acts in accordance with fundamental rights. But the need to program AI to make the ‘right’ choices highlights some tricky ethical issues, writes Jess Steele.

When the machines take over, how do we ensure that they respect our fundamental rights and values? It’s a question gaining prominence in a number of fields, from driverless cars to predictive algorithms and facial recognition.  

At the end of 2018, the European Commission’s high-level expert group on AI published its draft ethics guidelines for trustworthy AI for consultation. The final version is due in March 2019.

The guidelines propose the same kind of broad, intuitively sensible rules about what constitutes ‘trustworthy’ AI as those put forward by a number of other organisations, including Microsoft and Google. What do they conclude? First, that AI should respect fundamental rights, societal values and five core ethical principles:

  • ‘beneficence’ (do good);
  • ‘non-maleficence’ (do no harm);
  • human autonomy;
  • justice; and
  • ‘explicability’ (transparency).

Second, that AI imbued with these principles should be reliably designed and developed, that there should be human oversight, and that there should be accountability and redress when things go wrong.  

In other words, ‘trustworthy’ AI makes the right choices, every time.

The guidelines also include a list of ‘critical concerns’ raised by AI. The expert group notes that the choice of critical concerns was controversial, and seeks feedback from consultees on the extent to which each concern is a real threat.

The group’s list – mass surveillance, ‘covert’ AI posing as human, ‘citizen scoring’ by governments, and autonomous weapons – is intriguing. It encompasses concerns about the line between AI and human intelligence, and about the risks of mass data collection and processing; and, in autonomous weapons systems, it returns to the crux of the issue: how, as AI systems gain the ability to make choices, do we ensure that they do the ‘right’ thing, and that a ‘real’ person is accountable when they do not? The group argues that AI should make choices based on its five ethical principles, which are drawn from European and international human rights law.

It’s not as easy as that, of course: the five principles are broad, and they could conflict in real-world situations. In such cases, human decision-makers would, in all likelihood, make different ‘right’ choices. See, for example, Moral Machine, an MIT research project which collected data from 233 countries and territories on how users thought a driverless car should behave in an ethical quandary. The study found significant variations in ethical decision-making between different cultural and geographic ‘clusters’.

So what ‘right’ choices should we teach AI? It’s not hard to imagine a world in which regulators set out the principles that programmers in their jurisdictions must teach AI systems, based on local fundamental rights legislation. Alternatively, the choice might be given to consumers, who could, for example, program their driverless car to prioritise the safety of children over other road users. This raises interesting questions about who would bear liability if a programmed AI goes wrong.
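To make the idea of consumer- or regulator-set priorities concrete, here is a minimal sketch of how such preferences might be expressed in software. The class, categories and weights are hypothetical illustrations for the purposes of this piece; they are not drawn from any real autonomous-driving system or from the draft guidelines.

```python
# Purely illustrative: hypothetical 'ethical priority' settings that a regulator
# or consumer might configure in advance. Names and weights are invented.
from dataclasses import dataclass, field


@dataclass
class EthicalPriorities:
    """Weights applied to affected parties when harm cannot be avoided entirely."""
    weights: dict = field(default_factory=lambda: {
        "child_pedestrian": 1.0,   # highest protection
        "adult_pedestrian": 0.8,
        "cyclist": 0.8,
        "occupant": 0.6,
    })

    def score(self, affected_parties: list[str]) -> float:
        """Sum the weighted harm; a planner would prefer the lower score."""
        return sum(self.weights.get(p, 0.5) for p in affected_parties)


# Comparing two hypothetical manoeuvres in an unavoidable-collision scenario:
priorities = EthicalPriorities()
swerve = priorities.score(["occupant"])              # risk shifted to the occupant
brake_only = priorities.score(["child_pedestrian"])  # risk stays with the child
preferred = "swerve" if swerve < brake_only else "brake_only"
print(preferred)  # -> "swerve" under these illustrative weights
```

The point of the sketch is simply that such weights would have to be chosen by someone, in advance, which is precisely where questions of liability and accountability arise.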

AI opens up the possibility of completely programmable decision-making: it allows us to select in advance and at our leisure what, for a human decision-maker, might be a subconscious or split-second choice. However, that very possibility will force lawmakers, manufacturers and consumers to confront the question of what constitutes the ‘right’ choice when ethical principles conflict.  

This piece first appeared on Freshfields’ human rights blog.

Jessica Steele

Associate, Freshfields

Jess is an associate in our EU disputes team. She advises on competition and regulatory investigations and competition and commercial disputes.
