From bias to malicious use and job destruction, we take a brief look at AI’s risks.
The rise of AI
Seven decades after Alan Turing wrote about the creation of a ‘human-like computer’ in his paper ‘Computing Machinery and Intelligence’, AI is now an everyday reality.
At its heart, AI is computer programming that learns and adapts. It can’t solve every problem, but its potential to improve our lives is profound… We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come.
Sundar Pichai, Chief Executive, Google
Bias
Flawed programming and unrepresentative data can lead to unfair or unethical practices (eg racially biased crime prediction, or commercial discrimination against disadvantaged groups in areas such as credit and insurance).
Data privacy
Many AI systems are ‘trained’ on, and work with, large data sets in order to identify patterns. This poses legal challenges where personal or sensitive information is used in AI systems without sufficient regard to data privacy and other requirements. This occurred in DeepMind’s NHS collaboration, where real patient data was processed during the testing phase of the Streams app.
Loss of control
AI can behave in unpredictable and irrational ways, as with algorithm-driven ‘flash crashes’ and selling spirals that are already affecting financial markets.
Malicious use
AI has been used to manipulate public opinion in the US, Europe, Asia and Latin America via social media ‘bot swarms’ and ‘dark posts’. It could also be harnessed to make cyberattacks faster, more precise and more disruptive.
Unemployment and inequality
AI could threaten large numbers of jobs (particularly manual occupations).
This technology will enhance us, so instead of artificial intelligence, I think we’ll augment our intelligence.
Ginni Rometty, Chief Executive, IBM
The AI patent boom
By Dr. Sonja Mroß and Wolrad Prinz zu Waldeck und Pyrmont, Freshfields
Product liability in the AI age
By Andrew Austin, Freshfields
Who owns the output?
By Giles Pratt and Emily Rich, Freshfields
Inside Europe's AI strategy
By Eugene McQuaid, Freshfields
Can we trust machines that 'think'?
By Jess Steele, Freshfields
How will AI be controlled?
By Professor Ryan Calo, University of Washington
AI technology: a lawyer's guide
By Giles Pratt and Sam Hancock, Freshfields
By Timandra Harkness, author of Big Data: Does Size Matter?
Harnessing AI to reduce risk
Freshfields case study