Inside Europe’s AI strategy
By Eugene McQuaid
The EU is behind the US and China when it comes to artificial intelligence (AI). But it has set out a plan that it hopes will bridge the gap, writes Eugene McQuaid.
At the beginning of its mandate in 2015, the European Commission launched a strategy for the creation of a Digital Single Market to, among other things, ‘help Europe hold its position as a world leader in the digital economy’. This strategy, in hindsight, contained some very forward-looking initiatives, including on the role of platforms, the free flow of data and cybersecurity. However, it is interesting to note that the strategy contained no specific initiatives in relation to AI.
Now, as the Commission heads into the final stages of its five-year mandate (the European Parliament elections will take place in May and a new Commission team will take office in November), it seems to have woken up to the importance of AI and the need to do something (or at least to be seen to do something).
In May 2018, amid fears that the EU was falling further and further behind China and the US when it comes to the deployment of AI, the Commission presented its EU strategy on AI. Building on recommendations from the European Parliament and the Council (Member States), the strategy sets out plans to free up around €20bn by 2020 to boost the EU’s technological and industrial capacity and AI uptake across the economy, as well as address socio-economic changes brought about by AI such as anticipated changes in the labour markets.
However, when it comes to positioning itself as a ‘world leader in the digital economy’, it is its goal of ensuring an appropriate ethical and legal framework that the Commission sees as the EU’s greatest competitive edge. Indeed, the strategy notes that the EU is ‘well placed to lead this debate on the global stage. This is how the EU can make a difference – and be the champion of an approach to AI that benefits people and society as a whole.’
To implement this strategy, the Commission committed to adopting ethical guidelines by March 2019. In order to do so, it created two groups. Firstly, the high-level expert group (HLEG) on AI, a group of 52 experts comprising representatives from academia, civil society and industry, which is responsible for developing the guidelines; and secondly, the EU AI alliance, a platform through which a broader group of stakeholders (including myself) can feed into the work of the HLEG.
On 18 December 2018, the Commission published a first draft of the guidelines and opened them to public comment (via the EU AI alliance) until 1 February. The draft guidelines, which aim to set out a framework for the development of ‘Trustworthy AI’, are discussed in more detail here.
Following the planned adoption of the final guidelines in March, the HLEG is expected to come forward with policy and investment recommendations in May. Thus, when the new Commission takes office in November, we can expect that AI will feature at the forefront of its key priorities.