
How is AI being regulated?

Read on for an overview of how different authorities are trying to shape the development of AI around the world.

There are three policy domains affecting AI:

AI-specific policies


specifically oriented towards governing AI-based technologies


  • Driverless car regulations
  • Proposed ‘AI councils’ and state co-ordinating agencies, eg Federal Robotics Commission (US), Centre for Data Ethics and Innovation (UK), AI Ethics Council (Singapore)
  • Specific legal liability for harm caused by AI

Indirect AI policy


broader rules that de facto govern AI but may need updating


  • Intellectual property
  • Data privacy
  • Freedom of information and data transparency
  • Product liability

AI-relevant policy


domains such as education, welfare and urban planning in which AI deployment is plausible


  • Social welfare response to automation-driven unemployment, eg Universal Basic Income
  • AI ‘extension services’ to help widen access
  • Re-skilling for an AI world

Recent government initiatives

Germany

Ethics Commission develops world's first guidelines for automated driving (2017). Germany unveils its Digital Strategy in late 2018, which includes €3bn of investment by 2025. The government wants to make ‘AI made in Germany’ an international trademark for modern, safe AI applications developed in the public interest. The German AI Observatory will explore the impact of AI in areas such as employment and social interactions.

France


Digital Republic Bill (2016) gives citizens the right to information about algorithm-based decisions that affect them. In 2018 the government publishes its AI Plan (which proposes a strategy on research, education and innovation); a report on social and regulatory issues; and the Mission Villani report (which looks at policy, skills and diversity in AI research).

UK


Digital Charter (2018) includes principles for parity of online and offline rights, fair sharing of AI benefits and a commitment to a government centre for data ethics.

The House of Lords Select Committee on AI publishes ‘AI in the UK: ready, willing and able?’, a report that sets out a route to the UK becoming a world leader in artificial intelligence by putting ethics at the centre of AI development.

European Union

EU is developing a legal framework for cyber-physical systems that could include rules for registering robots, product standards and criteria for AI experiments.

Member states sign Declaration of Co-operation on AI (2018). The EU’s proposed Digital Europe programme would foster the use of AI across the economy and society.

United States


Drones are regulated federally. Autonomous vehicles are loosely governed by an emerging framework that includes the 2017 Self-Drive Act (not yet law), guidelines from the Department of Transportation and National Highway Traffic Safety Administration, and state-specific rules (eg California, Arizona, Nevada and Michigan).

Future of AI Act introduced in late 2017 (but has not passed).

New York City passes a bill to provide transparency over the use of algorithms by city government (December 2017). NYC uses AI for decisions including bail, student placement in public schools and identifying Medicare fraud.

Select Committee on Artificial Intelligence is formed (2018) to advise on R&D priorities and consider government partnerships with academia and business.

South Korea


Issues Robot Ethics Charter (2012) covering human control over robots, manufacturing standards, preventing illegal use and protecting data through tools such as encryption.

Taiwan


Cabinet-level Office of Science and Technology approves AI action plan, including regulatory easing (2018).

Japan


Issues Robot Strategy (2015) covering policy, ethics and safety standards.

China


Publishes Next Generation AI Development Plan (2017), envisioning the use of AI ‘to improve social management capacity’ and pledging research on AI laws covering civil and criminal liability, privacy and IP, information safety, accountability, design ethics, risk assessment and emergency responses; commits to participate in AI global governance.

Plan is designed to ensure that by 2020 the country’s AI is keeping pace with the most advanced technologies worldwide; by 2025 AI is the driving force for economic restructuring; and by 2030 China will be the world’s leading AI innovator.

India


NITI Aayog, a policy think tank of the government of India, publishes its National Strategy for Artificial Intelligence #AIforAll (2018), a discussion paper that covers AI development recommendations (including establishing ethics councils; instituting a data privacy legal framework; creating sectoral regulatory guidelines on privacy, security and ethics; and encouraging international collaboration).

There are AI systems built over the last decade that early pioneers would have been astonished at.

Mike Wooldridge, Professor of Computer Science, Oxford

The regulatory challenges?

Stifling innovation

Countries are competing for global leadership in AI, which will have economic, social and military ramifications. If too onerous, regulations could hold back domestic companies and allow foreign competitors – and countries – to take the lead.

Defining AI

AI is not a singular technology but rather a multitude of techniques deployed for different commercial and policy objectives. This means it lacks a clear and consistent definition for regulatory purposes. Rules governing predictive AI in policing or judicial decisions, for instance, are unlike – and carry very different risks from – those governing autonomous transport. There is also no consensus on the threshold beyond which a system qualifies as truly ‘intelligent’ – or on whether that threshold is legally relevant.

Fragmentation

There is a risk of a regulatory patchwork developing as different jurisdictions adopt their own rules governing AI. This is already happening with autonomous vehicles in the US, which are regulated by a variety of national guidelines and state-specific regulations. Internationally, there is also no unified ethical framework for AI, and related efforts, such as updating international humanitarian and self-defence law for the cyber age, have so far come to naught.

Moral relativism

There is no obvious programming technique for encoding morality and ethics. Public attitudes around issues such as data privacy, the appropriate reach of government data gathering, or the use of AI in public-service decisions such as policing or the military vary by country, culture and even socio-economic group.

Unintended consequences

Regulations requiring transparency around the workings of algorithms that make decisions affecting consumers or citizens could be gamed by fraudsters and criminals. AI can also be appropriated from legal and legitimate contexts into malicious, unethical or unregulated ones – eg gender-identification algorithms built for biometric ID being repurposed for marketing.