How will artificial intelligence be controlled?

By Professor Ryan Calo

Every society-shaping technology is regulated. But we shouldn’t expect blanket laws for AI. Rules will be developed from the bottom up—and governments may deploy other tools to ensure their citizens are protected, says Professor Ryan Calo.

The idea that a machine might ‘think’ is at once exciting and terrifying. Proponents of artificial intelligence (AI) see a world in which AI frees humans from error, bias, and material constraints. Sceptics doubt the technology’s transformative potential or, at the other extreme, worry that it could be humankind’s ‘last invention’. In my view, the reality is probably somewhere in between: AI will change many aspects of human life in ways prosaic and profound, without ever ceasing to be a human tool. More importantly, the reality of AI is contingent: its full benefits are unlikely to materialize, let alone be evenly distributed across society, without careful channelling of the technology by government, industry, and civil society.

AI is a decades-old concept best defined as a set of techniques aimed at approximating aspects of human or animal cognition using machines. Today we tend to emphasize a subset of AI known as machine learning (ML). ML performs various functions, from translation to facial recognition, through a two-phase process. In the first, ‘training’ phase, humans feed an ML system large volumes of data to generate a model. In the second, ‘inference’ phase, the model is exposed to new data in an effort to predict or infer some characteristic of that data. So, for example, a system might process thousands of pictures of moles that dermatologists have labelled benign or malignant in an effort to extract the features that characterize malignancy. The system can then recognize malignancy in a new mole that was not part of the original training set.
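To make those two phases concrete, here is a minimal illustrative sketch in Python (not from the article). It assumes the scikit-learn library and uses synthetic numeric features in place of real labelled mole images; a genuine dermatology system would involve image processing and far more careful data handling.

```python
# Illustrative sketch of the two-phase ML process described above.
# Assumption (not from the article): synthetic numeric "features" stand in
# for real labelled mole images, and scikit-learn is available.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical labelled data: each row describes one mole with a few
# measurements; the label records whether a dermatologist judged it
# benign (0) or malignant (1).
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Hold some examples back to stand in for moles the system has never seen.
X_train, X_new, y_train, y_new = train_test_split(X, y, test_size=0.2, random_state=0)

# Training phase: humans supply labelled examples and fit a model.
model = LogisticRegression()
model.fit(X_train, y_train)

# Inference phase: the fitted model predicts labels for new, unseen moles.
predictions = model.predict(X_new)
print("accuracy on new moles:", (predictions == y_new).mean())
```

The point of the sketch is simply the division of labour: humans supply labelled examples in the training phase, and the resulting model then makes predictions about data it was never shown.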

Although human expertise is needed to develop these models, the resulting AI might outperform experts during the inference or application phase. But this should not be confused with the ability of AI to achieve full human intelligence, i.e., replicate or surpass all aspects of human cognition. Arguing that ML will replace doctors because machines can diagnose specific skin conditions would be like arguing a toddler can write a novel because they learned to recognize the alphabet as well as Margaret Atwood.

Even without the capability to approximate all aspects of human thinking, AI raises novel and important questions. If proponents are correct that AI will ‘change everything’, then there will be corresponding changes to law and to legal institutions, my field of study. No force remakes a society without touching its laws and governance.

The challenges raised by AI are varied, but many can be grouped into two categories: (1) situations in which machines now do something that previously only a person did; and (2) new capacities beyond what a person is able to do.

In the first category, think of autonomous surgery. For society to feel comfortable with a surgeon operating on a patient, the doctor must attend medical school, do a residency, and pass their professional exams. What are the comparable mechanisms for autonomous surgical robots? Or consider the challenges already raised by algorithmic decision-making around a criminal defendant’s likelihood of recidivism for purposes of sentencing or bail. When a person makes this decision, we can require reasons. There is a mechanism, at least, by which to challenge perceptions of racial or other bias. But an algorithm—especially a proprietary one protected by trade secret law—is not amenable to the same interrogation.

In the second category, think of the ability of AI to make predictions about behaviour, or to figure out a person’s hidden characteristics. With so much processing power and so many sources of data, AI is increasingly able to derive the intimate from the available. Thus, facts about us that we might allow to be observed or even willingly share can, under the scrutiny of AI, yield up more and more private information we would never expect or desire to be known. AI thus provides corporations and governments with a kind of Sherlock Holmes at scale, threatening human privacy.

Note that these substitutions and new affordances are hardly binary. Before fully autonomous surgery, doctors will have experienced varying levels of assistive robotics, such as the da Vinci system that many hospitals already own. Long before there were deep neural nets to make predictions about people, there were other statistical methods. But as more and more tasks tip into the category of substituting for or extending human capabilities, law and legal institutions will need to adapt.

Law-making in the AI era

AI is not a singular technology that you can point to for the purposes of passing regulations, in the way that medical devices, airplanes, or nuclear power are. It is, rather, a set of techniques and methodologies that get applied to different domains. In autonomous vehicles, for example, regulators might focus on issues such as the number of hours of testing required before vehicles can be let loose on the roads. Healthcare AI poses different regulatory questions, such as protocols on data privacy in the collation of training data sets; algorithm-based predictive policing and credit scoring pose different challenges again, such as how to avoid implicit or tacit racial, ethnic, gender, and other biases.

A single ‘omnibus’ law for AI is not coherent. But that does not mean that lawmakers should do nothing. Given how quickly AI performance is improving, and how many governments and companies are using it or thinking of using it, regulations and laws have to evolve. As with any game-changing technology, government has an obligation to channel AI in the public interest and to help ensure that the costs and benefits of AI are evenly distributed across society.

AI firms themselves are now engaging in self-policing and ‘code of conduct’ efforts to mitigate the downsides and risks of their endeavours. This is a welcome step and likely the right solution in the near term. But self-regulation alone cannot safely ensure that AI works in the public interest. Regulation will be increasingly critical.

Laws are already changing around drones and autonomous vehicles, which affect citizen safety and public spaces. Where such AI-specific regulations emerge, they will tend not to be ‘big bang’ reforms but a continual, iterative process of incremental change that could touch on many areas including consumer protection, antitrust, privacy, and tort liability. These regulations in turn may have to change as good and bad actors adapt to the role of AI in our lives.

But AI regulation is not only direct. The field is also de facto regulated under wider legal frameworks governing the digital economy, such as the EU’s General Data Protection Regulation, through which citizens can obtain information about AI-based decisions affecting them. Public opinion plays a role here too. If people, as citizens or consumers, express concern about the application of AI to any domain, companies will be affected either reputationally, as they try to build successful and well-regarded businesses, or through governments responding to those public pressures.

Beyond regulation, governments can also shape AI in material ways, such as through procurement: a government should only purchase AI-based solutions if, for instance, the vendor’s practices align with the expectations of its citizens. Governments can also influence AI through funding decisions, shaping innovation just as they have in areas like medicine and the internet in the past.

To act effectively, either as regulators or innovation-shapers, states need sufficient technical clout. From procurement decisions to industry claims as to safety, government agencies need to understand AI in order to make wise decisions about it. Such technical capacity could be achieved by constructing new technology assessment agencies and beefing up (or re-funding) existing ones.

Adequate technical expertise will also help governments to think creatively and pragmatically about the ways AI can advance social policy goals. Why not, for instance, use AI to improve access to the justice system? Perhaps AI can speed up trial processes. As Justice Cuéllar of the California Supreme Court in the US has observed, AI-based translation services could remove barriers to participation for litigants who don’t speak the local language. Similarly, just as we think through how to administer a driving test to a driverless car, we might ask whether autonomous transportation invites a fresh look at the prospects of public transportation and the configuration of cities.

Transformative technologies constitute an invitation to re-think the way we want to live. As human capacities begin to shift, we have a corresponding unique opportunity to take inventory of our common values and ask how technologies like AI can help us further them.

The era of artificial intelligence is a time for concern, but also for imaginative, if cautious, optimism.

By Professor Ryan Calo

Professor, University of Washington

Professor Ryan Calo is an associate professor of law at the University of Washington. He is the editor and author of several publications on cyber law, privacy, robotics, and torts, including Robot Law and Artificial Intelligence Policy: A Primer and Roadmap.