

Turing’s child

In 1950, Alan Turing wrote a piece for Mind, A Quarterly Review of Psychology and Philosophy.

‘Computing Machinery and Intelligence’ described the Turing Test, though he didn’t call it that, by way of an analogy: an interrogator tries to find out which of two people in the next room is a man, and which a woman, by asking a series of questions.

Turing begins by asking the reader, ‘can machines think?’ Attempting to answer, he describes the digital computer as a machine that can ‘mimic the actions of a human computer very closely.’ He also describes the idea of a computer programme, using the example of a mother instructing her child:

Suppose Mother wants Tommy to call at the cobbler’s every morning on his way to school to see if her shoes are done, she can ask him afresh every morning. Alternatively, she can stick up a notice once and for all in the hall which he will see when he leaves for school and which tells him to call for the shoes, and also to destroy the notice when he comes back if he has the shoes with him.

It’s pretty obvious that Turing had no children, and spent far too much time with very reliable people and machines.

That apart, it’s a good description of how an algorithm works. An algorithm is just an ordered set of instructions, which can include conditional (IF) instructions. If you’ve ever seen a flow chart, that’s just an algorithm designed to be read by a human being.
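To make that concrete, here is Mother’s notice written out as a few lines of Python. This is purely my own illustration, not anything from Turing: the function name and the shoes_are_ready flag are invented for the sketch.

```python
# Mother's standing notice, as an algorithm: an ordered set of
# instructions with one conditional (IF) step. Purely illustrative.

def morning_errand(shoes_are_ready: bool) -> str:
    """Tommy's instruction: call at the cobbler's on the way to school."""
    if shoes_are_ready:  # the conditional step
        return "collect the shoes and take down the notice"
    return "come home empty-handed; the notice stays up for tomorrow"

# The same notice is 'read' afresh every morning:
print(morning_errand(shoes_are_ready=False))
print(morning_errand(shoes_are_ready=True))
```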

Turing also mentions Charles Babbage’s Analytical Engine, and Pierre-Simon Laplace’s view ‘that from the complete state of the universe at one moment of time … it should be possible to predict all future states’, before making his own prediction: that by the end of the century it would be acceptable to talk of machines thinking. He then tackles various objections, including the question of the soul, of consciousness, and ‘Lady Lovelace’s objection’ that the Analytical Engine cannot originate anything, and can only do what it is told.

It’s remarkable how comprehensively Turing lays down problems that the field of Artificial Intelligence is still working on today. He’s not convinced that a machine can never produce original results: he thinks a powerful enough machine could leap ahead of his limited calculations and surprise him. He agrees that it would be impossible to lay down ‘rules of conduct’ to tell a machine how to respond under any conditions, but suggests that instead ‘laws of behaviour’ could be found that govern the machine, just as ‘if you pinch him he will squeak’ applies to a man.

Turing even imagines machines that don’t function only in absolute, yes/no terms, but can work with a range of answers, using probability to decide which are more likely to be true.
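As a toy sketch of that idea, imagine a machine holding several candidate answers, each with a probability, and picking whichever it judges most likely to be true. The candidates and the numbers below are invented for illustration:

```python
# A machine working with a range of answers rather than a flat yes/no.
# Every value here is made up for the sake of the example.

candidates = {"yes": 0.62, "no": 0.27, "cannot say": 0.11}

# Choose the answer judged most likely to be true.
best = max(candidates, key=candidates.get)
print(f"Most likely answer: {best} (p = {candidates[best]:.2f})")
```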

He also proposes that a machine designed to learn for itself, as a child does, could develop into something approaching an adult human brain.

It will not be possible to apply exactly the same teaching process to the machine as to the normal child. It will not, for example, be provided with legs, so that it could not be asked to go out and fill the coal scuttle. Possibly it might not have eyes. But however well these deficiencies might be overcome by clever engineering, one could not send the creature to school without the other children making excessive fun of it.

It’s rather touching to think of Turing worrying about his little robot child being bullied at school for being different. Would it still remember to check at the cobbler’s to see whether its mother’s shoes were ready? It’s also odd to find that Turing could imagine a world of thinking machines, but not one with central heating.

The term ‘artificial intelligence’ (AI) wasn’t coined until 1956, at a conference in New Hampshire. Researchers were very optimistic about how easy it would be to recreate general intelligence in a machine. In 1965, a program called ELIZA carried on remote conversations, making it the first program with even the remotest potential to pass the Turing Test.

In the same year, Turing’s wartime assistant, the mathematician I. J. Good, suggested that the last invention human beings would ever need to create is the first ultra-intelligent machine: from then on, the machines could design even better machines, and so on. This idea, of a machine that thinks better than any human, is often called the singularity today. And not everyone is so optimistic about how things would turn out if it ever came to pass.

Don’t panic, though: I can’t see that we’re any closer to achieving it than we were 50 years ago.

Many AI researchers will tell you that what you mainly learn by trying to build machines that think like a human is just how many different types of thinking a human being can do.

Imagine the first hour of your typical day.

If, like me, you’re not a morning person, a lot of what you do is performed on autopilot. I’m not really conscious that I have showered, made a cup of tea and so on. Those are now habits, automatic sequences of actions. I don’t even need a sign on the hall door. Nevertheless, I can still adapt them if circumstances change. If my flatmate’s in the shower before me, I can change the order of tasks and make tea first. I can wash up a mug if there aren’t any clean ones.

For a machine, simply telling the difference between a mug and a milk carton can be a problem, let alone deciding whether it’s clean. Knowing what is happening in the world, and deciding whether to change the order of tasks, are at least two separate problems. Being able to pour tea AND climb stairs is a combination of motor skills beyond most robots, even one waterproof enough to survive 10 minutes in the shower.

And that’s just the routine stuff. At the same time, I am listening to the radio, composing brilliant arguments against whoever is on the Today programme, which I may possibly tweet but will more likely just shout at the radio. Then I have to read the emotions in the face of my flatmate, who got out of the shower while I was shouting at the radio, and possibly apologise for startling him.

In parallel I’m remembering what I have to do that day, weighing up how likely it is that the cobbler will have my shoes ready, pondering whether it would be worth having children just to run errands for me, and feeling a pang of gratitude to whoever invented central heating so I don’t have to light a coal fire before I start work.

Nobody has yet managed to instil feelings of gratitude, or of any other emotion, in a machine. And just getting one AI to switch between two types of task with anything approaching the fluency of a human being remains a monumental challenge. So I am not one of those worrying about the singularity and the triumph of the super-intelligent robots.

But I have met a number of very smart people who told me not only that a machine with super-human intelligence was possible, but that it was already here.

This piece is an extract from Big Data: Does Size Matter? © Timandra Harkness (2016; Bloomsbury Publishing plc).

By Timandra Harkness

Timandra Harkness is the author of Big Data: Does Size Matter?, published by Bloomsbury Sigma. She is a presenter on the BBC Radio 4 series FutureProofing and How To Disagree. She has also presented the documentaries Data, Data Everywhere, Supersense Me and The Singularity, and was a resident reporter on Radio 4’s social psychology series The Human Zoo, more an exhibit than a zookeeper.