Humans

Channel 4’s new series Humans, whose first episode was broadcast on Sunday, June 14, revisits the familiar topic of artificial intelligence and whether machines can be conscious. It approaches the subject from the perspective of what that possibility implies for the relationships of a supposedly typical family.

What surprised me most was that the first episode was so derivative: the robots are lined up in a warehouse just as they are in the screen version of I, Robot, though viewed from the back and looking much more humanoid than those in the Will Smith film. One robot in this phalanx waits for the only human present to leave and then moves in a way intended to indicate some sort of consciousness (by gazing up at the moon). Soon we are sharing the dilemma of another human who knows that some of the robots have been vested with consciousness and intelligence, and are consequently more valuable and dangerous than others. An elderly man has become sentimentally attached to his robot “carer”, half a million of which have been purchased by the health service to look after housebound people. Another supposedly conscious robot has hidden herself in a brothel, where she is forced to endure all sorts of indignities without betraying the fact that she has feelings.

None of this is remotely new. The only thing the series appears to be attempting is to explore the interior relationships of a family into which a very pretty robot is introduced. That the version purchased comes with a special “18+” option presumably tells us that we may look forward to something a bit spicier later in the series.

What makes this all somewhat frustrating is that genuine advances in AI are being made, frequently sponsored by apparently unlikely corporate giants such as Facebook and Google, which need better capacities to analyse the unimaginable quantities of data they are collecting. This goes far beyond their already impressive face-matching capabilities, which, as you may have noticed, can identify and name almost anyone in almost any photograph provided a named picture has been uploaded anywhere, and which extend even to faces that are partially hidden or in shadow.

So where are we now in the search for artificially intelligent life? And to what extent are we tempted to hide behind a “soul of the gaps” argument that has replaced the “god of the gaps” argument once used to explain the unexplained in science?

The Soul of the Gaps

“Computers will never be able to be conscious because they are not human, and they are not human because they do not have souls.” So we often hear the argument running. But accept for a moment that humans do not have souls either (only a legacy of Plato’s philosophy, which insinuated itself into Christianity and which we have found remarkably difficult to shake off, suggests that we do), and the question demands a more sober answer. “Can machines think?” Alan Turing asked, proposing his famous test as a possible way to determine the answer. But Turing was also a victim of lingering positivism: a reluctance to accept that something could be the case even if we were unable to tell that it was the case. That a robot passes a Turing Test is not a sufficient reason to treat it as being able to think; I am sure many human beings could not pass a Turing Test either.

On May 9th, 2015, The Economist published a perceptive article called “Rise of the machines”, which critiques the AI industry now blossoming under the patronage of Facebook and Google and draws attention to the growing number of well-informed people increasingly concerned about it as one of several “existential risks” (a phrase coined by Nick Bostrom) that could threaten human survival.

The crucial part of the article draws attention to the lack of evidence for the brain being anything other than a machine; there simply isn’t a “ghost” or a “vital spark” necessary to make it operate. That being so, there is nothing in principle to prevent us from manufacturing another object that works a little like a brain but in another material, and reproducing in it everything that a brain can do. Difficult, certainly; impossible, probably not.

But would such an object be conscious? Could it be? These are the questions we like to ask in self-defence. But are they the right questions? Do we even know sufficiently clearly what consciousness is?

It may be easier to begin with the question of intelligence. There have been all sorts of definitions over the years, but my own is that a measure of intelligence is best conceived as the capacity to solve new problems: the variety of the problems a particular person can solve gives a sense of the scope of their intelligence, and the difficulty of those problems a measure of its depth. Some of us are very good at solving very easy problems across a wide range of circumstances; others are good at solving difficult problems in a relatively narrow set of circumstances; not many of us are good at both, in a way that makes us capable of solving difficult problems across a wide range of different circumstances.
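To make the breadth-and-depth distinction concrete, here is a purely illustrative sketch in Python. The names (Problem, intelligence_profile) and the scoring are my own inventions for this post, not a real psychometric model:

```python
# A purely illustrative sketch of the breadth-vs-depth idea above.
# All names here are hypothetical; this is not a real measure of anything.
from dataclasses import dataclass

@dataclass
class Problem:
    domain: str        # the circumstance, e.g. "chess" or "navigation"
    difficulty: float  # 0.0 (trivial) to 1.0 (very hard)

def intelligence_profile(solved: list[Problem]) -> tuple[int, float]:
    """Breadth = how many distinct domains were solved;
    depth = the hardest problem solved in any of them."""
    breadth = len({p.domain for p in solved})
    depth = max((p.difficulty for p in solved), default=0.0)
    return breadth, depth

# A narrow specialist: one domain, hard problems (like a chess engine).
specialist = [Problem("chess", 0.95), Problem("chess", 0.99)]
# A broad generalist: many domains, easy problems.
generalist = [Problem(d, 0.2) for d in ("cooking", "maps", "small talk")]

print(intelligence_profile(specialist))  # (1, 0.99) -- deep, not broad
print(intelligence_profile(generalist))  # (3, 0.2)  -- broad, not deep
```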

Computers tend to be of the second sort: highly effective at solving problems in a very narrow range. So they can play chess; they can recognise faces; they can translate in real time with reasonable facility if little grace. Nobody should suppose that they achieve all this by the methods human brains use, but that is irrelevant: if you can simulate the outcomes it is not necessary to emulate the methods. And one of the issues of concern to the doom-sayers in AI is that once we no longer understand how computers are solving problems, we are out of our comfort zone in knowing what they are doing at all. Maybe, just maybe, they are doing something that could be construed as thinking, but in a different kind of conceptual world where thought is something other than human thought.

This is one reason why I do not buy the “brute force” argument used in the article mentioned. Saying that a computer solves by brute force a problem a human can solve easily ignores the extraordinary computing power of the human brain, which is also in some senses solving face-recognition problems by “brute force”; it is just that the brute force is so quick, and the algorithm so complex and unconscious, that we think the problem is “easy”. Until we try to tell someone how we do it.
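For a sense of what “brute force” amounts to in practice, here is a minimal sketch of nearest-neighbour matching: compare a query against every stored example and pick the closest. The gallery and its feature vectors are invented stand-ins; real recognition systems use learned embeddings rather than hand-made lists, but the exhaustive-comparison shape is the same:

```python
# A minimal, assumed sketch of brute-force recognition:
# exhaustively compare the query with every known example.
import math

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def brute_force_match(query, gallery):
    """gallery: dict mapping a name to its stored feature vector."""
    return min(gallery, key=lambda name: distance(query, gallery[name]))

gallery = {
    "Alice": [0.1, 0.9, 0.3],   # made-up feature vectors
    "Bob":   [0.8, 0.2, 0.5],
}
print(brute_force_match([0.15, 0.85, 0.35], gallery))  # prints "Alice"
```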

So if this is intelligence, what is consciousness? I recently (May 16th, 2015) tweeted, slightly tongue-in-cheek, “What we call ‘the soul’ is just the result of a recursive algorithm through which the brain monitors its own inside-looking-out-ness”, and in essence I think that is the right answer. Put rather more elegantly, I wrote many years ago that “Consciousness is a metaphor for the self”. The two are not quite equivalent, but both point in the same direction: consciousness is the brain looking at itself. As I wrote over twenty years ago in my article “Information and Creation”, published in the proceedings of the European Society for the Study of Science and Theology, and repeated in my God and the Mind Machine, consciousness is the inside-looking-out-ness of the brain, and arises automatically from having a particular body with a certain kind of brain suitably situated, a body-with-a-brain-in-the-world.
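As a toy illustration of what such a recursive self-monitoring loop might look like (and it is no more than a toy; it makes no claim about real brains), here is a sketch in which a system observes the world and then, recursively, observes its own act of observing:

```python
# A toy sketch of recursive self-monitoring, illustrative only:
# a system that "looks out" at the world, then monitors its own
# looking, then monitors that monitoring, and so on.

def perceive(t):
    # First-order: the system looking out at the world.
    return {"sees": f"world at t={t}"}

def self_monitor(t, depth=2):
    # Higher-order: wrap each report in a report about that report.
    report = perceive(t)
    for level in range(1, depth + 1):
        report = {"level": level, "monitors": report}
    return report

print(self_monitor(0))
# {'level': 2, 'monitors': {'level': 1, 'monitors': {'sees': 'world at t=0'}}}
```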

There is no real need to answer the question of whether a computer could ever be conscious, because whatever kind of consciousness it had would inevitably be different from our kind of consciousness. A more important, or at least a more significant, question is whether a computer should ever be considered to be alive and to have rights. The question, in other words, is ethical. Which brings us back to C4’s Humans.

The supposition in most of the literature, from Philip K. Dick’s Do Androids Dream of Electric Sheep? through Asimov’s Robot series and Ridley Scott’s Blade Runner to the present series, is that robots are just machines and that we can as a result treat them like a toaster or a vacuum-cleaner, without any feeling or empathy, concern or affection. We may in some sense or other become sentimentally attached to them, but that is a feature of our psychology, not of their rights (or so the theory goes). But suppose for a moment that we are wrong about this. Scott certainly suggests that we are: the robots which are sent into hostile environments to mine things too dangerous for humans to mine need to be so smart to solve the problems they encounter there that they have to be allowed to learn for themselves, to adapt to new situations and to solve new problems. They have to be intelligent, in the terms elaborated above. And once they are intelligent they have to be able to monitor and evaluate their own performance; and once a capacity for that recursive process is put in place, the step to the emergence of something like human consciousness, albeit based upon a completely different mechanism in a different material, is made much smaller.

If AI poses a genuine existential threat to human well-being or survival, then it will be immeasurably increased by those who want to rely either on the “soul of the gaps” or on the “this is just brute force” argument to defend the uniqueness of human beings; and I fear that relying upon some version of a Turing Test as a criterion of demarcation will not save us either, because the kind of intelligence that will eventually evolve in AI machines will almost certainly not match the kind of intelligence the Turing Test is looking for. The scenario envisaged in The Matrix, of machines that one day take over and use human beings only to generate electrical power, is genuinely unlikely only in the second part of its conjecture: the processes that may lead to the development of AI systems more intelligent, and differently intelligent, than us are probably already in train. It is not a question of how many blue- or white-collar jobs they may replace; it is a question of whether they may make all of us redundant and whether, given the unimaginable destruction we have wreaked on almost every other living thing on our own planet, that would be altogether a bad thing.

How to get into United World College, Dilijan – I

This is the old mountain road through a small village called Semyonovka, rendered largely redundant by a new but unspeakably ugly tunnel, but, for those who venture this way, spectacular. Semyonovka once marked the boundary between the Russian empire in the time of Catherine the Great and the marauding Turks and Persians (or so I am told).