In 1997 I published a little book called God and the Mind Machine in which I presented, among other things, the claim that the so-called mind-body problem is a pseudo-problem that evaporates under one simple redescription. All that is needed to eliminate the bogus notion that human beings have a quality called a “soul” that explains the experience of being oneself is to understand the difference between observing a body and being a body. When we observe a body or a brain as an object in the world “from outside”, we see it as an organic system that can no more explain the concept of mind than a computer or an AI engine observed “from outside” can. All we can do is describe what it does and how it works; the notion that it might additionally “be like something” to be that body or computer makes no sense except with reference to our own existence as observers who are bodies and brains and who believe that it is indeed “like something” to be those bodies and brains. To be a particular kind of body with a particular kind of brain is to experience the “inside story” that constitutes being like something: we know what it is like to be a body (with a particular kind of brain) because we are such bodies.
So our difficulty in understanding whether it could be like anything to be a computer, an AI, arises from our lack of first-hand experience of being an AI. Of course, in one sense we are precisely that, an AI; it is just that we are a biologically organic AI rather than a digital, silicon AI. The complaint “but computers are just binary circuits; how can that be aware, how can it be ‘like anything’ to be such a thing?” fails under the parallel complaint “but human beings are just neurological cells and fibres; how can that be aware, how can it be ‘like anything’ to be such a thing?” In other words, our perplexity at the AI question in relation to awareness is not a different problem from our perplexity about whether other organic entities can be aware; it is just rendered more difficult by our lack of first-hand experience of what it is like to be such electronic, digital entities. We could have, and some of us do have, the same difficulty over other organic species: how can it possibly be like anything to be a gorilla, a dog, a cat, a bat (Thomas Nagel’s famous example), an ant or an amoeba? And where do we stop? Good question. Not long ago white men denied souls to black people, and to women too.
This issue has been rejuvenated by the developments in AI during 2016, especially in machine learning, even if the science goes back earlier. Stephen Wolfram thinks we can meaningfully put the behaviour of clouds alongside the behaviour of brains, and that the substantive difference arises from the developmental dimension of brains: they have a history that cannot be run any faster than it is run. More on that in a moment. First we need to be clear about what is not (or at least should not be) an issue: the presence or absence of “souls”.
The pre-scientific history of the debate has essentially four heroes: Aristotle, Spinoza, Collingwood and, for rather different reasons, Wittgenstein. I am here regarding Alan Turing as the beginning of the history of AI rather than as part of its pre-history, and Thomas Nagel comes after him. Aristotle, Spinoza and Collingwood were all essentially pioneers of versions of the double-aspect theory that is the only plausible solution to the soul or mind-body dilemma. For Aristotle the answer came through hylomorphism: the soul is in the body as sight is in the eye; in other words, the soul isn’t a “thing” at all, but a property the body has when one is a body, just as sight is a property the eye has when something is an eye. (Adding “and a brain attached to it through an optic nerve” does not alter Aristotle’s point.) Spinoza essentially picked this up, and it is to his position that the term “double-aspect theory” was first more or less explicitly attached; Collingwood makes the point even more strongly by speaking of the “inside” and the “outside” of an event in The Idea of History.
A fortiori, when something “is” a suitably sophisticated digital computer there is no, as one might say, ontological reason to doubt that it is at least possible that it is “like something” to be that digital computer. All arguments to the effect that it cannot be “like something” to be a computer because a computer, by definition, has no “soul”, collapse. The question “merely” becomes whether it is as a matter of fact “like something” to be that particular computer, or perhaps to what extent it is like something to be that particular computer, not whether it is possible for it to be like something to be that computer.
Human beings, when confronted with scientific information about other human beings like Fred and Jane, are generally ready to grant that it is “like something” to be those other human beings, that there is an inside story to be told about Fred or Jane and that Fred and Jane are in large measure the only ones who can tell it. We are less ready – some of us, at least – to take further steps and to credit gorillas, dogs and cats, insects like ants and single-celled creatures like the amoeba with the same quality, viz. that it is in however rudimentary a sense “like something” to be each of those things. Very few people are ready to accept the same for what we might call “trans-speciation”, the attribution of an inside story, or at least its possibility, to a digital computer operating on entirely non-biological principles. Fewer still, one suspects, would want to extend this further and say that it is like something to be a rock or a cloud or a star; Teilhard de Chardin would be an exception, and perhaps so would Stephen Wolfram, although I think he is talking about something rather different when he generalises the concept of intelligence to include, apparently, everything. (https://www.youtube.com/watch?v=giuVfY-I-p4)
Much of this was in God and the Mind Machine. Progress from then until 2015 had been relatively slow. But in 2016 something remarkable happened, and it enlivened the whole debate again. First, a machine – AlphaGo – won a serious five-game Go match 4-1 against Lee Sedol, one of the world’s best players, five years earlier than anyone had imagined possible. And not only did it win: it won despite the fact that those who created it were not themselves better than amateur players. It achieved this through machine learning. In other words, AlphaGo learned how to beat one of the best human players without having expert strategies programmed into it; “all” it did was learn the rules of the game, study recorded human games, and then play against itself, iteratively improving its strategy against its own best previous play.
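To make the self-play idea concrete, here is a minimal sketch in Python of an agent learning a far simpler game, Nim, purely by playing against itself. This is emphatically not DeepMind’s method (AlphaGo combined deep neural networks with tree search); it is only meant to show the shape of the loop – play, score, adjust, repeat – with no expert knowledge beyond the rules.

    import random

    PILE = 10          # starting pile size
    MOVES = (1, 2, 3)  # legal moves: remove 1-3 stones; taking the last stone wins

    def learn_by_self_play(episodes=50000, epsilon=0.1, alpha=0.1):
        # value[n] ~ estimated chance of winning when it is your turn with n stones
        value = {n: 0.5 for n in range(PILE + 1)}
        value[0] = 0.0  # an empty pile on your turn means you have already lost
        for _ in range(episodes):
            pile, history = PILE, []
            while pile > 0:
                legal = [m for m in MOVES if m <= pile]
                if random.random() < epsilon:
                    move = random.choice(legal)   # occasional exploration
                else:
                    # choose the move that leaves the opponent worst off
                    move = min(legal, key=lambda m: value[pile - m])
                history.append(pile)
                pile -= move
            # the player who made the last move won; credit alternates backwards
            result = 1.0
            for state in reversed(history):
                value[state] += alpha * (result - value[state])
                result = 1.0 - result
        return value

    values = learn_by_self_play()
    # Nim theory says multiples of 4 are lost positions for the player to move;
    # the learned values drift towards that answer without ever being told it.
    for n in range(1, PILE + 1):
        print(n, round(values[n], 2))

The instructive point is that the program tends to converge on the mathematically correct strategy for Nim without any of that strategy having been put in by hand, which is precisely the point of principle at issue with AlphaGo.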
That was at the start of 2016. At about the same time the Google Brain team, through work by Quoc V. Le and Mike Schuster, made a significant breakthrough in applying machine learning to language. Within a few months the previous approach to Google Translate – driving it with more specific programming based on statistical matching of phrases – had been abandoned for good, and a new approach based on machine learning was quietly and unobtrusively rolled out to the public in November. The resulting improvement in translation quality was epoch-making. AI in the shape of machine learning had come of age.
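For readers who want a picture of what changed, the following toy sketch (mine, not Google’s code) shows the shape of the new approach: instead of rules for matching phrases, the source sentence is folded, word by word, into a vector of numbers, and the target sentence is unfolded from that vector. The vocabulary and sentences are my illustrative inventions, and the weights here are random and untrained, so the output is gibberish; training consists of adjusting those numbers over millions of sentence pairs until the unfolding produces good translations.

    import numpy as np

    rng = np.random.default_rng(0)
    VOCAB = ["<s>", "</s>", "the", "cat", "sat", "le", "chat", "assis"]
    DIM = 16
    EMBED = {w: rng.standard_normal(DIM) for w in VOCAB}  # word -> vector
    W = rng.standard_normal((DIM, DIM))                   # recurrent weights
    W_out = rng.standard_normal((DIM, len(VOCAB)))        # state -> word scores

    def encode(words):
        # fold the source sentence into a single state vector
        state = np.zeros(DIM)
        for w in words:
            state = np.tanh(W @ state + EMBED[w])
        return state

    def decode(state, max_len=5):
        # unfold target words greedily from the encoded state
        out, w = [], "<s>"
        for _ in range(max_len):
            state = np.tanh(W @ state + EMBED[w])
            w = VOCAB[int(np.argmax(state @ W_out))]
            if w == "</s>":
                break
            out.append(w)
        return out

    print(decode(encode(["the", "cat", "sat"])))  # nonsense until trained

Nothing in this data flow parses a sentence or consults a dictionary; everything the system “knows” about either language ends up encoded in the numbers.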
In the journalistic babble associated with Brexit and Donald Trump the significance of this development went largely unnoticed. Eventually, the New York Times published a very long (enormous by journalistic standards) article about it by Gideon Lewis-Kraus called “The Great A.I. Awakening” on December 14th, 2016. Nobody really took much notice. Perhaps it is just that most people are unaware of the significance of the developments taking place around us; perhaps we rely too much on a mistaken belief in our own uniqueness, that our souls make us special. Perhaps Trump’s election is partly to do with just that: white supremacist thinking has grown out of fundamentalist Christian thinking, and the far right in the USA is more closely tied to its deformed and defaced version of Christianity than many like to admit.
The key question is this: if AI can acquire through machine learning the capacity to perform certain actions, such as language translation, to a standard that is indistinguishable from what is done by humans (and in many respects superior to what can be achieved by most humans), is there any reason to doubt that it understands the languages concerned? The Google team describe their machines’ capacity to translate between languages indirectly – a system trained on Japanese-English and Korean-English can perform Japanese-Korean translation to a high standard – as evidence of an “interlingua”; in effect, they are saying that their machines understand languages as well as being able to translate between them.
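The mechanism behind this, as described in the Google team’s published work, is disarmingly simple: a single model is trained on many language pairs at once, with the desired output language marked by a token prepended to the source sentence. The sketch below (the sentences are my illustrative inventions) shows the form of the training data and of the “zero-shot” request:

    # One shared model sees training pairs of many kinds, each tagged with
    # the language it should produce:
    training_examples = [
        ("<2en> 猫が座った",       "the cat sat"),     # Japanese -> English
        ("<2ja> the cat sat",     "猫が座った"),       # English  -> Japanese
        ("<2en> 고양이가 앉았다",   "the cat sat"),     # Korean   -> English
        ("<2ko> the cat sat",     "고양이가 앉았다"),   # English  -> Korean
    ]
    # After training, the same model can be asked for
    #     "<2ko> 猫が座った"      (Japanese -> Korean)
    # although no Japanese-Korean pair was ever shown to it. That this works
    # at all is the evidence for a shared internal representation, the
    # "interlingua".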
Are they right? Is it even the right question? At first many of us would want to cry “foul!”: no, that isn’t the result of understanding; it is just the result of clever manipulation of words. But this may be a mistake, too: maybe understanding isn’t all it’s cracked up to be.
Wittgenstein taught us to believe that “the meaning is the use”, so if a machine uses language correctly and efficiently, in a way that communicates accurately, what more is there to the requirement that there also be something called “meaning” associated with language? This is not – at least superficially – the human experience: introspectively we think that there are two processes, one consisting of understanding meaning, the other of giving meaning expression in language. Being able to express the same thought in more than one language suggests that this is a correct view: one meaning, many modes of expression, so the two are not the same. Ergo the meaning is not the use.
But press “pause” for a moment. Is our introspection accurate, correct, robust? In what respects exactly do we have access to meaning in the absence of language? Aristotle observed that “the mind never thinks without an image” (De Anima), and Wittgenstein would say that the mind never thinks without a language; if they are right, our subjective impression that we have access to things called “meanings” independently of things called “languages” is an illusion.
The subject is sufficiently important to merit further comment. We subjectively have a sense of the pregnancy of thought; we intuit that something is coming, and it is coming in an area of interest we have studied and reflected upon in a process that Michael Polanyi liked to call “indwelling”: we immerse ourselves in things and, if we are rewarded for our pains, interesting and sometimes important thoughts just “come to us”. But in what clothing or guise do they come? Usually either in words or in images, or perhaps in music or movement. And if they do not “come” in any of these guises, in what sense could they be said to have “come” at all? It is like a joke I used to tell my philosophy classes: “I am the world’s greatest concert pianist, as would immediately be apparent to you were I ever to have taken the trouble to learn to play”.
The point is that what we call perception of “meaning” is in reality an awareness of further potential, a sense of the pregnancy of our minds or a field of study or a situation, a sense that “something [else] will come”. It is an emotional state, not a cognitive state; while it remains an intuition of potency it has, as such, no shape and no content; it is more like a signpost pointing towards the unknown than a landscape that lies in view; and as soon as it acquires content it also acquires form, and the form will be linguistic, artistic, musical or kinaesthetic.
So the AI question changes: given that AI engines can as a matter of fact produce content in the form of language, art, music and movement, and given in particular that they seem to be able to translate between two languages without an explicit association of one with the other except through a third language, should we be ready to credit those machines with the same capacity to perceive the potential of their cloud of (un)knowing that we call human intuition and the perception of meaning? For this would be what an “interlingua” amounted to unless it were a different kind of thing altogether. And the latter is the most interesting possibility of all: the presence among us of what Garry Kasparov famously described as “a new kind of intelligence”.