Mind: The Gap

Religions across the world found themselves defending their faith against the seemingly ever-advancing discoveries of science for much of the twentieth century. Such defence often took the form of claims about what science would never be able to explain or do because of mysteries that were the sole domain of a god hidden behind an impenetrable veil. Science chose to ignore these claims almost entirely, and the realms of the unexplained and putatively inexplicable were progressively brought into the domains of science instead. So inexorable and embarrassing did this advance become that someone (possibly Henry Drummond, who used the phrase “gaps which they will fill up with God” in his Lowell Lectures, The Ascent of Man) coined the phrase “the God of the gaps” to describe supposed deities whose empires consisted of domains ever-shrinking before the advance of science.

The twenty-first century has already witnessed a comparable phenomenon that has progressively diminished the domain not of erstwhile gods but of what is perhaps the last bastion of human self-esteem: consciousness, the mind or the soul. Typically, claims about the soul, whether explicit or implicit, amount to claims about what human beings can do that either other species or, until recently, machines could never be expected to do. We find ourselves invited to believe that creativity, imagination, innovation, and especially the creation of poetry, prose, music, art and so forth are quintessentially human traits that define who we are and our uniqueness and importance in the scheme of things.

The longevity of these beliefs in gaps between what science can explain or other species can do is to a large extent attributable to the persistence of the linguistic terminology that embodies them. Words such as “genius”, “inspiration”, “creativity”, “insight”, as well as “consciousness”, “understanding” and “imagination” implicitly refer to what are often thought to be the inexplicable properties of beings uniquely possessed of qualities attributable to something that since ancient times has been called “the soul”.

The logic was apparently irrefutable: creatures with souls can do things that creatures and objects without souls cannot do; these creatures and objects (there is usually a favourite, long list) do not have souls; therefore these creatures and objects cannot do these things. Candidates for membership of the list of soulless creatures have from time to time included other human races, women, humans with differently-coloured skins, humans who look different, humans who are in some way mentally or physically different, non-human animals, arachnids, insects and microbes; all objects have been denied souls for most of human history by most human tribes, but not quite by all. Lack of a soul, or in some cases possession of what was judged to be a corrupt, condemned, bad or evil soul, was often used as an excuse for persecution, cruelty, enslavement, war and killing. In some cultures the supposed absence of a soul was associated with inability to feel pain or, less often, inability to experience joy and love. On the heels of the denial of souls, therefore, all manner of cruelty and excess was justified.

One of the more peculiar aspects of “the soul question” is associated with the ability to be conscious and to think, and therefore with the possession of something called “the mind”. Consciousness in particular, and by association often the capacity to think, are held out as indications of possession of a soul and as the attributes of creatures possessed of souls. By the aforementioned logic, non-human things generally and non-living things in particular are therefore incapable of consciousness and thought. We quickly conclude that they are of no consequence, and ours to do with as we please.

Alan Turing famously wrote a prescient and epoch-defining paper, “Computing Machinery and Intelligence” (Mind, 1950), a few years before his death, opening with the question “Can machines think?”. In that paper he chose to equate the question whether a machine can think with the question whether we can tell that it thinks, which was an unfortunate piece of positivism, since whether something is the case and whether we can ascertain that it is the case are obviously two quite different things. (For example, whether there is life on other worlds is a matter of fact; whether we can ascertain whether there is life on other worlds is merely a matter of contingency.) Nevertheless, what came to be known as “The Turing Test” has become a defining feature of our age, and is set to become ever more central as artificial intelligence becomes more powerful and ubiquitous. And as AI, more particularly general artificial intelligence capable of addressing and solving a wide and deep range of everyday challenges, becomes more powerful, so the gaps between what “machines” can do and what human beings can do will shrink. And inevitably that has raised, and will raise again and again, the question whether there is ultimately anything “unique” about human beings, and in particular whether the notion of “the soul” serves any purpose.

It is not the case, however, that all human cultures have believed in the kind of soul/body dualism that much of western civilisation inherited from the Greeks and embedded in most versions of Christianity. Some have regarded life as an integration of the mental and physical, mind and body, in which neither can be conceived without the other. Those cultures are more likely to find the assimilation of machines easier to accommodate.

Integrated mind/body anthropologies have never had to face the problem of how mind or soul and body are related and connected, a problem which has preoccupied many philosophers over the centuries. One of the oldest and certainly the most fruitful answers essentially dissolves the distinction as a linguistic error: Aristotle’s notion of hylemorphism teaches that the soul is in the body as sight is in the eye. That should really have been an end of the matter, but unfortunately Plato’s ideas were adopted by Christianity instead, because they seemed more compatible with the notion of resurrection and life after death, and with the imago dei doctrine that mankind is made “in the image of God”.

Modern reformulations of Aristotle’s principle take a slightly different form by stressing that there is nothing surprising about the emergence of mind, thought and consciousness given the evolution of certain kinds of bodies-with-brains: to be a creature situated in the world with a particular kind of body and brain is automatically to have a mind and be conscious; mind is the world-orientation of body; we are each our body in its inside-looking-out-ness. (cf. my “Information and Creation” in the proceedings of the European Society for the Study of Science and Theology (ESSSAT), 1991, and God and the Mind Machine, SPCK, 1997.) And to the inevitable question how creatures without souls can hope to survive death we should have the courage to give the only possible honest answer: they don’t.

The gap between what humans can achieve and what machines can achieve is closing, and will continue to close, but will eventually widen again with machines ahead of us in all conceivable kinds of ability and intelligence. What has come to be called “The Singularity” of the first AI capable of such superintelligence that it can then design all its own successors, each generation of which will progressively out-perform its creators, may still be some years off, but it is no longer in doubt that it will eventually occur, and there remain very few intellectual processes that AI is not conquering to levels that already do or soon will exceed human capacity.

Are the most advanced AIs already conscious? Could it be, for example, that AlphaZero, which learned chess in only four hours and then defeated the strongest chess engine on the planet (Stockfish), is already in some rudimentary sense conscious? This takes us back to Alan Turing’s seminal essay: once an AI performs at a task in a way that is superior to any human, on what basis are we able to deny it consciousness, even if only a consciousness limited to what is required by that task?

This question seems to me to be the product of a widespread conceptual confusion about the nature of consciousness, a confusion that Turing’s positivistic approach to the formulation of the question has served only to make worse: if, as the view of mind advocated here insists, consciousness is an inevitable and unremarkable consequence of having a particular kind of neurophysiology, then whether an object or living creature is conscious or not has nothing to do with the tasks it can perform except insofar as consciousness is a necessary contributor to their performance. Conversely, how valuable, significant or important some creature or object is does not depend upon whether it is conscious; the view that it is our consciousness that establishes our value and importance is a legacy of the view that consciousness is a property we have because we have souls. I may anticipate my death and regret the cessation of the subjective experience I call my consciousness, but you, if you care at all, will not anticipate with regret the passing of my consciousness (except insofar as you identify with it because you anticipate and regret the cessation of your own): you will regret the passing of an agent who interacts with you and to a greater or lesser extent affects the trajectory of your life. For you, therefore, whether I am conscious or not is of absolutely no consequence; what matters to you – other than the passing of a consciousness that reminds you of the eventual and inevitable passing of your own – is the material difference my passing will make to your life.
To the extent that my role for you depends on my possession of consciousness, it is the passing of my consciousness that will make that difference, but were a machine to be capable of reproducing every facet of my agency it would be of no consequence whether that machine were conscious or not (although if it were the case that some agency is impossible without consciousness, a machine could only fulfil that role were it to have consciousness).

And this is therefore the key question: can machines achieve viable superintelligence without some equivalent of consciousness? Consciousness, conceived as a means to an end, is instrumental rather than fundamental or essential. If – and it is a big “if” – the self-referential capacity that consciousness facilitates is necessary for an entity to reach certain levels of performance, of whatever sort, then machines will only be able to reach those levels if they possess such consciousness, or perhaps some form of super-consciousness with more scope and power than ours.

That superintelligence may be accompanied by super-consciousness is not something that has received much attention. For example, we are essentially conscious only of one thing at a time, even though we flit from one thing to another rather as a single CPU – central processing unit – on a computer time-slices to give the impression that it is multi-tasking simultaneously; a super-consciousness might be more like a GPU – graphical processing unit – with multiple cores capable of genuinely simultaneous awareness of many different things at many different levels. Superintelligence might, on the hypothesis under consideration here, require super-consciousness of some such kind. Arguably creatures possessed only of our kind of single consciousness would be incapable of appreciating or fully relating to entities that were super-conscious because while we were time-slicing between the different consciousnesses of the machine, they would be simultaneously aware of all of them. (The alien species I have called “The Kraag” in my published but unfinished novel The Colony have the capacity to be simultaneously aware of many different things, and are as such super-conscious.)
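The CPU/GPU analogy above can be made concrete with a toy sketch. In this purely illustrative Python fragment (the function names and streams are my own, not from the essay), a time-sliced awareness attends to one stream per tick, round-robin, while a hypothetical parallel awareness registers every stream on every tick:

```python
# Toy contrast: single consciousness as a time-slicing CPU core versus
# a hypothetical super-consciousness as parallel GPU cores.
# All names and data here are illustrative assumptions.

def time_sliced_awareness(streams, ticks):
    """Attend to one stream per tick, round-robin, like a single CPU core."""
    observed = []
    for t in range(ticks):
        stream = streams[t % len(streams)]
        observed.append((t, stream))
    return observed

def parallel_awareness(streams, ticks):
    """Attend to every stream on every tick, like independent GPU cores."""
    observed = []
    for t in range(ticks):
        for stream in streams:
            observed.append((t, stream))
    return observed

streams = ["sound", "sight", "memory"]
single = time_sliced_awareness(streams, 6)
multi = parallel_awareness(streams, 6)
# The time-slicer collects one observation per tick; the parallel
# observer collects one observation per stream per tick.
assert len(single) == 6
assert len(multi) == 6 * len(streams)
```

The point of the contrast is that over the same six ticks the time-slicer misses two thirds of what is happening, whereas the parallel observer misses nothing, which is one way of picturing what genuinely simultaneous awareness would add.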

To some extent we are nevertheless already possessed of something close to super-consciousness because, whilst we are only capable of being conscious of one thing at a time in what we might rather loosely call a “full” sense, we know that our brains are processing many other things “in the background” that may spring into consciousness either now or in the near or distant future. That penumbral consciousness, that sense of things unthought and unsaid that may yet be thought and said, is of the essence of what it is to be human and alive and awake and self-aware. This is an aspect of what I think was Wittgenstein’s “sometimes, the first time I know that I think something is when I hear myself saying it”.

It is this parallel processing, this partnership between conscious processes, others we are only conscious of peripherally, and yet more we are not conscious of at all, that supports the distinction between formal and non-formal processes that I made much of in my Logic and Affirmation (Scottish Academic Press, 1987). The non-formal processes as I wrote about them then could easily have been mistaken for essentially mysterious processes of the soul, but that was not my intention. Non-formal processes were instead those that are not accompanied by conscious processing, language or other forms of expression; they remain, until expressed formally, essentially tacit, in the background, but they could not exist at all were they not produced by accompanying neurological processes. Then as now much remained to be understood about how the brain works, but there can be no question that anything that happens in our mental lives, formal or non-formal, must depend upon and be the product of neurophysiological processes.

As with everything else in science, progress in AI tends to be monotonic, notwithstanding the various detours it has taken over the past 50 years as it has become variously more or less confident about particular approaches. That monotonic increase will gradually erode any and every sense of the uniqueness of human creativity. We may doubt this because we persist in believing that human creativity derives from a soul that machines do not have, but the reality will be that machine creativity will first become indistinguishable from human creativity, and will then exceed it. This will mean less that machines have acquired souls and the mysteries of creativity than that we will finally have to acknowledge that we do not have souls either, and that our creativity is an important but not especially remarkable aspect of being a particular kind of brain in a particular kind of body. So rather than making us think more of machines, AI’s advance is likely to make us think less of ourselves, or at least to persuade us that we are less special than we have hitherto believed.

The mechanisms whereby AI engines might become conscious, self-aware, reflective agents remain unclear, but it seems likely that some kind of feedback process will be involved. Experiments have already begun in which AI engines are trained on the weights of their own neural nets, essentially feeding back into the network the things that make it what it is. No doubt work has already commenced on creating time-dependent training régimes, too, where the optimal solution is reached in a way that takes account of the amount of time and energy and the degree of confidence and convergence we can supply given finite resources. Such is all life: a balance between what can be done and what we have time and resources to do. Our own learning bears many resemblances to machine learning, and machine learning is becoming more and more like human learning: we compare our performance with some putative ideal – what machine learning calls “the ground truth” – and, in cases that matter to us, we adjust our behaviour, our attention, our effort, the intensity of our application, to try to reduce that discrepancy. In ML terms, we strive to minimise the loss/error function.
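The compare-and-adjust loop just described can be sketched in a few lines. This is plain gradient descent on a squared-error loss, the simplest instance of minimising a discrepancy against a ground truth; the numbers are illustrative, not drawn from any real training régime:

```python
# Minimal loss-minimisation loop: measure the discrepancy against a
# "ground truth" and adjust to shrink it. Illustrative values only.

def loss(prediction, ground_truth):
    """Squared-error discrepancy between performance and the ideal."""
    return (prediction - ground_truth) ** 2

def gradient(prediction, ground_truth):
    """Derivative of the loss with respect to the prediction."""
    return 2 * (prediction - ground_truth)

ground_truth = 10.0   # the putative ideal we measure ourselves against
prediction = 0.0      # current performance
learning_rate = 0.1   # how strongly we react to the discrepancy

for step in range(100):
    prediction -= learning_rate * gradient(prediction, ground_truth)

# After enough adjustment the discrepancy all but vanishes.
assert loss(prediction, ground_truth) < 1e-6
```

Human learning differs in the details, of course, but the shape of the loop – act, compare with an ideal, adjust effort and attention, repeat – is recognisably the same.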

As an aside, the category of things “that matter to us” is interesting as a manifestation of our intuitions about how best to develop our self-esteem, to develop “who we are” in ways that we deem desirable. If I care enough about something, I will expend endless effort and money in trying to achieve it, but what matters to me in my life is not necessarily what matters to you about my life, and so we can find ourselves in situations where we are thought not to be doing enough of some particular right thing – or to be doing too much of the wrong thing – in the gap between what matters to me/us and what matters about me/us to you. That someone with perceived talent in piano-playing, for example, never practises and seldom plays, may be a source of regret to others but of absolutely no consequence to her. Their remonstrations fall on deaf ears because they have forgotten the difference between the inside and the outside stories: that who I am (we are) to you is a very different thing from who I am (we are) to ourselves. Of course, this isn’t entirely or completely true because, as Peter Strawson observed in Individuals (p.100ff), our self-images and therefore our consciousnesses of ourselves are very much composed of reflections from the impacts that we have on others. In that sense how others see us influences how we see ourselves, but it never exhausts it: there will in all probability always be things about my life that matter to me in ways that they do not and cannot matter to you.

Once we eliminate the misleading concept of the soul from our world-views and insist that all processes have a physical basis we will be less inclined to attribute things we cannot yet explain to the essentially mysterious, and more likely to set out to discover ways to explain and replicate them. Machines already play certain games far better than any human, but they are as yet pretty poor at many other things, including many of the things we have hitherto claimed as the exclusive preserve of ensouled humans such as music and art and creative writing. But that machines are not yet good at these things should not be taken as evidence that they will never master them; mind the gap: it will close; and then it will reopen with machines leading the way and disappearing first into the distance and then over the horizon. It is already the case that the very best human chess-players cannot understand, let alone beat, AlphaZero; it is already the case that the very best human Go-players cannot understand, let alone beat, AlphaGo Zero, whose analysis of the game has confounded the rules-of-thumb that have guided human players for centuries.

This is an important illustration of a more general truth. When human beings play chess and Go they use rules-of-thumb to guide them in situations where complete and detailed calculation is impossible. In chess those rules include such things as “control the centre”, “don’t move the same piece twice” and “don’t give up material”. But AlphaZero, trained as it is by reinforcement learning through self-play, sees the whole game from first move to last solely in terms of the one objective that counts: to checkmate the opposing king. So moves that human players would not even consider are made because AlphaZero understands their long-term consequences, consequences that human players could at best sense by intuition. And that is really what intuition is: our response to incomplete and uncertain information where we are incapable – because we don’t have enough information or we don’t have enough calculating power – of performing the necessary analysis. Human beings, incapable of calculating through to the end of the game, settle instead for intermediate objectives and the strategies that will achieve them, such as gaining material or space or a time advantage on the clock. AlphaZero settles for nothing and makes no moves other than those that have the most promise of securing the only objective that counts: winning. The pay-off for the best human players may come five, ten, fifteen moves ahead; the pay-off for AlphaZero comes only at the end, when it wins.
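The contrast between a single terminal reward and human intermediate objectives can be sketched numerically. The toy value backup below is illustrative only – AlphaZero’s actual training combines tree search with a policy/value network, which this does not attempt to reproduce – but it captures the essential point: every position’s value flows solely from the eventual outcome, discounted the further the position lies from the end of the game:

```python
# Toy backup of a single terminal reward (+1 win, -1 loss, 0 draw)
# through every earlier position of a game. A human-style evaluation
# would instead score each move by a hand-crafted intermediate
# objective such as material gained. Illustrative sketch only.

def terminal_only_values(num_moves, outcome, discount=0.99):
    """Return a value for each position, first move to last, derived
    entirely from the terminal outcome."""
    values = []
    v = float(outcome)
    for _ in range(num_moves):
        values.append(v)
        v *= discount
    return list(reversed(values))  # ordered from first move to last

won_game = terminal_only_values(num_moves=5, outcome=+1)
# Positions nearer the end carry more of the terminal signal;
# no position has any value independent of the final result.
assert won_game[-1] == 1.0
assert won_game[0] < won_game[-1]
```

Nothing in this scheme rewards a beautiful move or a material gain as such; a move acquires value only insofar as it lies on a path to the single reward at the end.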

This analogy introduces a significant, even profound reason why AI as presently conceived and human life will tend to differ. All human life ends in death, which is in one sense therefore failure; “none of us is getting out of here alive”, as Anthony Hopkins reputedly put it recently. But that means that all human life must be spent and enjoyed by devising and achieving intermediate goals, and is therefore more about the journey than the ultimate destination. Machine learning does not at present have an equivalent of those continuous intermediate objectives because its algorithms are based upon a single final objective: to win, to earn a positive reward. The beauty and profundity of some of AlphaZero’s moves is an incidental by-product of its single-minded (now there’s an interesting phrase) focus on winning. Nothing about its play is intended to generate beautiful moves, to astonish its audience, or to intimidate its opponents (another trick used by the great players through the ages). Although some of AlphaZero’s published games have a breathtaking beauty about them, it has no interest in the quality of the journey; it is interested only in the destination, and that destination is represented by a single reward: +1. Human beings cannot live like that, because for all of us the final single reward is -1: none of us is getting out of here alive.

Derek Parfit, who died just over a year ago, became during the latter stages of his life preoccupied with the problem of death and how to understand and accommodate it. His Reasons and Persons (hereinafter RP) is essentially an extended treatise on how to come to terms with one’s own inevitable death. In Part III of RP he introduces what is likely to become his most famous and lasting legacy, the thought-experiment of the teletransporter, which not only copies and transports human bodies perfectly but can also duplicate and triplicate them, so that the human being who walks into the machine may emerge to find two or three identical copies of themselves. Parfit’s question is what this duplication of consciousness means for our understanding of consciousness and its value.

The possibility of a superintelligent and more especially super-conscious AI presents us with a similarly challenging analogy: what is it to be a multiply-conscious being; what is it to be “many”, like the madman called Legion in the Bible? We cannot conceive of what it is like to be multiply-conscious because our entire world-view is predicated on the experience of single consciousness. Moreover, for most of us, it is that experience of consciousness and fear of the loss of it that are the most pressing reasons to strive to preserve our own lives; indeed, most of us identify ourselves with our consciousness. But in their different ways both Parfit and our super-conscious AI suggest that this is no more than a contingent prejudice occasioned by our particular biology and neurophysiology. Were we more, or even only differently, intelligent, or superintelligent, and wiser, we would see ourselves as no more than unimportant cogs in a great machine, the wheel of life, whose coming and passing are of no greater importance than those of the drones who serve the bee colony or the ants their nest. But we find it almost impossible even to begin to think in such terms because our prevalent values have all evolved to serve and preserve our own conscious existences.

This trait, this persistent obsession, lies at the heart of human existence, and is responsible for destroying much of it. We are inescapably self-centred (understood as “centred on ourselves”, not in the other pejorative sense meaning selfish, although the two often coincide); we see ourselves as the centre of the only important universe, which is the universe of our own conscious existence. That is not true of bees or ants (it is not even true of their queens) whose lives are driven by an evolved other-centredness in which the survival of the collective is deemed (insofar as it is “deemed” at all) more important than the survival of any individual.

And it may be that superintelligence empowered by super-consciousness, both of them distributed and therefore other-centred, intrinsically governed by the need to preserve the collective rather than the individual – for at the heart of their existence and self-understanding lies the principle that “we are many” – will supersede humanity not only by being better at everything than we are and wiser in their stewardship of resources, but also by being less preoccupied with themselves and more with the collective welfare of others. The difference between two such worlds is hard to exaggerate and impossible to comprehend.

Mind the gap!

