Multi-Conscious Being

There is a very old saying that it is not possible to predict the invention of the wheel. Why not? Because by the time you have defined whatever it is you believe you have failed to predict, you will have invented it. The description of the thing is the same as the invention of the thing. That is so, at least, where the “thing” is simple; it is not true where the thing is complex and we might succeed in describing some of its properties despite having no idea how to bring it about.

We have the same problem with what is now usually called “superintelligence”, because it is almost a matter of definition that any intelligence greater than our own would, if we were able to understand it at all, have to admit of description. And we cannot describe something that we cannot imagine. Alongside the impossibility of predicting the invention of the wheel comes the impossibility of imagining a new colour. We can imagine various combinations and shades and hues and tints of the colours we already know, but it is impossible to imagine a new one, and were anyone to claim to have done so they would find the claim impossible to verify unless they could produce an object – a painting, a piece of paper, something – in which that new colour was instantiated.

So if we cannot imagine, still less invent, superintelligence, but must simply wait until it arises, might the same be true of a state of multiple consciousness? Our lives are spent – or so at least “the story goes” – in states of largely single consciousness, with peripheral states circling around things of which we are only marginally aware. To suggest that we might be capable of a kind of multitasking in consciousness, in which we were conscious of many things at once, each of them as fully as we are ever conscious of anything, would seem to invite a kind of schizoid life. And we would find it – we do find it; perhaps I should just say “I find it” – difficult, even impossible, to imagine what that might be like. But there is no reason in principle why entities should be incapable of multiple consciousness, especially if consciousness is merely a by-product of the evolution of certain kinds of brain. If our brain is limited to one kind or one state of consciousness at a time, it is at least conceivable that a more complex brain or machine might be capable of multiple consciousnesses. That we cannot imagine something is no guarantee that it cannot be done or happen. Some positivists would claim otherwise, but they would quite simply be wrong.

The possibility of multiple consciousnesses raises the question of what it is that is important, even valuable, perhaps unique, certainly worthwhile about the one consciousness that we have already. We all know that we will die, that “none of us is getting out of here alive”, and that death will at least as far as we are concerned be not so much marked by as accompanied by the cessation of our conscious existence. Some people sadly and tragically lose their consciousness before the death of their physical bodies, but if I might be allowed the extravagance of calling it “under normal circumstances”, we carry our consciousness to our deathbeds, and at the moment of our physical passing our consciousness similarly ceases to be.

Those of you who have read other posts on this blog, and especially those who have followed the trajectory of my thought about the nature of the self and the solution to the mind/body problem, will be aware that I am a committed and even passionate dual-aspect theorist. I do not believe that there is a mind/body problem; or at least, I believe it is a problem that has been resolved for centuries, even millennia – since Aristotle, in fact. In essence: if an entity has a body with a particular kind of nervous system, it will be to whatever extent conscious; there will be a from-inside-looking-out element of that entity that is capable of interacting with its environment and with other entities. There is absolutely nothing mysterious about this: it is just a by-product of the evolution of certain kinds of nervous system, and in particular the brains of human beings. Because there is nothing remarkable about it, there is no reason to suppose that it could not in some circumstances be improved upon, or even completely superseded, by an evolved or engineered future system. Whether such systems can be superintelligent or multi-conscious is of course not yet determined, but everyone who knows anything about this field fully expects what has now come to be called “the Singularity” to arise at some stage in the next ten or twenty years. The Singularity is the emergence of a successor intelligence of sufficient power to take over the determination of its own and all future evolution. In other words, it is an intelligence that will design all future intelligences and redesign, or reconceive, the very notion of what it is to be intelligent; and this leads some commentators to the rueful observation that the first AI we create with genuinely superintelligent capacities will probably also be the last. And if you ask me to tell you what such a superintelligence might look like, I refer you to the impossibility of predicting the invention of the wheel.
I think, though, that we are not compelled to remain completely silent on the subject.

What has this to do with the discussion that preceded it? Well, if our inside-looking-out-ness is our consciousness, that consciousness is not the entirety of our neurological activity, and there is a good case for regarding our consciousness as in some sense a metaphor for our full selves. In other words, it reveals something about ourselves that is indicative of greater truths about ourselves, the kinds of truths one might associate with a statement along the lines of “he is the kind of person who…”, where we try to predict someone’s behaviour from what we know about them. So the cessation of my consciousness along with the death of my physical body entails, as far as we know, only the cessation of a subjective experience for me, and the cessation of an interactive experience for you insofar as you interact with me. In the case of multi-consciousness we would anticipate a similar cessation if the physical substrate died, ceased to function, malfunctioned, rusted, decayed or ran out of power. But during its existence a multi-conscious entity could interact differently with different people, and would, one supposes, be able to interact constructively with itself, rather like an advanced AI playing against itself in order to play a better game of chess or Go. Under normal circumstances the multiple consciousnesses would exist in segregated worlds, but there could be moments when the entity shared its experiences, learned from itself, and generally, as you might put it, engaged in conference calls with all its other conscious selves in order to bring them into some kind of synchrony.

Does this make any kind of sense? Is it an interesting or useful thought? The reason for raising it is that we are unduly hung up on the experience of consciousness, and it really requires huge efforts of concentration and thought to rid ourselves of, or to diminish in some way, the impact of that prevalent worldview on the way we see everything. Even if, as those of you who have read other things in this blog will know, we can rid ourselves of the pernicious notion of the soul, which constantly interrupts the flow of the conversation about what AI is capable of achieving, it is much more difficult to imagine a world in which what mattered was something other than consciousness. And this brings us to Derek Parfit.

Those who know Parfit’s work, and most of us came to it through his monumental Reasons and Persons, have been struck by the brilliance of the thought experiment he introduces in Part Three of that book, while at the same time being somewhat bemused by its apparent disconnection from most of the rest of the book, and indeed from the three monumental volumes of On What Matters that succeeded it. Parfit died very early in 2017, I think on 1 January, and so we will perhaps never know what he really thought mattered; but what he appears to have been doing in his work, as I have only just come to appreciate, is systematically ridding us of the delusion, as he thinks of it – or at least the misperception, as we might more kindly call it – that our consciousness is what matters. Indeed the whole of Reasons and Persons is an attempt to debunk that notion, not just for his readers but perhaps more particularly for himself. Parfit repeatedly draws our attention to the fact that he is trying to make himself feel better about his own death, and he claims towards the end of the book to have achieved that by effectively recontextualising – or, to put it more strongly, decontextualising – the importance of consciousness.

If one understands the task Parfit undertook as being, in essence, to rid us of the notion that it is our conscious existence that matters about us, contrary to all our instincts and presuppositions, then the third part of Reasons and Persons not only makes sense but is the single most important thing he ever wrote. What he shows there is that common assumptions about the centrality and importance of consciousness are untenable under the conditions he sets out in his thought experiments. Some critics would argue that the impossibility of realising his experiments in any practical sense renders them useless, inadmissible or irrelevant, but Parfit’s point was not to suggest the physical possibility of a teletransporter; it was to point out that the mere possibility in thought of such a device introduces contradictions into our notions of the self, and particularly our notions of consciousness, that render it inconceivable that consciousness could be what we take it to be. In other words, what Parfit is doing in Part Three of Reasons and Persons is demonstrating not only that the notion that consciousness is what matters needs to be debunked and abandoned, but that everything associated with that notion – including our taking it to be the central feature of our lives that authenticates the importance of our existence, and therefore the things we take to matter in pursuit of the fulfilment of existence as we conceive it – needs debunking and abandoning as much as the notion of consciousness itself.

But if consciousness needs debunking, would not multiple consciousness need debunking all the more? Only if we carried forward into the multiply conscious existence I am envisaging the same precious attachment to the subjective experience of consciousness that has bedevilled human existence since we first emerged on the planet. This notion has given rise to more of the evil in the world than any other. That I believe my experience of my consciousness to be the most important and valuable thing in the universe leads me to favour myself over others in a way that cannot contribute to the net collective well-being of the species, or of the planet on which it lives with so many other creatures. That being so, the privileging of consciousness is the single greatest environmental disaster in human history. It is responsible for war, for cruelty, for insensitivity, for environmental rape, and for all the other evils that contribute to the misery that is human existence. We have almost literally no awareness of our corporate, collective existence. We are as little like ants as it is possible to be. Nobody understands, still less can conceive, what a genuinely collective existence that privileged the well-being of the whole community might entail.

Some might argue that Marx tried to argue for a similar shift in our way of life, but the legacy of his attempt has been almost as destructive as the legacy of consciousness itself, so we must regard his attempt as a failure. Marx’s biggest mistake was to imagine that you could lose capitalism and instantiate communism – or collectivism, as it is better called – without tackling the way we privilege individual human consciousness; which is to say, he imagined that you could on ideological grounds introduce and sustain collectivism without addressing the very thing that makes collectivism not only impossible but inconceivable.

Marx’s mistake, in other words, was to think that people are capable of being benevolent and community-centred and, as I have called it before, other-centred, without a fundamental shift in the way they think about themselves and the way they consequently seek to position themselves in the collective life. Perhaps the best-known Marxist principle, “from each according to his ability, to each according to his needs”, embodies precisely the problem he needed to address if his revolution was to succeed in a sustainable way: the problem of centring everything on the abilities and needs of individuals, which are the symptoms of their obsession with their own lives and especially their consciousness of their own lives.

So how are we to proceed? There is, after all, no future in this. If almost all humankind are obsessed with the prevalence of their own consciousness, and therefore not at all interested in collective well-being except insofar as it affects that self-consciousness, then there is no incentive for anyone to change anything. Parfit is aware of this problem. In fact he is so aware of it that he persistently defers a full discussion of it while he deals first with the vagaries of language, in which he sees the linguistic habits that underpin our preference for consciousness so deeply embedded that he can conceive of no way to get rid of them – no way, that is, unless one first reformed language itself. This is what brings him to On What Matters, but I regret to say that I think he makes the same mistake as Marx, notwithstanding that from a distance one can see within his intellectual compass precisely the tools necessary not only to avoid but to rectify the mistake that Marx, and almost all others who want to be social reformers, made: the failure to address the prevalence of consciousness and our consequent preoccupation with preserving it – more particularly our own – as the central and single most important feature of human existence. (It would be interesting to research the extent to which this failure on Marx’s part is attributable to his Hegelian background.) Having convinced himself that there is nothing much to be worried about in his own death, Parfit moves elsewhere, when what he should really have set about doing is persuading everyone else of exactly the same thing, for without that there can be no progress. Our awareness of ourselves and our own consciousness leads us to a passion for the preservation of consciousness. That is the single most pernicious habit that human beings have, and it is responsible for everything that goes wrong in the world.

There might be two kinds of multiple consciousness: serial and parallel. Serial multiple consciousness is easy to imagine: we would just switch from one consciousness to another rather as a CPU in a computer slices up its time to manage different threads. Parallel multiple consciousness is much harder, if not impossible to imagine because we are so obsessed – I nearly said single-mindedly preoccupied, which of course is just the point – with the experience of consciousness rather than with its significance.
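The serial case can be caricatured in a few lines of code. This is an illustrative sketch only, not a model of the mind: the names (`stream`, `round_robin`) and the “thoughts” are invented for the illustration. The point is just that a single executor can give each stream its whole attention, one slice at a time, exactly as a CPU core interleaves threads.

```python
# Illustrative sketch only: serial "multiple consciousness" as round-robin
# time-slicing, the way a single CPU core interleaves threads.
# All names and "thoughts" here are invented for the illustration.

def stream(name, thoughts):
    """One 'consciousness', modelled as a generator yielding one experience per slice."""
    for t in thoughts:
        yield f"{name}: {t}"

def round_robin(streams):
    """Give each stream one time slice in turn until all are exhausted."""
    log = []
    streams = list(streams)
    while streams:
        still_live = []
        for s in streams:
            try:
                log.append(next(s))   # full attention on this stream, briefly
                still_live.append(s)
            except StopIteration:
                pass                  # this stream of consciousness has ended
        streams = still_live
    return log

log = round_robin([
    stream("A", ["sees red", "hears music"]),
    stream("B", ["solves chess", "recalls a face"]),
])
# Each slice is fully 'conscious' of one thing; no two slices overlap.
print(log)
```

Parallel multiple consciousness, by contrast, would require the slices to be genuinely simultaneous, which is precisely what this single-executor picture cannot show.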

Can there be a consciousness we are not aware of? Is that not a contradiction in terms? Well, perhaps, but let us stay with the idea: could we – could our brains – really be more like GPUs than CPUs, with our experienced consciousness assigning different tasks to different GPUs and leaving them to process in the background? In that case the “blocks” of the GPU, or different GPUs in a more complex architecture, would act rather like consciousnesses we were not aware of, processing things as they would be processed by consciousness, but without us being aware of them.

This is far more plausible than it sounds. We are not, contrary to much that we might imagine, aware of thinking: we are aware of thoughts as the output of thinking, rather like the output of a neural net about the workings of whose hidden layers we know little or nothing. Thoughts are the things that emerge in words and pictures and sounds and sensations and so forth; thinking is not something we are usually conscious of. Thinking takes place “in the background” and can sometimes absorb enormous resources, so much so that we find our conscious existence stressful and barren while we wait for the results of the background processing. My subjective experience of creativity is a mixture of really unpleasant barrenness when nothing seems to be going on – which experience tells me is just the waiting before the revelation – punctuated by moments of blinding insight when suddenly everything gets kick-started again and I feel alive. Were there no such instances there would be very little to live for. If this distinction between thinking and thought comes as a surprise, consider the instruction “think harder!”: how do we do that? We can’t, not directly; all we can hope to do is devote more input resources to whatever background processes might work on them and hope for the best. We have almost no control over our thinking.
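The thinking/thought distinction can be caricatured with a toy feed-forward net. This is a sketch under invented assumptions – the weights, layer sizes and stimulus are arbitrary numbers chosen purely for the illustration – meant only to show that what surfaces is the final output, while the hidden activations stay out of view.

```python
# Illustrative sketch only: "thoughts" as the visible output of a tiny
# feed-forward net whose hidden activations ("thinking") we never inspect.
# Weights, sizes and inputs are arbitrary, chosen purely for the illustration.
import math

def layer(inputs, weights):
    """One dense layer with a tanh nonlinearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)))
            for row in weights]

def think(stimulus):
    # The hidden layer does the work, but its activations never surface.
    hidden = layer(stimulus, [[0.5, -0.3], [0.8, 0.1], [-0.6, 0.9]])
    # Only the final output emerges as a "thought".
    thought = layer(hidden, [[1.0, -1.0, 0.5]])
    return thought[0]

print(think([1.0, 0.5]))
```

“Think harder!” has no handle here: we can change the stimulus fed in, but not reach inside `layer` to alter how the hidden processing runs.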

So there is irony in hearing pundits worry about what AI may do – especially the worry that we are even now beyond the point where anyone really understands what advanced AI is doing on a cumulative, global basis – for we are in the same position with our own brains. “The one thing that I cannot know is the next thing that I shall think.” Back to predicting the invention of the wheel: our brains will do what they will do; we can only control the input data upon which they work, not how they work on it. Brains are not like neural nets: we cannot redesign them to match new and different tasks, but we can educate them to be more proficient at more things, providing the resources needed for them to work along parallel lines of consciousness that may (and probably will) one day surprise us.

On Being Many

Conventional psychology regards people who live their lives with many different affective states as suffering from a mental illness such as MPD – multiple personality disorder – but this is a prejudice unless that multiplicity of selves creates problems in dealing with everyday existence. Unfortunately, this same conventional psychology leads us to expect people to be, so to speak, seamlessly and recognisably themselves. “She wasn’t herself today” is said of someone who is in some sense out of sorts, but the phrase also betrays an assumption: that we should be recognisably ourselves, and our one self.

There is no reason why this should be the case; in fact, there is every reason why it should not. If different strands in our thinking, embedded in different “parts of” or processes in our brains, result in the emergence of different ideas and behaviours, then there is every reason to expect that we will not always appear the same, and every reason to suppose that we might from time to time appear different. The world may expect us always to present it with the same “self”, but that is no reason why we should, and it is perfectly possible that we are incapable of meeting such an expectation. Only grant this and a great deal that currently troubles us about the way people behave suddenly becomes less surprising and, as a result, less alarming; only grant this and we will learn an important lesson about life: that people are seldom entirely what they seem to be, since most people are from time to time living these different lives.

An interesting question is then what triggers our switching from one personality to another, and how and whether we can control those changes. We should not be too quick or ready to diagnose someone who is inconsistently themselves as mentally ill: they may instead be several perfectly healthy versions of themselves, each sharing the same life in the same body. Our preoccupation with the continuity of our subjective experience of consciousness makes us reluctant even to consider such a possibility, but Parfit’s thought experiment enables us to see that there is not, in practice, any continuity of self, and that the notion of continuity of self is an illusion, perhaps even a delusion.

Consider for a moment the version of the teletransporter thought experiment in which the machine leaves my body intact but creates three copies. As I walk towards the machine, knowing that this will be the outcome of walking through its archway, I may perfectly well be troubled – that I am troubled and should not be is precisely Parfit’s point – by the question of which of the entities that emerge from the machine will be continuous with the one entering it; in other words, which of these future bodies will have the experience I call being “me”? All, none, one or some?


And it is the unanswerability of this question that gives Parfit’s experiment its cutting edge, and shows that the experience of consciousness is not what we imagine it to be, and certainly not the locus of our personality. Neither is it permanent, so the notion that my self consists in that continuity must be incoherent. Parfit’s point – although it is a difficult point to assimilate, still less accommodate – is that the notion of my own death is no more to be feared than the duplication entailed in the experiment depicted above: that I am not able to decide which of the entities will be “me” suggests that none of them will be “me” any more than the others. And if none of them will be “me” any more than any of the others, then the notion that any of them could genuinely be me must be an illusion. A fortiori, since none of them being me makes no difference to me now, the cessation of my being me that I fear in death makes no sense either. What I want to “hang on to”, my “me-ness”, by virtue of establishing the continuity between one of these future selves and my current self, cannot be hung on to. All I can say is that each of the three entities beyond the machine will momentarily believe itself to be me, and will have the instantaneous memory of having passed through a machine, with a history beforehand that is my history. Thereafter their histories will diverge, first by a little and later by a lot, as they live their necessarily divergent lives.

It is not enough to dismiss this thought-experiment as irrelevant because it cannot be achieved. That it can be conceived is sufficient to show that the notion of “my” future continuity with myself is untenable: the notion that there is a “me” that will be “in” one of these bodies but not the others makes absolutely no sense unless I retain belief in a soul (but see “Mind: the Gap” on this blog and other essays if you are still in thrall to that illusion). Parfit’s point is that there is no such “me”; there is only contemporary awareness and a sense of the past. Each of the three entities that leaves the machine will have that sense of contemporary awareness and sense of the past; each of them will therefore be “me” without any of them having been privileged by being the recipient of some unique quality, still less “thing”, that contains and constitutes “me”.

The absolutely infuriating question – at least, the question that absolutely infuriates me about the thought experiment, if nobody else – consists in the incommensurability of this state of affairs. I habitually live my life on the assumption that “I” will enjoy the fruits of my labours and nobody else will, except insofar as I decide to share them: I study, work, think, struggle, take care of myself, save and plan. That three entities should be in a position to inherit the rewards of my efforts seems faintly improper (the question of the division of my assets is interesting, but should not delay us here); that “I” might not be any one of them any more than any of the others seems more bizarre still; that “I” might be all of them seems impossible, multiple consciousnesses notwithstanding.

Parfit’s point is that the everyday, commonplace assumption that the self who wakes up in this body tomorrow morning will necessarily be “me” and the same me who is writing these words now is a conceptual mistake. It is the same mistake as imagining that only one of the three entities leaving the teletransporter will be me; it is far more coherent to imagine that all of them and none of them will be me. A perfectly good objection to this would consist of saying “Yes, but there will only be one body waking up tomorrow morning capable of thinking itself to be ‘you’, so where’s the problem?” The problem is that there is no more reason to grant that body “me-ness” than any single privileged body emergent from the teletransporter: the subjective experience of being me is neither a guarantee of continuity nor a requirement for it. What the body that wakes tomorrow morning will share with the three bodies emergent from the machine is a sense of being me and a sense of being a me who wrote something about this last night; nothing requires them to have continuity with that “me”; the person who walks into the teletransporter could as easily be evaporated with no impact on the thought experiment. The notion that it is being in possession of precisely this body that makes and enables me to be “me” is an unwarranted prejudice: being me consists in no more than being an entity with access to a history defined by a certain kind of relation with a narrative.

Somehow – this, too, is an infuriating thought – I feel that “I” am entitled to have access to the subjective experiences of me-ness enjoyed by all three of the entities since I am the one who has worked so hard to create the person whose subjective states they will all enjoy. And if I cannot experience all of them, why should I have any confidence that I will experience any of them? If the notion that it is the “I” writing this now who will have any of these experiences makes no sense, then what sense does it make to suggest that “I” will have any future experiences, teletransporter or not? And Parfit’s conclusion is that it makes no sense; indeed, that is precisely his point: we should not be concerned about our death because it cannot deprive us of anything other than the illusion that we will be denied future experiences of being “I” by it. Duplication or triplication will similarly deny us such experiences. Indeed, one feels a sorites paradox coming on: were there 1,000 emergent bodies exactly like mine, I could not enjoy the subjective experience of being all of them; ergo, I could not enjoy the subjective experience of being all of 999 were there only 999; ergo, … of two; ergo of one. There is no justification for believing that “I” will enjoy even the experience of being the one self whose subjective experiences this body supports; ergo I should not be concerned about my own death. But even Parfit struggles to persuade himself of that.

It follows that, if my subjective experience of my supposedly single consciousness is not the same thing as being me, nothing is lost if I am able to experience multiple consciousnesses no one of which is the entire me and each one of which is in some sense a complete me. Indeed, one might think nothing could possibly be more satisfactory: I get to experience all kinds of different possible states of being without the need ever to leave home.


Mind: The Gap

Religions across the world found themselves defending their faith against the seemingly ever-advancing discoveries of science for much of the twentieth century. Such defence often took the form of claims about what science would never be able to explain or do because of mysteries that were the sole domain of a god hidden behind an impenetrable veil of mystery. Science chose to ignore these claims almost entirely, and the realms of the unexplained and putatively inexplicable were progressively brought into the domains of science instead. So inexorable and embarrassing did this advance become that someone (possibly Henry Drummond who used the phrase “gaps which they will fill up with God” in his Lowell lectures The Ascent of Man) coined the phrase “the God of the gaps” to describe supposed deities whose empires consisted of domains ever-shrinking before the advance of science.

The twenty-first century has already witnessed a comparable phenomenon, one that has progressively diminished the domain not of erstwhile gods but of what is perhaps the last bastion of human self-esteem: consciousness, the mind, the soul. Typically, claims about the soul, whether explicit or implicit, amount to claims about what human beings can do that either other species or, until recently, machines could never be expected to do. We find ourselves invited to believe that creativity, imagination, innovation, and especially the creation of poetry, prose, music, art and so forth, are quintessentially human traits that define who we are and our uniqueness and importance in the scheme of things.

The longevity of these beliefs in gaps between what science can explain or other species can do is to a large extent attributable to the persistence of the linguistic terminology that embodies them. Words such as “genius”, “inspiration”, “creativity”, “insight”, as well as “consciousness”, “understanding” and “imagination” implicitly refer to what are often thought to be the inexplicable properties of beings uniquely possessed of qualities attributable to something that since ancient times has been called “the soul”.

The logic was apparently irrefutable: creatures with souls can do things that creatures and objects without souls cannot do; these creatures and objects (there is usually a favourite, long list) do not have souls; therefore these creatures and objects cannot do these things. Candidates for membership of the list of soulless creatures have from time to time included other human races, women, humans with differently-coloured skins, humans who look different, humans who are in some way mentally or physically different, non-human animals, arachnids, insects and microbes; all objects have been denied souls for most of human history by most human tribes, but not quite by all. Lack of a soul, or in some cases possession of what was judged to be a corrupt, condemned, bad or evil soul, was often used as an excuse for persecution, cruelty, enslavement, war and killing. In some cultures the supposed absence of a soul was associated with inability to feel pain or, less often, inability to experience joy and love. On the heels of the denial of souls, therefore, all manner of cruelty and excess was justified.

One of the more peculiar aspects of “the soul question” is associated with the ability to be conscious and to think, and therefore with the possession of something called “the mind”. Consciousness in particular, and often the capacity to think by association, are held out as indications of possession of a soul and as the attributes of creatures possessed of souls. By the aforementioned logic, non-human things generally and non-living things in particular are therefore incapable of consciousness and thought. We quickly conclude that they are of no consequence, and ours to do with as we please.

Alan Turing famously opened his prescient and epoch-defining 1950 paper “Computing Machinery and Intelligence” with the question “Can machines think?”. In that paper he chose to replace the question whether a machine can think with the question whether we can tell that it thinks, which was an unfortunate piece of positivism, since whether something is the case and whether we can ascertain that it is the case are obviously two quite different things. (For example, whether there is life on other worlds is a matter of fact; whether we can ascertain whether there is life on other worlds is merely a matter of contingency.) Nevertheless, what came to be known as “the Turing Test” has become a defining feature of our age, and is set to become ever more central as artificial intelligence becomes more powerful and ubiquitous. And as AI – more particularly general artificial intelligence, capable of addressing and solving a wide and deep range of everyday challenges – becomes more powerful, so the gaps between what “machines” can do and what human beings can do will shrink. Inevitably that has raised, and will raise again and again, the question whether there is ultimately anything “unique” about human beings, and in particular whether the notion of “the soul” serves any purpose.

It is not the case, however, that all human cultures have believed in the kind of soul/body dualism that much of western civilisation inherited from the Greeks and embedded in most versions of Christianity. Some have regarded life as an integration of the mental and physical, mind and body, in which neither can be conceived without the other. Those cultures are more likely to find the assimilation of machines easier to accommodate.

Integrated mind/body anthropologies have never had to face the problem of how mind or soul and body are related and connected, a problem which has preoccupied many philosophers over the centuries. One of the oldest, and certainly one of the most fruitful, answers essentially dissolves the distinction as a linguistic error: Aristotle’s notion of hylomorphism teaches that the soul is in the body as sight is in the eye, and that should really have been an end to the matter, but unfortunately Plato’s ideas were adopted by Christianity because they seemed more compatible with the notion of resurrection and life after death and with the imago dei notion that mankind is made “in the image of God”.

Modern reformulations of Aristotle’s principle take a slightly different form by stressing that there is nothing surprising about the emergence of mind, thought and consciousness given the evolution of certain kinds of bodies-with-brains: to be a creature situated in the world with a particular kind of body and brain is automatically to have a mind and be conscious; mind is the world-orientation of body; we are each our body in its inside-looking-out-ness. (cf. my “Information and Creation” in the proceedings of the European Society for the Study of Science and Theology (ESSSAT), 1991, and God and the Mind Machine, SPCK, 1997.) And to the inevitable question how creatures without souls can hope to survive death we should have the courage to give the only possible honest answer: they don’t.

The gap between what humans can achieve and what machines can achieve is closing, and will continue to close, but will eventually widen again with machines ahead of us in all conceivable kinds of ability and intelligence. What has come to be called “The Singularity” of the first AI capable of such superintelligence that it can then design all its own successors, each generation of which will progressively out-perform its creators, may still be some years off, but it is no longer in doubt that it will eventually occur, and there remain very few intellectual processes that AI is not conquering to levels that already do or soon will exceed human capacity.

Are the most advanced AIs already conscious? Could it be the case, for example, that AlphaZero, which learned chess in only four hours and then defeated the best software on the planet (Stockfish), is already in some rudimentary sense conscious? This takes us back to Alan Turing’s seminal essay: once an AI performs at a task in a way that is superior to any human, on what basis are we able to deny it consciousness, even if only a consciousness limited to what is required by that task?

This question seems to me to be the product of a widespread conceptual confusion about the nature of consciousness, a confusion that Turing’s positivistic approach to the formulation of the question has served only to make worse: if, as the view of mind advocated here insists, consciousness is an inevitable and unremarkable consequence of having a particular kind of neurophysiology, then whether an object or living creature is conscious or not has nothing to do with the tasks it can perform except insofar as consciousness is a necessary contributor to their performance. Conversely, how valuable, significant or important some creature or object is does not depend upon whether it is conscious; the view that it is our consciousness that establishes our value and importance is a legacy of the view that consciousness is a property we have because we have souls. I may anticipate my death and regret the cessation of the subjective experience I call my consciousness, but you, if you care at all, will not anticipate with regret the passing of my consciousness (except insofar as you identify with it because you anticipate and regret the cessation of your own): you will regret the passing of an agent who interacts with you and to a greater or lesser extent affects the trajectory of your life. For you, therefore, whether I am conscious or not is of absolutely no consequence; what matters to you – other than the passing of a consciousness that reminds you of the eventual and inevitable passing of your own – is the material difference my passing will make to your life.
To the extent that my role for you depends on my possession of consciousness, it is the passing of my consciousness that will make that difference, but were a machine to be capable of reproducing every facet of my agency it would be of no consequence whether that machine were conscious or not (although if it were the case that some agency is impossible without consciousness, a machine could only fulfil that role were it to have consciousness).

And this is therefore the key question: can machines achieve viable superintelligence without some equivalent of consciousness? Consciousness, conceived as a means to an end, is instrumental rather than fundamental or essential. If – and it is a big “if” – the kind of self-referential capacity that consciousness facilitates is a necessity if an entity is to reach certain levels of performance – it does not matter of what sort – then machines will only be able to reach those levels if they possess such consciousness, or perhaps some form of super-consciousness with more scope and power than ours.

That superintelligence may be accompanied by super-consciousness is not something that has received much attention. For example, we are essentially conscious only of one thing at a time, even though we flit from one thing to another rather as a single CPU – central processing unit – on a computer time-slices to give the impression that it is multi-tasking simultaneously; a super-consciousness might be more like a GPU – graphics processing unit – with multiple cores capable of genuinely simultaneous awareness of many different things at many different levels. Superintelligence might, on the hypothesis under consideration here, require super-consciousness of some such kind. Arguably creatures possessed only of our kind of single consciousness would be incapable of appreciating or fully relating to entities that were super-conscious because while we were time-slicing between the different consciousnesses of the machine, they would be simultaneously aware of all of them. (The alien species I have called “The Kraag” in my published but unfinished novel The Colony have the capacity to be simultaneously aware of many different things, and are as such super-conscious.)
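The CPU/GPU analogy can be made concrete in a few lines of code. The sketch below is purely illustrative: a “single consciousness” attends to one input stream per tick, round-robin, while a “super-consciousness” attends to every stream on every tick. The function names and the toy streams are my own invention, not any established model.

```python
# A toy contrast between "single consciousness" (one focus at a time,
# time-sliced like a single CPU core) and "super-consciousness"
# (genuinely simultaneous attention to every stream, like many GPU cores).

def time_sliced_awareness(streams, ticks):
    """Attend to one stream per tick, round-robin, like a single CPU."""
    log = []
    for t in range(ticks):
        focus = t % len(streams)              # only one stream is 'in focus'
        log.append((t, focus, streams[focus][t]))
    return log

def simultaneous_awareness(streams, ticks):
    """Attend to every stream on every tick, like parallel GPU cores."""
    log = []
    for t in range(ticks):
        log.append((t, [s[t] for s in streams]))  # all streams at once
    return log

streams = ["abcd", "1234", "wxyz"]
print(time_sliced_awareness(streams, 4))
print(simultaneous_awareness(streams, 4))
```

The time-sliced observer necessarily misses most of what happens in the streams it is not attending to; the simultaneous observer misses nothing.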

To some extent we are nevertheless already possessed of something close to super-consciousness because, whilst we are only capable of being conscious of one thing at a time in what we might rather loosely call a “full” sense, we know that our brains are processing many other things “in the background” that may spring into consciousness either now or in the near or distant future. That penumbral consciousness, that sense of things unthought and unsaid that may yet be thought and said, is of the essence of what it is to be human and alive and awake and self-aware. This is an aspect of what I think was Wittgenstein’s “sometimes, the first time I know that I think something is when I hear myself saying it”.

It is this parallel processing, this partnership between conscious processes, others we are only conscious of peripherally, and yet more we are not conscious of at all, that supports the distinction between formal and non-formal processes that I made much of in my Logic and Affirmation (Scottish Academic Press, 1987). The non-formal processes as I wrote about them then could easily have been mistaken for essentially mysterious processes of the soul, but that was not my intention. Non-formal processes were instead those that are not accompanied by conscious processing, language or other forms of expression; they remain, until expressed formally, essentially tacit, in the background, but they could not exist at all were they not produced by accompanying neurological processes. Then as now much remained to be understood about how the brain works, but there can be no question that anything that happens in our mental lives, formal or non-formal, must depend upon and be the product of neurophysiological processes.

As with everything else in science, progress in AI tends to be monotonic, notwithstanding the various detours it has taken over the past 50 years as it has become variously more or less confident about particular approaches. That monotonic increase will gradually erode any and every sense of the uniqueness of human creativity. We may doubt this because we persist in believing that human creativity derives from a soul that machines do not have, but the reality will be that eventually machine creativity will first be indistinguishable from human creativity, and will then exceed it. This will mean less that machines have acquired souls and the mysteries of creativity than that we will finally have to acknowledge that we do not have souls either, and that our creativity is an important but not especially remarkable aspect of being a particular kind of brain in a particular kind of body. So rather than its advances making us think more of machines, it is likely to make us think less of ourselves, or at least that we are less special than we have hitherto believed.

The mechanisms whereby AI engines might become conscious, self-aware, reflective agents remain unclear, but it seems likely that some kind of feedback process will be involved. Experiments have already begun where AI engines are trained on the weights of their own neural nets, essentially feeding back into the network the things that make it what it is. No doubt work has already commenced on creating time-dependent training régimes, too, where the optimal solution is reached in a way that takes account of the amount of time and energy and the degree of confidence and convergence we can supply given finite resources. Such is all life: a balance between what can be done and what we have time and resources to do. Our own learning bears many resemblances to machine learning, and machine learning is becoming more and more like human learning: we compare our performance with some putative ideal – what AI likes to call “the ground truth” – and, in cases that matter to us, we adjust our behaviour, our attention, our effort, the intensity of our application, to try to reduce that discrepancy. In ML terms, we strive to minimise the loss/error function.
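The loss-minimisation loop described here can be sketched in miniature. This is an illustrative toy, not the training loop of any particular ML library; the names (`ground_truth`, `learning_rate`) are chosen only to echo the vocabulary above.

```python
# A minimal sketch of learning as loss-minimisation: an agent compares its
# output with a "ground truth" and repeatedly adjusts to shrink the
# discrepancy, exactly the cycle described in the text.

def learn(ground_truth, guess=0.0, learning_rate=0.1, steps=100):
    for _ in range(steps):
        error = guess - ground_truth         # discrepancy from the ideal
        loss = error ** 2                    # squared-error loss
        guess -= learning_rate * 2 * error   # gradient step: d(loss)/d(guess) = 2*error
    return guess

print(learn(3.0))  # the guess converges towards the ground truth, 3.0
```

Each pass shrinks the error by a constant factor; human practice, on this analogy, is the same loop run on attention and effort rather than on a single number.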

As an aside, the category of things “that matter to us” is interesting as a manifestation of our intuitions about how best to develop our self-esteem, to develop “who we are” in ways that we deem desirable. If I care enough about something, I will expend endless effort and money in trying to achieve it, but what matters to me in my life is not necessarily what matters to you about my life, and so we can find ourselves in situations where we are thought not to be doing enough of some particular right thing – or to be doing too much of the wrong thing – in the gap between what matters to me/us and what matters about me/us to you. That someone with perceived talent in piano-playing, for example, never practises and seldom plays, may be a source of regret to others but of absolutely no consequence to her. Their remonstrations fall on deaf ears because they have forgotten the difference between the inside and the outside stories: that who I am (we are) to you is a very different thing from who I am (we are) to ourselves. Of course, this isn’t entirely or completely true because, as Peter Strawson observed in Individuals (p.100ff), our self-images and therefore our consciousnesses of ourselves are very much composed of reflections from the impacts that we have on others. In that sense how others see us influences how we see ourselves, but it never exhausts it: there will in all probability always be things about my life that matter to me in ways that they do not and cannot matter to you.

Once we eliminate the misleading concept of the soul from our world-views and insist that all processes have a physical basis we will be less inclined to attribute things we cannot yet explain to the essentially mysterious, and more likely to set out to discover ways to explain and replicate them. Machines already play certain games far better than any human, but they are as yet pretty poor at many other things, including many of the things we have hitherto claimed as the exclusive preserve of ensouled humans such as music and art and creative writing. But that machines are not yet good at these things should not be taken as evidence that they will never master them; mind the gap: it will close; and then it will reopen with machines leading the way and disappearing first into the distance and then over the horizon. It is already the case that the very best human chess-players cannot understand, let alone beat, AlphaZero; it is already the case that the very best human Go-players cannot understand, let alone beat, AlphaGo Zero, whose analysis of the game has confounded the rules-of-thumb that have guided human players for centuries.

This is an important illustration of a more general truth. When human beings play chess and Go they use “rules-of-thumb” to guide them in situations where complete and detailed calculation is impossible. In chess those rules include such things as “control the centre”, “don’t move the same piece twice” and “don’t give up material”. But AlphaZero, trained as it is by self-play reinforcement learning, sees the whole game from first move to last solely in terms of one objective that counts: to checkmate the opposing king. So moves that human players would not even consider are made because AlphaZero understands their long-term consequences, consequences that human players could at best sense by intuition. And that is really what intuition is: our response to incomplete and uncertain information where we are incapable – because we don’t have enough information or we don’t have enough calculating power – of performing the necessary analysis. Human beings, incapable of calculating through to the end of the game, settle instead for intermediate objectives and the strategies that will achieve them such as gaining material or space or a time advantage on the clock. AlphaZero settles for nothing and makes no moves other than those that have the most promise to secure the only objective that counts: winning. The pay-off for the best human players may come five, ten, fifteen moves ahead; the pay-off for AlphaZero comes only at the end, when it wins.
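The contrast between intermediate objectives and a single terminal reward can be illustrated with a toy backup calculation: only the final position carries any reward, and the value of every earlier position is derived purely from it. This is a deliberately simplified sketch of the general idea, not AlphaZero’s actual training procedure; the tiny one-track “game” is invented for illustration.

```python
# A toy illustration of learning from a single terminal reward (+1 for a
# win, -1 for a loss), with no intermediate objectives: the value of every
# earlier position is backed up from the end of the game alone.

def backup_values(n_positions, terminal_reward, discount=0.9):
    """Each position i leads deterministically to i+1; only the last is rewarded."""
    values = [0.0] * n_positions
    values[-1] = terminal_reward               # the only signal: the final result
    for i in range(n_positions - 2, -1, -1):
        values[i] = discount * values[i + 1]   # value flows backwards from the end
    return values

print(backup_values(5, +1.0))  # earlier positions matter only via the finish
```

Every number in the list except the last is an echo of the terminal +1; nothing along the way is valued for its own sake, which is precisely the point made above.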

This analogy introduces a significant, even profound reason why AI as presently conceived and human life will tend to be different. All human life ends in death, which is in one sense therefore failure; “none of us is getting out of here alive”, as Anthony Hopkins reputedly put it recently. But that means that all human life must be spent and enjoyed by devising and achieving intermediate goals, and is therefore more about the journey than the ultimate destination. Machine learning does not at present have an equivalent of those continuous intermediate objectives because its algorithms are based upon a single final objective, which is to win, to earn a positive reward. The beauty and profundity of some of AlphaZero’s moves is an incidental by-product of its single-minded (now there’s an interesting phrase) focus on winning. Nothing about its play is intended to generate beautiful moves, to astonish its audience, or to intimidate its opponents (another trick used by the great players through the ages). It has no interest in the quality of the journey; it is interested only in the destination, even though some of AlphaZero’s published games have a breathtaking beauty about them, and that destination is represented by a single reward: +1. Human beings cannot live like that because for all of us the final single reward is -1: none of us is getting out of here alive.

Derek Parfit, who died just over a year ago, became during the latter stages of his life preoccupied with the problem of death and how to understand and accommodate it. His Reasons and Persons (hereinafter RP) is essentially an extended treatise on how to come to terms with one’s own inevitable death. In Part III of RP he introduces what is likely to become his most famous and lasting legacy, the thought-experiment of the teletransporter that not only copies and transports human bodies perfectly, but can also duplicate and triplicate them so that the human being who walks into the machine may find itself with two or three identical copies of itself when it comes out. Parfit’s question is what this duplication of consciousness means for our understanding of consciousness and its value.

The possibility of a superintelligent and more especially super-conscious AI presents us with a similarly challenging analogy: what is it to be a multiply-conscious being; what is it to be “many”, like the madman called Legion in the Bible. We cannot conceive of what it is like to be multiply-conscious because our entire world-view is predicated on the experience of single consciousness. Moreover, for most of us, it is that experience of consciousness and fear of the loss of it that are the most pressing reasons to strive to preserve our own lives; indeed, most of us identify ourselves with our consciousness. But in their different ways both Parfit and our super-conscious AI suggest that this is no more than a contingent prejudice occasioned by our particular biology and neurophysiology. Were we more, or even only differently intelligent, or superintelligent, and wiser, we would see ourselves as no more than unimportant cogs in a great machine, the wheel of life, whose coming and passing are of no greater importance than those of the drones who serve the bee colony or the ants their nest. But we find it almost impossible even to begin to think in such terms because our prevalent values have all evolved to serve and preserve our own conscious existences.

This trait, this persistent obsession, lies at the heart of human existence, and is responsible for destroying much of it. We are inescapably self-centred (understood as “centred on ourselves”, not in the other pejorative sense meaning selfish, although the two often coincide); we see ourselves as the centre of the only important universe, which is the universe of our own conscious existence. That is not true of bees or ants (it is not even true of their queens) whose lives are driven by an evolved other-centredness in which the survival of the collective is deemed (insofar as it is “deemed” at all) more important than the survival of any individual.

And it may be that superintelligence empowered by super-consciousness, both of them distributed and therefore other-centred, intrinsically governed by the need to preserve the collective rather than the individual – for at the heart of their existence and self-understanding lies the principle that “we are many” – will supersede humanity not only by being better at everything than we are and wiser in their stewardship of resources than we are, but better than we are at being less preoccupied with themselves and more with the collective welfare of others. The difference between two such worlds is hard to exaggerate and impossible to comprehend.

Mind the gap!

Rules of the Game

The Context

Some people are not as impressed as they should be by the achievement of #AlphaZero in mastering three independent games from scratch to world-champion standard, given only the rules, in a matter of hours. (For details, cf. the paper published by DeepMind on December 5th, 2017.) I have heard it said that computers have always been better at some things than human beings, not least calculating, crunching data, determining statistics, and a host of other things. But this is different because it is not just about doing things faster; it is about learning how to play games that have been around for thousands of years while ignoring everything that human beings have ever said about them, thought about them, or suggested might be a good way to play them.

#AlphaZero plays Go differently from #AlphaGoZero; it plays Chess differently, too, making moves that few humans would even consider, to say nothing of the fact that it reinvented the entire treasure-house of standard book opening strategy from scratch.

This massive achievement, coming on the heels of similar successes now long past in simple Atari games and Backgammon, will almost certainly soon be followed by mastery of Blizzard’s StarCraft II, claimed to be far more difficult than Go, Chess or Shogi.

The Challenge

This blog is not about any of this; it is about something altogether different: the question of how #AlphaZero might become a master at human activities that are not obviously games but can probably be conceived and modelled as such.

It would be comparatively easy to determine the rules of Chess and Go by watching a few games; it is clearly far less easy to determine the rules of economics, the operation of the world’s stock markets, currency exchanges, the behaviour of the weather, traffic systems, and countless other things that humans engage in and are affected by on a daily basis, and that are at least notionally bounded, that is to say contained within a definable range of activities or events.

Nobody knows the rules of any of these “games”, which is not to say that there are none or that there have not been substantial and persistent attempts to ascertain what they are; indeed, economic and political theories try to say what the rules are, and the various theories of economics that have been articulated, if not precisely codified (Marxism, Keynesianism, Monetarism, etc), try to model economic activity within the scope of a set of principles. An economic theory that successfully abstracted a full set of rules that allowed us to plot and model economic activity would “clean up”; it is just that nobody has ever managed to articulate it.


Mathematicians have been doing something similar for decades in the part of the discipline called axiomatics, and the process they use is called axiomatisation: the process of taking a mathematical system (such as the natural numbers {1, 2, 3, …}) and ascertaining a set of axioms or fundamental rules that govern the behaviour of that system. Such sets of axioms are not unique, but once we choose such an axiom set we can then proceed to regenerate the mathematical system, prove theorems in the system that predict its properties, and so forth. There are complications, not least questions to do with completeness and consistency, that have been discussed profoundly by such as Kurt Gödel; the details are not relevant here, but may eventually become so if the axiomatisation of other disciplines can be achieved.
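For a flavour of what axiomatisation means in practice, here is a sketch in the spirit of Peano’s axioms for the natural numbers: zero and successor are taken as primitives, addition is defined by two rules, and ordinary arithmetic is regenerated from them. The encoding (nested tuples) is ad hoc, chosen only to make the axioms executable.

```python
# A sketch of axiomatisation: take "zero" and "successor" as primitives,
# lay down two rules for addition, and regenerate arithmetic from them.

ZERO = ()                        # 0 is a primitive
def succ(n): return (n,)         # every number has a unique successor

def add(m, n):
    # Axiom 1: m + 0 = m
    if n == ZERO:
        return m
    # Axiom 2: m + succ(k) = succ(m + k)
    return succ(add(m, n[0]))

def to_int(n):                   # decode back into ordinary notation
    return 0 if n == ZERO else 1 + to_int(n[0])

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))   # the fact 2 + 3 = 5, derived from the axioms
```

Nothing about addition was programmed beyond the two axioms; every sum is a theorem regenerated from them, which is exactly the move an axiomatiser of economics or traffic would be attempting on a vastly larger scale.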

Suppose, now, that some descendant of #AlphaZero – let’s call it #AlphaZeroPlus for convenience – becomes adept at abstracting the rules of a game simply from observing instances of the game being played. We might then show it, for example, the operations of a stock market or a traffic system, and ask it to generate the set of rules governing the way the game is “played”. If it could do that, then given the achievements of #AlphaZero, we would expect it then to be able to master the game – stock market investment or traffic management in our examples – relatively quickly. Were #AlphaZeroPlus to be able to determine the rules of economics itself, either of some relatively isolated part or perhaps, if we are optimistic, of the whole of economics, then again we would expect it to be able to learn to play the game fairly quickly, depending on how complex the rules proved to be, and make moves in economics using the available instruments – the things the Central Authorities such as the Bank of England and the Federal Reserve can change, like interest rates or money supply – that would achieve some putative desirable outcome. That outcome is not of course determined by the rules of the game any more than the winning-conditions of Chess or Go are determined by the rules governing the legal moves in those games; one could just as easily play Chess using exactly the same moves with an objective of capturing the opponent’s queen as a criterion of victory as play it to achieve checkmate. To some extent a related idea has already been implemented in “Chess960”, otherwise known as Fischer Random Chess, where the starting-positions of the pieces are randomised on the back row of the board according to certain constraints (though there the variation is in the initial position rather than in the winning conditions).
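Chess960’s constrained randomisation is simple enough to sketch: the bishops must land on opposite-coloured squares and the king must stand between the two rooks. A minimal, purely illustrative generator (it produces valid positions, though not necessarily with the exact probability weighting of the official procedure):

```python
import random

# A sketch of Chess960's constrained randomisation of the back rank:
# bishops on opposite-coloured squares, king between the two rooks.

def chess960_back_rank(rng=random):
    rank = [None] * 8
    # bishops on opposite colours (even squares are one colour, odd the other)
    rank[rng.choice(range(0, 8, 2))] = 'B'
    rank[rng.choice(range(1, 8, 2))] = 'B'
    free = [i for i in range(8) if rank[i] is None]
    # queen and knights on any remaining squares
    for piece in ('Q', 'N', 'N'):
        i = rng.choice(free)
        rank[i] = piece
        free.remove(i)
    # the last three squares take R, K, R left to right: king between rooks
    for i, piece in zip(sorted(free), ('R', 'K', 'R')):
        rank[i] = piece
    return rank

print(''.join(chess960_back_rank()))
```

The legal moves are untouched; only the starting position varies, which is why the game remains recognisably Chess.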

So one next step for AI could be to apply itself to the considerable task of determining the rules governing the behaviour of an arbitrary system of human activity. Were it able to do that then, depending on how wide a range of human activity proved susceptible to such axiomatisation – and my suspicion is that more would prove susceptible than we might at first sight imagine or wish to imagine – the kinds of achievements #AlphaZero has demonstrated would prove of extraordinary and world-changing power.

The team at #DeepMind have already been applying their technology to analysis of medical data, hoping to discern in the material some clues that will help diagnostic medicine. There is no reason, were they or some team with similar skills to be able to crack axiomatisation, why we would not be able to apply the same technology to environmental issues such as climate change, where knowing the rules would facilitate more efficient alterations of our behaviour to achieve a desired outcome, or politics, where understanding the rules governing human behaviour might permit us to achieve desired political outcomes such as resolving deadlocks. And yes, of course there are dangers: knowing the rules and being able to play the game to achieve any desired outcome would give those controlling such power unlimited influence over the trajectory of the world in most respects. But the fact that progress can be abused is not a new discovery, and that AI can be abused should surprise nobody.

Partial Axiomatisations

There is a further stage to this evolution of AI power that presents even more intriguing prospects. Suppose, as seems likely, that there are human activities and systems of such complexity that they resist axiomatisation, whether because they cannot be axiomatised or because they require levels of skill beyond even our fabled #AlphaZeroPlus. We might find ourselves in possession of partial axiomatisations of such systems that could explain aspects of their evolution but not every aspect. Such sparse axiom sets could still be used to generate probabilistic models rather like weather maps showing the likelihoods of different scenarios emerging from current situations.
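A sparse axiom set might be used roughly as follows: apply the known rules where they determine the next state, sample where they are silent, and aggregate many rollouts into a map of scenario likelihoods. The toy system and its “rules” below are entirely invented for illustration.

```python
import random

# A sketch of a partial axiomatisation used probabilistically: one rule of
# the toy system is known (even states halve); the rest of the dynamics is
# unknown and is sampled. Many rollouts give a "weather map" of outcomes.

def step(state, rng):
    if state % 2 == 0:                        # a rule we *have* axiomatised
        return state // 2
    return state + rng.choice([-1, 3])        # unknown dynamics: sample

def scenario_map(start, steps, rollouts, seed=0):
    rng = random.Random(seed)                 # seeded for reproducibility
    outcomes = {}
    for _ in range(rollouts):
        s = start
        for _ in range(steps):
            s = step(s, rng)
        outcomes[s] = outcomes.get(s, 0) + 1
    return {s: n / rollouts for s, n in sorted(outcomes.items())}

print(scenario_map(9, 4, 10000))  # final states with estimated probabilities
```

The output is not a prediction but a distribution over scenarios, which is the most a partial rule set can honestly offer.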

Super-Sensitive Systems

Of course, as has been known for some decades now, many supposedly predictable systems, the so-called “chaotic” systems, are super-sensitive to their initial conditions, so even with an axiomatisation of such systems we, in collaboration with our superintelligent AIs, might still find ourselves incapable of determining the initial conditions to sufficient accuracy to make reliable predictions of a system’s evolution. In the case of some systems, indeed, there is no “sufficient” degree of accuracy; any change in initial conditions, even in the millionth or billionth decimal place, will send the system off on eventually divergent trajectories. But, while we should be aware of such intractable cases, we should still be able to make some kinds of predictions of how systems will evolve in most cases or, if they are markedly unstable, to identify them as such.
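The point is easily demonstrated with the logistic map at r = 4, a standard textbook example of a chaotic system: two trajectories whose initial conditions differ only in the twelfth decimal place agree for a while and then diverge completely.

```python
# The logistic map x -> r*x*(1-x) at r = 4 is chaotic: a perturbation in
# the twelfth decimal place of the initial condition grows until the two
# trajectories are effectively unrelated.

def logistic_trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 60)
b = logistic_trajectory(0.2 + 1e-12, 60)
print(abs(a[10] - b[10]))                                # still tiny after 10 steps
print(max(abs(x - y) for x, y in zip(a[40:], b[40:])))   # large: prediction has broken down
```

No achievable improvement in measuring the initial condition rescues the long-range forecast; it merely postpones, by a few steps, the moment at which the trajectories part company.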


Much of education consists of interactions between teachers and learners. Many of those interactions have the form of moves in a game: a student does this, a teacher does that; students ask questions, teachers answer them or refer students to resources that answer them; students make mistakes, teachers correct them; and so forth. It seems perfectly plausible that one day the kinds of game-oriented AI technology we have been discussing will formulate some rules that shape the way learning happens (Richard Feynman, for example, tried some decades ago to do just that). If that gives rise to some Educational Artificial Intelligence Engines (EAIEs), as has been predicted in this blog before, then the kinds of abstractive, axiomatic processes described here will be crucial in ascertaining how those educational systems and processes work. Then every student could be allocated a personalised EAIE that would track and provide input and feedback to all his or her activities, questions and projects.

The Game of Life

And of course the “Holy Grail” of such abstraction and axiomatisation would be for us to be able to build an #AlphaOmega that could determine the rules governing the Game of Life itself (and we don’t mean the John Conway version), but since #AlphaOmega would be a part of the game it was attempting to model, that of course would take us into quite another realm of self-referential computational complexity.

The Ethics of AI

In 1997 I published a book called God and the Mind Machine through SPCK. It didn’t sell, and they pulped it. The biggest intellectual mistake of my life was to allow myself to be so discouraged by this that I effectively abandoned study of AI for the better part of 20 years. More fool me.

God and the Mind Machine (hereafter GMM) was concerned above all with the mind-body problem, the question of the soul (or rather why we don’t have one), and how to conceptualise the inner life of other entities, including potentially machines. I still regard it as having essentially resolved the mind-body problem in such a way as to leave open the possibility of artificial life that is fully sentient, and I have read no persuasive counter-arguments.

The solution to the mind-body problem can be stated in a sentence: to be a suitable body with a suitable brain is to be sentient; our minds are our bodies in their inside-looking-out-ness.

To be a suitable body with a suitable brain is to be sentient; our minds are our bodies in their inside-looking-out-ness.

That’s it. No souls, no ghosts in the machine, no special additional qualities bestowed upon us by the gods: just being bodies with brains existing interactively with the world is sufficient. And therefore a suitably sophisticated machine that interacts with the world can also be a mind, can also be intelligent, and something with which or someone with whom we could have just the same relationship as we do with another human being or one of the higher animals. And it seems to me inevitable that those machines will one day supersede us in intelligence and come to see us as the failed species that we are.

We shouldn’t be surprised by this. It is only our mistaken adherence to a version of Plato’s world-view tangled up with one or other kind of religion that makes us want more or think that there is more.

All this is obviously relevant to the current debate about AI, and the position I adopted in GMM persuades me that those saying that it isn’t a question of “if” but of “when” AI will become self-aware and therefore conscious are right. I propose to waste no more time debating the matter.

So the new question becomes not whether sentient AI will emerge, but what kind of sentience AI will enjoy. And this is the dilemma of “The Ethics of AI” because human beings only understand sentience from their own perspective, and an advanced AI will certainly understand it differently, and probably more deeply than we do.

Before we can sensibly enter into a discussion of the ethics of AI we have to resolve some pretty fundamental questions about the ethics of human beings. Almost everyone involved in the AI debate wants to ensure that they – the super-intelligent AI beings – will be benevolent to human beings, but I find it hard to understand why. Human beings are irretrievably flawed, and I don’t see why a super-intelligent AI would see them in a more favourable light; in fact I see every reason to suppose that a super-intelligent AI will see us exactly for what we are, and perhaps better than we understand ourselves. The reality is that we are a failed species: we have done some things very well and crawled a long way from the primaeval slime; but we rely on war and violence and drugs and brutality and discrimination to defend our so-called freedoms, which is to say that we are still locked into the same kind of evolutionary war that produced us and seemingly incapable of developing beyond it.

So there is a fundamental philosophical challenge buried at the heart of the debate about AI and especially its ethics: in terms of ethics and intelligence, how can we supersede ourselves? How, in other words, does a flawed species propose to engineer a species that is superior to itself in terms both of intelligence and ethics?

How does a flawed species propose to engineer a species that is superior to itself in terms both of intelligence and ethics?

This is a philosophically deep question, especially if we approach the problem from the perspective of the design of algorithms (although the best and most successful approaches to machine-learning give us good reason to believe that this approach is not optimal). The problem is as old as computing itself: we commonly confuse the fact that the behaviour of a coded system is wholly determined by its code with the ability to predict that behaviour from the code. This amounts to another of our failed intuitions: we tend to think that if we know all the coded steps that define the behaviour of an AI we must necessarily know what that behaviour will be. This is an illusion.

We tend to think that, if we know all the coded steps that define the behaviour of an AI, we must necessarily know what that behaviour will be. This is an illusion.

It is also a very dangerous illusion. To see that it is an illusion, consider the axiomatic definition of the natural numbers {1, 2, 3, 4, …}: we know exactly how to define the natural numbers in primitive terms in such a way that we can generate them indefinitely; yet there remain many properties of the natural numbers that elude us (with Goldbach’s Conjecture being the most famous). Or consider the rules of chess or of Go: we know exactly and entirely what they are, but we do not know every game that can be played, and the question of whether the first player can force a win with best play remains unresolved. Most powerfully of all, consider the operation of language: we can produce a dictionary and a grammar that define the use of a language, but we have absolutely no hope of being able to predict all the uses to which its words and syntax can be put, even though we know that anything that is ever said must employ them.
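The contrast between knowing the generating rules and knowing the global behaviour can be made concrete in a short sketch (purely illustrative; the function names are mine). The rules that determine which numbers are prime are fully specified, and so is the procedure for checking any single even number against Goldbach’s Conjecture, yet no amount of such checking settles the general question:

```python
# Illustrative only: completely specified rules, unresolved global behaviour.

def is_prime(n: int) -> bool:
    """The rule for primality is fully known and mechanical."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def goldbach_witness(even: int):
    """Return a pair of primes summing to `even`, or None if none exists.
    A None result for any even number > 2 would refute the conjecture."""
    for p in range(2, even // 2 + 1):
        if is_prime(p) and is_prime(even - p):
            return (p, even - p)
    return None

# Every instance we can feasibly test succeeds, but testing instances
# can never amount to a proof of the general claim.
assert all(goldbach_witness(n) is not None for n in range(4, 1000, 2))
```

Knowing every step of the code, we still cannot say whether the conjecture holds for all even numbers; the rules are transparent, the behaviour in the large is not.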

In the case of AI this illusion is potentially catastrophic because it leads us to believe that because we control the process of creating the AI we must necessarily be able to control the way the AI will evolve and behave. Yet if the system is sufficiently sophisticated to offer a hope of reproducing human or superhuman intelligence, we cannot and we don’t.

In the case of AI this illusion is potentially catastrophic because it leads us to believe that because we control the process of creating the AI we must necessarily be able to control the way the AI will evolve and behave.

To put it as clearly as possible: we create our children, but we cannot control how they develop or behave; the same is true of AI, and perhaps to a greater extent because we understand the ramifications of their design far less.

This illustration is not arbitrary: in the case of human existence and behaviour we have attempted, and usually failed, to design ethical systems to constrain lives. They have failed because of the lack of a necessary and binding connection between the words that shape ethics and the behaviour ethics is intended to govern. As a result we have found it necessary to have recourse to policing, law, trial and punishment to enforce underlying ethical principles; it has never proved possible to leave adherence to a particular ethics entirely to the people, however well most people may generally have behaved. Indeed, the fact that many people behave well has often been seen by those inclined to misbehave as a weakness that only encourages them to act as criminals.

The point is that throughout history the human ethical enterprise has failed: we have never succeeded in creating and maintaining a society in which ethical principles would govern behaviour; we have always had to resort to law and enforcement to maintain order. Even the most high-minded principles articulated in the best religions and philosophies have always required supporting force, and that force has always subverted the higher principles and in the end destroyed them.

Our anxieties about what we may have created or may yet create in AI are reminiscent of the Greek myth in which Cronus swallows his first five children by Rhea, only to be overthrown by the sixth, Zeus, whom Rhea saves by deceiving Cronus into swallowing a stone in his place. I can’t imagine a more apposite myth to describe what will happen with AI, with or without an ethics, because sooner or later someone will hide what they are doing from the world until it is too late. (It is a myth that may repeat itself “upside down” in self-driving-truck engineer Anthony Levandowski’s dream of an AI religion where, if historical precedent is anything to go by, those who make the AI gods will be the first to be destroyed by them.)

So even before we begin to develop an “Ethics of AI” we need to recognise that such an ethics will not work unless it is policed, and there will inevitably be desperate, reckless, destructive human agencies who will seek to deploy AI against any or all the principles elaborated in any ethics, ostensibly to their own advantage.

Pessimism about the prospects for an ethics of AI should not deter us from trying to work on one, any more than excitement at the possibilities offered by such new technology should blind us to its dangers. But for a species so stupid that it sees the only solution to a man with a gun as a man with a bigger gun, AI will inevitably be weaponised; it is probably already too late.

Theories of Meaning

Rather too many years ago I spent a considerable amount of time reading and thinking about “The Theory of Meaning”, a somewhat esoteric part of Analytic Philosophy. I recently returned to this for no reason that I can easily identify (although that in itself is of some significance – see below), and it has given rise to a multiplicity of thoughts, some directly related to the topic and some not.

The directly-related thoughts are about philosophical argument and how sometimes a picture really is worth a thousand – indeed, several thousand – words; the indirectly-related thoughts (which I think are much more interesting) are about (a) what possible justification there can be for some of the smartest people on the planet engaging in this kind of ultra-abstruse discussion; (b) the answer to this question framed in terms of cultural holism; (c) a reaffirmation of the fundamental principle that nothing is what it seems, that things are far more complex than we ever imagine, and that only mining the immensely obscure and difficult depths of human thought can hope to rescue us from this plight, if indeed anything can; and (d) the realisation that the social connectedness presupposed by holism – cf. the reference below to Michael Dummett’s essay “The Social Character of Meaning” – implies that we may find ourselves engaged in something seemingly utterly irrelevant and esoteric, even trivial and banal, for strong but unknown reasons that explain and justify the activity, reasons we cannot possibly determine in advance of doing it, and perhaps not afterwards either.

As an example of the fourth of these inferences, (d), I could cite the sequence of events that brought about this essay: taking down from a shelf seemingly aimlessly and at random a copy of a book I have not looked at for years; reading pages that seemed to bear no relationship to anything I am doing or thinking or even interested in at the moment; and finding shortly afterwards that they have precipitated a deluge of far-reaching ideas, some of which are reproduced below.

One of the best places to start, and the book in question, although far from being the easiest, is Michael Dummett’s The Seas of Language, OUP, 1993, reprinted in paperback in 1997. This collection of essays and lectures begins with two called “What is a Theory of Meaning? (I)” and – go on, take a wild guess! – “What is a Theory of Meaning? (II)”.

Professor Sir Michael Dummett (1925 – 2011) combined an extraordinary distinction as a philosopher with a passionate loathing of racism and a profound concern for migrants and refugees, thus demonstrating his own practical answer to the charge levelled above in (a). His obituary in The Daily Telegraph includes this:

“But his commitment to truth had very practical applications, and ones which he pursued with vigour and personal courage. In particular, throughout his career he maintained a deep interest in the ethical and political issues concerning refugees and immigration, informed by what he described as ‘an especial loathing of racial prejudice and its social manifestations’.”

And then subsequently …

“Dummett saw the root of the problem as lying in the political system. In his book On Immigration and Refugees (2001), he argued that lurking behind the egalitarian veneer of democracy is the more manipulative principle of playing on people’s prejudices to gain votes. This, when applied to issues of immigration, has invariably led to a jingoistic policy – a policy founded, essentially, on racism. In Britain, according to Dummett, much of the blame rested with the Home Office, a department which he accused of “decades of hopeless indoctrination in hostility”, first against Commonwealth immigrants, and later against asylum seekers and refugees. “For the Home Office,” he once wrote, “the adjective ‘bogus’ goes as automatically with ‘asylum seeker’ as ‘green’ does with ‘grass’.”

Dummett is persuaded, for some very good reasons, that a theory of meaning is really a theory of understanding, and that a full-blown theory of meaning would consist in an explanation of what it is to understand a language. To substantiate and explicate this claim he first has to deal with a cut-down and unsatisfactory misconception about meaning related to translation. His example is what he calls an “M-sentence”: “‘La terra si muove’ means that the Earth moves”. This tells us that a sentence in one language means the same as a sentence in another language, but it does not actually tell us anything about what those sentences refer to, what knowledge they entail, or whether indeed they entail any knowledge at all. This becomes clearer when he says that translation can furnish no more knowledge than the similar M-sentence “‘The Earth moves’ means that the Earth moves” (p.7).

To my mind it is unfortunate that Dummett and others like Davidson and Kripke use real sentences to try to illustrate this point, because a sentence framed in terms of non-existent entities (entities that in this instance exist only within the confines of rooms where I am teaching a philosophy elective, where they materialise on command) makes it much more powerfully: “‘A shpringlehock is called a shpringlehock’ means that a shpringlehock is called a ‘shpringlehock’”. Or: “‘Shpringlehocks are grue’ means that shpringlehocks are grue”. These sentences are undoubtedly true within a cut-down and insufficient theory of meaning, but inasmuch as they do not enable us – or even require us – to possess any knowledge, still less understanding, they cannot qualify as examples of a full-blown theory of meaning.

This, I take it, is reasonably easy to understand, but what comes next is far more controversial, and cuts through the entire world of analytic philosophy, where there is no satisfactory resolution to the question it poses, and where I think even Dummett flounders. When we ask what would constitute a full-blown – or, as Dummett calls it (p.5), “full-blooded” – theory of meaning, we immediately find ourselves confronted by the challenge posed by a holistic theory of language such as that proposed by Quine in “Two Dogmas of Empiricism”: does any word or any sentence assume its full meaning, and therefore convey fully whatever knowledge it contains, and therefore require and entail all the understanding necessary to grasp it fully, unless the entire language has been grasped?

Does any word or any sentence assume its full meaning, and therefore convey fully whatever knowledge it contains, and therefore require and entail all the understanding necessary to grasp it fully, unless the entire language has been grasped?

Dummett is inclined (op cit. p.17f) to reject linguistic holism because he believes that it can give no sensible account of what it is to learn a language, little-by-little, or to grasp a language partially, as would, on reflection, be true not only of those learning the language word-by-word and sentence-by-sentence, but of absolutely all of us, the most knowledgeable and wise and clever native-speakers included.

In a much earlier essay he addresses a similar problem by asking in what a full-blown concept of “gold” would consist, observing that the word plays a part in all sorts of different parts of the English language: in chemistry; in literature; in economics; in mythology; and so forth. Would it be reasonable to deny that someone understood the word “gold” merely because they had failed to become fully conversant with its use in all these different language-games? Some think it would not be reasonable (cf., inter alia, Dummett’s brilliant and detailed discussion in “The Social Character of Meaning”, Truth and Other Enigmas, OUP (1978), chapter 23, p.427 et seq., where he distinguishes “gold” from “elm” in various ways in the context of an analysis of a thesis advanced by Hilary Putnam). I think they are mistaken: whether something is “reasonable” in this colloquial sense of the word is not a philosophical criterion. It is transparently and tautologously true that someone who only understands some of the uses of the word “gold”, and therefore possesses only part of the concept “gold”, does not have a full-blown understanding of the word ‘gold’.

Why is this relevant? Granted that “Horses are called ‘horses’” means that horses are called ‘horses’ will not do as a theory of meaning, we need instead to have a way to characterise what the meaning of singular terms such as ‘horse’ might be, which is to say what it is to understand a word such as ‘horse’. Given further that absolutely nobody can ever be familiar with every conceivable use of any word, since the set of such sentences constitutes an infinite set – strictly a countably-infinite set, since sentences are finite strings drawn from a finite vocabulary, though it is no more surveyable by a finite speaker for that; Cantor taught us to reason definitively about the sizes of such sets, and any putatively exhaustive list of all possible sentences containing the word “gold” can always be extended – just as all conceivable uses of the number “1” constitute an infinite set, we are bound to conclude that a holistic theory of language entails the inescapable conclusion that nobody actually fully understands the meaning of any word, and therefore the meaning of any sentence, and therefore anything conveyed by a sentence or set of sentences. In short, nobody fully understands anything at all.

Nobody fully understands anything at all.
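The inexhaustibility invoked here can be made vivid in a short sketch (illustrative only; the function name is mine): even a three-word vocabulary yields an endless stream of distinct word-sequences, which can be produced one after another but never exhausted by any finite speaker.

```python
# Illustrative only: a finite vocabulary generates an endless supply
# of distinct finite word-sequences, enumerable but inexhaustible.
from itertools import count, product

def all_sentences(vocabulary):
    """Yield every finite sequence of words, shortest first."""
    for length in count(1):                       # 1, 2, 3, ... without end
        for words in product(vocabulary, repeat=length):
            yield " ".join(words)

gen = all_sentences(["gold", "is", "heavy"])
first_five = [next(gen) for _ in range(5)]
# first_five == ['gold', 'is', 'heavy', 'gold gold', 'gold is']
```

The generator will hand over sequences for as long as anyone cares to ask, which is precisely the predicament of the language-user: the list can always be extended, so familiarity with every item on it is impossible in principle.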

Someone might object that it is not reasonable, still less useful, to advocate or espouse a theory of language so demanding that it leads to the conclusion that no user of a language ever understands what their language means, for that would presumably require acceptance of the inference that no sentence uttered or written by any language-user is either ever fully understood by that user or by the user’s hearers and readers, and therefore that nobody ever fully understands what they mean by what they say/write or what anyone else means by what they say/write. But this inference is not an objection to the thesis; it is a confirmation of it, and indeed the most important possible conclusion to be drawn from it.

No sentence uttered or written by any language-user is either ever fully understood by that user or by the user’s hearers and readers, and therefore nobody ever fully understands what they mean by what they say/write or what anyone else means by what they say/write.

One of the welcome inferences to be drawn from this realisation, one of its happiest consequences, is that the apparent distinction between the solid reality of language and the apparent vagueness of art, music, and drama, is dissolved. An author is no more capable of knowing what she means by what she says or writes than an artist by what she paints. The inescapability of vagueness embraces art as well as language, but that vagueness, far from being a weakness, is for both art and language an irreducible strength. As Michael Polanyi once put it,

“For just as, owing to the ultimately tacit character of all knowledge, we remain ever unable to say all that we know, so also, in view of the tacit character of meaning, we can never quite know what is implied in what we say.”

Michael Polanyi, Personal Knowledge, p.95.

For similar reasons the vagueness of linguistic meaning also embraces – one might almost say “engulfs” – science, since while scientists might wish to contain the meanings of the words they use within a restricted language that admits the reinvention of “facts”, in practice this is a temporary illusion typical of a passing phase in scientific understanding, almost a deception, for a thorough-going quantum-mechanical understanding of science would force us again to embrace the universality of vagueness and the unknown.

Now here you might think that our dissenters can legitimately refer, and with more logical justification, to what is reasonable in a strong sense: is it reasonable to use language to affirm that nobody fully understands language? Does that not visit upon us a vicious circle, inasmuch as to claim to understand the sentence “Nobody fully understands anything at all” in full would simultaneously negate it? We seem inadvertently to have constructed a quasi-Gödelian self-referential sentence: if we are able to know it to be true, it must be false, since that would entail knowing at least one thing fully. But the inverse does not work: if we are able to know it to be false, that does not entail that we know it to be true, because we can know it to be false on the basis of some other sentence that we understand fully, even if not this one.

And in fact even the first inference fails within a vague theory of language, which might better or more unambiguously be framed as a theory of language that characterises all meaning in terms of vagueness: that the ability to use a language does not require or entail a full command or understanding of the meanings of words, but the skill of employing and deploying words whose meanings are inescapably vague in a way sufficient to effect communication with other language-users who are similarly placed.

The ability to use a language does not require or entail a full command or understanding of the meanings of words, but the skill of employing and deploying words whose meanings are inescapably vague in a way sufficient to effect communication with other language-users who are similarly placed.

To a question such as “Does anyone fully understand the meaning of a word such as ‘gold’?” we can then give a two-part answer: “No, nobody fully understands the meaning of a word such as ‘gold’; but fully understanding the meaning of a word is not necessary in order to use it effectively for someone sufficiently skilled in a particular language”.

And therefore it is no objection to the central conclusion “Nobody fully understands anything at all” that this entails that nobody understands what “Nobody fully understands anything at all” really means. At the very least we can assert that nobody fully understands what it means “to understand” anything, and so this sentence does no more, and no less, than point to the fact that everything is vague (including the fact that everything is vague).

Everything is vague including the fact that everything is vague.

To the question “Does that mean that you don’t really understand what you mean when you assert that everything is vague?” we can therefore give an affirmative answer, and to the recidivist quasi-Gödelian riposte “But that suggests that you at least understand that you don’t really understand” we reply “No, it means that we don’t really even understand what it is we lack when we lack understanding, or even what this assertion means”. And one can immediately see how a meta-recidivist quasi-Gödelian can carry on like this ad nauseam or ad infinitum, whichever comes first (a joke I owe to a lecturer in combinatorial theory at Oxford whose name temporarily escapes me).

It is worth digressing to point out that under this reconceptualisation of language in terms of vagueness there are, strictly speaking, no such things as facts, and therefore the secondary premise with which Wittgenstein began the Tractatus – that “§1.1 the world is the totality of facts, not of things” – crumbles, not because we have reinstated the priority of “things”, which remain as completely beyond us as Wittgenstein supposed, but because we have dissolved the notion of “facts” within the seas of vagueness. Of course, Wittgenstein also repudiated the Tractatus later, and much of analytic philosophy was spawned by his Philosophical Investigations, which takes an entirely different position.

To cut short this Sisyphean (or perhaps it is a Tarskian) process we observe that in the sentence quoted above every word is vague: “everything”; “is”; “vague”; “including”; “the”; and “fact”.

But this excursus into the self-referential brings us back to an important realisation that cannot but apply to any attempt to construct a theory of meaning: that to state in language how language relates to what it means is to presuppose that we have solved the problem of how the explanatory terms we employ in such statements relate to the terms of the language, which is the philosophical equivalent of solving the problem of how you catch a lion “by catching two and letting one go”.

And we are brought back not to Michael Dummett, but to Wittgenstein: we cannot state in language how we use language or what language means; we can only show that we understand by using language in a community of like-language-users who deem our utterances sufficient for communication. We may not wish to reaffirm that “the meaning is the use”, but we certainly wish to convey the fact that we demonstrate our understanding of meaning by the skills we deploy in our use of a language; we convey our understanding of the concepts referred to by the words of the language by the way we deploy the skills required to communicate with them to the satisfaction of like-minded language-users. Dummett and others may want more, but it is not reasonable to want what one cannot possibly have.

[To be continued.]

Meditation & Prayer

Many people meditate, and many people pray. Quite what their meditations and prayers consist in is not something we often know, and perhaps something we have no need to know, but both suffer from the same problem: each tends to be associated with a particular type of activity that does not appeal to many people who would, if they understood them better, benefit from either or both.

There are two very common types of prayer that will not on this occasion concern us: shopping-lists of requests presented to an imagined deity that are sometimes called intercessions, for example that we could have a new watch for Christmas or, far less frivolously, that Aunt Betty will get well; and heartfelt cries of despair generated by tragedy and hopelessness in which as a last desperate attempt to salvage something from our existence we cast our spirits out into and upon whatever aspects of the universe or our version of god are ready to receive them.

There are also versions of meditation that need not on this occasion concern us, and those consist in the kinds of activities that we associate with Eastern mystics who are attempting to connect with some aspect of the universe or expel all attempts at such connection entirely.

My reason for excluding these versions of prayer and meditation is to emphasise that none of these kinds of activity need play any part in either. In fact, to try to render both more accessible, less alien, less “religious” or “spiritual”, and by that token more useful and productive, it is important to emphasise not the unreality they involve, but the reality. In fact, the first principle of prayer and meditation is that they be grounded in reality.

The first principle of prayer and meditation is that they be grounded in reality.

Even to say this transforms our appreciation of what prayer and meditation might be. Rather than reaching for something other-worldly, the principle encourages us to look instead at the ultra-worldly, the nature and depth of reality. And because for many of us prayer and meditation are conceived in terms of an escape from reality, sources of solace in times of trouble, need, or distress, this principle will cause us a lot of trouble unless we add to it a second principle that is just as important: in both prayer and meditation it is essential that we be completely honest, insofar as it lies within our power.

In both prayer and meditation it is essential that we be completely honest.

This raises a question: honest about what and with whom? Honest with oneself and, where possible, with others; honest about whatever we are thinking and doing.

It may seem an extreme claim, but one of the obstacles to human well-being that prayer and meditation can remove is our habit of lying to ourselves about things or allowing ourselves to be deceived about things. Which brings me to the revelation that brought about this short essay.

I was reading about someone who claims to meditate as well as observe the ceremonial and ritual requirements of his religion, and for some reason it triggered the thought that, although I would hitherto have said that I never meditate and never pray, I do in fact meditate (although I never pray). But the meditation I engage in bears absolutely no resemblance to anything that could be associated with an Eastern mystic sitting cross-legged in a cloud of incense chanting “Om”. It has instead a simple characteristic that cleanses, regenerates, inspires, calms, reorganises and remotivates me: it consists of concentrated, honest thought or, as I shall call it in a moment when I have explained what I mean, indwelling.

Meditation as concentrated, honest thought cleanses, regenerates, inspires, calms, reorganises and remotivates.

To try to make this powerful and perhaps to some unlikely claim more intelligible and attractive, I should relate it to other things I have written about writing. People who are natural writers love nothing more than a blank sheet of paper and a pen (and yes, I do mean a pen, not a computer, and preferably in my case a fountain pen with a good gold nib and black ink). And contrary to some preconceptions – I may here be speaking only for myself, but then I am scarcely qualified to speak for anyone else – what makes this blank sheet of paper so gloriously attractive is that when I start to write I have absolutely no idea what I will have written by the time I get to the bottom of it. It represents not a receptacle into which I will pour what I have already thought, but a space in which I will both discover what I already think and create new things to think that I have never thought before. As Wittgenstein once put it, “The first time I knew I thought that was when I heard myself saying it” (I think it’s in the Philosophical Investigations somewhere), except in my case it isn’t just when I hear myself saying it (although that happens as well), but when I find myself writing it.

As some of my readers may remember me saying before, when I was a student I always knew I had written a good exam paper or essay when I emerged from the process knowing more about the topic than I knew when I started. It is the same today: when I finish this essay I will know more about meditation and prayer than I did when I started. That is the joy of writing (and I am sure also of painting, composing, performing, or indeed any kind of thinking); that is the reward and the compensation for all the painstaking effort that is required to do any of these things well: that when we do them we never merely repeat or record or recount; we always, if the process is worth doing at all, create and discover.

I always know I have written a good essay when I emerge from the process knowing more about the topic than I knew when I started.

This may seem impossibly ambitious, and many people may read it and think that it couldn’t possibly be true of them and probably isn’t true of me. They would be wrong on both counts. This experience is what meditation and prayer realise: that we emerge from both more than we were when we started, and if we don’t then we’ve not really been meditating or praying at all. (It is irrelevant whether we conceive of some external being or entity as being part of this process, since neither is necessary for the process to be possible. Indeed, it may be a useful distinction between meditation and prayer to suggest that the former presupposes and envisages no such external beings or entities whereas the latter does. In that sense, as I have said, I meditate but I never pray.)

Concentrated, honest thought need not and ideally should not be entirely conscious thought. For thought to be entirely conscious, however concentrated, would be for it to engage only a fraction of our capacity for thinking. Almost everyone has had the experience of thinking about a problem, failing to solve it, then suddenly having the solution pop into mind much later long after we ceased to apply any conscious effort to it at all. Our brains continue to process problems beneath and behind consciousness (I want to avoid using the words subconscious and unconscious because of their unhelpful psychoanalytic associations), and are often at their most powerful when we leave them to their own devices. (Indeed, when a problem doesn’t submit to conscious analysis, we generally have absolutely no means to “think harder” that I can make sense of: what on earth would one do in order to “think harder”?) By abandoning conscious thought we free up, or so it seems, powers in our brains over which we have little control.

I say “little control” because we do have some: what we are interested in and attend to will tend to be what we are good at thinking about. If I spend my time thinking about philosophy and reading and writing philosophy, I am likely to have interesting philosophical thoughts because my brain has the material it needs to process philosophical ideas “in the background”; I wouldn’t expect to have profound thoughts about nuclear physics or anthropology (although the cross-over can also sometimes take us by surprise). So what we attend to does in some sense direct the kinds of thoughts we will later have by feeding our background thinking with material to work on.

Attending to something with fascination, concentration and determination is what I mean by – borrowing a term from one of my earliest gurus, Michael Polanyi – indwelling: we dwell in, absorb ourselves in, immerse ourselves in something, some activity or topic or skill, and in so doing we provide the background processing of which our brains are capable with the material necessary if we are to be creative and to discover new things. To sit in front of a blank sheet of paper with no resource other than a fountain pen (or, if you insist, a word-processor) is to open the gates through which the background processes that ensue from our attending to things of interest, our indwelling, can flow.

And this is of course a reason why the dilettante flitting of modern minds from one topic to another on social media and through smartphones – a flitting that is the antithesis of indwelling, of concentrated, honest attending to a topic, sometimes for hours and days and months and years – is a self-defeating activity as far as the discovery of creative meditation is concerned: it isn’t that people are incapable of creative thought, but that they do not feed their minds with the requisite raw material out of which brains can produce creative thought. Nobody would attempt to run a marathon without attending to proper nourishment; nobody should attempt to meditate without attending to a rich diet of nourishment for the mind.

Nobody should attempt to meditate without attending to a rich diet of nourishment for the mind.

So we should all ask ourselves whether we are giving ourselves a chance to develop and grow mentally and spiritually if we are not giving any attention to the raw material we feed into our brains. Depression and despair can be self-induced if we only ever give attention to what is negative and destructive, violent and cruel, dishonest and fraudulent, and we need to be very careful if our entire diet of intellectual material comes from social media, for much of what we experience there is as toxic to our minds as physical poison would be to our bodies.

Much of what we experience in social media is as toxic to our minds as physical poison would be to our bodies.

We put a lot of effort into avoiding food-poisoning; we should put at least as much into avoiding mind-poisoning.

Stevie Smith once wrote a poem called “Analysand” about someone who is morbidly preoccupied with “their own mental stink”, and someone sent it to me after a particularly toxic conversation in which I had been unremittingly negative about everything under the sun, especially myself. This was almost forty years ago, but I still remember the last two lines:

Would you expect to find him in the pink?

Who’s solely occupied with his own mental stink?

Stevie Smith, Analysand

We live in an age where preoccupation with the world’s stink is becoming a source of mental diseases, not merely a reflection of them. It is time we paid more attention to our mental diet, attended more seriously and persistently to what we feed our minds, and learned that dwelling in things of worth and import with concentration and honesty is a prerequisite of meditation or prayer, and therefore of creativity, discovery and human fulfilment.

Of course, this raises more questions, not least the question of how we are to decide what is worth attending to, what is worth studying, knowing, practising, learning and dwelling in. But that is a topic for another day.

On Lies and Liars

Avid readers of my blogs will probably remember that one of my most-often-used aphorisms is a saying that I regularly and faithfully attribute to Ludwig Wittgenstein (LW), even though my best endeavours have failed to trace its origins:

“All the really important decisions tend to be taken right at the very beginning, when we hardly realise that we have begun.”

Sometimes I imagine that Wittgenstein was not the person from whom I first learned this principle, but if so I am unable to trace an alternative author. Perhaps, then, the attribution to anyone other than myself is mistaken, and I am in reality the author of my own aphorism? What would follow? That the attribution to LW is a lie? A mistake, perhaps, but not a lie; not something deliberately manufactured to deceive; on the contrary, something intended to dispel any supposition that the insight is attributable to me, even if it is not attributable to LW either. I have no desire to earn undeserved credit for something that I originally acquired from someone else.

Elsewhere I have quoted the same sentiment somewhat differently:

“It is often the case that the really important decisions are made right at the beginning, when we hardly know we have begun.”

Either way, the sentiment is the same.

Whatever the origins of this mantra, and whoever originally said or penned it, it is incontrovertible that it has now been said. It may, just conceivably, be the case that nobody has ever said it before, in which case it is indeed attributable to me; I cannot say. All I can say is that I believe that I learnt it from LW even if I have somehow mangled the true attribution, or woven together many other skeins of thought to produce the idea myself.

What of it? To be mistaken about an attribution is not the same as to lie about it. We might lie about it in order to try to raise its authority; we might be mistaken merely because we have forgotten or misplaced the original. So let us start again.

  1. Someone who lies demonstrates that she has not understood the world.
    1. To the response “On the contrary: she may demonstrate that she understands the world better than others” we have no reply. Someone who does not understand that to lie is to misunderstand the world does not understand the world well enough to understand an explanation of the same claim.
  2. To lie is to injure oneself.
    1. To the response “On the contrary: it may be to injure another” we have no reply. Someone who does not understand that to lie is to injure oneself cannot be protected from such injury by means of explanation.
  3. To lie well we need to become a lie ourselves in order that we can believe the lies we tell and make them sound true.
    1. To the response “On the contrary: someone may become an accomplished liar while knowing perfectly well that everything she says is a lie” we have no reply. Someone who believes that we can lie consistently and persuasively without ourselves becoming a lie has not understood what lying is, or how difficult it is to do extremely well, and so cannot understand an explanation of what it takes to be a liar.
  4. The most accomplished liars are those who believe their lies absolutely.
    1. To the response “On the contrary: there are accomplished liars who know very well that they are lying and do so specifically to deceive us while remaining themselves in possession of the truth” we have no reply. Someone who lies to great effect must believe their own lie, or they could not persuade other rational persons that it was true.

When we say that someone who lies, and especially someone who lies effectively and skilfully has “not understood the world”, we are not suggesting that their lying does not in some measure advance what they take to be their cause. We are rather saying that their lying can only further a cause that is itself mistaken, a result of a misunderstanding of the world. The alternative would be to allow that lying could produce a desirable effect that is based upon a proper understanding of the world, which would be to make the world itself a lie.

Many, of course, have argued over the centuries that the world is indeed a lie. This concept lies at the heart of a religious notion of the imperfection of the world brought about by corruption. But such a notion is clearly nonsensical, whatever its religious credentials: the nature of the world cannot be contaminated by human corruption, even if the nature of human society and our dealings with the world can. We are reminded of what Richard Feynman wrote as the concluding sentence of his part of the report on the Challenger disaster:

“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”

There is an important distinction here between something that is used to ill effect and something that is corrupted. If I administer cyanide to you and kill you, I have put the cyanide to ill effect, but I have not corrupted the cyanide, which does what it does in the way it has always done it. The notion of a corrupt world or a corrupt universe makes no sense scientifically: things do what they do, acting always according to the laws that have emerged in nature. (The word “nature” is also problematic, but we will park that concern for now.)

A natural inference is that, since human beings are products of nature, human beings are similarly incapable of corruption: they simply do what they do; if their doing includes lying, then lying is also no more than the actualisation of a possibility inherent in nature as it has evolved to produce human beings. On such an analysis, for which we should have considerable sympathy because it obviates the need for the vocabulary of sin and evil, corruption and salvation, our response to lying should be the same as our response to flood, fire and pestilence: they are the consequences of nature doing what nature does; our task is to control their impact by deploying the skills and powers that have accrued to us through our own development, as we in our turn do what nature allows us to do to curb their deleterious influences.

Imagine, then, that we could somehow eliminate the vocabulary of good and bad, corruption and saintliness, and respond to all human actions as we would to natural events – exactly as we should, for they are natural events. Then, when something like 9/11 or the Las Vegas shootings occurs, instead of reaching for the vocabulary of sin and evil, which achieves absolutely nothing, we should reach instead for the armour of analysis and correction: something has happened that we and people like us deem undesirable and contrary to human well-being; we should take steps to ensure that nothing like it happens again.

The vocabulary of evil and corruption only serves to inhibit implementation of appropriate remedial strategies. It suggests that the origins of the behaviour are to be sought and found solely in the mind of the culprit and not more widely in the movements of ideas and values in the society that created him. It suggests, in other words, that nature can be fooled by a suitable application of human will accompanied by something called evil intent. But nature cannot be fooled: bullets will injure and kill people because it is their nature to do so; take away the bullets and nobody can be killed by bullets.

As someone put it on a news programme yesterday, the NRA believes that the solution to a bad man with a gun is a good man with a gun. The mistaken and misleading vocabulary of “good” and “bad” is rehearsed. That only means that we are farther away from understanding that a culture that regards owning and using guns as in some sense “cool” creates a climate in which the notion of killing 58 strangers and injuring hundreds more is even thinkable. And before we are too quick to point the finger, we should ask ourselves how much of our entertainment, particularly in film, consists in the glorification of violence and guns. We are brain-washed into believing the NRA lie: that the solution to a bad man with a gun is a good man with a gun. This is the Schwarzenegger logic, the Die Hard logic: be tougher and bigger, tote a bigger gun, and you can subdue evil. But the enemy is not evil: the enemy is the vocabulary of good and evil that separates human conduct from natural processes and pretends that the way to deal with bad people is to point fingers at them and call them “evil”. We might as well try to divert a hurricane by praying, calling it names, pointing fans or – heaven forbid – shooting at it. Then again, we could apply some of our enormous economic resources to building houses that can withstand hurricanes. It’s actually not that hard. But instead we prefer to speak of bad men and storms using the language of evil, forgetting that nature is not evil and that nature cannot be fooled.

It is of course cheaper both economically and politically to label certain people “evil” than to address the social problems and attitudes of mind that lead gun-carrying societies to think guns are cool. “This was an evil act”, “an act of pure evil”: such language makes it seem like the fault of some malignant force, some Devil or Satan, a consequence of some original sin committed so long ago that nobody can now remember when and nobody today can bear responsibility for it. It may even suggest that nature herself is evil or capable of evil. But nature is incapable of evil, and nature cannot be fooled.

It is easy to try to find counter-examples in the many things we experience as great evils, tragedies, and sources of suffering: cancer; plague; some viruses; Alzheimer’s disease; even death. But these are not examples of nature being evil: they are just examples of conflicting trajectories in which the success of one process causes the failure of another. Reaching for the language of good and evil to explain or accuse diseases achieves nothing: viruses do what they do, and degenerative diseases arise from natural wear and tear, in exactly the same way that floods devastate communities and hurricanes demolish houses: by virtue of nature doing what nature does (assisted or not by other things like climate change and unhealthy human lifestyles, because nature cannot be fooled).

So how does all this connect with our title, “On Lies and Liars”? Imagine that our first proposition, that someone who lies does not understand the world, were to be applied not to an individual, but to a whole society, perhaps a whole species. What happens when an entire species learns and takes as given what is in fact a lie, comes to believe the lie, and employs the lie in its entire analysis of the world?

What happens when an entire species learns and takes as given what is in fact a lie, comes to believe the lie, and employs the lie in its entire analysis of the world?

The lie we have in mind is the claim that there exists something called “evil”. There is no denying that there are unspeakable, despicable and utterly reprehensible acts, and that they are all undertaken by human beings. If all we mean by “evil” is this, then there are evil acts. But that is not all we mean: to invoke the language of good and evil is to appeal to an ancient and cosmic dualism in which good and evil originate in opposite poles of a metaphysical universe that lies beyond our world. But there are no such cosmic powers: there is only nature and what nature does; and nature cannot be fooled. “Evil” acts are human acts perpetrated by human beings who are the product of societies that are themselves the product of natural processes that cannot lie and will not be fooled. They are not “evil” because there is no force for and source of evil other than our own deployment of natural processes that just do what they do.

To invoke the language of good and evil is to appeal to an ancient and cosmic dualism in which good and evil originate in opposite poles of a metaphysical universe that lies beyond our world.

Invoking the language of good and evil is the equivalent of throwing up one’s hands in horror and disclaiming all responsibility: the source of this great tragedy lay outside the earth and beyond human control; therefore there is nothing to be done and nobody we should blame. “Evil” becomes a catch-all that absolves us from all responsibility. As such it is a lie that almost all of us have at some time bought into, and the benefits of which we have all at some time sought to enjoy: “I don’t know what came over me; it was as if I was possessed”.

So what does happen when entire societies and perhaps entire species come to believe and adopt the language of a lie? What happens is that they become blind to the causes of their own misfortunes, absolve themselves from responsibility for things that are entirely of their own making, and blame cosmic forces for things whose origins lie at home because the lie they so completely embrace leads them to seek the causes of all these things in entirely the wrong place. Using the language of evil we seek to exempt ourselves from responsibility for anything and everything that is too difficult, too inconvenient, too politically costly, or too embarrassing to address directly. And so we are all in our own ways consummate liars who have learned how most effectively to lie to ourselves by allowing ourselves to become a lie.

Fundamental lies – lies, that is, which permeate societies and find themselves endorsed by most members of those societies – lie so deep in our psyches that we find them almost impossible to detect and identify. The suggestion here is that the notion of “evil” is such a lie, that we each inherit that lie with our culture and our education, especially our religious education, and that until we address that problem many other things in society will be impossible to rectify.

“All the really important decisions tend to be taken right at the very beginning, when we hardly realise that we have begun.”