Rules of the Game

The Context

Some people are not as impressed as they should be by the achievement of #AlphaZero in mastering three independent games from scratch to world-champion standard, given only the rules, in a matter of hours. (For details, see the paper published by DeepMind on December 5th, 2017.) I have heard it said that computers have always been better than human beings at some things, not least calculating, crunching data, determining statistics, and a host of other tasks. But this is different, because it is not just about doing things faster; it is about learning how to play games that have been around for thousands of years while ignoring everything that human beings have ever said about them, thought about them, or suggested might be a good way to play them.

#AlphaZero plays Go differently from #AlphaGoZero; it plays Chess differently too, making moves that few humans would even consider, to say nothing of the fact that it reinvented the entire treasure-house of standard opening theory from scratch.

This massive achievement, coming on the heels of similar, now long-past successes in simple Atari games and in Backgammon, will almost certainly soon be followed by mastery of Blizzard’s StarCraft II, a game claimed to be far more difficult than Go, Chess or Shogi.

The Challenge

This blog is not about any of this; it is about something altogether different: the question of how #AlphaZero might become a master of human activities that are not obviously games, but that can probably be conceived and modelled as such.

It would be comparatively easy to determine the rules of Chess and Go by watching a few games; it is clearly far less easy to determine the rules of economics, the operation of the world’s stock markets, currency exchanges, the behaviour of the weather, traffic systems, and countless other things that humans engage in and are affected by on a daily basis, and that are at least notionally bounded, that is to say contained within a definable range of activities or events.

Nobody knows the rules of any of these “games”, which is not to say that there are none or that there have not been substantial and persistent attempts to ascertain what they are; indeed, economic and political theories try to say what the rules are, and the various theories of economics that have been articulated, if not precisely codified (Marxism, Keynesianism, Monetarism, etc), try to model economic activity within the scope of a set of principles. An economic theory that successfully abstracted a full set of rules that allowed us to plot and model economic activity would “clean up”; it is just that nobody has ever managed to articulate it.


Mathematicians have been doing something similar for decades in the part of the discipline called axiomatics, and the process they use is called axiomatisation: the process of taking a mathematical system (such as the natural numbers {1, 2, 3, …}) and ascertaining a set of axioms or fundamental rules that govern the behaviour of that system. Such sets of axioms are not unique, but once we choose such an axiom set we can then proceed to regenerate the mathematical system, prove theorems in the system that predict its properties, and so forth. There are complications, not least questions to do with completeness and consistency, that have been discussed profoundly by such as Kurt Gödel; the details are not relevant here, but may eventually become so if the axiomatisation of other disciplines can be achieved.
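As a toy sketch of what such an axiomatisation looks like in practice, here is the constructive core of the natural numbers regenerated from just two primitives, "zero" and "successor" (roughly the generative part of the Peano axioms). The representation chosen here, nested tuples, is of course an arbitrary implementation detail:

```python
# The natural numbers rebuilt from two primitives: a zero object and a
# successor operation. Everything else (counting, addition) is derived
# from those rules alone.

ZERO = ()

def succ(n):
    """The successor operation: each number wraps its predecessor."""
    return (n,)

def to_int(n):
    """Translate the constructed representation back into a familiar integer."""
    count = 0
    while n != ():
        n = n[0]
        count += 1
    return count

def add(m, n):
    """Addition defined only from the primitives:
    m + 0 = m, and m + succ(n) = succ(m + n)."""
    return m if n == () else (add(m, n[0]),)

two = succ(succ(ZERO))
three = succ(two)
print(to_int(add(two, three)))  # 5
```

Once the axioms are fixed, the whole system regenerates itself; the interesting (and Gödelian) complications only begin when we ask what the system can prove about itself.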

Suppose, now, that some descendant of #AlphaZero – let’s call it #AlphaZeroPlus for convenience – becomes adept at abstracting the rules of a game simply from observing instances of the game being played. We might then show it, for example, the operations of a stock market or a traffic system, and ask it to generate the set of rules governing the way the game is “played”. If it could do that, then given the achievements of #AlphaZero, we would expect it to be able to master the game – stock market investment or traffic management in our examples – relatively quickly. Were #AlphaZeroPlus able to determine the rules of economics itself, either in some relatively isolated part or perhaps, if we are optimistic, of the whole of economics, then again we would expect it to learn to play the game fairly quickly, depending on how complex the rules proved to be, and to make moves in economics using the available instruments – the things the Central Authorities such as the Bank of England and the Federal Reserve can change, like interest rates or the money supply – that would achieve some putative desirable outcome. That outcome is not of course determined by the rules of the game, any more than the winning-conditions of Chess or Go are determined by the rules governing the legal moves in those games; one could just as easily play Chess using exactly the same moves with the capture of the opponent’s queen as the criterion of victory as play it to achieve checkmate. A related idea has already been implemented in “Chess960”, otherwise known as Fischer Random Chess, where the starting positions of the pieces on the back rank are randomised according to certain constraints.
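The Chess960 example shows neatly how a few constraints plus randomness generate a whole family of legal configurations. A minimal sketch follows; the placement scheme here is a simplified assumption of my own, not the official drawing procedure:

```python
import random

def chess960_back_rank(rng=random):
    """Generate a legal Chess960 (Fischer Random Chess) back rank:
    the two bishops must stand on opposite-coloured squares, and the
    king must stand somewhere between the two rooks."""
    rank = [None] * 8
    # One bishop on an even-indexed file, one on an odd-indexed file,
    # guaranteeing opposite-coloured squares
    rank[rng.choice(range(0, 8, 2))] = "B"
    rank[rng.choice(range(1, 8, 2))] = "B"
    # Queen and both knights fill any three of the remaining squares
    empty = [i for i, piece in enumerate(rank) if piece is None]
    for piece in ["Q", "N", "N"]:
        square = rng.choice(empty)
        rank[square] = piece
        empty.remove(square)
    # The last three empty squares, left to right, take rook, king, rook,
    # which automatically places the king between the rooks
    for square, piece in zip(sorted(empty), ["R", "K", "R"]):
        rank[square] = piece
    return "".join(rank)

print(chess960_back_rank())
```

Every output is one of the 960 legal starting arrays; the rules of movement are untouched, which is exactly the point about rules and objectives being separable.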

So one next step for AI could be to apply itself to the considerable task of determining the rules governing the behaviour of an arbitrary system of human activity. Were it able to do that then, depending on how wide a range of human activity proved susceptible to such axiomatisation – and my suspicion is that more would prove susceptible than we might at first sight imagine or wish to imagine – the kinds of achievements #AlphaZero has demonstrated would prove of extraordinary and world-changing power.

The team at #DeepMind have already been applying their technology to analysis of medical data, hoping to discern in the material some clues that will help diagnostic medicine. There is no reason, were they or some team with similar skills to be able to crack axiomatisation, why we would not be able to apply the same technology to environmental issues such as climate change, where knowing the rules would facilitate more efficient alterations of our behaviour to achieve a desired outcome, or politics, where understanding the rules governing human behaviour might permit us to achieve desired political outcomes such as resolving deadlocks. And yes, of course there are dangers: knowing the rules and being able to play the game to achieve any desired outcome would give those controlling such power unlimited influence over the trajectory of the world in most respects. But the fact that progress can be abused is not a new discovery, and that AI can be abused should surprise nobody.

Partial Axiomatisations

There is a further stage to this evolution of AI power that presents even more intriguing prospects. Suppose, as seems likely, that there are human activities and systems of such complexity that they resist axiomatisation, whether because they cannot be axiomatised or because they require levels of skill beyond even our fabled #AlphaZeroPlus. We might find ourselves in possession of partial axiomatisations of such systems that could explain aspects of their evolution but not every aspect. Such sparse axiom sets could still be used to generate probabilistic models rather like weather maps showing the likelihoods of different scenarios emerging from current situations.
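To make the idea concrete, here is a minimal sketch, under entirely invented assumptions, of how a partial rule set plus repeated simulation might yield the kind of probabilistic "weather map" described above. The only rule we pretend to have abstracted is a small upward drift in some quantity; everything the partial axioms fail to capture is treated as noise:

```python
import random

# All numbers here (drift, noise level, horizon) are illustrative assumptions,
# standing in for whatever a partial axiomatisation actually delivered.

def simulate(start=100.0, days=30, drift=0.1, noise=1.0, rng=random):
    """Evolve one scenario: the known partial rule (drift) plus
    unmodelled noise for everything the axioms do not cover."""
    level = start
    for _ in range(days):
        level += drift + rng.gauss(0.0, noise)
    return level

rng = random.Random(42)  # fixed seed so the sketch is reproducible
runs = [simulate(rng=rng) for _ in range(10_000)]
p_higher = sum(r > 100.0 for r in runs) / len(runs)
print(f"P(level higher after 30 days) ~ {p_higher:.2f}")
```

The output is not a prediction but a distribution over scenarios, which is exactly what a sparse axiom set can honestly offer.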

Super-Sensitive Systems

Of course, as has been known for some decades now, many supposedly predictable systems are super-sensitive to the initial conditions, so-called “chaotic” systems, so even with an axiomatisation of such systems we, in collaboration with our superintelligent AIs, might still find ourselves incapable of determining the initial conditions to sufficient accuracy to make reliable predictions of a system’s evolution. In the case of some systems, indeed, there is no “sufficient” degree of accuracy; any change in initial conditions, even in the millionth or billionth decimal place, will send the system off on eventually divergent trajectories. But, while we should be aware of such intractable cases, we should still be able to make some kinds of predictions of how systems will evolve in most cases or, if they are markedly unstable, to identify them as such.
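The sensitivity in question is easy to demonstrate with the logistic map, a standard textbook example of a chaotic system. Two starting values differing only in the sixth decimal place soon follow completely different trajectories:

```python
# The logistic map x -> r*x*(1-x) at r = 4 is fully deterministic, yet
# amplifies any difference in initial conditions exponentially.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.400000, 0.400001   # "identical" initial conditions, to five decimals
max_gap = 0.0
for step in range(60):
    x, y = logistic(x), logistic(y)
    max_gap = max(max_gap, abs(x - y))

print(f"largest divergence over 60 steps: {max_gap:.3f}")
```

Knowing the rule perfectly is no help here; the obstacle is measuring the starting point perfectly, which is the point made above.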


Much of education consists of interactions between teachers and learners. Many of those interactions have the form of moves in a game: a student does this, a teacher does that; students ask questions, teachers answer them or refer students to resources that answer them; students make mistakes, teachers correct them; and so forth. It seems perfectly plausible that one day the kinds of game-oriented AI technology we have been discussing will formulate rules that shape the way learning happens (Richard Feynman, for example, tried to do just that some decades ago). If that gives rise to some Educational Artificial Intelligence Engines (EAIEs), as has been predicted in this blog before, then the kinds of abstractive, axiomatic processes described here will be crucial in ascertaining how those educational systems and processes work. Then every student could be allocated a personalised EAIE that would track, and provide input and feedback on, all his or her activities, questions and projects.

The Game of Life

And of course the “Holy Grail” of such abstraction and axiomatisation would be for us to be able to build an #AlphaOmega that could determine the rules governing the Game of Life itself (and we don’t mean the John Conway version); but since #AlphaOmega would be a part of the very game it was attempting to model, that would take us into quite another realm of self-referential computational complexity.


The Ethics of AI

In 1997 I published a book called God and the Mind Machine through SPCK. It didn’t sell, and they pulped it. The biggest intellectual mistake of my life was to allow myself to be so discouraged by this that I effectively abandoned study of AI for the better part of 20 years. More fool me.

God and the Mind Machine (hereafter GMM) was more interested in the mind-body problem, the question of the soul (or rather why we don’t have one), and how to conceptualise the inner life of other entities, including potentially machines. I still regard it as having essentially resolved the mind-body problem in such a way as to leave open the possibility of artificial life that is fully sentient, and I have read no persuasive counter-arguments.

The solution to the mind-body problem can be stated in a sentence: to be a suitable body with a suitable brain is to be sentient; our minds are our bodies in their inside-looking-out-ness.

To be a suitable body with a suitable brain is to be sentient; our minds are our bodies in their inside-looking-out-ness.

That’s it. No souls, no ghosts in the machine, no special additional qualities bestowed upon us by the gods: just being bodies with brains existing interactively with the world is sufficient. And therefore a suitably sophisticated machine that interacts with the world can also be a mind, can also be intelligent, and something with which or someone with whom we could have just the same relationship as we do with another human being or one of the higher animals. And it seems to me inevitable that those machines will one day supersede us in intelligence and come to see us as the failed species that we are.

We shouldn’t be surprised by this. It is only our mistaken adherence to a version of Plato’s world-view tangled up with one or other kind of religion that makes us want more or think that there is more.

All this is obviously relevant to the current debate about AI, and the position I adopted in GMM persuades me that those saying that it isn’t a question of “if” but of “when” AI will become self-aware and therefore conscious are right. I propose to waste no more time debating the matter.

So the new question becomes not whether sentient AI will emerge, but what kind of sentience AI will enjoy. And this is the dilemma of “The Ethics of AI” because human beings only understand sentience from their own perspective, and an advanced AI will certainly understand it differently, and probably more deeply than we do.

Before we can sensibly enter into a discussion of the ethics of AI we have to resolve some pretty fundamental questions about the ethics of human beings. Almost everyone involved in the AI debate wants to ensure that they – the super-intelligent AI beings – will be benevolent to human beings, but I find it hard to understand why. Human beings are irretrievably flawed, and I don’t see why a super-intelligent AI would see them in a more favourable light; in fact I see every reason to suppose that a super-intelligent AI will see us exactly for what we are, and perhaps better than we understand ourselves. The reality is that we are a failed species: we have done some things very well and crawled a long way from the primaeval slime; but we rely on war and violence and drugs and brutality and discrimination to defend our so-called freedoms, which is to say that we are still locked into the same kind of evolutionary war that produced us and seemingly incapable of developing beyond it.

So there is a fundamental philosophical challenge buried at the heart of the debate about AI and especially its ethics: in terms of ethics and intelligence, how can we supersede ourselves? How, in other words, does a flawed species propose to engineer a species that is superior to itself in terms both of intelligence and ethics?

How does a flawed species propose to engineer a species that is superior to itself in terms both of intelligence and ethics?

This is a philosophically deep question, especially if we approach the problem from the perspective of the design of algorithms (although the best and most successful approaches to machine learning give us good reason to believe that this approach is not optimal). The problem is as old as computing itself: we commonly infer, from the fact that a system’s behaviour is fully coded, that its behaviour must be predictable. This amounts to another of our failed intuitions: we tend to think that if we know all the coded steps that define the behaviour of an AI we must necessarily know what that behaviour will be. This is an illusion.

We tend to think that, if we know all the coded steps that define the behaviour of an AI, we must necessarily know what that behaviour will be. This is an illusion.

It is also a very dangerous illusion. To see that it is an illusion, consider the axiomatic definition of the natural numbers {1, 2, 3, 4, …}: we know exactly how to define the natural numbers in primitive terms in such a way that we can generate them indefinitely, yet many properties of the natural numbers still elude us (Goldbach’s Conjecture being the most famous). Or consider the rules of Chess or of Go: we know exactly and entirely what they are, but we do not know every game that can be played, and the question of whether the first player can force a win with best play remains undecided. Most powerfully of all, consider the operation of language: we can produce a dictionary and a grammar that define the use of a language, but we have absolutely no hope of predicting all the uses to which its words and syntax can be put, even though we know that anything that is ever said must employ them.
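The Goldbach example can be made concrete: the rules that generate the naturals are fully known, and the conjecture (every even number greater than 2 is the sum of two primes) can be checked mechanically for any particular number, yet that checking never amounts to a proof:

```python
# Knowing a system's rules completely is not the same as knowing its
# properties: we can verify Goldbach's Conjecture case by case, forever,
# without ever proving it.

def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(n):
    """Return the first pair of primes summing to even n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return (p, n - p)
    return None

# Every even number up to 10,000 passes the check; a proof is another matter
assert all(goldbach_pair(n) is not None for n in range(4, 10_001, 2))
print(goldbach_pair(28))  # (5, 23)
```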

In the case of AI this illusion is potentially catastrophic because it leads us to believe that because we control the process of creating the AI we must necessarily be able to control the way the AI will evolve and behave. Yet if the system is sufficiently sophisticated to offer a hope of reproducing human or superhuman intelligence, we cannot and we don’t.

In the case of AI this illusion is potentially catastrophic because it leads us to believe that because we control the process of creating the AI we must necessarily be able to control the way the AI will evolve and behave.

To put it as clearly as possible: we create our children, but we cannot control how they develop or behave; the same is true of AI, and perhaps to a greater extent because we understand the ramifications of their design far less.

This illustration is not arbitrary: in the case of human existence and behaviour we have attempted, and usually failed, to design ethical systems to constrain lives. They have failed because of the lack of a necessary and binding connection between the words that shape ethics and the behaviour ethics is intended to govern. As a result we have found it necessary to have recourse to policing, law, trial and punishment to enforce underlying ethical principles; it has never proved possible to leave adherence to a particular ethics entirely to the people themselves, however well most people have generally behaved. Indeed, the fact that many people behave well has often been seen by those inclined to misbehave as a weakness that only encourages them to act as criminals.

The point is that throughout history the human ethical enterprise has failed: we have never succeeded in creating and maintaining a society in which ethical principles would govern behaviour; we have always had to resort to law and enforcement to maintain order. Even the most high-minded principles articulated in the best religions and philosophies have always required supporting force, and that force has always subverted the higher principles and in the end destroyed them.

Our anxieties about what we may have created or may yet create in AI are reminiscent of the Greek myth in which Cronus swallows his first five children by Rhea, only to be overthrown by the sixth, Zeus, because Rhea hides the infant and deceives Cronus with a stone in his place. I can’t imagine a more apposite myth to describe what will happen with AI, with or without an ethics, because sooner or later someone will hide what they are doing from the world until it is too late. (It is a myth that may repeat itself “upside down” in self-driving-truck engineer Anthony Levandowski’s dream of an AI religion where, if historical precedent is anything to go by, those who make the AI gods will be the first to be destroyed by them.)

So even before we begin to develop an “Ethics of AI” we need to recognise that such an ethics will not work unless it is policed, and there will inevitably be desperate, reckless, destructive human agencies who will seek to deploy AI against any or all the principles elaborated in any ethics, ostensibly to their own advantage.

Pessimism about the prospects for an ethics of AI should not deter us from trying to work on one, any more than excitement at the possibilities offered by such new technology should blind us to its dangers. But for a species so stupid that it sees the only solution to a man with a gun as a man with a bigger gun, AI will inevitably be weaponised; it is probably already too late.

Theories of Meaning

Rather too many years ago I spent a considerable amount of time reading and thinking about “The Theory of Meaning”, a somewhat esoteric part of Analytic Philosophy. I recently returned to this for no reason that I can easily identify (although that in itself is of some significance – see below), and it has given rise to a multiplicity of thoughts, some directly related to the topic and some not.

The directly-related thoughts are about philosophical argument and how sometimes a picture really is worth a thousand, indeed several thousand, words. The indirectly-related thoughts (which I think are much more interesting) are about: (a) what possible justification there can be for some of the smartest people on the planet engaging in this kind of ultra-abstruse discussion; (b) the answer to this question framed in terms of cultural holism; (c) a reaffirmation of the fundamental principle that nothing is what it seems, that things are far more complex than we ever imagine, and that only mining the immensely obscure and difficult depths of human thought can hope to rescue us from this plight, if indeed anything can; and (d) the social connectedness presupposed by holism (cf. the reference below to Michael Dummett’s essay “The Social Character of Meaning”), which implies that we may find ourselves engaged in something seemingly utterly irrelevant and esoteric, even trivial and banal, for strong unknown reasons that explain and justify such activity but that we cannot possibly determine in advance of doing it, and perhaps not afterwards either.

As an example of the fourth of these inferences, (d), I could cite the sequence of events that brought about this essay: taking down from a shelf seemingly aimlessly and at random a copy of a book I have not looked at for years; reading pages that seemed to bear no relationship to anything I am doing or thinking or even interested in at the moment; and finding shortly afterwards that they have precipitated a deluge of far-reaching ideas, some of which are reproduced below.

One of the best places to start, and the book in question, although far from being the easiest, is Michael Dummett’s The Seas of Language, OUP, 1993, reprinted in paperback in 1997. This collection of essays and lectures begins with two called “What is a Theory of Meaning? (I)” and – go on, take a wild guess! – “What is a Theory of Meaning? (II)”.

Professor Sir Michael Dummett (1925 – 2011) combined an extraordinary distinction as a philosopher with a passionate loathing of racism and a profound concern for migrants and refugees, thus demonstrating his own practical answer to the charge levelled above in (a). His obituary in The Daily Telegraph includes this:

“But his commitment to truth had very practical applications, and ones which he pursued with vigour and personal courage. In particular, throughout his career he maintained a deep interest in the ethical and political issues concerning refugees and immigration, informed by what he described as ‘an especial loathing of racial prejudice and its social manifestations’.”

And then subsequently …

“Dummett saw the root of the problem as lying in the political system. In his book On Immigration and Refugees (2001), he argued that lurking behind the egalitarian veneer of democracy is the more manipulative principle of playing on people’s prejudices to gain votes. This, when applied to issues of immigration, has invariably led to a jingoistic policy – a policy founded, essentially, on racism. In Britain, according to Dummett, much of the blame rested with the Home Office, a department which he accused of “decades of hopeless indoctrination in hostility”, first against Commonwealth immigrants, and later against asylum seekers and refugees. “For the Home Office,” he once wrote, “the adjective ‘bogus’ goes as automatically with ‘asylum seeker’ as ‘green’ does with ‘grass’.”

Dummett is persuaded for some very good reasons that a theory of meaning is really a theory of understanding, and that a full-blown theory of meaning would consist in an explanation of what it is to understand a language. To substantiate and explicate this claim he first has to deal with a cut-down and unsatisfactory misconception about meaning related to translation. His example is what he calls an “M-sentence”: “‘La terra si muove’ means that the Earth moves”. This tells us that a sentence in one language means the same as a sentence in another language, but it does not actually tell us anything about what those sentences refer to, what knowledge they entail, or whether indeed they entail any knowledge at all. This becomes clearer when he observes that translation can no more furnish knowledge than can the similar M-sentence “‘The Earth moves’ means that the Earth moves” (p.7).

To my mind it is unfortunate that Dummett and others such as Davidson and Kripke use real sentences to try to illustrate this point, because a sentence framed in terms of non-existent entities (entities that in this instance exist only within the confines of rooms where I am teaching a philosophy elective, where they materialise on command) makes it much more powerfully: “A shpringlehock is called a ‘shpringlehock’” means that a shpringlehock is called a ‘shpringlehock’. Or: “Shpringlehocks are grue” means that shpringlehocks are grue. These sentences are undoubtedly true within a cut-down and insufficient theory of meaning, but inasmuch as they do not enable us, or even require us, to possess any knowledge, still less understanding, they cannot qualify as examples of a full-blown theory of meaning.

This, I take it, is reasonably easy to understand, but what comes next is far more controversial, and cuts through the entire world of analytic philosophy, where there is no satisfactory resolution to the question it poses, and where I think even Dummett flounders. When we ask what would constitute a full-blown or “full-blooded” (as Dummett calls it, p.5) theory of meaning, we immediately find ourselves confronted by the challenge posed by a holistic theory of language such as that proposed by Quine in “Two Dogmas of Empiricism”: does any word or any sentence assume its full meaning, and therefore convey fully whatever knowledge it contains, and therefore require and entail all the understanding necessary to grasp it fully, unless the entire language has been grasped?

Does any word or any sentence assume its full meaning, and therefore convey fully whatever knowledge it contains, and therefore require and entail all the understanding necessary to grasp it fully, unless the entire language has been grasped?

Dummett is inclined (op cit. p.17f) to reject linguistic holism because he believes that it can give no sensible account of what it is to learn a language, little-by-little, or to grasp a language partially, as would, on reflection, be true not only of those learning the language word-by-word and sentence-by-sentence, but of absolutely all of us, the most knowledgeable and wise and clever native-speakers included.

In a much earlier essay he addresses a similar problem by asking in what a full-blown concept of “gold” would consist, observing that the word plays a part in all sorts of different parts of the English language: in chemistry; in literature; in economics; in mythology; and so forth. Would it be reasonable to deny that someone understood the word “gold” merely because they had failed to become fully conversant with its use in all these different language-games? Some think it would not be reasonable (cf., inter alia, Dummett’s brilliant and detailed discussion in “The Social Character of Meaning”, Truth and Other Enigmas, OUP (1978), chapter 23, p.427ff., where he distinguishes “gold” from “elm” in various ways in the context of an analysis of a thesis advanced by Hilary Putnam). I think they are mistaken: whether something is “reasonable” in this colloquial sense of the word is not a philosophical criterion. It is transparently and tautologously true that someone who only understands some of the uses of the word “gold”, and therefore possesses only part of the concept “gold”, does not have a full-blown understanding of the word “gold”.

Why is this relevant? Granted that “Horses are called ‘horses’” means that horses are called ‘horses’ will not do as a theory of meaning, we need instead some way to characterise what the meaning of a term such as ‘horse’ might be, which is to say what it is to understand a word such as ‘horse’. Given further that absolutely nobody can ever be familiar with every conceivable use of any word, since the sentences in which a word such as “gold” can figure form an infinite set that can be generated without end, just as the conceivable uses of the number “1” can, we are bound to conclude that a holistic theory of language entails the inescapable conclusion that nobody actually fully understands the meaning of any word, and therefore the meaning of any sentence, and therefore anything conveyed by a sentence or set of sentences. In short, nobody fully understands anything at all.

Nobody fully understands anything at all.

Someone might object that it is not reasonable, still less useful, to advocate or espouse a theory of language so demanding that it leads to the conclusion that no user of a language ever understands what their language means, for that would presumably require acceptance of the inference that no sentence uttered or written by any language-user is either ever fully understood by that user or by the user’s hearers and readers, and therefore that nobody ever fully understands what they mean by what they say/write or what anyone else means by what they say/write. But this inference is not an objection to the thesis; it is a confirmation of it, and indeed the most important possible conclusion to be drawn from it.

No sentence uttered or written by any language-user is either ever fully understood by that user or by the user’s hearers and readers, and therefore nobody ever fully understands what they mean by what they say/write or what anyone else means by what they say/write.

One of the welcome inferences to be drawn from this realisation, one of its happiest consequences, is that the apparent distinction between the solid reality of language and the apparent vagueness of art, music, and drama, is dissolved. An author is no more capable of knowing what she means by what she says or writes than an artist by what she paints. The inescapability of vagueness embraces art as well as language, but that vagueness, far from being a weakness, is for both art and language an irreducible strength. As Michael Polanyi once put it,

“For just as, owing to the ultimately tacit character of all knowledge, we remain ever unable to say all that we know, so also, in view of the tacit character of meaning, we can never quite know what is implied in what we say.”

Michael Polanyi, Personal Knowledge, p.95.

For similar reasons the vagueness of linguistic meaning also embraces – one might almost say “engulfs” – science, since while scientists might wish to contain the meanings of the words they use within a restricted language that admits the reinvention of “facts”, in practice this is a temporary illusion typical of a passing phase in scientific understanding, almost a deception, for a thorough-going quantum-mechanical understanding of science would force us again to embrace the universality of vagueness and the unknown.

Now here you might think that our dissenters can legitimately refer, and with more logical justification, to what is reasonable in a strong sense: is it reasonable to use language to affirm that nobody fully understands language? Does that not land us in a vicious circle, inasmuch as to claim to understand the sentence “Nobody fully understands anything at all” in full would simultaneously negate it? We seem inadvertently to have constructed a quasi-Gödelian self-referential sentence: if we are able to know it to be true, it must be false, since that would entail knowing at least one thing fully. But the inverse does not work: if we are able to know it to be false, that does not entail that we know it to be true, because we can know it to be false on the basis of some other sentence that we understand fully, even if not this one.

And in fact even the first inference fails within a vague theory of language, which might better or more unambiguously be framed as a theory of language that characterises all meaning in terms of vagueness: that the ability to use a language does not require or entail a full command or understanding of the meanings of words, but the skill of employing and deploying words whose meanings are inescapably vague in a way sufficient to effect communication with other language-users who are similarly placed.

The ability to use a language does not require or entail a full command or understanding of the meanings of words, but the skill of employing and deploying words whose meanings are inescapably vague in a way sufficient to effect communication with other language-users who are similarly placed.

To a question such as “Does anyone fully understand the meaning of a word such as ‘gold’?” we can then give a two-part answer: “No, nobody fully understands the meaning of a word such as ‘gold’; but fully understanding the meaning of a word is not necessary in order to use it effectively for someone sufficiently skilled in a particular language”.

And therefore it is no objection to the central conclusion “Nobody fully understands anything at all” that this entails that nobody understands what “Nobody fully understands anything at all” really means. At the very least we can assert that nobody fully understands what it means “to understand” anything, and so this sentence does no more, and no less, than point to the fact that everything is vague (including the fact that everything is vague).

Everything is vague including the fact that everything is vague.

To the question “Does that mean that you don’t really understand what you mean when you assert that everything is vague?” we can therefore give an affirmative answer, and to the recidivist quasi-Gödelian riposte “But that suggests that you at least understand that you don’t really understand” we reply “No, it means that we don’t really even understand what it is we lack when we lack understanding, or even what this assertion means”. And one can immediately see how a meta-recidivist quasi-Gödelian can carry on like this ad nauseam or ad infinitum, whichever comes first (a joke I owe to a lecturer in combinatorial theory at Oxford whose name temporarily escapes me).

It is worth digressing to point out that under this reconceptualisation of language in terms of vagueness there are, strictly speaking, no such things as facts, and therefore the secondary premise with which Wittgenstein began the Tractatus – that “The world is the totality of facts, not of things” (§1.1) – crumbles, not because we have reinstated the priority of “things”, which remain as completely beyond us as Wittgenstein supposed, but because we have dissolved the notion of “facts” within the seas of vagueness. Of course, Wittgenstein also repudiated the Tractatus later, and much of analytic philosophy was spawned by his Philosophical Investigations, which takes an entirely different position.

To cut short this Sisyphean (or perhaps it is a Tarskian) process we observe that in the sentence quoted above every word is vague: “everything”; “is”; “vague”; “including”; “the”; and “fact”.

But this excursus into the self-referential brings us back to an important realisation that cannot but apply to any attempt to construct a theory of meaning: that to state in language how language relates to what it means is to presuppose that we have solved the problem of how the explanatory terms we employ in such statements relate to the terms of the language, which is the philosophical equivalent of solving the problem of how you catch a lion “by catching two and letting one go”.

And we are brought back not to Michael Dummett, but to Wittgenstein: we cannot state in language how we use language or what language means; we can only show that we understand by using language in a community of like-language-users who deem our utterances sufficient for communication. We may not wish to reaffirm that “the meaning is the use”, but we certainly wish to convey the fact that we demonstrate our understanding of meaning by the skills we deploy in our use of a language; we convey our understanding of the concepts referred to by the words of the language by the way we deploy the skills required to communicate with them to the satisfaction of like-minded language-users. Dummett and others may want more, but it is not reasonable to want what one cannot possibly have.

[To be continued.]

Meditation & Prayer

Many people meditate, and many people pray. Quite what their meditations and prayers consist in is not something we often know, and perhaps something we have no need to know, but both activities suffer from a problem in that both tend to be associated with a particular type of activity that does not appeal to many people who would, if they understood them better, benefit from either or both.

There are two very common types of prayer that will not on this occasion concern us: shopping-lists of requests presented to an imagined deity that are sometimes called intercessions, for example that we could have a new watch for Christmas or, far less frivolously, that Aunt Betty will get well; and heartfelt cries of despair generated by tragedy and hopelessness in which as a last desperate attempt to salvage something from our existence we cast our spirits out into and upon whatever aspects of the universe or our version of god are ready to receive them.

There are also versions of meditation that need not on this occasion concern us, and those consist in the kinds of activities that we associate with Eastern mystics who are attempting to connect with some aspect of the universe or expel all attempts at such connection entirely.

My reason for excluding these versions of prayer and meditation is to emphasise that none of these kinds of activity need play any part in either. In fact, to try to render both more accessible, less alien, less “religious” or “spiritual”, and by that token more useful and productive, it is important to emphasise not the unreality they involve, but the reality. In fact, the first principle of prayer and meditation is that they be grounded in reality.

The first principle of prayer and meditation is that they be grounded in reality.

Even to say this transforms our appreciation of what prayer and meditation might be. Rather than reaching for something other-worldly, the principle encourages us to look instead at the ultra-worldly, the nature and depth of reality. And because for many of us prayer and meditation are conceived in terms of an escape from reality, sources of solace in times of trouble, need or distress, this principle will cause us a lot of trouble unless we add to it a second principle that is just as important: in both prayer and meditation it is essential that we be completely honest, insofar as it lies within our power.

In both prayer and meditation it is essential that we be completely honest.

This raises a question: honest about what and with whom? Honest with oneself and, where possible, with others; honest about whatever we are thinking and doing.

It may seem an extreme claim, but one of the obstacles to human well-being that prayer and meditation can remove is our habit of lying to ourselves about things or allowing ourselves to be deceived about things. Which brings me to the revelation that brought about this short essay.

I was reading about someone who claims to meditate as well as observe the ceremonial and ritual requirements of his religion, and for some reason it triggered the thought that, although I would hitherto have said that I never meditate and never pray, I do in fact meditate (although I never pray). But the meditation I engage in bears absolutely no resemblance to anything that could be associated with an Eastern mystic sitting cross-legged in a cloud of incense chanting “Om”. It has instead a simple characteristic that cleanses, regenerates, inspires, calms, reorganises and remotivates me: it consists of concentrated, honest thought or, as I shall call it in a moment when I have explained what I mean, indwelling.

Meditation as concentrated, honest thought cleanses, regenerates, inspires, calms, reorganises and remotivates.

To try to make this powerful and perhaps to some unlikely claim more intelligible and attractive, I should relate it to other things I have written about writing. People who are natural writers love nothing more than a blank sheet of paper and a pen (and yes, I do mean a pen, not a computer, and preferably in my case a fountain pen with a good gold nib and black ink). And contrary to some preconceptions – I may here be speaking only for myself, but then I am scarcely qualified to speak for anyone else – what makes this blank sheet of paper so gloriously attractive is that when I start to write I have absolutely no idea what I will have written by the time I get to the bottom of it. It represents not a receptacle into which I will pour what I have already thought, but a space in which I will both discover what I already think and create new things to think that I have never thought before. As Wittgenstein once put it, “The first time I knew I thought that was when I heard myself saying it” (I think it’s in the Philosophical Investigations somewhere), except in my case it isn’t just when I hear myself saying it (although that happens as well), but when I find myself writing it.

As some of my readers may remember me saying before, when I was a student I always knew I had written a good exam paper or essay when I emerged from the process knowing more about the topic than I knew when I started. It is the same today: when I finish this essay I will know more about meditation and prayer than I did when I started. That is the joy of writing (and I am sure also of painting, composing, performing, or indeed any kind of thinking); that is the reward and the compensation for all the painstaking effort that is required to do any of these things well: that when we do them we never merely repeat or record or recount; we always, if the process is worth doing at all, create and discover.

I always know I have written a good essay when I emerge from the process knowing more about the topic than I knew when I started.

This may seem impossibly ambitious, and many people may read it and think that it couldn’t possibly be true of them and probably isn’t true of me. They would be wrong on both counts. This experience is what meditation and prayer realise: that we emerge from both more than we were when we started, and if we don’t then we’ve not really been meditating or praying at all. (It is irrelevant whether we conceive of some external being or entity as being part of this process, since neither is necessary for the process to be possible. Indeed, it may be a useful distinction between meditation and prayer to suggest that the former presupposes and envisages no such external beings or entities whereas the latter does. In that sense, as I have said, I meditate but I never pray.)

Concentrated, honest thought need not and ideally should not be entirely conscious thought. For thought to be entirely conscious, however concentrated, would be for it to engage only a fraction of our capacity for thinking. Almost everyone has had the experience of thinking about a problem, failing to solve it, then suddenly having the solution pop into mind long after we ceased to apply any conscious effort to it at all. Our brains continue to process problems beneath and behind consciousness (I want to avoid using the words subconscious and unconscious because of their unhelpful psychoanalytic associations), and are often at their most powerful when we leave them to their own devices. (Indeed, when a problem doesn’t submit to conscious analysis, we generally have no means that I can make sense of to “think harder”: what on earth would one do in order to “think harder”?) By abandoning conscious thought we free up, or so it seems, powers in our brains over which we have little control.

I say “little control” because we do have some: what we are interested in and attend to will tend to be what we are good at thinking about. If I spend my time thinking about philosophy and reading and writing philosophy, I am likely to have interesting philosophical thoughts because my brain has the material it needs to process philosophical ideas “in the background”; I wouldn’t expect to have profound thoughts about nuclear physics or anthropology (although the cross-over can also sometimes take us by surprise). So what we attend to does in some sense direct the kinds of thoughts we will later have by feeding our background thinking with material to work on.

Attending to something with fascination, concentration and determination is what I mean – borrowing a term used by one of my earliest gurus, Michael Polanyi – by indwelling: we dwell in, absorb ourselves in, immerse ourselves in something, some activity or topic or skill, and in so doing we provide the background processing of which our brains are capable with the material necessary if we are to be creative and to discover new things. To sit in front of a blank sheet of paper with no resource other than a fountain pen (or, if you insist, a word-processor) is to open the gates through which the background processes that ensue from our attending to things of interest, our indwelling, can flow.

And this is of course a reason why the dilettante flitting of modern minds from one topic to another on social media and through smartphones, a flitting that is the antithesis of indwelling, of concentrated, honest attending to a topic, sometimes for hours and days and months and years, is a self-defeating activity as far as the discovery of creative meditation is concerned: it isn’t that people are incapable of creative thought, but that they do not feed their minds with the requisite raw material out of which brains can produce creative thought. Nobody would attempt to run a marathon without attending to proper nourishment; nobody should attempt to meditate without attending to a rich diet of nourishment for the mind.

Nobody should attempt to meditate without attending to a rich diet of nourishment for the mind.

So we should all ask ourselves whether we are giving ourselves a chance to develop and grow mentally and spiritually if we are not giving any attention to the raw material we feed into our brains. Depression and despair can be self-induced if we only ever give attention to what is negative and destructive, violent and cruel, dishonest and fraudulent, and we need to be very careful if our entire diet of intellectual material comes from social media, for much of what we experience there is as toxic to our minds as physical poison would be to our bodies.

Much of what we experience in social media is as toxic to our minds as physical poison would be to our bodies.

We put a lot of effort into avoiding food-poisoning; we should put at least as much into avoiding mind-poisoning.

Stevie Smith once wrote a poem called “Analysand” about someone who is morbidly preoccupied with “their own mental stink”, and someone sent it to me after a particularly toxic conversation in which I had been unremittingly negative about everything under the sun, especially myself. This was almost forty years ago, but I still remember the last two lines:

Would you expect to find him in the pink?

Who’s solely occupied with his own mental stink?

Stevie Smith, Analysand

We live in an age where preoccupation with the world’s stink is becoming a source of mental diseases, not merely a reflection of them. It is time we paid more attention to our mental diet, attended more seriously and persistently to what we feed our minds, and learned that dwelling in things of worth and import with concentration and honesty is a prerequisite of meditation or prayer, and therefore of creativity, discovery and human fulfilment.

Of course, this raises more questions, not least the question of how we are to decide what is worth attending to, what is worth studying, knowing, practising, learning and dwelling in. But that is a topic for another day.

On Lies and Liars

Avid readers of my blogs will probably remember that one of my most-often-used aphorisms is a saying that I regularly and faithfully attribute to Ludwig Wittgenstein (LW), even though my best endeavours have failed to trace its origins:

“All the really important decisions tend to be taken right at the very beginning, when we hardly realise that we have begun.”

Sometimes I imagine that Wittgenstein was not the person from whom I first learned this principle, but if so I am unable to trace an alternative author. Perhaps, then, the attribution to anyone other than myself is mistaken, and I am in reality the author of my own aphorism? What would follow? That the attribution to LW is a lie? A mistake, perhaps, but not a lie; not something deliberately manufactured to deceive; on the contrary, something intended to dispel any supposition that the insight is attributable to me, even if it is not attributable to LW either. I have no desire to earn undeserved credit for something that I originally acquired from someone else.

Elsewhere I have quoted the same sentiment somewhat differently.

“It is often the case that the really important decisions are made right at the beginning, when we hardly know we have begun.”

Either way, the sentiment is the same.

Whatever the origins of this mantra, and whoever originally said or penned it, it is incontrovertible that it has now been said. It may, just conceivably, be the case that nobody has ever said it before, in which case it is indeed attributable to me; I cannot say. All I can say is that I believe that I learnt it from LW even if I have somehow mangled the true attribution, or woven together many other skeins of thought to produce the idea myself.

What of it? To be mistaken about an attribution is not the same as to lie about it. We might lie about it in order to try to raise its authority; we might be mistaken merely because we have forgotten or misplaced the original. So let us start again.

  1. Someone who lies demonstrates that she has not understood the world.
    1. To the response “On the contrary: she may demonstrate that she understands the world better than others” we have no reply. Someone who does not understand that to lie is to misunderstand the world does not understand the world well enough to understand an explanation of the same claim.
  2. To lie is to injure oneself.
    1. To the response “On the contrary: it may be to injure another” we have no reply. Someone who does not understand that to lie is to injure oneself cannot be protected from such injury by means of explanation.
  3. To lie well we need to become a lie ourselves in order that we can believe the lies we tell and make them sound true.
    1. To the response “On the contrary: someone may become a very accomplished liar while knowing perfectly well that everything she says is a lie” we have no reply. Someone who believes that we can lie consistently and persuasively while not ourselves being a lie has not understood lying, or how difficult it is to do extremely well, and so cannot understand an explanation of what it takes to be a liar.
  4. The most accomplished liars are those who believe their lies absolutely.
    1. To the response “On the contrary: there are accomplished liars who know very well that they are lying and do so specifically to deceive us while remaining themselves in possession of the truth” we have no reply. Someone who lies to great effect must believe their own lie, or they could not persuade other rational persons that it was true.

When we say that someone who lies, and especially someone who lies effectively and skilfully has “not understood the world”, we are not suggesting that their lying does not in some measure advance what they take to be their cause. We are rather saying that their lying can only further a cause that is itself mistaken, a result of a misunderstanding of the world. The alternative would be to allow that lying could produce a desirable effect that is based upon a proper understanding of the world, which would be to make the world itself a lie.

Many, of course, have argued over the centuries that the world is indeed a lie. This concept lies at the heart of a religious notion of the imperfection of the world brought about by corruption. But such a notion is clearly nonsensical, whatever its religious credentials: the nature of the world cannot be contaminated by human corruption, even if the nature of human society and our dealings with the world can. We are reminded of what Richard Feynman wrote as the concluding sentence of his part of the report on the Challenger disaster:

“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”

There is an important distinction here between something that is used to ill effect and something that is corrupted. If I administer cyanide to you and kill you, I have put the cyanide to ill effect, but I have not corrupted the cyanide, which does what it does in the way it has always done it. The notion of a corrupt world or a corrupt universe makes no sense scientifically: things do what they do, acting always according to the laws that have emerged in nature. (The word “nature” is also problematic, but we will park that concern for now.)

A natural inference is that, since human beings are products of nature, human beings are similarly incapable of corruption: they simply do what they do; if their doing includes lying, then lying is also no more than the actualisation of a possibility inherent in nature as it has evolved to produce human beings. On such an analysis, for which we should have considerable sympathy because it obviates the need for the vocabulary of sin and evil, corruption and salvation, our response to lying should be the same as our response to flood, fire and pestilence: they are the consequences of nature doing what nature does; our task is to control their impact by implementing those skills and powers that have accrued to us through our own development as we in our turn do what nature allows us to do to curb their deleterious influences.

Imagine, then, that we could somehow eliminate the vocabulary of good and bad, corruption and saintliness, and respond to all human actions exactly as we would, and should, to natural events, for that is what they are. Then, when something like 9/11 or the Las Vegas shootings occurs, instead of reaching for the vocabulary of sin and evil, which achieves absolutely nothing, we should reach instead for the armour of analysis and correction: something has happened that we and people like us deem undesirable and contrary to human well-being; we should take steps to ensure that nothing like it happens again.

The vocabulary of evil and corruption only serves to inhibit implementation of appropriate remedial strategies. It suggests that the origins of the behaviour are to be sought and found solely in the mind of the culprit and not more widely in the movements of ideas and values in the society that created him. It suggests, in other words, that nature can be fooled by a suitable application of human will accompanied by something called evil intent. But nature cannot be fooled: bullets will injure and kill people because it is their nature to do so; take away the bullets and nobody can be killed by bullets.

As someone put it on a news programme yesterday, the NRA believes that the solution to a bad man with a gun is a good man with a gun. The mistaken and misleading vocabulary of “good” and “bad” is rehearsed. That only means that we are farther away from understanding that a culture that regards owning and using guns as in some sense “cool” creates a climate in which the notion of killing 58 strangers and injuring hundreds more is even thinkable. And before we are too quick to point the finger, we should ask ourselves how much of our entertainment, particularly in film, consists in the glorification of violence and guns. We are brain-washed into believing the NRA lie: that the solution to a bad man with a gun is a good man with a gun. This is the Schwarzenegger logic, the Die Hard logic: be tougher and bigger, tote a bigger gun, and you can subdue evil. But the enemy is not evil: the enemy is the vocabulary of good and evil that separates human conduct from natural processes and pretends that the way to deal with bad people is to point fingers at them and call them “evil”. We might as well try to divert a hurricane by praying, calling it names, pointing fans or – heaven forbid – shooting at it. Then again, we could apply some of our enormous economic resources to building houses for people that can withstand hurricanes. It’s actually not that hard. But instead we prefer to speak of bad men and storms using the language of evil, forgetting that nature is not evil and that nature cannot be fooled.

It is of course cheaper both economically and politically to label certain people “evil” than to address the social problems and attitudes of mind that make gun-carrying societies think guns are cool. “This was an evil act”, “an act of pure evil”: such phrases make it seem like the fault of some malignant force, some Devil or Satan, a consequence of some original sin committed so long ago that nobody can now remember when and nobody today can bear responsibility for it. They may even be suggesting that nature herself is evil or capable of evil. But nature is incapable of evil, and nature cannot be fooled.

It is easy to try to find counter-examples in the many things we experience as great evils, tragedies, and sources of suffering: cancer; plague; some viruses; Alzheimer’s Disease; even death. But these are not examples of nature being evil: they are just examples of conflicting trajectories in which the success of one process causes the failure of another. Reaching for the language of good and evil to explain or accuse diseases achieves nothing: viruses do what they do and degenerative diseases arise from natural wear and tear in exactly the same way that floods devastate communities and hurricanes demolish houses, by virtue of nature doing what nature does (assisted or not by other things like Climate Change and unhealthy human lifestyles, because nature cannot be fooled).

So how does all this connect with our title, “On Lies and Liars”? Imagine that our first epithet, that someone who lies does not understand the world, were to be applied not to an individual, but to a whole society, perhaps a whole species. What happens when an entire species learns and takes as given what is in fact a lie, comes to believe the lie, and employs the lie in its entire analysis of the world?

What happens when an entire species learns and takes as given what is in fact a lie, comes to believe the lie, and employs the lie in its entire analysis of the world?

The lie we have in mind is the claim that there exists something called “evil”. There is no denying that there are unspeakable, despicable and utterly reprehensible acts, and that they are all undertaken by human beings. If all we mean by “evil” is this, then there are evil acts. But that is not all we mean: to invoke the language of good and evil is to appeal to an ancient and cosmic dualism in which good and evil originate in opposite poles of a metaphysical universe that lies beyond our world. But there are no such cosmic powers: there is only nature and what nature does; and nature cannot be fooled. “Evil” acts are human acts perpetrated by human beings who are the product of societies that are themselves the product of natural processes that cannot lie and will not be fooled. They are not “evil” because there is no force for and source of evil other than our own deployment of natural processes that just do what they do.

To invoke the language of good and evil is to appeal to an ancient and cosmic dualism in which good and evil originate in opposite poles of a metaphysical universe that lies beyond our world.

Invoking the language of good and evil is the equivalent of throwing up one’s hands in horror and disclaiming all responsibility: the source of this great tragedy lay outside the earth and beyond human control; therefore there is nothing to be done and nobody we should blame. “Evil” becomes a catch-all that absolves us from all responsibility. As such it is a lie into which almost all of us have at some time bought, and the benefits of which we have all at some time sought to enjoy: “I don’t know what came over me; it was as if I was possessed”.

So what does happen when entire societies and perhaps entire species come to believe and adopt the language of a lie? What happens is that they become blind to the causes of their own misfortunes, absolve themselves from responsibility for things that are entirely of their own making, and blame cosmic forces for things whose origins lie at home because the lie they so completely embrace leads them to seek the causes of all these things in entirely the wrong place. Using the language of evil we seek to exempt ourselves from responsibility for anything and everything that is too difficult, too inconvenient, too politically costly, or too embarrassing to address directly. And so we are all in our own ways consummate liars who have learned how most effectively to lie to ourselves by allowing ourselves to become a lie.

Fundamental lies, that is, lies which permeate societies and find themselves endorsed by most of their members, lie so deep in our psyches that we find them almost impossible to detect and identify. The suggestion here is that the notion of “evil” is such a lie, that we each inherit it with our culture and our education, especially our religious education, and that until we address that problem many other things in society will be impossible to rectify.

“All the really important decisions tend to be taken right at the very beginning, when we hardly realise that we have begun.”


Parents and Schools

My three months in China earlier this year highlighted in the most vivid way imaginable a dilemma that schools face when trying to manage the expectations of parents. This is not, let me hasten to add, a uniquely Chinese problem: it occurs throughout the world; but it is a problem that seems to manifest itself more starkly in China than in any other country I have worked in.

Put simply, the problem is this. All parents want the best for their children, but what is that? Many parents, perhaps most, are genuinely uncertain about what that “best” is, and those that are certain are almost as certainly wrong.

Unfortunately, once this has been said, we have to draw a distinction between parents who want the best for their children for the sake of the children, and those who want the best for their children for the sake of the parents. Of course, all parents would deny that they come into the latter category, but many do: their motivation for pushing their children, manipulating their children’s lives, and generally denying their children much or any autonomy to decide what they want to be or do, is either that they want somehow to live in their children’s reflected glory or that they want to avoid the social opprobrium that they imagine will arise from being thought in some sense neglectful or “bad” parents. So they rush their children from activity to activity, filling every second of their lives with something “productive” and “improving”, and everyone – parents and children alike – ends up exhausted and unfulfilled.

It is regrettable to have to start this important topic with such a negative observation, but it is unavoidable because so many of the decisions parents make on behalf of their children stem directly from where they stand on this dichotomy. Take, for example, the question of how clever or intelligent a child is and the associated acknowledgement that society misguidedly bestows on children whom it deems “successful”. Parents who want to live off reflected glory become obsessed with academic and sporting achievement not because it is in their child’s interests, but because it will bring them as parents some kind of fame or notoriety; parents who genuinely want the best for their children don’t care how successful they are as measured by social parameters, and choose to measure their achievements and progress relative to what they deem to be their children’s best interests. Parents whose children struggle academically or at sport or at music or, indeed, at anything, are in the former case more worried by the supposed shame it will bring on them than by any consequences it may have for their children (because it usually doesn’t have any if it is approached properly, which is to say by not treating it as something of any great consequence).

This brings us directly to the question of what constitutes a good education, a good school, college or university for a particular child or young adult: is it one that has already achieved social status and so is seen as an aspiration for no better reason than that parents want their children to be seen to have gained admission to somewhere that constitutes a “top” school (whatever that means), and so brings glory to the parents for having brought such a talented child into the world? Or is a “top” university or school one where a child thrives, “finds” himself or herself, and achieves a confident, self-assured manner and the associated skills needed to deal with the vicissitudes of life?

The best school or university for any child is one where they thrive, find themselves, acquire the knowledge and skills and develop the personality they will need to live fulfilled lives.

And it is important to remember that this is not “the” school or university where they can achieve these things: there are many, and obsessing about whether you or they have chosen absolutely the best possible one is a waste of emotional energy.

So what has all this to do with China in particular? Chinese schools frequently market themselves on the coat-tails of individual students who have done particularly well, for example by getting straight A* grades at IGCSE or A level, by having done particularly well in the notorious gaokao examination, or by obtaining a place at Beijing University or Oxford, Cambridge, Yale or Harvard (amongst a stack of others). That these students almost certainly were blessed – if indeed it is a blessing – with a set of parents and early experiences and, in particular, genes that wired their brains or bodies up in a way that led to such success, and that the school had precious little to do with it, is forgotten in this mad rush for customers. There is an almost laughable belief in the fallacy of post hoc ergo propter hoc (after this, therefore because of this), which in the language of education becomes “because X went to school Y and got into Cambridge, you should send your child to school Y if you want your child to go to a really great university”.

And what makes this so painful and irresponsible is that there is in these schools no attempt to manage parental expectations, no attempt to counsel them and their offspring into adopting realistic ambitions, no subtlety at all in the blatant exploitation of the obvious lie that a school plays a crucial and unique part in bringing a child to achieve whatever success they deem desirable.

But let’s be clear: this is not to argue that what schools contribute is negligible or unimportant; it is not to argue that one school is exactly as good as any other; it is to argue that the raw material must be present in any given student for the kind of success that is being promised to be achievable. And to promise this kind of success to parents of children who do not have that raw material is to invite them to embrace ambitions that cannot be fulfilled and can therefore only bring them to disappointment and frustration; in the worst and most extreme cases, it can bring the student to suicide.

But every use of the word “success” in all this demands to be bracketed in scare-quotes, because the most damaging assumption of all is that we know what success is; even more damaging is the false claim that we know how to achieve it.

This is an example of a general type of argument that human beings seem to find very difficult to understand, still harder to appreciate and embrace: that effects have multiple causes, and that to single out one of the influences responsible for a particular child’s success is to pick just one cause from a basket unique to that child’s life and probably unreproducible in any other child’s life. So yes, schools play their part, but when they try to lay claim to the lion’s share of responsibility for a particular success, they claim too much.

What is particularly interesting about this pattern of behaviour in China is that the way schools market themselves is seldom challenged by parents on this basis. Parents will want to choose “the best” school on the basis of the statistics of that school’s success in examinations, university entrance and, occasionally and rarely, something like music or sport; they will scrutinise exhaustively the qualifications and track-records of teachers; they will examine the scale and safety of facilities; and they will ask endless questions about what a school does to give extra lessons and help to a gifted student in order to achieve this kind of “success”. But very few parents seem to challenge the underlying assumptions implicit in this whole sorry business: that schools are capable of making up for deficiencies in a particular child (because no parent will acknowledge, at least openly, that their child has such deficiencies); that the basic assumptions about what constitutes success are justified (that going to a famous school or university matters more than going to one where there is a good fit between the school, university and student); that anyone really knows the combination of factors that make one student more “successful” than another (because not only are the known influences vast in number; the unknown influences are almost certainly even greater in number); and that we are in a position to say what success is (because we all think the future will be like our own past, having no other basis on which to imagine it).

The deficiencies of human beings in understanding multi-causal systems lead almost inexorably to destructive behaviour: when we cannot see a direct link between some activity A and some desirable result R, ambitious parents tend to discourage or forbid A as “a waste of time”. But activities that are not directly associated with desired results may play a huge part in helping a person to achieve those results; it is just that we are unable to find and trace the link between the result and the activity. In general, we simply don’t know whether something is a “waste of time” – if, indeed, anything is. There is considerable merit in the argument that anything a person finds of compelling interest, however frivolous and non-productive it may seem to a bystander, can never be a “waste of time”. (Although we should specifically exclude addictive behaviour from this claim.)

Some kind of bizarre, socio-politically reinforced work-ethic leads many of us to believe that working hard at something we hate doing is somehow more meritorious and more likely to be productive than enjoying doing something everyone thinks pointless and frivolous; but this is almost certainly a social prejudice that does incalculable harm, because it discourages us from playing, and from appreciating the importance of play. And nothing is more important in the establishment and maintenance of a creative, fulfilling life than play.

Nothing is more important in the establishment and maintenance of a creative life than play.

Why society might endorse this prejudiced and damaging work-ethic is easy to see: it is a direct consequence of an industrialised view of human labour in which the world works on the basis of a model of social organisation in which there are only a small number of creative leaders who control and exploit a vast number of worker-drones. And that social model created an educational system to supply those drones and vigorously opposed – usually as “left-wing” or “socialist” – any attempt by professional educators to change it. But of course this industrial assumption is soon to be demolished by the advent of intelligent machines, robots and androids, and the need for a class of undereducated worker-drones will diminish gradually to nothing so that the only purpose of education will be to equip people for fulfilled lives of leisure and creativity.

There is, however, another danger. If we were to persuade parents that a more expansive and general set of experiences plays more of a role in the accomplishments of their children than we currently allow, immediately there would be a rush towards the industrialisation and institutionalisation of variety, of play, and our obsessional drive towards doing anything and everything that might be accepted socially to contribute towards success would lead us to invent a new drudgery: the drudgery of enforced play. And that, of course, really is an oxymoron: play must be free.

But such ambitious parents as we managed to persuade of this would still in all probability ask us for some reassurances that our new methods would be successful, that their planned negligence, their single-minded refusal to try to control and manipulate their children’s lives especially in play, other than to encourage it, would produce at least as good a set of outcomes as the existing system. Could we give such an assurance, let alone a guarantee? Should we even try?

The problem here, although closely associated with our persistent demand that the word “success” be enclosed in scare-quotes, is that the request for such a guarantee, or even assurance, is based upon the mistaken presupposition that we know what success looks like. And we don’t: we don’t know what success looks like for a particular child; we don’t know what success looks like for a particular life; we don’t know what success looks like for society or the world; and most importantly of all, we don’t and can’t yet have the vocabulary in which to describe the kinds of success we have not even imagined.

In other words, parents are not only asking for assurances and guarantees that cannot be given because of the serendipitous relationship between multi-causal systems and their outcomes (whether deemed successes, failures, good or bad); they are also asking for assurances that whatever system we employ will deliver back to them a future child or young adult who conforms to their existing expectations of what a well-balanced, mature, successful adult looks like. And neither the concepts nor the vocabulary needed to describe such a future person exists. How much less, then, do the developmental systems exist that can create them?

So what is the message schools should be delivering to the parents who want the best for their children? It is something like this:

We don’t know how any particular child will develop under the countless influences that will fall upon them; neither do we know the kind of world in which they will spend the majority of their lives, or the kind of personality that will best adapt them to that world, or the knowledge and skills they will need to contribute to the creation of that world’s successors. So we are all on a journey of discovery together, and this institution will need to develop with its students and with the world in which we all live, constantly exploring new possibilities and seizing new opportunities. We do not believe that it is possible to offer more, but we are sure that it is not acceptable to offer less.

How to Take Notes

We all find ourselves in situations at school, in college or at work where we need to take notes of a meeting, a lecture or a conversation. But we are seldom given any advice on how to perform this most basic and essential of tasks. Here are some ideas.

Whether it is a blank pad of paper, a smartphone, a laptop or a tablet, we are all sooner or later going to be faced with the need to make notes. The temptation is to try to write down everything; some of us trust our memories so much that we write down nothing. What is best?

The answer, of course, is that note-taking is a very personal matter: it depends on the kind of memory you have, the kinds of things that help you to recall essential content, and the purpose of the notes. One thing is certain: nothing is more disastrous than trying to write down everything (unless, perhaps, it is to write down nothing).

For some people, taking photographs of PowerPoint slides is all the note-taking they imagine they need; for others, recording lectures is the method of choice; some will jot down occasional notes, or just, for example, the URLs of recommended websites. But what is the purpose of note-taking?

The first and golden rule is that the best notes are those you never read again. This may be surprising: surely the whole point of taking notes is so that you can refer back to them? In fact, the opposite is true: the best notes are such an aid to learning and memory at the time you take them that they make it unnecessary to refer back to them. This means that note-taking must involve the active processing of the information being presented to you, and that the notes should be your personal reflections on what is said, not something you intend to look back at later. Life, as they say, is short: who has time not only to go to lectures but to replay the recordings afterwards? Far better to absorb what is of value at the time, think about it at the time in active learning, write down only the most important take-aways in your own words as you have processed them, and move on to the next learning experience. Taking photographs of slides, collecting hand-outs, and squirrelling away pages of verbatim notes for a time when you have the opportunity to read them is not active learning, and can easily create in you the false impression that you have learned something just because you have a piece of paper about it in a filing-cabinet or a photo on your phone.

Of course, learning how to do this takes time and is not something we would expect of young children or students starting out on their academic careers; but acquiring the skill of effective learning that note-taking enhances and supports, and getting rid of the notion that notes are things we write now so that we can learn from them later, will make a huge difference to the success of your academic career.

In situations where it isn’t possible to take notes at the time – during an interview or a live conversation, for example – a similar principle will greatly enhance the effectiveness of whatever you do choose to note down: make what you write a learning-process; don’t attempt to record only what was said (with one exception – see below), but write down the impression that what was said made on you; use the notes to process the information, the emotional impact, and the significance of the points that were made.

The only exception to the avoidance of verbatim note-taking is when something is said of particular significance that could become important later. Then, try to write down exactly what was said. Courts of law, for example, take what are called “contemporaneous notes” seriously as supporting evidence in litigation. If you suspect that your conversation may become important in such circumstances – or, if you are at an interview, in later employment negotiations – make sure your notes are accurate and pertinent.

And remember that the clandestine recording of conversations using such things as smartphones may constitute an offence under UK law (which applies to telephone conversations as well).

One last tangential, throw-away but important point: if you are negotiating a contract of employment, remember that most contracts contain a clause that excludes from all consideration any undertakings given in any manner whatsoever that are not specifically stated in the final version of the contract as signed and counter-signed. What someone may say, promise or hint at during negotiations has no legal force, even if it is stated in writing in a letter or an email, unless it is included in the contract. So if something has been said and agreed that matters to you, make sure that it is written into your contract.