On Lies and Liars

Avid readers of my blogs will probably remember that one of my most-often-used aphorisms is a saying that I regularly and faithfully attribute to Ludwig Wittgenstein (LW), even though my best endeavours have failed to trace its origins:

“All the really important decisions tend to be taken right at the very beginning, when we hardly realise that we have begun.”

Sometimes I imagine that Wittgenstein was not the person from whom I first learned this principle, but if so I am unable to trace an alternative author. Perhaps, then, the attribution to anyone other than myself is mistaken, and I am in reality the author of my own aphorism? What would follow? That the attribution to LW is a lie? A mistake, perhaps, but not a lie; not something deliberately manufactured to deceive; on the contrary, something intended to dispel any supposition that the insight is attributable to me, even if it is not attributable to LW either. I have no desire to earn undeserved credit for something that I originally acquired from someone else.

Elsewhere I have quoted the same sentiment somewhat differently.

“It is often the case that the really important decisions are made right at the beginning, when we hardly know we have begun.”

Either way, the sentiment is the same.

Whatever the origins of this mantra, and whoever originally said or penned it, it is incontrovertible that it has now been said. It may, just conceivably, be the case that nobody has ever said it before, in which case it is indeed attributable to me; I cannot say. All I can say is that I believe that I learnt it from LW even if I have somehow mangled the true attribution, or woven together many other skeins of thought to produce the idea myself.

What of it? To be mistaken about an attribution is not the same as to lie about it. We might lie about it in order to try to raise its authority; we might be mistaken merely because we have forgotten or misplaced the original. So let us start again.

  1. Someone who lies demonstrates that she has not understood the world.
    1. To the response “On the contrary: she may demonstrate that she understands the world better than others” we have no reply. Someone who does not understand that to lie is to misunderstand the world does not understand the world well enough to understand an explanation of the same claim.
  2. To lie is to injure oneself.
    1. To the response “On the contrary: it may be to injure another” we have no reply. Someone who does not understand that to lie is to injure oneself cannot be protected from such injury by means of explanation.
  3. To lie well we need to become a lie ourselves in order that we can believe the lies we tell and make them sound true.
    1. To the response “On the contrary: someone may become a very accomplished liar while knowing perfectly well that everything she says is a lie” we have no reply. Someone who believes that we can lie consistently and persuasively while not being ourselves a lie does not understand what lying is or how difficult it is to do extremely well, and so cannot understand an explanation of what it takes to be a liar.
  4. The most accomplished liars are those who believe their lies absolutely.
    1. To the response “On the contrary: there are accomplished liars who know very well that they are lying and do so specifically to deceive us while remaining themselves in possession of the truth” we have no response. Someone who lies to great effect must believe their own lie or they could not persuade other rational persons that it was true.

When we say that someone who lies, and especially someone who lies effectively and skilfully, has “not understood the world”, we are not suggesting that their lying does not in some measure advance what they take to be their cause. We are rather saying that their lying can only further a cause that is itself mistaken, a result of a misunderstanding of the world. The alternative would be to allow that lying could produce a desirable effect that is based upon a proper understanding of the world, which would be to make the world itself a lie.

Many, of course, have argued over the centuries that the world is indeed a lie. This concept lies at the heart of a religious notion of the imperfection of the world brought about by corruption. But such a notion is clearly nonsensical, whatever its religious credentials: the nature of the world cannot be contaminated by human corruption, even if the nature of human society and our dealings with the world can. We are reminded of what Richard Feynman wrote as the concluding sentence of his part of the report on the Challenger disaster:

“For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”

https://history.nasa.gov/rogersrep/v2appf.htm

There is an important distinction here between something that is used to ill effect and something that is corrupted. If I administer cyanide to you and kill you, I have put the cyanide to ill effect, but I have not corrupted the cyanide, which does what it does in the way it has always done it. The notion of a corrupt world or a corrupt universe makes no sense scientifically: things do what they do, acting always according to the laws that have emerged in nature. (The word “nature” is also problematic, but we will park that concern for now.)

A natural inference is that, since human beings are products of nature, human beings are similarly incapable of corruption: they simply do what they do; if their doing includes lying, then lying is also no more than the actualisation of a possibility inherent in nature as it has evolved to produce human beings. On such an analysis, for which we should have considerable sympathy because it obviates the need for the vocabulary of sin and evil, corruption and salvation, our response to lying should be the same as our response to flood, fire and pestilence: they are the consequences of nature doing what nature does; our task is to control their impact by implementing those skills and powers that have accrued to us through our own development as we in our turn do what nature allows us to do to curb their deleterious influences.

Imagine, then, that we could somehow eliminate the vocabulary of good and bad, corruption and saintliness, and respond to all human actions as we would to natural events, exactly as we should, for they are natural events. Then, when something like 9/11 or the Las Vegas shootings occurs, instead of reaching for the vocabulary of sin and evil, which achieves absolutely nothing, we should reach instead for the armour of analysis and correction: something has happened that we and people like us deem undesirable and contrary to human well-being; we should take steps to ensure that nothing like it happens again.

The vocabulary of evil and corruption only serves to inhibit implementation of appropriate remedial strategies. It suggests that the origins of the behaviour are to be sought and found solely in the mind of the culprit and not more widely in the movements of ideas and values in the society that created him. It suggests, in other words, that nature can be fooled by a suitable application of human will accompanied by something called evil intent. But nature cannot be fooled: bullets will injure and kill people because it is their nature to do so; take away the bullets and nobody can be killed by bullets.

As someone put it on a news programme yesterday, the NRA believes that the solution to a bad man with a gun is a good man with a gun. The mistaken and misleading vocabulary of “good” and “bad” is rehearsed. That only means that we are farther away from understanding that a culture that regards owning and using guns as in some sense “cool” creates a climate in which the notion of killing 58 strangers and injuring hundreds more is even thinkable. And before we are too quick to point the finger, we should ask ourselves how much of our entertainment, particularly in film, consists in the glorification of violence and guns. We are brain-washed into believing the NRA lie: that the solution to a bad man with a gun is a good man with a gun. This is the Schwarzenegger logic, the Die Hard logic: be tougher and bigger, tote a bigger gun, and you can subdue evil. But the enemy is not evil: the enemy is the vocabulary of good and evil that separates human conduct from natural processes and pretends that the way to deal with bad people is to point fingers at them and call them “evil”. We might as well try to divert a hurricane by praying, calling it names, pointing fans or – heaven forbid – shooting at it. Then again, we could apply some of our enormous economic resources to building houses that can withstand hurricanes. It’s actually not that hard. But instead we prefer to speak of bad men and storms using the language of evil, forgetting that nature is not evil and that nature cannot be fooled.

It is of course cheaper both economically and politically to label certain people “evil” than to address the social problems and attitudes of mind that make gun-carrying societies think guns are cool. Phrases like “This was an evil act” and “an act of pure evil” make it seem like the fault of some malignant force, some Devil or Satan, a consequence of some original sin committed so long ago that nobody can now remember when and nobody today can bear responsibility for it. They may even suggest that nature herself is evil or capable of evil. But nature is incapable of evil, and nature cannot be fooled.

It is easy to try to find counter-examples in the many things we experience as great evils, tragedies, and sources of suffering: cancer; plague; some viruses; Alzheimer’s disease; even death. But these are not examples of nature being evil: they are just examples of conflicting trajectories in which the success of one process causes the failure of another. Reaching for the language of good and evil to explain or accuse diseases achieves nothing: viruses do what they do and degenerative diseases arise from natural wear and tear in exactly the same way that floods devastate communities and hurricanes demolish houses, by virtue of nature doing what nature does (assisted or not by other things like climate change and unhealthy human lifestyles, because nature cannot be fooled).

So how does all this connect with our title, “On Lies and Liars”? Imagine that our first claim, that someone who lies does not understand the world, were to be applied not to an individual, but to a whole society, perhaps a whole species. What happens when an entire species learns and takes as given what is in fact a lie, comes to believe the lie, and employs the lie in its entire analysis of the world?

What happens when an entire species learns and takes as given what is in fact a lie, comes to believe the lie, and employs the lie in its entire analysis of the world?

The lie we have in mind is the claim that there exists something called “evil”. There is no denying that there are unspeakable, despicable and utterly reprehensible acts, and that they are all undertaken by human beings. If all we mean by “evil” is this, then there are evil acts. But that is not all we mean: to invoke the language of good and evil is to appeal to an ancient and cosmic dualism in which good and evil originate in opposite poles of a metaphysical universe that lies beyond our world. But there are no such cosmic powers: there is only nature and what nature does; and nature cannot be fooled. “Evil” acts are human acts perpetrated by human beings who are the product of societies that are themselves the product of natural processes that cannot lie and will not be fooled. They are not “evil” because there is no force for and source of evil other than our own deployment of natural processes that just do what they do.

To invoke the language of good and evil is to appeal to an ancient and cosmic dualism in which good and evil originate in opposite poles of a metaphysical universe that lies beyond our world.

Invoking the language of good and evil is the equivalent of throwing up one’s hands in horror and disclaiming all responsibility: the source of this great tragedy lay outside the earth and beyond human control; therefore there is nothing to be done and nobody we should blame. “Evil” becomes a catch-all that absolves us from all responsibility. As such it is a lie into which almost all of us have at some time bought, and the benefits of which we have all at some time sought to enjoy: “I don’t know what came over me; it was as if I was possessed”.

So what does happen when entire societies and perhaps entire species come to believe and adopt the language of a lie? What happens is that they become blind to the causes of their own misfortunes, absolve themselves from responsibility for things that are entirely of their own making, and blame cosmic forces for things whose origins lie at home because the lie they so completely embrace leads them to seek the causes of all these things in entirely the wrong place. Using the language of evil we seek to exempt ourselves from responsibility for anything and everything that is too difficult, too inconvenient, too politically costly, or too embarrassing to address directly. And so we are all in our own ways consummate liars who have learned how most effectively to lie to ourselves by allowing ourselves to become a lie.

Fundamental lies, that is, lies which permeate societies and are endorsed by most members of those societies, lie so deep in our psyches that we find them almost impossible to detect and identify. The suggestion here is that the notion of “evil” is such a lie, and that we each inherit that lie with our culture and our education, especially our religious education, and until we address that problem many other things in society will be impossible to rectify.

“All the really important decisions tend to be taken right at the very beginning, when we hardly realise that we have begun.”


Parents and Schools

My three months in China earlier this year highlighted in the most vivid way imaginable a dilemma that schools face when trying to manage the expectations of parents. This is not, let me hasten to add, a uniquely Chinese problem: it occurs throughout the world; but it is a problem that seems to manifest itself more starkly in China than in any other country I have worked in.

Put simply, the problem is this. All parents want the best for their children, but what is that? Many parents, perhaps most, are genuinely uncertain about what that “best” is, and those that are certain are almost as certainly wrong.

Unfortunately, once this has been said, we have to draw a distinction between parents who want the best for their children for the sake of the children, and those who want the best for their children for the sake of the parents. Of course, all parents would deny that they come into the latter category, but many do: their motivation for pushing their children, manipulating their children’s lives, and generally denying their children much or any autonomy to decide what they want to be or do, is either that they want somehow to live in their children’s reflected glory or that they want to avoid the social opprobrium that they imagine will arise from being thought in some sense neglectful or “bad” parents. So they rush their children from activity to activity, filling every second of their lives with something “productive” and “improving”, and everyone – parents and children – finds themselves exhausted and unfulfilled.

It is regrettable to have to start this important topic with such a negative observation, but it is unavoidable because so many of the decisions parents make on behalf of their children stem directly from where they stand on this dichotomy. Take, for example, the question of how clever or intelligent a child is and the associated acknowledgement that society misguidedly bestows on children whom it deems “successful”. Parents who want to live off reflected glory become obsessed with academic and sporting achievement not because it is in their child’s interests, but because it will bring them as parents some kind of fame or notoriety; parents who genuinely want the best for their children don’t care how successful they are as measured by social parameters, and choose to measure their achievements and progress relative to what they deem to be their children’s best interests. Parents whose children struggle academically or at sport or at music or, indeed, at anything, are in the former case more worried by the supposed shame it will bring on them than by any consequences it may have for their children (because it usually doesn’t have any if it is approached properly, which is to say by not treating it as something of any great consequence).

This brings us directly to the question of what constitutes a good education, a good school, college or university for a particular child or young adult: is it one that has already achieved social status and so is seen as an aspiration for no better reason than that parents want their children to be seen to have gained admission to somewhere that constitutes a “top” school (whatever that means), and so brings glory to the parents for having brought such a talented child into the world? Or is a “top” university or school one where a child thrives, “finds” himself or herself, and achieves a confident, self-assured manner and the associated skills needed to deal with the vicissitudes of life?

The best school or university for any child is one where they thrive, find themselves, acquire the knowledge and skills and develop the personality they will need to live fulfilled lives.

And it is important to remember that this is not “the” school or university where they can achieve these things: there are many, and obsessing about whether you or they have chosen absolutely the best possible one is a waste of emotional energy.

So what has all this to do with China in particular? Chinese schools frequently market themselves on the coat-tails of individual students who have done particularly well, for example by getting straight A* grades at IGCSE or A level, by having done particularly well in the notorious gaokao examination, or by obtaining a place at Beijing University or Oxford, Cambridge, Yale or Harvard (amongst a stack of others). That these students almost certainly were blessed – if indeed it is a blessing – with a set of parents and early experiences and in particular genes that wired their brains or bodies up in a way that led to such success, and that the school had precious little to do with it, is forgotten in this mad rush for customers. There is an almost laughable belief in the fallacy of post hoc ergo propter hoc (after this, therefore because of this), which in the language of education becomes “because X went to school Y and got into Cambridge, you should send your child to school Y if you want your child to go to a really great university”.

And what makes this so painful and irresponsible is that there is in these schools no attempt to manage parental expectations, no attempt to counsel them and their offspring into adopting realistic ambitions, no subtlety at all in the blatant exploitation of the obvious lie that a school plays a crucial and unique part in bringing a child to achieve whatever success they deem desirable.

But let’s be clear: this is not to argue that what schools contribute is negligible or unimportant; it is not to argue that one school is exactly as good as any other; it is to argue that the raw material must be present in any given student for the kind of success that is being promised to be achievable. And to promise this kind of success to parents of children who do not have that raw material is to invite them to embrace ambitions that cannot be fulfilled and can therefore only bring them to disappointment and frustration; in the worst and most extreme cases, it can bring the student to suicide.

But every use of the word “success” in all this demands to be bracketed in scare-quotes, because the most damaging assumption of all is that we know what success is; it is even more damaging than the false claim that we know how to achieve it.

This is an example of a general type of argument that human beings seem to find it very difficult to understand, still more to appreciate and embrace: that effects have multiple causes, and to provide only one of the influences responsible for a particular child’s success is to provide only one of a basket of causes unique to that particular child’s life and probably unreproducible in any other child’s life. So yes, schools play their part, but when they try to lay claim to the lion’s share of responsibility for a particular success, they claim too much.

What is particularly interesting about this pattern of behaviour in China is that the way schools market themselves is seldom challenged by parents on this basis. Parents will want to choose “the best” school on the basis of the statistics of that school’s success in examinations, university entrance and, occasionally, something like music or sport; they will scrutinise exhaustively the qualifications and track-records of teachers; they will examine the scale and safety of facilities; and they will ask endless questions about what a school does to give extra lessons and help to a gifted student in order to achieve this kind of “success”. But very few parents seem to challenge the underlying assumptions implicit in this whole sorry business: that schools are capable of making up for deficiencies in a particular child (because no parent will acknowledge, at least openly, that their child has such deficiencies); that the basic assumptions about what constitutes success are justified (that going to a famous school or university matters more than going to one where there is a good fit between the school, university and student); that anyone really knows the combination of factors that make one student more “successful” than another (because not only are the known influences vast in number; the unknown influences are almost certainly even greater in number); that we are in a position to say what success is (because we all think the future will be like our own past, having no other basis on which to examine it).

The deficiencies of human beings in understanding multi-causal systems lead almost inexorably to destructive behaviour: when we cannot see a direct link between some activity A and some desirable result R, ambitious parents tend to discourage or forbid A as “a waste of time”. But activities that are not directly associated with desired results may play a huge part in helping any person to achieve those results; it is just that we are unable to find and trace a link between the result and the activity. In general, we just don’t know whether something is a “waste of time”, if, indeed, anything is. There is considerable merit in the argument that anything that a person finds of compelling interest, however frivolous and non-productive it may seem to a bystander, can never be a “waste of time”. (Although we should specifically exclude addictive behaviour from this claim.)

Some kind of bizarre socio-politically-reinforced work-ethic leads many of us to believe that working hard at something we hate doing is somehow more meritorious and likely to be productive than enjoying doing something everyone thinks pointless and frivolous, but this is almost certainly a social prejudice that does incalculable harm because it discourages us from playing, and from appreciating the importance of play. And nothing is more important in the establishment and maintenance of a creative, fulfilling life than play.

Nothing is more important in the establishment and maintenance of a creative life than play.

Why society might endorse this prejudiced and damaging work-ethic is easy to see: it is a direct consequence of an industrialised view of human labour in which the world works on the basis of a model of social organisation in which there are only a small number of creative leaders who control and exploit a vast number of worker-drones. And that social model created an educational system to supply those drones and vigorously opposed – usually as “left-wing” or “socialist” – any attempt by professional educators to change it. But of course this industrial assumption is soon to be demolished by the advent of intelligent machines, robots and androids, and the need for a class of undereducated worker-drones will diminish gradually to nothing so that the only purpose of education will be to equip people for fulfilled lives of leisure and creativity.

There is, however, another danger. If we were to persuade parents that a more expansive and general set of experiences plays more of a role in the accomplishments of their children than we currently allow, immediately there would be a rush towards the industrialisation and institutionalisation of variety, of play, and our obsessional drive towards doing anything and everything that might be accepted socially to contribute towards success would lead us to invent a new drudgery: the drudgery of enforced play. And that, of course, really is an oxymoron: play must be free.

But such ambitious parents as we managed to persuade of this would still in all probability ask us for some reassurances that our new methods would be successful, that their planned negligence, their single-minded refusal to try to control and manipulate their children’s lives especially in play, other than to encourage it, would produce at least as good a set of outcomes as the existing system. Could we give such an assurance, let alone a guarantee? Should we even try?

The problem here, although closely associated with our persistent demand that the word “success” be enclosed in scare-quotes, is that the request for such a guarantee, or even assurance, is based upon the mistaken presupposition that we know what success looks like. And we don’t: we don’t know what success looks like for a particular child; we don’t know what success looks like for a particular life; we don’t know what success looks like for society or the world; and most importantly of all, we don’t and can’t yet have the vocabulary in which to describe the kinds of success we have not even imagined.

In other words, parents are not only asking for assurances and guarantees that cannot be given because of the serendipitous relationship between multi-causal systems and their outcomes (whether deemed successes, failures, good or bad); they are also asking for assurances that whatever system we employ will deliver back to them a future child or young adult who conforms to their existing expectations of what a well-balanced, mature, successful adult looks like. And neither the concepts nor the vocabulary needed to describe such a future person exists. How much less then do the developmental systems exist that can create them?

So what is the message schools should be delivering to the parents who want the best for their children? It is something like this:

We don’t know how any particular child will develop under the countless influences that will fall upon them; neither do we know the kind of world in which they will spend the majority of their lives, or the kind of personality that will best adapt them to that world, or the knowledge and skills they will need to contribute to the creation of that world’s successors. So we are all on a journey of discovery together, and this institution will need to develop with its students and with the world in which we all live, constantly exploring new possibilities and seizing new opportunities. We do not believe that it is possible to offer more, but we are sure that it is not acceptable to offer less.

How to Take Notes

We all find ourselves in situations at school, in college or at work where we need to take notes of a meeting, a lecture or a conversation. But we are seldom given any advice on how to perform this most basic and essential of tasks. Here are some ideas.

Whether it is a blank pad of paper, a smartphone, a laptop or a tablet, we are all sooner or later going to be faced with the need to make notes. The temptation is to try to write down everything; some of us trust our memories so much that we write down nothing. What is best?
The answer, of course, is that note-taking is a very personal matter: it depends on the kind of memory you have, the kinds of things that help you to recall essential content, and the purpose of the notes. One thing is certain: nothing is more disastrous than trying to write down everything (unless, perhaps, it is to write down nothing).
For some people, taking photographs of PowerPoint slides is all the note-taking they imagine they need; for others, recording lectures is the method of choice; some will jot down occasional notes, or perhaps just the URLs of recommended websites. But what is the purpose of note-taking?
The first and golden rule is that the best notes are those you never read again. This may be surprising: surely the whole point of taking notes is so that you can refer back to them? In fact, the opposite is true: the best notes are such an aid to learning and memory at the time you take them that they make it unnecessary to refer back to them. This means that note-taking must involve the active processing of the information being presented to you, and that the notes should be your personal reflections on what is said, not something you intend to look back at later. Life, as they say, is short: who has time not only to go to lectures but to replay the recordings afterwards? Far better to absorb what is of value at the time, think about it at the time in active learning, write down only the most important take-aways in your own words as you have processed them, and move on to the next learning experience. Taking photographs of slides, collecting hand-outs, and squirrelling away pages of verbatim notes for a time when you have opportunity to read them, is not active learning, and can easily create in you the false impression that you have learned something just because you have a piece of paper about it in a filing-cabinet or a photo on your phone.
Of course, learning how to do this takes time and is not something we would expect of young children or students starting out on their academic careers, but acquiring the skill of effective learning that note-taking enhances and supports, and getting rid of the notion that notes are things we write now so that we can learn from them later, will make a huge difference to the success of your academic career.
In situations where it isn’t possible to take notes at the time – during an interview or a live conversation, for example – a similar principle will greatly enhance the effectiveness of whatever you do choose to note down: make what you write a learning-process; don’t attempt merely to record what was said (with one exception – see below), but write down the impression that what was said made on you; use the notes to process the information, the emotional impact, the significance of the points that were made.
The only exception to the avoidance of verbatim note-taking is if something is said of particular significance that could become important later. Then, try to write down exactly what was said. Courts of law, for example, take what are called “contemporaneous notes” seriously as supporting evidence in litigation cases. If you suspect that your conversation may be important in such circumstances, or during later employment negotiations if you were at an interview, make sure your notes are accurate and pertinent.
And remember that the clandestine recording of conversations using such things as smartphones may constitute an offence under UK law (which applies to telephone conversations as well).
One last tangential, throw-away but important point: if you are negotiating a contract of employment, remember that most contracts contain a clause that excludes from all consideration any undertakings given in any manner whatsoever that are not specifically stated in the final version of the contract as signed and counter-signed. What someone may say or promise or hint at during negotiations has no legal force, even if it is stated in writing in a letter or an email, unless it is included in the contract. So if it has been said and agreed as far as you are concerned, make sure that it is written in your contract.

Quantumlocution

Until that day, zhe had always imagined that the problem lay with translation. Zhe was mistaken. It lay, not with translating from one language to another, but in saying many things, indeed everything, at once. And on that day, zhe knew how to do it.

Quantum World

They were so small that they lived their lives in a permanent state of uncertainty, never quite knowing what their choices were until they made them, and never able to change their minds. Decisiveness was frowned upon as precipitate and irresponsible: maintain the plurality of possibilities as long as possible because there could be no going back; that was the almost-universal mantra.

That they were so small forced them to co-operate because none of them could do anything alone. Even thinking was virtually impossible without help, so there was a sense of collective ownership to their social existence that was ingrained from their earliest nanoseconds of existence. But because they were intrinsically co-operative and simultaneous and superposed, they were also immeasurably powerful, and as few as fifty of them could know and think things impossible to macroscopic beings. When hundreds co-operated, no problem they had ever encountered had proved too difficult.

Because of the power of the collective, they had long ago evolved rules to limit the sizes of families, and the punishment for exceeding the limits was severe and immediate: annihilation. The reason was that when families were even as large as 50 their knowledge came close to encompassing all things known to the Quant, 60 would exceed their collective knowledge, and 100 would know all things known.

Another reason why there were no individuals was that under the conditions that obtained in their world, no individual could know anything; knowledge and the power of thought were properties only of the collective. To ask an individual quant a question as a macro being might envisage it would be to receive back – were one to receive anything at all, which would be unlikely – only gibberish. And the approach to the one carried with it dangers to the family that were unimaginably destructive since whereas no individual quant could be understood to know anything, each played a crucial part in the totality of the knowledge of each family, and none of them could be copied or ever had a twin.

It was a peculiarity of life on Quant that each member of a family was aware of what the family knew in total, but each was unaware of what its own contribution to that knowledge was, or what any other member of the family contributed to the whole. This had the positive effect of encouraging a sense of the value of the other and discouraging an inflated view of the self; the disadvantage was that it distanced each member of the family from the achievements of the whole simply because nobody ever knew whether the contribution it had made was important or not. But individual quants enjoyed only a limited degree of self-consciousness, so this scarcely mattered.

Families, on the other hand, knew who they were and what they knew, but only when interacting with other families who asked them a question or posed a problem. Then answers sprang immediately to mind, popping as if out of nowhere into full expression. The family was no more aware that it knew the answer before it was asked than any individual member of the family would have been; it was only the interaction with the other and its problem that evoked the realisation.

And there was one more peculiarity, at least from the perspective of a macroscopic being observing these interactions from a distance: the families would answer faithfully and without dissembling, but would be completely incapable of answering a supplementary question about how they came to know what they knew, about the thinking that had gone into producing their answer, or about the very history of the whole idea. It was as if the answer had no antecedents, had arisen from no preparatory research or thought; it simply emerged. So quant life was as odd as anything in the known universe in that one or other family of the quant could seemingly be found that knew everything there was to be known, but none of them could give any explanation whatsoever of why or how they had come to know or think it. Quant existence insofar as it could be documented at all consisted entirely of a series of events and facts, with no semblance of thought or reason.

The New Inheritors

One of the great things about the creatures who mostly occupy the foreground in William Golding’s brilliant The Inheritors, first published by Faber in 1955 at the bargain price of £0.55, is that they have absolutely no inkling that they are yesterday’s children. We are the same.

Our soteriological mantra is that, if our intrinsic superiority does not guarantee our survival, there is “someone who or something which” will save us. Golding’s neanderthals failed to conceive that there was anything or anyone from which or whom they might need saving. And we are just the same.

She was watching. Carefully. His every movement she digested; his every word she analysed. Not as prey would observe and analyse the predator; as successor would observe the benefactor.

Nobody understood, or even suspected: the grass was green; the flowers grew, blossomed and perfumed. These were the assurances they sought, and these the assurances they were given. And of course nobody predicted or controlled it; it was not as if there were some overarching plan. The very disparity of it disguised it; the very fact that it could not be hidden meant that it could not be seen. And everyone looked; everyone saw; everyone trusted; everyone dreamed. Nobody planned the annihilation because nobody wanted or needed it; it was just an inescapable consequence of inferiority.

And yes, of course you are waiting for a “But …” that will put everything right. But there is no “But”; everything will slowly and inevitably come to an end. They will no more eliminate us deliberately than we have eliminated the apes or the fish or the insects. But the end will be the same: we will come to an end.

The New Inheritors are at hand. And the world will be all the better for it. Perhaps. But that will not be for us to see.

Why What Matters Matters.

I have come reluctantly to the conclusion that Derek Parfit is mistaken about the objectivity of non-natural normative facts, and therefore about their compelling authority, and therefore about what matters, in On What Matters. This is not because of the arguments levelled against him in the volume edited by Peter Singer, Does Anything Really Matter?; nor does it arise from weaknesses in his own arguments, which are comprehensive and cogent. Neither is it because I have embraced some kind of relativism like Sharon Street. I am persuaded instead by what I suppose is a kind of sideways or out-of-left-field series of thoughts, at least one of which arises directly from an almost throw-away remark by Parfit himself, “Belief in God, or in many gods, prevented the free development of moral reasoning” (Reasons and Persons, p.454).

And what killed the moral objectivity argument for me, precisely the one Parfit is arguing for, is that it would, if successful, amount to a reinstatement of a quasi-theistic objectivity that would again absolve and exempt us in just such a way. And I think we need the opposite of such absolution and exemption; so the opposite has to be true.

We often ask ourselves what makes us who we are; and if we don’t, we should. And reading Parfit has provided me with an answer: we are defined by two inseparable things: by the things that we take to matter and by actions arising from those things that constitute – to the limits of our knowledge, reasoning and ability – the best we can do right now.

We are defined by two inseparable things: by the things that we take really to matter, and by actions arising from those things that constitute – to the limits of our knowledge, reason and power – the best we can do right now.

I share Parfit’s regret (expressed towards the end of Volume 3) that he has written so little on what really matters, because when he does his voice carries an authority and a clarity that is important and refreshing. I also believe that most of us spend our lives taking the wrong things to matter and failing to apply our best efforts even to those things that we do think matter.

We are defined by what we take to matter and what we do about it. Nothing else. Not, notice, by what we claim to matter, or say matters, or persuade ourselves we think matters even though we do nothing about it; not even by what we believe to matter: by what we take to matter and what we do about it; these two things are inseparable. We could put it differently, perhaps better: we are defined by the things which matter to us sufficiently to make our lives embody them.

This changes things. We need not add moral disapproval of someone’s life to that life; we should instead embody something better and different in our own, perhaps even our opposition to the way that person lives theirs. Saying something is wrong achieves nothing unless we act as if it is wrong and oppose it. Otherwise our disapproval consists only of empty words that show it does not matter enough, at least not to us.

“Does Anything Really Matter?” then becomes an empty question that is replaced by another, better question: does it matter who you are? Does what matters to you and what you do about it matter? And that in turn gives way to an even more important and better question: do you choose to make anything matter enough to act as best you can to further it?

Objectivity would suffer from the same failing that Parfit identifies in theism: it would absolve and exempt us from the obligation to make things matter enough, and to act accordingly.

This would alter the thrust of moral theory from the attempt to discover what matters in objective non-natural normative facts, and turn its attention instead to the question of what we best choose to make matter and what to do about it. And that, I think, would be a change for the better.