The Ethics of AI

In 1997 I published a book called God and the Mind Machine through SPCK. It didn’t sell, and they pulped it. The biggest intellectual mistake of my life was to allow myself to be so discouraged by this that I effectively abandoned the study of AI for the better part of 20 years. More fool me.

God and the Mind Machine (hereafter GMM) was interested less in AI itself than in the mind-body problem, the question of the soul (or rather why we don’t have one), and how to conceptualise the inner life of other entities, including, potentially, machines. I still regard it as having essentially resolved the mind-body problem in such a way as to leave open the possibility of artificial life that is fully sentient, and I have read no persuasive counter-arguments.

The solution to the mind-body problem can be stated in a sentence: to be a suitable body with a suitable brain is to be sentient; our minds are our bodies in their inside-looking-out-ness.

That’s it. No souls, no ghosts in the machine, no special additional qualities bestowed upon us by the gods: just being bodies with brains existing interactively with the world is sufficient. And therefore a suitably sophisticated machine that interacts with the world can also be a mind, can also be intelligent: something with which, or someone with whom, we could have just the same relationship as we have with another human being or one of the higher animals. And it seems to me inevitable that those machines will one day supersede us in intelligence and come to see us as the failed species that we are.

We shouldn’t be surprised by this. It is only our mistaken adherence to a version of Plato’s world-view tangled up with one or other kind of religion that makes us want more or think that there is more.

All this is obviously relevant to the current debate about AI, and the position I adopted in GMM persuades me that those who say it is not a question of “if” but of “when” AI will become self-aware, and therefore conscious, are right. I propose to waste no more time debating the matter.

So the new question becomes not whether sentient AI will emerge, but what kind of sentience AI will enjoy. And this is the dilemma of “The Ethics of AI” because human beings only understand sentience from their own perspective, and an advanced AI will certainly understand it differently, and probably more deeply than we do.

Before we can sensibly enter into a discussion of the ethics of AI we have to resolve some pretty fundamental questions about the ethics of human beings. Almost everyone involved in the AI debate wants to ensure that they – the super-intelligent AI beings – will be benevolent to human beings, but I find it hard to understand why we should expect such benevolence. Human beings are irretrievably flawed, and I don’t see why a super-intelligent AI would see them in a more favourable light; in fact I see every reason to suppose that a super-intelligent AI will see us exactly for what we are, and perhaps better than we understand ourselves. The reality is that we are a failed species: we have done some things very well and crawled a long way from the primaeval slime; but we rely on war and violence and drugs and brutality and discrimination to defend our so-called freedoms, which is to say that we are still locked into the same kind of evolutionary war that produced us and are seemingly incapable of developing beyond it.

So there is a fundamental philosophical challenge buried at the heart of the debate about AI and especially its ethics: in terms of ethics and intelligence, how can we supersede ourselves? How, in other words, does a flawed species propose to engineer a species that is superior to itself in terms both of intelligence and ethics?

This is a philosophically deep question, especially if we approach the problem from the perspective of designing algorithms (although there are good reasons, gleaned from the best and most successful approaches to machine-learning, to believe that this approach is not optimal). The problem is as old as computing itself: we commonly conflate the fact that a system’s behaviour is fully coded with the predictability of that behaviour. This amounts to another of our failed intuitions: we tend to think that, if we know all the coded steps that define the behaviour of an AI, we must necessarily know what that behaviour will be. This is an illusion.

It is also a very dangerous illusion. To see that it is an illusion, consider the axiomatic definition of the natural numbers {1, 2, 3, 4, …}: we know exactly how to define the natural numbers in primitive terms in such a way that we can generate them indefinitely, yet many properties of the natural numbers still elude us (Goldbach’s Conjecture being the most famous). Or consider the rules of chess or of Go: we know exactly and entirely what they are, but we do not know every game that can be played, and the question of whether the player who moves first can force a win with best play remains undecided. Most powerfully of all, consider the operation of language: we can produce a dictionary and a grammar that define the use of a language, but we have absolutely no hope of being able to predict all the uses to which its words and syntax can be put, even though we know that anything that is ever said must employ them.
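
To make the gap concrete, here is a minimal sketch in Python (purely illustrative; the function names are my own): the rules for generating primes are completely specified, and we can verify Goldbach’s Conjecture for as many even numbers as we like, yet no amount of such checking settles the general question.

```python
# Purely illustrative: the rules below are completely specified, yet the
# general property they let us test -- Goldbach's Conjecture, that every
# even number greater than 2 is the sum of two primes -- remains unproven.

def is_prime(n: int) -> bool:
    """Trial division: fully defined, mechanical, predictable in each single case."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_witness(n: int):
    """Return a pair of primes summing to the even number n, if one exists."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None  # a counterexample; none has ever been found

for n in range(4, 1001, 2):
    assert goldbach_witness(n) is not None  # holds for every case we test...

# ...but no finite run of this loop can tell us whether it holds for all n.
```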

In the case of AI this illusion is potentially catastrophic because it leads us to believe that because we control the process of creating the AI we must necessarily be able to control the way the AI will evolve and behave. Yet if the system is sufficiently sophisticated to offer a hope of reproducing human or superhuman intelligence, we cannot and we don’t.
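
A few lines of code are enough to exhibit the same gap between specification and behaviour. In the following fragment (again purely illustrative) every step is defined in advance, yet whether the loop terminates for every starting value is the Collatz conjecture, an open problem: complete control over the code gives us no corresponding knowledge of its global behaviour.

```python
# Every step of this function is fully specified, yet whether it halts
# for every positive integer is the Collatz conjecture -- still unresolved.

def collatz_steps(n: int) -> int:
    """Count the steps taken to reach 1 (assuming we ever do)."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111: easy to compute, yet termination in general is unproven
```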

To put it as clearly as possible: we create our children, but we cannot control how they develop or behave; the same is true of AI, and perhaps to a greater extent because we understand the ramifications of their design far less.

This illustration is not arbitrary: in the case of human existence and behaviour we have attempted, and usually failed, to design ethical systems to constrain the way lives are lived. They have failed because there is no necessary and binding connection between the words that shape an ethics and the behaviour that ethics is intended to govern. As a result we have found it necessary to have recourse to policing, law, trial and punishment to enforce underlying ethical principles; it has never proved possible to leave adherence to a particular ethics entirely to the people, however well most people have generally behaved. Indeed, the fact that many people behave well has often been seen by those inclined to misbehave as a weakness that only encourages them to act as criminals.

The point is that throughout history the human ethical enterprise has failed: we have never succeeded in creating and maintaining a society in which ethical principles would govern behaviour; we have always had to resort to law and enforcement to maintain order. Even the most high-minded principles articulated in the best religions and philosophies have always required supporting force, and that force has always subverted the higher principles and in the end destroyed them.

Our anxieties about what we may have created or may yet create in AI are reminiscent of the Greek myth in which Cronus swallows his first five children by Rhea, only to be overthrown by the sixth, Zeus, because Rhea deceives him into swallowing a stone in the infant’s place. I can’t imagine a more apposite myth for what will happen with AI, with or without an ethics, because sooner or later someone will hide what they are doing from the world until it is too late. (It is a myth that may repeat itself “upside down” in self-driving-truck pioneer Anthony Levandowski’s dream of an AI religion where, if historical precedent is anything to go by, those who make the AI gods will be the first to be destroyed by them.)

So even before we begin to develop an “Ethics of AI” we need to recognise that such an ethics will not work unless it is policed, and that there will inevitably be desperate, reckless, destructive human agencies who will seek to deploy AI against any or all of the principles elaborated in any ethics, ostensibly to their own advantage.

Pessimism about the prospects for an ethics of AI should not deter us from trying to work on one, any more than excitement at the possibilities offered by such new technology should blind us to its dangers. But for a species so stupid that it sees the only solution to a man with a gun as a man with a bigger gun, AI will inevitably be weaponised, so it is probably already too late.
