Ethics for Transhumans - A reply to Josh Hall's Ethics for Machines

In Ethics for Machines, Josh Hall – well known for his brilliant contributions to nanotechnology - has turned his attention to the crucial issue of morality in our rapidly changing, high-tech world. However, I'm afraid that I cannot support many of the article's premises, inferences, or conclusions.

The paper postulates an inborn 'ethical instinct' that fosters group 'dynamism' and survival - goals that the author sees as both descriptive and prescriptive (what is, and what is desirable). This instinct, programmed by the moral rules dominant in society, yields our conscience, which in turn serves to overrule our common sense and self-interest - thereby promoting group 'fitness'. The essay claims that evolution is the best means for selecting moral rules - (ironically) rejecting deliberate human evaluation and design. The author extrapolates all of this to conclude that our best chance for averting disaster in this fast-moving world is to imbue organizations and machines with an 'ethical instinct' that will learn society's rules - in order to overcome their selfishness.

The article suffers the usual problems endemic to moral debate – vague and shifting definitions, confusion over 'duty', rejection of the possibility of rationally derived morality, and mixing prescription & description (see De-scription versus Pre-scription - and other Ethical Confusions). These and other objections are summarized below, while the appendix provides a more detailed analysis:

  1. The entire structure of Hall's theory revolves around an ill-defined goal of maximizing the 'dynamism' of groups. No justification is given – the prescriptive aspects of the essay simply imply a duty to this goal (and in fact, various others). Not all of us share such a meta-ethical goal. Some of us believe that morality should serve as a guide for optimizing individual life.
  2. The meaning of several crucial words is ambiguous. In particular, 'self-interest' seems to range from 'rational, long-term self-interest', to 'myopic, wanton selfishness'; while 'common sense' ranges from 'valid, general knowledge of everyday life', to 'seemingly obvious, yet mistaken beliefs'. Such equivocations may appear to help the author make a particular point - but only at the expense of overall consistency and comprehension.
  3. The harmful and mistaken assumption that self-interest is inherently at odds with what is good for society: Civilization is not primarily a zero-sum game. A derivative point is the disastrous view that morality itself is not in an individual's best interest. This error creates a fundamental psychological conflict: A war between conscience and self-esteem.
  4. The text not only opposes self-interest, but recommends that we also override our common sense, to blindly follow our conscience – do what feels right. Common sense is neither defined, nor is its relationship to conscience analyzed.
  5. While the author seems unable to decide whether any moral progress has been made in the past, he asserts that evolution automatically creates the best moral codes in us, and that the right is what our conscience tells us. This implies that we can improve neither the workings of our instinct nor the selection of codes – that no progress has been made by deliberate 'tinkering'. These propositions seem patently false, unworkable, and harmful. Rational moral design and debate are the best way to upgrade the meme-pool, to achieve our goals, and to minimize the use of force to resolve differences.
  6. There is no analysis of which rules an 'ethical instinct' can or cannot learn; which codes it will learn in a given environment; or which of the many (often conflicting) codes it will actually apply to a given situation. This lack prevents the theory from being testable or predictive, and furthermore precludes it from guiding the design of machine instinct or ethic - the very purpose of the essay.
  7. Ignoring the importance of high-level consciousness in morality prevents the paper from progressing much beyond historic, descriptive moral statistics – core issues of prescriptive ethics such as personal values, motivation, choice and responsibility cannot be addressed. It is exactly our deliberate, conceptual cognition that separates us from animals and savages - and that represents the key to elevating our morality to transhuman status (see Volition vs. Determinism).
  8. The text wistfully reflects on the possibility of one day understanding and improving our human 'ethical instinct' (and of building robots that 'we'd let our daughters marry'). Not only do we not have to wait - there is a lot we could do today – but sorting out our (trans-)human ethics is a prerequisite for deciding how we should treat other organisms and machines, and what morality we can and should build into our machines.

Undefined terms and internal contradictions make it very hard to interpret the paper. However, the most obvious reading yields rather unpalatable conclusions: that we accept evolutionary selection of the fittest groups as our ultimate value; that we abandon rational analysis, debate, and design of what is good, and passively play our part in the natural process by blindly following whatever code our group happens to have adopted – and kill off those that disagree; that we eschew self-interest and abandon common sense whenever something doesn't feel right; and finally, that we transfer this deadly design into our machines and organizations.

It is a pipedream (nightmare) to imagine that we can design an unconscious instinct that will correctly override rational thought to make us (or our machines and organizations) do what is right. We cannot abdicate our moral responsibility to mind- and heartless evolution.

If this is not what the author intended to convey, it is not at all clear what his prescriptions are: What is his standard of value; what does 'dynamic survival' mean? How do we know when we have some 'duty' to something (if ever)? Should we ever judge moral codes and try to reprogram ourselves? If so, how? Should we ever override feelings of right and wrong, and choose common sense, rationality, or (god forbid) self-interest? If we cannot use rationality to decide what is right, then how else do we resolve differences between our moral codes? Brute force? The paper provides no clues.

Yet, our (moral) future is not so grim. There are many answers.

What a wonderful faculty we have. Rationality – which made the tremendous accumulation of knowledge and the scientific method possible – is what separates us from barbarians, and gives us a chance to overcome animal and human limitations. Yes, we are neither omniscient nor infallible, but that doesn't preclude us from rationally developing good moral codes. Past failures in moral design are no more predictive of the future than thousands of years of failure to fly.

Reason is the best tool we have to avoid disaster and to optimize life. Let's by all means use whatever insights we can gain from evolution – or any other aspect of reality; but let's not abdicate our moral responsibility – and our future – to a blind, unconscious watchmaker. We can transcend our primitive kill-or-be-killed 'group instincts' by rational evaluation and conscious design: By discovering what moral principles best foster our goals, by understanding how such a moral code serves our self-interest, and by re-calibrating our moral compasses with this wisdom. Then, what makes sense also feels right – eliminating conflicts between conscience and reason.

Let us debate; and design the best morality we can!

Appendix: Detailed Analysis of Ethics for Machines - (original text: "...")

1 The Purpose and Goal of Ethics – What are our Duties?

The essay fails to provide clear definitions of 'ethics' and 'morality' – it gives only a few clues: The purpose of morality is for entities to learn things that are good for groups. Descriptive aspects of Hall's theory posit group fitness to be the (teleological) goal of morality - also defined as 'the good'. This is variously described as increased group 'dynamism', vitality of culture, survival, or fitness 'for a particular niche' – making groups prosperous, successful, more numerous, enviable, and militarily powerful. The ambiguity of this standard of good, as well as other problems with the theory, are addressed later on.

Prescription – what we ought to do – presents the author with a dilemma: His theory states that we are too stupid to be trusted with deliberate moral design, yet pleads for a better ethic, makes wall-to-wall judgments, and exhorts us to implement specific steps. Furthermore, while the overall goal of prescriptive ethics is not spelt out, ongoing group 'dynamism' and 'fitness' are clearly implied by the wish to replicate our 'ethical instincts' in machines. No justification for these fuzzy goals is given. It seems that we simply have 'a duty' to our selfish genes and memes.

Yet the paper cannot consistently maintain such selflessness. For example, here is the real reason for teaching machines our ethics: "...not too long after there are computers as intelligent as we are, there will be ones that are much more so. *We* will all too soon be the lower-order creatures. It will behoove us to have taught them well their responsibilities toward us."

Such confusion culminates in the essay ending with a laundry list of blanket assertions and personal value judgments: "...The inescapable conclusion is that not only should we give consciences to our machines where we can, but if we can indeed create machines that exceed us in the moral as well as the intellectual dimensions, we are bound to do so. It is our duty. If we have any duty to the future at all, to give our children sound bodies and educated minds, to preserve history, the arts, science, and knowledge, the Earth's biosphere, 'to secure the blessings of liberty for ourselves and our posterity' -- to promote any of the things we value --those things are better cared for by, *more valued by*, our moral superiors whom we have this opportunity to bring into being."

By the author's own theory, why should we heed his advice? By all accounts, these are just the memes he happens to have picked up - modified by his own common sense and self-interest.

Historically, duty ethics has almost exclusively been used by groups and individuals to control other people: Be it fascism, religion, or just paternalism. Some of us will have no part in this. We don't want to rely on the manipulative emotions of duty to decide how to deal with fellow humans; or for that matter, be confused about our relationship to plants, or machines: "...There is no hint, for example, that plants are conscious, either individually or as species, but that does not, in and of itself, preclude a possible moral duty to them..." -- "...We have never, however, considered ourselves to have *moral* duties to our machines, or them to us. All that is about to change."

To some of us, duty is not the starting point of ethics – neither as a substitute for explicit goals, nor as duty-bound acceptance of whatever our conscience tells us.

Not only do we reject duty as the ultimate justification for ethics' goal; we also reject the goal - survival of the fittest group – and the means to achieve it. Essentially the paper urges: 'Let evolution kill off societies and organizations with less powerful rules'. Furthermore: 'Let us provide the Watchmaker with the tools to do his blind, unconscious work'. Many of us share neither that vision of a desirable standard of 'good', nor the wish to passively collude with this heartless and mindless process.

We accept no duty to evolution. What we want is more important than what our genes or memes 'want'. If they predispose us to following certain rules – and in particular the rule that we should blindly follow society's rules – we question their utility. We take the long-term flourishing of individuals (within society) as our highest value – and as the goal of morality. Ethics is a tool for optimizing our life.

People must dramatically devalue their actual existence, and view their lives incredibly abstractly, to place a higher value on some unknown future 'dynamic' society – on blind evolution – than on the happiness and success of people alive today: and in particular, their own betterment and that of the people important to them.

However, we also realize that in order to better achieve our goals, we must take personal responsibility for our ethics: Our moral compasses are in constant need of proactive re-calibration and fine-tuning - our conscience, by itself, is not a reliable guide.

A clear definition of 'morality' should be the first item on the agenda of any ethics debate. In addition, any prescriptive discussion must be preceded by addressing the following meta-ethical questions: What is the goal of ethics? Why do we need it? How are good principles identified or discovered? What is the standard of good? Good for whom? If it turns out that our values differ – e.g. altruism vs. self-ownership – then let's face up to it. Let it not get lost under the scientific-sounding cloak of evolutionary so-and-so....

2 Fuzzy Definitions & Equivocations

Interpretation of the text is difficult, as several crucial terms are not well defined, and in many cases change meaning quite dramatically:

Fitness/ Dynamism – These central terms are not defined. Their meaning seems to range from survivability, to maximum growth (number), to culturally active, to militarily powerful, etc. No absolute or relative time/ size parameters are given. What does he mean: Survival over a human life-span or the universe's? Survival at the expense of competing societies? Growth and power at the expense of quality-of-life? Change for the sake of change?

In addition, to the extent that 'fitness' is defined by survival, it is an empty concept: Fitter groups are simply those that happen to survive.

It is worth noting the inconsistency that powerful corporations are judged to be bad: "...We may be on the cusp of a crisis as... corporations grow in power but not in moral wisdom." In contrast, the power of groups and memes is deemed to be a mark of their goodness.

Ethics/ Moral – At different times these terms refer to either what 'ethical evolution' has produced - the 'good' - or to what the author (obviously) deems good. In addition, they also equivocate between goal-directed & duty-based motivation, descriptive & prescriptive views, and human & universal beneficence. In other words, there is a lack of distinction between: 'objective' & subjective, consequentialist & deontological, what is & what should be, and who or what ethics is for. (More details below).

Conscience/ Deontology – Conscience is identified as being deontological, with the following explanation: "...the rules in our heads govern our actions without regard for results (indeed in spite of them)". Here philosophy is confused with psychology. Our conscience comprises our automatized responses – the subconscious, emotional values & disvalues we happen to have acquired. Deontology, on the other hand, is a philosophical (deliberate) commitment to the idea that there is such a thing as a goal-less good – Good with a capital 'G'. Most people believe that their moral actions are good for something: society, their children, getting to Heaven, etc.

Consciousness – "...there is a tendency for people to set a dividing line for ethics between the conscious and the non-conscious... The short answer is that it doesn't matter." The essay claims that identification of consciousness is irrelevant. This stance prompts the author to puzzle over our moral relationship with plants, but more importantly, it makes it impossible to decide what entities (children, other races, animals, organizations, programs, robots) to expect moral actions from, or how higher intelligences might judge us. It surely will make a difference if those Super-Intelligences conclude that we are 'not really conscious'.

Trying to resolve the problem by changing the criterion from 'being conscious' to 'acting conscious' is epistemologically sloppy, and psychologically dangerous. An entity is conscious to the extent that it possesses the characteristics of (a specific kind of) consciousness – including, but not limited to, how it acts. We know how easily we can be fooled into ascribing more intelligence or awareness to animals or programs than they really possess. The problem of understanding consciousness can be solved by carefully unpacking the grab-bag of different types of awareness: sensory, pattern matching, conceptual, physical self, mental self, etc.

Human-level consciousness is a prerequisite for moral responsibility – you have to know what you are doing, and that you are doing it.

Self-Interest – Use of this word implies a range of meaning from real to apparent self-interest – essentially, from 'rational, long-term self-interest', to 'myopic, wanton selfishness'. For example: "... interactions between intelligent self-interested agents...", vs. "...when people start tinkering with their own moral codes, the first thing they do is to 'fix' them to match better with their self-interest and common sense (with predictably poor results).", and "...at odds with their *perceived* self-interest and common sense."

Common Sense – This term refers to both 'seemingly obvious, yet mistaken beliefs', and 'valid, general knowledge of everyday life'. In addition to previous examples: "...adoption of a rule that seemed to contravene common sense... could have a substantial beneficial effect...", vs. "...Industrial robots... have... no common sense whatsoever.", and "...Bureaucracies famously exhibit the same lack of common sense as do computer programs." Also note this contradiction: "...[corporations do not] obey prevalent standards of behavior which stand at odds to their... common sense".

Machines – Identifying (government) organizations literally (as opposed to metaphorically) as machines probably doesn't clarify things. The reason we have different words for machines & organizations is precisely because they are different concepts with different properties. The real point is that because organizations (and increasingly machines/ programs) are networks of relatively autonomous entities, they are much more difficult to control and predict.

3 Self-Interest versus Group-Interest, and Morality versus Self-Interest

"... a moral code is a set of rules that evolved under the pressure that obeying these rules *against people's individual interests and common sense* has tended to make societies prosper..." -- "...In many cases, the adoption of a rule that seemed to contravene common sense or one's own interest, if generally followed, could have a substantial beneficial effect on a human group." -- "...one of the points of a moral code is to make people do things they would not do otherwise, e.g. from self-interest".

The essay's fixation on morality having to oppose common sense and self-interest is one of its most disturbing aspects. This section deals with 'self-interest', the next one with 'common sense'.

As noted above, at times 'self-interest' seems to refer to short-sighted or unthinking selfish desires – 'perceived' as opposed to real self-interest. We agree that acting on such motivation is undesirable. However, taking the term at face value, it simply refers to things that truly promote an individual's survival and overall well-being. Looking at it conceptually and longer-term – as ethics should – two fundamental points support the view that what is good for the individual, is also good for society:

Firstly, societies are made up of individuals – a group of flourishing individuals is a flourishing group.

Secondly, individuals cannot optimize their lives without effective interaction and cooperation with others. Apart from the fact that we cannot flourish while at odds with society, people also provide countless positive values to each other:

  • Varied and unique strength of individuals: exchanging or providing specific knowledge, skills, or possessions.
  • Effects of networking and cooperation: collective strength and ability to achieve large or difficult tasks; reduced unit costs.
  • Psychological benefits: cognitive and emotional stimulation and support in romantic and other friendships.

All of these are positive-sum or win-win interactions - no-one loses out. True zero- (or negative-) sum situations are actually quite rare. This is especially true in a modern world where by far the majority of our values are produced, and not just taken from nature. Whatever conflicts remain can be mitigated by explicit, rational moral codes, and sensible (ownership) laws. More commonly the dichotomy is not between what is good for the individual and society, but between a person (or group) wanting to dominate, and their victims.

Josh Hall's paper provides no examples or specific data of inherent individual/ group conflicts; it simply assumes them: "... particularly for social animals, there are many kinds of interactions whose benefit matrices have the character of a Prisoner's Dilemma or Tragedy of the Commons, i.e. where the best choice from the individuals' standpoint is at odds with that of the group as a whole". (Those cases where an individual's short-sightedness prevents him from correctly assessing his true self-interest are dealt with in the next section – they call for increased rationality and common sense, not abandonment of it.)
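To make the game-theoretic point concrete, here is a minimal Python sketch – my own illustration, not drawn from Hall's essay – of an iterated Prisoner's Dilemma using the conventional textbook payoffs (which are assumptions chosen only for illustration). Taking the longer, repeated-interaction view that conceptual ethics demands, reciprocal cooperation (long-term self-interest) outscores myopic defection for each individual and for the 'group' total alike:

```python
# Iterated Prisoner's Dilemma: long-term, reciprocal self-interest
# versus myopic defection. Payoff numbers are the standard textbook
# values, not data from the essay under review.

PAYOFF = {                 # (my move, other's move) -> (my payoff, other's payoff)
    ("C", "C"): (3, 3),    # mutual cooperation: positive-sum
    ("C", "D"): (0, 5),    # sucker vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),    # mutual defection: everyone worse off
}

def always_defect(my_history, their_history):
    return "D"

def tit_for_tat(my_history, their_history):
    # Cooperate first; afterwards mirror the partner's previous move.
    return "C" if not their_history else their_history[-1]

def play(strategy_a, strategy_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print("defect vs defect        :", play(always_defect, always_defect))
    print("tit-for-tat vs tit-for-tat:", play(tit_for_tat, tit_for_tat))
    # Two 'myopically selfish' agents each end up with less than two
    # reciprocating agents - and the combined total is lower as well.
```

Single-shot dilemmas and genuine commons problems do exist; the sketch only illustrates that, over repeated voluntary interactions, what is rationally good for the individual and what is good for the group tend to coincide.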

Not only is rational self-interest compatible with well-functioning society, it is a prerequisite.

If the goal of morality is to promote human flourishing, then its principles must embody character traits that will benefit the individual. Such a code must concern itself not just with group values, but personal ones as well - i.e. not just being honest with others, but not lying to yourself. In fact, as covered in Rational Principles for Optimal Living, social morality is a secondary issue. In such a view, morality is our friend, not something to practice kicking and screaming.

It is tragic that being moral is so often equated with being altruistic – acting for the benefit of others, and against one's self-interest. This must rate as one of the most brilliant con-jobs of all time: how various manipulators and despots have managed to convince the masses of this 'no pain, no gain' theory – that you cannot be a 'good' person if you don't sacrifice or suffer. Obviously they rely on the invalid concept of 'duty' - to the state, church, race, god, future – to achieve their purpose. How else do you prevent the perfectly obvious question: 'Why should I sacrifice myself?'. Many religions have perfected this dark art by setting the additional booby-trap of making it a sin to question the very moral code – or to apply reason.

One cannot maintain good psychological health living with such a contradiction – something has to give. Either we judge ourselves 'bad' or 'unworthy of happiness' – both ways our self-esteem suffers (see Nathaniel Branden's Reflections on the Ethics of Selflessness).

I shudder at the massive damage accumulated over the centuries, both to individuals and society. As budding transhumans let us not perpetuate this destructive 'what is good for me must be bad for you' mentality.

None of these objections denies the dire need for an effective moral compass – our conscience. Its automatized moral principles - our virtues – help guide us through short-term temptations and irrationalities, lack of knowledge or wisdom, and inherent uncertainties. However, a poorly calibrated compass may be even worse than wanton selfishness.

4 Common Sense versus 'Ethical Instinct'

Self-interest can be myopic and misguided; and seemingly obvious common sense can be mistaken. No-one denies this. One assumes that the author is not putting up these strawmen. Real self-interest and good common sense would be the antidote – not increased reliance on subconscious, emotional guidance.

However, it seems that what the author regards as harmful is not common sense's errors, but its proper operation. He claims that 'prevalent standards of behavior' blindly executed by an 'ethical instinct' generally provide better results than common sense. In fact, his very definition of 'right' embodies this view: "...[the right] is the moral instinct you have inherited and the moral code you have learned". By this definition, whatever beliefs our conscience reflects are right! Good or bad. Slavery or racial genocide are right, provided they feel right.

How else are we to interpret: "...there are clearly viable codes... the most obvious of these... the historically common practice of slavery.", and: "...In many cases, the adoption of a rule that seemed to contravene common sense... could have a substantial beneficial effect on a human group. If the rules... happen to be more beneficial than not on the average, genes for 'follow the rules, and kill those who break them' might well prosper."

In contrast, 'common sense' is denigrated: "...What is more, when people start tinkering with their own moral codes, the first thing they do is to 'fix' them to match better with their self-interest and common sense (with predictably poor results)."

Apart from the meta-ethical question of whether we actually share the author's ultimate value – the dynamism of groups – and leaving aside the 'zero-sum' assumption addressed earlier, the real question is: Is it true? Does conscience provide better guidance than common sense?

Unfortunately, the essay does not give any clear examples (let alone quantitative analysis) of conflicts between commonsensical and emotional assessments. Looking at the list of rules comprising 'moral deep structure' - from 'reciprocity' to 'bounds on moral agency' – I'm hard pressed to find any that contravene common sense. This is not surprising: common sense is largely an automatic, subconscious assessment of cause-and-effect – and what 'everybody knows'. Common sense is usually aligned with prevalent moral rules - not only because they are the strongest memes, but also because others will shun or punish us if we don't conform.

However, there are cases where common sense and conscience disagree – where seemingly sensible actions make people feel guilty. For example: prohibitions against birth control, women taking up careers, draft dodging. It seems to me that, rather than being inferior, common sense is often the leading edge of improving morality. When rules imposed by some authority or embedded by some other means become too absurd, common sense starts rejecting them.

By and large, common sense serves as a pretty effective guide for coping with reality - for living life. Our conscience, on the other hand, is only as good as the underlying rules that our 'instinct' happens to have picked up – and those are pretty atrocious at times. There is no doubt that throughout history people's conscience has frequently guided them to suffer needlessly, and to discriminate, enslave, and murder. Nazism must serve as one of the best examples of millions of people overcoming their self-interest and common sense to follow the moral codes of the day: blind obedience to the Fuehrer, sacrificing your life to the collective, and 'killing all of those who don't agree'. Yes, there was dynamism; for a while. The world would have benefited from more common sense, not less.

On a different point, there is a strange logical disconnect, when the essay (correctly) bemoans lack of common sense in bureaucracies and (current) artificial intelligence, and yet urges that we build 'ethical instincts' into them to combat common sense!

In any case, common sense is not the complete answer. As we progress from animal to transhuman, it is rationality - common sense's grown-up brother – that helps us overcome limitations of painfully slow and destructive natural selection.

5 Moral Progress and Rational Design

Before addressing the merits of rational moral design, let's survey the author's view of moral progress: "...Up to now, we haven't had, or really needed, similar advances in 'ethical instrumentation'." This amazing statement ignores the vast amount of unnecessary suffering, and dramatic retardation of technological and economic progress caused by poor morality (prohibitions against education, lack of individual freedom, etc.). Perhaps what was meant is that now there is a much greater need for sorting out our morality, as massively destructive technology ups the stakes.

On the other hand, the paper does judge certain changes to be improvements: "...Almost all of [moral progress] has been in the expansion of inclusiveness, broadening the definition of who deserves the same consideration you always gave your neighbor..... Perhaps the rejection of wars of adventure can also be counted". Western democracies (and corporations) also seem to be 'better': "...the liberal Western democracies, were significantly less evil", and "... it is probably a moral thing for corporations to exist. Western societies with corporations have been considerably more dynamic... than other societies...".

Such vacillation on judging moral progress is a direct result of poorly defined terms and goals. How can we even debate the relative merits of different rules or codes if we can't recognize moral progress? In addition, the tension between pronouncing judgments on moral codes, and claiming that "... [moral] codes are smarter than the people" (including current moral philosophers?) comes home to roost as the article moves from describing morality, to recommending improvements (such as the rule that 'moral instincts' are the best mechanisms for choosing what is 'right' – promoting the 'good').

The author firmly rejects the idea that better cognitive tools and knowledge – i.e. rationality – should be used to discover better moral rules. He calls it "...an extremely seductive proposition and an even more dangerous one [, that] is responsible for some social mistakes of catastrophic proportions...".

Yet then he seems to hedge his bets (perhaps to justify his own 'tinkering') with the following qualifications: "...in pre-scientific times, there were many effects of actions, long and short term, that simply weren't understood.", and prefixing "the codes are smarter than people..." with the phrase "historically anyway". Then there is also the suggestion that "... using computer simulation as 'moral instrumentation' may help weigh the balance in favor of scientific utilitarianism...".
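That last suggestion is worth taking seriously. As a purely illustrative sketch – with arbitrary, made-up parameters, and in no way a model endorsed by the essay – such 'moral instrumentation' might compare the outcomes a small simulated society achieves under different candidate rules:

```python
# A toy illustration of 'moral instrumentation': simulate a small society
# under different candidate moral rules and compare average outcomes.
# All parameters (population, productivity, theft rate, loss factor) are
# arbitrary assumptions chosen only to show the shape of such an instrument.
import random

def simulate(population=100, rounds=50, theft_rate=0.3,
             respect_property=True, seed=0):
    rng = random.Random(seed)
    wealth = [10.0] * population
    for _ in range(rounds):
        for i in range(population):
            wealth[i] += 1.0                      # everyone produces one unit
            steals = (not respect_property) and rng.random() < theft_rate
            if steals:
                victim = rng.randrange(population)
                taken = min(1.0, wealth[victim])
                wealth[victim] -= taken
                wealth[i] += 0.5 * taken          # theft destroys half the value
    return sum(wealth) / population               # average wealth per person

if __name__ == "__main__":
    print("with a property-respecting code :", round(simulate(respect_property=True), 1))
    print("without it                      :", round(simulate(respect_property=False), 1))
```

Note, however, that even the best such instrument only measures outcomes against a standard we have already chosen; selecting the standard of value remains a deliberate, rational judgment – which is precisely the point at issue.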

The essay argues for 'ethical evolution & instinct' and against rationality & deliberate design using a number of approaches: by definition, evolutionary theory, cognitive science, psychology, historic and other empirical evidence.

Quote: "... Ethical evolution clearly has something to say about the right; it is the moral instinct you have inherited and the moral code you have learned. It also has something to say about the general good; it is the fitness or dynamism of the society." – These definitions must not be confused with what we usually and instinctively mean by the words 'right' and 'good'. Calling slavery 'right' is counter-intuitive, to say the least. Calling actions driven by an ethical instinct 'right', does not make them so. Similarly, simply calling whatever evolution dishes up – dynamism, fitness, or whatever - 'good', should convince no-one.

However, equating 'the good' with 'dynamic survival' (and thus evolution) is not just semantics; it also reflects the author's value judgment. Evolution is seen as the new Omniscience – It knows best. If it survives, it must be good – 'survival of the fittest'. This, by itself, is little more than a tautology. What needs to be demonstrated is a) that moral codes (rather than other factors) played an important part in survival, and b) that increased rational design would not have produced even 'fitter' societies. Neither point is made. It is hard to believe that the many cruel and destructive 'moral' memes carried in past and present societies represent an 'optimum' of possible outcomes. Quote: "...Such codes tend to have substantial similarities... because of optima in the space of group behaviors that form memetic-ecological 'niches'."

In the context of such admiration for natural selection, it is ironic that rationality – the key characteristic of evolution's most successful design – is regarded as inferior to (moral) instinct – an earlier, more primitive development. For an 'evolutionary ethicist', rationality's evolutionary success alone should be sufficient argument against our conscience's superiority.

Moving on to cognitive science: "... humans have the built-in hardware for obeying rules but not for the general moral calculation...". What a dim view of mankind! In any case, this claim is rather misleading: neither task relies only on 'built-in hardware' – both actually require 'fully configured hardware and software'. Once you look at the whole human, we do indeed have the capacity for 'moral calculation' - exactly what separates us from animals. Humans can foresee consequences of actions, and generalize this experience into principles. It is our self-awareness joined with our ability to think about things that makes moral responsibility – and thus complex social codes – possible. More about this later.

Let's now look at empirical evidence for: how codes are formed, whether codes correlate with outcome, and whether conscious design is inferior.

The paper states that "...the codes themselves are formed by the consequences of the actions of the people in the society." Absent further details, this observation is trivial – what else would codes be formed by than 'actions of people'? Actually they are formed by a mix of chance, conscious and unconscious action by individuals: church and government leaders trying to increase their power, people attempting to control their children (and spouses), philosophers trying to save mankind, etc. Which moral memes survive again depends on a slew of factors, including of course their practicality. However, blind evolution is notoriously slow.

No empirical data is provided either for the formation of codes, or for causal relationships between particular codes and outcomes – let alone between codes opposing common sense and self-interest, and (say) prosperity or military success. There is scant correlation of moral codes between countries that have, by the author's definition, 'prospered': the wealth of Saudi Arabia, Japan, Germany, Sweden, and the US; the military power of Iraq, Israel, and the US; or the sheer number of Russians, Indians, Africans, Chinese, and Americans. No doubt specific moral rules do have profound effects on society; however, the hodgepodge of rules embodied in social codes makes analysis tricky.

For our purposes the more relevant point is: no evidence supports the claim that less intentional design of moral systems - less tinkering – produces better outcomes. If anything, the opposite is true. Consider this: it is the deliberate application of intelligence and rational thought, built on accumulated knowledge, that has driven progress in science and technology. There is no reason to believe that this same human ingenuity cannot be applied to human conduct. It seems to me that the Founding Fathers did just that. Through their design of the Constitution they improved America's political and social ethic by orders of magnitude. It certainly wasn't blind, unconscious mutation and selection that made that happen.

More generally, it is a mark of civilization and progress for people to reject primitive mob responses and to (more or less) rationally design and institute laws to resolve conflicts, and to overcome temptation to violate the rights of others.

Interestingly, when it comes to 'machines' the author seems to share my sentiment about not just letting blind evolution do its bloody work, but helping to guide it: "...Why shouldn't we just let them evolve consciences on their own (AI's and corporations alike)? If the theory is right, they will, over the long run. But what that means is that there will be many societies of AI's, and that most of them will die off because their poor proto-ethics made them waste too much of their time fighting each other (as corporations seem to do now!), and slowly, after the rise and fall of many civilizations, the ones who have randomly accumulated the basis of sound moral behavior will prosper. Personally I don't want to wait".

I think it would be smart to apply the same logic to human ethics. I don't want to wait either. If we can't resolve our moral differences through rational design and debate, then force looms as the only alternative. Let's do more of our moral development in the virtual world of ideas, rather than in the hard reality of evolution alone.

From another perspective, we really don't have a choice about making moral decisions: Doing 'nothing' – not thinking, not being proactive – also has consequences. And we know it. Because we are aware of our ability to think and choose, we can never fully abdicate our responsibility. Additionally, if we have any self-confidence whatsoever, we must surely believe that each one of us can contribute something valuable to the meme-pool, and to our own morality. It seems then that the only questions are how we do it, and how proactive we are. How, and to what degree, do we want to expedite – and add some vision to – blind, torturous evolution.

The essay proposes that we rely more on our 'instinct'. This strategy would likely amplify existing codes – good and bad – leaving our future largely to our genes and existing memes. I am confident that we can do better than that. Using our intellect, we should be able to advance ethics in ways similar to the tremendous progress seen in, for example, medicine and communications.

Is it really wise to follow this "...extremely seductive... and an even more dangerous" approach? Not all seductive things are bad. And its dangers must simply be faced.

Before looking at limitations and risks, let me be a bit more specific about the kind of design I am proposing: We certainly don't want a brittle, absolutist, know-it-all, purely deductive approach. What we need is ongoing, open-minded, empirically guided refinements to rational moral theory. There are degrees of evidence and certainty. Some moral principles seem to be beyond doubt, while others are rather tentative. As we do more research we will identify limits of knowledge and control ever more clearly. We obviously have to take these factors into account when making decisions about implementing changes – both to ourselves, and in society. Any rational design must, of course, also include any insights we gain from evolution. Such moral investigation and discovery is no different from any other science – but it does go beyond description, to make explicit value judgments.

Now for some precautions: Firstly, yes, rationality has its limitations – it is neither omniscience nor infallibility. That is why we have to test our ideas both against the minds of others, and against reality. However, the more of our testing we perform in the realm of mental simulation - of imagination and debate – the fewer hard knocks we suffer in reality.

Additionally, we certainly have an amazing ability to rationalize existing beliefs and short-term desires. When people 'tinker' with their morality there is clearly the risk that they will design an inferior code – that they will do 'poor science'. As an interesting sideline, the vast majority of moral philosophy has been motivated by altruism, i.e. doing things against your self-interest (Christianity, communism, fascism, utilitarianism, Kant's duty ethic, etc). It seems likely that in these cases, a poor choice of goal was more responsible for the carnage they caused than a pure lack of rationality.

Anyway, here are some of the more obvious pitfalls we have to avoid:

  • moral architects with malevolent motives, such as wanting to control others
  • forcing others to adopt the moral system
  • ignoring our fallibility and lack of omniscience, not providing sufficient checks and balances
  • insufficient testing of theories in the free marketplace of ideas – i.e. rational debate
  • disregarding our propensity for rationalization and emotional choices

A related psychological issue is how to get the power and stability of moral character required to overcome short-term thinking, irrationality, and myopic or emotional selfishness. The essay claims that a socially derived conscience is the answer. I contend that the best remedy is to methodically discover/ develop a non-contradictory system of principles that foster individual (and thus societal) flourishing. Knowing such a system, and understanding how and why it optimizes life in the long run, will be the most powerful force to motivate people to be moral. Such compatible memes are also very likely to persist over time.

Unwittingly, the author makes a similar point: In stark contrast to recommendations elsewhere, a (sensible) call for a rational ethics aligned with self-interest is given as the robots' motivation for being moral: "... it will be necessary to give our robots a sound basis for a true, valid, universal ethics that will be as valuable to them as it is for us". I assume that those robots will be smart enough to decide what codes to live by.

Some of us believe that as transhumans – humans working on overcoming our genetic and social limitations – we should utilize our accumulated knowledge and combined ingenuity to consciously design better rules and systems: moral, legal, and political. We need visionaries, scientific analysis, and rational dialog to discover which moral principles are more likely to avert the kind of carnage mankind has brought onto itself in the past; and more importantly, to minimize the risks and to optimize the positive potential of future technology.

"... It is the height of arrogance to assume that we are the final word in goodness...." – Who claims to be the final word on goodness? No, not final – but it seems to be highly irresponsible to have no (considered) opinion on what is right and wrong.

6 Conscience, our Moral Compass

"... In summary, ethical evolution claims that there is an 'ethical instinct' in the makeup of human beings, and that it consists of the propensity to learn and obey certain kinds of ethical codes." -- "...In particular, I contend that moral codes are much like language grammars: there are structures in our brains that predispose us to learn moral codes, that they determine within broad limits the kinds of codes we can learn, and that while the moral codes of human cultures vary within those limits, they have many structural features in common." -- "...Moral codes have much in common from culture to culture; we might call this 'moral deep structure'."

There are a number of problems with this notion of 'moral deep structure': In reality, rules that prevail at different times and places frequently contradict each other (women as property vs. freedom, eye for eye vs. turn the other cheek). Secondly, many rules simply make sense – they help individuals and society – they are 'deep' in the way that the memes 'use a knife to cut' and 'cooking improves food' are. Finally, to a large extent 'deep' commonality exists simply by definition; morality is often equated with altruism, therefore certain rules like 'practice rational self-interest' simply don't make it onto the list.

The concept of an 'ethical instinct' is also flawed – it is misidentified. Evidence shows that humans (and animals) automatize a wide range of behaviors – many of which have nothing to do with morality. We are excellent pattern processors. We easily learn and automatize rules, behaviors, and responses – i.e. subconsciously react to specific stimuli or conditions.

Many of the patterns we learn are attached to our pain-pleasure axis. In humans, the mental pain-pleasure dimension (in contrast to physical) plays a dominant part: we are very good at imagining. Powerful emotions can be linked to patterns by direct experience (eating honey, burning stove), by external reinforcement (praise or scolding), or by mental influence or deliberation (someone or something convincing us that something is good or bad – a speech, a book, or just thinking). Sometimes emotional responses are beaten into us. Early influences, at a time when our minds are more plastic, are particularly potent and deep-seated. On the positive side, good automatized principles – including virtues – also perform a crucial developmental function: allowing us to acquire useful responses long before we are able to figure them out for ourselves.

This mechanism operates in many realms, including those usually considered outside of ethics, such as driving, eating, art or sports. The collection of responses that attach to emotional evaluations of our self-worth – are we good or bad – is what we call 'conscience'. This moral compass is what alerts us when our behavior or intentions align with, or contradict, automatized moral rules or values – and we experience pride, or guilt. How our compass guides us depends on a complex mix of deliberate external programming, random environmental influences, genetic predispositions, plus personal thought.

It is extremely important to note that what our conscience tells us bears little relationship to what really is good or bad. What feels right or wrong can be diametrically opposed to what actually is good or bad for ourselves or for society – witness the long list of inverted 'virtues' and 'vices' accepted by some groups at various times. For example: approval of bigotry, blind obedience, self-torture – and conversely – condemnation of masturbation, birth-control, and (as I contend) pride and self-interest. We all have a moral compass (with some pathological exceptions), but it is not automatically calibrated to help achieve any particular (meta-ethical) goal. The 'moral organ' by itself has no ability to distinguish between good and bad codes.

Powerful as this mechanism is (I'm sure that people have committed suicide over masturbation guilt), it is not all that mysterious.

Contrary to what the article claims, we do not possess special brain structures particularly receptive to 'certain kinds of ethical codes'. Not only does the essay fail to present evidence for a special 'ethical instinct', it also provides no analysis of specifically what kind of rules it supposedly can or cannot learn, which codes it will learn in a given environment, or which of the many (often conflicting) codes it will actually apply to a given situation. These omissions prevent the theory from being testable or predictive, and furthermore preclude it from guiding the design of machine instinct or ethic - the very purpose of the essay.

What about endowing corporations with a 'conscience'? My overall objection to this question is that organizations are simply a collection of individuals, collaborating on the basis of certain explicit and implicit parameters. As such, the individuals' consciences determine the company's. However, trying to stretch an analogy, one could say that a company's bottom line is its central pain-pleasure mechanism. Behavior tends to be reinforced along profit lines. Through its employees, corporations pick up prevalent ideas and customs and incorporate them. Governments, on the other hand, are driven by influence and size.

While we can play such metaphoric games, their utility is highly questionable. Organizations are not humans. They are not even machines. It is far more instructive to investigate human goals, and to discover what principles, virtues, and laws are likely to foster them. We can then improve our institutions through increased personal wisdom and morality, leading to better corporate codes.

On the other hand, incorporating a 'moral sense' in autonomous robots is primarily an engineering challenge, not a conceptual problem. This is not the place to go into details of what this would entail; however, it is crucial to note that this by itself would not solve anything - its actions would only be as good as the moral rules it happens to pick up. It is only conscious, intelligent evaluation that can filter out bad memes, and design better ones. Failing that, it too would be at the mercy of whatever harmful customs happen to be out there – or worse, such a primitive moral mechanism could be hijacked by some malevolent entity or philosophy, as has so often happened in human history (cults, despots, etc.). It would be at risk of both accidental and deliberate subversion.
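A minimal sketch may make the danger plain. The following toy 'moral compass' module – my own illustration, with hypothetical names, not a design proposed in the essay – simply absorbs and weights whatever rules its social environment reinforces, exactly as an instinct-only mechanism would:

```python
# A sketch of an 'instinct-only' moral module: it absorbs whatever rules its
# environment reinforces, with no rational filter. Class and method names
# are illustrative assumptions, not a real robot design.
from collections import defaultdict

class NaiveMoralCompass:
    def __init__(self):
        self.rule_weights = defaultdict(float)   # rule -> learned approval weight

    def absorb(self, rule, reinforcement):
        """Strengthen or weaken a rule according to social praise or punishment."""
        self.rule_weights[rule] += reinforcement

    def feels_right(self, rule):
        """The 'conscience' check: positive weight feels right, negative feels wrong."""
        return self.rule_weights[rule] > 0

compass = NaiveMoralCompass()
compass.absorb("tell the truth", +1.0)                      # benign meme, praised
compass.absorb("obey the leader without question", +2.0)    # harmful meme, praised harder
print(compass.feels_right("obey the leader without question"))  # True - no filter
```

Nothing in such a mechanism distinguishes a benign meme from a vicious one; only a layer of conscious, rational evaluation on top of it can.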

7 Conscious Moral Choices

By ignoring or denigrating personal motivation and conscious choice, the essay obliterates people as autonomous moral actors - thus missing the highest form of morality. Conscious individuals are reduced to gene and meme pools – morality to what we happen to do. Personal responsibility is lost in the spectacle of focusing on our animal heritage. Values, motivation, and choice feature only as mischaracterized 'self-interest' and 'common sense' – supposedly, mortal enemies to the Good.

Everyday morality and legal systems both recognize different levels of moral action. These crystallize into three categories: can't think, didn't think, and thought about it.

Animals, babies, and mentally retarded people are unable to rationally think about their actions. We don't expect them to take responsibility for their behavior. At best, we can 'make them understand' that they did something good or bad by immediate reward or punishment.

The two higher levels apply only to thinking entities – at the moment, only humans. Oftentimes we too act unthinkingly – be it appropriate, or not. In this state any number of factors may dominate: general habit, fear, love, greed, patriotism, laziness, passion for some particular value, etc. Here, irrespective of what 'made us do it', culpability is diminished. Crimes of passion, careless accidents, and cult-induced behavior fit this category. On the other hand, our responsibility for not thinking is well recognized.

At our highest level of functioning we think about our choices, and take responsibility for what we do. Again, convention and law judge accordingly: premeditated crimes carry the highest penalty – conscious, deliberate acts of virtue garner the greatest honor.

Note that in both cases – thinking or not – action may turn out to be beneficial or detrimental (depending on habits, chance, knowledge, and cognitive skill). Additionally, it may be in line with our conscience or not (we may, for example, discriminate against someone, while feeling guilty about it). Finally, it may coincide with explicitly held values, or it may contradict them (e.g. a communist accumulating personal wealth).

The common denominator is that we have the ability to think about our actions – and we know it. This, ultimately, is the root of personal responsibility. We can know what we are doing, and we know that we are doing it. We are aware of the fact that our choices and actions have consequences (and to some extent, what these are). We are also aware of the fact that we, as individual entities, are the causal agent.

The keys to prescriptive ethics are rationality, being able to foresee consequences, and self-awareness (including mental). This is what it means to say 'I could have done otherwise': I could have done otherwise if I had taken some other mental action, if I had made a different choice somewhere along the line. Looking forward, talking about what I should do - what I want to bring about – presupposes personal choice, and an ability to change (see Volition vs. Determinism).

Crucial to understanding volition and personal responsibility is the caveat that our freedom of choice and action is severely constrained by various factors: 'nature and nurture', cognitive limitations, personal knowledge and beliefs, time limitations, etc. This is the context of our ability. It is the given. The bottom line, though, is that given this context, we humans alone possess abstract conceptual thought and reason. This ability confers (self-) awareness of thought and mind, and self-reflexive choice. This gift makes it possible – and indeed imperative – for us to pro-actively improve our morality.

Our conscience is an invaluable component of our moral faculty. Most of our everyday decisions and responses happen automatically, subconsciously. Many situations are too complex for us to analyze explicitly and consciously; or we simply lack time or information. Also, sometimes 'in the heat of the moment' we lack objectivity. Thus we have to rely on good habits - on our moral compass. This is why we benefit from rationally evaluating the rules and values that we live by, from scientifically discovering optimal moral codes, and doing what it takes to reprogram ourselves. Ideally, we want optimal, explicit moral codes for achieving our meta-ethical goal – and for our automatic, habitual responses to reflect these.

Moral philosophy, practiced as a science, can help us discover good principles, and (cognitive) psychology can help us change our habits – to recalibrate our compasses. The alternative to proactively applying the highest level of intelligence to this problem, is to continue to function nearer the animal level – and to pray for evolution to eventually select the right memes (keeping in mind that evolution's measure of success, is not our happiness or well-being, but reproduction).

I claim the degree of responsibility that we take for ourselves – the scope and success of deliberately re-programming our conscience - is a direct measure of our humanity. (See also, Max More's Self-Ownership: A Core Transhuman Virtue.)

The essay addresses – and dismisses – the need for personal morality in just one reference: "... the kinds of general qualities of character that were considered good (and indeed were good) have changed significantly over the past few centuries, and by any reasonable expectation, will continue to do so". Apart from the question of how the author judges that 'they were indeed good', this quote stands in stark contrast to the theory's 'deep moral structure', and to other claims of the universality of rules: "... ancient moral codes contain wisdom... which hasn't changed a bit since the Pharaohs". I strongly support the view that certain principles are quite universal: character traits such as honesty, integrity, productiveness, respect for others – not to mention rationality – generally promote individual as well as social flourishing.

Understanding personal morality, and designing better selves, is a prerequisite for improving groups, corporations, and governments. It serves not only as the foundation for better laws and politics, but also for machine ethics - how to treat them, how to program them, and what to expect from them.

8 Human, Transhuman, and Machine Ethics

"... For our own sake it seems imperative for us to begin to understand our own moral senses at a detailed and technical enough level that we can build their like into our machines. Once the machines are as smart as we are, they will see both the need and the inevitability of morality among intelligent-but-not-omniscient nearly autonomous creatures, and thank us rather than merely trying to circumvent the strictures of their consciences." - "... Suppose, instead, we can build (or become) machines that can not only run faster, jump higher, dive deeper, and come up drier than we can, but have moral senses similarly more capable? Beings that can see right and wrong through the political garbage dump of our legal system; corporations one would like to have as a friend (or would let ones daughter marry); governments less likely to lie than your neighbor is".

We have, for a long time, had the potential to dramatically improve our moral 'sense': The pertinent knowledge has been available, but has not always been utilized. One of the most significant spurts of moral growth was the courageous and ingenious formulation and implementation of the American constitution. Unfortunately, the values of individual liberty and rights were neither carried to their logical conclusion, nor adopted by much of the world. Identifying personal freedom and responsibility as core moral principles not only promotes morality from a group perspective, but more importantly, it provides rational and psychological motivation for personal virtue. What a pity that these ideas weren't developed more vigorously.

Now, two centuries later, not only can we capitalize on these (and other) brilliant insights, but we can additionally draw on huge advances in other fields – e.g. cognitive psychology, computer and complexity sciences, communications, bio-sciences, philosophy, etc. All of this knowledge provides a much clearer picture of what kind of rules and principles foster human flourishing, and what we can do to embody and implement them. Right now, we have vast potential for moral growth.

If we apply as much rationality to the problem of ethics as we have to technology, we could see spectacular improvements. Those of us who manage to gain ever increasing control over our own minds and bodies to overcome our historic limitations, are the first transhumans.

Naturally, there are huge challenges facing us. For one, very few people even know about or understand the rational approach to prescriptive ethics. Another problem is the complexity of group dynamics: it is impossible to predict the 'critical mass' of a new idea – how and when 'rational ethics' could catch fire, and infuse society. However, a positive factor is that as individuals learn about and implement such moral improvements in themselves, their benefit will automatically extend to the organizations and legal and political systems they touch – and improve their 'moral wisdom'.

For reasons given above, I think that it is a patently bad idea to rely on a 'moral instinct' for good behavior. To the extent that this concept can even be applied to organizations, what would it mean? Such an 'ethical organ' would be an autonomous system within the company that polls public opinion and forces the company to act accordingly, overriding its profit motive and the common sense of its employees. It seems that we have something close to that right now: for example, the public wants companies to pay outrageous liability settlements; the masses want subsidies and price controls - and the 'political garbage dump of our legal system' forces companies to comply. Superior moral beings, which the author hopes would be able to 'see through' this morass of political motivation to discern right and wrong, can only do so by rational analysis – not by going with the flow.

This is not to imply that laws in themselves are bad. Quite the contrary. Compare American companies to those operating in effectively lawless societies, such as Russia. Indeed, laws and companies are inherently moral inventions. What makes corporations virtuous is precisely the fact that they are legal entities that concretize individual and property rights. Companies embody and promote two core values: productiveness, and voluntary exchange – the ultimate win-win situation.

Contrast this with current governments. Instead of fulfilling the legitimate moral function of protecting basic rights, by far the bulk of their effort is occupied with involuntary redistribution – worse than zero-sum. The bureaucratic overhead ensures a net loss. No wonder that companies and individuals scramble to grab 'their share'.

Most government functions are deeply immoral. Recognizing and eliminating these problems would dramatically improve the human condition. Company owners, on the other hand, need to rationally work out principles that foster long-term profitability compatible with respect for individual rights. That is their responsibility.

What does any of this tell us about machine ethics? While this is not the place to go into the complexities of artificial intelligence design, a few comments are appropriate:

Firstly, there is the question of how we should treat increasingly capable and autonomous artifacts. Key to all of these issues of machine ethics is understanding human morality. Once we identify our overall goal (say, long-term human, transhuman, and posthuman flourishing), we can then measure our moral principles against that standard. For example, this would tell us that it is immoral to destroy machines of unique value. Rational ethics also clears up confusion over what rights (self-authority) and responsibility to assign to different kinds of conscious entities – let alone puzzlement over plant rights. Naturally, as machines reach and eclipse human-level intelligence and awareness, the question changes to how they are likely to treat us!

At the moment, rational ethics has not developed sufficiently to predict how beings far more rational and far-sighted than us will behave. Lacking much of the evolutionary baggage that we carry, their motivations and values may be quite different. In fact, it is not at all clear how much we will be able to foresee. However, our best chance of understanding them is to pursue rationality to the max.

Finally then, what special features can we design into our intelligent machines that might increase their morality, and that might protect us? Ultimately, I don't believe that it will make any difference what rules or values we build into them – they will be able to reprogram every aspect of themselves. However, there is the possibility that small differences in initial configuration may have dramatic effects on their developmental path, and practical outcome. At this stage we don't seem to know.

I hope that the knowledge we gain from developing our own transhuman morality will provide insights into the ethics of super-intelligent beings.

Summary

Crucial to this debate is identifying our purpose: what goal are we aiming for? Indeed, we need to be clear that it is a specific state or condition that we want to achieve, and not just a report of what is. We have no inherent duties. Clarity of purpose and well-defined terms are prerequisites for good and effective science – including the science of ethics.

It is not our selfishness that is the problem – what we need is more self-interest. Individuals are not inherently at odds with society. It is not less common sense, but more, that will foster our goals. It is not our 'instincts' that give us the best chance of success. It is rationality, debate, and deliberate design – which also reduce conflict and the need to resort to force. Furthermore, morality should be 'our friend' – a desirable tool for each one of us – once we correctly calibrate our compass.

And, no, we don't need to wait till we can 'edit our own biological natures'. Bootstrapping our rationality, and deliberately upgrading our ethics can help us overcome many of our moral limitations right now. We can personally choose to enter the transhuman era by taking more responsibility for reprogramming our own moral systems. Why not apply the scientific approach to prescriptive ethics? It has improved our lives in so many other areas. Let us design and test our ideas through (mental) simulation and debate, before letting them loose on slow and painful evolution. Let's implement the best moral codes we can - not just for current challenges, but more importantly, to better face the future. This will give us a solid foundation for effectively dealing with ethics for machines.

Peter Voss Aug 00