Saturday 26 December 2015

The price of human life

A debate is currently taking place in New Zealand about whether the drug Keytruda should be funded by the national drug-buying agency Pharmac. Keytruda is reported to be a successful treatment in about one third of all cases of melanoma. I have been unable to find the origin of this figure, but few seem to doubt that the drug has significant efficacy, and some other countries have made it available to patients at a highly subsidised cost. Pharmac itself recently decided that it was too costly to fund here, but this decision may yet be reversed by the NZ government.

Of course there is much more to the funding question than economics. The issue is partly statistical, and it may well be that the "one third" figure is not particularly robust. There are also more human considerations such as compassion and equity. However, it is undeniable that economics has to play the dominant part in the debate, and this leads to the question I want to ruminate on: what is the price of a human life?

Let me be clear. I am not asking a universal question. My question is directed at particular societies and I am sure the answer will not always be the same. Perhaps that is already an indicator that, world-wide, we are very far from being able to treat all human beings as equal, but I am going to have to put that into the "too hard" box for the present. So let us admit that citizens of New Zealand are treated much better than, say, citizens of Syria or Mexico.

Within New Zealand (or pick your own country) the political parties would surely say that they stand for equal treatment of all their citizens. And therefore, if a price is put on a human life, this price should be the same for all citizens. If you don't agree then read no further and don't stand for political office! Well, perhaps that was a little unfair: after all, you could argue that in one domain the common price of a life should be one amount and in another domain a different amount. The trouble with that position is that one can partition domains into subdomains as much as one pleases, pricing every subdomain differently, and the supposed equality of the citizenry soon vanishes in a puff of smoke. So bear with me and let's look at the ramifications of setting a fixed price for everyone's life across every domain of the country.

If there were such a price many national funding decisions would become easier. Take road safety measures as a first example. Should we increase police monitoring of busy highways in Wellington? Well, how many lives would it save? That tells us how much we should spend. Or should we install fencing at some of the trickier stretches of the Great Walks? Again, the number of lives it would save gives us the answer. Such calculations would enable us to say which of many competing priorities in different domains should be funded and to what level.

So why don't we have an agreed price for a human life? I think part of the answer is that we can't help confusing "price" with "value". OK - let's admit that we can't put a numerical value on a human life; I'm happy to concede any descriptive word about its value (even "infinite") but still wish to distinguish between price and value. Having made that concession we can move forward and think about the price of a life purely in economic terms. A bigger reason for our hesitation to fix a price is that the way in which we would do it is very murky. Yet surely we could at least begin, so long as we recognise an important caveat.

We want a price that we shall use for determining policy in general. So our price is an average price (averaged over all residents of the country). It is not to be used as a summary of how much a particular individual is worth. But an average should surely be easier to come by (do I show naivete!?): it will be a ratio in which the denominator is the population size and the numerator is .....  Well, what? The Gross Domestic Product would be one possible numerator. Here perhaps is where the debate will be had, and I would be interested in what readers think would be most appropriate.
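Just to make the arithmetic concrete, here is a purely illustrative sketch in which the GDP and policy figures are rough assumptions of my own, not official statistics; it combines the numerator suggested above with the road-safety logic of the earlier example.

```python
# Purely illustrative sketch; all figures are assumptions, not official statistics.
gdp = 240e9              # assumed national GDP, in dollars
population = 4.6e6       # assumed population
price_per_life = gdp / population
print(f"Average 'price' of a life: ${price_per_life:,.0f}")

# Applying the road-safety logic: how much could a measure justify spending?
lives_saved_per_year = 3                      # assumed estimate for the measure
max_annual_spend = lives_saved_per_year * price_per_life
print(f"Maximum justified annual spend: ${max_annual_spend:,.0f}")
```

Whatever numerator is eventually chosen, the point is that the same ratio would be used across every domain, which is what makes competing priorities comparable.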

To sum up: having an agreed national average price of a human life would enable us to make economic decisions on how much to spend on certain social goals, and it would enable us to prioritise those goals. It would also torpedo arguments of the type "Country X does this, therefore so should we". To do this we have to recognise that the price would only be used for societal decisions, not individual ones, and we have to be very clear about the difference between "price" and "value"; I am certainly not proposing that we estimate the worth of people - communities or individuals - by a dollar amount.

Monday 30 November 2015

Intentions and the road to hell

The shocking attacks in Paris have brought home to many Westerners the horror of war and the sufferings of similar victims in Syria and Iraq. In this post I would like to reflect on whether there are significant differences between the various acts of mass killing brought about by the war in the Middle East.

I'm sure we can all agree that the degree of personal anguish suffered through the loss of a loved one does not depend on how many others were killed at the same time. Therefore our empathetic response to the bereaved should be the same whether the murders occurred in Paris (120 dead) or in the Beirut attack (41 dead) the day before. Nevertheless, judging from the degree of media coverage, the Paris tragedy seems to be regarded as more serious than the Beirut tragedy.

Why do we have such differing reactions to apparently similar events around the world? Well, one response is that perhaps we wouldn't if we saw equally detailed media coverage. In my opinion that is not the whole story because, to sell their products, the media give us the coverage that is most likely to resonate with their readers - and readers in the West naturally tend to read Western media. So I think that Western readers as a whole do find outrages against their own kind more culpable. No doubt non-Western readers exhibit a similar bias in favour of their own.

So, having come to the conclusion that we have a natural bias towards our own culture, let's return to my original question and ask if there are culturally independent differences between acts of mass killing. We can focus on this question more clearly if we consider degrees of culpability.

I suggest that there are two key ingredients in how much culpability we attribute to a crime. The first is not very controversial: we ask how much harm was done. How many lives were lost? What was the value of the property stolen? What physical damage was done? Some types of harm are not so easy to quantify, and we can have different opinions on questions like "How much long-term psychic damage was done?"; often, however, the amount of harm can be readily measured. By the way, although we can agree that culpability should be proportional to harm, it is one of the sadder facts about our society that punishment seems not to be correlated with harm done.

There is a second ingredient of culpability that we take into account: the intentions of the individuals who committed the crime. This can be much more controversial because much of the time the only information we have about the perpetrator's intentions is from the perpetrator himself. However most of us would accept that there is a difference between a deliberate shooting and an accidental gun discharge even if both of them result in a death; so we do accept that intent matters.

It is the job of courts to judge how intentional a defendant's crime was. However, here we are considering acts of violence for which there is no defendant in court. Two notable examples were recently debated by Sam Harris and Noam Chomsky (in an email correspondence in which there was no meeting of minds). They had no trouble agreeing that the intention of the 9/11 attackers was to cause loss of civilian life (well, they are intellectuals, so nothing is certain).

However they disagreed heatedly about the US bombing of the Al-Shifa pharmaceutical factory in 1998. The bombing caused thousands of Sudanese to die because they were deprived of essential medicines. The Clinton administration claimed that the factory was making chemical weapons and that it had no intention of committing mass murder. Harris is basically sympathetic to this justification: for him the claimed lack of intention mitigates the crime. Chomsky, on the other hand, believes that the consequences of such a carefully planned attack must have been plain to its instigators.

This example captures the difficulty of allowing intentions to mitigate culpability. Do we try to discern the personal intention of President Clinton? Of his team of senior advisers? Or do we try to define a governmental intention? None of these is really possible. There are some philosophers who evaluate actions entirely by their consequences - for them the bombing stands condemned despite the aggressors pleading lack of intent to kill. Our present traditions and legal practices are somewhat out of sympathy with such consequentialist ethics. It would be a brutal legal system that operated by these principles, and our more nuanced approach certainly seems preferable at the national or local level. But at the international level, where we are judging the actions of nation states, consequences are often all we have to go on. That should not prevent us from trying to assess intent, but we are often groping in a fog of ignorance.

I hope it is now clear why we find such a lack of unanimity even when we make a determined effort to judge independently of our cultural bias. But where does that leave us when we formulate our reactions to the next act of killing? I suggest we should condemn with our gut rather than with a measured amount of ferocity that depends on who the aggressor was, how much harm was done, and whether we think the act was calculated. Dead innocents are dead innocents and we should condemn their deaths no matter the circumstances. Following this logic also means that we should condemn reprisals if they include killing innocents. And therefore, inevitably, we shall often find ourselves condemning "our side". Well, so be it.

Friday 11 September 2015

14 years on from 9/11

The Al-Qaeda attack on the USA on September 11, 2001 was a landmark event. It and its aftermath will be discussed by historians for centuries. No doubt every faction will develop its own mythology, and these myths will obscure a sober appraisal of exactly how the twenty-first century kicked off with such a bang. Fourteen years into "The War on Terror" I would like to reflect on how successful those 19 hijackers (mostly Saudi nationals) were.

Of course no-one can doubt their immediate military success. From the point of view of those who supported the attack it was the stuff of heroism against overwhelming odds - hundreds of times more successful than, say, the British "Dam Busters" drama which so caught my own schoolboy imagination.

In some ways, though, the success of the aftermath is even more interesting. Immediately after the attacks there was a tremendous outpouring of sympathy and support around the world for America and its citizens. In my opinion the greater Al-Qaeda victory was the subsequent turnaround in world opinion, to the point where today America is the most feared and hated country in the world. What went wrong for the Land of the Free?

The answer is that it had nothing to do with Al-Qaeda and everything to do with the American reaction. Their enemies had the inestimable advantage of a hopelessly inept American administration whose knee-jerk response was as bad as it could possibly have been. They began by retaliating against the wrong country, they woefully failed to understand that war against a minor enemy state takes much more than superior firepower, and they expended trillions of dollars that could have been used to repair their own domestic social fabric. But that is by no means all that went wrong.

The US administration have co-opted the word "Terror" as a convenient catch-all to justify a raft of policies that would once have been unthinkable. They have enshrined the word in their criminal justice system as an excuse to trample on the civil liberties of their citizens. They have used it to snoop on their own nationals (to say nothing of foreigners) in a way that uncannily recalls Orwell's 1984. The "War on Terror" has such a nebulous enemy that it can never be won, in the same way that Oceania could never defeat Eastasia. And if "Terror" really is something against which war can be waged, just ask the average American how safe they now feel: a poll last year showed that about half of all Americans feel less safe than they did immediately after 9/11.

Think of that! 19 men robbed the USA of a vast amount of their national wealth, caused them to lose their moral credibility and international respect, and made their citizenry permanently anxious. Can you describe that as anything other than a victory?

So what should America do to climb out of the pit of defeat? It won't be quick and it won't be easy, but the rest of the world has an interest in helping them do so. I would like to propose an approach that addresses one of the fundamental problems underlying the American disaster. The world should make every effort to get Americans to engage with nationalities beyond their borders. Ignorance breeds fear and suspicion, and it is telling how few Americans hold passports. These efforts can be made at all levels, but every city outside the US could do its bit by twinning with a city within the US. The twinning protocol should offer funds for a handful of people in an American city to visit their twinned counterpart for, say, a month, living with local families. In this way, over time, there would build up a nucleus of Americans familiar with some culture beyond their own. They would see that other nations have cultures that are perfectly acceptable alternatives to their own and that "American Exceptionalism" is a myth without foundation.

Maybe that is too naive a proposal. Does anyone have a better one?

Saturday 22 August 2015

How robust is empirical science?

I'd like to begin this post by admitting that I am writing far outside my own research experience. Nevertheless the paper that prompted it ("Likelihood of null effects of large NHLBI clinical trials has increased over time" by Robert Kaplan and Veronica Irvin) sounds such a potentially alarming message that I think it is worth publicising.

Kaplan and Irvin looked at all "large" research trials funded by the NHLBI (the US National Heart, Lung and Blood Institute) between 1970 and 2012 ("large" is precisely defined in their paper). These trials examined a variety of drugs and dietary supplements for preventing cardiovascular disease. The authors recorded whether each trial had a positive outcome (statistical evidence that the treatment was successful), a negative outcome (statistical evidence that the treatment was ineffective) or a null outcome (no evidence either way).

There were 30 studies prior to 2000, of which 17 produced a positive outcome, and 25 afterwards, of which only 2 did. This is a large decline in the proportion of positive outcomes, and the authors discussed various possible causes.

 Two possibilities that might, a priori, explain the decline are

  1. researchers prior to 2000 were under more pressure to produce positive outcomes, because these are preferred by drug companies, and 
  2. placebo-controlled designs were used less often prior to 2000.
However possibility 1 was refuted by the fact that the proportion of trials sponsored by drug companies was essentially the same in both periods, and possibility 2 was refuted for similar reasons.
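As a side note, the decline itself is very unlikely to be a chance fluctuation. Here is a minimal sketch (my own illustration, using the counts quoted above) of how one might check that with a Fisher exact test:

```python
# Minimal check that the pre/post-2000 difference is unlikely to be chance.
# Counts are those quoted above: 17 of 30 positive before 2000, 2 of 25 after.
from scipy.stats import fisher_exact

table = [[17, 30 - 17],   # before 2000: positive, non-positive
         [2, 25 - 2]]     # after 2000:  positive, non-positive
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"positive rate before 2000: {17/30:.0%}, after 2000: {2/25:.0%}")
print(f"Fisher exact p-value: {p_value:.5f}")   # comes out well below 0.05
```

So something systematic appears to have changed around 2000; the question is what.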

Kaplan and Irvin did however advance quite a compelling reason to explain the discrepancy. Before I tell you what it was, it is worth saying something about the culture in which such experiments take place.

Imagine you are a scientist who is testing the efficacy of a drug to combat high blood pressure (say). You set up your clinical trials complete with a control group to whom you administer only a placebo, gather your data, and analyse it. This may take a long time and you are heavily invested in your results. Naturally you are hoping that the results will demonstrate that your new drug will prove effective in reducing blood pressure.

But maybe that doesn't happen. Oh dear, what can you do? Doesn't it seem rather lame simply to report that you found no effect either way (or, worse, a negative effect)? Since you have gathered all this data, why not look at it again - after all, it might have some significance. Maybe your drug has had some unexpected positive effect and, when you find it, you can report a positive outcome.

What's wrong with that? Isn't it perfectly fair since your data did demonstrate some form of efficacy?

No, it's not fair, for several reasons. One reason is that your experimental procedures were perhaps not well tailored to assessing the result you did, in fact, find. Perhaps a more important reason is that any such conclusion comes only with a statistical likelihood and, in the regime you actually implemented, you gave yourself many opportunities to "get lucky" with a statistical conclusion.
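To illustrate how easy it is to "get lucky", here is a minimal simulation (my own sketch, not from the paper): a treatment with no real effect is compared with a placebo on twenty different outcome measures, and we count how often at least one of them crosses the conventional p < 0.05 threshold.

```python
# Simulation of the multiple-outcomes problem (illustrative only).
# A "drug" with no real effect is compared with a placebo on 20 outcomes;
# we count how often at least one outcome looks "significant" at p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials = 2000        # simulated trials
n_outcomes = 20        # outcome measures inspected per trial
n_patients = 100       # patients per arm

lucky_trials = 0
for _ in range(n_trials):
    # Null world: both arms are drawn from the same distribution.
    treated = rng.normal(size=(n_outcomes, n_patients))
    placebo = rng.normal(size=(n_outcomes, n_patients))
    pvals = stats.ttest_ind(treated, placebo, axis=1).pvalue
    if (pvals < 0.05).any():
        lucky_trials += 1

print(f"Trials with at least one 'significant' outcome: {lucky_trials / n_trials:.0%}")
```

With twenty independent looks at null data, the chance of at least one spurious "significant" result is roughly 1 - 0.95^20, about 64% rather than 5% - which is exactly the loophole that the mechanism described next was designed to close.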

It has become increasingly recognised that such researcher bias should be minimised. In 2000 the NHLBI introduced a mechanism to prevent researchers from gaming the system in the way I described above. They began to require researchers to register in advance that they were conducting a clinical trial and which hypothesis they were testing. So once the data is gathered the researchers cannot change their research question.

Pre-registration of trials is what Kaplan and Irvin believe explains the drop in positive outcomes since 2000. They could easily be right and, at the very least, their work should be scrutinised and critiqued.

Now, just for the moment, let's assume they have hit upon a significant finding. What would this mean? Surely it means that, for the 30 trials conducted prior to 2000, rather than 57% of them having genuinely positive conclusions (17 out of 30), the true fraction should be closer to 8% (the fraction of post-2000 trials that were positive). Since we have absolutely no idea which of the positive-outcome trials lie in this much smaller set, we should dismiss 30 years of NHLBI-funded research. Worse than that, thousands of patients have been administered treatments whose efficacy is unproven. I'm not suggesting a witch-hunt against researchers who acted in good faith, but surely there are lessons to be learnt.

One lesson to learn is that we should value trials with a negative or null outcome just as much as we value those with a positive outcome. In particular the bias towards publishing only positive outcomes must disappear. This is already becoming increasingly accepted but, as Ben Goldacre has demonstrated in his book Bad Pharma, there is still a very long way to go. In fact Goldacre shows that pharmaceutical companies have actively concealed studies with null outcomes and cherry-picked only those studies that shine a favourable light on the drugs they promote.

But there is another lesson, one with potentially much wider implications. Many disciplines conduct their research by the "formulate hypothesis, gather data, look for a statistical conclusion" methodology. In fact hardly a discipline is untouched by it and, in most cases, they are years behind the medical disciplines in recognising what can contribute to researcher bias. It is therefore no overstatement to say that such disciplines may well have a track record of generating dodgy research results.

This shocking conclusion should be taken to heart by every research institution and, in particular, by our universities, which claim a mission to seek out and disseminate truth. In my opinion it is now incumbent on the relevant disciplines (most of them) to at least conduct an analogue of the work carried out by Kaplan and Irvin. We need to know the extent of the problem (if indeed there is a problem) and we need to repeat, with all the rigour we now know is required, many previous investigations, even if their conclusions have been accepted for years.

It is not enough to aggregate the results of several investigations in meta-analyses to raise confidence in their conclusions. We will often need to begin afresh. Well, at least that will give us all something to do.

Monday 25 May 2015

Sapiens: a tale of human history

There have been several recent books that attempt to survey the whole of human history. Each one has a particular perspective. Thus Jared Diamond's Guns, Germs and Steel is a stab at how physical geography might explain why some regions of the world have made faster progress than others since the Agricultural Revolution; and Daron Acemoglu and James Robinson's Why Nations Fail is a similar project, but with the emphasis on which institutional factors enable some states to flourish more than others. These two books each have a definite thesis - a framework of opinions - that they defend in detail. They are interesting because the authors' opinions are defended quite convincingly, but they are opinions nevertheless.

I have just finished another book of the same type whose scope is even broader. It is Sapiens: A Brief History of Humankind by Yuval Noah Harari. The book covers the span of human existence from the plains of Africa through to our 21st century existence. One thing I like very much is that this tale is not presented as a tale of progress or advancement; it is told much more neutrally, sometimes almost as though by an observer from some distant star system. In this post I cannot do justice to every chapter but I will highlight only some of the viewpoints that impressed me the most.

In the first chapter the early origins of the Sapiens species are discussed. Since 2010 there has been a controversy about the biological relationship between that species and other human species such as Neanderthals and Denisovans. To review that controversy, let me remind you that 100,000 years ago there were several species of the genus Homo alive on earth. They were the descendants of ape-like ancestors who, starting around 2 million years ago, spread from Africa around the planet, evolving into separate species such as Homo Neanderthalensis, Homo Soloensis, Homo Floresiensis, Homo Denisova and Homo Sapiens (the latter evolving in Africa about 150,000 years ago).

Until 2010 it had been thought that, when Sapiens left Africa around 70,000 years ago to spread throughout the world, they displaced (peaceably or forcibly) all other hominid species; there was no interbreeding because the species were too genetically dissimilar. So, according to this theory, all members of Homo Sapiens are pure descendants of the African ancestors of 70,000 years ago and, in particular, we are all genetically alike.

However, in 2010, it was discovered that modern humans in Europe and the Middle East have between 1% and 4% of their DNA of Neanderthal origin; and a few months later it was found that up to 6% of Melanesian and Australian Aboriginal DNA is of Denisovan origin. This is a major and very recent discovery, and it might be wise to await confirmation before accepting it fully. If confirmed, however, a rather different picture is painted. The theory now would be that, until around 50,000 years ago, Sapiens and Neanderthalensis were genetically close enough for interbreeding to occur when they met (and similarly for Sapiens and other hominid species). In other words Sapiens has much more genetic diversity than had previously been thought.

This controversy is not purely a scientific one, because the interbreeding account offers ammunition to those who would use it to promote discrimination on the grounds of DNA differences. Whether this will happen remains to be seen. Harari does not pursue the question - an early indicator that he is more interested in presenting a factual narrative uncontaminated by value judgements.

The next few chapters of the book are about the long period between the birth of the Sapiens species and the Agricultural Revolution. We have very little to go on but we can make some plausible assumptions. Perhaps the major thing to explain is how Sapiens went from being a minor player at the beginning of that period to a creature near the top of the food chain at the end. All other hominids had disappeared, together with many of the largest animals - and their disappearance correlates quite well with the moment they first came into contact with Sapiens, so it is reasonable to believe that Sapiens was becoming more and more formidable. From what source were these powers derived?

While confessing that there are no certainties, Harari advances some interesting explanations. They are all cognitive, and the one I found most interesting is the idea that only Homo Sapiens was capable of inventing myths. The word 'myth' for me used to mean an unfounded belief, or a story about something fanciful. But Harari uses it in a wider sense. For him a myth is any concept about something that has no objective existence (and it carries no negative connotation despite that - myths are simply ideas).

He makes plain, with an example - the Peugeot example - just how general a meaning the word 'myth' has. Small groups of individuals can act cohesively on an enterprise because everyone is able to communicate with everyone else. Clearly this won't work for larger groups. Larger groups need to be unified behind a myth, some shared belief that binds them together. A nation state can only operate because its members share the myth of national identity. A legal system can only operate because its members believe in the system of laws, justice and human rights it promotes and defends. A motor company such as Peugeot binds its employees together because they all believe in the myth of the limited liability company. And national identities, the idea of justice and the notion of a limited liability company really are myths, in that none of these things would continue to exist if the minds of all their believers perished in an instant.

This general idea of myth surfaces several times later in the book. One can raise a philosophical objection, depending on how Platonic a frame of mind one is in. Is it really the case that 'Justice' is no more than a belief, or does it have some "real" aspect? Once upon a time I, as a practising Pure Mathematician, would have been quite susceptible to strong mathematical Platonism. Now I am not so sure; as I have retreated from the subject, so has my conviction that mathematical notions have an independent reality. In any case I found myself very persuaded that non-mathematical thought objects really have no claim to an independent existence. The French Revolutionaries all believed in Liberté, Egalité, Fraternité and Americans believe in 'American exceptionalism'; but there is no objective reality to these terms - they are myths.

Once one renounces the Platonic inclination to grant these myths an objective reality, a lot of contradictions disappear. Liberals like the ideas of equality and freedom. But they are contradictory (and, at a stroke, we see why politicians are always fighting). If you champion equality you must place checks on individual freedom - otherwise people would be free to trample on their equals. We don't need our myths to be consistent - but we do want our Physics to be consistent!

The next tranche of chapters in the book concerns the period from the Agricultural Revolution through to the Enlightenment. This period began between 10,000 and 15,000 years ago - far too short a time for any significant gain in the raw power of Sapiens by way of natural evolution. Yet the number of changes in that time to the way Sapiens lives far outnumbers those of the previous 100,000 years (I will return to the post-Enlightenment changes later). What happened?

Harari points out that the change from the hunter-gatherer lifestyle to life in permanent small communities centred on farming was possibly a change for the worse for most people. The hours were long and diseases caught from farm animals were often fatal. But reliable and abundant food allowed a rapid increase in population; unfortunately the benefits of more abundant food would have been counter-balanced by the need to feed a larger number of people.

The Agricultural Revolution is ostensibly the story of how Sapiens domesticated their food sources, the most extensive being wheat. Harari offers the opposite perspective: that wheat domesticated us! It's a compelling point of view. Wheat, a grass that once grew only in the Middle East, has spread throughout the world, spreading its genes far and wide, and become one of the most successful life-forms ever. It has done so by having Sapiens in their millions tend to its health, prepare stone-free soil for it to grow strong, defend it from disease, fetch and carry water for it from dawn to dusk, and build fenced areas to keep it safe from predators.

Once Sapiens began to live in agricultural communities, myths were needed to keep those communities from fragmenting. Some of these were religious: serving the gods of the seasons and the spirits that protected animal and food sources was vital, and rituals grew up to bind people to a common purpose. But other myths became important as communities grew in size. Legal codes such as that devised by Hammurabi depend on myths: the notion that rights and wrongs are absolutes is mythical. If you feel uneasy about that, just think how opinions have changed over the treatment of those members of Sapiens who have two X-chromosomes. Nation states are sustained by myths: seriously, would you voluntarily die to defend the 'honour' of your country?

But if the pace of change picked up after the Agricultural Revolution, it became a frenetic scherzo once the Scientific Revolution got underway. This is the subject of the third quarter of the book. Since it occurred within historical times we know much more about what triggered this second revolution (sometimes called the Enlightenment, though that term emphasises the questioning and rejection of religious authority more than the discoveries made by the new scientific method).

Harari gives a lot of attention to a new practice that began to pervade learned society: the free admission of ignorance - 'ignoramus' was less and less a term of opprobrium. He points out that ancient traditions recognised two kinds of ignorance. An individual might be ignorant of something important (where did human beings come from? - ask the priest) or an entire society might be ignorant of something apparently unimportant (how do spiders weave their webs? - it matters not, because God knows). But eventually it dawned on certain thinkers that there were important things that the entire society might not know, and that they would have to find these things out for themselves. This turned out to be possible by adopting what we now know as the scientific method: systematic observation, gathering data, forming hypotheses, testing those hypotheses. Amazingly this worked, and after a few hundred years of it we can look back and compare the knowledge acquired since 1500 AD with the knowledge acquired in the previous 150,000 years - it is an astonishing comparison.

The Scientific Revolution had many knock-on effects. One effect I had not considered until Harari highlighted it is the rise of capitalism. Capitalism is characterised by reinvesting the profits of an enterprise rather than blowing them on war or good living. When it became apparent that scientific discoveries often gave advantages to a society, it became necessary to finance the work of scientists. Thus when James Cook sailed to claim new territories for the British Crown he took with him men who had no military skill at all: their purpose was to collect more knowledge to further science. Harari very skilfully describes how from these humble origins our capitalist societies grew into the behemoths we see today. And, as usual, he describes these societies without passing value judgements.

So, if there was one thing that really impressed me about the author it was his ability to keep his opinions out of the narrative. As he might himself say: my opinions are myths - they will vanish when I die. This lack of polemic is actually rather a good technique for persuading his readers. It does not back them into corners and they can come to their own conclusions about which aspects of Sapiens culture they are comfortable with and which they abhor. For example, I myself found his laconic description of how Sapiens has enslaved other animals as food sources all the more compelling because it was so muted.

There is much in the book that I haven't mentioned, particularly about modern societies. It is a tour de force as well as being a fascinating read and I strongly recommend it. As a general guide to the book I give below its detailed Table of Contents.
  1. Cognitive revolution
    1. An animal of no significance
    2. The tree of knowledge
    3. A day in the life of Adam and Eve
    4. The flood
  2. Agricultural revolution
    1. History's biggest fraud
    2. Building pyramids
    3. Memory overload
    4. There is no justice in history
  3. Unification of humankind
    1. The arrow of history
    2. The scent of money
    3. Imperial visions
    4. The law of religion
    5. The secret of success
  4. Scientific revolution
    1. The discovery of ignorance
    2. The marriage of science and empire
    3. The capitalist creed
    4. A permanent revolution
    5. And they lived happily ever after
    6. The end of Homo Sapiens

Sunday 19 April 2015

World views, narratives and cognitive dissonance

Our thinking is influenced by everything that happens to us, everything we read, and every interaction. As we assemble these influences, facts and opinions we form a picture of our personal world and other people's worlds. But not every input to our consciousness has the same weight; otherwise we could never come to a stable understanding of anything. Instead we construct inside our heads a framework that represents our personal reality and we appeal to this whenever we are exposed to a new influence; our personal reality weights each new influence so that this reality doesn't oscillate widely. Most often the new influence is given a negligible weight; only rarely is the new influence given a weight so high that it transforms our reality.

These personal realities are often called world-views. If we think about our world-view we are thinking about the totality of our life's experience and the various weights that we give to each aspect of it. These experiences, and the weights we accord them, comprise our beliefs. Our world-view is a large set of separate beliefs that we have tried to organize into a consistent whole. Some of the beliefs in our world-view will be shared by many other people, especially those parts that represent facts about the physical world; but some of our beliefs will be greatly at odds with other people's.

Indeed it frequently happens that we hold conflicting beliefs within our world-view. Much of the time we are unconscious of such a conflict and there is not much to be done about this unless we critically and constantly examine our beliefs. This perhaps is behind Socrates' dictum that the unexamined life is not worth living. When we come across such a conflict surely the noblest act is to honestly readjust our beliefs so that they come back into harmony. But sometimes, lazily, we put the conflict aside with a mental shrug. The conflict may have come to our notice because of some fact, newly learnt, and we have dealt with it by assigning it a very low weight in our framework of concepts; or it may have arisen from the Socratic discipline of systematically examining our beliefs for inconsistencies.  Probably every single person's world view has such suppressed inconsistencies. If we have made a determined attempt at suppression we can feel a sense of unease when we confront (or are made to confront) a conflict, a sensation that is called cognitive dissonance.

There are several ways to deal with our personal cognitive dissonances. We can adjust our beliefs, which can be embarrassing and painful. We can forget them entirely, hoping that we shall never have to confront them. Or we can create excuses of varying complexity. This is a very common thing to do. When we notice other people doing it (and it may be very obvious when they do) we often deplore their inconsistency. But we are much blinder, and more forgiving, when we do it ourselves; we forge a complex personal narrative which justifies our mental gymnastics.

If we are subject to cognitive dissonance we are always in danger of being wrong-footed, possibly acutely embarrassed, when our mental inconsistency surfaces among our friends and acquaintances. Therefore most people prefer their world-views to be as internally consistent as possible. Yet to form a consistent world-view from scratch (even over years) is very difficult. Therefore most of us bring in some pre-existing package of consistent ideas that partly (for some, very largely) forms the core of our world-view. Some of the philosophic traditions, such as Stoicism or Epicureanism, served the Greeks and Romans very well in this regard. Religions and other ethical systems are also used. For many people a commitment to naturalism is important. These partial world-view packages generally can't give us our entire world-view, of course. For example, the Christian package generally won't inform us accurately about whether we can trust our senses as a way of understanding how the physical world fits together. Nor will the naturalistic package necessarily inform us about good ways to conduct our personal relations. But they are definitely a start, and they enable us to piggy-back on the wisdom of our forebears.

Obviously some world-views are more successful than others. If your world-view is constantly producing cognitive dissonance, or hampering your effort to be happy and productive because it consistently leads you to poor decisions, you are worse off than someone whose world-view serves them well in guiding their thoughts and their interactions with their fellow human beings. What types of world-view are successful? That of course is a question too large even to scratch the surface of in this brief post but, nevertheless, I do have some opinions that I cannot refrain from offering!

I think you are going to be in severe trouble if your beliefs are at odds with what humankind has discovered about the natural world. If you are a flat-earther, or a denier of evolution, or you are very superstitious your life is not going to be able to take best advantage of centuries of scientific endeavour. In other words I do recommend the naturalistic package.

I also think it is very worthwhile to incorporate some sort of ethical package: virtue ethics, consequentialist ethics, utilitarianism, Islam or some other religion. Be aware, though, that these packages don't always lead to sensible conclusions; they serve only as a useful guide in commonplace situations. Indeed in almost every ethical package there is the possibility of arriving at contradictions (for example, fundamentalist Christianity is, in my opinion, so rife with contradictions as to be worthless).

I think it is useful to have some way of resolving your cognitive dissonances when you notice them - and burying or ignoring them entirely is obviously not what I suggest. If the dissonance is a disagreement between facts, or what you thought were facts, I would advise doing some research into what is actually known. That might not resolve the dissonance, because there may be conflicting opinions on what reality is; then you will often have much more work to do. I think it is worth cultivating a suppressed ego, so that you can relinquish opinions even if you have previously espoused them in public. Obviously this advice won't suit everyone because, for some people (politicians, for example), it is more important to have an unchanging message than to be internally consistent.

Finally I think it is important to be self-reflective, constantly examining your beliefs, and constantly rooting out inconsistencies. This is a habit that needs to be cultivated. Personally I know that I still have a long way to go but the journey is a fascinating one.


Monday 23 February 2015

Citizenfour and Citizen of the World

For well over a year I have been gripped by Edward Snowden's exposure of mass surveillance by the NSA and GCHQ - surveillance of all of us. I have read the books by Luke Harding (The Snowden Files) and Glenn Greenwald (No Place to Hide) and consider myself fairly well-informed about the whole story. Staying modestly in the background of that story has been the film-maker Laura Poitras (although both Greenwald and Harding praise her role in bringing Snowden's story to public attention).

Poitras' major role in the story is her film Citizenfour, recently released, which I have just watched. In saying that I do not want to minimize her role as the journalist with whom Snowden first established contact, nor her fearless investigation with Greenwald in many parts of the world as she developed the story. But it is surely as the maker of Citizenfour that she will be remembered. The film is brilliant and everyone should see it.

As well as being an account of how the story came to the world's attention it is also a personal account of Snowden's bravery, skill and intelligence; perhaps not everyone will agree with me that he is the greatest hero of our age but his personal qualities cannot be doubted.

In the last week I have seen two other films about heroes: Martin Luther King (Selma) and Stephen Hawking (The Theory of Everything). These are films about heroism where the principals are either dead or their achievements are in the past. In contrast Edward Snowden's heroism will be needed to sustain him for many years to come and, of course, his story is very recent indeed. This is why it is so important to see Citizenfour. It is not just a gripping drama whose immediacy stems from its brilliant camera work; it is immediate because it informs us about the threat today to our democratic freedoms from the authorities who watch our every move speciously claiming to keep us safe.

As I write these comments news has just broken that the film has won the 2015 Oscar for Best Documentary; welcome news indeed because it will now attract a larger audience. Public awareness of our surveillance states is very important because, unless there is a public mood to restore our privacy, things will only get worse.

We need to have open debates about the degree of intrusion we can tolerate without surrendering our privacy (such as begun here). At present I would say that we are far from the situation that prevailed over Civil Rights in 1960s America where at least the basic principles of Civil Rights were enshrined in law. Our privacy laws are currently very weak and our rights to privacy are constantly being eroded either by stealth or by new laws. So go and see Citizenfour and add your voice to protect our democracies from our governments.

Monday 12 January 2015

Freedom of speech, respect and Charlie Hebdo

Voltaire is famously quoted as saying "I disapprove of what you say, but I will defend to the death your right to say it." The fact that these words were attributed to him only by his biographer Evelyn Beatrice Hall takes nothing away from the sentiment, which most people would generally support. And yet we have to acknowledge at least three objections.

To begin with, even Western countries do not grant freedom of speech to everyone on all topics. Many countries, for example, have made it an offence to publicly deny the Holocaust of the Second World War. Once there is even a single example where freedom of speech is proscribed, we cannot pretend it is a universal human right.

The second point is that just because someone has the legal freedom to say something does not mean that they can necessarily say it with impunity. That is such a truism that it hardly needs stating. We criticize public statements all the time. Often our objections are factual corrections, but it is perfectly acceptable to object simply because you do not like what is being said. Where things begin to become problematic is when a government or another powerful body can punish someone for saying something it does not want published abroad. By "punish" I do not mean legally punish through a successful libel prosecution; I mean extra-legal punishment such as ostracizing the offender to the detriment of their career, threatening to use the draconian UK libel laws which are ruinously expensive to defend against, or harassment by an organization such as the FBI for being a whistle-blower.

And the third point is that just because someone has the legal freedom to say something does not mean that they actually should say it. For example, if you are the head of an organization speaking on its behalf you should stick to what is relevant and not scatter gratuitous insults: that's impolite, irrelevant and distracting.

All of these issues come very much to mind in the aftermath of the attack on the offices and staff of the Charlie Hebdo satirical magazine in Paris. The attackers, enraged by a series of cartoons that poked fun at Islam, murdered twelve people, most of them journalists. Publicly, almost universal condemnation has followed from both Moslems and non-Moslems (this despite the predictable Fox News claim - refuted by Katie Halper - that Moslems were generally silent on the issue). However, while the general tenor of the coverage has insisted that no reining in of free speech should occur, it has been interesting that most of the British press have chosen not to print any of the cartoons that provoked the attacks. This decision was defended in a recent Guardian article, the main defence being that, in the normal run of events, such cartoons would not be published in the Guardian and that, despite the natural urge to show solidarity with their Charlie Hebdo colleagues by publishing them, it was better to fight intolerance with tolerance. I believe this is an honourable stance, but I can't imagine that it applied to the gutter press, who also didn't reprint any cartoons. One is therefore left wondering whether intimidation played any part in the collective decision.

Finally, it is worth commenting on a point that is obviously very important to many Moslems: should the cartoons have been published by Charlie Hebdo in the first place? Don't the arguments of good taste (which seemed to apply to the Guardian) apply universally? I think not, for the following reason. Charlie Hebdo is a satirical magazine and its whole purpose is to provoke thought, even at the cost of provoking controversy; that is what satire is all about. So I would argue that by publishing offensive cartoons they were simply carrying out their remit - and that remit cannot be confined to safe targets only. Down the ages satirists have always risked their freedom, their livelihoods and their lives. We applaud the forerunners of Charlie Hebdo and we should applaud such heroes today.