21 September 2023

Wikipedia and the culture war

• In January 2018 I published a post about the Ethics-and-Empire scandal. This was a shameful episode in the history of academia, centred on the University of Oxford, in which a number of junior and senior academics engaged in a bullying exercise against one of their own, for daring to flout the prevailing taboo against discussing the topics of empire and colonialism other than negatively. I had, and still have, no personal interest in this topic, but I felt that the behaviour of the dons in question was wrong and harmful.
   My post was intended to make the perpetrators of the bullying look bad, and I guess it succeeded in doing so. It can't have been pleasant for the academics to have been exposed and censured in this way, but I do not think bullying of dissidents should be tolerated, and there was nothing underhand in my critique. The academics in question should have taken it on the chin.
   Ten days after my post appeared, two Wikipedia users, acting in apparent collusion, succeeded in getting my Wikipedia article (which had been there for 15 years) nominated for deletion. The timing was so close to that of my blog post that it was hard to avoid the conclusion that the vandalism was a revenge attack.
   Rather than demoralising me, the vandalism encouraged me to write a full-length article about the Ethics-and-Empire issue. This generated a significant number of views, and seems to have widened awareness of shenanigans within the humanities well beyond what would have happened if I had not written anything more than the blog post. So the action of the vandals backfired. (The deletion attempt, incidentally, failed.)
   I have no idea who was behind the attack: whether it was some of the academics themselves, or one or more of their minions, or some fanboys/girls of one of the academics, or simply one of the global army of SJWs who believe they are fighting on the same side as intolerant humanities professors.

• As anyone who has been observing political and cultural affairs for the last few years should have noticed by now, there is — in the words of the Home Secretary — a war out there. If you don't observe things with a critical eye, the war can seem invisible, but that is because the media is largely on the side of what has become the culturally dominant team.
   Calling it a culture war captures only half the truth. Physical violence is rarely involved, but the war goes well beyond mere intellectual and moral positioning. People's lives and careers are at stake. Many of those who consider themselves on the 'right' side — and SJW is as good a term as any for them — seem to feel justified in using whatever methods are available, including dirty tricks.
   They like to present themselves as being on the side of the deserving underdog, and the opposition as hostile to underdog groups. Since they control cultural institutions and hence the dominant narratives, this myth has become easy for them to perpetuate.
   In reality, the war is about oppression versus tolerance. Ironic, since it is they who have always claimed to oppose oppression and intolerance (though there is less reference these days to the latter, presumably because their claims to be on the side of tolerance have become hard to sustain). It is they, rather, who are the oppressors, or wanna-be oppressors.
   Every day they no doubt convert hundreds more to their cause, using their control of parts of the education system, particularly in the tertiary sector, to indoctrinate students. Given their apparent dominance, the counter-struggle to maintain openness and tolerance can easily seem doomed. Fortunately, the Brexit and Trump phenomena demonstrated that there are many ordinary people on the side of anti-oppression. Contrary to the wisdom of the il‑liberal elites, those ordinary people are not stupid, or racist, or any of the other slurs SJWs like to throw at them.

• Perhaps I am getting too close again to exposing some of the intellectual frauds at the heart of the academic humanities profession. Last year I began to critically analyse Oxford professor Paul Collier's socialist handbook, The Future of Capitalism (see here, here and here); earlier this year, I highlighted the contradictions of anti-individualists such as Daniel Kahneman.
   This time, the interval between critique and counter-attack has been longer. And rather than attacking my page — which may have been deemed unsuitable as a target since it survived a take-down attempt too recently — they have gone for the page of my colleague Celia Green.
   Via what appears to have been another hostile double act, within a 48-hour period starting on 31 May my colleague's article — which had remained largely stable for over ten years — was first nominated for deletion, and then given a makeover to reassign her to the derogatory category of 'parapsychologist'. The effect of the hostile edits was to detract from what she is best known for: philosophical scepticism, through books such as The Human Evasion, and pioneering research on lucid dreams and false awakenings which helped to put those two phenomena on the map. At the same time, the article on our organisation, Oxford Forum, was also mooted for deletion by one of the double act, who added a notability tag.
   Oxford Forum is of course a thorn in the flesh of the University, not because we pose any meaningful threat to an organisation hundreds of times larger than ours, but simply because, like most other members of the il-liberal elite with comfortable positions, they find it hard to tolerate serious challenge of any kind.

• Inevitably, Wikipedia is becoming yet another locus for the culture war. The gradual woke‑isation of many of the articles with political themes highlights the weakness of the wiki model, which in other ways seems to have been surprisingly successful. The model works well in uncontroversial areas such as most of the sciences, history and general knowledge. It can be argued there is excess volume in certain areas — whole pages devoted to minor cartoon characters or individual soap opera episodes — but those can be ignored. By and large, Wikipedia has become an incredibly helpful tool.
   However, the wiki model works less well with controversial topics, or with living persons. No meaningful "NPOV" is possible when it comes to issues such as Trump, the alt-right, cultural Marxism, or communism. It boils down to a battle of numbers: how many Wikipedia contributors have the skill and energy to spin the article in one direction, versus those who would like to spin it in the other direction.
   Take the article called 'Collectivism'. In 2012 this reflected a reasonable balance between positive and negative. (Here is a link to a saved version of the old article; note particularly the section 'Criticisms'.) By 2021 the article had turned distinctly biased, with collectivism being given a largely positive spin (paraphrasing: "it's so much better than individualism, which is selfish and uncaring!"), and the Criticisms section disappearing. Unfortunately I didn't keep a copy of the 2021 version — I assumed I could come back to it later — because the article has now been removed altogether. What remains is an article on Communitarianism which is almost entirely favourable, and dominated by woke-speak such as the following:
Early communitarians were charged with being, in effect, social conservatives. However, many contemporary communitarians, especially those who define themselves as responsive communitarians, fully realize and often stress that they do not seek to return to traditional communities, with their authoritarian power structure, rigid stratification, and discriminatory practices against minorities and women. Responsive communitarians seek to build communities based on open participation, dialogue, and truly shared values.
Such woke-speak is not exactly false, but hopelessly vague and one-sided, rather like material in a religious manual.

My verdict on Wikipedia:
Do not consult it on any topics to do with political theory; articles in this area are (by now) likely to be unreliable and/or biased. For such topics you are better off with Encyclopedia Britannica. Or supplement with Conservapedia to get a different perspective, for the sake of balance.

21 July 2023

Gender Pay Gap ideology

This is a story about a yoghurt manufacturer. This yoghurt manufacturer makes excellent yoghurt. So much so that the firm now has a dominant market share. Having become hugely successful, the yoghurt manufacturer decided to employ a marketing director. The marketing director announced that, as the company had become dominant, it no longer needed to focus its advertising on product quality, and should switch to virtue signalling. It was decided that this should be done in two main ways, one to do with the environment, the other to do with gender.
   With regard to the environment, it was decided that the company should abandon plastic lids, and leave people to rely on the film covering. This, the company announced, would avoid many tonnes of plastic waste. Instead, people could apply to the company for a reusable lid. However, this required customers first to collect points on their phone by scanning QR codes on the yoghurt tubs. Unfortunately, this scanning technology often failed. Also, for many months the company was out of reusable lids; however, it assured customers that the lids would soon be back in stock, and to "keep checking back on the website!" Meanwhile, supermarkets delivered many lidless tubs of the yoghurt to customers. A significant percentage of these tubs were damaged in transit due to lack of lids, leading to spillage of yoghurt over customers' deliveries, and resulting in much wastage of food. However, those losses were outweighed (from the company's point of view) by its enhanced public image.

With regard to gender, the company smugly announced on its website that it was working hard to reduce its "gender pay gap" (GPG), providing statistics to prove this was indeed the case. Since no target GPG was mentioned, readers were left with the implication that the most desirable level of GPG would be zero, and that this was what the company was aiming at. Readers were also left with the impression that a non-zero GPG was somehow morally wrong.

* * * * *

Treating a gender pay gap as an automatic negative, thus implicitly calling for action to reduce it to zero, is not about making sure that a woman doing the same job as a man is paid the same as the man. It is — in effect — about arranging that women do the same jobs as men. In other words, if there are four company directors, two of them should (it is implicitly demanded) be women. If there are six secretaries, three of them should be men. If there are two cleaners, the gender split should again be 50:50. That seems the more obvious way of eliminating the GPG, by equalising proportions.
   The less obvious way of eliminating the GPG would be to equalise pay between different jobs. If directors, secretaries, cleaners, and other jobs were all paid the same hourly rate, the gender pay gap would disappear, because everybody would receive the same rate of pay.
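   To make the arithmetic concrete, here is a minimal Python sketch with invented job titles, headcounts and hourly rates (none of these figures come from any real firm). It shows how a workforce that pays men and women identically for identical jobs can still report a sizeable mean gender pay gap, and how either of the two routes just described (equalising the gender split within each job, or equalising pay across jobs) drives the reported figure to zero.

# Toy workforce: every job pays the same hourly rate regardless of gender.
# All numbers are invented purely for illustration.
jobs = {
    # job:       (hourly rate, male headcount, female headcount)
    "director":  (100.0, 3, 1),
    "secretary": (15.0,  1, 5),
    "cleaner":   (12.0,  1, 1),
}

def mean_pay(sex_index):
    """Mean hourly pay for one sex (0 = men, 1 = women)."""
    total = sum(rate * counts[sex_index] for rate, *counts in jobs.values())
    heads = sum(counts[sex_index] for _, *counts in jobs.values())
    return total / heads

def mean_gpg():
    """Mean gender pay gap as usually quoted: (men's mean - women's mean) / men's mean."""
    men, women = mean_pay(0), mean_pay(1)
    return (men - women) / men

print(f"GPG with equal pay for equal work: {mean_gpg():.1%}")

# Route 1: equalise the gender split within every job.
jobs = {job: (rate, 2, 2) for job, (rate, _m, _f) in jobs.items()}
print(f"GPG after equalising proportions:  {mean_gpg():.1%}")

# Route 2: pay every job the same hourly rate (the gender split is then irrelevant).
jobs = {"director": (20.0, 3, 1), "secretary": (20.0, 1, 5), "cleaner": (20.0, 1, 1)}
print(f"GPG after equalising pay rates:    {mean_gpg():.1%}")

The sketch makes no claim about what any real firm's figures look like; it merely illustrates that the headline number is driven by which jobs men and women happen to occupy, not by unequal pay for the same work.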

The idea that the female members of a company’s workforce should, on average, earn exactly the same per hour as the male members rests on a particular theory of gender differences. Namely, that they should not exist.

* * * * *

Let's start by clarifying the terminology. Discussions about whether women have a higher or lower level of X than men, where X is some characteristic such as intelligence or management skill, are muddied by the fact that talking about differences between populations requires a different approach from talking about differences between individuals. We can say "women have Fallopian tubes, men do not" without much risk of error, but saying "women are shorter than men" is misleading. What is really meant is:

average-height-of-women has a lower value than average-height-of-men.

If you want to abbreviate this, you could write it as:

{women} are shorter than {men}

where curly brackets round the word "women" indicate that what is meant is "the average woman" or "the population of women, considered in terms of its statistical properties". Note that although {women} are shorter than {men}, there are many women who are taller than the average man.

[Schematic figure: two overlapping bell curves show the distributions of men's and women's heights; the shaded area represents women who are taller than the average man.]

We could easily pick a group of tall women (call them T-women) and a group of short men (S-men) where we could say:

T-women are taller than S-men

without risk of being misleading, since every single T-woman would be taller than every single S-man. Similarly, we could easily pick a group of women ("alpha-women") and a group of men ("beta-men") where it would be fine to say "alpha-women are cleverer than beta-men" (meaning every alpha-woman is cleverer than every beta-man) — just as easily as it would be to do it the other way round, and find two groups where we could say, "alpha-men are cleverer than beta-women".
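   To put rough numbers on the height example, here is a small Python sketch using the statistics module and illustrative parameters: a mean of about 175 cm for men and 162 cm for women, with standard deviations of roughly 7 cm and 6.5 cm. These are ballpark figures, not measured data; the sketch estimates the share of women who are taller than the average man, i.e. the shaded area in the schematic above.

# Sketch: proportion of women taller than the average man, assuming normal
# distributions with illustrative (not measured) parameters.
from statistics import NormalDist

men_height   = NormalDist(mu=175.0, sigma=7.0)   # cm, illustrative
women_height = NormalDist(mu=162.0, sigma=6.5)   # cm, illustrative

average_man = men_height.mean
share_of_women_taller = 1.0 - women_height.cdf(average_man)

print(f"{{men}} mean height:   {men_height.mean:.0f} cm")
print(f"{{women}} mean height: {women_height.mean:.0f} cm")
print(f"Share of women taller than the average man: {share_of_women_taller:.1%}")

With these particular assumed figures the share comes out at a few per cent. The exact value is not the point; the point is that a statement about {women} versus {men} is a statement about distributions, not about every individual.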

* * * * *

Having got that out of the way, the question arises:

Are {women} the same as {men}?

In other words: we know that men are different from one another, and that populations of them normally exhibit bell curves for any given characteristic (height, intelligence, artistic skill etc). Do {women} have exactly the same bell curve as {men} for every single characteristic? Or, to rephrase that, to exclude physical differences such as strength or height:

Do {women} have exactly the same bell curve as {men} for every innate characteristic relevant to white-collar jobs?

Prima facie, this is highly unlikely. Take any two populations that have been selected by two different criteria, and the chances that the two populations have identical averages on every one of a group of measures are extremely small. The probability of exact equality on a single measure is already very small — though if you sampled data often enough, equality of a single measure might happen occasionally by chance, especially if you have to allow for errors in measurement.
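   A quick Monte Carlo sketch in Python (with arbitrary, made-up parameters) illustrates the point. Even in the most favourable case, where two groups are drawn from exactly the same underlying distribution, their measured averages rarely coincide once rounded to a realistic precision, and coincidence on several independent measures at once is vanishingly unlikely.

# Sketch: how often do two independently drawn groups show "equal" averages?
# "Equal" here means equal after rounding to one decimal place, as a crude
# stand-in for limited measurement precision. All parameters are arbitrary.
import random

def coincidence_rate(trials=5_000, group_size=500, decimals=1):
    """Fraction of trials in which two samples drawn from the SAME
    distribution end up with identical rounded means."""
    hits = 0
    for _ in range(trials):
        a = [random.gauss(100, 15) for _ in range(group_size)]
        b = [random.gauss(100, 15) for _ in range(group_size)]
        if round(sum(a) / group_size, decimals) == round(sum(b) / group_size, decimals):
            hits += 1
    return hits / trials

p_one_measure = coincidence_rate()
print(f"Matching averages on one measure:               ~{p_one_measure:.1%}")
print(f"Matching averages on five independent measures: ~{p_one_measure ** 5:.1e}")

For two populations selected by genuinely different criteria, whose true averages presumably differ, the chance of coincidence is smaller still.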

* * * * *

How then can it make sense to aim at zero GPG? We know that childcare (still) makes a difference to what women currently choose to do job-wise, so that is one reason we would not expect to see perfect equality between the jobs that {women} and {men} do. Even if we allow that some of what happens is not due to innate differences, but the result of cultural factors, so that in theory things could be different, the assumption that:

innately, {women} and {men} are identical in terms of all preferences and talents

is without empirical support, and a priori highly implausible.
   Of course, that doesn't imply {women} are less intelligent [or: insert other required employee property] than {men}. Depending on how you define intelligence, for example, it could well be that {women} are more intelligent than {men}. The chances that {women} are exactly as 'intelligent' as {men}, however, can safely be taken to be negligible.

Note
The requirement for UK firms with more than 250 employees to calculate and publish their gender pay gap data came into force in 2018. The enabling legislation was introduced by Labour, but its implementation was resisted by the Conservatives for some years, until David Cameron's pledge in 2015 — to "end the gender pay gap within a generation" — triggered its go-ahead via the Small Business, Enterprise and Employment Act.
   Employers do not have a legal duty to take action to reduce the gap. However, the requirement to publish their figures is clearly intended to put pressure on firms to find ways to make the gap disappear — regardless of whether it's efficient for them to do so.

16 May 2023

subcontracted ‘caring’

• How does one improve society? Simple: find an individual who needs something but is unable to get it, and give up some of your own resources to help him or her get the thing they need.*
   Since you've chosen your altruistic action, rather than having it forced on you, you will hopefully feel better — at least on some level — as a result of carrying it out. Hence you're better off overall, as well as the other person. It's win-win.
   As with other voluntary transactions between two individuals, both parties benefit, and so 'society' (meaning: everyone in a society, considered in aggregate) can be said to be better off than before.
   Important caveat: make sure no one else is worse off as a result of your help. Example: helping someone build an outhouse in her garden which will accommodate her grandfather is good for her, and good for him — and good for you, if it makes you feel pleasantly virtuous — but not good for the neighbours if it spoils their view. Once you have to weigh pluses for some people against minuses for others, the goal of 'improving society' ceases to be a simple one, and becomes difficult or impossible.

• One benefit of capitalism is that it can make it easier to provide such help by simply giving an individual money. If markets are sufficiently developed, a better way for the individual to get what they need may be by purchasing it, using money provided by the donor, rather than the donor trying to provide the help directly.

• Is there any way of doing such uncontroversial improving-of-society on a larger scale? You can try to encourage others to follow your example, of providing help on an individual-to-individual basis. Or you could get together with others to provide help to particular individuals. (If your group tries to make assistance available to everyone, it may find itself swamped with excess demand — unless the service is one only required in emergencies, such as sea rescue.)
   There are overseas charities that try to operate on this principle, getting volunteers to give hands-on help in villages, with basic things such as building wells.
   You can try to get other individuals to fund your group's society-improving activities — not the state, however, since that would involve involuntary funding by individuals, via taxation.

• This simple at-least-one-person-is-better-off-and-no-one-is-worse-off formula provides a basic model for interpreting — objectively — the idea of 'social improvement'. Beyond that, we are in the territory of subjectivity. There is no way of objectively adding gains for some to losses for others, in order to determine whether a change produces a net positive increment for 'society'.

• The above types of action are not, however, what most people mean when they think or talk about 'improving society'. What is meant tends to be one of two things.
1. Demanding that the state engage in some activity ostensibly intended to improve the position of the less fortunate, or demanding an increase in the level of an existing activity of this kind. Implicit in such calls for state action — though rarely expressed — is a demand that taxation be increased to finance the activity. In other words, this version of 'social improvement' involves the coercive removal of resources from individual citizens, for the supposed benefit of a subgroup of citizens. The target subgroup may constitute anything from a tiny minority to the majority of the population.
2. The second kind of 'improvement' that gets discussed is one which more blatantly involves removal of resources from one group in society, in order to reduce economic inequality. Such reduction of inequality is supposedly a good thing in its own right. Action of this kind on the part of the state is often described as redistribution, the implication being that it involves taking from some and giving to others, à la Robin Hood. This is misleading since most of the time, nothing is given to individuals in the way of spendable resources as a result of such 'redistribution'.
   What additionally confiscated funds are typically spent on (if it's anything beyond financing the deficits from programmes already committed to) are state-supplied services. Such services are ones for which (a) an individual normally has to demonstrate entitlement, often laboriously, (b) the content is determined by the preferences of service providers, rather than by users.

• There are of course many different ways of arguing in favour of any given policy, including policies of type (1) and (2) above. One can vaguely talk about "making things better", or "making things fairer". Or one could simply say "a lot of people want this", and have the issue put to a vote via an election. It's clear, however, why such concepts as "social improvement", or "increasing social welfare" are relatively attractive. They sound scientific. If a politician wants to look like he has the backing of expert opinion, he is more likely to want to talk about "society" or "social welfare" — because it generates an (erroneous) impression of objectivity — than about something that seems more nebulous or populist like "the good of the nation".

• In the nineteenth century, when social theorising first became all the rage, the issue for many intellectuals was merely one of which candidate system would generate the solution of universal human happiness. Should we have communism or anarchism? Voluntaryism or syndicalism? Given the horrors of the twentieth century, we should by now have grown up, and moved beyond the idea of a single answer that can magically make things marvellous for all. No social formula, when applied, is going to make everyone in a society feel better than before. Any given policy is going to be good for some and bad for others. To press ahead with a policy means, in effect, to write off the concerns of those who disagree. It's inevitable in government. No amount of analysis, science, or emphasis on spurious 'rationality' is going to get round this basic problem of politics. Pretending there is an objective solution to the conundrum, and a way of prioritising some policies as more 'rational' than others, or of regarding some voters' interests as more valid than others', is merely an invitation to authoritarianism.

• Notwithstanding these considerations, an ideology has developed in the West according to which 'improving society' is not only an admired but, increasingly, a required objective. There is now moral pressure to conform to the social improvement goal. Leaving things be and not doing anything (where doing typically means some new state action) is not regarded as an acceptable option, at least not among most bien pensants. If you're not seeking to improve society you should be, the ideology says. It's called caring (and failing to do so not caring), except that you typically express your 'caring' by contracting it out to the state, both in terms of providing the supposed help, and in terms of funding it via enforced subscription from taxpayers.
   Given that these versions of 'social improvement' and 'caring' involve people being coerced, it's not clear whether those who seek social improvement should be regarded with admiration or suspicion.
   An individual may of course feel strongly that something should be done in some area, e.g. climate change, the position of women, freedom of speech. Other individuals — and that includes me — may agree with some of the changes that are urgently proposed. To demand change, whether on behalf of oneself, or on behalf of a group one doesn't belong to, is legitimate. What is questionable is the claim that the change will "make society better".

Take-home message. The concept of a policy improving society is, strictly, illegitimate. (Unless the policy is one to which, implausibly, every individual in the society assents.) As a scientist or an academic, one should avoid making use of the concept, either explicitly or implicitly.
   For a non-professional campaigner, it may be acceptable to talk about improving society — for the reason that he or she may be asked to justify their advocacy in such terms by others. I.e. "will your proposed change improve society?", to which it seems fair enough to respond "yes" rather than "possibly", "no", or "don't know". Provided the response doesn't come labelled as 'expert' or otherwise authorised, the questioner is free to take it or leave it — in contrast to a professional context, where there is an implication that one should accept the response as authoritative.
   I leave it as an exercise for readers to consider which of these two categories should apply to a politician. Is it ethical for politicians to talk about improving society, given the dodginess of the concept? If we think of them as merely campaigners for a particular position which reflects either voter demand or their own opinion, then it may be acceptable. If, on the other hand, as is increasingly common, a politician talks as if his proposals are somehow linked to scientific or other expertise coming out of academia, there is a case that he should make every effort to avoid invoking concepts such as 'social improvement' or 'social welfare'.

* Okay, so perhaps it's not as simple as I've made it sound. But the ways in which it's complicated don't disappear when you start thinking in terms of groups or classes rather than individuals — though they apparently become easier to ignore.

16 March 2023

hyper-rationalists and their biases

• There is a category of person who believes the world would be improved if people could be made to act more rationally. Let us call such persons 'hyperrationalists'. (The prefix hyper- is not intended as pejorative.) Clearly there are many economists and psychologists working in, or associated with, the field of cognitive bias who fall into this category. The problem hyperrationalists need to deal with, but tend to avoid, is: what exactly is rational, and what is irrational? How does one (scientifically) define a 'good' or a 'better' decision, versus a 'bad' or 'worse' one?

• Hyperrationalists need to tread carefully when publicising their conclusions, especially when these come labelled as being 'expert' or 'scientific'. The insistence that some things are indisputably correct and others are not, and the ability of some group to claim authority in this matter, is one of the ingredients of totalitarianism. On a less extreme level, the belief that one is part of a group which has acquired the wisdom to see through illusions can lead to a kind of lazy arrogance. 'Oh yes, we know all about that point of view, we can safely dismiss it.' Or: 'Applying the nudge strategy is fine, because we have worked out what is in your best interests – and you don't even need to know about it!'

• Hyperrationalism is part of the post-Enlightenment programme that believes humans can be improved; and that humans can use logic, rationality, critique and science to make better decisions, improve their own lives, and improve society. But many of the assumptions of this programme have been tested and found wanting. Science and technology don't invariably improve people's lives, at least not without costs that are often not apparent initially. The belief that societies can be improved, or made perfect, has ironically led to human suffering on an appalling scale.
   The basic problem, which a long line of rationalists — culminating in the hyperrationalists — have tended to ignore, is three-fold:
(1) How do we define 'better'?
(2) Is 'improvement' going to be undertaken by individuals, or collectively? If collectively, which individuals will be making the decisions about 'improvement' on behalf of everyone else?
(3) How are differing ideas of what is 'better' to be reconciled?
Ignoring these issues leads to a kind of casual authoritarianism, where potential doubts and disagreements are dismissed or ignored, and the 'correct' answer is simply imposed on others, with or without their consent.

• As a result of the biases/irrationality research programme initiated by the psychologists Daniel Kahneman and Amos Tversky (and developed by behavioural economists such as Richard Thaler), and the subsequent pop-economics bandwagon (involving such books as Freakonomics and Predictably Irrational), there is now a general presumption that it has been proven that people are irrational. This is far from being the case. Yet the presumption — now habitually treated as a truism — has passed into popular intellectual mythology.
   Take an article published in 2016 in the online magazine Quanta, and republished by Pocket last year. Entitled 'The Neuroscience Behind Bad Decisions', with the subheading 'Irrationality may be a consequence of the brain's ravenous energy needs', the article simply takes it for granted that humans are irrational, the only thing remaining being to investigate when and why.
   To illustrate its thesis, the article cites research by Paul Glimcher, a neuroscientist at New York University. Glimcher and his colleagues "asked people to choose among a variety of candy bars, including their favorite — say, a Snickers." If offered a small number of competing candy bars along with a Snickers, participants would always choose the Snickers.
But if they were offered 20 candy bars, including a Snickers, the choice became less clear. They would sometimes pick something other than the Snickers [... However, when the experimenter removed all the candy bars] except the Snickers and the selected candy, participants would wonder why they hadn't chosen their favorite.
The results are interesting, and perhaps tell us something about human cognition and decision-making. But like all experiments of this kind they cannot tell us anything about 'irrationality', because there is no objective way of defining it.

• In the case of the NYU experiment, as in many others cited by behavioural economists, 'irrationality' or 'bad decision' is defined in terms of a person's subsequent remorse, or his/her wish to give a different answer after the event. Dan Ariely's book Predictably Irrational is peppered with examples of this kind. E.g. in the evening you make a decision about how much to drink, and the next morning you say that you definitely chose wrong. Then the next evening you repeat the whole cycle — possibly leading others (and/or yourself) to label you a fool.
   In an everyday context, there is nothing controversial about another person commenting, 'you are not acting in your best interests', or 'you are not giving sufficient weight to how you will feel in the morning', or even 'in the morning you are rational but in the evening you are irrational'. If one is being rigorously scientific, however, one cannot make judgments of this kind, and one is not entitled to conclude anything about irrationality, or suboptimal decisions, from the data. The subject may have a good reason for recurrent heavy drinking, which he himself may not even be aware of. Even if he is aware of it he may not tell you, if he doesn't expect it to pass the reasonableness criterion of the average outside observer — let alone that of a scientific investigator.
   The phenomenon of subjects wishing they had made different decisions may tell you something about human psychology, but it cannot tell you anything about human rationality, unless you first assert norms of rationality which have no particular scientific basis. E.g. you impose the requirement that 'for a choice to be rational, one must not express regret about it later'; or: 'for a choice to be rational, it must depend only on material end results and not on the way the options are presented'.
   Of course our drinker may decide to mend his ways, and may do so by deciding his abstaining self is his more rational self. Impartial observers may opine that he has improved his life, by making better choices. What one cannot do is to assert that any of these perspectives is more rational than the one where it seems right for him to go on drinking, and to claim this assertion has scientific backing.

• Identifying one cognitive bias may be useful, as a way of expanding knowledge of psychology — though whether this knowledge can be used to 'improve' anything is a far less straightforward question than many hyperrationalists seem to assume. Collecting together several cognitive biases, and basing a grand theory on your collection, risks generating a bias of your own, given that the individual biases — and your collection — are unlikely to have been selected randomly.
   Daniel Kahneman is happy to let the biases he selects in his book Thinking, Fast and Slow lead him to the conclusion that others should, in general, be more involved in a person's decision-making. Indeed, he goes so far as to argue that rigorous respect for individual autonomy is "not tenable":*
[...] a theory that ignores what actually happens in people's lives and focuses exclusively on what they think about their life is not tenable [...]
However, the biases he chooses to include — or that have previously been picked for experimental investigation, by himself and others — mostly tend towards one particular implication. There are other biases, however, which do not. So far in my reading of his book I have not come across any mention of social biases — biases that arise when people make decisions or judgments in groups, such as the bandwagon effect. It's clear that emphasising such biases would undermine the policy conclusions Kahneman seeks to draw from his data.

* In a book seeking to lecture readers about objectivity and rationality, Professor Kahneman should perhaps have avoided the phrase "not tenable", which sounds like it means "logically inconsistent and hence necessarily false" but in this case merely reflects a subjective reasonableness standard, set by him and others with the same outlook.

• Human psychology is complex. By focusing on findings of a particular kind, it's easy to generate a biased picture. There are experiments purporting to show that, in certain contexts, individuals express overconfidence about their own (erroneous) judgments, and these experiments form part of Kahneman's narrative. But this is only one side of the story. In other contexts, individuals appear unduly willing to devalue their own judgments in favour of those of another person, if that person receives reinforcement either from numbers ('there's more of them than of me') or from some accreditation that supposedly makes him more knowledgeable or otherwise authoritative ('he is a someone, I am a nobody'). The Milgram experiments, where individuals obey an instruction to administer electric shocks in spite of their own misgivings, provide a classic illustration of the latter phenomenon.
   In other words, people may be just as likely to have too little faith in their intuitive judgments (e.g. 'I felt it was wrong to give painful electric shocks to the experimental subject but the scientist from the university told me to go ahead') as too much (e.g. 'I'm certain I remember correctly what happened at the accident'). Highlighting one type of bias at the expense of another in a popular book gives readers – well, a biased perspective.

• The pop-economics bandwagon re bias/rationality can itself be seen as a grand experiment about bias, with the following hypotheses being tested.
Is it possible for the author of a popular economics or psychology book to:
— exploit an emotional bias in readers (call it 'insecurity') in favour of believing they are poorer at making judgments than they thought, and that they would be better off deferring to others, at least in some areas where they previously did not?
— invoke the image of science (experts, experiments, peer-reviewed journals etc.) to create a framing effect, in which people become less critical about what they are reading?
— present information in a way that manipulates readers, so that they believe adequate evidence has been adduced to support a radical thesis, when in fact it has not?
   The reception given to books such as Thinking, Fast and Slow and Predictably Irrational suggests the answer to all three questions is: yes.

Quotation by Daniel Kahneman is from Thinking, Fast and Slow, Farrar Straus & Giroux 2011, p.410.

30 January 2023

de Tocqueville: enervation & stupefaction

When Alexis de Tocqueville published the second volume of his Democracy in America in 1840, democracy was still in its infancy. Some of de Tocqueville's fears and predictions about what it might lead to now seem misplaced. The following extract, however, still strikes a chord.
Above [the multitude in a democracy] stands an immense and tutelary power, which takes upon itself alone to secure their gratifications, and to watch over their fate. [...] For their happiness such a government willingly labors, but it chooses to be the sole agent and the only arbiter of that happiness: it provides for their security, foresees and supplies their necessities, facilitates their pleasures, manages their principal concerns, directs their industry, regulates the descent of property, and subdivides their inheritances — what remains, but to spare them all the care of thinking and all the trouble of living? Thus it every day renders the exercise of the free agency of man less useful and less frequent; it circumscribes the will within a narrower range, and gradually robs a man of all the uses of himself. [...]

After having thus successively taken each member of the community in its powerful grasp, and fashioned them at will, the supreme power then extends its arm over the whole community. It covers the surface of society with a network of small complicated rules, minute and uniform [...] The will of man is not shattered, but softened, bent, and guided: men are seldom forced by it to act, but they are constantly restrained from acting: such a power does not destroy, but it prevents existence; it does not tyrannize, but it compresses, enervates, extinguishes, and stupefies a people, till each nation is reduced to be nothing better than a flock of timid and industrious animals, of which the government is the shepherd. *
It's not known whether Nietzsche read Democracy in America, but his reflections on the 'Last Man', written four decades later, sound a similar note.
Alas! There cometh the time when man will no longer give birth to any star. Alas! There cometh the time of the most despicable man, who can no longer despise himself.
Lo! I show you THE LAST MAN.
"What is love? What is creation? What is longing? What is a star?" — so asketh the last man and blinketh.
The earth hath then become small, and on it there hoppeth the last man who maketh everything small. His species is ineradicable like that of the ground-flea; the last man liveth longest.
"We have discovered happiness" — say the last men, and blink thereby. [...]
No shepherd, and one herd! Every one wanteth the same; every one is equal: he who hath other sentiments goeth voluntarily into the madhouse.
"Formerly all the world was insane," — say the subtlest of them, and blink thereby. They are clever and know all that hath happened: so there is no end to their raillery.

(from Thus Spoke Zarathustra, transl. Thomas Common)
While Nietzsche's version seems more poetic, and perhaps more profound, de Tocqueville's is the more politically astute. Unlike Nietzsche, who talks of "no shepherd", de Tocqueville recognises that a society in which passivity, compliance, and homogeneity have become norms provides enormous scope for some to have power over others.

* Part 4, Chapter 6, transl. Henry Reeve. Via George H. Smith & Marilyn Moore, Individualism.

08 November 2022

Kahneman: pseudoscience on a grander scale

• I thought it would be interesting to alternate our reading of Paul Collier’s The Future of Capitalism with a book by another highly decorated economist: Daniel Kahneman. Professor Kahneman is a well-known name among economics students. Research carried out by him and Amos Tversky in the 1970s highlighted some of the limitations of conventional economic analysis, by showing that choices made by the average person often fail to conform to what economic theory predicts. But in Kahneman's book Thinking, Fast and Slow, this awareness of theoretical limitations is inverted, and spun into a grand narrative about human rationality.
   The work has become hugely popular with intellectuals. "Daniel Kahneman has done us a great service" is a typical comment by a reviewer working in the humanities. Why has the book struck such a chord with intellectuals? Various explanations are possible, including that proposed by another reviewer, claiming that Kahneman is on a par with greats like Freud in advancing understanding of human psychology.
   I suspect one of the principal reasons the book has proved popular is its central thesis, according to which research shows that humans are irrational. Why does this thesis appeal to intellectuals? Because it provides ammunition for the interventionist-paternalist programme, which tacitly assumes that intellectuals should rule society (in the sense of controlling, among other things, education, medicine and cultural output — supposedly in everyone's best interests) rather than leaving things to the decisions of individuals and the markets.
   It's ironic that, having carried out research which usefully demonstrated that some of the assumptions of economic theory about how humans behave were plain wrong, Kahneman in his book assumes another theoretical model of rationality, and in effect says that where theory and practice differ with regard to behaviour, it is practice which is wrong!

• The issue hinges on the concept of rationality, and whether it is possible to define it objectively. The short answer is: no, it's not. There is no behaviour, or belief, about which it is possible to assert irrefutably "this is irrational".
   Take for example a textbook illustration from economics: you are bargaining with a buyer, who could be an employer, for the sale of an object or your own labour, and the buyer offers a choice between you getting £1000 and £1100, all other things being equal. Some would argue that you are definitely irrational if you strongly prefer the £1000 option — after all, you could (they would say) dispose of the extra £100 easily enough. But you may well have reasons for making that choice which cannot simply be dismissed. You may not even be aware of what the reasons are, but it would be impossible to disprove the proposition that ultimately, in some sense, this choice is in your interests. (All sorts of effects could be present here to complicate the picture while being left out of the equation: some of them known about, such as reputational effects; others not.)
   Or take a belief in something supernatural, for which (a sceptic would say) there is no good evidence. How about belief in the existence of God? Richard Dawkins has argued this belief is irrational, but that would make a lot of clever people from history irrational. In any case, the concept of God is too ill-defined to say what would constitute evidence. How conclusive would the evidence have to be? The evidence for global warming, or the carcinogenicity of tobacco, is strong, but not completely conclusive. At what level of evidence does a belief stop being irrational, and start to be rational?
   The point is: the question of what is rational is ultimately subjective. Kahneman, and the psychologists he cites, may have done experiments which comply rigorously with scientific standards and which generate interesting results, but such experiments are — and arguably always will be — incapable of yielding the sorts of conclusion that Kahneman draws.
   Conclusions such as the following; Kahneman is here referring to an experiment in which subjects are asked to express a preference between two types of experience involving mild pain (my italics):
An objective observer making the choice [on behalf of an individual] would undoubtedly choose [differently from the individual].

... The choices that people made on their own behalf are fairly described as mistakes.
Again, there's an irony in the fact that Kahneman at other points in the book criticises evaluations made on a gut basis, in ignorance of reality being more complex, yet is here guilty of the same thing. He appears to think we can obviously dismiss some judgments as being irrational or inadequately thought out, and that some preferences are just wrong. "This person says she prefers strawberry jam because it leaves a nice aftertaste, but she ought to prefer blueberry jam because it is more satisfying while it is being consumed" is a statement Kahneman does not make — but it's analogous to some of the things he does say.

• Some beliefs or preferences may strike a high percentage of ordinary people as bizarre or unjustifiable. Others may strike an even higher percentage of intellectuals as ridiculous. No doubt some intellectuals would like to have conclusive scientific support for rejecting certain beliefs or preferences. Attempts to use science to justify the decisive rejection of one preference over another, however, inevitably involve the abuse of science.
   I don't wish to de-legitimise the concept of 'irrational' as used in an everyday context, but we have to recognise that judgments about rationality are judgments, not scientific findings, and are ultimately not capable of being given irrefutable justification.
   Whether something is true or not may seem simple in some cases (is London the capital of the UK or not?) but most questions do not have easy yes-or-no answers, meaning there is little conclusive basis for assigning irrationality to one answer rather than another.
   With regard to preferences, there is certainly no adequate justification for intruding on individual choices to argue: your preference for A is wrong, our data shows you should be preferring B [*said in severe tone, by figure in lab coat carrying clipboard*]. Believing such intrusions are justified by science is not only wrong, it is dangerous.

• In the next instalment we'll take a look at the experiments on which Kahneman bases his conclusions.

Daniel Kahneman, Thinking, Fast and Slow, Farrar Straus & Giroux, 2011. Quotes are from p.409.

15 September 2022

social mobility

• More on the topic of science and morality, and how getting them muddled can have bad results: a follow-up to my previous article on social mobility.

Social mobility ‘research’: science vs normativity

• I consume a fair amount of Kindle Unlimited fiction and like to give a plug to anything particularly noteworthy. Harriet Smart's Northminster mysteries are set in Northern England during the first years of Victoria's reign, and feature a police officer and a young surgeon, both male, as the main protagonists. Some of the books could do with additional proofing and I occasionally find them a bit grisly for my taste, but there is a touch of genius in the portrayal of early-Victorian society and of the psychology of the characters, as well as in the complexity of the plots.
   Speaking of male protagonists, I recently read a contemporary sci-fi novel in which all the spaceship team were male (though from diverse alien races). Highly unusual but also highly refreshing. I used to find novels refreshing in which plucky heroines proved they were better than their stuffy male counterparts, but it has now become so monotonously regular a feature that it's getting tedious. I came to realise, in reading this unfashionably androcentric sci-fi book, the possible advantage of leaving female characters out altogether. For most contemporary writers, the moment a female principal character is introduced, there appears to be a need on the part of the writer to demonstrate that she is at least as 'good' as the males, in whatever department. (A type of virtue signalling?) She cannot be allocated a merely supportive role, since this might be taken to imply something about being female, and we cannot have that, even if the something is merely statistical. Thus in practice women in fiction are now largely confined to certain predictable roles — just as they were in the past, except that the predictable roles are now different ones. We see this in Amazon's Rings of Power, for example, where (a) it's fairly inconceivable that the position of lead character could have been assigned other than to a female character, and yet (b) a little digital tweaking of the appearance and voice of Galadriel (ably played by Morfydd Clark), and many viewers could surely be fooled into assuming, from the action and dialogue, that it was a male elf that was being represented.


Queen Elizabeth II (1926–2022)

10 August 2022

world leaders in inflation

That America has called its latest piece of pro-state legislation an "Inflation Reduction Act" may well come to be seen, in due course, as the defining irony of Joe Biden's presidency. There seems little in the Act likely to significantly impact inflation in the intended way. We know from history that once inflation hits levels where everyone feels the pinch, it tends to become self-sustaining. We also know that trying to control prices or wages under such conditions tends not to work and can make things worse.
   The time to reduce the risk of inflation was before it started, by being aware of the possibility that massive money creation might eventually — under certain triggering conditions — cause problems; and by not becoming complacent, as Mario Draghi for example seems to have done, in assuming that because the triggering conditions had been absent for a long time, their absence could be relied on to be permanent. (Comparable to the delusion, popular a couple of decades ago, that busts had been eliminated, simply because the boom had lasted longer than usual.)
   It is unfortunate for America, and for the rest of the world, that the time when caution was, and is, particularly needed has coincided with the White House being occupied by one of the most spendthrift Presidents in US history. In sharp contrast to Donald Trump, Mr Biden's approach receives support from an army of pro-state intellectuals. There still appear to be a few financially responsible politicians in America, otherwise Mr Biden's original $4 trillion plan might have been implemented, rather than the c.$2 trillion committed so far.
   Signs of suppressed inflation have been hiding in plain sight for years, prior to the recent wake-up call. (Among other things, progressive shrinkflation and skimpflation; and consistently faster-than-headline inflation in sectors where efficiency gains from the IT revolution — an effect that will run out eventually, and may already be starting to do so — haven't exerted downward pressure on prices.) Either successive Presidents have chosen not to listen to the warnings of their economic advisers or, more likely, those advisers didn't bother with warnings, choosing instead to look the other way.
   Of course America is not unique in this respect. The UK, the rest of Europe and Japan have all adopted comparable programmes of money expansion and ballooning state expenditure. It's conceivable, however, that they might have felt a bit more restrained without the USA's example.
    Governments' best hope at this point for controlling inflation is to commit to fiscal prudence — not to engage in posturing, let alone indulging in even more state largesse.