16 March 2023

hyper-rationalists and their biases

• There is a category of person who believes the world would be improved if people could be made to act more rationally. Let us call such persons 'hyperrationalists'. (The prefix hyper- is not intended as pejorative.) Clearly there are many economists and psychologists working in, or associated with, the field of cognitive bias who fall into this category. The problem hyperrationalists need to deal with, but tend to avoid, is: what exactly is rational, and what is irrational? How does one (scientifically) define a 'good' or a 'better' decision, versus a 'bad' or 'worse' one?

• Hyperrationalists need to tread carefully when publicising their conclusions, especially when these come labelled as being 'expert' or 'scientific'. The insistence that some things are indisputably correct and others are not, and the ability of some group to claim authority in this matter, are among the ingredients of totalitarianism. On a less extreme level, the belief that one is part of a group which has acquired the wisdom to see through illusions can lead to a kind of lazy arrogance. 'Oh yes, we know all about that point of view, we can safely dismiss it.' Or: 'Applying the nudge strategy is fine, because we have worked out what is in your best interests – and you don't even need to know about it!'

• Hyperrationalism is part of the post-Enlightenment programme that believes humans can be improved; and that humans can use logic, rationality, critique and science to make better decisions, improve their own lives, and improve society. But many of the assumptions of this programme have been tested and found wanting. Science and technology don't invariably improve people's lives, at least not without costs that are often not apparent initially. The belief that societies can be improved, or made perfect, has ironically led to human suffering on an appalling scale.
   The basic problem, which a long line of rationalists — culminating in the hyperrationalists — have tended to ignore, is three-fold:
(1) How do we define 'better'?
(2) Is 'improvement' going to be undertaken by individuals, or collectively? If collectively, which individuals will be making the decisions about 'improvement' on behalf of everyone else?
(3) How are differing ideas of what is 'better' to be reconciled?
Ignoring these issues leads to a kind of casual authoritarianism, where potential doubts and disagreements are dismissed or ignored, and the 'correct' answer is simply imposed on others, with or without their consent.

• As a result of the biases/irrationality research programme initiated by psychologists such as Daniel Kahneman (and taken up after him by behavioural economists such as Richard Thaler), and the subsequent pop-economics bandwagon (involving such books as Freakonomics and Predictably Irrational), there is now a general presumption that it has been proven that people are irrational. This is far from being the case. Yet the presumption — now habitually treated as a truism — has passed into popular intellectual mythology.
   Take an article published in 2016 in the online magazine Quanta, and republished by Pocket last year. Entitled 'The Neuroscience Behind Bad Decisions', with the subheading 'Irrationality may be a consequence of the brain's ravenous energy needs', the article simply takes it for granted that humans are irrational; all that remains, apparently, is to investigate when and why.
   To illustrate its thesis, the article cites research by Paul Glimcher, a neuroscientist at New York University. Glimcher and his colleagues "asked people to choose among a variety of candy bars, including their favorite — say, a Snickers." If offered a small number of competing candy bars along with a Snickers, participants would always choose the Snickers.
But if they were offered 20 candy bars, including a Snickers, the choice became less clear. They would sometimes pick something other than the Snickers [... However, when the experimenter removed all the candy bars] except the Snickers and the selected candy, participants would wonder why they hadn't chosen their favorite.
The results are interesting, and perhaps tell us something about human cognition and decision-making. But like all experiments of this kind, they cannot tell us anything about 'irrationality', because there is no objective way of defining it.

• In the case of the NYU experiment, as in many others cited by behavioural economists, 'irrationality' or 'bad decision' is defined in terms of a person's subsequent remorse, or his/her wish to give a different answer after the event. Dan Ariely's book Predictably Irrational is peppered with examples of this kind. E.g. in the evening you make a decision about how much to drink, and the next morning you say that you definitely chose wrong. Then the next evening you repeat the whole cycle — possibly leading others (and/or yourself) to label you a fool.
   In an everyday context, there is nothing controversial about another person commenting, 'you are not acting in your best interests', or 'you are not giving sufficient weight to how you will feel in the morning', or even 'in the morning you are rational but in the evening you are irrational'. A rigorously scientific investigator, however, cannot make judgments of this kind, and is not entitled to conclude anything about irrationality, or suboptimal decisions, from the data. The subject may have a good reason for recurrent heavy drinking, which he himself may not even be aware of. Even if he is aware of it he may not tell you, if he doesn't expect it to pass the reasonableness criterion of the average outside observer — let alone that of a scientific investigator.
   The phenomenon of subjects wishing they had made different decisions may tell you something about human psychology, but it cannot tell you anything about human rationality, unless you first assert norms of rationality which have no particular scientific basis. E.g. you impose the requirement that 'for a choice to be rational, one must not express regret about it later'; or: 'for a choice to be rational, it must depend only on material end results and not on the way the options are presented'.
   Of course our drinker may decide to mend his ways, and may do so by deciding his abstaining self is his more rational self. Impartial observers may opine that he has improved his life by making better choices. What one cannot do is assert that any of these perspectives is more rational than the one in which it seems right for him to go on drinking, and claim that this assertion has scientific backing.

• Identifying one cognitive bias may be useful, as a way of expanding knowledge of psychology — though whether this knowledge can be used to 'improve' anything is a far less straightforward question than many hyperrationalists seem to assume. Collecting together several cognitive biases, and basing a grand theory on your collection, risks generating a bias of your own, given that the individual biases — and your collection — are unlikely to have been selected randomly.
   Daniel Kahneman is happy to let the biases he selects in his book Thinking, Fast and Slow lead him to the conclusion that others should, in general, be more involved in a person's decision-making. Indeed, he goes so far as to argue that rigorous respect for individual autonomy is "not tenable":*
[...] a theory that ignores what actually happens in people's lives and focuses exclusively on what they think about their life is not tenable [...]
However, the biases he chooses to include — or that have previously been picked for experimental investigation, by himself and others — mostly tend towards one particular implication. There are other biases which do not. So far in my reading of his book I have not come across any mention of social biases — biases that arise when people make decisions or judgments in groups, such as the bandwagon effect. It's clear that emphasising such biases would undermine the policy conclusions Kahneman seeks to draw from his data.

* In a book seeking to lecture readers about objectivity and rationality, Professor Kahneman should perhaps have avoided the phrase "not tenable", which sounds like it means "logically inconsistent and hence necessarily false" but in this case merely reflects a subjective reasonableness standard, set by him and others with the same outlook.

• Human psychology is complex. By focusing on findings of a particular kind, it's easy to generate a biased picture. There are experiments purporting to show that, in certain contexts, individuals express overconfidence about their own (erroneous) judgments, and these experiments form part of Kahneman's narrative. But this is only one side of the story. In other contexts, individuals appear unduly willing to devalue their own judgments in favour of those of another person, if that person receives reinforcement either from numbers ('there's more of them than of me') or from some accreditation that supposedly makes him more knowledgeable or otherwise authoritative ('he is a someone, I am a nobody'). The Milgram experiments, in which individuals obeyed instructions to administer what they believed were electric shocks, in spite of their own misgivings, provide a classic illustration of the latter phenomenon.
   In other words, people may be just as likely to have too little faith in their intuitive judgments (e.g. 'I felt it was wrong to give painful electric shocks to the experimental subject but the scientist from the university told me to go ahead') as too much (e.g. 'I'm certain I remember correctly what happened at the accident'). Highlighting one type of bias at the expense of another in a popular book gives readers – well, a biased perspective.

• The pop-economics bandwagon re bias/rationality can itself be seen as a grand experiment about bias, with the following hypotheses being tested.
Is it possible for the author of a popular economics or psychology book to:
— exploit an emotional bias in readers (call it 'insecurity') in favour of believing they are poorer at making judgments than they thought, and that they would be better off deferring to others, at least in some areas where they previously did not?
— invoke the image of science (experts, experiments, peer-reviewed journals etc.) to create a framing effect, in which people become less critical about what they are reading?
— present information in a way that manipulates readers, so that they believe adequate evidence has been adduced to support a radical thesis, when in fact it has not?
   The reception given to books such as Thinking, Fast and Slow and Predictably Irrational suggests the answer to all three questions is: yes.

The quotation from Daniel Kahneman is from Thinking, Fast and Slow, Farrar, Straus and Giroux, 2011, p. 410.