Showing posts with label Pseudo-rationality. Show all posts

04 October 2008

Please: no more ‘no-brainers’

I don't get angry about economics related things often, but I am right now (I could not have posted what I was really thinking). Congress needed to get something done. This is completely disfunctional [sic]. Disgusting. This shows no regard for people who might lose their jobs over this. I know a lot of you think we can get through this — you are nuts to take that chance, this is extraordinarily dangerous — why do you think all the economists are so scared, the ones you've trusted in the past ... (Professor Mark Thoma, after Congress voted against a $700bn bailout plan)
There is nothing like a crisis for the production of journalistic and blogospheric hot air. Much chest-beating, little sober reflection. And isn’t it funny how the ‘experts’ just know what the correct solution is, however unprecedented and colossal the problem.

One might have thought a little humility was in order, given the circumstances. Leaving out theories about bankers’ greed, which are unverifiable and hence useless — and leaving also to one side the question of whether the public has been over-encouraged to take on debt by a ‘you too’ ideology — we can put the present mess down to two specific technical factors:
(A) the Fed kept rates below the market-clearing level after 9/11, resulting in excess borrowing and generating various credit-fuelled bubbles, particularly in housing;
(B) banks have engaged in rocket-science activities which turned out to be insufficiently thought through or double-checked for what-if problems, and which have come back to bite them (and everyone else) on the backside.

Now (B) can be put down in large part to hubris on the part of people who were too ready to believe in their own models, or to trust someone else's just because he or she could show off some clever mathematics. It can also be put down to the mediocratic policy of brushing aside doubters with phrases such as “it’s a no-brainer” or “don’t be a wuss”.

Similarly (A) — credit rates kept artificially low to stave off recession after the tech boom — was no doubt based on faith in trained experts (in this case, macroeconomists) asserting that such a policy would not have adverse consequences. In fact, of course, we see not only adverse consequences but one of the worst post-bubble fallouts in modern history.

You might think this was a time for taking a step back. A time for reckoning we may have to reconsider first principles, given that the micro- and macro-economic activities of the last ten years were clearly based on a pack of dodgy calculations. A time for thinking more cautiously, given that the problems could be said to have arisen from people thinking too quickly, or not thinking at all.

Instead of this, we (the public), reading the pronouncements of the ‘experts’ — from Treasury analysts, to academic economists, to newspaper editors, and econ-bloggers — are now asked to take on trust that it is correct to approve the biggest corporate bailout in history, purely on the basis of their confidence this will make things all right.

“Please”, they say, “if you don’t do this, the world will collapse. We know our mis-analyses are to some extent responsible for the rubbish that has been generated, but we really have thought it through properly this time, and we promise it will work!” US Treasury Secretary Hank Paulson apparently went down on bended knee to beg the Speaker of the House of Representatives to help put the bill through. After Monday's vote went against the plan, the chorus of disgust from the experts was hard to miss. “This Republican Party needs to be burned, razed to the ground, and the furrows sown with salt” opined Professor DeLong. Professor Krugman offered a similarly considered view: “OK, we are a banana republic ... what we now have is non-functional government in the face of a major crisis, because Congress includes a quorum of crazies”.

Various people cite comments by Republican spokesmen to ‘prove’ that those who voted against were motivated by the ‘wrong’ considerations. But what people say and how they vote are two different things, a phenomenon familiar to polling analysts.

Willem Buiter threatened a long list of dire consequences if the bill were to fail: “US stockmarket tanks ... banks will stop providing credit ... banks will collapse ... no bank will be safe ... consumer demand and investment demand collapse ... [we have a] Great Depression”. I am not sure how well Professor Buiter understands markets: (a) the US stockmarket did not continue to fall after its initial negative reaction to the ‘no’ vote, and (b) it did, however, fall sharply after the bill was passed on Friday. Is it possible the market knows better than Professors Buiter, Krugman, Thoma et al what is good for the economy? Perhaps so, not because of the ‘wisdom of crowds’, but because — unlike decisions made by academic economists — choices in the market are made by people with money at stake.

Professor Buiter thinks that those congressmen who rejected the bill on Monday because they queried the effect it would have on the operations of the free market are “mad, but honest and principled. I wish them a good depression”. If the position of some hard-core libertarians on this issue is dotty, as is alleged, are their analyses any more dotty than those of the people under whose approving gaze the financial nest-fouling of the last decade was allowed to take place? While there were a few critics among the academic and other trained analysts who were monitoring goings-on, there were plenty who trumpeted models which supposedly proved all would be well.

According to Professor Buiter, the only other type of nay-sayer on Monday was
populist rabble-rousers or, worse, politicians who know better but follow the whims, fancies and passions of their constituents, even when this means that before long the real economy risks falling off a cliff ... They put re-election before the economic health of the nation and the interests of their constituents ... I wish them a rather nasty depression.
How dare those elected politicians vote according to the “whims, fancies and passions” of their constituents, when they have experts like Professor Buiter to tell them what to do.

We do not know this action will solve the problem. We cannot be certain it won’t make things worse.

The twenty-first century economy has so far been handled, by both government and corporations, like the decision to invade Iraq: plenty of balls, little brains. Time to take stock and reflect is what is needed. More emoting, and knee-jerk, dramatic and irrevocable action, almost certainly are not.

Maybe the bill — now the Emergency Economic Stabilization Act — will save the global economy. Maybe. But please, no more foot-stamping insistence that parliaments should do as they are told. Remember your Politics 101 course? Separation of powers, and bicameral houses which ‘hold up’ legislation that other people think is unquestionably desirable, are generally thought to be a good thing. It's called democracy.

27 July 2008

Humanitarian genocide



The basic point of the mediocracy concept can be summarised as follows:

degradation of culture, law and politics, while maintaining the pretence that it isn't happening, aided by the Orwellian transformation of meanings.

Thus intervention becomes the new liberty, non-art the new art, anti-philosophy is the new philosophy, and so forth. The degradation is ultimately driven, in my view, by an ethos hostile to the individual — notwithstanding the claims of people like Anthony Giddens that we live in a profoundly individualistic society.

Usually the redefining is done surreptitiously. Several decades pass and, hey presto, the meaning of ‘philosopher’ has become: ‘person who is employed by a university to produce verbiage approved by other philosophers, which need not be comprehensible, let alone interesting or useful’. No one actually issued an edict that the definition had changed; the transformation was achieved gradually, by stealth.

Occasionally, however, the redefining process is made explicit. Last year, for example, Professor Ronald Dworkin came out with a book which asserted that the term democracy “doesn't mean just majority rule”, but depends on “whether the level of a community's redistribution of its wealth through taxation is legitimate”.

More recently, we have the strange concept of libertarian paternalism, invented by behavioural economist Richard Thaler and legal scholar Cass Sunstein, and currently all the rage with politicians from Barack Obama to David Cameron. As many bloggers have already pointed out, the term is patently an oxymoron. Paternalism cannot be libertarian, any more than people can be forced, or manipulated, to be free (though some on the Left actually argue that they can).

The state, in coercing an individual, can claim it is doing it:
1) to prevent him injuring another individual,
2) to pay for a good which everyone wants, but from which he cannot be excluded (e.g. defence, street lighting),
3) to force him to redistribute his assets to others (in practice this mostly means to the state, which then purports to provide services to others),
4) to prevent him injuring himself (e.g. banning drugs), or
5) to force him to do himself good (e.g. compulsory education).

Now the interest value of Thaler’s concept, if any, seems to depend on whether the state manipulating people to obtain a desired result should be regarded as less objectionable than coercing them, particularly with regard to headings (4) and (5).

Personally, I prefer spades to be called spades. If we are to be pushed in directions favoured by the elite, let us have it out in the open. The Health Secretary believes people should eat more vegetables, therefore she delivers a speech to this effect, and the government throws a couple of million at a programme of fruit'n'veg propaganda. Or the Home Secretary believes children need to be indoctrinated with ‘British values’ (only the approved kind, of course), therefore we have compulsory ‘citizenship’ lessons in schools.

We may not like it, but at least we can see what is going on. The idea that it is legitimate to manipulate the decisions that individuals make, without them realising, so that they think they are exercising choice, while those in power smile to themselves knowingly, is not one we should encourage.

It is reminiscent of the way putative patient autonomy is promoted these days. For one thing, it is often superficial and cosmetic, in the manner of phoney consultations. Rhetoric about medicine being more ‘patient-centred’ is fairly meaningless, if a patient has no real power in the relationship. But, worse than this, the concept of autonomy is sometimes used explicitly to justify deceiving and manipulating people.

Here, for example, is bioethicist Gary Weiss in a 1985 paper*, ‘Paternalism modernised’, advocating this sort of pseudo-autonomy:
If the client will do better believing he is in control, the physician should encourage this belief and indirectly facilitate the right choice of action.
So the idea of libertarian paternalism — if not the dodgy phrase — is really not that new.

‘Nudging’ people to do what is supposedly desirable may seem innocuous and trivial in the context of (say) being told on your electricity bill how you compare with other users. It is potentially harmful in other contexts. For example, it is supposed to justify having opt-out organ donation, rather than opt-in. But this only makes sense if you leave out a number of other factors which may be more important, e.g. the fact that a mistake in the database has asymmetric effects on an individual depending on whether it is opt-in or opt-out.
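The asymmetry can be made concrete with a hypothetical sketch (the function and scenario are invented purely for illustration): in each regime, consider what happens when a person's registration is lost through a database error.

```python
def organs_taken(regime: str, registered: bool) -> bool:
    """Return True if the person is treated as a donor.

    opt-in : a registration records consent to donate.
    opt-out: a registration records an objection to donating.
    """
    if regime == "opt-in":
        return registered
    if regime == "opt-out":
        return not registered
    raise ValueError(regime)

# Opt-in: a willing donor's record is lost -> a donation is missed,
# but nothing is done against anyone's wishes.
assert organs_taken("opt-in", registered=False) is False

# Opt-out: an objector's record is lost -> organs are taken
# against the person's express wishes.
assert organs_taken("opt-out", registered=False) is True
```

The same database error thus produces a foregone benefit in one regime and a violation of someone's wishes in the other, which is the asymmetry the nudge argument tends to leave out.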

Thaler and Sunstein’s original paper, like the whole cognitive-biases-justify-intervention philosophy, is riddled with dubious assumptions, some of them glossed over, others simply ignored. And the basic empirical findings about behaviour, such as framing, are angled in a particular policy direction, when they could easily have been used to make quite different points. It's a perfect example of pseudo-rationality.

There are various theoretical problems with the idea that an individual can make ‘bad’ choices, since whether a choice is bad depends on what the individual’s objectives are, and no one but the individual himself can know what they are (possibly not even he, at least not consciously). More importantly, there are serious practical objections: why should we think other people are any good at working out what is best for someone, and why should we think they would be motivated to do their best if given power over him, rather than the opposite?

However, it is no surprise that the il-liberal elite are lapping it up. Who do you think are going to be the people who get to decide what the rest of us should be manipulated into doing?



* Journal of Medical Ethics, vol.11, pp154-157.

11 May 2008

Predictably tendentious



Economics professor Dan Ariely, author of the book Predictably Irrational, conducted an experiment in which students were asked to choose a brand of beer in front of their friends. He found that
when people order out loud in sequence, they choose differently from when they order in private ... Overall, those who made their choices out loud ... were not as happy with their selections as those who made their choices privately. (pp.235-236)
Does the fact that people regretted their choice afterwards prove the choice was 'irrational'? Not really. It may just mean that the weighting given to factors other than the taste of the product, e.g. social esteem, is greater before drinking than after. Asking a person afterwards how “happy” he is with a choice he made is no more reliable an indicator of whether the choice was 'rational' than morning-after regret is a sign you didn't do what was (on some level) in your interest. If there are different 'versions' of you with different tastes, it isn't legitimate to assign greater rationality to one version just because it occurs later.
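The weighting point can be illustrated with invented numbers (the brands, scores and weights below are purely hypothetical): if social esteem carries weight at the moment of ordering but not afterwards, the publicly made choice maximises the chooser's objectives at choice time, even though it looks ‘regrettable’ later.

```python
# Hypothetical illustration: utility of a beer choice as a
# weighted sum of taste and social esteem.

def utility(taste, esteem, w_taste, w_esteem):
    return w_taste * taste + w_esteem * esteem

# Two beers: the 'safe' popular brand vs the privately preferred one.
beers = {
    "popular":   dict(taste=5, esteem=8),
    "favourite": dict(taste=9, esteem=2),
}

# Before drinking, friends are listening: esteem counts fully,
# so ordering the popular brand maximises utility at choice time.
before = {name: utility(b["taste"], b["esteem"], w_taste=1, w_esteem=1)
          for name, b in beers.items()}
assert before["popular"] > before["favourite"]   # 13 > 11

# Afterwards, esteem no longer matters: the same choice now ranks
# lower, producing 'regret' without the choice having been irrational.
after = {name: utility(b["taste"], b["esteem"], w_taste=1, w_esteem=0)
         for name, b in beers.items()}
assert after["favourite"] > after["popular"]     # 9 > 5
```

The preference reversal comes entirely from the change in weights between the two moments, not from any defect in the original decision.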

Yet on the basis of experiments such as this, Ariely — echoing other behavioural economists — feels entitled to opine that
we are pawns in a game whose forces we largely fail to comprehend. We usually think of ourselves as sitting in the driver's seat, with ultimate control over the decisions we make and the direction our life takes; but, alas, this perception has more to do with our desires — with how we want to view ourselves — than with reality. (p.243)
Note how the thesis "we are all less rational than we think" may appear to make state intervention more acceptable. If an individual cannot be trusted to decide for himself, there is less reason to treat his wishes as sacrosanct; other people might be capable of making choices on his behalf which are more aligned with his interests.

* * * * *

In a mediocracy, the concept of the individual is that of a physiological mechanism programmed to satisfy low grade impulses such as sex and aggression. Notions such as self, consciousness and free will are considered delusions that must be corrected by education.

The mediocratic individual has no autonomous inner world, i.e. one that is free from being determined by biological and social factors. His choices, while possibly unpredictable in detail, are ultimately trivial. He has no emotions beyond those implanted and sanctioned by society. He is a sink for everything, and a source of nothing.

But that is just the kind of individual that is wanted in a mediocracy. A person who is no threat to its systems, who subordinates himself to its authority figures, and who can be relied on to uphold its beliefs without question.

26 February 2008

The apologist as hero



Steven Pinker has become a bit of a hero for those not entirely seduced by the mediocratic project. He is perhaps the only prominent academic prepared to trumpet the idea that ability is at least partly inherited. This idea has, apparently, become highly controversial, a fact which a visiting Martian might find rather bizarre. When the Financial Times reviewed The Blank Slate, it treated* Pinker as some kind of firebrand radical, referring to his "dangerous work" and suggesting it "would be best if it didn't get into the hands of those who would use it to terrifying ends".

While it's nice that it's still possible — in the US, that is — to receive a university salary while saying things that are at odds with the il-liberal** consensus, perhaps one shouldn't get too carried away. Pinker is effectively writing as an apologist from an ideological context in which denial of the blank slate is supposed to invoke the horrors of Nazism. (Curiously, assertion of it is not supposed to invoke the horrors of Stalinism or Maoism.)

To be acceptable as an academic, and sell his books, he needs to caveat his remarks and do a certain amount of soft-pedalling. Here, for example, from The Blank Slate:
The politics of economic inequality ultimately hinge on a tradeoff between economic freedom and economic equality. Though scientists cannot dictate how these desiderata should be weighted, they can help assess the morally relevant costs and thereby enable us to make a more informed decision. (p.304)
To say that 'we' can be helped to assess the morally relevant costs for other people, as Pinker here seems to be suggesting, effectively means subscribing to a theory that politics should be about deciding what is best for others. A shame that Pinker doesn't try to analyse this theory, rather than simply assuming it.
... if people's sense of well-being comes from an assessment of their social status, and social status is relative, then extreme inequality can make people on the lower rungs feel defeated even if they are better off than most of humanity ... The medical researcher Richard Wilkinson, who documented these patterns, argues that low status triggers an ancient stress reaction ... Wilkinson argues that reducing economic inequality would make millions of lives happier, safer, and longer. (ibid)
An example of pseudo-rationality: in this case, an incomplete analysis that looks cogent but is actually biased. What it leaves out is (a) that we can only reduce ex post (= after the event) inequality by changing the rules of the game, and (b) that this is certain to have its own associated costs, which are left out of the equation. The need to compete for status is no less likely to be an important human drive than the need for status itself. If you make it harder for people to win, that may also generate stress. While there is plenty of research purporting to show the stressful effects of inequality, I doubt there is much (if any) looking into the stressful effects of intervention, restrictions, red tape, or deselection on ideological grounds (the flip-side of affirmative action).

* * * * *

Pinker, like Noam Chomsky, has some interesting things to say about the relationship between mental activity and social convention — another highly politicised topic. The review of Pinker's latest book, The Stuff of Thought, generated some revealing comments. English professor John Carey claimed that Pinker
has no truck with the idea that the language we speak makes it impossible to think certain thoughts, a belief put about by, among others, Nietzsche (“the prison house of language”) and Wittgenstein (“the limits of my language mean the limits of my world”).
Whether Nietzsche should be identified with the Wittgensteinian view of language is doubtful. Scepticism about the limits of language is not the same as a dogmatic assertion that certain problems are beyond the bounds of analysis.
Pinker hoped he had killed this idea off in his previous book The Language Instinct, and is disappointed to find it still around. But its survival is no surprise. We like to feel we are the victims of a disadvantage.
Now as an explanation of why the constrained-by-language theory has proved popular, this seems unconvincing. More plausibly, the theory has appealed for the same reason that the blank-slate theory did: it generates a reductionist model of the individual. A model of the individual as a somewhat irrational robot, deceived about its own mental capabilities, is in turn appealing to many because it seems to legitimate interference by governments and 'experts'.

Journalist Bryan Appleyard interviewed Pinker in the same issue of the Sunday Times as Carey's review. Commenting on the blank-slate orthodoxy, he noted that
Curiously, it was a man now known primarily for his extreme left-wing views who first began to undermine this orthodoxy ... the linguist Noam Chomsky ... For some reason, Chomsky was not, in general, anathematised by students and the left.
"Curious" — that someone overtly left wing should receive less criticism for the same theory than someone politically neutral? Interestingly, Appleyard quotes Pinker taking an apparently contradictory view on the question of whether we are constrained by our mental apparatus.
he admits there may be one final problem, a problem often referred to as “the hard problem” by philosophers. “There may be problems that lie outside the space of thinkable thoughts, and the hard problem of consciousness may be one of them. I very much like the argument of Colin McGinn [a British philosopher] that there may be an academic discipline of problems the human brain is incapable of solving. It could be called philosophy.”
Again, a distinction needs to be drawn between awareness of the possible limitations of analysis, and the assertion that those analyses are doomed to failure. Mediocratic philosophy is characterised by a certain defeatism which was first observed in the later Wittgenstein: a demand that philosophical problems should go away, and leave us alone. Richard Rorty was another academic philosopher in the same line as Wittgenstein and McGinn, arguing that a "post-philosophical culture"
would contain nobody called ‘Philosopher’ who could explain why certain areas of culture enjoyed a special relation to reality.
As I wrote in the book: real analysis cannot be helpful to mediocracy given that it is predicated on deception. This, presumably, is part of the reason why we see a desire for capitulation in mediocratic philosophy, and why certain philosophers compete in their eagerness to announce that the quest for philosophical knowledge is doomed to failure.



* see second section
** see last Q&A

24 October 2007

Apposite cynicism about "rational" debate

We believe if you talk about your [inalienable] rights, you will definitely lose part of them.
- Mahmoud Ahmadinejad

The President of Iran yesterday expressed his scepticism about proposed talks with the European Union regarding his country's nuclear programme.

Now, while I do not mean to endorse Iran's position on nuclear power, I do think Ahmadinejad's interpretation of what is being offered may be correct. Once you start 'discussing' whether an important right or principle (e.g. habeas corpus; torture is wrong; no censorship) should or should not be upheld, you have basically lost the crucial battle of the war.

Inalienable rights, to the extent they exist, and whether they are right or wrong, are not based on rational argument but on feeling. This is something certain people, who go in for strenuous efforts to provide key principles with rational support, seem not to understand. Or perhaps they do, and are in fact ambivalent about those principles?

Once you start having to argue about why torture is wrong, or why liberty from unwarranted surveillance might matter, you know the ground has shifted. Compare contemporary discussions of torture with, say, the hoo-ha stirred up by James Watson's comments about race. On one of the two issues, discussion is out of the question. On the other, the answer is up for grabs. Which is more likely to happen soon in Britain: financing of research into racial differences in IQ, or the legally sanctioned torture of a suspected terrorist?

The point is not the postmodern one that rationality is merely one of many equally valid positions, but that what is presented as rational debate is often a cover for an attempt to force a desired change. (Cf. citizens' juries and similar types of phoney consultation.) There is by now a well-established tradition of pseudo-rationality in the West. Ostensibly, we seem to be dealing with intellectual analysis; in practice, the conclusions are almost always in a particular direction: e.g. in favour of pseudo-egalitarianism and more state intervention, and against Christianity, privacy and any kind of non-collectivised hierarchy. Entire collective blogs are based on this kind of pseudo-analysis.

Let me push this point one step further, though here I am getting speculative. Is it possible that one of the key things which certain non-Westerners hate and fear about the West is precisely this pseudo-rationalism, which tends to question and undermine any values other than its own?

05 September 2007

The pseudoscience of well-being



Is it a school's business to teach "well-being", or other social and emotional skills? The Department for Children, Schools and Families clearly thinks so, and is proposing to make lessons in these subjects compulsory in secondary schools.

The answer to the question should be: depends on whether parents want it. (Or possibly, if you want to be radically libertarian, on whether pupils want it.) The usual test for whether something is wanted is: does the market provide it? At first sight it seems it does, since one of the pioneers of 'happiness lessons' is private school Wellington College and its head Anthony Seldon. It's not clear, however, how much this reflects parental demand, as opposed to supplier pressure — i.e. the ideological preferences of Wellington staff. The majority of British public schools do not offer such lessons as far as I'm aware, and another headteacher has described Seldon as belonging to a "lunatic fringe".

According to management consultancy firm 10Consulting,

for many years now, various employment and business related organisations in the UK, such as the Confederation of British Industry, have been highly critical of employees' lack of (so-called) soft skills. In 2004/5, Sir Digby Jones, then Director-General of the CBI, said of new graduates:
“A degree alone is not enough. Employers are looking for more than just technical skills and knowledge of a degree discipline. They particularly value skills such as communication, team working and problem solving. Job applicants who can demonstrate that they have developed these skills will have a real advantage.”
So you could say that the real point of the SEAL [social and emotional aspects of learning] programme in schools is to start providing kids with the necessary tools to develop their self-awareness, empathy, motivation, social skills and ability to manage their emotions, so that ultimately they can become successful members of the community and successful in the workplace. Makes perfect sense now, doesn't it?

Whether it makes sense depends on how you interpret Sir Digby's comments, and similar complaints about 'soft skills'. Is it that young people can no longer cope with the pressures of everyday life, because modern society is changing so fast, as educational experts would claim? Or is it that mediocracy fosters a mindset in which old-fashioned social skills (politeness, deference, not attacking other people's egos) are seen as redundant? Or is it simply the loss of the bourgeois work ethic, as the director-general of the British Chambers of Commerce seems to be implying in the following quote?

As I go round the country, from West Pembroke to Norwich, every company I speak to is using as much migrant labour as it can get hold of. It is always for the same reasons: workers from Poland come with far better skills and a better attitude — they want to work.

Loss of the work ethic is probably not something which can be reversed with lessons to improve 'emotional awareness'. And, while recovery of politeness and other bourgeois manners theoretically could be, judging by the material I have seen this is not what is going to be aimed at.

Is "well-being" a science, as the University of Cambridge's Well-being Institute maintains? Or just another meretricious humanities discipline, like "women's studies" or "peace studies"? Claims have been made for the efficacy of cognitive behavioural therapy (CBT) as a treatment for mild forms of mental illness, and well-being lessons are said to build on that. I personally wonder whether CBT is just another way to modify behavioural appearances without addressing underlying problems — though I suppose it's less objectionable than zapping people with drugs. In any case, it is unclear what place a putative treatment for depression has in a conventional school environment.

Pioneer of happiness studies Nick Baylis writes about the individual's "relationship with reality".

Reality is an environment that can put up a lot of resistance to our making progress and, consequently, it can build the mental skills for problem-solving. By contrast, escapist fantasy is an environment in which anything is possible, so our problem-solving skills begin to atrophy if we spend too long there. ... it is only our response to problems that determines their net effect upon us, not the problems themselves. *

This strikes me as the kind of stuff which is either trivially true — or highly speculative and potentially false. In either case, not really hard science.

The DCSF links to a website with resources for primary schools, where well-being lessons are already in force. Some of the resources, like the teaching aid shown on the left, are merely trivial, and remind me of management theory which teaches the obvious. "How can we help John to know what he is feeling, children? I know: let's get him to think about it. Then we'll get him to talk about it." (hypothetical illustration)

Others are more obviously ideological: for example, this resource about change, apparently designed to encourage warm feelings towards multiculturalism. "Azis forgot that changes are a part of life." "Azis stopped complaining and learned to see the world differently."

* in Huppert et al. (eds), The Science of Well-being, Oxford University Press 2005, pp.243-244.

Update: The Sunday Times reports on research which concludes that happiness classes "leave children depressed and self-obsessed ... It finds little evidence that the classes, which encourage children to express feelings openly and empathise with others, lead to any long-term improvement in emotional wellbeing or academic success." A summary of the research is available here.

28 August 2007

Wikipedia: NPOV?*

Until now, I have not given much weight to claims that Wikipedia has a 'liberal' (i.e. illiberal) bias. I have taken the view that being criticised by both sides of the culture war (W. is certainly not very popular with most of the il-liberal intelligentsia) is a healthy sign.

This is the first time I have had serious reason to doubt that view. The article "climate change denial" was created on 27 July. It was nominated for deletion on 31 July. The result of the discussion, on 8 August, was to keep.

I have not gone into the issues in detail, and I do not classify myself as either believer, sceptic or denier on climate change. Although I do find Bjorn Lomborg's point — that there is a strong aura of 'protesting too much' around climate change belief — persuasive:
A good saying among lawyers is: if you have a good case, pound the case; if you have a bad case, pound the table. And this is definitely a case of table pounding … which is kind of revealing about their arguments.
But I have two problems with the Wikipedia article:

1) Surely it would be possible to call it "climate change scepticism" and still have plenty of room for pointing out the weaknesses of the sceptics' position, and all the authorities who argue scepticism has no basis.

2) The article has a flavour of contempt, and of certainty of being in the right, which one does not find even in the article on holocaust denial.
Terms such as "deny global warming" and "climate change denial" have been used since 2000 to describe business opposition to the current scientific consensus. ... [Newsweek] reported that "this well-coordinated, well-funded campaign by contrarian scientists, free-market think tanks, and industry has created a paralyzing fog of doubt around climate change."
How has it come about that dismissing those who question the 'consensus' on climate change has become more acceptable (at least in this context) than with regard to whether the Nazis killed millions of Jews? The argument about sources of finance creating bias is relevant only to the extent that there is no corresponding bias in which research university funding bodies will support.

* NPOV = Wikipedia's neutral-point-of-view policy, which founder Jimmy Wales has said is "absolute and non-negotiable".

Update: Some more thoughts on Wikipedia here, from Peter Risdon.

19 August 2007

Enemy Number One?

Richard Dawkins seems to be at risk of turning into a parody of himself. Careful, Professor Dawkins, the mediocratic media doesn't really like intellectuals, and has a tendency to crudify both the things they say and their (the intellectuals') social image. Also, remember that the media's attitude to celebrities is highly ambivalent. They may build you up, then, equally gleefully, collaborate in tearing you down. Still, at least you're (presumably) making some money out of it all.

Dawkins' latest media appearance is in Channel 4's The Enemies of Reason. But I wonder whether, in his rigid but selective cut-offs between science and non-science, he doesn't come out as an enemy of reason himself.

Dawkins talks about "the rigours of logic, observation and evidence" and emphasises "respect for evidence". Yet he fails to produce evidence for his assertions that there is an "epidemic of irrational, superstitious thinking", or that "we live in dangerous times when superstition is gaining ground", or that "primitive darkness is coming back", or that "every day of the week we're encouraged to retreat into the fog of the superstitious past". Without data showing that (e.g.) an increasing proportion of people believe in astrology, these claims are mere hyperbole.

Dawkins shows an experiment on dowsing which proves to be negative. Like most experiments of this kind, the conditions are rather different from those under which the people who claim it works would normally do it. (Several of the dowsers refer to this point, but Dawkins has little time for these "excuses".) In any case, a single negative result doesn’t prove much. *

Yet the refusal to accept from this one experiment that dowsing doesn't work makes Dawkins feel entitled to opine that the dowsers'
state of denial is extraordinary. Even when confronted with hard fact these dowsers prefer not to face up to truth but retain their delusion.
Is this "respect for evidence"? Or just respect for his own prejudices, and hence not much better than the position he attributes to the dowsers?

Invoking illusionist Derren Brown as a source of support may be revealing about the underlying motivations of Dawkins' mission. Brown is shown deceiving members of his audience into believing that he has contacted their dead relatives. An extract is shown from Brown's programme “Messiah”, in which (according to Wikipedia) "Brown tricks three women into believing that he is in contact with deceased loved ones and many tears are shed. Afterwards it was explained to the participants that it was a trick, and those appearing agreed to broadcasting the event." So Brown has effectively made money (big money, rather than the piddling amounts made by most psychics) out of other people's gullibility, even if he enlightens them after the event. However, Brown is (Dawkins tells us) "a celebrated illusionist but also a sceptic". That must make it okay, then.

Why are many people drawn to pseudoscience, as Dawkins complains? Perhaps because they find the de rigueur reductionism of many contemporary scientists and intellectuals oppressive. In The Power of Life or Death I wrote that
It is easy to despise the alternative therapies industry for being unable to offer remedies which are genuinely effective, and many doctors appear to regard it in this way. However, the situation in which a more sympathetic, client‑subordinated service is only available in a completely emasculated form because of legal restrictions is one for which the conventional medical profession is itself largely responsible.
Perhaps there is something similar at work in science, which proselytisers such as Dawkins would do well to bear in mind. The more you insist that science unquestionably excludes interesting exotic things at the margins, the more you may find that you steer your audience into the arms of pseudoscience.

By being polemical and 'passionate', Dawkins comes across as not much better than the breathless enthusiasts for the paranormal that you get on programmes such as Most Haunted. The pressing, cajoling quality of his voice-over verges, at times, on the embarrassing. Is this really designed to sell science to those he deems 'gullible'? If so, I doubt its efficacy.

The suspicion that Dawkins' aim is to sell a particular worldview, rather than the scientific method, is strengthened by his resorting to political correctness in attacking those he deems 'enemies'. Talking about astrology, for example, he compares it to racial stereotyping in being "guilty of facile discrimination".

On the face of it, The Enemies of Reason is addressed to ordinary people who are insufficiently critical about astrology etc. Dawkins' explanation for the attitudes of such people is that "we desperately want to feel there's an organising force at work in our bewilderingly complex world". However, he has so far failed to show why people who don't need to apply rigorous scientific criteria in their everyday lives should choose to adopt his somewhat selective scepticism. Perhaps he will do so in the second instalment.

More thoughts about Dawkins here and here.

* Please note, I have no views about dowsing.

23 July 2007

"Technical specialists"



Friedrich Hayek wrote in The Counter-Revolution of Science:
[In the early nineteenth century] that new type appeared which, as the product of the German Realschule and of similar institutions, was to become so important and influential in the later nineteenth and the twentieth century: the technical specialist who was regarded as educated because he had passed through difficult schools but who had little or no knowledge of society, its life, growth, problems and values, which only the study of history, literature and languages can give.
I believe what Hayek meant here is that (e.g.) a highly trained climatologist would not necessarily know much about history or politics, and that this might be regrettable.

Peter Klein, commenting on this, writes:
In economics especially but also in sociology, political science, psychology, and other social sciences we have trained many generations of such “technical specialists.” Is this wise? Put differently, would a typical PhD student in one of these fields benefit more, on the margin, from taking a course in history or literature or philosophy instead of one more course in quantitative methods?
But this seems to me to be comparing apples and oranges. The nineteenth-century specialists Hayek was referring to in the quote actually had some expertise that was useful, even if they knew about very little outside their field. It is questionable whether the average modern PhD economist has anything much to contribute, apart from dodgy models designed to appear impressive by means of abstruse mathematics, incomprehensible to more than a handful of insiders, and generating no useful conclusions. Probably ditto for sociologists and political scientists. What do these people actually contribute to our understanding of real phenomena?

The answer to Klein's question is, the typical PhD student — and economics in general — would benefit more from taking no quantitative courses whatsoever, and instead spending time thinking about fundamental issues. And not just "at the margin".

PS
Having queried this post by Klein, I should in fairness mention that he does (elsewhere) at least allude to the over-technification of theoretical economics, and therefore represents a distinctly abnormal exception to the rule of contemporary academia. Here, he doesn't quite dare to criticise the 'economist' (= game theorist) Ariel Rubinstein, but at least he draws attention to the fact that for Rubinstein, "economics is primarily an intellectual game, an exercise in puzzle-solving, an attempt to construct clever fables". Fine, but could we please shift people such as Rubinstein (Jean Tirole, Philippe Aghion, Oliver Hart, etc. — i.e. the bulk of the modern Western economics faculty) into a new discipline "mathematical fables for intellectual trivialists", and re-build economics on the ruins of the old neoclassical stuff? Which wasn't that wonderful but at least was reasonably coherent and lucid.

22 July 2007

The bias of cognitive bias theory



This article strikes me as a textbook illustration of the fact that much of post-war academic research in the humanities has a tendentious character. That is to say, it appears to be motivated by ideological considerations. And not in the sense which most humanities academics mean when they talk about "ideology". (They confine their use of this term to criticisms of capitalism.)

What I mean is that such research, although not incorrect, and sometimes even illuminating, appears to be motivated by a desire to make anti-individualist points. And is duly used for this purpose. Sometimes, as in this case, the use may be inappropriate, but it doesn't really matter. The object — to create a different worldview in which the old concept of 'the individual' is undermined — has been achieved.

The article by Professor Norman Siebrasse, on the Oxford-University-associated blog Overcoming Bias, invokes two classic examples of such anti-individual research: (i) the concept of "cognitive bias" and (ii) the Prisoner's Dilemma model.

The idea of cognitive bias is fashionable because it allows society to question the decisions and preferences of the individual. You may observe that it is particularly popular with commentators of a leftist persuasion. By contrast, the idea that the state is biased by the motivations of its agents has hardly been explored. To the extent it has been, by people such as James Buchanan and Eric Nordlinger, the research in question is distinctly unfashionable. Enthusiasm for cognitive bias theory in its usual form could therefore itself be regarded as a form of cognitive bias.

The Prisoner's Dilemma, as I explained here, has received the massive exposure it has — not, as Adam Curtis of The Trap ludicrously suggested, because it promotes the idea that people behave mechanically (although he is right that this mechanistic ideology has been widely peddled by academia) — but because it is one of the few basic models of microeconomics which suggests, at least at first sight, that markets could fail.

Siebrasse invokes these two concepts in critically considering the right to privacy. He complains that "in much of the debate as reported in the media no argument at all is made in favour of privacy — it is just accepted as presumptively good", and that he has "never come across a sound policy argument that justifies a general presumption in favour of privacy".

But to put the argument in this form (i.e. "please give a social justification") already predetermines the type of answer you are going to get. The point is that everyone wants the right to privacy, for themselves, other things being equal. It is a freedom to choose. If you have the right, you can if you wish expose yourself exhibitionistically on the internet (for example). Without the right, it is other people who will determine if you have privacy in practice.

Like any other liberty, privacy should be regarded as a basic right which does not need justification. It may then need to compete against other claims, e.g. the need to detect crime. Demanding that there be utilitarian justifications for the basic right is pointless. If you must pursue that line, take it back to the question of whether there are good enough arguments for liberty per se. Although I happen to think doing so is to be equally avoided, since that too is a type of question the mere posing of which — "can we justify this in terms of the good to society?" — to a large extent shapes the answer.

Having decided that privacy doesn't make a lot of sense, Siebrasse reckons it must be a cognitive bias. This is rather a popular line among contemporary intellectuals. We don't quite approve of what people seem to want? It doesn't fit with normative ideals as formulated by humanities professors? Must be a cognitive bias.

Of the two possible cognitive biases which Siebrasse considers as explanations of the preference for privacy, he leans towards the hypothesis that we have been hardwired for a type of Prisoner's Dilemma.
Suppose that free flow of information is in fact the best social policy. This would set up a classic prisoners’ dilemma: the best case overall is if no one keeps information private, but the best case for me is that I keep my information private and everyone else reveals theirs. Since everyone has the same reasoning, everyone elects to keep their information private, even though free flow of information would be substantively desirable.
Now labelling the preference as a Prisoner's Dilemma sounds good, because it implies a type of tragedy of the commons in which everyone is worse off, but could be made better off by means of intervention by an outside agency (i.e. the state). But I find the use of PD here somewhat implausible. In PD-type situations, individuals are typically aware of what they are missing out on. Those overgrazing a piece of land would surely say, "yes, it's a shame we can't cooperate about this". Or they would create a framework for cooperating; that is after all the purpose of many aspects of civil legislation. But I see no reason to think that most members of the public would regard current information-sharing between them as seriously suboptimal. Not pre-9/11, and not even now. Suggestions that it would be in our interest to reduce privacy seem to come largely from government representatives or their mouthpieces.
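For reference, the game-theoretic structure Siebrasse is invoking can be sketched with a toy payoff matrix (the numbers below are purely illustrative, not anything from his article): keeping information private is each player's best response whatever the other does, yet mutual secrecy leaves both worse off than mutual openness.

```python
# Toy sketch of the privacy "Prisoner's Dilemma" Siebrasse invokes.
# Each player chooses to REVEAL or KEEP information private.
# Payoff numbers are illustrative assumptions, chosen only to give
# the PD ordering: free-riding (4) > mutual openness (3)
# > mutual secrecy (2) > being the lone revealer (1).

REVEAL, KEEP = "reveal", "keep"

# payoffs[(my_choice, other_choice)] = my payoff
payoffs = {
    (REVEAL, REVEAL): 3,   # mutual openness: best total outcome
    (KEEP,   REVEAL): 4,   # I free-ride on others' openness: best for me
    (REVEAL, KEEP):   1,   # I expose myself while others hide: worst for me
    (KEEP,   KEEP):   2,   # mutual secrecy: the equilibrium outcome
}

def best_response(other_choice):
    """Return my payoff-maximising choice, given the other player's choice."""
    return max((REVEAL, KEEP), key=lambda mine: payoffs[(mine, other_choice)])

# KEEP strictly dominates: it is my best response whatever the other does...
assert best_response(REVEAL) == KEEP
assert best_response(KEEP) == KEEP

# ...yet the resulting equilibrium (KEEP, KEEP) yields less total payoff
# than mutual REVEAL, which is what makes it a Prisoner's Dilemma.
assert payoffs[(KEEP, KEEP)] * 2 < payoffs[(REVEAL, REVEAL)] * 2
```

The objection in the paragraph above is precisely that real attitudes to privacy do not obviously fit this ordering: there is little sign that most people regard mutual secrecy as the inferior outcome they are trapped in.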

There may of course be a demand from one section of the population that more information about members of another section (e.g. the rich and famous) be made available. That, however, is not a Prisoner's Dilemma.

22 May 2007

People want you to believe stuff

In my experience, a lot of people find it uncomfortable to be agnostic. And not just about the existence of a god or suchlike. They seem to find it difficult to hold the possibility of two mutually incompatible ideas in their minds at the same time. They much prefer holding an opinion, even if there is very little evidence to decide. ("Go, Christians, go!" "Go, atheists, go!")

Not only that. People find it uncomfortable if you are agnostic. And they will try to attribute a belief to you, even when you haven't given them any reason to do so, or even when you've tried very hard to make it clear you aren't defending one position or another. Celia Green has written about this in Advice to Clever Children — relevant extract can be read here.
Suppose that you are discussing solipsism with somebody. You point out that it is an irrefutable possibility that no one other than yourself has a consciousness. He will immediately start discussing this as if you had asserted your personal belief that no one other than yourself has a consciousness. (He, almost certainly, will have a quite definite personal belief that everyone who seems to have a consciousness actually has one.)
I have noticed this a couple of times in connection with this blog. For example, in criticising Richard Dawkins, or in satirising The God Delusion, many of the comments I got were from people who seemed to either (a) be agreeing with me because they thought I was defending Christianity, or (b) be critical because they thought I was failing to subscribe to [what they think is] the rational option i.e. atheism. (Although there was a slight variation on this theme from Tony F, who accused me of making "the common mistake of thinking that in any particular argument involving two extreme opinions, the truth must necessarily lie somewhere in the middle". In other words, Tony was attributing to me a third opinion, namely that the answer is "half-and-half".)

In response to my mentioning the modern tendency to treat free will and consciousness as "delusions", someone commented that "I don't agree with the notion that we have free will ... but I do agree that there are and should be individuals with individual selves and consciousness".

Er, "should be"? Actually, I have never (in writing) said that "I believe we have free will" or "I believe there are individual selves which have consciousness". As I said in response to this comment, I was careful in the Mediocracy book to confine myself to highlighting/criticising fashionable ideologies, without getting bogged down in whether or not they are true.

When I did a little cartoon some time ago about climate change scepticism, the first comment I got seemed to assume that I was saying I disbelieve the consensus view. The idea that one could strongly oppose the suppression of dissident viewpoints, while not necessarily subscribing to those viewpoints, seems not to enter a lot of people's heads. They just want to argue about which belief one should subscribe to. And, sadly, the blogosphere doesn't exactly set a good example in this respect. Look at the online comments of most blogs, and you will see debate which in many ways is little different from people arguing in a pub about the relative merits of soccer teams.

PS
spotted on Jeremy's blog:
Celia Green aphorism: "People have been marrying and bringing up children for centuries now. Nothing has ever come of it."
Blog reader: "I think life would be very boring and unsatisfying without relationships and children."
Kind of misses the point.