
How Do I Know What To Believe?

It's not as easy as sceptical authors sometimes seem to imply. Martin Parkinson draws a quick sketch map of the difficulties of real-world reasoning.


First appeared in: The Skeptic Vol 18 No 3, Autumn 2005


In my article Power, Arcana, and Hypnobabble (The Skeptic 2004) I avoided technical discussion of the effectiveness of hypnosis by stating that, for the purposes of that article, I was taking a particular book as an authority on the matter. My conscience is nagging me about this because baldly citing something as an authority, with no additional discussion, begs a rather large question. I probably got away with it in this instance because one of the authors of the book I cited is the chair of the Association for Skeptical Enquiry and the other was a well-known sceptic, but the general question of how we judge the trustworthiness and adequacy of the information we are given is by no means a simple one.

This is important because so much of the information that we have to use, even in the most everyday of tasks, is second hand. Antoni Diller, an AI researcher, has been looking at this question as one of the central problems that would have to be solved in order to make it possible to design a useful android, and he has found that the rules governing 'belief-acquisition' are complicated (Diller 2003). This is not surprising, and it is relevant not just to android-designers but to anyone interested in the relations between science and society - and that means anyone concerned with the skeptical project.

Once one moves even a short way away from the really obvious cases (such as breatharianism and creationism) it seems to me that people believe dubious things for reasons that are certainly not stupid, and may even be good. This point is expanded by Gilovich (1993), who describes the reasoning strategies we apply and spells out why, in certain cases, a sound, or unavoidable, strategy can produce poor results. It is often the case, for example, that the reasoning used is perfectly sensible; it's just that we are applying it to incomplete or poor-quality information. But how could one know that one's information is inadequate, or what gives one confidence that one has made appropriate allowances for the gaps?

Let's take my old friend Neuro Linguistic Programming as an example (The Skeptic Volume 16 Number 3, 2003). To recap, I argued that NLP is unlikely to be, as claimed, "as profound a step forward as the invention of language", and I think it probable that many readers will have accepted this conclusion for two reasons. The first is overall context: it appeared in The Skeptic, and even though I am not an experimental psychologist, the editors who accepted the article are. The second reason is internal textual evidence: the literary tone was right, I clearly shared assumptions with my readers, my arguments indicated that I understood scientific reasoning, and my references showed that I had covered a good range of material.

So far so dull, but how would I get on with an intelligent NLP convert out in the wild?

PARKINSON. Aren't you at least a bit embarrassed by the name - surely "Neuro linguistic programming" sounds like a piece of dodgy science fiction?

NLP FAN [puzzled]. No it doesn't. It doesn't sound any 'dodgier' than say, Human Information Processing, a psychology textbook you used as an undergraduate all those years ago (Lindsay & Norman 1977).

PARKINSON. But look here, despite all the jargon - which is just a bunch of linguistic go-faster stripes - it has no academic credibility…

NLP FAN. Oh but it certainly has - there was a series of articles about it in the British Medical Journal… (Walter & Bayat 2003)

PARKINSON. [wincing at the problems of explaining the difference between 'doctor' and 'scientist']. But…

NLP FAN. …and you can study it at Birkbeck College - I'd say the University of London has credibility (Birkbeck College 2003a, 2003b).

PARKINSON. Yes, but those aren't regular undergraduate classes -

NLP FAN. [gleeful] Well, at Portsmouth University you can take postgraduate modules in NLP (University of Portsmouth 2003).

PARKINSON. But that's in the business faculty…

NLP FAN. Even better! Business folk are notoriously hard-headed and only use stuff that's been tested and works - don't you know anything?

You can see I would be in for a long haul, and it would involve explaining not just the 'disciplinary matrix' but how I, a non-academic, can plausibly say anything about these matters. It would also involve explaining why the demonstrations of 'eye-accessing cues' that he will have seen in his training course do not in fact demonstrate anything much. To do this, not only would I have to make quite sophisticated points about the peculiar difficulties of the experimental method when applied to psychology, but I would also have to overcome a very sensible assumption that is inculcated by schooling. When we are shown an experiment in a science class at school, everything about the context makes it clear that it is a demonstration of rock-solid, established fact; we are being shown something valid, and quite properly so. Why should the situation of an NLP seminar be any different? The trainer is obviously not a charlatan, and she has as much confidence as any chemistry teacher. Diller's model of belief acquisition starts with the default that you believe what you are told. This rule is, of course, defeasible, but what is there in this situation to defeat it?
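As it happens, that default-plus-defeaters structure is easy to sketch in code, which seems fitting for a model aimed at android designers. The toy Python fragment below is purely my own illustration, not Diller's actual formalism; the class names, the example defeater and the sample reports are all invented for the occasion. It simply shows the shape of the rule: accept what you are told unless some specific consideration blocks it - and in the NLP seminar, nothing does.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Report:
    claim: str
    source: str

# A 'defeater' is any consideration which, if it applies, blocks the default.
Defeater = Callable[[Report], bool]

@dataclass
class Agent:
    beliefs: List[str] = field(default_factory=list)
    defeaters: List[Defeater] = field(default_factory=list)

    def consider(self, report: Report) -> bool:
        # Default rule: believe what you are told, unless a defeater applies.
        if any(defeats(report) for defeats in self.defeaters):
            return False
        self.beliefs.append(report.claim)
        return True

# Example defeater (invented for illustration).
def source_known_unreliable(report: Report) -> bool:
    return report.source in {"tabloid horoscope column"}

agent = Agent(defeaters=[source_known_unreliable])

# The seminar demonstration sails through: no defeater applies.
agent.consider(Report("eye-accessing cues reveal lying", "confident NLP trainer"))   # -> True

# A claim from a source already flagged as unreliable is rejected.
agent.consider(Report("Mars advises a career change", "tabloid horoscope column"))   # -> False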

Let's now look at an example that actually matters, the MMR vaccine. In early 2003 I visited an exhibit at the London Science Museum which dealt with this issue. I recollect it as a superb example of clear, non-patronising, non-propagandising science communication, and if I were a parent it would have left me in no doubt about the safety of the vaccine (I found the information that there had been a similar scare in the 1970s about the whooping cough vaccine particularly persuasive). However, I would never have guessed from reading the newspapers that the matter was so clear-cut.

Think about the information available to most parents. They have incomplete and poor-quality substantive information given to them by newspapers and the internet, but they also have plenty of quite good-quality information (partly obtained from direct personal observation) about human behaviour. This latter information, which is possibly not even articulated, tells them that the actions of politicians, like everyone else's, are driven by motivations other than the disinterested pursuit of truth and the common weal, however well-intentioned they may sincerely feel themselves to be. It tells them that scientists, like everyone else, can become emotionally attached to ideas, and that therefore "trust me, I'm a scientist" on its own is not necessarily a good reason for belief if there is countervailing information (although I do think that scientists still command a good degree of popular respect - paradoxically, the existence of pseudoscience pays tribute to this). In a situation where we have limited information these sorts of considerations are legitimate. In the light of the poor information that is most readily obtainable, and given what is thought to be at stake, people are making a rational choice in not taking up the triple vaccine.

(The anthropologically aware will have noticed that I have ignored the immensely powerful role of cultural pressure in belief creation. The philosophically alert will have noticed that, throughout this article, I have avoided defining the term 'belief'. These are very valid considerations which there simply isn't space to discuss here.)

I have argued that it is possible, in real-life situations, to acquire a questionable belief for sound reasons, but so what? Thinking about these matters has suggested a (purely personal) answer to the question posed by Tad Clements (The Skeptic 1992, Vol 6 No 6, p. 8):

"Perhaps what we need to aim for are approaches which manage to reveal the absurdity of positions without at the same time making the credulous person feel like an object of ridicule"

The strategy that this suggests to me is one of resolute politeness, or at the very least not using words such as "absurd" or "credulous" until I've made some enquiries as to why exactly a belief is held. (Apart from the arguments presented in this article, if, as has been suggested, individual variation in 'superstitiousness' is a reflection of individual neurochemistry, then it seems a little unfair to call people rude names for a quirk they cannot entirely help.) Mind you, there are limits to this approach, and I'm not sure how long my pose as a born-again nice guy will last. I am certainly no stranger to the pleasures of indignation, and although Indignation and I have recently split up and I no longer return her calls, I do sometimes miss her dreadfully.

References

Birkbeck College, University of London, Faculty of Continuing Education (2003a). Psychotherapy: Themes and Approaches: Neuro-Linguistic Programming (NLP). Retrieved 16 November 2003 from http://www.bbk.ac.uk/study/fce/psychology/idxpsychology.html

Birkbeck College, University of London, Faculty of Continuing Education (2003b). Interpersonal Communication Skills: Diploma: Advanced Study. Retrieved 16 November 2003 from http://www.bbk.ac.uk/study/fce/commskills/commsklicd.html

Diller, Antoni (2003). Designing Androids. Philosophy Now, 42, pp. 28-31.

Gilovich, Thomas (1993). How We Know What Isn't So: The Fallibility of Human Reason in Everyday Life. New York: The Free Press, Simon & Schuster Inc.

Lindsay, P., & Norman, DA. (1977) Human Information Processing. New York. Academic Press.

University of Portsmouth (2003). Index of units at University of Portsmouth at level M. Retrieved 16 November 2003 from http://www.tech.port.ac.uk/tud/db2003/UnivPort_M.htm

Walter, Joanne & Bayat, Ardeshir. (2003) How to use the language of the mind. Retrieved 16 November 2003 from http://bmj.bmjjournals.com/cgi/content/full/326/7389/S83



© copyright 2005 Martin Parkinson, all rights reserved; moral rights asserted. Last updated November 2005