[SystemSafety] Fwd: Contextualizing & Confirmation Bias
Denis Besnard
denis.besnard at mines-paristech.fr
Thu Feb 6 20:53:23 CET 2014
Dear all,
I am a silent follower of this list and I apologise for having lost
track of where the confirmation bias came from in this thread.
Anyway, I thought I would humbly inject the following.
Nancy wrote:
> It would be helpful if you would provide references if you are going to
> provide definitions that differ greatly from the widely accepted.
As far as I know, one reference goes back to the Wason selection task.
Peter Wason designed a task where subjects had to decide which of a set
of four cards would have to be turned over to test whether a logical
statement holds.
You can find a recap of this seminal experiment at
http://en.wikipedia.org/wiki/Wason_selection_task
There is controversy over the interpretation of the results but all in
all, it is taken as the first demonstration that people, when reasoning,
have a tendency to select cases that confirm (as opposed to reject)
their hypotheses.
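To make the logic of the task concrete, here is a minimal sketch in
Python. It assumes the usual form of the task (the rule "if a card shows
a vowel on one side, then it shows an even number on the other", with
visible faces E, K, 4 and 7) and simply lists the cards that are worth
turning over, i.e. the ones whose hidden side could falsify the rule:

# Wason selection task: visible faces of the four cards.
visible = ["E", "K", "4", "7"]

def can_falsify(face):
    # A card is worth turning over only if its hidden side could
    # combine a vowel with an odd number, i.e. falsify the rule.
    if face.isalpha():
        # A letter card can falsify the rule only if it is a vowel
        # (its hidden number might then be odd).
        return face.upper() in "AEIOU"
    # A number card can falsify the rule only if it is odd
    # (its hidden letter might then be a vowel).
    return int(face) % 2 == 1

print([face for face in visible if can_falsify(face)])  # -> ['E', '7']

The normatively correct selection is E and 7; most subjects instead pick
E and 4, the cards that can only confirm the rule, which is exactly the
tendency discussed above.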
An example of the above that I saw with my own eyes was that of a
researcher who came to Newcastle University (UK) for a talk (1). He
plugged the VGA cable into his Mac laptop with an adapter. After a
while, the signal was lost and the picture on the screen went black. He
moved the laptop around a bit, touched the adapter and the picture came
back on. This happened twice in a row and he treated the problem in the
same way each time. At that point, it seemed obvious that his adapter
was a bit loose.
Now, had he wanted to know for sure whether the adapter was at fault, a
powerful test would have been to assume that the adapter was OK and that
another cause was at play. He did not even have to try. The case came
uninvited. The screen went black once more but this time, for some
reason, this person accidentally touched the touchpad and the picture
came back on right away. The laptop was simply putting its display to
sleep.
What I get from the above is that testing a hypothesis on the basis of
information that would confirm a suspected fault can lead to flawed
conclusions. A complementary test would be one where the hypothesis is
accepted on the basis of information that fails to reject the suspected
cause.
An industrial example of all this happened in the cockpit of the B737
that crashed at Kegworth in 1989.
http://en.wikipedia.org/wiki/Kegworth_air_disaster
The crew throttled back the engine that they suspected of vibrating.
Very shortly after that, the vibration level decreased, leading the crew
to believe that they had diagnosed the problem correctly.
They had just tested a hypothesis of fault about this engine and
accepted it on the basis of confirming evidence. The counterpart of the
reasoning would have been to try to find a test that would reject this
hypothesis. This could have been, e.g., increasing the thrust on the
same engine and seeing what would happen. Their prediction would have been that
vibrations would resume but chances are they would NOT have. Indeed, the
AAIB report established that the engine they had throttled back was the
healthy one. The drop in vibrations only came from the erratic behaviour
of the faulty engine, the only one they had to finish their fatal flight
with.
The whole process of overestimating the weight of confirming evidence is
a flaw that I have seen in many reasoning tasks. Troubleshooting is a
wonderful source in this respect and I have spent some time studying
it. I suppose other classes of reasoning, such as experimental design,
might be sensitive to the confirmation bias, although I cannot recollect
any examples right now.
Regards
DB
(1) It was in room 911, Comp Science Dept in Claremont tower; hi folks!
--
Denis Besnard
Co-Director of the post-Master's degree in Industrial Safety: FHOMSI
Mines-ParisTech
Rue Claude Daunesse
BP 207
06904 Sophia Antipolis Cedex
FRANCE
Tel. +33 (0)4.93.95.74.86
Email denis.besnard at mines-paristech.fr
http://perso.crc.mines-paristech.fr/~denis.besnard/