[SystemSafety] NYTimes: The Next Accident Awaits
Patrick Graydon
patrick.graydon at gmail.com
Mon Feb 3 09:07:28 CET 2014
On 3 Feb 2014, at 02:36, Matthew Squair <mattsquair at gmail.com> wrote:
> There is, for example, experimental evidence going back to Slovic and Fischhoff’s work in the ’70s and Silvera’s follow-up work in the ’00s on how the structuring of fault trees can lead to an effect known as omission neglect; see here (http://wp.me/px0Kp-1YN) for further discussion of the effect. I see no reason why such graphical techniques as GSN should be immune to the same problem, or safety cases in the broader sense.
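(For readers unfamiliar with the effect Matthew refers to, here is a toy numerical illustration, with made-up figures, of what omission neglect looks like in the classic pruned-fault-tree studies: probability that normatively belongs to the deleted branches does not get pushed into the “all other problems” branch.)

# Toy illustration (made-up numbers) of the omission-neglect effect.
# A "full" fault tree for "car won't start" spreads probability across named
# branches plus a residual "all other problems" branch. When some branches are
# pruned, the studies found that estimates for "all other problems" rise far
# less than they should; responses look more like a renormalisation over the
# branches still visible (out of sight, out of mind).

full_tree = {
    "battery": 0.30,
    "fuel system": 0.20,
    "ignition": 0.20,
    "starter motor": 0.15,
    "all other problems": 0.15,
}

pruned_branches = {"fuel system", "ignition"}

# Normatively, the pruned branches' probability should move into the residual.
normative = {k: v for k, v in full_tree.items() if k not in pruned_branches}
normative["all other problems"] += sum(full_tree[k] for k in pruned_branches)

# What subjects roughly do: treat the visible branches as exhaustive and renormalise.
visible = {k: v for k, v in full_tree.items() if k not in pruned_branches}
total = sum(visible.values())
observed_like = {k: v / total for k, v in visible.items()}

print("normative   :", normative)      # "all other problems" grows to 0.55
print("observed-ish:", observed_like)  # "all other problems" stays at 0.25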
I don’t see how those experiments (either the original or the follow-up work) are particularly relevant. In all of them, the subjects were given the fault trees and told to use them as an aid to a subsequent task; the experimenters were measuring how the presentation of those trees biased performance in that task. But in none of them was anyone explicitly tasked with checking the given fault trees, as an ISA or a regulator would check a safety case. Because no one took on the role of a skeptical critic, I don’t see the experimental context as particularly analogous to safety-case regulatory regimes.
Moreover, if this were really to weigh in on the question of whether a safety case regime systematically accepts more shoddy systems after regulator/ISA review than a so-called ‘prescriptive’ regime would, the experimental context would have to be clearly more analogous to the context of one of those than to the other. But in *both* we have people presenting information (which might be framed one way or another) to regulators/assessors.
Don’t get me wrong, I am not claiming to have the answer here. But I find the evidence that has been offered to date so weak as to be useless. I second Drew’s call for serious, systematic study of this.
As to arguments that a system is unsafe, could you explain how that would work? Trying to discover all of the ways that a system is dangerous is a good way to find them, just as trying to discover all of the ways that an argument is flawed is how we find flaws in arguments (safety and otherwise). But what are the criteria by which we decide whether something is good enough?
This approach seems to be a case of demonstrating a negative. In an inductive argument, you do this by showing how many possibilities you have examined and discarded. E.g., if I wanted to claim that there are no Ferraris in my bedroom, I could back that up by claiming that I have looked into every space in that room big enough to hold one, in such a way that I would expect to see one if it were there, and that my search revealed nothing. In the case of safety, wouldn’t you have to argue over how you’d gone about looking for hazards (and dealt with all you’d found), how you’d gone about looking for causes of those (and dealt with all of those), how you’d gone about verifying that your system as deployed did what your analysis (and the resulting safety requirements) required, etc.? This sounds an awful lot to me like the standard guidance for safety case structure. Or do you have something else in mind?
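To make that concrete, here is a minimal sketch, in Python with hypothetical names and example evidence, of the kind of structure I have in mind: the search methods, the hazards and causes they turned up, and a check that every identified cause has been mitigated and verified.

# Minimal sketch (hypothetical names) of the inductive structure described above:
# hazards found by a search, causes found for each hazard, and evidence that each
# cause has been mitigated and verified. The claim "no unaddressed hazards" is
# backed by saying how you searched and showing that everything found was dealt with.

from dataclasses import dataclass, field
from typing import Optional


@dataclass
class Cause:
    description: str
    mitigation: Optional[str] = None      # how the cause is controlled
    verification: Optional[str] = None    # evidence the mitigation works as deployed

    def addressed(self) -> bool:
        return self.mitigation is not None and self.verification is not None


@dataclass
class Hazard:
    description: str
    causes: list[Cause] = field(default_factory=list)


@dataclass
class SafetyArgument:
    hazard_search_method: str              # how we went about looking for hazards
    cause_search_method: str               # how we went about looking for causes
    hazards: list[Hazard] = field(default_factory=list)

    def gaps(self) -> list[str]:
        """Return the identified causes that have not been mitigated and verified."""
        return [
            f"{h.description}: {c.description}"
            for h in self.hazards
            for c in h.causes
            if not c.addressed()
        ]


# Example use (made-up content): the argument is only as strong as the search
# methods and the completeness of the evidence behind each leaf.
arg = SafetyArgument(
    hazard_search_method="HAZOP on the deployed configuration",
    cause_search_method="fault tree analysis per hazard",
    hazards=[
        Hazard(
            "Unintended acceleration",
            causes=[Cause("Stuck throttle sensor",
                          "Redundant sensor voting",
                          "HIL test report 42")],
        )
    ],
)
print(arg.gaps())   # empty if every identified cause is mitigated and verified

The point is that the strength of the claim rests entirely on how you searched and on the evidence behind each leaf, which is just what the standard safety case guidance asks you to argue.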
— Patrick