[SystemSafety] Qualifying SW as "proven in use"
Patrick Graydon
patrick.graydon at gmail.com
Mon Jun 17 14:49:44 CEST 2013
On 17 Jun, 2013, at 14:06, Steve Tockey <Steve.Tockey at construx.com> wrote:
> Calling something "proven in use" is patently absurd. Calling it "suitably demonstrated by adequate real-world use that the probability of serious defects is sufficiently low" would be much more appropriate (however one would need to seriously word-smith the second statement--smile).
Is it even that simple? Unless I am misunderstanding something, ‘proven in use’ seems to be an argument by analogy: X worked in situation Y1, situation Y2 is ‘like’ situation Y1, therefore X will work in situation Y2. (In this characterisation, reliability definitions, calculations, and figures express a meaning and confidence in ‘work[ed]’.)
The problem, if my characterisation is accurate, is that an argument by analogy is inherently weak. I have heard some philosophers go so far as to label all arguments by analogy fallacious. Their reasoning is that the conclusion only holds if you use the right operational definition of ‘like’, but if you knew what that was, you could simply appeal to it directly (e.g. X works when C1, C2, and C3 hold; in Y2, C1, C2, and C3 hold; therefore X works in Y2). They label analogical reasoning fallacious because there is always a stronger direct version.
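To make the contrast concrete: the ‘direct’ version of the argument is just universal instantiation plus modus ponens, and needs no notion of ‘like’ at all. A minimal sketch of that schema (my formalisation, not from the standards being discussed; the names C1, C2, C3, ‘works’, and Y2 are placeholders):

```lean
-- The 'direct' argument schema: if X works whenever conditions
-- C1, C2 and C3 hold, and Y2 satisfies all three, then X works in Y2.
variable {Situation : Type}
variable (C1 C2 C3 : Situation → Prop)  -- operational conditions
variable (works : Situation → Prop)     -- 'X works in situation s'

example
    (h : ∀ s, C1 s → C2 s → C3 s → works s)  -- X works when C1, C2, C3
    (Y2 : Situation)
    (h1 : C1 Y2) (h2 : C2 Y2) (h3 : C3 Y2) : -- Y2 satisfies C1, C2, C3
    works Y2 :=
  h Y2 h1 h2 h3
```

The analogical version, by contrast, replaces the three explicit conditions with an unanalysed ‘Y2 is like Y1’ premise, and its soundness then stands or falls with the operational definition of ‘like’.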
Perhaps, for some systems, we know what ‘like’ means. I am skeptical about whether we know this about many software systems, much less most or all. If someone presented me with such an argument, I would be very curious about how they justify their operational definition of ‘like’.
I haven’t read all of Prof. Ladkin’s PDF document, but it seems to me that he is saying exactly the same thing without using the words ‘analogy’ or ‘fallacy’.
— Patrick
Dr Patrick John Graydon
Postdoctoral Research Fellow
School of Innovation, Design, and Engineering (IDT)
Mälardalens Högskola (MDH), Västerås, Sweden