[SystemSafety] Collected stopgap measures
Peter Bernard Ladkin
ladkin at causalis.com
Sat Nov 17 09:41:07 CET 2018
On 2018-11-17 09:06, DREW Rae wrote:
>
> You are misrepresenting both Boehm's work and the Knight-Leveson n-version experiments.
I don't think I am. I stand by what I wrote.
> Boehm was talking about collecting data on past estimates to improve future estimates, not about
> evaluating the generic effectiveness or efficiency of specific practices.
Boehm most certainly talks about the generic effectiveness of specific practices. What on earth do
you think the partial-prototyping Spiral Model and its comparison with Win Royce's Waterfall model
are all about?
> Knight and Leveson were evaluating whether n-version software was prone to common errors in each
> version. Their results say nothing about whether controlled trials are good scientific practice.
Knight and Leveson showed that errors in software written by supposedly independently working teams
are often correlated. Exactly what the confounding factors are in such development processes, and how
you would identify them, is an unsolved problem (they speculated on one or two). First we have to find
out what such experiments could show and how; that is, we need a much better idea of the possible
confounding factors. *If* we can do that, *then* it might make sense to perform such experiments,
should someone somewhere find the resources to do so. What does not make sense is to rule out other
data analysis because it doesn't meet some so-far-unattainable ideal that someone has in his or her head.
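
(To make the statistical point concrete: below is a minimal sketch in Python, with entirely made-up
failure rates, of how a shared source of difficulty in the inputs drives the coincident-failure rate
far above what the independence assumption would predict. It illustrates the confounding-factor issue
only; it is not a reconstruction of Knight and Leveson's actual experiment or analysis.)

# Sketch: two "independently developed" versions whose failures are both driven
# by how hard each input is. All numbers are hypothetical, chosen for illustration.
import random

random.seed(1)

N = 1_000_000          # number of test inputs (hypothetical)
P_HARD = 0.01          # fraction of inputs that are intrinsically hard (assumed)
P_FAIL_HARD = 0.2      # per-version failure probability on hard inputs (assumed)
P_FAIL_EASY = 0.0005   # per-version failure probability on easy inputs (assumed)

fail_a = fail_b = both = 0
for _ in range(N):
    hard = random.random() < P_HARD
    p = P_FAIL_HARD if hard else P_FAIL_EASY
    a = random.random() < p     # version A fails on this input
    b = random.random() < p     # version B fails on this input
    fail_a += a
    fail_b += b
    both += a and b

p_a, p_b, p_both = fail_a / N, fail_b / N, both / N
print(f"P(A fails)              = {p_a:.6f}")
print(f"P(B fails)              = {p_b:.6f}")
print(f"independence predicts     {p_a * p_b:.7f}")
print(f"observed P(both fail)   = {p_both:.7f}")
# The observed joint failure rate comes out far above the independence
# prediction, because input difficulty acts as a shared (confounding) factor
# on both teams' versions even though the teams never communicated.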
PBL
Prof. Peter Bernard Ladkin, Bielefeld, Germany
MoreInCommon
Je suis Charlie
Tel+msg +49 (0)521 880 7319 www.rvs-bi.de