[SystemSafety] Collected stopgap measures
DREW Rae
d.rae at griffith.edu.au
Sat Nov 17 11:13:48 CET 2018
Peter,
Boehm's discussion of the models and his empirical work on software estimation are not the same thing, and don't have the same evidence status.
The large body of his work that you were claiming as evidence that controlled comparisons are unnecessary is all about size, time and cost estimation,
not about the efficacy of practices.
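For anyone not familiar with that body of work, a minimal sketch of the kind of size/time/cost model Boehm calibrated empirically may help. This is Basic COCOMO in its organic mode; the coefficients are the published ones, but the 32 KLOC project size is purely hypothetical.

    # Basic COCOMO, organic mode: effort and schedule as a function of code size.
    # Published organic-mode coefficients; the example project size is hypothetical.
    def basic_cocomo_organic(kloc):
        effort = 2.4 * kloc ** 1.05      # estimated effort in person-months
        schedule = 2.5 * effort ** 0.38  # estimated schedule in calendar months
        return effort, schedule

    effort, schedule = basic_cocomo_organic(32)  # hypothetical 32 KLOC project
    print(f"estimate: {effort:.1f} person-months over {schedule:.1f} months")

Calibrating those coefficients against data from past projects is exactly the "collecting data on past estimates to improve future estimates" referred to below; it tells you nothing about whether one practice is more effective than another.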
I have no idea why you think Knight and Leveson's experiment is evidence that comparisons are a bad idea. A comparison is exactly what they did. If anything,
the extent to which a toy experiment with a few groups of students gets cited or used in arguments just shows how thirsty the software engineering world is
for anything that looks like proper evidence.
I remain amused that you are happy to call comparative evaluation of practice an unrealistic ideal just because hardly anyone does it, yet you want to accuse
the entire software world of malfeasance because it won't adopt practices it considers unrealistic.
Drew
________________________________
From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> on behalf of Peter Bernard Ladkin <ladkin at causalis.com>
Sent: Saturday, 17 November 2018 6:41:07 PM
To: systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] Collected stopgap measures
On 2018-11-17 09:06, DREW Rae wrote:
>
> You are misrepresenting both Boehm's work and the Knight-Leveson n-version experiments.
I don't think I am. I stand by what I wrote.
> Boehm was talking about collecting data on past estimates to improve future estimates, not about
> evaluating the generic effectiveness or efficiency of specific practices.
Boehm most certainly talks about the generic effectiveness of specific practices. What on earth do
you think the comparison of the partial-prototyping Spiral Model with Win Royce's Waterfall model
is all about?
> Knight and Leveson were evaluating whether n-version software was prone to common errors in each
> version. Their results say nothing about whether controlled trials are good scientific practice.
Knight and Leveson showed that errors in software written by supposedly independently working teams are
often correlated. Exactly what the confounding factors are in such processes, and how you identify
them, is an unsolved problem (they speculated on one or two). First we have to find out what such
processes could show and how, that is, obtain a much better idea of the possible confounding factors.
*If* we can do that, *then* it might make sense to perform such experiments, if someone somewhere can
find the resources to do so. What does not make sense is to rule out other data analysis because it
doesn't meet some so-far-unattainable ideal that someone has in his or her head.
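To make the statistical point concrete, here is a minimal sketch, using purely hypothetical numbers, of the independence check at the heart of that experiment: if independently developed versions failed independently, coincident failures should occur at roughly the product of the individual failure rates, so observing far more of them than that product predicts is evidence of correlated errors.

    # Hypothetical illustration of the independence check; none of these
    # numbers are from the actual Knight-Leveson data.
    n_tests = 1_000_000          # number of test inputs
    p_a = 0.0007                 # observed failure probability of version A
    p_b = 0.0005                 # observed failure probability of version B
    observed_coincident = 60     # inputs on which both versions failed

    # Under the independence assumption, coincident failures occur at rate p_a * p_b.
    expected_coincident = n_tests * p_a * p_b
    print(f"expected coincident failures under independence: {expected_coincident:.2f}")
    print(f"observed coincident failures: {observed_coincident}")
    if observed_coincident > 10 * expected_coincident:
        print("far more coincident failures than independence predicts: errors look correlated")

What such a check cannot tell you, of course, is which shared factors produced the correlation; that is the confounding-factor problem described above.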
PBL
Prof. Peter Bernard Ladkin, Bielefeld, Germany
MoreInCommon
Je suis Charlie
Tel+msg +49 (0)521 880 7319 www.rvs-bi.de