[SystemSafety] Another question

Steve Tockey Steve.Tockey at construx.com
Thu Sep 20 15:18:30 CEST 2018


Martyn Thomas provided an insightful observation:

“An engineer wants their system to be fit for purpose and chooses methods, tools and components that are expected to achieve fitness for purpose. It's poor engineering to have a system fail in testing, partly because that puts the budget and schedule at risk but mainly because it reveals that the chosen methods, tools or components have not delivered a system of the required quality, and that raises questions about the quality of the development processes.”


Similarly, C. A. R. (Tony) Hoare said:

“The real value of tests is not that they detect [defects] in the code, but that they detect inadequacies in the methods, concentration, and skills of those who design and produce the code.”


In my own work with software organizations I look at “Rework Percentage” (R%): the percentage of project labor hours spent fixing work that had earlier been claimed to be correct but was later found to be deficient. Typical estimates put rework at around 50% of project effort. I have actually measured R% in five different software organizations:

• 350-developer organization measured 57%
• 60-developer organization measured 59%
• 125-developer organization measured 63%
• 100-developer organization measured 65%
• 150-developer organization measured 67%

All for a weighted average of about 62%.

Because rework consumes well over half of total project effort, it is the single largest contributor to project cost and schedule; it is bigger than all other contributors combined.
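
To make the arithmetic behind R% concrete, here is a minimal C sketch of the bookkeeping; the hour figures are made-up placeholders, not measurements from any of the organizations above.

#include <stdio.h>

/* Rework Percentage (R%): share of total project labor hours spent
   re-doing work products that had earlier been claimed correct.
   The numbers below are illustrative placeholders only. */
int main(void)
{
    const double rework_hours = 6200.0;   /* hours booked to fixing earlier work   */
    const double total_hours  = 10000.0;  /* all labor hours booked to the project */

    const double r_percent = 100.0 * rework_hours / total_hours;
    printf("R%% = %.1f%%\n", r_percent);  /* prints "R% = 62.0%" for these placeholders */
    return 0;
}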


All I can say based on all of this is that the people in most software organizations are seriously delusional...



— steve




From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> on behalf of Olwen Morgan <olwen.morgan at btinternet.com>
Date: Thursday, September 20, 2018 at 5:34 AM
To: "systemsafety at lists.techfak.uni-bielefeld.de" <systemsafety at lists.techfak.uni-bielefeld.de>
Subject: [SystemSafety] Another question


Another baffler:

Lots of developers have software processes that are not truly fit for purpose. (A good, simple diagnostic check is to look at the technical effectiveness of error-prevention and error-detection measures at each stage of the process.) Hardly surprising, then, that in such processes a lot of errors are left to be discovered in testing. Now the question:

If your testing keeps on taking longer than you planned, why do people pay only lip-service to adopting coding styles that seek proactively to minimise the size of relevant test coverage domains?

Obviously, the best course is to strengthen error detection everywhere in the process, but actively minimising the size of coverage domains makes economic sense if you rely mostly on testing to find bugs.
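
As an illustrative sketch (not from the original message) of what shrinking a coverage domain can look like at the code level, consider two C functions that map a hypothetical severity code to a retry limit. The branch-heavy version leaves four decision outcomes to be exercised in testing; the table-driven version reduces the testable logic to a single bounds check, with the mapping held as data that can be reviewed by inspection.

#include <stddef.h>

/* Branch-heavy style: four distinct decision outcomes must each be
   exercised to achieve branch coverage of this function. */
int retry_limit_branchy(int severity)
{
    if (severity == 0)      return 1;
    else if (severity == 1) return 3;
    else if (severity == 2) return 5;
    else                    return 0;   /* unknown severity: no retries */
}

/* Table-driven style: the coverage domain shrinks to one bounds check;
   the mapping itself is data that can be reviewed rather than path-tested. */
int retry_limit_table(int severity)
{
    static const int limits[] = { 1, 3, 5 };
    const size_t n = sizeof limits / sizeof limits[0];

    if (severity < 0 || (size_t)severity >= n)
        return 0;                        /* unknown severity: no retries */
    return limits[severity];
}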


O


