[SystemSafety] Another question
Olwen Morgan
olwen.morgan at btinternet.com
Thu Sep 20 15:33:58 CEST 2018
I wasn't aware of those quotes but I agree with them entirely. As far
as I am concerned, *if someone finds a bug in my code during testing,
then I shall have f*cked up big-time*.
Actually, I wasn't making quite the same point that MT and CARH were
making. It's easy to see why finding a defect by static analysis is
typically two orders of magnitude cheaper than finding it by testing:
static analysis looks at all the code, whereas a test looks at only the
part of the code that it executes. So you have to have lots of tests AND
you have to code them up in test scripts. That is where the
two-orders-of-magnitude cost escalation occurs.
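By way of a contrived sketch (illustrative only, not code from any real
project), consider a function carrying two latent defects:

#include <stddef.h>

/* Hypothetical example: two latent defects that a static analyser
   reports on every analysis run, but that a test suite only finds if
   someone writes tests that drive exactly the right inputs. */
int average(const int *vals, size_t n)
{
    int sum = 0;
    for (size_t i = 0; i <= n; i++) {   /* off-by-one: reads vals[n] */
        sum += vals[i];
    }
    return sum / (int)n;                /* divides by zero when n == 0 */
}

A static analyser flags the out-of-bounds read and the potential
division by zero immediately; a test suite finds them only if somebody
has written, scripted and executed tests that happen to hit those
inputs.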
My point about coding to minimise the size of test coverage domains is
simply that it helps to reduce test effort by keeping down the number
of required tests. If you're not doing static analysis on your code,
it's insanity not to try the second-best option of minimising the sizes
of test coverage domains.
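To make that concrete, here is a sketch (mine, purely illustrative):
both functions below compute the sign of an integer, but the second has
an empty decision structure, so its white-box coverage domains all but
vanish.

/* Branching form: two decisions give three simple paths and four
   branch outcomes to cover. */
int sign_branchy(int x)
{
    if (x > 0) {
        return 1;
    } else if (x < 0) {
        return -1;
    }
    return 0;
}

/* Branch-free form: no decisions at all, so the MCDC and simple-path
   coverage domains collapse and a handful of boundary-value tests
   (e.g. -1, 0, 1) suffice. */
int sign_branchfree(int x)
{
    return (x > 0) - (x < 0);
}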
That was the motivation behind my proposed extra MISRA C rule: code so
that any test set achieving 100% strong robust boundary-value coverage
also achieves 100% MCDC and 100% simple path coverage. The MCDC and
simple-path test coverage domains are white-box domains, typically
covered at unit-testing time. The boundary-value coverage domains are
black-box domains that tend to get covered at higher levels of
integration testing. If you follow my proposed rule (which, admittedly,
is draconian and not always possible), you ensure that all levels of
testing above unit testing still achieve high levels of white-box
coverage.
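A sketch of the property the rule is after (my own illustration, not
proposed MISRA text): when every decision is a single relational
condition on one input, the black-box boundary-value test points
automatically exercise each condition both ways and traverse every
simple path.

/* Each decision is one relational condition on one input, so the
   boundary-value test points lo-1, lo, hi and hi+1 drive every
   condition both true and false (trivially satisfying MCDC) and
   between them execute all three simple paths. Assumes lo <= hi. */
int clamp(int x, int lo, int hi)
{
    if (x < lo) {
        return lo;
    }
    if (x > hi) {
        return hi;
    }
    return x;
}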
O
On 20/09/18 14:18, Steve Tockey wrote:
>
> Martyn Thomas provided an insightful observation:
>
> “/An engineer wants their system to be fit for purpose and chooses
> methods, tools and components that are expected to achieve fitness for
> purpose. It's poor engineering to have a system fail in testing,
> partly because that puts the budget and schedule at risk but mainly
> because it reveals that the chosen methods, tools or components have
> not delivered a system of the required quality, and that raises
> questions about the quality of the development processes./”
>
>
> Similarly, C. A. R. (Tony) Hoare said:
>
> “/The real value of tests is not that they detect [defects] in the
> code, but that they detect inadequacies in the methods, concentration,
> and skills of those who design and produce the code./”
>
>
> In my own work with software organizations I look at “Rework
> Percentage” (R%): the percent of project labor hours that are spent
> later fixing things that were earlier claimed to be correct but found
> to be deficient. Estimates of rework normally average around 50%. I’ve
> actually measured R% in five different software organizations:
>
> • 350-developer organization measured 57%
> • 60-developer organization measured 59%
> • 125-developer organization measured 63%
> • 100-developer organization measured 65%
> • 150-developer organization measured 67%
>
> All for a weighted average of about 62%.
>
> This means that rework is the single largest contributor to project
> cost and schedule, and it is bigger than all other contributors combined.
>
>
> All I can say based on all of this is that the people in most software
> organizations are seriously delusional...
>
>
>
> — steve
>
>
>
>
> From: systemsafety
> <systemsafety-bounces at lists.techfak.uni-bielefeld.de> on behalf of
> Olwen Morgan <olwen.morgan at btinternet.com>
> Date: Thursday, September 20, 2018 at 5:34 AM
> To: "systemsafety at lists.techfak.uni-bielefeld.de"
> <systemsafety at lists.techfak.uni-bielefeld.de>
> Subject: [SystemSafety] Another question
>
>
> Another baffler:
>
> Lots of developers have software processes that are not truly fit for
> purpose. (A good, simple diagnostic sign is to look at the technical
> effectiveness of error-prevention and -detection measures at each
> stage of the process.) Hardly surprising, then, that in such processes
> a lot of errors are left to be discovered in testing. Now the question:
>
> If testing keeps on taking longer than planned, *why do people pay
> only lip-service to adopting coding styles that seek proactively to
> minimise the size of relevant test coverage domains?*
>
> Obviously, the best course is to strengthen error detection everywhere
> in the process, but actively minimising the size of coverage domains
> makes economic sense if you rely mostly on testing to find bugs.
>
>
> O
>
--
Olwen Morgan CITP, MBCS olwen.morgan at btinternet.com +44 (0) 7854 899667
Carmarthenshire, Wales, UK