[SystemSafety] A small taste of what we're up against
Olwen Morgan
olwen at phaedsys.com
Wed Oct 24 15:27:43 CEST 2018
On 24/10/2018 13:32, Derek M Jones wrote:
>
> I'm willing to nail gluteus maximus to the floor.
>
Who is Gluteus Maximus and in what legion does he serve?
I don't have a problem with being rigorous on evidence. It is very
difficult technically, let alone financially, to conduct properly
controlled experiments in this area. That is quite apart from problems
associated with what you actually choose to measure and what you use it
as a proxy for. There is also the problem that disorder in one part of a
project may swamp accurate assessment of efficiency gains in another.
There is, however, a deontic justification for using the best methods
and tools you can lay your hands on. If, in your own experience, using
good methods and tools helps you detect and remove errors before leaving
them to be found in testing (where the detection cost per error can be 2
orders of magnitude higher), then you should press for the use of those
methods and tools. As I think I've said elsewhere on this list, I know
of a project that saved a telco UKP 6.3m by fixing an error diagnosed by
QAC that was not revealed by testing.
Your argument that the problems do not lie in the design of languages is
simply mistaken. Apt language design helps to limit the complexity of
the decision problems whose solution is required to locate and diagnose
errors. Clear evidence of this lies in the severity of the language
restriction required to render C code tractable to proof of freedom from
runtime errors. It's not a matter of deciding the undecidable. Rather it
is about making it fast and cheap to decide what *is* decidable.
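To make that concrete, here is a minimal sketch (mine, not from any real
project; the names and the fixed-size buffer are purely illustrative). The
first function is everyday unrestricted C, in which the absence of an
out-of-bounds read depends on every caller, so a proof of freedom from
runtime errors needs whole-program reasoning and is in general intractable.
The second is the kind of restricted style a MISRA-like subset pushes you
towards, where the bound is explicit and a static analyser can discharge
the check locally and cheaply.

#include <stddef.h>
#include <stdio.h>

#define BUF_LEN 8u

/* Unrestricted style: whether buf[idx] is in bounds depends on every
 * caller, so proving the absence of an out-of-bounds read requires
 * whole-program reasoning. */
int read_unrestricted(const int *buf, size_t idx)
{
    return buf[idx];            /* potential out-of-bounds read */
}

/* Restricted style: fixed-size buffer, explicit bound check, status
 * returned separately from the value.  A static analyser can show the
 * access is in bounds by looking at this function alone. */
int read_restricted(const int buf[BUF_LEN], size_t idx, int *out)
{
    int ok = 0;
    if (idx < BUF_LEN) {
        *out = buf[idx];
        ok = 1;
    }
    return ok;                  /* 1 on success, 0 if idx out of range */
}

int main(void)
{
    int buf[BUF_LEN] = { 0, 1, 2, 3, 4, 5, 6, 7 };
    int value = 0;

    if (read_restricted(buf, 3u, &value)) {
        printf("buf[3] = %d\n", value);
    }
    return 0;
}

The second version is not cleverer than the first; the point is that the
decision problem it poses to a tool is trivially decidable, which is
exactly what makes early, automatic error detection fast and cheap.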
Regardless of whether this all scales up to be reflected in
dependability in use, it still makes sense to do all you reasonably can
to detect errors as early as possible. Even if you end up producing the
same old cr@p, at least it's cheaper cr@p.
Olwen