[SystemSafety] What do we know about software reliability?
yorklist at philwilliams.f2s.com
Wed Sep 16 14:23:10 CEST 2020
Peter,
Apologies for putting my question quite so bluntly - I'm up against the clock on a deliverable so don't have the time I'd like to engage.
I find the discussion very useful for some of the current standards work I'm engaged in.
My concern with such testing is that the temporal aspect massively increases the test space; so, looking at coverage and statistical significance, I can't see how testing in the intended environment can produce anything useful for non-trivial software functionality.
I can see that if you have a 'white box' approach and understand the potential mechanisms of failure, you can direct the testing to useful cases; but with a black-box approach, where there is no knowledge of what temporal dependencies may exist, it is tricky.
You've cited Y2K and other classic known time thresholds - and I've used these in directing functional testing - but other, non-specific thresholds exist, for instance the expiry of security certificates, which have caused common-mode failures in systems (none safety-related as far as I'm aware, but certainly in banking and comms).
I think this subject is worthy of more considered thinking than can be achieved over this forum, and I would love to see a fully fledged debate to see where software reliability has moved on from when I first looked into it. I know Bev made a point of distinguishing between using statistics on historic data sets to measure the distribution of events, and using that information to make useful predictions. As I recall, the constraints required to make it useful predictively were quite tight and difficult to apply in the industry in which I work.
I also recall presentations about software reliability in which the theories prevailing at the time held completely opposed opinions as to whether a greater number of detected defects in software indicated more or less reliable software. If there is now consensus about the direction of the trend, maybe there is hope for assessments of the magnitude.
I have many views on the similarities and differences between hardware, software and system reliability and hope to be able to contribute more once my deadlines are past. In the meantime I hope the discussions remain useful in exploring this topic.
Phil
-----Original Message-----
From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> On Behalf Of Peter Bernard Ladkin
Sent: 16 September 2020 11:33
To: systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] What do we know about software reliability?
On 2020-09-16 11:46 , yorklist at philwilliams.f2s.com wrote:
> If A is dependent on some temporal event, and the testing is conducted
> prior to that event – what does the testing tell you about the outcome after that event?
Such things are real issues, but I am not sure of the value of posing the question so abstractly.
The abstract answer is that the occurrence of the temporal event TE is an environmental predicate:
the characteristic pre-TE or post-TE is part of the environment. So the answer to your question logically is: it tells you nothing at all because the environment has changed.
But that is hardly helpful. Here is a more concrete example. What does statistical testing prior to Y2K tell you about how your system works post-Y2K? Or, what does statistical testing prior to 2038 tell you about the operation of your 32-bit Unix system in 2038?
The answer is: design your statistical tests so that both environmental states are represented. Then you know.
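A minimal sketch of that idea, with entirely hypothetical names and a toy fault: parameterise the system under test on the clock value, then draw test points from both environments (pre- and post-2038), so the statistical tests represent both environmental states. The "system" here just emulates storage of a timestamp in a signed 32-bit field, the classic 2038 wraparound.

```python
import random

INT32_MAX = 2**31 - 1  # last second representable in a signed 32-bit time_t

def log_timestamp(unix_seconds):
    """Toy system under test: emulates storing a timestamp in a
    signed 32-bit field, i.e. it wraps around past 2038-01-19."""
    return (unix_seconds + 2**31) % 2**32 - 2**31

def statistical_test(trials=10_000, seed=42):
    """Draw random test points from BOTH environments:
    pre-2038 (below the 32-bit limit) and post-2038 (above it)."""
    rng = random.Random(seed)
    failures_pre, failures_post = 0, 0
    for _ in range(trials):
        # Pre-2038 environment: timestamp fits in 32 bits.
        t = rng.randrange(0, INT32_MAX + 1)
        failures_pre += (log_timestamp(t) != t)
        # Post-2038 environment: timestamp exceeds 32 bits.
        t = rng.randrange(INT32_MAX + 1, INT32_MAX + 10**9)
        failures_post += (log_timestamp(t) != t)
    return failures_pre, failures_post
```

A test campaign sampling only the pre-2038 environment reports zero failures and tells you nothing about operation after the threshold; sampling both exposes the wraparound immediately.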
Those are known possible-dependencies. You can - and did, and would - shield your system from such effects.
Then there are unknown ones. An easter egg triggered by the clock. A GPS-dependency tracing its way through a library you used.
I don't know of any general answer or prophylaxis in abstract terms. The known dependencies you just handle individually, in whatever way is appropriate. I think you may be able to detect easter eggs by a modified dead-code analysis. I think you handle internal temporal dependencies by performing an impact analysis on clock values. GPS dependencies can be detected through jamming in the environment E. And so on.
PBL
Prof. Peter Bernard Ladkin, Bielefeld, Germany Styelfy Bleibgsnd
Tel+msg +49 (0)521 880 7319 www.rvs-bi.de