[SystemSafety] "Ripple20 vulnerabilities will haunt the IoT landscape for years to come"
Olwen Morgan
olwen at phaedsys.com
Wed Jul 1 16:56:45 CEST 2020
I agree entirely. Given full choice, I would use the best static
analysis tools I could get to analyse all possible execution instances
of my code. Then I would use the best testing tools I could get to
implement a thoroughly designed set of tests achieving 100% coverage of
all relevant input-output behaviour (by strong, robust boundary-value
testing) and, simultaneously, of all simple paths through the code
(having deliberately designed the code such that every set of tests
that achieves the said 100% boundary-value coverage also achieves 100%
simple-path coverage).
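A minimal sketch of that discipline (the function, names, and bounds here are invented for illustration, not taken from the thread): if the code is written so that each input partition maps to exactly one branch, then a boundary-value test set doubles as a simple-path test set.

```python
# Hypothetical illustration: a function whose branches correspond
# one-to-one with the input partitions, so a test set achieving 100%
# boundary-value coverage also achieves 100% simple-path coverage.

def classify_reading(value, low, high):
    """Classify a sensor reading against a [low, high] operating band."""
    if value < low:
        return "under-range"
    if value > high:
        return "over-range"
    return "in-range"

# Boundary-value tests: on, just below, and just above each boundary.
# Each test drives exactly one of the three simple paths, so covering
# every boundary covers every path.
assert classify_reading(9, 10, 20) == "under-range"   # just below low
assert classify_reading(10, 10, 20) == "in-range"     # on low boundary
assert classify_reading(20, 10, 20) == "in-range"     # on high boundary
assert classify_reading(21, 10, 20) == "over-range"   # just above high
```

Add an extra branch that no input partition requires and the correspondence breaks: the boundary-value set still passes, but a simple path goes unexercised.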
If the tests passed, my confidence in the assertion of program
correctness would increase in the same way that a scientific experiment
that fails to falsify a hypothesis strengthens confidence in the truth
of the hypothesis.
If any test failed, then I would start looking for a problem somewhere
... not excluding the possibility that the static analysis and test
coverage measurement tools might have got it wrong.
What else could I reasonably do? ...
... and WTF can anyone do if shait-breaned project managers do not let
one approach software quality with comparable rigour?
Olwen
On 01/07/2020 15:36, Steve Tockey wrote:
>
> Quoting Boris Beizer:
>
> "It only takes one failed test to show that the software doesn't work, but
> even an infinite number of tests won't prove that it does."
>
> Quoting Cem Kaner:
>
> "If you think you can fully test a program without testing its response to
> every possible input, fine. Give us your test cases. We can write a
> program that will pass all of your tests but still fail spectacularly on
> an input you missed. If we can do this deliberately, our contention is
> that we or other programmers could do it accidentally."
>
> Quoting Boris Beizer (again):
>
> "Our objective must shift from an absolute proof to a suitably convincing
> demonstration."
>
> Alternatively, quoting me:
>
> "Depending on testing alone -- as the sole means of determining code
> correctness -- is a hopelessly lost cause."
>
> -----Original Message-----
> From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de>
> on behalf of Olwen Morgan <olwen at phaedsys.com>
> Date: Wednesday, July 1, 2020 at 7:17 AM
> To: Martyn Thomas <martyn at 72f.org>
> Cc: "systemsafety at lists.techfak.uni-bielefeld.de"
> <systemsafety at lists.techfak.uni-bielefeld.de>
> Subject: Re: [SystemSafety] "Ripple20 vulnerabilities will haunt the IoT
> landscape for years to come"
>
> Good question.
>
> As far as I can see, all I can possibly know is that a (hopefully
> well-designed) set of tests has failed to falsify the assertion that the
> software meets its specification.
>
> What else could one claim of any experiment?
>
> Olwen
>
>
> On 26/06/2020 21:46, Martyn Thomas wrote:
>> I like to ask "what do you know after your software has passed your
>> tests that you didn't know before - other than that it passes these
>> specific tests run in this specific order today? And if there is
>> anything, how do you know that?"
>>
>> I have never received an answer that addresses the question.
>>
>> Regards
>>
>> Martyn
>>
>>> On 26 Jun 2020, at 20:35, Olwen Morgan <olwen at phaedsys.com> wrote:
>>>
>>>
>>> On 26/06/2020 19:36, paul_e.bennett at topmail.co.uk wrote:
>>>>> A lot of software source code I have seen from others would
>>>>> immediately fall
>>>>> into the rejected category. Mainly through lack of included
>>>>> documentation,
>>>>> very high MCC scores and lack of a clear enough interface.
>>> Arghhh ... another perennial hobby-horse of mine!
>>>
>>> Why do so many software engineers never even think of using test metrics
>>> to help them *minimise* the number of test cases they require?
>>>
>>> I usually try to design my own code so that every set of test cases
>>> that attains 100% boundary value coverage also attains 100% simple path
>>> coverage. It means that you have only the number of simple paths you
>>> need to make the relevant logical distinctions among the input
>>> conditions (easy to achieve in functional languages and, alas, easier
>>> still to fail to achieve in imperative languages).
>>>
>>> But when I suggest this to other software "engineers", they usually ask
>>> me what "boundary value coverage" and "simple path" mean. ...
>>>
>>>
>>> ... and they wonder why I fantasise about their suffering long and
>>> excruciating deaths ... ?
>>>
>>>
>>> Brooding in dark, technostalinist hyperbole,
>>>
>>> Olwen
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> The System Safety Mailing List
>>> systemsafety at TechFak.Uni-Bielefeld.DE
>>> Manage your subscription:
>>> https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety