[SystemSafety] Scalability of Formal Methods [was: Logic]
Peter Bernard Ladkin
ladkin at rvs.uni-bielefeld.de
Thu Feb 27 08:04:30 CET 2014
Prof. Peter Bernard Ladkin, University of Bielefeld and Causalis Limited
> On 27 Feb 2014, at 05:41, Patrick Graydon <patrick.graydon at mdh.se> wrote:
>
>
>> On 26 Feb 2014, at 22:33, Peter Bernard Ladkin <ladkin at rvs.uni-bielefeld.de> wrote:
>>
>> Fetzer claims to show that programs can never be proved correct. Here is a simple, trivial counterexample to that claim.
>
> I can’t see how this example actually tells us anything interesting. .... nobody actually cares whether you can ‘prove’ that a program fulfils a spec that says ‘my program might do nothing or anything at all’. So if that is the counterexample, all you’ve succeeded in doing in my mind is demonstrating that I am interested in a narrower class of specifications, namely those that serve some engineering purpose.
Let's go at it slowly. The point of a counterexample is narrowly epistemic, as follows.
Someone (often a mathematician, here a philosopher) claims A and shows you the argument or proof P. Someone else produces a counterexample B to A. Suppose it is obvious that B is correct. Then you know NOT A. And, further, you know that P is incorrect.
So now you know not to go around claiming A on the basis of P, for both are incorrect. This happens in math all the time, sometimes to the very best people on the most interesting work.
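(If it helps to see the shape of the move written down, here it is as a one-line check in a proof assistant; this is just my rendering of the pattern, nothing more:

    -- If B holds, and B refutes A, then A is false;
    -- so any purported proof P of A must contain an error.
    example (A B : Prop) (hB : B) (hRefutes : B → ¬A) : ¬A := hRefutes hB
)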
This is all the more effective when B is as obvious as possible. Making B as obvious as possible works against making B as interesting and (otherwise) informative as possible. If someone has a complicated argument that 2+2=5, it is best to take four oranges and put them on the table in two groups of two. The observer learns nothing from it apart from seeing that it is blindingly obvious that 2 and 2 make 4 altogether, not 5. An argument in Peano arithmetic may be more interesting than oranges on the table, but your observer may not see the lesson as easily.
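(For comparison, the arithmetic version of the oranges is itself a one-liner in a proof assistant; again, just my illustration:

    -- 2 + 2 = 4, checked by computation on the natural numbers.
    example : 2 + 2 = 4 := rfl

Whether the observer sees the lesson as easily as with the oranges is another matter.)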
Bits of P may still sound good to you. Indeed, a modification of A might sound good to you. By all means, make the mod, and rescue what you can of P, if you can.
What we are discussing more generally is all the bits that need to be put together carefully to support a claim that some engineering artefact (design, source code, object code, installed kit including hardware) has been *proven* to fulfil its purpose. That's a piece of engineering metaphysics, to which Fetzer contributed nothing. My counterexample contributes one thing which didn't appear so obvious to many people in the early 1990s but which seems obvious to us here now: prove-correct is a binary relation between two engineering artefacts (prove-correct-with-respect-to) on the left side of the V, not a property of one.
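To make that concrete, here is a toy sketch in a proof assistant. The names are hypothetical and the "spec" is the do-nothing-or-anything spec from my counterexample; the only point is that prove-correct takes two arguments.

    -- Prove-correct-with-respect-to is a relation between two artefacts:
    -- a specification and an implementation.  (Hypothetical toy names.)
    def Spec := (Nat → Nat) → Prop

    def correctWrt (spec : Spec) (impl : Nat → Nat) : Prop := spec impl

    -- The "may do nothing or anything at all" specification:
    -- it constrains the implementation not at all.
    def anythingGoes : Spec := fun _ => True

    -- Any implementation whatever is provably correct with respect to it.
    theorem anyImplCorrect (impl : Nat → Nat) :
        correctWrt anythingGoes impl := True.intro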
PBL