[SystemSafety] Scalability of Formal Methods [was: Logic]
Martyn Thomas
martyn at thomas-associates.co.uk
Wed Feb 26 20:46:19 CET 2014
I suggest that we should treat the behaviour of the compiler, the
libraries, the other system components and (Fetzer's focus) the hardware
as issues that can be addressed and assured separately from two
questions: whether the source code of the software is an accurate
refinement of the specification, and whether the specification describes
a system that, with hindsight, many years later, we would agree was the
system that should have been built.
I'm an engineer. Fetzer was a philosopher (and maybe he still is). My
interest in formal methods is whether they can help me to carry out part
of the system development task better than alternative methods. (By
better, I mean faster, cheaper and with fewer errors, or some
advantageous balance of these.) I'm not interested in absolute
certainties: show me an engineer who says he/she is and I'll show you
someone who isn't an engineer.
My experience is that they can help me to carry out part of the system
development task better than alternative methods, and I believe that I
can explain why this should be unsurprising. But no, using Z and SPARK
obviously won't guarantee a successful outcome if the requirements are
wrong, if the compiler generates bad code, if your microprocessor
doesn't perform the way the compiler-writer thought it would, if you
have imported libraries and other system components that are full of
bugs, or if your version control resembles a waste-tip in a hurricane.
(And I'm not aware that anyone has ever asserted that they would solve
these problems.)
And I believe that there is no upper limit on the size of systems where
such benefits could be obtained, though I can only argue that from first
principles as (obviously) there is an upper limit on my actual experience.
Martyn
On 26/02/2014 18:55, Patrick Graydon wrote:
> Perhaps it has only not come up in this conversation because everyone already understands it, but at the risk of stating the obvious I’ll point out that neither SPARK proofs nor any other formal verification efforts constitute deductive proof that the software will operate as specified when deployed on real target hardware. Fetzer's "Program Verification: The Very Idea" (Comm. of the ACM, 31(9):1048–1063, Sept. 1988) discusses general reasons for this. More specifically and obviously in the case of SPARK, the proofs relate to the source code, not the object code. I have personally seen a compiler take SPARK source code that had been proven to refine a specification and generate dangerously flawed object code. (Testing easily revealed the problem.)
>
> There is also the issue of mixed non-SPARK code in a process and its impact on conclusions that can be drawn from SPARK proofs. There might be cases where an entire executable can be written purely in SPARK, but I’ve seen cases where practical matters forced blending in a little C or plain Ada. For example, the compiler I was using automatically generated calls to memset and memcpy. I can’t see how you could possibly write these in SPARK and a bug in them has the potential to corrupt any memory that the processor’s memory system allows write access to. Because the proof obligations generated by the SPARK tools assume that non-volatile memory locations do not change unless the SPARK code changes them, a memory-stomping bug in non-SPARK code has the power to undermine conclusions about the SPARK code’s behaviour that a complete set of proofs might appear to support.
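To make the point above concrete, here is a minimal C sketch of the kind of
memory-stomping bug described; the names and the missing bounds check are
hypothetical, and whether the stray write actually lands on the flag depends
on link-time data layout, so this is an illustration rather than a
reproducible failure.

    /* Hypothetical illustration: a bug in supporting non-SPARK code can
     * overwrite state that source-level proofs assume only the verified
     * code modifies.  The out-of-bounds write is undefined behaviour and
     * its effect depends on how the linker lays out the data. */
    #include <stdio.h>
    #include <string.h>

    /* Imagine this flag is owned by verified (SPARK-level) code that has
     * been proven never to give it a value other than 0 or 1. */
    static unsigned char interlock_engaged = 1;

    /* Scratch buffer used by plain C support code. */
    static unsigned char scratch[8];

    static void copy_frame(const unsigned char *src, size_t len)
    {
        /* Bug: len is never checked against sizeof scratch, so a long
         * frame stomps whatever the linker placed after the buffer. */
        memcpy(scratch, src, len);
    }

    int main(void)
    {
        unsigned char frame[16];
        memset(frame, 0xFF, sizeof frame);
        copy_frame(frame, sizeof frame);  /* 16 bytes into an 8-byte buffer */
        printf("interlock_engaged = %u\n", interlock_engaged);
        return 0;
    }

No proof about the verified code's own assignments can exclude this; the
assumption that nothing else writes those locations has to be discharged
separately (partitioning, memory protection, scrutiny of the non-SPARK code,
and so on).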
>
> Then there is the issue of floating-point arithmetic. I am far from an expert in verifying this, but my general understanding was that if you turn on the SPARK support for it, what you get is obligations that show that the arithmetic would be correct if the floating-point unit performed infinite-precision rational-number arithmetic. But real floating point units have limited precision. Useful though limited, and I can see why the manuals I have for the SPARK tools direct the reader to enquire about floating-point support.
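As a small illustration of the gap between idealised arithmetic and a real
floating-point unit (plain C and IEEE-754 doubles; nothing here is specific
to SPARK's proof obligations):

    /* Real floating-point is not the infinite-precision rational
     * arithmetic that idealised verification conditions may assume. */
    #include <stdio.h>

    int main(void)
    {
        /* Summing 0.1 ten times: exact rational arithmetic gives 1.0;
         * IEEE-754 doubles do not, because 0.1 has no finite binary form. */
        double sum = 0.0;
        for (int i = 0; i < 10; i++)
            sum += 0.1;
        printf("sum == 1.0 ? %s  (sum = %.17g)\n",
               sum == 1.0 ? "yes" : "no", sum);

        /* Rounding also breaks associativity, which holds for the reals. */
        double a = 1.0e16, b = -1.0e16, c = 1.0;
        printf("(a + b) + c = %.1f   a + (b + c) = %.1f\n",
               (a + b) + c, a + (b + c));
        return 0;
    }

A proof that the source is correct over the reals (or the rationals)
therefore still leaves an argument to be made about rounding, overflow and
the particular floating-point implementation in use.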
>
> If all of this sounds a bit critical of SPARK or formal verification generally, please understand that I am a big fan of the language and deeply appreciate formal verification’s power to find defects. I am also conscious that there are some kinds of problems it is by nature blind to. Appreciation of the classes of defects a given formal verification technique /can’t/ find is needed to plan a V&V approach that covers all of the bases.
>
> — Patrick
>
> Dr Patrick John Graydon
> Postdoctoral Research Fellow
> School of Innovation, Design, and Engineering (IDT)
> Mälardalens Högskola (MDH), Västerås, Sweden
>