[SystemSafety] Correctness by Construction - tortoises all the way down
Les Chambers
Les at chambers.com.au
Mon Jul 20 23:01:39 CEST 2020
Michael
From minute to minute human beings model the real world in patterns. We
pattern-match, then behave accordingly. If I met you I would quickly match you
as a non-threatening human, not a tiger, and compute that there was no need to
run. If you came to me in pain, unable to track your money, I would quickly
solve your problem with the double-entry accounting pattern (origin circa 1300 AD).
I once automated a latex reactor system with a pattern developed in the
Netherlands. Christopher Alexander modelled built solutions with architectural
patterns; refer to his book The Timeless Way of Building. And he was dealing
with the most abstract phenomena of all: human feelings - what built
environments make us feel alive.
You could say pattern matching is a class of inductive reasoning, a
mathematical concept. For example, the state engine is a formalism; but is not
the process of matching a real-world problem to the state engine as its
solution a formalism in itself?
The math is a formal process whereby we reach truth beyond reason. Working
from inputs, we mechanically apply mathematical processes to compute outputs.
All real-world problems belong to classes that have properties and participate
in relationships, the is-solved-by relationship being the one that leads us to
the solution. The core requirement of all control systems is that the
controlled object must be confined to a set of known, predictable, safe states
and must transition between those states in a predictable way. So the control
systems engineer pulls out his book of equations and does a match to the state
engine. It's mechanical and highly predictable.
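That core requirement can be sketched in a few lines. This is my own
illustrative example, not from any project mentioned here - the state and
event names are invented - but it shows the essential discipline: the
controlled object can only ever occupy a known safe state, and can only move
between states the transition table predicts.

```python
# Illustrative sketch: a controlled object confined to known, predictable,
# safe states. State and event names are invented for the example.

SAFE_STATES = {"IDLE", "FILLING", "REACTING", "DRAINING"}

# Transition table: (current state, event) -> next state.
TRANSITIONS = {
    ("IDLE", "start_fill"): "FILLING",
    ("FILLING", "full"): "REACTING",
    ("REACTING", "done"): "DRAINING",
    ("DRAINING", "empty"): "IDLE",
}

class Reactor:
    def __init__(self):
        self.state = "IDLE"

    def handle(self, event):
        nxt = TRANSITIONS.get((self.state, event))
        if nxt is None:
            # Unknown (state, event) pair: refuse to enter an unmodelled state.
            raise ValueError(f"illegal transition: {self.state} on {event}")
        assert nxt in SAFE_STATES
        self.state = nxt
        return self.state
```

The point of the table is exactly the predictability claimed above: every
reachable state and every permitted transition is enumerated before the
system runs.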
Just a thought.
Les
PS: To throw a grenade into my own argument consider Tesla motor vehicles
with seven neural nets.
Ask 'em, they'll tell ya: "We don't actually know how our neural networks
differentiate a human being from a paper bag, but they do. Kinda. Trust us."
Hail Mary full of grace ... help us!
> Les (and John):
>
> Many thanks for your very interesting and illuminating email.
>
> Your story approach is rich, and illuminates much human behaviour in
software engineering. I liked the story (read in a quick peep at
www.specnative.com) of the manager who ignores risks of system failure to
avoid what he thought far worse---late delivery to the market. There are surely
many stories about the 'small boy with hammer' syndrome, about resolute
persistence in error because the whistle blower was not a paid-up member of
the imperial clothing guild, and about every kind of human frailty in play in
system development. I admire the breadth of your emphasis on human factors,
embracing everyone involved from the initial project motivation to the final
acceptance of the finished system; and I like the power of the story metaphor to
instruct and motivate.
>
> But my chief point is more technical, in the broadest sense of that term.
Formal models of the governed world of a cyber-physical system, and of its
behaviour, are necessarily imperfect---simply because the physical and human
world is itself non-formal. Formalism can avoid errors of logic, and can support
powerful techniques for verification and simulation. But logical consistency of a
formal model does not guarantee fidelity to its subject matter. A realistic cyber-
physical system is potentially subject to an unbounded zoo of instances and
occasions of failure in operation when the reality and the formal model disagree.
Minimising these instances and occasions of disagreement, or mitigating them,
demands a serious modelling discipline explicitly focused on exactly that task.
The challenge of devising or evolving such a discipline is similar in nature to the
challenge of avoiding certain kinds of programming error, as addressed, for
example, by SPARK Ada.
>
> So my questions about "Correctness by Construction"---and about the
accompanying form and content of requirements and specifications---were
motivated not by mischief, but by honest puzzlement at the use of the word
"correctness". I had supposed that the word connoted the certainty of
mathematical proof, but this certainty cannot be attained in developing a cyber-
physical system. As Einstein wrote: "as far as the propositions of mathematics
refer to reality, they are not certain; and as far as they are certain, they do not
refer to reality."
>
> -- Michael
>
> www.theworldandthemachine.com
>
> > On 17 Jul 2020, at 16:22, SPRIGGS, John J <John.SPRIGGS at nats.co.uk>
wrote:
> >
> > ... and if you want to follow up on Les's foray into story theory, I
recommend David Baboulene's story book:
http://www.amazon.co.uk/gp/product/0955708923
> >
> > I proofread it, but do not get royalties…
> >
> > Is proofreading a sub-process of Construction by Correction?
> >
> > From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de>
On Behalf Of Les Chambers
> > Sent: 16 July 2020 02:23
> > To: 'Michael Jackson' <jacksonma at acm.org>
> > Cc: systemsafety at lists.techfak.uni-bielefeld.de
> > Subject: Re: [SystemSafety] Correctness by Construction - tortoises all the
way down
> >
> > Michael
> > Thank you for initiating this conversation. Somewhat mischievously I
> > suspect. But it's good mischief. Achieving correctness is essential to any
> > successful automated system.
> >
> > "the limits of correctness" is an example of the problem of infinite regress
> > where the truth of proposition P1 requires the support of proposition P2,
> > the truth of proposition P2 requires the support of proposition P3, and so
> > on, ad infinitum.
> >
> > It is well known in systems engineering that correctness means compliance
> > with some pre-existing specification (read: proposition P). Most projects
> > have a long chain of them - concept of operations, system requirements,
> > software requirements, software system architecture, detailed design ... -
> > the correctness of the system depending on each one of these artefacts
> > being complete, correct, unambiguous and verifiable. The verifiability of a
> > specification is a direct function of its formality, formal
> > languages/models being the best we have at the moment. In my experience
> > there are levels of formality. In my view you've reached complete formality
> > when you can press a button and generate code from the formal
> > specification. This is true for the finite state engine: it can be both
> > understood by a human being and compiled into executables.
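To make "press a button and generate code" concrete, here is a minimal
sketch of my own (the state names are invented): the formal specification is
a plain transition table, and a small generator emits an executable step
function directly from it. Real code generators are far richer, but the
principle is the same - the specification itself is the source.

```python
# Hedged sketch: generating executable code from a formal finite-state
# specification. SPEC is the formal artefact; generate() is the 'button'.

SPEC = {
    ("STOPPED", "start"): "RUNNING",
    ("RUNNING", "halt"): "STOPPED",
}

def generate(spec):
    """Emit Python source for a step function implementing the spec."""
    lines = ["def step(state, event):"]
    for (state, event), nxt in spec.items():
        lines.append(f"    if state == {state!r} and event == {event!r}:")
        lines.append(f"        return {nxt!r}")
    lines.append("    raise ValueError('transition not in specification')")
    return "\n".join(lines)

namespace = {}
exec(generate(SPEC), namespace)   # 'press the button'
step = namespace["step"]
```

Note that the same table is readable by a human and compilable by the
machine, which is exactly the dual property claimed for the finite state
engine above.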
> >
> > The core of our problem is that this proposition chain is not infinite. It
> > ends at the real world - a phenomenon that tech actors find difficult to
> > formalise. Don't feel bad, humanity has struggled with this problem for
> > centuries.
> > I cite an 1854 transcript of remarks by preacher Joseph Frederick Berg
> > addressed to Joseph Barker:
> > "My opponent's reasoning reminds me of the heathen, who, being asked on
what
> > the world stood, replied, "On a tortoise." But on what does the tortoise
> > stand? "On another tortoise." With Mr. Barker, too, there are tortoises all
> > the way down. "
> >
> > With requirements we have reached "the limits of correctness" on the back
of
> > the root tortoise. Where to from here?
> >
> > The beginnings of a solution to "How can software developers reason
> > reliably about the physical world?" lie in a deeper study of how humans
> > engage with metaphor. (Definition: a metaphor describes this in terms of
> > that, where this is new and that is familiar - Robert Frost.)
> >
> > Marooned on the back of your root tortoise, it's instructive to recast the
> > problem statement.
> > "How can software developers reason reliably about the physical world?"
> > becomes "How can software developers help their users reason reliably
> > about the physical world?" In my experience a large proportion of
> > ambiguities, errors and omissions arise from users who don't know what
> > they want at the level of detail required to build an automated system.
> > Technologists fill in the gaps, sometimes erroneously. In this way we can
> > miss the fundamental behaviours of the system that justify its existence.
> >
> > The solution lies in shared metaphors: concepts understood by both the
> > non-technical subject matter expert and the technologist that provide a
> > framework for discourse and improve the precision and the completeness of
> > the specification. A good metaphor helps people visualise a problem,
> > sparking inspiration as they wander through its n-dimensional space. Good
> > metaphors also colonise the subconscious, which proceeds to work on the
> > problem while you're asleep. The truest statement I have ever heard is,
> > "They never remember what you said, only what they were visualising while
> > you were talking." So give them something to imagine.
> >
> > The state engine is a good example. With a minimal amount of training I
> > have taught plant operators without a high school education the basic
> > concepts of the finite state model. This metaphor then helped us eliminate
> > the ambiguities, errors and omissions that typically occur in rambling
> > free text.
> >
> > But the state engine is not a solution to everything (Steve Tockey breathes
> > a sigh of relief).
> > There is, in fact, a higher-order metaphor that does model real life
> > facilitating informed reflection on the core functionality required. The
> > fascinating thing about this metaphor is that knowledge of it is tacit in
> > all humans, not explicit. They can't tell you why they understand it; they
> > just know truth when they see it (heard that before?). Applied thoughtfully,
> > this metaphor can not only govern development of early concepts but also
> > inform the way a project is delivered. And if your project does not follow
> > its dictates the probability of failure is high.
> >
> > It is, of course, the monomyth: the overarching pattern that describes the
> > underlying architecture of most stories you have ever heard and most
> > movies you have ever watched.
> >
> > Guys, if you are still with me please suspend judgement for a few more
> > lines. What follows may explain why Microsoft trains all its people in story
> > theory.
> >
> > To engage with this metaphor you need to accept that embarking on a
complex
> > software and electronic systems project casts you as a character in a story
> > that will, whether you like it or not, unfold as neatly described by the
> > monomythical pattern. I've further developed these ideas here:
> > https://www.specnative.com/story/Story-Tutorial-V4.0.pdf
> >
> > Here is one use case:
> > All stories feature a hero who embarks on a quest for some elixir that will
> > solve a problem in their normal world. The need for the quest becomes
> > apparent when their normal world is in some way thrown out of balance by
> > forces outside their control. The elixir is expected to restore that
> > balance. The hero is usually aware either consciously or unconsciously of
> > the imbalance but refuses to embark on the quest until forced by occurrence
> > of some horrendous inciting incident. Example: Luke Skywalker's auntie and
> > uncle are murdered by Imperial stormtroopers, causing him to take Obi-Wan
> > Kenobi's advice to become a Jedi Knight.
> >
> > So in your first meeting with the client, ask them:
> > You've been living in a normal comfortable world, what has happened lately
> > to throw it out of balance?
> > Have there been any horrendous incidents that have convinced you that
> > something has to be done?
> >
> > Another use case that speaks to a productive attitude in a requirements
> > engineer:
> > A story never rings true to an audience unless they know who the hero is,
> > they know what the hero wants, and they want the hero to have it.
> > Empathy, guys, empathy.
> >
> > Story theory gives you a form; it is not a formula. Understanding the
> > story pattern will not help you write a bestseller (or build the perfect
> > system); it will, however, help you identify what is missing in your
> > candidate bestseller. In contrast, a formula allows you to push the button
> > and generate code.
> >
> > So Michael I'll leave it with you. Hope this helps.
> >
> > Cheers
> > Les
> >
> > -----Original Message-----
> > From: systemsafety
> > [mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of
> > Michael Jackson
> > Sent: Thursday, July 16, 2020 1:15 AM
> > To: David Crocker; Olwen Morgan; Brent Kimberley; Dewi Daniels;
> > njt at tudorassoc.com; Peter Bernard Ladkin; Martyn Thomas
> > Cc: systemsafety at lists.techfak.uni-bielefeld.de
> > Subject: Re: [SystemSafety] Correctness by Construction
> >
> > Thank you all for your emails on this topic. I have quoted below what I
> > think are the most relevant parts. My own comments follow the double
dashed
> > line after the last quote.
> >
> > ----------------------
> >
> > > On 14 Jul 2020, at 20:20, David Crocker <dcrocker at eschertech.com>
wrote:
> > >
> > > On 14/07/2020 17:06, Brent Kimberley wrote:
> > >> >> how are the software developers to reason reliably about the physical
> > problem world where the important requirements are located and defined,
and
> > will---or will not---be satisfied?
> > >>
> > >> An automated world-view inquiry framework? Epistemology automation? ;)
> > > The way I suggested doing that when using Perfect Developer to develop
and
> > (to some extent) verify the software requirements is to include a model of
> > the external physical system in the requirements specification. It is then
> > possible to reason and construct proofs about the combined behaviour of
the
> > software + physical system, including the effects of external inputs. This
> > approach is in principle applicable to other tool-supported formal
> > specification languages, for example Z.
> > >
> > > Although this is conceptually simple, the problem lies in constructing a
> > model of the physical system and possible external inputs that is
> > sufficiently accurate and complete to make the proofs meaningful and
useful.
> > >
> > > Cheers
> > >
> > > David Crocker, Escher Technologies Ltd.
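A toy illustration of David's approach (my own sketch, not Perfect Developer
or Z; a real tool would prove the property rather than simulate it, and all
the numbers and names here are invented): the specification carries both the
control logic and a model of the physical tank, so a safety property of the
combined behaviour can be checked.

```python
# Hedged sketch: check a property of the combined software + physical-system
# behaviour. A prover would establish this for all traces; here we only
# exhaustively simulate a toy discrete model.

TANK_MAX = 10  # physical limit of the modelled tank (invented)

def controller(level):
    """The software under scrutiny: open the valve only when safely low."""
    return "OPEN" if level <= TANK_MAX - 2 else "CLOSED"

def plant(level, valve):
    """Model of the physical world: inflow of at most 1 unit/step when open."""
    inflow = 1 if valve == "OPEN" else 0
    return level + inflow

def check_combined_behaviour(steps=100):
    level = 0
    for _ in range(steps):
        valve = controller(level)
        level = plant(level, valve)
        assert level <= TANK_MAX, "safety property violated"
    return True
```

As David says, the hard part is not this machinery but whether `plant` is a
sufficiently accurate and complete model of the real tank for the check to
mean anything.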
> >
> > ----------------------
> >
> > On 15 Jul 2020, at 04:21, Brent Kimberley wrote:
> >
> > Hi Michael.
> > Perhaps I misinterpreted the question. I thought the question was: how can
> > software developers reason reliably about the physical world?
> >
> > My response was: write more code. ;)
> > More precisely, have the software develop inferences about sensor /
> > actuator / FRU / logic / bus failure, and train operators. ;)
> >
> > For example, write the requested logic plus an engine to progressively
> > refine the world-view.
> >
> > For example, for a fly-by-wire or environmental control system: there may
> > be certain assumptions about the physical sensors, actuators, structural
> > members, buses, DAQs, energy sources, clocks, and FRUs by which the logic
> > engages with the physical world. The epistemology engine would
> > continuously update its world-view assumptions and inform an ontology
> > layer - used by the requested software logic - to interact with the
> > physical world.
> >
> > Consider four engines delivering power (and data) to four propulsion
> > units via a transmission grid. The epistemology engine identifies events
> > such as faults, updating the ontology model, and the requested software
> > logic, using the ontology layer, could control the available propulsion
> > units - routing power (and data) around faults - in a way which ideally
> > preserves assets.
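Brent's layering might be sketched like this (the names and structure are my
invention, not his design): the epistemology engine revises a world-view
from fault evidence, and the routing logic consults that world-view rather
than the raw sensors.

```python
# Hedged sketch of an 'epistemology engine' feeding a control layer.
# Unit names and the event format are invented for the example.

class WorldView:
    def __init__(self, units):
        self.healthy = set(units)

    def observe(self, event):
        # Epistemology engine: revise beliefs about the world from evidence.
        if event.startswith("fault:"):
            self.healthy.discard(event.split(":", 1)[1])

def route_power(world_view, demand):
    # Control logic: distribute demand over units believed to be healthy,
    # routing power around anything the world-view has marked faulty.
    units = sorted(world_view.healthy)
    if not units:
        raise RuntimeError("no propulsion units available")
    return {u: demand / len(units) for u in units}
```

The design point is the separation: the control logic never interprets raw
fault events itself; it only ever sees the continuously updated world-view.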
> >
> > -----------------------
> >
> > On 15 Jul 2020, at 21.10, Olwen Morgan wrote:
> >
> > On 14/07/2020 20:20, David Crocker wrote:
> > <snip>
> > >
> > > include a model of the external physical system in the requirements
> > specification. It is then possible to reason and construct proofs about the
> > combined behaviour of the software + physical system, including the effects
> > of external inputs. This approach is in principle applicable to other
> > tool-supported formal specification languages, for example Z.
> > >
> > > Although this is conceptually simple, the problem lies in constructing a
> > model of the physical system and possible external inputs that is
> > sufficiently accurate and complete to make the proofs meaningful and
useful.
> > >
> > Just so. And in my experience an even greater problem lies in the
> > difficulty of persuading people that it is, in some cases, necessary. I
> > once walked out of, and blew the whistle on, a train safety project where
> > fellow engineers ignored me on this issue.
> >
> >
> > Olwen
> >
> > ----------------------
> > ----------------------
> >
> > Some comments:
> >
> > 1. One way to think about a bipartite cyber-physical system is to regard
> > the interface between the processor hardware and the physical problem
world
> > as providing a (bidirectional and hugely complex) API allowing the
processor
> > to monitor and control phenomena of the problem world. This API needs a
> > full, maximally reliable, model of the physical domains and phenomena of
the
> > problem world, showing the causal chains terminated at the actuators and
> > sensors and involving phenomena that are not directly connected to the
> > interface.
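Michael's point 1 might be sketched as follows (illustrative only, not his
formulation; the heater/temperature example and all names are invented): the
software reaches the problem world only through a bidirectional monitor/
control API, and the model behind that API must capture the causal chains
connecting actuators to the phenomena the sensors report.

```python
# Hedged sketch: the problem-world interface as a bidirectional API.
from abc import ABC, abstractmethod

class ProblemWorldAPI(ABC):
    @abstractmethod
    def monitor(self, phenomenon: str) -> float:
        """Read a sensed value; its meaning depends on the world model."""

    @abstractmethod
    def control(self, actuator: str, setting: float) -> None:
        """Drive an actuator; effects propagate along causal chains."""

class ModelledWorld(ProblemWorldAPI):
    """Toy world model: the heater actuator causally raises the sensed
    temperature, a phenomenon not directly present at the interface."""
    def __init__(self):
        self.temperature = 20.0

    def monitor(self, phenomenon):
        assert phenomenon == "temperature"
        return self.temperature

    def control(self, actuator, setting):
        assert actuator == "heater"
        self.temperature += setting  # modelled causal chain
```

The fidelity of `ModelledWorld` to the real plant is exactly the modelling
burden the surrounding discussion is about.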
> >
> > 2. As David says: "Although this is conceptually simple, the problem lies
> > in constructing a model of the physical system and possible external inputs
> > that is sufficiently accurate and complete to make the proofs meaningful and
> > useful." Yes. But this problem is exactly parallel to the problem of
> > describing the semantics of the programming language---both the source
code
> > and the binary code executed by the processor. It can't be relegated to an
> > optional extra: without a solution the software developer is programming a
> > machine---the physical problem world---for which there is no adequate
> > semantic definition.
> >
> > 3. For a realistic system, the problem world is a large and complex
> > system, and modelling demands some major help from structure. One
approach
> > to the problem is to structure the physical model according to the structure
> > of the desired system behaviour. (This is what I aim at in my current work.)
> > This approach makes sense because the physical world model supports the
> > behaviour exactly as the processor supports the software execution. Each
> > 'constituent behaviour' (my terminology) relies on certain properties of the
> > problem world expressed in an associated model.
> >
> > 4. Perhaps the structuring of the physical model according to the
> > structure of the system behaviour is also a characteristic of David's
> > suggestion: "... to include a model of the external physical system in the
> > requirements specification." That depends, of course, on the structure,
> > format, and content of the Requirements Specification. I am keen to hear
> > more about Requirements from anyone here who deals with them either as
a
> > consumer or a producer.
> >
> > 5. A fundamental difficulty for formal models of the physical world is
> > that it is not itself a formal system. Brian Cantwell Smith explained the
> > difficulty in his 1986 paper "The Limits of Correctness", pointing out the
> > absence of a theory of informal-to-formal modelling. I think that a
> > discipline, rather than a theory, is needed (and I aim at this too in my
> > current work).
> >
> > -- Michael
> >
> > www.theworldandthemachine.com
> >
> >
> > _______________________________________________
> > The System Safety Mailing List
> > systemsafety at TechFak.Uni-Bielefeld.DE
> > Manage your subscription:
> > https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety
> >
> >
> >
> >
--
Les Chambers
les at chambers.com.au
+61 (0)412 648 992