[SystemSafety] Putting Agile into a longer perspective
Olwen Morgan
olwen at phaedsys.com
Tue Oct 22 13:50:02 CEST 2019
Steve Tockey wrote:
>
> Unfortunately, building just the first one is really expensive. Who
> ever really throws it away and builds it again?
>
Agreed - but why is it expensive? I said I had some ideas on this, so
here goes:
One thing that has often struck me about software development is how
many different formalisms we use within it. And when we use different
formalisms, we end up translating between them. As it happens, I do
occasionally do professional translations (FR-EN and DE-EN). Translation
is the most difficult thing in the humanities. Mathematics is the most
difficult thing in the sciences. In software engineering, if we want to
get the translation right, we have to accomplish the most difficult
thing in the humanities using the most difficult thing in the sciences.
Small wonder that it causes so much brain-ache.
Therefore, I have long thought that software engineering would be a lot
easier, and therefore cheaper, if we used but one formalism for
specification, design and implementation. It would probably need two
forms, one graphical and one textual, but I cannot see any essential
impediment to this. Obviously the textual form would be compilable (but
more importantly also analysable), so we are thinking about something
with the properties of a very high-level programming language - of a
level comparable with that of, say, Z alloyed with CCS. With modern
languages, I can't see this being much of a problem, since once you have
sets of tuples (properly defined - not à la SQL), you have all the
expressive power you need in that respect. Also, proof annotations
should be an integral part of such a language.
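To illustrate the point about sets of tuples, here is a small sketch (in Python, purely for illustration - not part of the original proposal): a relation defined as a genuine set of tuples has set semantics, so duplicates collapse and projection and join come out as plain comprehensions. The relation names are invented for the example.

```python
# Sketch: relations as proper sets of tuples (set semantics, not SQL's bags).
# A relation is a set of tuples; duplicate rows collapse automatically.

employees = {("alice", "dev"), ("bob", "qa"), ("alice", "dev")}  # duplicate vanishes

# Projection: keep only the role column - still a set, so no duplicate rows.
roles = {(role,) for (_name, role) in employees}

# Join-like composition on the shared 'role' field.
rooms = {("dev", 101), ("qa", 202)}
placement = {(name, role, room)
             for (name, role) in employees
             for (r, room) in rooms
             if r == role}

print(sorted(placement))
```

With set semantics the duplicate `("alice", "dev")` row simply does not exist, which is the expressive cleanliness the text is after.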
Most importantly, however, such a language should support a concept that
I call incremental binding, whereby not all the attributes of processes
and objects need be declared all at once. In this way, following, say, a
method such as SSADM (obviously heavily cut down - its technical ideas
are fine but the overblown documentation system is a pain) we could
produce specification and design documents that have exact, analysable
textual equivalents at all stages of development. When all the detail is
there, the textual artefacts would be executable.
To do this, you have to abandon traditional PL design, and sadly also
the concept of refinement. I'll give an example:
Consider the system context diagram. It names the system and its inputs
and outputs. The textual form of this could be something like:
*example: system fubar : inputs (foo1, foo2), outputs (foo3);*
Later on in the development more detail could be provided by binding the
input and output names to particular file types. e.g.:
*foo1 : gpio;*
*foo2 : network;*
*foo3 : file;*
This would establish that foo1 comes from a gpio interface, foo2 from a
network interface, and foo3 is a file. Then later (temporally but not
spatially in the code) one could write:
*foo1 : gpio sequence of integer;*
*foo2 : network sequence of packet;*
*foo3 : file sequence of record;*
(If you're beginning to think that here I have in mind a data-logging
application, you're right.)
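The staged declarations above can be mimicked in an ordinary language, as a rough sketch of what incremental binding might feel like. Everything here - the `System` and `Flow` classes, the `bind` method - is a hypothetical illustration, not an existing tool or the notation actually proposed.

```python
# Hypothetical sketch of incremental binding: a flow's attributes
# (interface kind, element type) are declared in separate stages,
# not all at once. Names are invented for illustration only.

class Flow:
    def __init__(self, name):
        self.name = name
        self.interface = None   # bound later, e.g. "gpio"
        self.elem_type = None   # bound later still, e.g. "integer"

class System:
    def __init__(self, name, inputs, outputs):
        self.name = name
        self.flows = {n: Flow(n) for n in inputs + outputs}

    def bind(self, flow, **attrs):
        """Attach further attribute detail; each attribute binds exactly once."""
        f = self.flows[flow]
        for attr, value in attrs.items():
            if getattr(f, attr) is not None:
                raise ValueError(f"{flow}.{attr} already bound")
            setattr(f, attr, value)

# Stage 1: the context diagram - just names.
fubar = System("fubar", inputs=["foo1", "foo2"], outputs=["foo3"])
# Stage 2: bind each name to an interface kind.
fubar.bind("foo1", interface="gpio")
fubar.bind("foo2", interface="network")
fubar.bind("foo3", interface="file")
# Stage 3: bind element types, later in time but not "later" in the code.
fubar.bind("foo1", elem_type="integer")
fubar.bind("foo2", elem_type="packet")
fubar.bind("foo3", elem_type="record")
```

Note that each stage adds attribute detail without decomposing anything already declared - which is the distinction drawn below between incremental binding and proof-requiring refinement.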
In this way, the attributes of program objects would be defined
incrementally by successive addition of attribute detail - but not by
proof-requiring decomposition into substructures. During development
balancing checks would determine where detail is missing and/or
inconsistent across parts of the specification/design. This also implies
that the analysis tools can determine, from the textual form of the
evolving spec/design/program, which kinds of analyses can and cannot be
performed on it. My idea is that you perform analyses every time you
change the artefact and revert to the previous version if something
is wrong (continuous-integration/DevOps tooling would help here). The language
would require that relevant proof annotations be present at every stage
of development - which would support early detection of errors by the
soundest available means.
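A balancing check of the kind described could be as simple as scanning the evolving spec for unbound attributes and reporting the gaps; the tooling then knows which analyses are not yet possible. The sketch below assumes a flat dictionary layout for the spec, which is an invention for illustration.

```python
# Sketch of a balancing check: scan an evolving spec for missing detail
# and report which flows are not yet fully bound. The spec layout is an
# assumption for illustration, not a real notation.

REQUIRED = ("interface", "elem_type")

def balance_check(spec):
    """Return {flow: [missing attributes]} for every incompletely bound flow."""
    gaps = {}
    for flow, attrs in spec.items():
        missing = [a for a in REQUIRED if a not in attrs]
        if missing:
            gaps[flow] = missing
    return gaps

spec = {
    "foo1": {"interface": "gpio", "elem_type": "integer"},
    "foo2": {"interface": "network"},   # elem_type not yet bound
    "foo3": {},                         # nothing bound yet
}
print(balance_check(spec))  # {'foo2': ['elem_type'], 'foo3': ['interface', 'elem_type']}
```

Run after every change, a check like this is what would let a CI pipeline reject an edit that leaves the spec unbalanced.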
If you do programming in this way, you can have a textual form for every
graphical (or tabular) specification and design artefact, that is
analysable as soon as it is created. OK, only when you've got all, or at
least most of, the detail do you have anything you can execute - but on
the way to the executable artefact, you have stayed entirely within /a
single/ formalism - serving specification, design and coding - and you
use a /coordinated/ set of analytical tools from one end of the process
to the other.
Such an end-to-end language, supported by a consistent set of tools,
would, I believe, reduce process costs by an order of magnitude (big
claim but no evidence - but that's how all progress begins).
Want to start throwing stones at the idea? ... Feel free ... especially
if you're Derek Jones ... :-))
Olwen
*PS: This idea is not by any means original.* I can trace it back at
least as far as Kit Grindley's /Systematics: A New Approach to Systems
Analysis/, Petrocelli Books, 1978, ISBN-10:0894330209,
ISBN-13:978-0894330209 - IMO a seminal book decades ahead of its time.
If he'd written it about 15 years later, by which time formal methods
had a lot more traction, its main weakness - lack of formal semantics -
might not have caused it to be so unjustly ignored.