[SystemSafety] The Fosse of Babel
Steve Tockey
Steve.Tockey at construx.com
Tue Sep 25 17:53:40 CEST 2018
Olwen,
“I've had a brief look at it but am pressed right now, so will take a while to look more fully.”
Understood. My reply isn’t exactly coming back to you at light speed either . . .
“I personally wouldn't touch UML with a double-length, disinfected barge-pole. It, and most other manifestations of OO, have always seemed to me like an unnatural and byzantine solution looking for a problem. My beef with it is that it is unnecessarily complex for what it achieves”
Well, the saying “A camel is a horse designed by a committee” would be a very apt description of UML as a whole. But I don’t use the breadth of UML; it isn’t necessary. Only a very limited subset is sufficient.
“its formal semantics had to be retro-fitted (where they are used at all)”
Yes. UML according to the as-published specs has semantic holes in it large enough to fly a 747 through. Appendix L of the book is an attempt to patch as many of those holes as possible. Without that patching, I agree, UML is fairly useless.
“Besides, I think it is a fundamental mistake to have the notion of "object" as an organising principle for expressing requirements and designs. Abstract data types are fine.”
In the policy and process semantic model, the “classes” really are abstract data types more than anything else, because they are implementation technology-independent concepts. Just like ADTs.
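For instance (a toy example of my own, not one taken from the book), a semantic-model class is little more than a signature plus rules:

    BankAccount
      attributes:  balance : Money
      operations:  deposit  : BankAccount × Money → BankAccount
                   withdraw : BankAccount × Money → BankAccount
      invariant:   status = overdrawn  ⇔  balance < 0

Nothing there says, or cares, whether it ends up as a C++ class, a COBOL record, or a row in a database table.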
“It's when you get into multiple inheritance and polymorphism that, in my experience, the whole thing becomes unstuck.”
Agreed. Both are not allowed in a semantic model. For exactly the reason you cite: becoming unstuck.
“The most opaque and muddled designs I've ever seen have been formulated in UML.”
UML is merely the language. The surface syntax, if you will. It was chosen because it is fairly widely recognized. But just as Shakespeare could write literature in English, some duffer can write drivel in it. That’s not necessarily the fault of the language; it’s more the fault of the author.
“Why go to OO when most systems can be designed perfectly well and more simply as action-systems?”
You admit to avoiding larger systems. I don’t have the luxury of sticking with small systems. The concept of Class (well, ADT, really) is a tremendous help in scaling up to the industrial-sized systems I have to work with. You simply cannot be successful with Boeing P-8 Mission Systems using the Structured Stuff; it will collapse under its own weight. And that’s the space I have to deal with. If you prefer to stay in the realm of systems where Structured Stuff is appropriate, that’s your choice. Not everybody has that luxury, and they need appropriate help.
“My preference is for something that is leaner and more mathematical from the start. I find that going from a cut-down SSADM to Coloured Petri-Nets (CPNs) works fine - the translation of artefacts can be quite direct and the CPN formalism is a lot closer to the underlying mathematics to start with than UML.”
Again, a matter of style? My preference is for a Jeannette Wing style of approach (http://www.computing.dcu.ie/~hamilton/teaching/CA648/papers/specifier_intro_FM.pdf) where the underlying formal mathematics is present but hidden behind convenient surface syntax such as state machines and (logical) data models. One can translate a semantic model into the underlying “upside-down A’s and backwards E’s” if desired. For larger systems the formal-methods notations become unwieldy; UML is simply being used as a shorthand.
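To illustrate with a toy example of my own (not one from the book): a policy such as “every overdrawn Account must have a pending Notification” lives in the semantic model as a constraint on the class model, and anyone who wants the upside-down A’s can translate it directly:

    ∀ a ∈ Account · balance(a) < 0  ⇒  ∃ n ∈ Notification · account(n) = a ∧ status(n) = pending

The state machines and data models say exactly the same thing, only in a surface syntax that domain experts will actually read.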
“In my experience, the larger a project is, the more likely it is that you'll come across a project in which key initial decisions have been taken by some ignoramus who commits the project to a fundamentally mistaken approach from the outset.”
Yes, what I call “Resume Driven Development” because said ignoramus is only interested in padding their CV. That’s why my first book was on Engineering Economy: to train those bone-heads to consider the economic perspective of their technical decisions.
“And this is not to mention the lumpenprogrammers you may end up having to work with.”
A favorite paper of mine: http://pyxisinc.com/NNPP_Article.pdf (paragraphs 1 and 4 in particular). Interestingly, however, when we use the approach in the book on large projects (up to 350 developers), we don’t see any of Schulmeyer’s NNPPs. A good, solid process does help make a difference.
“If someone asks me to use anything that's ever referred to as a "methodology", then, with a few noble exceptions, I'll tend to consider him/her incompetent until proven otherwise - especially if all his/her technical decisions are framed in terms of the said methodology rather than in mathematically-based methods.”
That’s why Appendix L is in the book: to provide the translation from the convenient surface syntax of UML into the underlying formal mathematics, à la Jeannette Wing.
“IMO "methodologies" are for those whose command of mathematics is so p!ss-poor that they shouldn't be let anywhere near critical systems engineering in the first place.”
Those same people should not even be allowed near non-critical systems either. I can rant incessantly about the “highly paid amateurs” in Redmond, WA.
“Finally, and you may think paradoxically, I don't put this out as an attack on your approach. I agree with such of your principles as I've seen in what I've looked at in your manuscript. It's just that my professional experiences with projects that have used UML/OO has almost always been depressing. I now avoid such projects as much for my mental health as for anything else - and that is not hyperbole - I mean it literally.”
Agreed. But it’s partly the Shakespeare vs. duffers issue. So if/when you do get the time to look into the book a bit more, I do sincerely hope you will be pleasantly surprised by what you find.
Best,
— steve
From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> on behalf of Olwen Morgan <olwen.morgan at btinternet.com>
Date: Sunday, September 23, 2018 at 1:28 PM
To: "systemsafety at lists.techfak.uni-bielefeld.de<mailto:systemsafety at lists.techfak.uni-bielefeld.de>" <systemsafety at lists.techfak.uni-bielefeld.de<mailto:systemsafety at lists.techfak.uni-bielefeld.de>>
Subject: Re: [SystemSafety] The Fosse of Babel
Steve,
I've had a brief look at it but am pressed right now, so will take a while to look more fully.
I agree with the principles ...
... but I personally wouldn't touch UML with a double-length, disinfected barge-pole. It, and most other manifestations of OO, have always seemed to me like an unnatural and byzantine solution looking for a problem. My beef with it is that it is unnecessarily complex for what it achieves, its formal semantics had to be retro-fitted (where they are used at all), and anything that I might try to achieve in UML, I can achieve more concisely and more clearly in structured methods (my favourite being a radically cut-down subset of SSADM with direct translation from semi-formal artefacts to a soundly-based modelling formalism).
Besides, I think it is a fundamental mistake to have the notion of "object" as an organising principle for expressing requirements and designs. Abstract data types are fine. It's when you get into multiple inheritance and polymorphism that, in my experience, the whole thing becomes unstuck. The most opaque and muddled designs I've ever seen have been formulated in UML. Moreover, to my mind at least, the notion of "process" is a more useful abstraction than that of "object". Why go to OO when most systems can be designed perfectly well and more simply as action-systems?
My preference is for something that is leaner and more mathematical from the start. I find that going from a cut-down SSADM to Coloured Petri-Nets (CPNs) works fine - the translation of artefacts can be quite direct and the CPN formalism is a lot closer to the underlying mathematics to start with than UML.
There is, I admit, a bias here. For the latter quarter of my professional life, I have as far as possible avoided getting involved in large projects unless they are being done by exceptionally capable companies. In my experience, the larger a project is, the more likely it is that you'll come across a project in which key initial decisions have been taken by some ignoramus who commits the project to a fundamentally mistaken approach from the outset. Also, large projects on large machines often use COTS software with poorly-documented, cr@p APIs that really p!ss me off to use. And this is not to mention the lumpenprogrammers you may end up having to work with.
Focussing on smaller projects, latterly mostly bare-metal embedded stuff, and having significant input at the front end of the project, it's easier to steer people clear of daft decisions from the outset. Also, when programming for bare metal, I don't have to shoe-horn software into using some badly-designed API. I write what I need myself.
I've been familiar with formal methods for over 40 years. (In fact, I was a founder member of the BCS-FACS Specialist Group). Perhaps hubristically, I consider myself quite able to pick methods and tools that will best serve my purpose. If someone asks me to use anything that's ever referred to as a "methodology", then, with a few noble exceptions, I'll tend to consider him/her incompetent until proven otherwise - especially if all his/her technical decisions are framed in terms of the said methodology rather than in mathematically-based methods.
I do not use "methodologies". I use methods and tools with a pretty direct basis in mathematics. IMO "methodologies" are for those whose command of mathematics is so p!ss-poor that they shouldn't be let anywhere near critical systems engineering in the first place.
Finally, and you may think paradoxically, I don't put this out as an attack on your approach. I agree with such of your principles as I've seen in what I've looked at in your manuscript. It's just that my professional experiences with projects that have used UML/OO has almost always been depressing. I now avoid such projects as much for my mental health as for anything else - and that is not hyperbole - I mean it literally.
End of compact and bijou rantette ;-))
O
On 22/09/18 11:54, Steve Tockey wrote:
Olwen,
“What software engineering needs more than anything else is not new research but better application of what we already know”
Yes. Agreed.
“ . . . it's quite clear that we're NOT EVEN TRYING to design EELs - when the state of knowledge in computer science is almost certainly capable of giving an EEL suitable semantics and making compilers for it”
Maybe most people are not even trying, but some are. I think a lot of progress has already been made in that area.
“I posit that, if we ever get a firm grip on software quality, it will be because we have developed what I call, "end-to-end" languages - call them EELs for short. An EEL is a language into which we translate natural language requirements such that thereafter, all tasks in the software lifecycle can be supported directly by the EEL.
That means that an EEL must support:
1. Statements of requirements
2. Description of designs
3. Low-level processing
4. Unit and integration testing
5. System testing
6. Proof of correctness conditions at all life cycle stages
7. Identification of configuration items (COBOL's Identification Division was prescient here - three cheers for Auntie Grace)
... and generally any other task in the life cycle related to product quality.”
While I don’t claim to have fully solved the EEL problem, I think I have a pretty good solution that has been used on a number of real-world, significantly sized projects. It is all written up in a manuscript that’s under contract with IEEE Computer Society Press (aka Wiley) titled “How to Engineer Software”. Since it’s not published yet, I will continue to make the manuscript available “for review purposes”. It is available on Dropbox:
https://www.dropbox.com/sh/jjjwmr3cpt4wgfc/AACSFjYD2p3PvcFzwFlb3S9Qa?dl=0
There is a companion demonstration model editor and model compiler available at
https://www.dropbox.com/sh/7vbg4dgzf1ipqua/AABYnykze04x4VqEuzIBvcxba?dl=0
I think these are the key features relevant to your notion of an EEL:
*) It all centers around the notion of a “Semantic Model” that captures the precise policy and process semantics that are the reason for building the software in the first place. Software exists to automate enforcement of some set of “policies” and carry out some set of “processes”. For the developers to automate the right policies and processes, those developers need to understand those semantics at least as well as (if not better than) the domain experts do. Part II of the book, chapters 7 through 12, explains how to build, evaluate, and maintain these semantic models. One of the sections in Chapter 12 talks about how to derive acceptance test cases from the semantic model.
*) The semantics of the semantic models (the “meta-model”) is presented in Appendix L. It shows how the semantic modeling language is grounded in fundamental discrete mathematics and computer science: Set Theory, Finite Automata Theory, Measurement Theory, and so on (a flavor of that grounding is sketched just after this list). It is almost certainly not at the level of formalism that some esteemed members of this group would approve of, but it is at least a pretty solid start.
*) Part III, chapters 13 through 22, shows how a semantic model can be translated into design and code. Chapters 14 through 17 discuss translation by hand. Chapters 20 and 21 discuss “open model compilation”: automated translation in which derivation of executable code is controlled by an open, rule-based compiler. If you don’t like the generated source code, you don’t edit the generated code; you edit the production rules and have the compiler regenerate code that you do like (a toy sketch of the idea also follows this list).
*) Chapter 18 discusses a number of formalisms in software: “Programming by Intent”, proper use of assertions, proofs of correctness, . . .
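To give a flavor of the grounding in Appendix L (what follows is the standard textbook formulation, not a quotation from the appendix): underneath the UML surface syntax, each class’s state model is an ordinary finite automaton

    M = (S, E, δ, s0),   δ : S × E → S

where S is the set of states, E the set of events, s0 ∈ S the initial state, and δ the transition function; guards and actions are the usual extensions, and none of it depends on any implementation technology.

And here is a deliberately tiny sketch of the two automation ideas above, written in plain Python rather than in the book’s notation or the demonstration model compiler (the class, events, and production rule are all invented for illustration). It enumerates transition-coverage acceptance tests straight from a semantic model, and derives code from the same model via a rule that you would edit instead of ever editing the generated code:

    # A toy "semantic model": one class with a trivial state machine.
    # Purely illustrative; this is not the book's notation or tooling.
    semantic_model = {
        "class": "BankAccount",
        "attributes": ["balance"],
        "states": ["open", "overdrawn", "closed"],
        "initial": "open",
        "transitions": [  # (current state, event, next state)
            ("open",      "withdraw_past_zero",  "overdrawn"),
            ("overdrawn", "deposit_to_positive", "open"),
            ("open",      "close",               "closed"),
        ],
    }

    def acceptance_tests(model):
        """Derive one acceptance test case per transition (transition coverage)."""
        for (pre, event, post) in model["transitions"]:
            yield (f"Given a {model['class']} in state '{pre}', "
                   f"when '{event}' occurs, it shall end in state '{post}'.")

    def class_skeleton_rule(model):
        """A 'production rule': derive a code skeleton from the model.
        Don't like the output? Edit this rule, not the generated code."""
        lines = [f"class {model['class']}:",
                 "    def __init__(self):",
                 f"        self.state = '{model['initial']}'"]
        lines += [f"        self.{attr} = None" for attr in model["attributes"]]
        for (pre, event, post) in model["transitions"]:
            lines += [f"    def {event}(self):",
                      f"        assert self.state == '{pre}'",
                      f"        self.state = '{post}'"]
        return "\n".join(lines)

    if __name__ == "__main__":
        for case in acceptance_tests(semantic_model):
            print(case)
        print(class_skeleton_rule(semantic_model))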
If you read the manuscript, I would appreciate hearing your impressions of it. You might want to start with the outline (“aOutline.pdf”) to get a sense of the overall structure of the book and then move into the detailed chapters.
Best,
— steve
From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> on behalf of Olwen Morgan <olwen.morgan at btinternet.com>
Date: Friday, September 21, 2018 at 7:25 AM
To: "systemsafety at lists.techfak.uni-bielefeld.de<mailto:systemsafety at lists.techfak.uni-bielefeld.de>" <systemsafety at lists.techfak.uni-bielefeld.de<mailto:systemsafety at lists.techfak.uni-bielefeld.de>>
Subject: [SystemSafety] The Fosse of Babel
" ... let us go down, and there confound their language, that they may not understand one another's speech.So the Lord scattered them abroad from thence upon the face of all the earth ..."
Genesis 11:7-8
(No, by quoting part of the Tower of Babel story I've not suddenly gone all god-bothering - I'm more of a Buddhist than anything else). But this serves to introduce an idea:
Ask around and I expect you'd find a reasonable consensus that mathematics is the hardest thing in the sciences and translation is the hardest thing in the humanities. (As an occasional translator, FR-EN and DE-EN, I can certainly vouch for this as regards translation.)
Now, by the standards of hard mathematics, the kind of mathematics that you need to do software engineering rigorously isn't particularly hard. On the other hand, something that is done in every software process I've seen is translation. We start with requirements in natural language, we do design in another formalism, programming in another, and testing using yet further formalisms. Much of the use of mathematics in software engineering is to provide a lingua franca that allows us to reason about whether a description of something in one kind of formalism is actually consistent with its description in a different formalism.
This makes pertinent the question of WHY the software engineering process is one of successive translations between different formalisms. Are we not asking for trouble if we have to use the most difficult thing in the sciences to keep us afloat because we're doing the most difficult thing in the humanities?
I posit that, if we ever get a firm grip on software quality, it will be because we have developed what I call, "end-to-end" languages - call them EELs for short. An EEL is a language into which we translate natural language requirements such that thereafter, all tasks in the software lifecycle can be supported directly by the EEL.
That means that an EEL must support:
1. Statements of requirements
2. Description of designs
3. Low-level processing
4. Unit and integration testing
5. System testing
6. Proof of correctness conditions at all life cycle stages
7. Identification of configuration items (COBOL's Identification Division was prescient here - three cheers for Auntie Grace)
... and generally any other task in the life cycle related to product quality.
An obvious thing to do is to start designing languages that include annotations for proofs, testing, configuration control, etc. Proof annotations are well established in SPARK Ada and within the Frama-C project. It would be no great task to include proof annotations in a language itself. Proof annotations could also support automatic test generation.
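As a minimal sketch of that last point, in ordinary Python rather than SPARK or ACSL (the decorator and the generator below are invented purely for illustration): the same pre- and post-condition annotations that a prover would discharge can also filter random inputs into test cases and serve as the test oracle.

    import random
    from functools import wraps

    def contract(requires=lambda *a: True, ensures=lambda r, *a: True):
        """Attach pre/post-conditions and check them at every call.
        This only mimics, in ordinary Python, what SPARK/ACSL annotations state."""
        def decorate(f):
            @wraps(f)
            def wrapper(*args):
                assert requires(*args), "precondition violated"
                result = f(*args)
                assert ensures(result, *args), "postcondition violated"
                return result
            wrapper.requires = requires
            return wrapper
        return decorate

    @contract(requires=lambda x: x >= 0,
              ensures=lambda r, x: r * r <= x < (r + 1) * (r + 1))
    def isqrt(x):
        """Integer square root by simple search."""
        r = 0
        while (r + 1) * (r + 1) <= x:
            r += 1
        return r

    def generate_tests(f, trials=100):
        """Use the precondition annotation to filter random inputs into tests."""
        cases = [x for x in (random.randint(-50, 50) for _ in range(trials))
                 if f.requires(x)]
        for x in cases:
            f(x)  # the postcondition assert acts as the test oracle
        return len(cases)

    if __name__ == "__main__":
        print(generate_tests(isqrt), "generated test cases passed")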
Yet, looking at current programming languages, it's quite clear that we're NOT EVEN TRYING to design EELs - when the state of knowledge in computer science is almost certainly capable of giving an EEL suitable semantics and making compilers for it. What software engineering needs more than anything else is not new research but better application of what we already know.
O
--
Olwen Morgan CITP, MBCS olwen.morgan at btinternet.com +44 (0) 7854 899667 Carmarthenshire, Wales, UK