<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body style="overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;">
hi Steve,
<div><br>
</div>
<div> Sorry for the delay; I was out yesterday.<br>
<div><br>
</div>
<div> My view is relatively narrow; my “specialty”, by accident of being a warm body in the right place and the right decade for schooling, is embedded software and systems integration. I worked on the software that was loaded before the application software and that accessed the low-level hardware. I worked to prove the viability of the hardware at the black-box-to-black-box level (as part of a team). If we did our work correctly, hardware engineers could conduct their own tests of the black box using our software. The funding typically came out of the testing-phase WBS but was applied early (to prove the toolset) and then later during hardware integration (to prove the hardware interfaces and end-to-end paths), but before the application (where all the application processes apply).</div>
<div><br>
</div>
<div> If the hardware doesn’t work, or if the APIs between black boxes do not match or are missing features, the real application software will not work until black-box changes are made, no matter how well the higher-level applications have been planned and processed.</div>
<div><br>
</div>
<div> What I found in practice was hardware faults that were never going to be resolved, or that were resolved in closed-door meetings I was not allowed to attend, because the hardware had been signed off as working, or because the politics of billion-dollar procurement and the need to keep the money flowing ruled out an open relook at the hardware design.</div>
<div><br>
</div>
<div>I don’t see where the processes below matched the particular instances of work I participated in. </div>
<div><br>
</div>
<div>I’m realizing now (due to this discussion) that my/our work was/is invisible, and that the need for this work “officially” doesn’t exist, or exists only as generic “testing”.</div>
<div><br>
</div>
<div>bob s</div>
<div>
<div><br>
<blockquote type="cite">
<div>On Feb 24, 2025, at 6:35 PM, Steve Tockey <steve.tockey@construx.com> wrote:</div>
<br class="Apple-interchange-newline">
<div>
<div style="overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;">
Bob,
<div>I’ll propose some of the reasons. These include, but are not limited to:</div>
<div><br>
</div>
<div>*) The process we use avoids—to the maximum extent possible—requirements written in a natural language. As much as possible, (functional) requirements are specified using a precise, concise, and unambiguous specification (i.e., modeling) language. Vague, ambiguous, incomplete requirements are the #1 cause of difficulty on most software projects; by specifying this way, we can essentially sidestep that problem.</div>
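<div><br>
</div>
<div>To make the contrast concrete, here is a minimal sketch of the idea, in plain Python rather than the actual specification language, with all names and thresholds invented for illustration. A vague requirement like “the system shall respond quickly” becomes a total, checkable definition that leaves nothing to interpret:</div>
<div><br>
</div>
<pre>
# Illustrative sketch only: a requirement as a precise, checkable
# definition instead of ambiguous prose. Names and numbers are
# hypothetical; this is not the actual specification language.

from dataclasses import dataclass

@dataclass(frozen=True)
class Request:
    priority: int       # 1 (highest) through 5 (lowest)
    payload_bytes: int

# Precise form of "respond quickly": a total function of priority,
# so every request has exactly one, unambiguous deadline.
MAX_RESPONSE_MS = {1: 50, 2: 100, 3: 250, 4: 500, 5: 1000}

def deadline_ms(req: Request) -> int:
    return MAX_RESPONSE_MS[req.priority]
</pre>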
<div><br>
</div>
<div>*) The relationship between the specifications and the code is clear, direct, and obvious (although not necessarily identical in all cases). The point is that these specifications are intentionally value-added, and 100% not "check-a-box-on-some-contract busy-work documentation for documentation’s sake". It is not a case of “self-documenting code”; instead, it is literally a case of “self-coding documentation”.</div>
<div><br>
</div>
<div>*) The specification language has been intentionally set up to assist in controlling complexity, the #3 cause of difficulty on most software projects.</div>
<div><br>
</div>
<div>*) The specification language has been intentionally set up to allow partitioning of very large systems into small, manageable, bite-sized chunks that can be worked on by one or two individuals in as much isolation as possible. Specifically, interfaces
between the chunks are identified early and controlled not only syntactically but also semantically (Design by Contract). This is another dimension of controlling complexity, the #3 challenge.</div>
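<div><br>
</div>
<div>For anyone unfamiliar with Design by Contract, here is a minimal sketch of what controlling an interface semantically as well as syntactically might look like, using plain Python asserts rather than the actual specification language; the function and its units are invented for illustration:</div>
<div><br>
</div>
<pre>
# Hypothetical sketch of Design by Contract at a chunk boundary.
# The type hints fix the interface syntactically; the pre- and
# postconditions fix what it means semantically.

def reserve_fuel(tank_level_kg: float, requested_kg: float) -> float:
    """Reserve fuel from a tank and return the new tank level."""
    # Preconditions: the caller's side of the contract.
    assert requested_kg >= 0.0, "request must be non-negative"
    assert tank_level_kg - requested_kg >= 0.0, \
        "cannot reserve more than is in the tank"

    new_level = tank_level_kg - requested_kg

    # Postcondition: the supplier's side of the contract.
    assert new_level >= 0.0, "the tank never goes negative"
    return new_level
</pre>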
<div><br>
</div>
<div>*) We have a base of good historical data that predicts, with reasonable accuracy early on, the effort needed to develop the software based on counts of key elements in the specification content. Note that this process has been used in a wide variety of industry verticals with the estimate and actual outcomes being very consistent regardless of specific vertical. This helps us avoid part of the #2 cause of difficulty, inadequate project management. Specifically, we can avoid overly optimistic estimation.</div>
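<div><br>
</div>
<div>The calibration data itself is not public, so, purely to illustrate the shape of such an estimate (every count and coefficient below is invented):</div>
<div><br>
</div>
<pre>
# Illustrative-only sketch of estimation from specification content:
# count key elements in the specification, multiply by historically
# calibrated effort-per-element factors, and sum. The factors here
# are invented, not the real calibration.

SPEC_ELEMENT_COUNTS = {"classes": 40, "use_cases": 65, "interfaces": 12}
HOURS_PER_ELEMENT   = {"classes": 90.0, "use_cases": 60.0, "interfaces": 150.0}

estimated_hours = sum(
    count * HOURS_PER_ELEMENT[kind]
    for kind, count in SPEC_ELEMENT_COUNTS.items()
)
print(f"Estimated effort: {estimated_hours:.0f} staff-hours")  # 9300
</pre>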
<div><br>
</div>
<div>*) Work-products in this process are defined in terms of a set of specification templates and accompanying checklists (aka “Definitions of Done”). There are four separate but closely related specifications for each chunk of deliverable code. We make extensive use of peer review (inspection, to be precise) of the specifications to identify and resolve errors and incompleteness in the specifications as quickly and cheaply as possible. Two of the specifications get two separate partial peer reviews, meaning that between starting work on a chunk and having code available for integration there are up to 7 separate peer review events. Rather than trying to test quality into the code after it has been written, the goal is to (1) avoid as many potential defects as possible and (2) identify and remove the unavoidable defects as soon as possible. An excellent measure of a software process is its “Rework Percentage” (“R%”): how much of total project effort is spent fixing things that were broken earlier. Our experience is that the average project spends around 60% of its technical-work capacity on rework to fix defects. This process averages under 10% rework.</div>
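<div><br>
</div>
<div>Spelling out the R% arithmetic on invented project figures:</div>
<div><br>
</div>
<pre>
# Rework Percentage (R%): the share of total technical effort spent
# fixing things that were broken earlier. The hours are invented,
# purely to show the arithmetic.

def rework_percentage(rework_hours: float, total_hours: float) -> float:
    return 100.0 * rework_hours / total_hours

total_technical_hours = 10_000.0
print(rework_percentage(6_000.0, total_technical_hours))  # 60.0, the typical project
print(rework_percentage(900.0, total_technical_hours))    # 9.0, under 10% rework
</pre>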
<div><br>
</div>
<div>*) As a former Software Engineering professor of mine once said, “If it’s not obvious how to test it, can your specification be any good?” These specifications are intentionally structured to make effective test cases obvious. Effective test cases literally fall out from the specifications—another dimension of proof of the value-added nature of the specifications.</div>
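<div><br>
</div>
<div>Continuing the hypothetical contract sketched above, the test cases fall out one per contract clause and boundary (pytest is assumed to be installed, and the import path is invented):</div>
<div><br>
</div>
<pre>
# Test cases derived mechanically from the reserve_fuel contract
# sketched earlier: one per precondition, postcondition, and boundary.

import pytest  # assumes pytest is installed

from fuel import reserve_fuel  # hypothetical module holding the sketch

def test_normal_reservation_reduces_level():
    assert reserve_fuel(100.0, 30.0) == 70.0

def test_boundary_reserving_the_entire_tank_is_allowed():
    assert reserve_fuel(50.0, 50.0) == 0.0

def test_precondition_negative_request_is_rejected():
    with pytest.raises(AssertionError):
        reserve_fuel(100.0, -1.0)

def test_precondition_over_reservation_is_rejected():
    with pytest.raises(AssertionError):
        reserve_fuel(10.0, 10.5)
</pre>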
<div><br>
</div>
<div><br>
</div>
<div>There are other reasons that have to do with other elements of avoiding inadequate project management, the #2 cause of difficulty on most software projects. In a nutshell, this process is very intentionally an engineering process that avoids the vast majority
of the inherent chaos that exists in most software processes. It’s allowing chaos that’s the real problem. And the bigger the project, the exponentially greater the chaos tends to be. Control the chaos at all levels and things scale up quite nicely.</div>
<div><br>
</div>
<div><br>
</div>
<div>Cheers,</div>
<div><br>
</div>
<div>— steve</div>
<div><br>
</div>
<div><br>
</div>
<div><br>
</div>
<div><br id="lineBreakAtBeginningOfMessage">
<div><br>
<div>On Feb 24, 2025, at 12:31 PM, Robert P Schaefer <rps@mit.edu> wrote:</div>
<br class="Apple-interchange-newline">
<div>
<div style="overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;">
hi Steve,
<div><br>
<div> I believe you, but there is not enough public information to determine the spread, the common case, the mean, the mode, etc. for a range of projects across an industry type, nor to tie the issues raised later back to processes not followed or new processes needed.</div>
<div><br>
</div>
<div> I don’t have enough clues to explain why my experience with scaling was bad and yours good.</div>
<div><br>
</div>
<div>bob s<br id="lineBreakAtBeginningOfMessage">
<div><br>
<blockquote type="cite">
<div>On Feb 24, 2025, at 3:06 PM, Steve Tockey <steve.tockey@construx.com> wrote:</div>
<br class="Apple-interchange-newline">
<div>
<div style="overflow-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;">
<div><br>
</div>
<div>Robert,</div>
<div>I don’t agree that all processes cannot be scaled from small to large projects. I would say that this depends entirely on the process in question. Some processes actually scale quite well. Case in point: I have a process that was used in development of the Mission Systems for the P-8 Poseidon (<a href="https://en.wikipedia.org/wiki/Boeing_P-8_Poseidon">https://en.wikipedia.org/wiki/Boeing_P-8_Poseidon</a>), which involved about 350 developers building around 7 million lines of code over a 7-year project, and that same basic process works fine on single-developer, few-week, under-5,000-lines-of-code projects.</div>
<div><br>
</div>
<div><br>
</div>
<div>— steve</div>
<div><br>
</div>
<br id="lineBreakAtBeginningOfMessage">
<div><br>
<div>On Feb 24, 2025, at 7:38 AM, Robert P Schaefer <rps@mit.edu> wrote:</div>
<br class="Apple-interchange-newline">
<div>
<div>Hi, Peter,<br>
<br>
Short comment: processes that are viable on small projects do not (and I believe cannot) scale to large projects.<br>
<br>
longer explanation:<br>
<br>
in my experience (26 years in the defense industry, though that ended 15 years ago - and I am much happier now):
<br>
<br>
The processes that work for small programs do not scale to large developments for various reasons,<br>
<br>
including that the firm responsible for the processes as prime contractor gets overwhelmed<br>
<br>
by factors that cannot be controlled, including: large requirement sets that contain latent<br>
<br>
contradictions (through chains of dependencies) that aren’t discovered until after requirements are signed off,<br>
<br>
not enough engineers to keep track of 1000s of requirements from concept to execution,<br>
<br>
requirements that are allocated distant in time from where they are defined,<br>
<br>
implicit information lost when the developers in one phase are re-assigned after that phase completes,<br>
<br>
funding for testing (in the future) that is reassigned to earlier phases that run long in the present,<br>
<br>
as previously mentioned, faults that, when the program is overwhelmed by lists of faults, are regraded as not-faults,<br>
<br>
and (what really opened my eyes to the high-level BS going on) adversarial subcontractors who want to be prime in the next go-round.<br>
<br>
bob s<br>
<br>
<blockquote type="cite">On Feb 24, 2025, at 9:53 AM, Prof. Dr. Peter Bernard Ladkin <ladkin@causalis.com> wrote:<br>
<br>
On 2025-02-24 14:54 , Robert P Schaefer wrote:<br>
<blockquote type="cite">I hear you, i have no answers.<br>
</blockquote>
<br>
I do.<br>
<br>
Back when CapGemini, formerly Altran UK, was still called Praxis, they regularly estimated the achieved reliability of delivered products (and still did for iFACTS when they were Altran, a decade ago; probably still do). There is a very public project called Tokeneer, undertaken with the NSA, where the attempt was made to develop a bug-free small (10K LoC, as I remember) biometric system for access control. They almost succeeded (I recall Rod Chapman saying that two bugs were belatedly discovered).<br>
<br>
There are lots of ways, increasingly accessible, in which objective properties of code and its documentation can be rigorously established. You of course need the right kind of tools, right choice of programming language, right compiler, and so on.<br>
<br>
<br>
On Feb 24, 2025, at 8:47 AM, Derek M Jones <derek@knosof.co.uk> wrote:<br>
<br>
<blockquote type="cite"><br>
In systems safety there is the belief that following a process<br>
will lead to reliable code. And the evidence for this is?<br>
</blockquote>
<br>
In system safety there is the standard IEC 61508-3, which says formal methods are highly recommended for high-reliability requirements. It (rather, the definition of "formal methods" in IEC 61508-4 NextEd) refers to IEC 61508-3-2, which describes methods for establishing objective properties of documentation and code. There are four steps in this "waterfall", namely requirements, design, source code, and object code, and the key relation of "fulfils".<br>
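<br>
One way to picture the "fulfils" relation is as a traceability mapping that can be checked for completeness. A toy sketch in Python (the identifiers are invented; the standard prescribes the relation, not any particular representation):<br>
<pre>
# Toy sketch of the requirements/design/source/object-code chain and
# its "fulfils" relation: every requirement must be fulfilled by some
# artifact further down the chain. All identifiers are invented.

FULFILS = {
    "design.AccessMonitor":   ["REQ-001"],               # design fulfils requirements
    "src/access_monitor.py":  ["design.AccessMonitor"],  # source fulfils design
    "build/access_monitor.o": ["src/access_monitor.py"], # object code fulfils source
}

REQUIREMENTS = ["REQ-001", "REQ-002"]

fulfilled = {target for targets in FULFILS.values() for target in targets}
unfulfilled = [req for req in REQUIREMENTS if req not in fulfilled]
print("Unfulfilled requirements:", unfulfilled)  # ['REQ-002']
</pre>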
<br>
The evidence for this approach succeeding lies in, for example, the entire project histories of Praxis/Praxis HIS/Altran UK/CapGemini.<br>
<br>
It astonishes me that there are still people who claim some kind of software expertise who deny the efficacy of all this.<br>
<br>
And of course it is not the only example. Modern civil aerospace is full of very-highly-reliable software-based kit, developed according to evolutionary company practices following DO-178C and DO-333 (or EUROCAE ED-12C and ED-216). Evidence, again, in the operational
histories of all this kit.<br>
<br>
PBL<br>
<br>
Prof. Dr. Peter Bernard Ladkin<br>
Causalis Limited/Causalis IngenieurGmbH, Bielefeld, Germany<br>
Tel: +49 (0)521 3 29 31 00<br>
<br>
</blockquote>
<br>
</div>
</div>
</div>
<br>
</div>
_______________________________________________<br>
The System Safety Mailing List<br>
systemsafety@TechFak.Uni-Bielefeld.DE<br>
Manage your subscription: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety</div>
</blockquote>
</div>
<br>
</div>
</div>
</div>
</div>
</div>
<br>
</div>
</div>
</div>
</blockquote>
</div>
<br>
</div>
</div>
</body>
</html>