[SystemSafety] Comparing reliability predictions with reality
Robert P Schaefer
rps at mit.edu
Mon Feb 24 21:31:59 CET 2025
Hi Steve,
I believe you, but there is not enough public information to determine the spread, the common case, the mean, the mode, etc. for a range of projects across an industry type, nor to tie the issues raised later back to processes that were not followed or to new processes that are needed.
I don’t have enough clues to explain why my experience with scaling was bad and yours good.
bob s
On Feb 24, 2025, at 3:06 PM, Steve Tockey <steve.tockey at construx.com> wrote:
Robert,
I don’t agree that all processes cannot be scaled from small to large projects. I would say that this depends entirely on the process in question. Some processes actually scale quite well. Case in point: I have a process that was used in the development of the Mission Systems for the P-8 Poseidon (https://en.wikipedia.org/wiki/Boeing_P-8_Poseidon), which involved about 350 developers building around 7 million lines of code over a 7-year project. That same basic process works fine on single-developer, few-week, under-5000-lines-of-code projects.
— steve
On Feb 24, 2025, at 7:38 AM, Robert P Schaefer <rps at mit.edu> wrote:
Hi, Peter,
Short comment: processes that are viable on small projects do not (and I believe cannot) scale to large projects.
Longer explanation:
In my experience (26 years in the defense industry, though that ended 15 years ago - and I am much happier now), the processes that work for small programs do not scale to large developments, for various reasons. The firm that is responsible for the processes as prime contractor gets overwhelmed by factors that cannot be controlled, including:
- large requirement sets that contain latent contradictions (through chains of dependencies) which aren’t discovered until after the requirements are signed off,
- not enough engineers to keep track of 1000s of requirements that are allocated distant in time (from concept to execution) from where they are defined,
- implicit information lost when the developers in one phase are re-assigned after that phase completes,
- funding for testing (in the future) that is reassigned to earlier phases that run long in the present,
- as previously mentioned, faults that are regraded as not-faults once the team is overwhelmed by lists of faults,
- and (what really opened my eyes to the high-level BS going on) adversarial subcontractors who want to be prime in the next go-round.
bob s
On Feb 24, 2025, at 9:53 AM, Prof. Dr. Peter Bernard Ladkin <ladkin at causalis.com> wrote:
On 2025-02-24 14:54 , Robert P Schaefer wrote:
I hear you; I have no answers.
I do.
Back when CapGemini, formerly Altran UK, was still called Praxis, they regularly estimated the achieved reliability of delivered products (and still did so for iFACTS when they were Altran, a decade ago; probably still do). There is a very public project called Tokeneer, undertaken with the NSA, in which the attempt was made to develop a bug-free, small (10K LoC, as I remember) biometric system for access control. They almost succeeded (I recall Rod Chapman saying that two bugs were belatedly discovered).
There are lots of ways, increasingly accessible, in which objective properties of code and its documentation can be rigorously established. You of course need the right kind of tools, the right choice of programming language, the right compiler, and so on.
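To make the idea concrete, here is a minimal sketch in Python (nothing like the SPARK/Ada toolchain Praxis used, and with invented function names): state an objective property of a piece of code explicitly, then have a tool check it mechanically. Here the checker is the Hypothesis property-based testing library rather than a proof tool, so it searches for counterexamples instead of proving the property for all inputs, but the shape of the activity - explicit property, mechanical check - is the same.

from hypothesis import given, strategies as st

def clamp(value: int, low: int, high: int) -> int:
    """Return value limited to the closed interval [low, high]."""
    assert low <= high, "precondition: interval must be non-empty"
    result = min(max(value, low), high)
    assert low <= result <= high  # postcondition, stated explicitly
    return result

@given(st.integers(), st.integers(), st.integers())
def test_clamp_stays_in_range(value, a, b):
    low, high = min(a, b), max(a, b)
    result = clamp(value, low, high)
    assert low <= result <= high      # the claimed property
    if low <= value <= high:
        assert result == value        # clamp is the identity inside the range

if __name__ == "__main__":
    test_clamp_stays_in_range()   # Hypothesis runs many generated cases
    print("property checks passed")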
On Feb 24, 2025, at 8:47 AM, Derek M Jones <derek at knosof.co.uk> wrote:
In systems safety there is the belief that following a process
will lead to reliable code. And the evidence for this is?
In system safety there is the standard IEC 61508-3, which says formal methods are highly recommended for high-reliability requirements. It (rather, the definition of "formal methods" in IEC 61508-4 NextEd) refers to IEC 61508-3-2, which describes methods for establishing objective properties of documentation and code. There are four steps in this "waterfall", namely requirements, design, source code, and object code, and the key relation between them is "fulfils".
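As a toy sketch of the shape of that "fulfils" relation (in Python, with invented artefact names; the standard itself prescribes no such script), each requirement should be reachable through a chain of "fulfils" links running from object code back through source code and design:

fulfils = {
    # artefact: the artefact(s) it fulfils
    "design:door_interlock": ["req:R1_no_motion_while_open"],
    "src:interlock.adb": ["design:door_interlock"],
    "obj:interlock.o": ["src:interlock.adb"],
}

requirements = ["req:R1_no_motion_while_open"]

def fulfilled(req: str) -> bool:
    """True if some object-code artefact is linked back to req via 'fulfils'."""
    reached = {req}
    changed = True
    while changed:
        changed = False
        for artefact, targets in fulfils.items():
            if artefact not in reached and any(t in reached for t in targets):
                reached.add(artefact)
                changed = True
    return any(a.startswith("obj:") for a in reached)

if __name__ == "__main__":
    for r in requirements:
        print(r, "fulfilled" if fulfilled(r) else "NOT fulfilled")

The point is only that "fulfils" is a checkable relation between artefacts at the four steps; traceability bookkeeping of this kind is of course not what the standard means by formal methods.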
The evidence for this approach succeeding lies in, for example, the entire project histories of Praxis/Praxis HIS/Altran UK/CapGemini.
It astonishes me that there are still people who claim some kind of software expertise who deny the efficacy of all this.
And of course it is not the only example. Modern civil aerospace is full of very-highly-reliable software-based kit, developed according to evolutionary company practices following DO-178C and DO-333 (or EUROCAE ED-12C and ED-216). Evidence, again, in the operational histories of all this kit.
PBL
Prof. Dr. Peter Bernard Ladkin
Causalis Limited/Causalis IngenieurGmbH, Bielefeld, Germany
Tel: +49 (0)521 3 29 31 00
_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE
Manage your subscription: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety