[SystemSafety] At least PBL is now talking to me again ...
Olwen Morgan
olwen at phaedsys.com
Sat Jul 11 17:26:41 CEST 2020
Comments interspersed:
On 11/07/2020 10:18, Peter Bernard Ladkin wrote:
> Yes, well, aeronautical engineers are quite used to hubristical non-aeronautical-engineers coming
> along and telling them they have it all wrong. Most of them no longer even bother to reply
> "actually, we have it more or less right."
The danger in assuming that you've got it right is that it can blind you
to the need to keep asking whether there are things that you might have
missed and whether experience from other areas of engineering might
usefully be adapted and adopted in your own. I do not think that safety
engineers anywhere pay adequate attention to reasonably foreseeable
failure scenarios that they might arrive at by using methods from
engineering disciplines other than their own.
To give an example, there have been several air incidents in which
planes have run out of fuel; the notorious Air Transat Flight 236
incident at Lajes in the Azores in 2001 comes to mind. Yet when I was
working at the Airbus Fuel Systems Test Facility in Filton in 2006 and
2007, I was told that Airbus fuel monitoring systems at that time did
not monitor fuel-on-board and the rate of fuel consumption (by
combustion or by loss) against the flight plan and current position. I
then came up with a handful of suggestions that could give pilots early
warning of unexpected loss or deficiency of fuel by monitoring just a
few readily available scalar parameters. As far as I could tell from
what I was told, Airbus was, five years after Lajes, only just getting
round to looking at these issues. Until then, as far as I could tell,
Airbus fuel systems engineers had considered their then on-board fuel
SCADA systems entirely adequate for keeping the pilot aware of his fuel
status.
As a software engineer, I found that one of the first questions that
occurred to me about fuel systems instrumentation was, "How does the
pilot know he has enough fuel to complete the flight plan?" At the
time, the answer "Oh, don't worry, we know" would have been hopelessly
wrong.
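To make the kind of check I had in mind concrete, here is a rough sketch
in Python. It is purely illustrative - the names, figures and structure
are invented for this post, not anything Airbus uses and not exactly
what I proposed at Filton - but it shows how a handful of scalar
parameters (fuel-on-board, metered burn rate, distance to go, ground
speed) can be checked against what the flight plan predicts, and how an
unexplained drop in fuel-on-board can be flagged as a possible leak:

from dataclasses import dataclass

@dataclass
class FuelSnapshot:
    fuel_on_board_kg: float      # totalised tank gauge reading
    burn_rate_kg_per_h: float    # metered consumption, all engines
    dist_remaining_nm: float     # along the planned route
    ground_speed_kt: float

def fuel_margin_kg(s: FuelSnapshot, reserve_kg: float) -> float:
    """Fuel predicted to remain at destination beyond the required reserve."""
    hours_to_go = s.dist_remaining_nm / s.ground_speed_kt
    return s.fuel_on_board_kg - s.burn_rate_kg_per_h * hours_to_go - reserve_kg

def leak_suspected(prev: FuelSnapshot, curr: FuelSnapshot,
                   elapsed_h: float, tolerance_kg: float = 200.0) -> bool:
    """True if fuel has disappeared faster than metered combustion explains."""
    expected_drop = curr.burn_rate_kg_per_h * elapsed_h
    actual_drop = prev.fuel_on_board_kg - curr.fuel_on_board_kg
    return actual_drop - expected_drop > tolerance_kg

The second check captures what a leak looks like from the cockpit: fuel
leaving the tanks faster than the engines account for.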
"Actually, we have it more or less right." ... ? ..... Maybe but maybe
not ... Like the categories of negligence, the categories of hazard are
ever open.
> The proof of that lies before our eyes in this case. As I noted, Boeing knew all they needed to know
> technically about the specific safety properties of MCAS in March 2016.
What they "needed to know" was that the system was potentially very
dangerous (to put it mildly). Did they know it? If they did, why did
they wait for the crashes to happen? I think that they believed MCAS was
safe when it wasn't but simply failed adequately to consider any reasons
why that belief might be mistaken. Also, the question arises as to what
is covered by your use of the term, "specific safety properties". Right
now, it's not very clear to me what you intended that terminology to
encompass. Are we talking short-span or long-span properties?
> Lots of people besides yourself have suggested ways they could have identified MCAS issues. I see
> that as pointless: they knew.
> However, they assumed that the symptoms of the condition would be identified by the crew. This
> assumption was right in the simulator and wrong in the real world. This phenomenon has been
> highlighted in those terms by Michael.
And that assumption could have been shown to be shaky by using HMI
expertise to devise out-of-left-field (OOLF) crew reactions, or
non-reactions, to throw against the system in stress testing. FFS, I've
worked with testers of commercial systems who had no technical education
in software engineering but who look better at devising OOLF test
scenarios than Boeing appears to have been. How do you think ethical
hackers earn a living? (Oh, sorry, that's system security - nothing to
do with aviation safety.) ... er ... ?
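By way of illustration only - the following is a toy I have just made
up, not a model of MCAS or of anything Boeing did - the sort of stress
test I mean takes only a few lines of Python: sample out-of-left-field
crew behaviours at random (slow reactions, distracted crew, no reaction
at all) and check whether a simple safety invariant still holds in the
model:

import random

def simulate(crew_response_delay_s, trim_authority):
    """Toy model: peak nose-down trim reached before the crew intervenes.
    A delay of None means the crew never reacts to the symptom at all."""
    trim = 0.0
    for t in range(300):                      # one-second steps
        trim += 0.05 * trim_authority         # repeated automatic trim command
        if crew_response_delay_s is not None and t >= crew_response_delay_s:
            break                             # crew cuts the system out here
    return trim

def stress_test(trials=10_000, trim_limit=2.5):
    """Sample OOLF crew behaviours; collect scenarios breaching the limit."""
    failures = []
    for _ in range(trials):
        delay = random.choice(
            [random.uniform(0.0, 10.0),       # prompt but imperfect reaction
             random.uniform(10.0, 120.0),     # distracted or confused crew
             None])                           # no reaction at all
        authority = random.uniform(0.5, 1.5)  # how hard the system can trim
        peak = simulate(delay, authority)
        if peak > trim_limit:
            failures.append((delay, authority, peak))
    return failures

print(len(stress_test()), "scenarios breached the trim limit")

Even a harness this crude surfaces exactly the cases that the "the crew
will catch it" assumption hides: the slow reaction and the crew who
never make the connection at all.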
If you say that you "know" (sic) how system A behaves and that it is
safe on the assumption that B holds, but you do not perform *robust*
checks as to whether B actually does hold, then what is the epistemic
status of your claim to know that system A is safe? Or are you saying
that Boeing did know it was unsafe and deliberately ignored the issue?
And is the ambiguity here perhaps a result, understandably, of your not
wanting to say things for which Boeing might sue you? ... (though fair
enough if it is) ...
... I honestly now haven't the foggiest clue where you are coming from
on this.
Still confused,
Olwen
>
> PBL
>
> Prof. Peter Bernard Ladkin, Bielefeld, Germany
> Styelfy Bleibgsnd
> Tel+msg +49 (0)521 880 7319 www.rvs-bi.de