<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p><br>
</p>
<p>Comments interspersed:</p>
<p><br>
</p>
<div class="moz-cite-prefix">On 11/07/2020 10:18, Peter Bernard
Ladkin wrote:
</div>
<blockquote type="cite"
cite="mid:7f56535e-e93a-b8d7-a0aa-3ad4e3abe7f7@causalis.com">
<pre class="moz-quote-pre" wrap="">Yes, well, aeronautical engineers are quite used to hubristical non-aeronautical-engineers coming
along and telling them they have it all wrong. Most of them no longer even bother to reply
"actually, we have it more or less right."</pre>
</blockquote>
    <p>The danger in assuming that you've got it right is that it can
      blind you to the need to keep considering whether there are
      things you might have missed, and whether experience from other
      areas of engineering might usefully be adapted and adopted in
      your own. I do not think that safety engineers anywhere pay
      adequate attention to reasonably foreseeable failure scenarios
      that they might arrive at by applying methods from engineering
      disciplines other than their own.<br>
    </p>
    <p>To give an example, there have been several air incidents in
      which planes have run out of fuel. The notorious Air Transat
      Flight 236 incident at Lajes in the Azores in 2001 comes to mind.
      Yet when, in 2006 and 2007, I was working at the Airbus Fuel
      Systems Test Facility in Filton, I was told that Airbus fuel
      monitoring systems at that time did not check fuel-on-board and
      the rate of fuel consumption (by combustion or loss) against the
      flight plan and current position. I then came up with a handful of
      suggestions that could give pilots early warning of unexpected
      loss or deficiency of fuel by monitoring just a few readily
      available scalar parameters. As far as I could tell from what I
      was told, Airbus was, five years after Lajes, only just getting
      round to looking at these issues. Up till then, its fuel systems
      engineers had considered the then on-board fuel SCADA systems to
      be entirely adequate for keeping the pilot aware of his fuel
      status. One of the first questions that occurred to me, as a
      software engineer, about fuel systems instrumentation was, "How
      does the pilot know he has enough fuel to complete the flight
      plan?" At the time, the answer "Oh, don't worry, we know" would
      have been hopelessly wrong.<br>
    </p>
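    <p>For concreteness, here is a minimal sketch of the kind of check I
      had in mind, using only a handful of scalar parameters. Every
      name, figure and threshold below is illustrative only; it is not
      Airbus logic, just the shape of the monitoring I was suggesting:</p>
    <pre>
# Toy sketch: warn early about unexpected fuel loss or insufficiency by
# checking fuel on board and observed burn rate against the remaining flight plan.
# All parameter names, figures and thresholds are illustrative assumptions.

def fuel_warnings(fuel_on_board_kg: float,
                  observed_burn_kg_per_hr: float,
                  planned_burn_kg_per_hr: float,
                  hours_remaining: float,
                  reserve_kg: float,
                  burn_tolerance: float = 0.10):
    warnings = []

    # 1. Unexpected consumption or loss: burn rate well above plan may indicate a leak.
    if observed_burn_kg_per_hr > planned_burn_kg_per_hr * (1 + burn_tolerance):
        warnings.append("fuel consumption exceeds plan - possible leak or abnormal burn")

    # 2. Sufficiency: project fuel at destination from current position and observed burn.
    projected_at_destination = fuel_on_board_kg - observed_burn_kg_per_hr * hours_remaining
    if reserve_kg > projected_at_destination:
        warnings.append("projected fuel at destination below reserve - consider diversion")

    return warnings

# Example: three hours to run, burning noticeably faster than planned.
print(fuel_warnings(fuel_on_board_kg=9000, observed_burn_kg_per_hr=2600,
                    planned_burn_kg_per_hr=2200, hours_remaining=3.0,
                    reserve_kg=2500))
    </pre>
    <p>The point is not the arithmetic but that every input to such a
      check was already readily available on the flight deck.<br>
    </p>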
<p>"Actually, we have it more or less right." ... ? ..... Maybe but
maybe not ... Like the categories of negligence, the categories of
hazard are ever open.</p>
<blockquote type="cite"
cite="mid:7f56535e-e93a-b8d7-a0aa-3ad4e3abe7f7@causalis.com">
<pre class="moz-quote-pre" wrap="">The proof of that lies before our eyes in this case. As I noted, Boeing knew all they needed to know
technically about the specific safety properties of MCAS in March 2016.</pre>
</blockquote>
What they "needed to know" was that the system was potentially very
dangerous (to put it mildly). Did they know it? If they did, why did
they wait for the crashes to happen? I think that they believed MCAS
was safe when it wasn't but simply failed adequately to consider any
reasons why that belief might be mistaken.
Also, the question arises as to what is covered by your use of the
term, "specific safety properties". Right now, it's not very clear
to me what you intended that terminology to encompass. Are we
talking short-span or long-span properties?<br>
<blockquote type="cite"
cite="mid:7f56535e-e93a-b8d7-a0aa-3ad4e3abe7f7@causalis.com">
<pre class="moz-quote-pre" wrap="">Lots of people besides yourself have suggested ways they could have identified MCAS issues. I see
that as pointless: they knew.
</pre>
</blockquote>
<blockquote type="cite"
cite="mid:7f56535e-e93a-b8d7-a0aa-3ad4e3abe7f7@causalis.com">
<pre class="moz-quote-pre" wrap="">However, they assumed that the symptoms of the condition would be identified by the crew. This
assumption was right in the simulator and wrong in the real world. This phenomenon has been
highlighted in those terms by Michael.</pre>
</blockquote>
    <p>And that assumption could have been shown to be shaky by using
      HMI expertise to devise out-of-left-field (OOLF) crew reactions,
      or non-reactions, to throw against the system in stress testing.
      FFS, I've worked with testers of commercial systems who had no
      technical education in software engineering but who appear to
      have been better at devising OOLF test scenarios than Boeing was.
      How do you think ethical hackers earn a living? (Oh, sorry, that's
      system security - nothing to do with aviation safety.) ...
      er ... ?<br>
    </p>
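    <p>To make that concrete, here is a toy sketch of the sort of stress
      test I mean: throw randomized, including deliberately "wrong",
      crew reactions at a simulated system and see whether the safety
      claim survives. The simulator, the reactions and the probabilities
      are all invented for illustration; only the shape of the test
      matters:</p>
    <pre>
import random

# Toy sketch: stress-test the assumption "the crew will identify and counter the symptom".
# The 'system', the crew reactions and the recovery odds are invented for illustration.

CREW_REACTIONS = ["correct_procedure", "delayed_response", "opposite_input", "no_response"]

def simulate_flight(reaction: str) -> bool:
    """Return True if the (toy) flight stays safe under the given crew reaction."""
    if reaction == "correct_procedure":
        return True                    # the textbook reaction always recovers
    if reaction == "delayed_response":
        return random.random() > 0.3   # sometimes recovers, sometimes not
    return False                       # out-of-left-field reactions are not recovered

def stress_test(trials: int = 1000) -> None:
    failures = {}
    for _ in range(trials):
        reaction = random.choice(CREW_REACTIONS)   # include OOLF behaviour, not just the expected one
        if not simulate_flight(reaction):
            failures[reaction] = failures.get(reaction, 0) + 1
    print("Failures by crew reaction:", failures)

stress_test()
    </pre>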
    <p>If you say that you "know (sic) how system A behaves and that it
      is safe on the assumption that B" but you do not perform *robust*
      checks on whether assumption B holds, then what is the
      epistemic status of your claim to know that system A is safe? Or
      are you saying that Boeing did know it was unsafe and deliberately
      ignored the issue? And is the ambiguity here perhaps, quite
      understandably, a result of your not wanting to say things for
      which Boeing might sue you? (Fair enough if it is.)</p>
<p><br>
</p>
<p>... I honestly now haven't the foggiest clue where you are coming
from on this.<br>
</p>
<p><br>
</p>
<p>Still confused,<br>
</p>
<p>Olwen<br>
</p>
<p><br>
</p>
<blockquote type="cite"
cite="mid:7f56535e-e93a-b8d7-a0aa-3ad4e3abe7f7@causalis.com">
<pre class="moz-quote-pre" wrap="">
PBL
Prof. Peter Bernard Ladkin, Bielefeld, Germany
Styelfy Bleibgsnd
Tel+msg +49 (0)521 880 7319 <a class="moz-txt-link-abbreviated" href="http://www.rvs-bi.de">www.rvs-bi.de</a>
</pre>
</blockquote>
</body>
</html>