[SystemSafety] AI and safety
Peter Bernard Ladkin
ladkin at causalis.com
Sat Nov 10 15:35:00 CET 2018
On 2018-11-10 12:32, Olwen Morgan wrote:
>
> On 10/11/2018 04:50, Peter Bernard Ladkin wrote:
>
> <snip>
>
> There is no connection made from these features which constitute "AI safety" to harm caused to
> people or the environment, and damage to things, avoidance of which is the usual definition of safety.
>
> <snip>
>
>
> With due respect, Peter, this seems to me to be missing the wood for the trees. The only way we'll
> ever address the problems associated with using AI in critical systems is to build experience of
> what can go wrong.
Not at all.
Another way - indeed *the* way according to the safety standard IEC 61508 - is to demonstrate that
putting a DLNN in the specific functional pathway of system S poses an acceptable or an unacceptable
risk. If the risk is acceptable, you can go ahead. If the risk is unacceptable, you either have to
take your DLNN out of the functional pathway again, or you have to otherwise mitigate the risk.
These are options concerning safety, but not (or not much) reliability, which is another reason why
looking at reliability parameters and imagining you are dealing with safety is a wrong approach.
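For concreteness, that decision logic can be sketched in a few lines of Python. This is a minimal
sketch, not the standard's process: the SIL bands are the tolerable rates of dangerous failure per
hour for continuous/high-demand mode from IEC 61508-1 Table 3, and everything else (names, numbers)
is invented.

    # Upper bounds of the continuous/high-demand-mode SIL bands
    # (tolerable dangerous failures per hour, IEC 61508-1 Table 3).
    SIL_BANDS = {1: 1e-5, 2: 1e-6, 3: 1e-7, 4: 1e-8}

    def risk_acceptable(dangerous_failure_rate: float, sil: int) -> bool:
        """True if the estimated rate (per hour) of dangerous failures of the
        safety function, with the DLNN in its pathway, meets the assigned SIL."""
        return dangerous_failure_rate < SIL_BANDS[sil]

    # If this comes out False, the options are as above: take the DLNN out
    # of the functional pathway, or mitigate (say, with an independent
    # non-NN monitor) and estimate the rate again.
    if not risk_acceptable(3e-7, sil=3):
        print("unacceptable: remove the DLNN or mitigate and reassess")

The hard part, of course, is justifying any such rate estimate for a DLNN in the first place.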
The advantage of staring hard at the trees is that you find out what makes them grow.
NASA and Boeing flew an MD-11 with NN FBW and reconfigurable control so that, in the event of a loss
of control authority, other control mechanisms would compensate, even though the control commands
applied through the control column remained the same. This was after Al Haynes and Denny Fitch's
recovery attempt of UA232, which landed at Sioux City, IA in July 1989 after losing all
control-surface authority, the crew attempting to land purely on differential engine power from two
engines. The MD-11 reconfigurable-control experiment was successful: pitch, bank and yaw commands at
the control column were translated into differential thrust, which effected what had been commanded.
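For anyone who has not met propulsion-controlled flight: the core idea can be caricatured in a few
lines of Python. The gains, names and numbers below are all invented, and NASA's actual controller
was a proper feedback law, not an open-loop mapping like this.

    def thrust_commands(pitch_cmd, roll_cmd, trim=0.6, k_pitch=0.1, k_roll=0.05):
        """Map normalised column commands (-1..1) to per-engine throttle.

        Pitch is commanded via collective thrust (more thrust pitches the
        nose up through the trim/phugoid response); bank and yaw via
        differential thrust between the wing engines."""
        collective = trim + k_pitch * pitch_cmd
        differential = k_roll * roll_cmd             # positive rolls right
        left = min(max(collective + differential, 0.0), 1.0)
        right = min(max(collective - differential, 0.0), 1.0)
        return left, right

    print(thrust_commands(pitch_cmd=0.5, roll_cmd=0.3))  # (0.665, 0.635)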
Similarly, NASA flew a reconfigurable F-15 (I think it was). The point of this particular
experiment was to enable continued control of a damaged fighter aircraft using its intended
command inputs.
Johann Schumann of the NASA Ames Robust Software Engineering Group has a book including papers on
these aircraft, as well as other papers on NNs in aerospace-control applications. Summary:
statically-trained nets, maybe, if you can suss out the Liapunov functions; dynamically-trained
nets, no; dynamically-trained unsupervised nets, forget it for the foreseeable future.
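To illustrate why the statically-trained case is even potentially tractable: the function the net
computes is frozen, so one can attempt a Liapunov argument about the closed loop. Below is a toy
numeric check in Python; the plant, the net and the candidate function are all invented, and a
sampling check of this sort proves nothing. The arguments in Schumann's book are analytic.

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete-time double integrator
    B = np.array([[0.0], [0.1]])
    W = np.array([[-1.0, -1.8]])             # weights of the fixed, "trained" net

    def u(x):                                 # the net: a single tanh layer
        return np.tanh(W @ x)

    def step(x):                              # closed loop: x' = Ax + Bu(x)
        return A @ x + B @ u(x)

    P = np.array([[2.0, 0.5], [0.5, 1.0]])    # candidate Liapunov matrix
    def V(x):
        return (x.T @ P @ x).item()

    # Check that V decreases along one step from sampled states. This
    # covers only the sampled region and says nothing outside it.
    samples = (rng.uniform(-0.3, 0.3, (2, 1)) for _ in range(10_000))
    print(all(V(step(x)) < V(x) for x in samples))   # True for this toy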
The reason for these verdicts is that the risk criteria I stated above must be applied: the
acceptable means of compliance (as it is known in Europe; AC-120 in the US) with airworthiness
regulations requires you to show low risk (of the 10^(-9)-per-flight-hour variety for control
systems), and you cannot do so with dynamically-trained or with unsupervised-learning nets.
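A back-of-envelope calculation (mine; the argument is standard in the software-reliability
literature) shows why a figure like that is out of reach of statistical demonstration for a net
whose behaviour you cannot analyse:

    import math

    # Under a constant-failure-rate model, t failure-free test hours give
    # confidence 1 - exp(-rate * t) that the true rate is below `rate`.
    target_rate = 1e-9           # dangerous failures per flight hour
    confidence = 0.99
    hours = -math.log(1 - confidence) / target_rate
    print(f"{hours:.1e} failure-free test hours needed")   # ~4.6e9 hours

That is roughly half a million years of continuous failure-free operation.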
PBL
Prof. Peter Bernard Ladkin, Bielefeld, Germany
MoreInCommon
Je suis Charlie
Tel+msg +49 (0)521 880 7319 www.rvs-bi.de