[SystemSafety] Fwd: Re: AI and safety
Olwen Morgan
olwen at phaedsys.com
Sat Nov 10 17:08:32 CET 2018
On 10/11/2018 14:35, Peter Bernard Ladkin wrote:
> On 2018-11-10 12:32 , Olwen Morgan wrote:
>> On 10/11/2018 04:50, Peter Bernard Ladkin wrote:
>>
>> <snip>
>>
>> There is no connection made from these features which constitute "AI safety" to harm caused to
>> people or the environment, and damage to things, avoidance of which is the usual definition of safety.
>>
>> <snip>
>>
>>
>> With due respect, Peter, this seems to me to be missing the wood for the trees. The only way we'll
>> ever address the problems associated with using AI in critical systems is to build experience of
>> what can go wrong.
> Not at all.
>
> Another way - indeed *the* way according to the safety standard IEC 61508 - is to demonstrate that
> putting a DLNN in the specific functional pathway of system S poses an acceptable or an unacceptable
> risk. If the risk is acceptable, you can go ahead. If the risk is unacceptable, you either have to
> take your DLNN out of the functional pathway again, or you have to otherwise mitigate the risk.
>
> These are options concerning safety, but not (or not much) reliability. Which is another reason why
> looking at reliability parameters and imagining you are dealing with safety is a wrong approach.
>
> The advantage of staring hard at the trees is that you find out what makes them grow.
<snip>
> Summary: statically trained nets, maybe, if you can suss out the Lyapunov functions; dynamically-trained nets, no;
> dynamically-trained unsupervised nets, forget it for the foreseeable future.
<snip>
Given the volume of research that has taken place on NNs, I'm inclined
to think that statically trained ones would be the least risky,
especially in applications as well researched as flight control.
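To make Peter's Lyapunov remark concrete, here is a toy sketch of the kind of check one might attempt for a *statically* trained net: freeze the trained map, pick a candidate Lyapunov function, and verify empirically that it decreases along the dynamics. Everything here is illustrative (the "network" is reduced to a fixed linear map A, and the candidate V is just the squared norm); a real analysis would need a formal argument, not sampling.

```python
import numpy as np

# Illustrative stand-in for a statically trained network: a fixed
# linear map A with spectral norm < 1, so the origin is stable.
A = np.array([[0.5, 0.1],
              [0.0, 0.6]])

def f(x):
    # One step of the (frozen) dynamics x_{k+1} = f(x_k).
    return A @ x

def V(x):
    # Candidate Lyapunov function: squared Euclidean norm.
    return float(x @ x)

rng = np.random.default_rng(0)
samples = rng.uniform(-1.0, 1.0, size=(1000, 2))

# Empirical check: V strictly decreases over one step of the dynamics
# at every sampled non-equilibrium point.
decreasing = all(V(f(x)) < V(x) for x in samples if V(x) > 1e-12)
print(decreasing)
```

Note that even this crude sampling check only makes sense because the trained map is frozen; with dynamic training, f itself changes under you, which is exactly why Peter rules it out.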
On the other hand, NNs are not the only techniques used in AI. My
worries centre on how you assess risks. OK, you may be able to do that
for statically trained NNs, but what of large rule-based systems? Or
pattern-recognition systems? Or genetic algorithms? Or any other
novel technique that AI might seize upon?
This seems to me to be a classic dialectical situation. AI's great
strength is that it finds solutions that humans don't, but by the same
token it can engender risks that humans never even suspect. All hazard
analysis depends on the ability to identify failure modes. If AI
engenders hazardous failure modes that we are not even aware of, then a
major plank of safety engineering is undermined, because we fail to
mitigate the unsuspected risks.
That's why I favour the idea of a database of incidents in which AI
systems have behaved in totally unexpected ways *regardless of any
particular relation to dependability issues*. Surely if we want to be
able to identify exotic failure modes, such a database is where we
should start?
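To give the database idea some shape, here is a minimal sketch of what one record might look like. The field names and the sample incident are entirely my own illustration (no such schema exists in this thread); the point is simply that each entry should capture the technique involved and the gap between expected and observed behaviour, independent of whether any harm resulted.

```python
from dataclasses import dataclass, field

# Hypothetical record schema for the proposed incident database.
# All field names are illustrative, not an established standard.
@dataclass
class AIIncident:
    system: str                  # the deployed system or application
    technique: str               # NN, rule-based, genetic algorithm, ...
    expected_behaviour: str      # what designers anticipated
    observed_behaviour: str      # what actually happened
    suspected_cause: str = "unknown"
    tags: list = field(default_factory=list)

# An invented example entry, purely for illustration.
incident = AIIncident(
    system="vision-based lane keeping",
    technique="statically trained NN",
    expected_behaviour="track painted lane markings",
    observed_behaviour="locked onto an unrelated road-surface feature",
    tags=["perception", "unsuspected-failure-mode"],
)
print(incident.technique)
```

Recording incidents this way, regardless of dependability impact, is what would let exotic failure modes be mined from the data later.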
Again, I have to acknowledge that this is out of my field, so apologies
if the better informed here think I'm spouting BS.
Olwen