[SystemSafety] Stuart Russell on AI Safety ... "I don't know"

Prof. Dr. Peter Bernard Ladkin ladkin at causalis.com
Fri Aug 23 10:08:56 CEST 2024


On 2024-08-23 07:13, Les Chambers wrote:
> Hi All
> Stuart Russell spoke at the Paris AI safety breakfast, July 2024.

Good spot, Les. Thanks!

Stuart has for decades been one of the most cogent commentators on AI and safety, and by "safety" 
here I mean all the social implications, including military uses, not just "our" field of 
engineering-system functional safety. Recall the infamous "Slaughterbots" video: Stuart noted that 
all the technology needed to realise it was already available at the time of its release.

He is the coauthor, with Peter Norvig, of what has become the foremost textbook in AI, first 
published 29 years ago. "AI" here means all aspects, not just neural networks and machine learning, 
and most certainly not just LLMs.

> Watch and weep as 50 years of safety critical systems engineering process development are ignored 
> as AI companies beat on.

It is not just AI companies. Plenty of engineering companies are incorporating components that use 
AI technologies into their safety-related systems. Consider automated road vehicles. That is not 
being driven (so to speak) by AI companies; it is being driven by automobile companies, and they 
have been at it for almost two decades. It started with sensor-fusion technology for capturing 
driving-relevant aspects of the environment.
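
In case the term is unfamiliar: sensor fusion combines independent, noisy estimates of the same 
quantity into a single better estimate. Here is a minimal sketch in Python of the textbook 
inverse-variance-weighting version, with a hypothetical radar and camera both estimating the 
distance to a lead vehicle. The sensors and numbers are invented for illustration; production 
automotive stacks use far more elaborate machinery (Kalman or particle filters over many sensors 
and object tracks), but the principle is the same.

    def fuse(x_radar, var_radar, x_camera, var_camera):
        # Weight each estimate by the inverse of its variance. The fused
        # variance is smaller than either input's, which is the point.
        w_r = 1.0 / var_radar
        w_c = 1.0 / var_camera
        x_fused = (w_r * x_radar + w_c * x_camera) / (w_r + w_c)
        var_fused = 1.0 / (w_r + w_c)
        return x_fused, var_fused

    # Radar: 42.0 m with variance 0.25; camera: 41.2 m with variance 1.0.
    print(fuse(42.0, 0.25, 41.2, 1.0))   # -> roughly (41.84, 0.2)

Note that the fused variance (0.2) is below even the better sensor's (0.25): the fused estimate 
trusts the radar more but still improves on it.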

Some engineering colleagues, who have little or no background in computer science, let alone in AI, 
have turned themselves into "experts" on "AI and functional safety" and have been giving lectures 
on it at functional safety conferences. This concerns me. One reason for their doing so is that 
there is a big push in standardisation of AI and functional safety at the moment. Maybe one, maybe 
two, of the people I know who are involved in that had any experience with any AI technology before 
the standardisation activity started. It is inappropriate to blame people for what they initially 
don't know, but I think it reasonable to expect functional safety people to acquaint themselves 
with AI technology if they are pushing to use it in safety-related systems. The first step is 
surely to try to understand "AI" technologies and their capabilities, and nobody I know is familiar 
even to a modest degree with the technologies explicated in Stuart's text (latest edition 2020).

It is a bit like someone trying to tell people how to build compilers without having any 
understanding of how to parse formal languages. I mean, would you listen to that?

On the most basic level, many colleagues do not, or cannot yet, distinguish between two quite 
different technologies: the deep-learning artificial-neural-network "adaptive control" technology, 
which has been demonstrated at NASA for non-standard enhanced flight control of military jets as 
well as large commercial airplanes over some three decades now, and the transformer/word-embedding 
algorithms used to construct large language models such as ChatGPT. Duuuh.
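
To make the distinction concrete, here is a toy contrast in Python, entirely of my own devising 
and resembling no deployed system. Fragment (1) adapts a (trivially small) "network" online, 
inside a feedback control loop, from tracking error; fragment (2) treats a frozen model purely as 
a next-token sampler, with no plant, no feedback and no online learning. The random logits_of 
function is a stand-in for a real transformer, not a model of one.

    import numpy as np

    rng = np.random.default_rng(0)

    # (1) Neural-network adaptive control: the "network" (here a single
    # linear layer) is updated online, in the loop, from tracking error.
    w = np.zeros(2)
    state, target = 0.0, 1.0
    for t in range(200):
        features = np.array([state, 1.0])
        u = w @ features                         # control command
        state = 0.9 * state + 0.1 * u            # simplistic plant model
        w += 0.05 * (target - state) * features  # online weight update

    # (2) Transformer-style language modelling: a frozen model maps a
    # token sequence to a next-token distribution; generation samples.
    vocab = ["the", "system", "is", "safe", "unsafe"]
    logits_of = lambda context: rng.normal(size=len(vocab))
    tokens = ["the", "system"]
    for _ in range(3):
        p = np.exp(logits_of(tokens))
        tokens.append(str(rng.choice(vocab, p=p / p.sum())))

    print(round(state, 3), tokens)   # state has been driven near 1.0

The first fragment learns continuously while controlling a physical process; the second merely 
evaluates a fixed function over text.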

I have a paper coming out in the SCSC eJournal "any day now" on functional safety and (what I call) 
oracular subsystems. ISO and IEC have been developing a Technical Report (informational, not 
normative) on functional safety and AI for many years now. ISO/IEC TR 5469 was finally published in 
January 2024. According to my interpretation, the "model" of AI subsystems used in TR 5469 does not 
match the architecture which is used in AI-assisted control systems. That is a pretty basic disconnect.

From what I understand, there is going to be a standard (that is, one with normative parts) to 
follow TR 5469. IEC SC65A has a new subcommittee J21 on "functional safety and artificial 
intelligence"; it has overlap with the ISO/IEC joint committee SC42, but I don't know exactly what 
the formal relation is. Its Convenor is Audrey Canning, who is also the Convenor of the Maintenance 
Team MT 61508-3 for 61508 Part 3 (software). There was a "new work proposal" for a Technical 
Specification of Requirements for AI and FS, circulated about a year ago. The designation is PNW 
TS 65A-1100, and the name of the document will be ISO/IEC TS 22440. From what I understand (which 
is not much at this point), TS 22440 will be a further development of TR 5469.

There may be people here who know more about these organisational developments than I do.

Having just finished my own involvement with developing a new standard (it was supposed to be 
published on Wednesday, but a couple of typos needed to be fixed, so it will be August 29th), I am 
somewhat loath to recommend that people get involved, because I can't say my experience has been 
uplifting. But the way that AI will be constrained in safety-related systems is, as far as I can 
see, through technical standards, which might well be backed up by laws. Technical standards 
govern the current deployment of safety-related digital-technology systems, ultimately backed up 
by regulation/law, and I don't expect it to be any different for AI subsystems of safety-related 
systems.

There are two main considerations limiting any involvement of mine. One is that in the last few 
years I have been feeling overexposed to standards committees. The second is that ISO appears to 
be essentially involved, and the ISO mirror organisation in Germany, DIN, levies fees for 
participation in any of its standardisation activities (frankly, I'd rather people were paying me 
for any expertise of mine than the other way round).

PBL

Prof. Dr. Peter Bernard Ladkin
Causalis Limited/Causalis IngenieurGmbH, Bielefeld, Germany
Tel: +49 (0)521 3 29 31 00


