[SystemSafety] ISO and IEC Technical Specifications on Functional Safety and AI

Les Chambers les at chambers.com.au
Mon Oct 27 17:18:40 CET 2025


Bruce, 
Thanks for your thoughts. 
I'm surprised at how little pushback we are seeing from the safety-critical 
systems engineering profession on Tesla's fraudulent claims that supervised 
FSD is a viable option for controlling a motor vehicle.
In my time working on road tunnel safety systems, I was educated by the road 
authority engineers on how seriously they take the process of shutting down a 
freeway. It's a dangerous move requiring careful handling. The core risk lies 
in the human tendency, when driving on a freeway, to be lulled into a false 
sense of security: a somnambulistic state in which drivers assume that the 
vehicles ahead will keep moving at their current speed. When those vehicles 
have, in fact, stopped, recognition is delayed and reaction times are 
extended, triggering what the road engineers call back-of-queue collisions. 
This is a well-known phenomenon. So Tesla's claim that drivers will 
effectively supervise their vehicle under all driving conditions IS 
TRANSPARENT BS, regardless of what nifty so-called safety features they might 
incorporate. 
The difficulty with this situation is the temporal paradox. As a pragmatist, I 
accept that there is no way of banning this technology; it is, after all, so 
cool. In their enthusiasm (the so-cool frenzy), our governments seem 
determined to let Elon stress-test FSD/S on the public. In future, as the 
technology matures with gold-standard neural nets, it will probably reduce 
the occurrence of back-of-queue collisions. Hence the paradox.
Until that time, I remain depressed and grumpy.

Les

> Sorry Les, can't cheer you up yet...
> 
> Coolness always seems to Conquer Caution, which Constrains
> Convenience, Convincing the need for Coolness :-)
> 
> Certainly the Australian press is impressed FINALLY: Tesla Full
> Self-Driving unlocked in Australia... for some | Drive
> <https://www.drive.com.au/news/tesla-full-self-driving-unlocked-in-australia-for-public-use-but-not-for-all-customers/>
> 
> Others are more hesitant,
> https://www.abc.net.au/news/2025-09-24/tesla-self-driving-technology-rules-differ-around-australia/105808104,
> stating "The National Transport Commission is developing a framework to
> govern the use of autonomous vehicles Australia-wide before technology
> evolves further". Maybe this should have been done before the FSD
> (Supervised) mode feature was released for download. Just a little software
> update costing only $10K...
> 
> From what I understand, the "supervised" bit is surveillance of the
> driver's eyes, dropping out of FSD if it "sees" that they are not focussed
> on the road. I hope this is also considered a safety function.
> 
> I miss the days of fully proven functional safety systems, where we were
> dependably protected from dangerous failures. In a similar vein, I am
> seeing IoT and cloud system applications appearing in gas pipeline
> protection systems without the application of functional safety standards.
> 
> It goes way back to what I was taught as a very young engineer - "just
> because you can doesn't mean you should; just because you should doesn't
> mean you can."
> 
> Maybe I'm just getting too sceptical at my age...
> 
> Bruce Hunter
> 
> On Tue, 21 Oct 2025 at 09:52, Les Chambers <les at chambers.com.au> wrote:
> 
> > Michael
> >
> > Thanks for your concern regarding the integration of AI with
> > safety-critical systems.
> >
> > Tesla Full Self-Driving (Supervised) is now available in Australia.
> > There have been some crazy behaviours.
> >
> > Refer: https://bit.ly/CrazyTeslaFSD
> >
> > It seems the classical engineering definition of the term "technology"
> > remains axiomatic: "Technology is a thing that was invented after you
> > were born and … remains a thing that doesn't quite work yet."
> > Elon is yet to get his functional safety act together.
> >
> > To me, it is annoying that these vehicles, guided as they are by
> > immature technology, were allowed on Australian roads by naive
> > regulators.
> >
> > I'm yet to discover any AI-driven guidance system that can deliver the
> > SIL 4/ASIL D performance these vehicles should provide. It seems to me
> > that attempts to develop functional safety standards for this class of
> > system are fruitless given:
> > 1. No one, including their developers, fully understands why or how they
> > work at all.
> > 2. The black box that is AI is a demonstrably unreliable agent in the
> > service of public safety. Indeed, the most effective risk management
> > strategy for assuring SIL 4/ASIL D performance is to remove the AI black
> > box altogether.
> >
> > I have faith that, in the fullness of time, this problem will be solved
> > with Gold Standard neural networks that can be trusted. Waymo is heading
> > in this direction, but with a US$250,000 vehicle, bristling with LIDAR
> > sensors, geofenced to thoroughly LIDAR-mapped territory.
> >
> > In the meantime, shareholder value and the "but-it's-so-cool!!"
> > zeitgeist have trumped public safety, guiding vehicles with neural
> > networks that rank somewhere south of bronze standard.
> >
> > After 50 years of engineering safety-critical systems, beginning with
> > direct software control of potentially explosive chemical reactions in
> > the 1970s, I'm watching my profession seemingly discard decades of
> > lessons we paid for dearly, in nervous breakdowns, blood and treasure.
> >
> > A sad state of affairs.
> >
> > If anyone on this list could cheer me up, I would be most grateful.
> >
> > Cheers
> > Les
> >
> > > Hello everybody!
> > >
> > > Generally I like to stay out of the discussion here, although I
> > > appreciate this resource very much. Since my company works on
> > > automation devices, especially small proximity sensors, my viewing
> > > angle would otherwise be quite narrow, so it is important to get
> > > information as I do here, spanning avionics, railway, and other
> > > systems.
> > >
> > > Unfortunately I don't have further information on the self-driving
> > > car industry and what they do to justify what they put on the market.
> > > In Germany the relevant institution would be the Kraftfahrtbundesamt,
> > > and I am very sure that they are quite conservative about this. But it
> > > shows why everybody is eager for rulesets regarding AI in functional
> > > safety, so we are working on that.
> > >
> > > It is very interesting that we are standardizing safety and AI while
> > > there is still debate about what AI really is. But there do seem to be
> > > some systems that might save lives, so we need to examine under which
> > > circumstances their use can be allowed. I think if I had to defend the
> > > use of AI in safety at the moment, people would ask why I took that
> > > risk and used AI. Somewhere in the future, I might instead have to
> > > defend myself because I didn't use AI, when it would have prevented
> > > the accident. Some say that moment will never come. Some say it is
> > > already in the past, maybe not for something as complex as
> > > self-driving cars, but perhaps in other applications with
> > > lower-dimensional inputs.
> > >
> > > Some would say risk reduction below SIL 1 should not be regulated too
> > > strongly. But when I think about how people feed some data into a
> > > system and cheer if it shows nice behaviour, it might be good to ask
> > > about requirements management, configuration management, and solid
> > > proof of the concept's validity; in other words, to speak about
> > > functional safety management to the AI audience. That is the direction
> > > for the ISO/IEC group meeting in Sydney this week on TS 22440.
> > >
> > > But not to bore you with this stuff: here is something that is maybe
> > > not a safety application, but a case where people are detected using
> > > WLAN routers, even if they carry no WLAN devices themselves. A little
> > > scary, but a different article says they could distinguish people
> > > walking into a room even when carrying a container with bottles of
> > > beer. That showed me this is a realistic university scenario here in
> > > Germany. This is the only resource I found in English, but it has also
> > > been reported by other sources.
> > > https://interestingengineering.com/innovation/wifi-tech-can-identify-individuals
> > >
> > > Curious where AI and safety will go. Have a good week!
> > > Michael
> > >
> > > --
> > > Michael KINDERMANN (he/him)
> > > Head of Functional Safety
> > > Team Leader Safety & Security
> > > Dpt. Global Compliance
> > > Pepperl+Fuchs SE, Mannheim
> > >
> > >
> > > _______________________________________________
> > > The System Safety Mailing List
> > > systemsafety at TechFak.Uni-Bielefeld.DE
> > > Manage your subscription: https://lists.techfak.uni-
> > bielefeld.de/mailman/listinfo/systemsafety
> >
> >
> >
> > --
> > Les Chambers
> > les at chambers.com.au
> >
> > https://www.chambers.com.au
> > https://www.systemsengineeringblog.com
> >
> > +61 (0)412 648 992
> >
> >
> >
> >



--
Les Chambers
les at chambers.com.au

https://www.chambers.com.au
https://www.systemsengineeringblog.com

+61 (0)412 648 992




