[SystemSafety] Waymo cars coming to London
Prof. Dr. Peter Bernard Ladkin
ladkin at techfak.de
Thu Oct 16 18:13:39 CEST 2025
On 2025-10-16 17:12, Jonathan Ostroff wrote:
> Is there a publicly available externally verified safety assurance case for Waymo?
I don't know. I think there will have been separate safety assurance cases for California and
Arizona, because the autonomous-vehicle experiments in California were overseen by the
California DMV. There will surely be a separate case for London.
The University of York has a Centre for Assuring Autonomy, and Simon Burton, now a Professor at
York, is the Convenor of ISO TC22/SC32/WG14, the committee on safety and AI for road vehicles. I
also know that there is a furious amount of work going on on the successor to ISO/IEC TR 5469, on
functional safety and AI systems in OT -- much more, in fact, than I can keep up with. The
forthcoming ISO/IEC TS 22440 has three parts, and there is a further TS 25223. These are being
worked on intensively in Germany, and I know attitudes in many other countries at the ISO/IEC level
are just as intense.
These documents are substantial. Concerning quality, parts of them are insightful and draw well on
experience in FS and OT, but I confess to not knowing all the details. And there are substantial
remaining questions -- the good news is that people appear to be engaging with them.
I can't get stuck in too far, because to work on ISO standards in Germany you (your company) have to
pay for participation (for IEC standards you can play for free). But there is at least one
mailing-list participant who does -- he is the Chair of the German IEC mirror committee on FS and
AI and is active at the IEC level as well.
This activity is ferocious enough that I am moderately convinced that potential overseers are doing
the best they (we) can and that the companies are engaging with all this. That is in part why Uber
and GM quit. But I must say I am surprised that Apple quit as well. Maybe they figured out that
Google/Alphabet/Waymo was so far ahead on the mapping issue that they couldn't catch up in the
foreseeable future (see below).
> If it uses AI/machine-learning, how is safety assured?
I'd say, for a London "engagement", through the standards that Brits are helping develop at the ISO
and IEC.
> Presumably there are non-AI backup systems in the control logic?
Oh yes. When they were introduced in Phoenix, one YouTuber hailed them for lots of trips and made
videos of all the "awkward moments". These mostly involved vehicles driving slowly and stopping in
the middle of busy thoroughfares, causing traffic jams. But they also had remote operation as a
backup -- and the videos show that sometimes the remote operators couldn't get the vehicles to
move either.
> Also, is it constraints such as operation in well-mapped areas that aids the safety case (say
> better than Tesla EV?).
I am personally quite sure that the extensive road-environment mapping Google has been performing
for two decades now is an important component of their operations. I might even guess (see above)
that this is why everyone else has quit. But I don't know that for a fact.
>
> How do we know that in their limited Level 4 domain these cars are as safe (or safer) than human
> drivers?
In August 2024 we didn't. Now that they have been operating for over a year, there are statistics.
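A comparison of that kind typically reduces to incident rates per vehicle mile. As a minimal sketch of the arithmetic involved -- using hypothetical placeholder numbers, not Waymo's actual figures -- one might compute a fleet's rate per million miles with a rough Poisson confidence interval and compare it against a human-driver baseline:

```python
import math

# All figures below are hypothetical placeholders for illustration only.

def rate_per_million_miles(incidents: int, miles: float) -> float:
    """Incidents per million vehicle miles."""
    return incidents / (miles / 1e6)

def poisson_rate_ci(incidents: int, miles: float, z: float = 1.96):
    """Approximate 95% CI for a Poisson rate (normal approximation),
    expressed per million miles."""
    rate = incidents / miles
    se = math.sqrt(incidents) / miles
    lo = max(0.0, rate - z * se) * 1e6
    hi = (rate + z * se) * 1e6
    return lo, hi

# Hypothetical fleet data vs. a hypothetical human-driver baseline:
av_rate = rate_per_million_miles(incidents=30, miles=50e6)
lo, hi = poisson_rate_ci(incidents=30, miles=50e6)
human_baseline = 4.0  # incidents per million miles (placeholder)
print(f"AV rate: {av_rate:.2f}/M miles (95% CI {lo:.2f}-{hi:.2f}), "
      f"ratio vs baseline: {av_rate / human_baseline:.2f}")
```

The real statistical questions -- exposure matching, severity classes, reporting thresholds -- are of course much harder than this sketch suggests; the point is only that a year of operating data makes such rate comparisons possible at all.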
> This forum has the expert knowledge to weigh in.
Yes, but the York CAA is a heavyweight in comparison with us.
PBL
Prof. i.R. Dr. Peter Bernard Ladkin, Bielefeld, Germany
www.rvs-bi.de