[SystemSafety] Autonomously Driven Car Kills Pedestrian

Steve Tockey Steve.Tockey at construx.com
Thu Mar 29 18:48:36 CEST 2018


John,
I wish there were an easy answer to the questions in your last paragraph. In the end, I think it will take many things working together, such as:

*) Holding companies, and in some cases the developers themselves, liable for damages caused by any defects they deliver. Unless there is some incentive beyond what we called “technical conscience” when I was at Boeing (one does things that way for no reason other than that it is the right way to do it, not because some external entity requires it), there is no reason to expect things will ever change.

*) The Software Engineering Institute at CMU used to talk in terms of “People, process, and tools”. The idea was that you can either:
--impose requirements on the people: that is essentially the approach that SWEBOK and PE (i.e., Chartered) engineering take, or
--impose requirements on the process: that is the approach DO-178 and ISO 26262 take, or
--impose the use of certain tools.
I don’t think imposed tool use is effective; the old adage “A fool with a tool is still a fool” comes to mind. Maybe a combination of people and process requirements?

*) Identifying classes of software product based on the potential for harm caused by defects. We kind of have that implicitly already; this would make it explicit.
--Mission critical & safety critical could be one category
--Software that deals with finance might be another category (possibly broken into high- vs. low-value?)
--Software of minor consequence might be another; I mean things like Facebook, Twitter, Instagram, LinkedIn, etc.
--The last category might be personal, hobby, or educational and could be considered “of no consequence”.

Anyone should be able to write and deliver software in the “no consequence” category. I could suggest the same for the “minor consequence” category, but since explicit liability for defects is being imposed, developers or their employers would have an incentive to establish their own minimal requirements, possibly based on some kind of certification. The “finance” category might need the developers to possess something along the lines of a legitimate software engineering degree or an equivalent. Work on mission / safety critical software should be like it already is for licensed Professional Engineers / Chartered Engineers: not that everyone would need licensing / chartering, but at least one person would, and that person would take on personal liability for the software.


I am generally in favor of the kinds of things required in DO-178C. My comment here is that, of the 66 objectives for the most critical systems (“Level A”), the vast majority are actually just sound software engineering practice. MC/DC test coverage is only needed for highly critical code. The two objectives about FAA / JAA liaison (PSAC, …) are unique to avionics development. As well, the “with independence” aspects don’t always apply. But, as I said, other than those I don’t see why essentially all projects don’t do those kinds of things all the time.
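
(For anyone not familiar with MC/DC, here is a minimal sketch of what it demands for a single decision. This is my own toy illustration, not anything taken from DO-178C or a real avionics code base, and the function and condition names are invented.)

#include <stdbool.h>
#include <stdio.h>

/* One decision with three conditions (names are hypothetical). */
static bool deploy_flaps(bool cmd_valid, bool speed_ok, bool manual_override)
{
    return (cmd_valid && speed_ok) || manual_override;
}

int main(void)
{
    /* MC/DC requires showing that each condition independently affects the
     * decision outcome. For three conditions, four tests suffice:
     *   T1 (1,1,0) -> true     T1 vs T2: toggling cmd_valid alone flips the outcome
     *   T2 (0,1,0) -> false
     *   T3 (1,0,0) -> false    T1 vs T3: toggling speed_ok alone flips it
     *   T4 (1,0,1) -> true     T3 vs T4: toggling manual_override alone flips it
     * Plain decision (branch) coverage would be satisfied by T1 and T2 alone. */
    printf("T1=%d T2=%d T3=%d T4=%d\n",
           deploy_flaps(true,  true,  false),   /* expect 1 */
           deploy_flaps(false, true,  false),   /* expect 0 */
           deploy_flaps(true,  false, false),   /* expect 0 */
           deploy_flaps(true,  false, true));   /* expect 1 */
    return 0;
}

The point is only that MC/DC needs on the order of N+1 tests for a decision with N conditions, with each pair of tests demonstrating that one condition by itself can change the outcome.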

I generally agree with the content in SWEBOK v3 although I personally think they kinda botched the Economics knowledge area.

I would take it even one step further: I would also expect projects in the safety / mission critical and financial categories to use “model-based requirements” as I present them in my new book. It’s not an accident that the book is titled “How to Engineer Software”.


Cheers,

— steve




From: John Howard <jhoward at roboticresearch.com>
Organization: Robotic Research LLC
Date: Wednesday, March 28, 2018 at 9:15 AM
To: Steve Tockey <Steve.Tockey at construx.com>, "systemsafety at lists.techfak.uni-bielefeld.de" <systemsafety at lists.techfak.uni-bielefeld.de>
Subject: Re: [SystemSafety] Autonomously Driven Car Kills Pedestrian


Thanks for clarifying, Steve.  Perhaps my perception is different because of the industry I am most closely involved with (defense).  My only insight into the rest of the industry was the Waymo report from last year, which I thought outlined a reasonable approach given the lack of any other software safety standards specific to the automotive industry.  (They at least claim to use ISO 26262 and MIL-STD-882E as the basis for their own System Safety Program.)  {On The Road to Fully Self-Driving: Waymo Safety Report, Oct. 2017}

In regard to our own processes, I obviously can't share details, but the governing safety standard is MIL-STD-882E.  It is not sufficient on its own, so we are also developing internal processes which borrow heavily from ISO 26262 and use MBSE (SysML with MagicDraw).  In addition, the systems we develop are evaluated by the Army Test and Evaluation Command (ATEC), which gives every system a safety rating related to hazard risk.  I need to be careful what I say here since I am fairly certain that some ATEC folks are also on this list. ;-)

That said, I am curious how you think this should be done.  Should the entire industry wait for someone to develop a process similar to DO-178 for the automotive industry?  Should software developers be prohibited from producing safety critical software without some kind of certification?  I am very serious about these questions.  While I am skeptical that the industry is quite as bad as you make it seem, I can certainly agree it isn't what it should be.  I just don't know what the answer is, and am eager to learn from others to improve our own internal processes in the meantime.

--
John Howard, Sr. Systems Engineer
Robotic Research, LLC
22300 Comsat Dr.
Clarksburg, MD 20871
Office: 240-631-0008 x274


On 3/27/2018 5:03 PM, Steve Tockey wrote:

John,
You can interpret my email as a scathing indictment of the mainstream software industry as a whole. I am already on record as saying,

“We are an industry of highly paid amateurs”.

The company I work for, Construx, interacts with hundreds of “professional” software development teams each and every year. Having met in person the people inside these companies (both high-profile companies and not), it is clear to me that the vast majority of the people & projects I have met with are square in the middle of “highly paid amateur” syndrome. That includes the likes of, for example, Google, Facebook, Amazon, Microsoft, Alibaba, and so on. Given how pervasive corporate cultures are, I would be shocked and amazed if the software developers on Google’s self-driving car team were in any way different from those in the rest of that company. I could be wrong, but I don’t think there is a very high probability of that.

To hear that your organization is different is very good news. I am very happy to hear that there is at least one organization in the self-driving car software space that is actually taking things seriously. I would also exclude from “highly paid amateur” syndrome, generally speaking, avionics vendors (Honeywell, Rockwell Collins, . . .) and to a slightly lesser extent the medical device companies because of the externally imposed standards.

That being said, are you willing to provide any more detail about how your organization is doing things differently from the mainstream software industry? For example:

*) Are you following any specific software safety standards, ISO 26262, DO-178C, . . .? If so, which one(s)?

*) Have you adapted your software development processes / procedures around any of those standards? If so, are you willing to share how? Specifically, I mean that DO-178C requires that teams produce written requirements and design documentation but it is left up to the organization to determine what that documentation looks like. Might you be willing to share what your document templates look like?

*) How are you determining that any given software project is actually complying with the applicable standards / processes / procedures? Are there independent audits of any kind?

*) Do you have any personnel qualifications for the developers? For example, do you hire, train, and promote developers around a set of qualifications derived from the Software Engineering Body of Knowledge (SWEBOK)?

*) How is someone with, shall I say, legitimate “engineering” experience and expertise involved in the software development activities? Are said people actually licensed / chartered engineers? If so, what engineering disciplines are they licensed / chartered in?

*) Do the engineering teams determine realistic project schedules based on given project scope (or, alternatively, determine realistic scope based on given project schedules)? Or, are project scopes and schedules imposed by external stakeholders?


— steve




From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> on behalf of John Howard <jhoward at roboticresearch.com>
Organization: Robotic Research LLC
Date: Monday, March 26, 2018 at 7:35 AM
To: "systemsafety at lists.techfak.uni-bielefeld.de<mailto:systemsafety at lists.techfak.uni-bielefeld.de>" <systemsafety at lists.techfak.uni-bielefeld.de<mailto:systemsafety at lists.techfak.uni-bielefeld.de>>
Subject: Re: [SystemSafety] Autonomously Driven Car Kills Pedestrian


Perhaps I am misunderstanding.  Is this your perception of the self-driving car industry?  If so, on what basis?

I cannot speak for other companies, but I can assure you that none of these statements apply to any of the autonomous vehicle projects I am involved with.

--
John Howard, Sr. Systems Engineer
Robotic Research, LLC
22300 Comsat Dr.
Clarksburg, MD 20871
Office: 240-631-0008 x274


On 3/25/2018 1:39 PM, Steve Tockey wrote:

FWIW, I found some interesting quotes in the following article:

http://time.com/5213690/verruckt-slide-death-schlitterbahn/


Could these be applied in cases involving self-driving cars?

“was never properly or fully designed”

“rushed it into use and had no technical or engineering expertise”

“complied with “few, if any” longstanding safety standards”

“the . . . death and the rapidly growing list of injuries were foreseeable and expected outcomes”

“desire to “rush the project” and . . . designer’s lack of expertise caused them to “skip fundamental steps in the design process.””

“not a single engineer was directly involved in . . . engineering or . . . design”


It seems as though all of these statements would apply equally to any case involving self-driving cars.


Cheers,

— steve




From: systemsafety <systemsafety-bounces at lists.techfak.uni-bielefeld.de> on behalf of Matthew Squair <mattsquair at gmail.com>
Date: Friday, March 23, 2018 at 5:03 PM
To: Peter Bernard Ladkin <ladkin at causalis.com>, "systemsafety at lists.techfak.uni-bielefeld.de" <systemsafety at lists.techfak.uni-bielefeld.de>
Subject: Re: [SystemSafety] Autonomously Driven Car Kills Pedestrian

I think Uber will come unglued in civil court.

If, say, the driver is legally deemed by the court not to be in direct control but ‘supervising’, then Uber is still liable for devising a method of supervision of an unsafe device that demonstrably doesn’t work, and it could be argued they could reasonably have known this in the circumstances*. If instead the argument turns on the driver being solely the culpable agent, then since the driver is also an Uber employee/contractor, Uber is still responsible for their actions. So, whichever way it turns, Uber will carry the can, at least in a civil prosecution, which is where this will get thrashed out, I’d guess.

‘Move fast and break things’ indeed…

*As the conversation on this thread would indicate.


On 24 March 2018 at 4:16:49 am, Peter Bernard Ladkin (ladkin at causalis.com) wrote:


On 2018-03-23 17:40, Michael Jackson wrote:
>
> So the responsibility in overseeing autonomous driving is worse than that of an old-fashioned
> driving instructor in a dual-control car, teaching an untrusted learner—you can’t even order
> the software to slow down: in short, it is far more demanding and stressful than driving the
> car yourself.
Spot on, as usual.

Woods and Sarter, in their seminal study of pilots using A320 automation, found it was worse than
that. When the situation got odd, rather than cutting out the automation and taking control ("first,
fly the airplane"), they found the crew inclined to try to debug the automation.

PBL

Prof. Peter Bernard Ladkin, Bielefeld, Germany
MoreInCommon
Je suis Charlie
Tel+msg +49 (0)521 880 7319 www.rvs-bi.de





_______________________________________________
The System Safety Mailing List
systemsafety at TechFak.Uni-Bielefeld.DE






