[SystemSafety] A General Theory of Love
Les Chambers
les at chambers.com.au
Tue May 16 13:46:41 CEST 2017
The twentieth-century concept of boundaries will not cut the mustard. Your descriptions are necessary but not sufficient.
AI pioneer Stuart Russell is working on the concept. See:
https://www.ted.com/talks/stuart_russell_how_ai_might_make_us_better_people
His ideas are interesting but, at this point, no help to a tester who's got to tick a pass/fail box on a test log.
The problem remains: what do you do with a machine that changes its behaviour after it leaves the development shop? How do you set the boundaries on its behaviour?
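The only partial answer I know of is to wrap the adaptive part in a fixed envelope that is set, reviewed and tested before release, and have a simple guard enforce it at runtime. A minimal sketch in Python follows; the names and limits are invented for illustration, not taken from any real system:

# Hedged sketch: a fixed, pre-certified envelope wrapped around an
# adaptive controller whose internals may change after deployment.
# All names and limits are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Envelope:
    min_cmd: float    # smallest allowed output
    max_cmd: float    # largest allowed output
    max_rate: float   # largest allowed change per control cycle

class GuardedController:
    def __init__(self, adaptive_controller, envelope):
        self.inner = adaptive_controller   # may learn and change behaviour
        self.envelope = envelope           # fixed at certification time
        self.last_cmd = 0.0

    def command(self, sensor_value):
        raw = self.inner.command(sensor_value)
        # Clamp to the certified range.
        cmd = max(self.envelope.min_cmd, min(self.envelope.max_cmd, raw))
        # Rate-limit against the previous output.
        step = self.envelope.max_rate
        if cmd > self.last_cmd + step:
            cmd = self.last_cmd + step
        elif cmd < self.last_cmd - step:
            cmd = self.last_cmd - step
        self.last_cmd = cmd
        return cmd

The tester then ticks pass/fail against the envelope, not against whatever the learning component happens to be doing this week.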
We struggle even with traditional systems. Airbus thought they had it under control and then this happened:
http://www.theherald.com.au/story/4659526/the-untold-story-of-qf72-what-happens-when-psycho-automation-leaves-pilots-powerless/
Captain Kevin Sullivan would like an answer.
-----Original Message-----
From: systemsafety [mailto:systemsafety-bounces at lists.techfak.uni-bielefeld.de] On Behalf Of Peter Bernard Ladkin
Sent: Saturday, May 13, 2017 3:56 PM
To: systemsafety at lists.techfak.uni-bielefeld.de
Subject: Re: [SystemSafety] A General Theory of Love
On 2017-05-12 22:39 , Les Chambers wrote:
> I think the concept of "boundaries" is losing its utility.
What nonsense! It's one of the essential notions in system building and supply.
You can't write a contract for supplying components of a critical system unless it's clear which bits you are responsible for and which bits not. Contracts explicitly set boundaries. All systems and components come with such a contract (if you buy something off the shelf in a store, the contract may be implicitly specified by giving the protocols to which the device is said to conform, such as "IEEE 802.11ac" or "USB 3.0").
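To put the same point in code: a boundary contract is just an explicit, checkable statement of what crosses the interface and in what form. A toy sketch (the sensor, units and limits are invented for illustration):

# Illustrative only: a supplier "contract" expressed as checks at the
# component boundary. The sensor, units and limits are invented.
class PressureSensorContract:
    UNITS = "kPa"
    MIN_VALID = 0.0
    MAX_VALID = 1000.0
    MAX_LATENCY_MS = 50.0

    @classmethod
    def check_reading(cls, value, latency_ms):
        # Everything outside these bounds is the supplier's problem;
        # everything done with a conforming reading is the integrator's.
        if not (cls.MIN_VALID <= value <= cls.MAX_VALID):
            raise ValueError("reading outside contracted range")
        if latency_ms > cls.MAX_LATENCY_MS:
            raise ValueError("reading older than contracted latency")
        return value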
In order to begin with any of the activities necessary to design and assess safety-critical systems, you need to identify what is to be "system" and what is to be "environment", to set the boundary. In order to define and encapsulate components/subsystems of systems, you need to do the same.
Modularisation and scoping in complex computer programs depend on explicitly setting boundaries - scoping is the name for how you do it. Such structured programming and modularisation are essential for software-based system assurance, as they have been for almost 50 years now.
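Scoping is the programming-language end of the same idea: the boundary of a module is the small set of names it deliberately exposes. A trivial Python sketch (the calibration constants are invented):

# Illustrative sketch: the module boundary is the exported interface;
# leading-underscore names are internal and out of scope for clients.
_CAL_GAIN = 0.1       # internal calibration constants (invented values)
_CAL_OFFSET = -5.0

def _linearise(raw):
    # Internal helper, deliberately not exported.
    return _CAL_GAIN * raw + _CAL_OFFSET

def engineering_value(raw):
    """The only name clients are entitled to rely on."""
    return _linearise(raw)

__all__ = ["engineering_value"]   # the explicit boundary declaration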
Some high-profile accidents where boundary-setting went wrong include:
* Ariane 501 (the values coming from a sensor were out of subsystem-nominal bounds, although they were exactly right for the planned and actual behaviour of the vehicle; a sketch of this failure mode follows this list)
* Three Mile Island (The indicator showing whether a PORV, an "electromatic relief valve", was open or shut in fact indicated the state of an electrical component: a solenoid which closed the circuit to actuate the electrics that operate the valve. Something which should have indicated a state external to the electrical control system, namely the position of the valve, was in fact indicating something internal to it. Perrow mentions this early on in Normal Accidents, pp21-2 of my first edition, and Michael Jackson on p164 of his Problem Frames book. The terms "external" and "internal" implicitly reference a subsystem boundary.)
* Fukushima (flood water descends under gravity into the basement, which is where the electrics were. Lochbaum and Perrow had explicitly mentioned this hazard up to two decades previously; it's in Perrow's 2004 book)
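The Ariane case in particular comes down to a value crossing a subsystem boundary outside the range that subsystem was qualified for. A deliberately simplified sketch of the kind of boundary check involved (the limit below is illustrative of a 16-bit signed conversion, not taken from the Ariane code):

# Deliberately simplified illustration of the Ariane-style failure mode:
# a value that is exactly right for the vehicle exceeds the range the
# subsystem was designed and tested for. The limit is illustrative only.
SUBSYSTEM_MAX_BIAS = 32767.0   # what fits in a 16-bit signed integer

def convert_bias(bias):
    if not (-SUBSYSTEM_MAX_BIAS <= bias <= SUBSYSTEM_MAX_BIAS):
        # The boundary assumption is violated even though the value is
        # correct for the actual trajectory.
        raise ValueError("bias outside the subsystem-nominal range")
    return int(bias)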
Martyn's report http://www.raeng.org.uk/publications/reports/global-navigation-space-systems notes situations in which people claimed their systems had no GPS dependence, yet those systems were rendered inoperable when GPS in the vicinity was jammed. Simply put, they got their system boundary wrong.
PBL
Prof. Peter Bernard Ladkin, Bielefeld, Germany
MoreInCommon
Je suis Charlie
Tel+msg +49 (0)521 880 7319  www.rvs-bi.de