[SystemSafety] Safety Cases
Michael Jackson
jacksonma at acm.org
Tue Feb 11 15:53:59 CET 2014
Peter and Peter:
[Peter B]
"Not right" does not necessarily = "Unsafe"
e.g. turning the ignition key might fail to start the engine but the
car is still safe (probably safer than if it did start!).
[MJ] Yes. This is an example of a 'positive' functional requirement
whose failure does not pose a safety risk. By contrast, failure of
the complementary functional requirement---turning the key the other
way stops the engine---does pose a safety risk, as some unintended
acceleration incidents show. This is an example of my first kind of
safety concern: unsafe because the intended behaviour 'went wrong'.
[Peter B]
Also "right" in the sense of compliance to positive requirements does
not guarantee safety if some positive safety requirement has been
omitted. e.g. large lorries turning left (driver's blind side( have
been killing quite a few cyclists in London recently - probably
because there is no requirement to be able to detect cyclists beside the lorry
[MJ] Yes. But surely compliance with positive requirements must mean
compliance with the stated and agreed positive requirements. If by
ignorance or neglect no positive requirement to detect cyclists has
been stated and agreed, then the case you mention is more like my
second type of safety concern: for the ignorant or neglectful
developers, blind-side cyclists are one of an unbounded set of
hazards outside the alphabet (or vocabulary) of the designed
behaviour---just like a tree falling on your car.
[PeterL]
(we probably prefer the word "vocabulary" to the word "alphabet",
because items of vocabulary have
meanings whereas items of an alphabet generally do not)
[MJ] I take your point about the abstract nature of alphabet
elements. The advantage of 'alphabet' is that it denotes a precisely
defined set of distinct elements.
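To make this concrete, here is a minimal sketch (in Python, with event
names invented purely for illustration) of the alphabet as a precisely
defined set, and of why the second kind of safety concern is invisible
to any analysis confined to that set:

    # The alphabet of the functional design: a finite, precisely
    # defined set of distinct elements. (These event names are
    # invented for illustration; they are not from the discussion.)
    ALPHABET = {
        "press_brake", "car_slows",
        "turn_key_on", "engine_starts",
        "turn_key_off", "engine_stops",
    }

    def outside_alphabet(observed_trace):
        """Return the observed events the functional design cannot
        even describe, let alone analyse."""
        return [e for e in observed_trace if e not in ALPHABET]

    # A hazard of the second kind: no analysis over ALPHABET can
    # identify it, because it is not an element of ALPHABET.
    trace = ["turn_key_on", "engine_starts", "tree_falls_on_car"]
    print(outside_alphabet(trace))   # -> ['tree_falls_on_car']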
[PeterL]
For example, when expressing the safety requirement of a level
crossing (grade crossing), one
doesn't need to express any general functional requirement of a
train, or a road vehicle, except
that they occupy space. The safety requirement is then that the spaces
they occupy must be disjoint. You don't even need to say, at this level, that a car
moves, or a train moves. But surely
something about enabling movement must be in, or derivable from, the
general functional requirements
of either.
[MJ] Yes. But there is a wide gap between such a requirement and the
design of the crossing system. The requirement is a desired property
of the system behaviour. The system includes the tracks and the given
population of rail and road vehicles, and its designed behaviour must
take proper account of the sizes and speeds of these vehicles. When
we have a proposed designed behaviour we can evaluate whether it has
the desired property. In the case of your requirement for disjoint
spaces this would count as a safety analysis; the case of enabling
movement demands exactly the same kind of analysis, but it would not
count as a safety analysis.
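What such an evaluation might look like can be sketched very simply (in
Python; the one-dimensional geometry and all coordinates are invented
for illustration, not taken from any real crossing design):

    # Each vehicle is reduced to the one property the requirement
    # mentions: it occupies space. Space here is a 1-D interval over
    # the crossing region, or None when the vehicle is clear of it.

    def disjoint(a, b):
        """True if two closed intervals (lo, hi) do not overlap;
        None means the vehicle is not in the crossing region."""
        if a is None or b is None:
            return True
        return a[1] < b[0] or b[1] < a[0]

    def has_desired_property(behaviour):
        """Evaluate a proposed designed behaviour: at every instant,
        the spaces occupied by train and road vehicle are disjoint."""
        return all(disjoint(train, car) for train, car in behaviour)

    # One entry per time step: (train interval, road vehicle interval).
    ok  = [(None, (45, 55)), (None, None), ((40, 60), None)]
    bad = [(None, (45, 55)), ((40, 60), (45, 55))]
    print(has_desired_property(ok))   # -> True
    print(has_desired_property(bad))  # -> False: both occupy the crossing

Note that exactly the same machinery would evaluate an enabling-movement
property; nothing in the check itself marks it as a safety analysis.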
-- Michael
At 12:26 11/02/2014, Peter Bernard Ladkin wrote:
>Michael,
>
>that sounds a lot like how one starts an Ontological Hazard
>Analysis. There is, though, a
>difference, as I see it, as follows.
>
>* You start with the overall functional requirements of the system,
>express them in some language
>(we probably prefer the word "vocabulary" to the word "alphabet",
>because items of vocabulary have
>meanings whereas items of an alphabet generally do not) and wish to
>derive safety requirements from that.
>
>* Whereas OHA starts with very-top-level formulations of safety
>requirements, not general requirements.
>
>For example, when expressing the safety requirement of a level
>crossing (grade crossing), one
>doesn't need to express any general functional requirement of a
>train, or a road vehicle, except
>that they occupy space. The safety requirement is then that the spaces
>they occupy must be disjoint. You don't even need to say, at this
>level, that a car
>moves, or a train moves. But surely
>something about enabling movement must be in, or derivable from, the
>general functional requirements
>of either.
>
>PBL
>
>On 2014-02-11 11:32 , Michael Jackson wrote:
> > A system has an intended functional behaviour satisfying a set of
> > 'positive' requirements: "When I press the footbrake the car slows
> > down," and "When the current flow is excessive the circuit breaker
> > trips." These are positive, just like "When I turn the steering wheel
> > the car turns" and "When the ignition switch is turned on the motor
> > starts." There is some (quite large) set of events, states, etc.
> > embodying this behaviour: let's call it the alphabet of the functional
> > design. When the car is properly designed, maintained, and operated,
> > it 'goes right' in the sense that an observer who observes only
> > elements of the alphabet will see that the functional behaviour is as
> > intended.
> >
> > The first kind of safety concern arises directly from some failure to
> > exhibit the intended functional behaviour: "I pressed the brake but
> > the car didn't slow down (so I ran into the car ahead)." "The current
> > flow exceeded the threshold but the circuit breaker didn't trip (so
> > the cable caught fire)." These safety concerns arise when "something
> > goes wrong": what goes wrong (but not, in general, the resulting
> > mishap) is fully expressible in the functional design alphabet. If a
> > serious accident results, the investigators determine what should
> > have "gone right" but in fact "went wrong". Knowing what constitutes
> > "going right" allows them to examine what "went wrong" and identify
> > the causes.
> >
> > The second kind of safety concern arises from circumstances
> > expressible only in a larger alphabet. The road collapses in front of
> > the car; a tree falls on the car; the car is rammed from behind and
> > the fuel tank explodes; the exhaust system is damaged by the impact
> > of a flying stone and poisonous fumes leak into the cabin; a child
> > left alone in the car contrives to start it and cause a crash. The
> > alphabet of such imaginable dangers is unbounded: the hazards cannot
> > be identified by examining the causal links on which the intended
> > functional behaviour relies.
>
>
>
>Prof. Peter Bernard Ladkin, Faculty of Technology, University of
>Bielefeld, 33594 Bielefeld, Germany
>Tel+msg +49 (0)521 880 7319 www.rvs.uni-bielefeld.de