[SystemSafety] COTS display certification
Peter Bernard Ladkin
ladkin at rvs.uni-bielefeld.de
Wed Jul 27 06:03:34 CEST 2016
On 2016-07-27 02:24, Matthew Squair wrote:
> I've always been a bit fuzzy about the next step though.
>
> If we decide that software will implement this function then the software development processes must
> satisfy the allocated integrity level.
Category mistake again. *Safety functions* get a SIL in IEC 61508. Not software development processes.
Here is the IEC 61508 conception, in a nutshell (a toy sketch in code follows the list).
* Safety functions are functions supplementary to the basic working of the Equipment Under Control
(EUC).
* The definition and implementation of a safety function are mandated whenever there is an
unacceptable risk of a hazard/outcome (whatever "acceptable" means, which is specifically undefined).
* The purpose of a safety function is to bring the identified unacceptable risk to an acceptable level.
* SILs are assigned to safety functions.
* Safety requirements are SILs.
* An item (SW, HW, HW+SW) is governed by a SIL if it implements a safety function or part of a
safety function. The SIL it is assigned is the SIL of the safety function.
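For concreteness, here is a toy sketch of those relationships (my own illustration in Python, not
anything drawn from the text of the standard; the class and attribute names are invented). It encodes
just the last bullet: an item inherits the SIL of the safety function it implements, rather than
carrying one of its own.

    from dataclasses import dataclass
    from enum import IntEnum

    class SIL(IntEnum):
        # The four safety integrity levels of IEC 61508.
        SIL1 = 1
        SIL2 = 2
        SIL3 = 3
        SIL4 = 4

    @dataclass
    class SafetyFunction:
        # A function supplementary to the basic working of the EUC.
        # The SIL is assigned here, to the safety function itself.
        name: str
        sil: SIL

    @dataclass
    class Item:
        # An item (SW, HW, or HW+SW) implementing a safety function or
        # part of one. It carries no SIL of its own.
        name: str
        implements: SafetyFunction

        @property
        def governing_sil(self) -> SIL:
            # The SIL governing the item is that of the safety function
            # it (partly) implements.
            return self.implements.sil

    # Example: a trip function assigned SIL 2; a software item and a
    # hardware item implementing it are both governed by that same SIL.
    trip = SafetyFunction("overpressure trip", SIL.SIL2)
    sw = Item("trip logic software", implements=trip)
    hw = Item("shutdown valve driver", implements=trip)
    assert sw.governing_sil == hw.governing_sil == SIL.SIL2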
You are of course welcome to use any of these terms in a different fashion, but I would ask that you
quote the source, if you do. It turns out that, after twenty years, none of these key terms are yet
in the IEV (= IEC 60050).
> .... I implement a function in software as a mix of data and code.
> ...... Logically any integrity processes need to cover both the
> code and data 'parts' because it's both that are satisfying the functional requirement.
And - whatever you may mean by "integrity processes" - they do. As follows.
Suppose system S has safety requirement SR(S). Suppose P1, P2, P3, ... are parts of S (whatever
"parts" are - let's use one of Peter Simons' conceptions:
https://global.oup.com/academic/product/parts-a-study-in-ontology-9780199241460 ). Then I say SR(S)
"covers" P1, P2, P3, ...
A safety function SF has safety requirement (SIL) X. Let I(SF) be an implementation of SF, that is,
the functional behaviour of I(SF) is close to that of SF (the meaning of "close to" here is given
precisely by X). Let P1 be the "data" part of I(SF) and P2 the "code" part (if there is any good way
of distinguishing such parts). Then X covers P1 and P2 by the above definition.
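Rendered in the same toy Python vocabulary (again my own sketch, with invented names, and with X
treated simply as the safety requirement carried over to I(SF)), "covers" is just parthood:

    from dataclasses import dataclass
    from typing import FrozenSet

    @dataclass(frozen=True)
    class Part:
        name: str

    @dataclass(frozen=True)
    class System:
        name: str
        parts: FrozenSet[Part]

    @dataclass(frozen=True)
    class SafetyRequirement:
        on: System               # SR(S) is a requirement on the system S

    def covers(sr: SafetyRequirement, p: Part) -> bool:
        # SR(S) "covers" P exactly when P is a part of S.
        return p in sr.on.parts

    # I(SF) with a "data" part P1 and a "code" part P2; X is the safety
    # requirement governing I(SF). Then X covers P1 and X covers P2.
    P1, P2 = Part("data"), Part("code")
    I_SF = System("I(SF)", frozenset({P1, P2}))
    X = SafetyRequirement(on=I_SF)
    assert covers(X, P1) and covers(X, P2)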
I think this is all utterly banal.
> But I can use varying ratios of the two.
Which rather suggests that there is unlikely to be a clear way to apportion an overall safety
(integrity) requirement between P1 and P2 in I(SF) based on SF alone.
> But how does one argue equivalence between integrity processes for code and data? At the aggregate
> level surely you must be making that argument?
I don't know what "integrity processes" are. When you tell me, then I'll be able to answer the
question whether I am "making [a specific] argument".
> And that such integrity is independent of potentially
> varying ratios of the two in any solution?
If "solution" is an I(SF) then there is just one P1 and one P2, given by whatever definitions you
like to use for "code" and "data". That they "potentially vary" suggests you are thinking that in a
different "solution" I'(SF) there will be different parts P1' for code and P2' for data. Obviously
the integrity requirement is independent of P1/P2, P1'/P2' because it is X and X is assigned to SF,
not to I(SF) or I'(SF).
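In the same spirit, a self-contained sketch of that independence (names invented; the code/data
ratio is made an explicit parameter only to show that it is ignored):

    SIL_OF_SF = {"SF": 2}        # X is assigned to the safety function SF

    def governing_sil(safety_function: str, code_fraction: float) -> int:
        # The SIL governing any implementation of safety_function.
        # code_fraction (the code/data split of a particular I(SF)) plays
        # no role at all in determining X.
        return SIL_OF_SF[safety_function]

    # Two "solutions" with very different code/data ratios are governed
    # by the same X:
    assert governing_sil("SF", code_fraction=0.9) == governing_sil("SF", code_fraction=0.1) == 2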
PBL
Prof. Peter Bernard Ladkin, Bielefeld, Germany
MoreInCommon
Je suis Charlie
Tel+msg +49 (0)521 880 7319 www.rvs-bi.de