[SystemSafety] "Security Risk" and Probability
Todd Carpenter
todd.carpenter at adventiumlabs.com
Thu Nov 2 21:38:55 CET 2017
> What seems a little harder to construct is an example of
> software which is safe (that is, implements a safety function
> per requirement) but may be made to violate
> availability. Anyone?
Sure. Back to that medical device example. Let's say for the sake of
argument that the device is an analgesic infusion pump. The device can
be safe - it won't overdose the patient under any circumstances. An
attacker takes over the radio and uses it to mount a DDoS attack on the
network. That's an availability attack on the network, or on other devices
attached to it. You might argue, "No, that's an attack on the availability
of other things, not of the pump."
OK, let's assume the radio is used for reporting drug usage to the
billing group at the hospital. The attackers consume so much of the radio's
capacity that the pump can no longer report back to the hospital, forcing a
manual intervention for billing, even though the device still provides safe
pain relief to the patient. That's an availability attack on desired
functionality of the pump itself.
Now let's assume a more complicated device which needs a drug library
update. Again, the radio is busy, so the pump won't get the update.
Hospital procedure is not to use the pump if the device's internal library
isn't up to date. The device remains perfectly safe (by the medical
definition of safe, perhaps not yours), but useless. That's an availability
attack in the extreme.
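To make that concrete, here is a minimal Python sketch of the lockout
logic (the names, limits and policy values are hypothetical, purely for
illustration): the hard dose limit is enforced no matter what, but
hospital policy takes the pump out of service once the drug library can't
be refreshed over the jammed radio.

# Hypothetical illustration only: the pump stays safe but becomes unusable.
from dataclasses import dataclass
from datetime import datetime, timedelta

LIBRARY_MAX_AGE = timedelta(days=30)   # assumed hospital policy
HARD_DOSE_LIMIT_ML_PER_HR = 5.0        # assumed hard safety limit, always enforced

@dataclass
class Pump:
    library_updated: datetime
    radio_available: bool              # False while the attack saturates the radio

    def try_update_library(self) -> bool:
        """The update succeeds only if the radio can reach the hospital."""
        if not self.radio_available:
            return False               # availability lost: no update possible
        self.library_updated = datetime.now()
        return True

    def may_be_used(self) -> bool:
        """Hospital procedure: do not use a pump with a stale drug library."""
        return datetime.now() - self.library_updated < LIBRARY_MAX_AGE

    def commanded_rate(self, requested_ml_per_hr: float) -> float:
        """Safety function: never exceed the hard dose limit, attack or no attack."""
        return min(requested_ml_per_hr, HARD_DOSE_LIMIT_ML_PER_HR)

pump = Pump(library_updated=datetime.now() - timedelta(days=45), radio_available=False)
pump.try_update_library()              # fails: the radio is jammed
print(pump.may_be_used())              # False -> perfectly safe, but useless
print(pump.commanded_rate(12.0))       # 5.0  -> still won't overdose the patient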
On 11/2/2017 2:53 PM, Peter Bernard Ladkin wrote:
>
> On 2017-11-02 19:54 , Martyn Thomas wrote:
>> How can software be safe and not secure? Only by having no safety function?
> Kevin got to the list first.
>
> Here is my example. Let S be a perfect, invulnerable safety function. Let "input" be a variable
> which is set when any key is pressed on an input device physically separate from anything to do with
> S. Let "communicate-public" be a routine which prints given data D on, say, a public WWW site. Let
> "communicate-private" be a routine which prints given data D on a device only accessible by those
> with access permission for data D.
>
> Let us consider "security" as involving the CIA triad.
>
> (S || loop if input then communicate-public(D) endloop) is a specification for software which is
> safe and insecure in that it violates confidentiality.
>
> Suppose now that (loop if input then communicate-private(D) endloop) is vulnerable to modification
> which turns it into software satisfying (loop if input then communicate-public(D) endloop), and only
> vulnerable to this. Then (S || loop if input then communicate-private(D) endloop) is safe, but may
> lose integrity and confidentiality.
>
> These are not necessarily practical examples. Why would you write software to perform (S || loop if
> input then communicate-public(D) endloop) when you could modularise it into S and (loop if input
> then communicate-public(D) endloop)? You might accidentally write such software, and be then rightly
> condemned for poor software design. But you could.
>
> What seems a little harder to construct is an example of software which is safe (that is, implements
> a safety function per requirement) but may be made to violate availability. Anyone?
>
> PBL
>
> Prof. Peter Bernard Ladkin, Bielefeld, Germany
> MoreInCommon
> Je suis Charlie
> Tel+msg +49 (0)521 880 7319 www.rvs-bi.de
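For concreteness, here is a minimal Python sketch of the
(S || loop if input then communicate-public(D) endloop) construction
quoted above, using threads to stand in for the parallel composition. All
names and stubs are hypothetical; this illustrates the idea, not anyone's
actual implementation.

# Hypothetical sketch: S runs in parallel with a loop that, on each input
# event, publishes the confidential data D - safe, but not confidential.
import threading, queue, time

D = "confidential data"                 # the protected data D
input_events = queue.Queue()            # 'input' is set by key presses elsewhere

def S():
    """Stand-in for the perfect, invulnerable safety function."""
    while True:
        time.sleep(1)                   # carry on performing the safety function

def communicate_public(data):
    print(f"[public WWW site] {data}")  # readable by anyone

def communicate_private(data):
    print(f"[restricted device] {data}")  # readable only with permission for D

def leak_loop():
    """loop: if input then communicate-public(D) endloop"""
    while True:
        input_events.get()              # wait for a key press on the separate device
        communicate_public(D)           # S is untouched, so safety holds; confidentiality does not

# (S || loop ...): run both in parallel. Swapping communicate_public for
# communicate_private gives the variant that stays secure until modified.
threading.Thread(target=S, daemon=True).start()
threading.Thread(target=leak_loop, daemon=True).start()
input_events.put("keypress")            # simulate the key press
time.sleep(0.5)                         # let the leak happen before the script exits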