[SystemSafety] One Admin Item; One article review
Peter Bernard Ladkin
ladkin at causalis.com
Thu Feb 2 14:59:54 CET 2023
1. The Admin item.
I spoke yesterday with my faculty sysadmins about the System Safety List. They will keep it running
until a suitable handover has been arranged. But I think in the meantime we should designate a point
of contact for the List and the faculty sysadmins to arrange a handover in case I'm run over by
a bus (nota bene - it won't be a bus; the bus drivers here are almost universally careful and abide
by the overtaking rules).
Also - and this is very important for all contributors to the List - you should please send
contributions from now on to the address
systemsafety at lists.techfak.uni-bielefeld.de
The "lists" prefix to the domain is key.
There is a new university regulation that emails may not be forwarded from a university email
address to an address outside the university (this for liability reasons under data protection
regulations). However, this regulation explicitly does not apply to faculty-approved mailing lists.
So if you send a contribution to a BI-Uni email address *without* the "lists" domain prefix, it
will stop being forwarded (at some unspecified time in the near future); but if you send it with
the "lists" prefix, it will be distributed to the List as expected.
2. The Director and an associate director of US CISA have just published an article on the state of
cybersecurity in the journal Foreign Affairs
https://www.foreignaffairs.com/united-states/stop-passing-buck-cybersecurity
If only because of who they are, this article is important. But it is also good.
There are two main issues they deal with. The key takeaway for me is that they are saying directly
that software vendors (presumably including vendors of software-based devices) must up their game.
Quite right. The majority of CVEs on the ICS-CERT WWW site talk about "specially crafted input
sequences", in other words slightly more sophisticated variants of the buffer-overflow
vulnerabilities of the 1990's. And we have known of general solutions for that since the late
1960's. To coin a phrase, let's call it "strong data typing".....
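For concreteness, here is a minimal sketch in C (hypothetical, not taken from any actual CVE) of the
vulnerability class those advisories keep describing, next to the bounds-checked alternative that the
"strong data typing" idea demands: state what an acceptable input is, and reject anything else before
it is copied anywhere.

    /* Hypothetical sketch, not from any real advisory: a fixed-size buffer
       filled from wire input without a length check, and the checked version. */
    #include <stdio.h>
    #include <string.h>

    #define TAG_LEN 16

    /* Vulnerable pattern: trusts the sender. A "specially crafted input"
       longer than TAG_LEN overwrites whatever lies beyond the buffer. */
    void handle_tag_unsafe(const char *wire_input) {
        char tag[TAG_LEN];
        strcpy(tag, wire_input);
        printf("tag: %s\n", tag);
    }

    /* Checked pattern: the acceptable input set is defined (at most
       TAG_LEN-1 bytes) and everything outside it is rejected up front. */
    int handle_tag_checked(const char *wire_input) {
        char tag[TAG_LEN];
        size_t n = strnlen(wire_input, TAG_LEN);
        if (n >= TAG_LEN)
            return -1;                        /* not an acceptable input: reject */
        memcpy(tag, wire_input, n + 1);       /* n+1 includes the terminating NUL */
        printf("tag: %s\n", tag);
        return 0;
    }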
I have been adding the word "enhanced" to that phrase for a few years, but the principle is the
same: define acceptable inputs and check that your program/device functions as desired on all
acceptable inputs. If you must have a "special sequence" that allows your device to be remotely
modified (say, a trigger sequence for a "debug/maintenance mode") then that is a vulnerability,
especially if no appropriate authentication process is required. (I leave it up to the reader to
determine whether it is worse or better if there are such sequences that you *don't* know about,
which is often the case.)
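To make the principle concrete, here is a minimal sketch in C (the command names are invented purely
for illustration): the entire set of acceptable commands is stated explicitly in one place, anything
outside that set is rejected before it reaches a handler, and a maintenance mode, if one is genuinely
needed, has to be a named command behind an authentication check rather than a hidden trigger sequence.

    /* Hypothetical sketch of "define acceptable inputs": the parser maps wire
       input onto a closed enumeration; anything else is rejected. */
    #include <stdio.h>
    #include <string.h>

    typedef enum { CMD_READ, CMD_WRITE, CMD_STATUS, CMD_INVALID } command_t;

    /* The whole acceptable input set, stated explicitly. */
    static const struct { const char *name; command_t cmd; } commands[] = {
        { "READ",   CMD_READ   },
        { "WRITE",  CMD_WRITE  },
        { "STATUS", CMD_STATUS },
    };

    static command_t parse_command(const char *input) {
        for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++)
            if (strcmp(input, commands[i].name) == 0)
                return commands[i].cmd;
        return CMD_INVALID;                   /* not in the defined set: reject */
    }

    int main(void) {
        printf("%d\n", parse_command("STATUS"));     /* accepted */
        printf("%d\n", parse_command("DEBUG_MODE")); /* rejected: CMD_INVALID */
        return 0;
    }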
A colleague who works for IABG (a south-German industrial-support firm that has been in
cybersecurity for ever and a day -- Marv Schaefer used to work with them in the 70's, I recall)
pointed me to an entry for a CompanyXXX Industrial Internet Router. "Specially crafted input" could
allow remote takeover. CompanyXXX's advice (included in the CVE) is: "this device should be operated
in an appropriate security environment" (or words to this effect; the echo from IEC 63069 may well
not be accidental). My colleague pointed out that this is one of the devices CompanyXXX was *selling
to help you establish* an appropriate security environment for whatever plant you wanted to use it
for. I guess "it's security environments all the way up" is a way of saying it's turtles all the way
down.
The second thing Easterly and Goldstein maintain is that "Cybersecurity should be CEO+Board
responsibility". I have heard this many times. Which means that there are still companies in which
cybersecurity isn't a Board-level responsibility. But even in those in which it is, I am not sure
how much this is supposed to help.
First, you mostly need regulation to make something which isn't a Board-level responsibility into one.
To get safety firmly entrenched as a Board-level responsibility, it was felt necessary in GB to
devise a crime of corporate manslaughter.
Second, suppose you try to do that with cybersecurity. With safety, when you get better at it, you
have fewer incidents. It is quite possible for one Board member to take responsibility for
supervising the safety incidents and suggesting changes.
But that is not going to happen with cybersecurity.
It is not going to happen because there are more and more and more incidents, not fewer and fewer.
In a largish company, there will be too many incidents to expect any one person to have an adequate
technical overview. Any "Board-level responsibility" needs to be accompanied by a compression of
such information which will enable one person (or two) to exercise adequate oversight (which means
identifying where things need fixing and ensuring those things are fixed). As far as I know, there
are few if any viable suggestions of how to perform such a compression.
So, for example, I imagine that NHS trusts knew that they were dependent on old vulnerable versions
of Windows for some of their critical networked services. I imagine that is so because lots of
people in and around them knew. We have at least one well-known colleague who has been telling them
that (and a lot of other things) for decades.
This situation left those trusts vulnerable to WannaCry, Petya and NotPetya. They knew, and they
could do little or nothing about it. What matters for exploitation is not whether key people know
about vulnerabilities, but whether known vulnerabilities are present. If no one knows how to resolve
that situation, then you're in the soup, no matter who knows about it or how seriously they take
cybersecurity.
I recall the UK MoD a couple of decades ago trying to introduce a process by means of which a
specific engineer would be determined to be "responsible" for the operation of any critical system
or subsystem; the idea being to make such systems more reliable by giving each a "keeper", as it
were. It rapidly became apparent that you couldn't persuade any half-awake engineer to take on such
a job. A plane goes down; once a system fault is suspected, one of those "responsible engineers" is
going to end up in jail. Who would possibly volunteer to put themselves in such a situation? What's
the professional insurance premium going to cost?
PBL
Prof. i.R. Dr. Peter Bernard Ladkin, Bielefeld, Germany
Tel+msg +49 (0)521 880 7319 www.rvs-bi.de