[SystemSafety] State of the art for "safe Linux"
Paul Sherwood
paul.sherwood at codethink.co.uk
Mon Aug 5 19:55:27 CEST 2024
Hi Peter,
I hope you are well! Thank you for your feedback; please see my comments
inline.
On 2024-08-05 16:42, Prof. Dr. Peter Bernard Ladkin wrote:
> there seems to be a largish disconnect between the work you cite and
> any applications of safety-related software that I know of from my
> contacts in the 61508 standards community. I don't know specifically
> about the civil-aerospace applications as much as Dewi does.
I agree there is a disconnect, but that reflects the current state of
the published research on Linux and safety that I could find.
> If you (and colleagues) wish to use a given piece of software in a
> safety-critical application, I don't think you have any other option
> but to try to conform with applicable software functional safety
> standards, whether you like them or not.
I agree. We have been working with exida for some years on this,
primarily focusing on ISO 26262 because that is the standard which most
interests our customers. As I mentioned in my email, 26262 allows for
consideration of 'state-of-the-art' and so I'm hoping to establish what
is 'state-of-the-art' in research, as a precursor to looking at
state-of-the-art in practical software engineering.
> Any possible client must know that they will not be driven into
> bankruptcy if some system using this software fails and causes harm
> (which is always a possibility). That means you need some kind of
> assessment from recognised assessors such as TÜV Rheinland or TÜV Süd.
> Those assessors will write you a certificate concerning standards they
> are familiar with. A client can then use the software according to the
> conditions expressed in the certificate, and will be deemed by most
> courts (which is where claims of damages from harm end up) to have
> exercised what the Brits call due diligence by so doing.
Again, understood and agreed.
> If you want to change standards to accommodate another "vision", there
> is one and only one way of doing so. That is by joining a standards
> committee and influencing them to change the standard. That is harder
> than you may anticipate.
Please be assured I am aware of the scale of difficulty. Given my
advancing years I shall not be starting down that path.
> This business about "Linux kernel for safety-related systems" has been
> going on for so long. Other companies have written kernel-function OSs
> for safety-critical systems, and have assessment certificates for them
> from recognised assessors, all within that time. What's wrong with
> trying that route?
There is nothing particularly wrong with that route, except in some
cases companies wave assessment certificates while making claims that
are not actually supported by the documents.
For example:
- in practice, the version of the software named in the certificate may
not be deployable into an actual system without a lot of (uncertified)
drivers, modifications and effort
- the safety claims in the certificate may be narrower (and less useful)
than the claims made in sales pitches
Writing a kernel-function OS is not a route we will take, though: for
philosophical and engineering reasons we choose, as far as possible, to
reuse established open source software rather than reinvent it.
> Imagining you can use statistical assessment to validate the use of
> complex software on complex hardware in critical applications, is, I
> would suggest, a pipe dream. The maths on the amount of evidence you
> need, let alone the constraints on the quality of that evidence, is
> sufficient to pretty much rule it out.
That's an interesting comment, which makes me (and some of my American
colleagues) immediately think "Hold my beer!" :-)
We are working on it, and will be pleased to share the evidence, the
maths, and the constraints with interested parties soon.
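For readers unfamiliar with the scale Peter is alluding to, here is the
textbook back-of-envelope version of the argument (this is the standard
zero-failure reliability-demonstration calculation, not a preview of the
evidence we intend to publish; the target rate and confidence level below
are illustrative numbers of my own choosing):

```python
import math

def required_failure_free_hours(target_rate_per_hour, confidence):
    """Hours of failure-free operation needed to claim, at the given
    confidence level, that the true failure rate is below
    target_rate_per_hour, under a simple exponential (constant-rate)
    model with zero observed failures: T = -ln(1 - C) / lambda."""
    return -math.log(1.0 - confidence) / target_rate_per_hour

# Illustrative target: rate below 1e-9 per hour at 99% confidence.
hours = required_failure_free_hours(1e-9, 0.99)
print(f"{hours:.3g} failure-free hours (~{hours / 8766:.0f} years)")
```

For those illustrative numbers the answer is on the order of 10^9 hours,
i.e. hundreds of thousands of years of single-system operation, which is
why the argument is usually made with large fleets and careful caveats
about the quality and relevance of the operational evidence.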
TVM and best wishes
Paul