[SystemSafety] Machine models under formal methods
Todd Carpenter
todd.carpenter at adventiumlabs.com
Fri Nov 10 15:39:39 CET 2017
Correct me if I'm wrong (I'm sure I am), but the "[SystemSafety]
Bursting the anti formal methods bubble" discussion seems to assume that
we have a clue about what's going on beneath our software. As a caveat,
I agree with the premise that if you can't even assure your software
using a simple machine model (instruction X copies values between
memory locations and registers, instruction Y performs ALU
operations on registers, and memory values don't change unless you use
instructions X or Y), then you can't hope to know what your software
does on a complex machine.
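To make the "simple machine model" concrete, here is a minimal sketch of it as a toy interpreter. The instruction names (load, store, add) are illustrative stand-ins for the hypothetical instructions X and Y above, not any real ISA; the point is that under this model, memory changes only through explicit stores.

```python
# A toy version of the "simple machine" model: memory changes only via
# explicit load/store (instruction X), and ALU ops (instruction Y) only
# touch registers. Names are illustrative, not any real ISA.

def run(program, memory, registers):
    """Execute a list of (op, *args) tuples against memory and registers."""
    for op, *args in program:
        if op == "load":            # X: memory -> register
            reg, addr = args
            registers[reg] = memory[addr]
        elif op == "store":         # X: register -> memory
            addr, reg = args
            memory[addr] = registers[reg]
        elif op == "add":           # Y: ALU operation on registers
            dst, a, b = args
            registers[dst] = registers[a] + registers[b]
        else:
            raise ValueError(f"unknown instruction: {op}")
    return memory, registers

# Under this model, memory[2] can only change because of the store below.
mem = {0: 3, 1: 4, 2: 0}
regs = {"r0": 0, "r1": 0, "r2": 0}
program = [
    ("load", "r0", 0),
    ("load", "r1", 1),
    ("add", "r2", "r0", "r1"),
    ("store", 2, "r2"),
]
mem, regs = run(program, mem, regs)
print(mem[2])  # 7
```

Assurance arguments against an instruction-set model implicitly assume the real machine behaves like this interpreter, which is exactly the assumption the rest of this note questions.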
Unfortunately, we no longer have simple machines. I've heard claims from
microkernel, RTOS, and secure OS vendors that they verify their code
(using a variety of means, including formal proofs, analysis, etc.)
against instruction set models. But those instruction sets are often
backed by microcode, which was not developed under formal methods.
Worse, on some processors, that microcode is mutable, and loaded at
run-time. Furthermore, there are all sorts of other things going on
under your simple ALU, including Turing-complete MMUs, IOMMUs, and DMA
engines.
If you want to be entertained by what the vendors are doing to you,
check out the following recent articles about Intel's Management Engine
back door that they baked into their chipsets. Sorry, it's not a back
door, it is a "management portal." Whatever. These portals are on
endpoint machines. They even put support for this capability in Atom
SoCs. These management engines can arbitrarily access memory. (Oh, wait,
Intel doesn't do evil. It's not arbitrary... Trust them.) Server class
machines have the equivalent "Baseboard management controllers" which
can also interact with memory. All the BMCs we've studied use old
versions of BusyBox, and are riddled with opportunities. Think about how
that impacts the argument that starts with, "assuming a solid
foundation, we can formally show that this program does X."
http://www.theregister.co.uk/2017/10/21/purism_cleanses_laptops_of_intel_management_engine/
https://puri.sm/posts/deep-dive-into-intel-me-disablement/
http://www.zdnet.com/article/minix-intels-hidden-in-chip-operating-system/
http://www.theregister.co.uk/2017/11/09/chipzilla_come_closer_closer_listen_dump_ime/
(My apologies to the academics: these are popular press articles rather
than peer reviewed articles. Several of the articles point to the real
thing.)
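The threat to the "assuming a solid foundation" argument can be sketched in a few lines. This is a toy picture only: a Python thread stands in for a management engine or DMA-capable BMC, i.e. an agent that rewrites memory below the software's model, outside instructions X and Y.

```python
# Toy illustration: software reasons about a value as if it were
# constant, while an out-of-band agent (standing in for a management
# engine or BMC with memory access) rewrites it underneath.
import threading

memory = {"secret": 42}
proceed = threading.Event()

def management_engine():
    proceed.wait()              # wait until the software has "checked"
    memory["secret"] = 1337     # then rewrite memory out of band

t = threading.Thread(target=management_engine)
t.start()

before = memory["secret"]   # 42: what the software's model sees
proceed.set()
t.join()
after = memory["secret"]    # 1337: what is actually in memory now
print(before, after)        # 42 1337
```

A proof that the program never writes "secret" is perfectly sound, and perfectly silent about what the other agent does.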
So now let's assume that we develop processors, MMUs, caches, and
inter-core networks that are well behaved. It's possible. What happens
when that machine suffers Single Event Errors?
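A one-line simulation shows why Single Event Errors fall outside a software-only proof. The invariant and the bit-flip helper below are hypothetical; the mechanism (one toggled bit in a stored word) is the standard model of a single-event upset.

```python
# Toy illustration: flip one bit of a value "at rest" and a property
# that held of the program's logic no longer holds of the hardware.

def flip_bit(value, bit):
    """Simulate a single-event upset: toggle one bit of a stored word."""
    return value ^ (1 << bit)

# Suppose we have "proven" this counter is always even, because the
# only code path that writes it adds 2.
counter = 40
assert counter % 2 == 0

# A particle strike flips bit 0 while the value sits in RAM.
counter = flip_bit(counter, 0)
print(counter % 2 == 0)  # False: the "proven" invariant is gone
```

No amount of reasoning about the instruction stream rules this out; it has to be addressed in the machine model (or by ECC, voting, and the like, which then also need modeling).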
Note that I am NOT arguing against formal methods in the slightest. I am
merely pointing out that unless someone has a good model of the machine,
formally verifying the software might not be sufficient.