[SystemSafety] OpenSSL Bug
Todd Carpenter
todd.carpenter at adventiumlabs.com
Thu Apr 17 15:11:12 CEST 2014
Alas, right as I hit send, I realized I should have explained myself. Steve wrote that we should
"fix the process." In the US, "fix" is vernacular for modifying your pets (spay or neuter) so they
don't reproduce. I thought it would be amusing to insincerely suggest that would be an appropriate
approach to reduce incompetent practices for safety-critical systems, since it would remove certain
undesirable tendencies from the gene pool.
I humbly apologize to the list for attempting to introduce some levity to a serious discussion. I
will return to lurking.
-TC
On 4/17/2014 12:59 AM, Peter Bernard Ladkin wrote:
>
>> On 17 Apr 2014, at 00:52, Todd Carpenter <todd.carpenter at adventiumlabs.com> wrote:
>>
>> If we fix the person, then wouldn't part of the problem stop reproducing itself? :)
>>
>>> On 4/16/2014 5:32 PM, Steve Tockey wrote:
>>> Instead of blaming the person, how about we blame the process? And then take active steps to fix
>>> the process?
> Todd, Steve and Bertrand have elegantly recapitulated the various organisational reactions to a serious accident in one sentence each.
>
> The interchange highlights the trope that, in complex systems, cause and mitigation aren't necessarily well correlated. In a complex critical system with many distributed, as well as variously replicated, control functions, the computer where the fix is installed is not necessarily the computer in which the functional problem was exhibited. To internetworking specialists this is easy to explain: the spam filter sits on your email client, but the cause of the problem lies with, and behind, the spam-propagating machines. But it seems to be harder to explain to people involved in control-system forensics.
>
> The RISKS Forum Digest edition of 16 April 2014 has of course some comments from long-time contributors.
>
> Indeed, Martyn's contribution illustrates the above trope: a SW fix for spider infiltration (http://catless.ncl.ac.uk/Risks/27.84.html#subj1).
>
> Henry Baker is just as disgusted as I am about memory-insecure programming practice. He also cites someone who found another vulnerability: http://catless.ncl.ac.uk/Risks/27.84.html#subj3
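>
> To make the concern concrete, here is a minimal C sketch of the bug class behind Heartbleed (CVE-2014-0160). The record layout and function name are illustrative, not OpenSSL's actual code; the essential point is that an attacker-supplied length field is trusted without being checked against the number of bytes actually received.
>
>   /* Simplified sketch of the Heartbleed bug class, not OpenSSL's code. */
>   #include <stdlib.h>
>   #include <string.h>
>
>   /* Hypothetical record layout: 1-byte type, 2-byte length, payload. */
>   unsigned char *echo_heartbeat(const unsigned char *rec, size_t rec_len)
>   {
>       size_t payload_len = (rec[1] << 8) | rec[2]; /* attacker-controlled */
>       unsigned char *reply = malloc(3 + payload_len);
>       if (reply == NULL)
>           return NULL;
>       memcpy(reply, rec, 3); /* echo back type and claimed length */
>
>       /* BUG: no check that 3 + payload_len <= rec_len. A peer that
>        * claims 64 KiB but sends one byte makes this memcpy read ~64 KiB
>        * of adjacent heap memory (keys, session data) into the reply. */
>       memcpy(reply + 3, rec + 3, payload_len);
>
>       /* The fix is one bounds check before the copy:
>        * if (rec_len < 3 || payload_len > rec_len - 3)
>        *     { free(reply); return NULL; }  */
>       return reply;
>   }
>
> The actual OpenSSL fix amounts to the same check: silently discard any heartbeat record whose claimed payload length does not fit within the record actually received.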
>
> Jonathan Shapiro discusses memory "safety" in some depth (I wish people wouldn't call it "safety": either "reliability" or "security" would be the appropriate word, but getting people to use technical terms precisely in informatics appears to be just as easy as getting them to use memory-secure technology). He says that the "many eyes" theory of inspection concerning open-source SW has been profoundly discredited, but one might wish for citations to the literature. He also mentions an unreferenced Columbia PhD thesis which showed that independent programming teams working from the same specification made correlated errors. That exhibits yet another kind of memory fragmentation, namely that apparently one can obtain a PhD nowadays by recapitulating well-known fundamental work (Knight and Leveson's multiversion-programming experiments come to mind): http://catless.ncl.ac.uk/Risks/27.84.html#subj5
>
> Finally, the Cloudflare Challenge is worth knowing about, especially if you are one of those security "specialists" saying "we don't know that it's been exploited yet": http://catless.ncl.ac.uk/Risks/27.84.html#subj4
>
> PBL
>
> Prof. Peter Bernard Ladkin, University of Bielefeld and Causalis Limited
> _______________________________________________
> The System Safety Mailing List
> systemsafety at TechFak.Uni-Bielefeld.DE
>