[SystemSafety] OpenSSL Bug
Todd Carpenter
todd.carpenter at adventiumlabs.com
Fri Apr 11 17:14:02 CEST 2014
On 4/11/2014 9:38 AM, Mike Rothon wrote:
>
> 2) Is it implicit that FOSS is less secure than proprietary software because exploits can be found
> by both analysis and experimentation rather than just experimentation?
No. Security by obscurity is not security: the flaws remain in the system. Hiding
functionality behind a proprietary label merely increases the *time* before those flaws are
uncovered. Also, the assumption that "experimentation" is the only weapon to use against
proprietary code is quite incorrect. Decompilers and other analysis tools, including model-based
protocol checkers, are extremely advanced these days (really, you'd be amazed), and attackers
(a.k.a. researchers) are quite competent at applying them. Large commercial companies often
don't invest in these tools, except for reverse engineering for IP litigation purposes, so the
prevailing attitude in those companies is that their binaries are protected. The attack community
has no reason to expose this fallacy, so the attitude persists.
It is true that binary, obfuscated code _raises_ the cost and time of an attack, compared
to having the source code for that specific binary. However, you have to be careful with this
comparison; it is not a blanket statement. Source code that is well-trodden, with lots of
exposure to analytical tools and multiple sets of eyes on it, will have fewer (not zero,
as this case shows) low-hanging security and safety vulnerabilities. Source that is
developed in a vacuum, by incompetent developers under an incompetent management structure
that focuses on features and coding rather than design and analysis, will exhibit more flaws.
With good tools, it might very well be cheaper and quicker to exploit the lower-quality,
obfuscated product than the exposed source code.
So, the open vs proprietary argument is a red herring. The real issue is the opportunity for
exposure (whether analytical or otherwise) and working the flaws out.
> Or will this start a gold rush analysis of FOSS by security organisations resulting
> in security levels that are close to or better than proprietary software?
Nah. This isn't the first time serious persistent flaws have been found in software, whether it's
open source or not. It is merely another reminder that there is NO panacea. You can't get around
the fact that you need competent, well-trained engineers and scientists, appropriately resourced and
motivated by well-trained and informed management, using a *combination* of tools and methods to
engineer real systems for the real world. No single language, tool, or process will fix the
problem, and neither will outlawing stupidity. Some might improve things, and we should be happy
to take incremental steps in the right direction. Me, I like strongly typed languages, such as
Ada, for my safety-critical code. But I don't use Ada for my AI solvers. Pick the right tool
for the right problem.