Saturday, October 18, 2008

An Interesting Blog on Software Security (for a Change)

Either because I was never really all that good at it, or because the only reason I liked trying to break things was to learn about what I was trying to break, it is safe to say I don't spend as much time thinking about software security as I used to. But I did find the bugs vs. flaws entry on toasa sort of interesting, and not just because it got me curious about what mjr's "bad science" was (a question that might liven up a team meeting next week). For many years, in various presentations I've been giving (including the intro to the Nessus course I currently teach), I've been proclaiming the existence of three distinct types of vulnerabilities: design, implementation, and misconfiguration.

Of course I caveat this proclamation by noting that there are probably a hundred different vulnerability taxonomies out there and that this is an obvious oversimplification. I think the oversimplification originated in the introductory prezos I used to give to Cisco product teams, many of which weren't so security clueful around 2000. It was a simple, high-level conception that also corresponded to phases of the software development cycle. Sort of, because there was the weird case where testing actually discovered design flaws in applications, which should not really happen, but did.

But something about this admittedly oversimplified scheme has bugged me for a while (e.g., if someone has uncovered a fundamental design flaw in DNS or TCP, why does everybody have to fix their implementations?), and I think John McDonald articulates some of the difficulties in maintaining (and even questions the value of) such a scheme.

Most compelling, to me, is the argument that the [thought] process for discovery of many classes of vulnerabilities is essentially the same.
I’ve audited for both classes of issues and everything in between. One thing I’ve observed is that the thought process is very similar. You have a system, which has data-flow and control-flow, which turns into an algorithmic system of logic. You have to brainstorm pathological ideas and trace them through the system. Or, you observe potentially problematic elements or nuances in the system and try to trace in both directions to see if you can leverage them to do something "unusual."

When it’s most fun is when you observe multiple atomic actions you can perform in the system, both legitimate and some born of mistake (like a subtle logic oversight). You then use those actions to form a system of logic of your own and in essence create your own "evil" language. You try to find some way of achieving an end by stringing all these atomic actions together programmatically. If you’ve spent a lot of time breaking systems, you probably know what I mean. If not, I assure you that I’m not just making up words to sound cool. (Yeah, this is what I think cool sounds like. The ladies love it.)

Comparing auditing assembly to auditing C is another good example. These are essentially similar tasks but performed at a different layer of abstraction. There’s myriad technical differences in what you do and how you do it, but the actual thought processes are pretty much the same.


So our presentation on BGP Vulnerabilities back in 2003 reflects this simplified view of the problem space, as well as the challenges of maintaining such a framework.

For example, the bugs discovered [through fuzzing] in the IOS and gated BGP implementations (failure to properly validate BGP lengths, failure to handle truncated BGP OPENs, and whatever else I can't remember) are clearly implementation vulns (or bugs). No arguing that. But the differing responses of BGP implementations to SYNs or BGP OPENs sort of explodes the division.
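I don't have the gated or IOS source in front of me, but a minimal Python sketch of the kind of length checks whose absence that fuzzing exposed might look like the following. The constants come from RFC 1771; the function name, error messages, and return value are purely illustrative, not anyone's actual code.

    import struct

    # Constants from RFC 1771
    BGP_HEADER_LEN = 19          # 16-byte marker + 2-byte length + 1-byte type
    BGP_MAX_MESSAGE_LEN = 4096
    BGP_MIN_OPEN_LEN = 29        # header + version, AS, hold time, identifier, opt param len
    BGP_TYPE_OPEN = 1

    def parse_bgp_open(data: bytes):
        """Parse a BGP OPEN, rejecting the malformed lengths a fuzzer would send."""
        if len(data) < BGP_HEADER_LEN:
            raise ValueError("truncated BGP header")

        marker = data[:16]  # all-ones for an unauthenticated OPEN (not checked here)
        length, msg_type = struct.unpack("!HB", data[16:19])

        # The checks a vulnerable implementation skips: the advertised length
        # must fit protocol limits *and* match the bytes actually received.
        if not (BGP_HEADER_LEN <= length <= BGP_MAX_MESSAGE_LEN):
            raise ValueError(f"illegal length field: {length}")
        if length > len(data):
            raise ValueError("message truncated relative to its length field")

        if msg_type == BGP_TYPE_OPEN:
            if length < BGP_MIN_OPEN_LEN:
                raise ValueError("OPEN shorter than the 29-byte minimum")
            version, my_as, hold_time, bgp_id, opt_len = struct.unpack(
                "!BHHIB", data[19:29])
            if BGP_HEADER_LEN + 10 + opt_len > length:
                raise ValueError("optional parameters overrun the message")
            return {"version": version, "as": my_as, "hold_time": hold_time,
                    "bgp_id": bgp_id, "opt_param_len": opt_len}
        return {"type": msg_type, "length": length}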

If we look at the behavior of the IOS BGP implementation, which refused to acknowledge the connection (yes, at the TCP layer) if the source IP was not a valid peer, is that a design strength (or some other term, meaning the opposite of a vulnerability), given that Juniper and some of the others would answer your SYNs and let you identify BGP-listening routers? These behaviors are definitely out of scope of RFC 1771, but does that make them implementation issues? Quite literally, yes. But on the other hand...
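For concreteness, here is a rough sketch of that sort of probe, using an ordinary TCP connect (which sends the SYN for you) rather than crafted packets. The addresses are placeholders, and the classification is only suggestive, since a timeout can also just mean the host is down or filtered.

    import socket

    BGP_PORT = 179

    def probe_bgp_listener(host: str, timeout: float = 3.0) -> str:
        """Attempt a TCP connection to port 179 and classify the response.

        A completed handshake means the stack ACKed our SYN even though we are
        not a configured peer (the Juniper-style behavior described above); a
        timeout is consistent with an implementation that silently ignores SYNs
        from unknown sources (the IOS-style behavior); a RST means nothing is
        listening at all.
        """
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        try:
            sock.connect((host, BGP_PORT))
            return "handshake completed: BGP listener visible to non-peers"
        except socket.timeout:
            return "no response: SYN ignored (peer filtering, or host down/filtered)"
        except ConnectionRefusedError:
            return "RST received: port closed, no BGP speaker"
        finally:
            sock.close()

    if __name__ == "__main__":
        for router in ["192.0.2.1", "192.0.2.2"]:   # placeholder addresses
            print(router, "->", probe_bgp_listener(router))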

Which leads to the division of intentionality and culpability, which is what makes this design vs. implementation distinction useful. Design flaws/errors (or whatever) are intentional, whereas implementation errors/flaws/bugs are accidental. This distinction also helps clarify the blame game, especially if you toss in [mis]configuration flaws, which are obviously the fault of the end user -- whereas design and implementation flaws are the fault of the vendor.

Or so the mythology goes. If you get owned by something you screwed up, it is your fault; if it is due to something you couldn't control, it is obviously somebody else's problem. Of course, even this is a little more complicated than it would seem, because if you don't patch your systems against a disclosed implementation flaw, it magically also becomes a misconfiguration flaw, and it is on you.
