Thinking about (Malicious) Input


Put simply, the biggest problems in software security exist because software takes input (see the taxonomy of coding errors in Chapter 12). Whether to trust input (including the very format that the input takes) is a critical question that all software builders must ponder.

Exploiting Software is filled with examples of programs that break when malformed or maliciously formed input leads to security compromise [Hoglund and McGraw 2004]. From the much-ballyhooed buffer overflow (which involves putting too much input in too small a place) to the likewise overhyped SQL injection attack and cross-site scripting (XSS) attacks, trusting input turns out to be the common root cause.

Carefully handling input is paramount to software security. Note that input includes things like register settings, environment variables, file contents, and even network configuration. If your program consumes data from "out there," you need to think carefully about who can dink around with the stuff your program eats.
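To make this concrete, here is a minimal sketch (in Java; the variable name APP_PROXY_MODE and its allowed values are invented for illustration) of treating an environment variable as untrusted input rather than consuming it blindly:

    // Sketch: treat an environment variable as untrusted input.
    // The variable name and the allowed values are hypothetical.
    public final class ProxyConfig {

        // Only these values are acceptable; anything else is rejected outright.
        private static final java.util.Set<String> ALLOWED_MODES =
                java.util.Set.of("direct", "proxy", "off");

        public static String readProxyMode() {
            String raw = System.getenv("APP_PROXY_MODE"); // may be attacker-influenced
            if (raw == null) {
                return "direct"; // safe default when the variable is absent
            }
            String mode = raw.trim().toLowerCase(java.util.Locale.ROOT);
            if (!ALLOWED_MODES.contains(mode)) {
                throw new IllegalArgumentException("Unexpected APP_PROXY_MODE value");
            }
            return mode;
        }
    }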

Attacker toolkits (briefly described in Chapter 6) focus plenty of attention on input, with a plethora of fault injection tools, grammar generators, re-players, and the like. By its very nature, penetration testing is obsessed with input as well (mostly because crafting malicious input is the main way to break a system from the outside). If your program accepts input over the network, it needs to be very skeptical of what it is getting.
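A toy version of what those toolkits do is easy to sketch (in Java; parseMessage below is a hypothetical stand-in for whatever actually consumes network input). The harness simply hammers the parser with random byte strings and reports anything that blows up:

    import java.nio.charset.StandardCharsets;
    import java.util.Random;

    // Minimal fuzzing sketch: feed a parser random malformed input and
    // watch for unexpected failures. Real toolkits layer grammars, replay,
    // and coverage feedback on top of this basic idea.
    public final class TinyFuzzer {

        // Hypothetical target; a real harness would call the code under test.
        static void parseMessage(byte[] input) {
            new String(input, StandardCharsets.UTF_8);
        }

        public static void main(String[] args) {
            Random rng = new Random(42); // fixed seed so failures can be replayed
            for (int i = 0; i < 100_000; i++) {
                byte[] input = new byte[rng.nextInt(256)];
                rng.nextBytes(input);
                try {
                    parseMessage(input);
                } catch (RuntimeException e) {
                    System.err.println("Iteration " + i + " broke the parser: " + e);
                }
            }
        }
    }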

eXtreme Programming and Security Testing

XP takes an interesting approach to testing, often referred to as "test first" or "test-driven design." Ironically, this approach encourages coding to the tests, an activity that was explicitly discouraged by testing gurus before XP came along. Test-driven design is not a disaster. In fact, coding to the tests may work for standard software "features." I bet you can guess the problem, though: security is not a feature.

Tests based too closely on features can fail to probe deeply into more subtle user needs that are nonfunctional in nature. Probing security features only gets us so far. Once again, this is a problem of testing for a negative.

Though unit tests and user stories in XP are supposed to specify the design, they simply don't do this well enough to get to design flaw issues. The code is the design in XP, but finding design flaws by staring at large piles of code is not possible. In fact, refactoring aside, top-down design does not really happen explicitly in some XP shops. That means there is no good time to consider security flaws explicitly.

By using acceptance tests (devised in advance of coding) as release criteria, XP practitioners keep their eyes on the functional ball. However, this myopic focus on functionality causes a propensity to overlook nonfunctional requirements and emergent situations. Security fits there.

One solution to this problem might be to focus more attention on abuse cases early in the lifecycle. This would cohere nicely with XP's user stories. Perhaps some "attacker stories" should be devised as well and used to create security tests.
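To give the flavor, here is a sketch of what an attacker story looks like once it becomes an executable security test (JUnit 5; the validator and its rules are hypothetical placeholders rather than code from any real system):

    import static org.junit.jupiter.api.Assertions.assertThrows;
    import org.junit.jupiter.api.Test;

    // Sketch: an "attacker story" expressed as a test, sitting alongside
    // the ordinary XP user stories.
    class AttackerStoryTest {

        // Hypothetical white-list validator the story is written against:
        // a name may contain only letters, spaces, and hyphens.
        static String validateName(String name) {
            if (name == null || !name.matches("[A-Za-z][A-Za-z -]{0,63}")) {
                throw new IllegalArgumentException("Rejected name input");
            }
            return name;
        }

        @Test
        void attackerStorySqlMetacharactersAreRejected() {
            // Attacker story: "As an attacker, I submit quote-laden input
            // hoping it reaches the database unescaped."
            assertThrows(IllegalArgumentException.class,
                    () -> validateName("x' OR '1'='1"));
        }
    }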

For more on my opinions about XP and software security, see my talk, "XP and Software Security?! You Gotta Be Kidding," delivered at XP Universe in 2003 <http://www.cigital.com/presentations/xpuniverse/>.


Using a black-list approach (which tries to enumerate all possible bad input) is silly and will not work. Instead, software needs to defend its input space with a white-list approach (and a Draconian white-list approach, for that matter). If your program enforces statements like "Accept only input of 32-bits as an Integer" (something that is easy to do in a modern type-safe language), you're better off right off the bat than with a system that accepts anything but tries to filter out return characters. Make sure that your testing approach delves directly into the black-list/white-list input-filtering issue.
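Here is a minimal sketch of the difference in code (Java; the field name and its bounds are invented for illustration). The black-list filter tries to scrub out characters it fears; the white-list parser accepts only a 32-bit integer within a narrow, application-defined range and rejects everything else:

    // Sketch contrasting black-list filtering with white-list validation.
    public final class InputValidation {

        // Black-list style: strip the characters we think are dangerous and
        // hope nothing harmful slips through. This is the approach to avoid.
        static String blackListFilter(String raw) {
            return raw.replace("\r", "").replace("\n", "");
        }

        // White-list style: accept only what is explicitly allowed, in this
        // case a 32-bit integer within an application-defined range.
        static int parseQuantity(String raw) {
            final int q;
            try {
                q = Integer.parseInt(raw.trim()); // rejects anything that is not a 32-bit integer
            } catch (NumberFormatException e) {
                throw new IllegalArgumentException("quantity must be an integer", e);
            }
            if (q < 1 || q > 10_000) {
                throw new IllegalArgumentException("quantity out of range");
            }
            return q;
        }
    }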

Microsoft pays plenty of attention to malicious input in its approach to software security. You should too. (See Writing Secure Code [Howard and LeBlanc 2003].)



