Penetration Testing Today


Penetration testing is the most frequently and commonly applied of all software security best practices. This is not necessarily a good thing. Often penetration testing is foisted on software development teams by overzealous security guys, and everyone ends up angry. Plus, the focus tends to be driven too much by an outside-in approach. Better to adopt and implement the first two touchpoints (code review and architectural risk analysis) than to start with number three!

One reason for the prevalence of penetration testing is that it appears to be attractive as a late-lifecycle activity and can be carried out in an outside-in manner. Operations people not involved in the earlier parts of the development lifecycle can impose it on the software (but only when it's done). Once an application is finished, it is subjected to penetration testing as part of the final pre-operations acceptance regimen. The testing is carried out by the infosec division. Because of time constraints, most assessments like this are performed in a "time-boxed" manner as a final security checklist item at the end of the lifecycle.

One major limitation of this approach is that it almost always represents a too-little-too-late attempt to tackle security at the end of the development cycle. As we have seen, software security is an emergent property of the system, and attaining it involves applying a series of touchpoints throughout the software lifecycle (see Chapter 3). Organizations that fail to integrate security throughout the development process are often unpleasantly surprised to find that their software suffers from systemic faults both at the design level and in the implementation. In other words, the system has zillions of security flaws and security bugs. In a late-lifecycle penetration testing paradigm, inside-the-code problems are uncovered too late, and options for remedy are severely constrained by both time and budget.

Fixing things at this stage is, more often than not, prohibitively expensive and almost always involves Band-Aids instead of cures. Post-penetration-test security fixes tend to be particularly reactive and defensive in nature: adjusting the firewall ruleset, for example. Though these short-notice kludges may mask inside-the-code problems temporarily, they can be likened to putting a Band-Aid on a laceration. Tracking down the source of the problem and fixing things there is much more effective.

The real value of penetration testing comes from probing a system in its final operating environment. Uncovering environment and configuration problems and concerns is the best result of any penetration test. This is mostly because such problems can actually be fixed late in the lifecycle. Knowing whether or not your WebSphere application server is properly set up and your firewall plays nicely with it is just as important to final security posture as is building solid code. Penetration testing gets to the heart of these environment and configuration issues quickly. (In fact, its weakness lies in not being able to get beyond these kinds of issues very effectively.)

The success of an ad hoc software penetration test is dependent on many factors, few of which lend themselves to metrics and standardization. The first and most obvious variable is the skill, knowledge, and experience of the tester(s). Software security penetration tests (sometimes called application penetration tests) do not currently follow a standard process of any sort and therefore are not particularly amenable to a consistent application of knowledge (think checklists and boilerplate techniques). The upshot is that only skilled and experienced testers can successfully carry out penetration testing. For an example of what happens when not enough attention is paid during a penetration test, see the next box, An Example: Scrubbed to Protect the Guilty.

Use of security requirements, abuse cases, security risk knowledge, and attack patterns in application design, analysis, and testing is rare in current practice. As a result, security findings are not repeatable across different teams and vary widely depending on the skill and experience of the tester(s). Furthermore, any test regimen can be structured in such a way as to influence the findings. If test parameters are determined by individuals motivated (consciously or not) not to find any security issues, it is very likely that penetration testing will result in a self-congratulatory exercise in futility.[4]

[4] Put in more basic terms, don't let the fox guard the chicken house. If you do, don't be surprised if the fox finds absolutely no problems with the major hole in the northwest corner of the chicken yard.

Results interpretation is also an issue. Typically, results take the form of a list of flaws, bugs, and vulnerabilities identified during the penetration testing. Software development organizations tend to regard these results as complete bug reports: comprehensive lists of issues to be addressed in order to make the system secure. Unfortunately, this perception does not factor in the time-boxed (or otherwise incomplete) nature of late-lifecycle assessments. In practice, a penetration test can identify only a small representative sample of all of the possible security risks in a system (especially those problems that are environmental or involve operational configuration). If a software development organization focuses solely on a small (and limited) list of issues, it will end up mitigating only a subset of the security risks present (and possibly not even those that present the greatest risk).

All of these issues pale in comparison to the problem that penetration testing is often used as an excuse to declare security victory and "go home." Don't forget, when a penetration test concentrates on finding and removing a handful of issues (and even does so successfully), everyone looks good. Unfortunately, penetration testing done without any basis in security risk analysis leads to the "pretend security" problem with alarming consistency.

An Example: Scrubbed to Protect the Guilty

One major problem with application penetration testing as carried out today is that the testers are often very good network security people who are not very software savvy. If you have the same guys testing your network infrastructure setup (using Nessus) as are testing your applications, you might ask yourself what kind of value you're getting. It's not that network security people are dopes. They're not. It's just that results from a software penetration test need to be described in a coherent fashion so that real software people can act on them. Communicating with software people is difficult enough if you are one! If you're not ... woe is you.

As a good example of the kind of silly results you get when you have the wrong people do an application penetration test, take a look at the following excerpt cut from a real penetration test report produced by an experienced (network) penetration team. We'll call the company APPSECO to protect the guilty.

Source Code Review of Input Validation Modules

APPSECO conducted a manual security review of a selected set of input validation modules. The modules were provided to CLIENT by the SWVENDOR as an example of their new input validation architecture. APPSECO analyzed the logic flow, input bounds checking, input type and content validation, and error handling. The modules reviewed are listed in the table below:

[List elided.]

...

Input Validation Modules

The results of the code analysis indicate that input validation is ineffective. Further, the input validation modules introduce potential cross-site scripting vulnerabilities to the application. While some input is validated for type, content, and for authorization, much of the input is not.

Because only a portion of the code base was provided, APPSECO cannot make a definitive and complete statement regarding the effectiveness of the code in controlling user input and restricting user access. As such, conclusions regarding the effectiveness of the code and severity of vulnerabilities identified may change upon review of the code given access to the entire code base. For example, numerous validation functions are called within the validation modules for which no definition was provided. These include: [functions elided]. These and all other validation functions must be reviewed.

I find it inexcusable to make claims like those found in the second paragraph given the kind of disclaimers in the third. No non-software person would look at parts of a system and say anything at all about what had been seen (short of identifying local bugs in API usage). Incidentally, later in the same report, a cut-and-paste error in the description of a network access control problem calls out a different client. Hmm.


One big benefit of penetration testing that is well worth mentioning is its adherence to a critical (even cynical) black hat stance. By taking on a system in its real production environment, penetration testers can get a better feel for operational and configuration issues often overlooked in software development. That's why penetration testing needs to be adjusted, not abandoned. For more on black box testing and why it is useful as an attacker technique, see Chapter 3 of Exploiting Software [Hoglund and McGraw 2004].

Coder's Corner

Here's an interesting little problem published by Professor D. J. Bernstein from the University of Illinois at Chicago and attributed to his student Ariel Berkman. (The original posting can be found at <http://tigger.uic.edu/~jlongs2/holes/changepassword.txt>.)

The posting describes a locally exploitable security hole in ChangePassword, which is a YP/Samba/Squid password-changing utility.

If changepassword.cgi is installed on a multiuser computer, any user with an account on the computer can gain complete control of the computer through the utility. The attacker can read and modify all files, watch all processes, and perform other such nefarious activities.

The bug occurs on line 317 of changepassword.c, which calls

system("cd /var/yp && make &> /dev/null");


without cleaning its environment in any way first. This is a big no-no.

Unfortunately (or not, depending on your hat color) the Makefile arranges for changepassword.cgi to be setuid root. A malicious user can create an exploit as follows:

  • Set $PATH to point to an evil make program.

  • Set $CONTENT_LENGTH to 512.

  • Set $REQUEST_METHOD to POST.

  • Feed form_user=u&form_pw=p&form_new1=x&form_new2=x& to changepassword.cgi, where u is the username and p is the password.

The attacker's make program then runs with root privileges.

In short, you can use this CGI script to change a password and to root the box, but not through the Web interface. Since this program doesn't clean up its environment properly before running, you can log into the machine, put a malicious command named make early on your path, execute the CGI script, and you're all done.

This bug is interesting for a number of reasons.

  • It's a nice example of programmers' assumptions being violated.

  • It's a Web application, but you can't find the vulnerability using port 80 nonsense.

  • Because the problem is related to the interaction between the program and the environment, exploitability is tied to the configuration of the machine: your QA environment might be okay while your production server is vulnerable.

  • You're unlikely to find it with any sort of black box penetration test since the tester needs to look at the source code to find the problem.





Software Security: Building Security In
ISBN: 0321356705
Author: Gary McGraw