Kumbaya (for Software Security)


Software security is a significant and developing topic. The touchpoints described in this book are meant to be carried out by software security specialists in tandem with development teams. The issue at hand is how information security professionals can best participate in the software development process. If you are a CISSP, an operational security professional, or a network administrator, this Bud's for you. After a brief refresher paragraph on each touchpoint, I will introduce some recommendations relevant to both software developers and information security practitioners. The idea is to describe how best to leverage the complementary aspects of the two disciplines.

  • Requirements: Abuse Cases

    The concept of abuse case development is derived from use case development (see Chapter 8). In an abuse case, an application's deliberate misuse is considered and the corresponding effect is pondered. For example, when addressing user input, a series of abuse cases can be constructed that describe in some detail how malicious users can and will attempt to overflow input buffers, insert malicious data (e.g., using SQL injection attacks), and basically ride herd over software vulnerabilities. An abuse case describes these scenarios as well as how the application should respond to them. As with their use case counterparts, each abuse case is then used to drive a (non)functional requirement and corresponding test scenario for the software (one such test is sketched below).

    Involving information security in abuse case development is such low-hanging fruit that the fruit itself is dirt-splattered from the latest hard rain. Simply put, infosec pros come to the table with the (rather unfortunate) benefit of having watched and dissected years of attack data, built forensics tools,[3] created profiles of attackers, and so on. This may make them jaded and surly, but at least they intimately know what we're up against. Many abuse case analysis efforts begin with brainstorming or "whiteboarding" sessions during which an application's use cases and functional requirements are described while a room full of experts pontificates about how an attacker might attempt to abuse the system. Participating properly in these exercises means carefully and thoroughly considering similar systems and the attacks that have succeeded against them. Thorough knowledge of attack patterns and the computer security horror stories of days gone by brings this exercise to life. Getting past your own belly button is important to abuse case success, so consider other domains that may be relevant to the application under review while you're at it. Once again, real battle experience is critical.

    [3] See Dan Farmer and Wietse Venema's excellent new tome on forensics, Forensic Discovery [Farmer and Venema 2005].

    Infosec people are likely to find (much to their amusement) that the software developers in the room are blissfully unaware of many of the attack forms seen every day out beyond the network perimeter. Of course, many of the uninformed are also quite naturally skeptical unbelievers. While converting the unbelievers, take great care not to succumb to the tendency toward hyperbole and exaggeration that is unfortunately common among security types. There's really nothing worse than a blustery security weenie on his high horse over some minor skirmish. Do not overstate the attacks that you've seen and studied. Instead, stick to the facts (ma'am) and be prepared to back your statements up with actual examples. Knowledge of actual software technology is a plus.
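
    To make this concrete, here is a minimal sketch of a test derived from a single buffer overflow abuse case. The set_username() routine and its interface are hypothetical stand-ins for whatever input handling the application under review actually performs.

        #include <assert.h>
        #include <stdio.h>
        #include <string.h>

        /* Hypothetical input handler under test; a real project would
         * exercise its own routine here. Returns 0 on success and -1
         * when input is rejected. */
        static int set_username(char *dest, size_t destlen, const char *src)
        {
            if (src == NULL || strlen(src) >= destlen)
                return -1;              /* reject oversized input outright */
            memcpy(dest, src, strlen(src) + 1);
            return 0;
        }

        int main(void)
        {
            char buf[16];
            char attack[1024];

            /* Abuse case: "attacker submits a 1023-byte username to
             * overflow a 16-byte buffer." Derived requirement: the
             * input must be rejected, not silently truncated. */
            memset(attack, 'A', sizeof(attack) - 1);
            attack[sizeof(attack) - 1] = '\0';

            assert(set_username(buf, sizeof(buf), attack) == -1);
            assert(set_username(buf, sizeof(buf), "alice") == 0);

            puts("abuse case tests passed");
            return 0;
        }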

  • Design: Business Risk Analysis

    Assessing the business impact likely to result from a successful compromise of the software is a critical undertaking (see Chapters 2 and 5). Without explicitly taking this on, a security analysis will fall short in the "who cares" department. Questions of cost to the parent organization sponsoring the software are considered relative to the project. This cost is understood both in terms of direct cost (think liability, lost productivity, and rework) and in terms of indirect cost (think reputation and brand damage).

    The most important people to consult when assessing software-induced business risks are the business stakeholders behind the software. In organizations that already practice business-level technology analysis, that fact tends to be quite well understood. The problem is that in a majority of these organizations, technology assessment of the business situation stops well before the level of software. A standard approach can be enhanced with the addition of a few simple questions: What do the people causing the software to be built think about security? What do they expect? What are they trying to accomplish that might be thwarted by successful attack? What worries them about security? The value that information security professionals can bring to answering these questions comes from a wealth of firsthand experience seeing the security impact when similar business applications were compromised.

    That puts them in a good position to answer other security-related questions: What sorts of costs have similar companies incurred from attacks? How much downtime was involved? What was the resulting publicity in each case? In what ways was the organization's reputation tarnished? Infosec people are in a good position to provide input and flesh out a conversation with relevant stories. Here again, great care should be taken to not overstate facts. When citing incidents at other organizations, be prepared to back up your claims with news reports and other third-party documentation.

  • Design: Architectural Risk Analysis

    Like the business risk analysis just described, architectural risk analysis assesses the technical security exposures in an application's proposed design and links these to business impact. Starting with a high-level depiction of the design, each module, interface, interaction, and so on is considered against known attack methodologies and their likelihood of success (see Chapter 5). Architectural risk analyses are often usefully applied to individual subcomponents of a design as well as to the design as a whole, which provides a forest-level view of a software system's security posture. Attention to holistic aspects of security is paramount, as at least 50% of security defects are architectural in nature.

    At this point we're beginning to get to the technical heart of the software development process. For architectural risk analysis to be effective, security analysts must possess a great deal of technology knowledge covering both the application and its underlying platform, frameworks, languages, functions, libraries, and so on. The most effective infosec team member in this situation is clearly the one who is a technology expert with solid experience in the particular tools and platforms in play. With this kind of knowledge under her belt, the infosec professional should again be providing real-world feedback into the process. For example, the analysis team might be discussing the relative strengths and weaknesses of a particular network encryption protocol.

    Information security can help by providing perspective to the conversation. All software has potential weaknesses, but has component X been involved in actual attacks? Are there known vulnerabilities in the protocol that the project is planning to use? Is a COTS component or platform a popular attacker target? Or, on the other hand, does it have a stellar reputation and only a handful of properly handled, published vulnerabilities or known attacks? Feedback of this sort should be extremely useful in prioritizing risk and weaknesses as well as deciding on what, if any, mitigation strategies to pursue.

  • Test Planning: Security Testing

    Just as testers typically use functional specifications and requirements to create test scenarios and test plans,[4] security-specific functionality should be used to derive tests against the target software's security functions (see Chapter 7). These kinds of investigations generally include tests that verify security features such as encryption, user identification, logging, confidentiality, authentication, and so on. Think of these as the "positive" security features that white hats are concerned with.

    [4] Especially those testers who understand the critical notion of requirements traceability <http://www.sei.cmu.edu/str/descriptions/reqtracing_body.html>.

    Thinking like a good guy is not enough. Adversarial test scenarios are the natural result of the process of assessing and prioritizing software's architectural risks (see Chapter 7). Each architectural risk and abuse case considered should be described and documented down to a level that clearly explains how an attacker might go about exploiting a weakness and compromising the software. Donning your black hat and thinking like a bad guy is critical. Such descriptions can be used to generate a priority-based list of test scenarios for later adversarial testing.

    Although test planning and execution are generally performed by QA and development groups, testing represents another opportunity for infosec to have a positive impact. Testing, especially risk-based testing, must not only cover functionality but also closely emulate the steps that an attacker will take when breaking a target system. Highly realistic scenarios (the security analog of realistic user scenarios) are much more useful than arbitrary pretend "attacks." Standard testing organizations, if they are effective at all, are most effective at designing and performing tests based on functional specifications. Designing risk-based test scenarios is a rather substantial departure from the status quo and one that should benefit from the experience base of security incident handlers. In this case, infosec professionals who are good at thinking like bad guys are the most valuable resources. The key to risk-based testing is to understand how bad guys work and what that means for the system under test (a minimal sketch follows).
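
    As a small illustration, the sketch below pairs a "positive" security test with adversarial, risk-based tests. The authenticate() routine, its lockout threshold, and the credentials are all invented for the example; a real test plan would target the actual system under test.

        #include <assert.h>
        #include <stdio.h>
        #include <string.h>

        /* Hypothetical authentication routine standing in for the
         * system under test. */
        static int failures = 0;

        static int authenticate(const char *user, const char *pass)
        {
            if (user == NULL || pass == NULL || pass[0] == '\0')
                return 0;               /* fail closed on bad input */
            if (failures >= 3)
                return 0;               /* account locked out */
            if (strcmp(user, "alice") == 0 && strcmp(pass, "s3cret") == 0)
                return 1;
            failures++;
            return 0;
        }

        int main(void)
        {
            /* "Positive" test: the security feature works as specified. */
            assert(authenticate("alice", "s3cret") == 1);

            /* Adversarial, risk-based tests: think like the bad guy.
             * Empty passwords must not slip through, and repeated
             * failures must trigger lockout, even for valid
             * credentials afterward. */
            assert(authenticate("alice", "") == 0);
            for (int i = 0; i < 3; i++)
                assert(authenticate("alice", "guess") == 0);
            assert(authenticate("alice", "s3cret") == 0);  /* locked out */

            puts("security tests passed");
            return 0;
        }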

  • Implementation: Code Review

    The design-centric activities described earlier focus on architectural flaws built into software design. They completely overlook, however, implementation bugs that may well be introduced during coding. Implementation bugs are both numerous and common (just like real bugs in the Virginia countryside) and include nasty creatures like the notorious buffer overflow, which owes its existence to the use (or misuse) of vulnerable APIs (e.g., gets(), strcpy(), and so on in C) (see Chapter 4); a short sketch of this bug class appears below. Code review processes, both manual and (even more important) automated with a static analysis tool, attempt to identify security bugs prior to the software's release.

    By its very nature, code review requires knowledge of code. An infosec practitioner with little experience writing and compiling software is going to be of little use during a code review. If you don't know what it means for a variable to be declared in a header or an argument to a method to be static/final, staring at lines of code all day isn't going to help. Because of this, the code review step is best left in the hands of the members of the development organization, especially if they are armed with a modern source code analysis tool. With the exception of information security people who are highly experienced in programming languages and code-level vulnerability resolution, there is no natural fit for network security expertise during the code review phase. This may come as a great surprise to those organizations currently attempting to impose software security on their enterprises through the infosec division. Even though the idea of security enforcement is solid, making enforcement at the code level successful when it comes to code review requires real hands-on experience with code (see the box Know When Enough Is Too Much).
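
    For readers who want to see the bug class in the flesh, here is a minimal sketch contrasting the unbounded gets()/strcpy() pattern with a bounded alternative. The buffer sizes are arbitrary.

        #include <stdio.h>
        #include <string.h>

        #define NAMELEN 32

        int main(void)
        {
            char name[NAMELEN];

            /* Vulnerable pattern: gets() writes past the end of name[]
             * on long input because it has no bound at all (gets() was
             * removed from the language in C11 for exactly this reason):
             *
             *     gets(name);
             *     strcpy(copy, name);
             *
             * Safer pattern: give every read and copy an explicit bound. */
            if (fgets(name, sizeof(name), stdin) != NULL) {
                char copy[NAMELEN];

                name[strcspn(name, "\n")] = '\0';          /* strip newline */
                snprintf(copy, sizeof(copy), "%s", name);  /* bounded copy */
                printf("hello, %s\n", copy);
            }
            return 0;
        }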

  • System Testing: Penetration Testing

    System penetration testing, when used appropriately, focuses on people failures and procedure failures made during the configuration and deployment of software. The best kinds of penetration testing are driven by previously identified risks and are engineered to probe risks directly in order to ascertain their exploitability (see Chapter 6).

    While testing software to functional specifications has traditionally been the domain of QA, penetration testing has traditionally been the domain of information security and incident-handling organizations. As such, the fit here for information security participation is a very natural and intuitive one. Of course, there are a number of subtleties that should not be ignored. As I describe in Chapter 6, a majority of penetration testing today focuses its attention on network topology, firewall placement, communications protocols, and the like. It is therefore very much an outside-in approach that barely begins to scratch the surface of applications. Penetration testing needs to encompass a more inside-out approach that takes into account risk analyses and other software security results as it is carried out. This distinction is sometimes described as the difference between network penetration testing and application penetration testing (a toy network-level probe is sketched at the end of this section to make the contrast concrete). Software security is much more interested in the latter.

    Also worth noting is the use of various black box penetration tools. Network security scanners like Nessus, nmap, and other SATAN derivatives are extremely useful since there are countless ways to configure (and misconfigure) complex networks and their various services. Application security scanners (which I lambaste in Chapter 1) are nowhere near as useful. If by an "application penetration test" you mean the process of running an application security testing tool and gathering results, you have a long way to go to make your approach hold water.[5] It's worth noting here for non-software people how amusing software professionals find the idea of a canned set of security tests (hacker in a box, so to speak) for any possible application. Software testing is not something that can be handled by a set of canned tests, no matter how large the can. The idea of testing any arbitrary program with, say, a few thousand tests determined in advance, before the software was even conceived, is ridiculous. I'm afraid the idea of testing any arbitrary program with a few hundred application security tests is just as silly!

    Know When Enough Is Too Much

    In one large financial services organization (which shall remain nameless), the infosec people were spinning up an "application security" program. They did many things right. One thing that they got completely wrong, however, was having code review be carried out by infosec people who weren't even sure what a compiler was.

    The software guys very quickly determined the level of competence of the security code review people, and they started gaming the system. In some cases they sent code for review that had nothing whatsoever to do with the system they were actually building. This was just plain deceitful and wrong, but the infosec people were too clueless to figure out what was going on.

    But even when things weren't taken quite to that extreme, they were bad. Dev was submitting code for review that would not even build, which hampered infosec's ability to apply modern analysis techniques. The infosec people had a very hard time comprehending how to push back because they weren't familiar with build processes, nightly builds, and the like. In the end, they had not specified what they needed for a successful review in terms that dev would understand.

    There are some big lessons to be learned here. The first is that dev is in a much better position to use code analysis tools than infosec is (though clearly some oversight is required so you don't end up with the fox guarding the chicken house). The second is that real software people need to be attached to and included in modern infosec organizations. The most knowledgeable network security people in the world will sometimes be at a total loss when it comes to software security.


    The good news about penetration testing and infosec involvement is that it is most likely already underway. The bad news is that infosec needs to up the level of software clue in order to carry out penetration testing most effectively.
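
    To make the outside-in versus inside-out distinction concrete, the toy probe below does what a network scanner does at its core: ask whether a TCP port answers. The target address and port list are placeholders. Note how little this reveals about the application listening behind the port; that gap is exactly what risk-driven application penetration testing must fill.

        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <arpa/inet.h>
        #include <sys/socket.h>

        /* Probe one TCP port and report whether anything is listening.
         * This is the network-level, outside-in view; it says nothing
         * about flaws in the software behind the port. */
        static int port_open(const char *ip, int port)
        {
            struct sockaddr_in addr;
            int fd = socket(AF_INET, SOCK_STREAM, 0);
            int open;

            if (fd < 0)
                return 0;

            memset(&addr, 0, sizeof(addr));
            addr.sin_family = AF_INET;
            addr.sin_port = htons(port);
            inet_pton(AF_INET, ip, &addr.sin_addr);

            open = (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0);
            close(fd);
            return open;
        }

        int main(void)
        {
            int ports[] = { 22, 80, 443 };
            size_t i;

            for (i = 0; i < sizeof(ports) / sizeof(ports[0]); i++)
                printf("port %d: %s\n", ports[i],
                       port_open("127.0.0.1", ports[i]) ? "open" : "closed");
            return 0;
        }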

  • Fielded System: Deployment and Operations

    The final steps in fielding secure software are the central activities of deployment and operations. Careful configuration and customization of any software application's deployment environment can greatly enhance its security posture. Designing a smartly tailored deployment environment for a program requires following a process that starts at the network component level, proceeds through the operating system, and ends with the application's own security configuration and setup.

    Many software developers would argue that deployment and operations are not even part of the software development process. Even if this view were correct, there is no way that operations and deployment concerns can be properly addressed if the software is so poorly constructed as to fall apart no matter what kind of solid ground it is placed on. Put bluntly, operations organizations have put up with some rather stinky software for a long time, and it has made them wary. If we can set that argument aside for a moment and look at the broader picture (that is, safely setting up the application in a secure operational environment and running it accordingly), then the work that needs doing can certainly be positively affected by information security. The best opportunities exist in fine-tuning access controls at the network and operating system levels, as well as in configuring an event-logging and event-monitoring mechanism that will be most effective during incident response operations. Attacks will happen. Be prepared for them to happen, and be prepared to clean up the mess after they have.[6]

    [6] This kind of advice is pretty much a "no duh" for information security organizations. That's one reason why their involvement in this step is paramount.
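
    As one small example of operations-side preparation, the sketch below uses the standard POSIX syslog interface to record a security-relevant event for later incident response. The program name, event fields, and facility choice are illustrative assumptions, not a prescribed format.

        #include <syslog.h>

        /* Record a security-relevant event so that incident responders
         * have something to work with after an attack. */
        static void log_failed_login(const char *user, const char *src_ip)
        {
            syslog(LOG_AUTHPRIV | LOG_WARNING,
                   "failed login: user=%s src=%s", user, src_ip);
        }

        int main(void)
        {
            openlog("myapp", LOG_PID | LOG_NDELAY, LOG_AUTHPRIV);
            log_failed_login("alice", "203.0.113.7");
            closelog();
            return 0;
        }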



