Most people expect vendors to provide some degree of assurance about the integrity of their software. The sad truth is that vendors offer few guarantees of quality for any software. If you doubt this, just read the end user license agreement (EULA) that accompanies almost every piece of commercial software. However, it's in a company's best interests to keep clients happy, so most vendors implement their own quality assurance measures. These measures usually focus on marketable concerns, such as features, availability, and general stability; this focus has historically left security haphazardly applied or occasionally ignored entirely.

Note
Some industries do impose their own security requirements and standards, but they typically involve regulatory interests and apply only to certain specialized environments and applications. This practice is changing, however, as high-profile incidents are moving regulators and industry standards bodies toward more proactive security requirements.

The good news is that attitudes toward security have been changing recently, and many vendors are adopting business processes for more rigorous security testing. Many approaches are becoming commonplace, including automated code analysis, security unit testing, and manual code audits. As you can tell from the title, this book focuses on manual code audits.

Auditing an application is the process of analyzing application code (in source or binary form) to uncover vulnerabilities that attackers might exploit. By going through this process, you can identify and close security holes that would otherwise put sensitive data and business resources at unnecessary risk. In addition to the obvious case of a company developing in-house software, code auditing makes sense in several other situations. Table 1-1 summarizes the most common ones.
As you can see, code auditing makes sense in quite a few situations. Despite the demand for people with these skills, however, few professionals have the training and experience to perform these audits at a high standard. It's our hope that this book helps fill that gap.

Auditing Versus Black Box Testing

Black box testing is a method of evaluating a software system by manipulating only its exposed interfaces. Typically, this process involves generating specially crafted inputs that are likely to cause the application to perform some unexpected behavior, such as crashing or exposing sensitive data. For example, black box testing an HTTP server might involve sending requests with abnormally large field sizes, which could trigger a memory corruption bug (covered in depth in Chapter 5, "Memory Corruption"). This test might involve a legitimate request, such as the following (assume that the "..." sequence represents a much longer series of "A" characters):

    GET AAAAAAAAAAAAAAAAAAA...AAAAAAAAAAAAAAAAAAA HTTP/1.0

Or it might involve an invalid request, such as this one (once again, the "..." sequence represents a much longer series of "A" characters):

    GET / AAAAAAAAAAAAAAAAAAA...AAAAAAAAAAAAAAAAAAAA/1.0

Any crashes resulting from these requests would imply a fairly serious bug in the application. This approach is even more appealing when you consider that tools are available to automate the process of testing applications. This process of automated black box testing is called fuzz-testing, and fuzz-testing tools range from generic "dumb" fuzzers to protocol-aware "intelligent" fuzzers. So you don't need to try out every case you can think of manually; you simply run the tool, perhaps with some modifications of your own design, and collect the results. The advantage of black box testing an application is that you can do it quickly and possibly have results almost immediately. However, it's not all good news; black box testing has several important disadvantages.
Essentially, black box testing is just throwing a bunch of data at an application and hoping it does something it isn't supposed to do. You really have no idea what the application is doing with the data, so there are potentially hundreds of code paths you haven't explored because the data you throw at the application doesn't trigger those paths. For instance, returning to the Web server example, imagine that it has certain internal functionality if particular keywords are present in the query string of a request. Take a look at the following code snippet, paying close attention to the sprintf() and putenv() calls:

    struct keyval {
        char *key;
        char *value;
    };

    int handle_query_string(char *query_string)
    {
        struct keyval *qstring_values, *ent;
        char buf[1024];

        if (!query_string)
            return 0;

        qstring_values = split_keyvalue_pairs(query_string);

        if ((ent = find_entry(qstring_values, "mode")) != NULL) {
            sprintf(buf, "MODE=%s", ent->value);
            putenv(buf);
        }

        ... more stuff here ...
    }

This Web server has a specialized nonstandard behavior: if the query string contains the sequence mode=xxx, the environment variable MODE is set to the value xxx. This specialized behavior has an implementation flaw, however: a buffer overflow caused by a careless use of the sprintf() function. If you aren't sure why this code is dangerous, don't worry; buffer overflow vulnerabilities are covered in depth in Chapter 5. You can see the bug right away by examining the code, but a black box or fuzz-testing tool would probably miss this basic vulnerability. Therefore, you need to be able to assess code constructs intelligently in addition to just running testing tools and noting the results. That's why code auditing is important. You need to be able to analyze code and detect code paths that an automated tool might miss, as well as locate vulnerabilities that automated tools can't catch.
Fortunately, code auditing combined with black box testing provides maximum results for uncovering vulnerabilities in a minimum amount of time. This book arms you with the knowledge and techniques to thoroughly analyze an application for a wide range of vulnerabilities and provides insight into how you can use your understanding and creativity to discover flaws unique to a particular application.

Code Auditing and the Development Life Cycle

When you consider the risks of exposing an application to potentially malicious users, the value of application security assessment is clear. However, you need to know exactly when to perform an assessment. Generally, you can perform an audit at any stage of the Systems Development Life Cycle (SDLC). However, the cost of identifying and fixing vulnerabilities can vary widely based on when and how you choose to audit. So before you get started, review the following phases of the SDLC:
Every software development process follows this model to some degree. Classical waterfall models tend toward a strict interpretation, in which the system makes only a single pass through the model over its life span. In contrast, newer methodologies, such as agile development, tend to focus on refining an application through repeated iterations of the SDLC phases. So the way in which the SDLC model is applied might vary, but the basic concepts and phases are consistent enough for the purposes of this discussion. You can use these distinctions to help classify vulnerabilities, and in later chapters, you learn about the best phases in which to conduct different classes of reviews.