The Necessity of Auditing


Most people expect vendors to provide some degree of assurance about the integrity of their software. The sad truth is that vendors offer few guarantees of quality for any software. If you doubt this, just read the end user license agreement (EULA) that accompanies almost every piece of commercial software. However, it's in a company's best interests to keep clients happy, so most vendors implement their own quality assurance measures. These measures usually focus on marketable concerns, such as features, availability, and general stability; this focus has historically left security haphazardly applied or occasionally ignored entirely.

Note

Some industries do impose their own security requirements and standards, but they typically involve regulatory interests and apply only to certain specialized environments and applications. This practice is changing, however, as high-profile incidents are moving regulators and industry standards bodies toward more proactive security requirements.


The good news is that attitudes toward security have been changing recently, and many vendors are adopting business processes for more rigorous security testing. Many approaches are becoming commonplace, including automated code analysis, security unit testing, and manual code audits. As you can tell from the title, this book focuses on manual code audits.

Auditing an application is the process of analyzing application code (in source or binary form) to uncover vulnerabilities that attackers might exploit. By going through this process, you can identify and close security holes that would otherwise put sensitive data and business resources at unnecessary risk.

In addition to the obvious case of a company developing in-house software, code auditing makes sense in several other situations. Table 1-1 summarizes the most common ones.

Table 1-1. Code-Auditing Situations

Situation: In-house software audit (prerelease)
Description: A software company performs code audits of a new product before its release.
Advantage: Design and implementation flaws can be identified and remedied before the product goes to market, saving money in developing and deploying updates. It also saves the company from potential embarrassment.

Situation: In-house software audit (postrelease)
Description: A software company performs code audits of a product after its release.
Advantage: Security vulnerabilities can be found and fixed before malicious parties discover the flaws. This process allows time to perform testing and other checks, as opposed to doing a hurried release in response to a vulnerability disclosure.

Situation: Third-party product range comparison
Description: A third party performs audits of a number of competing products in a particular field.
Advantage: An objective third party can provide valuable information to consumers and assist in selecting the most secure product.

Situation: Third-party evaluation
Description: A third party performs an independent software audit of a product for a client.
Advantage: The client can gain an understanding of the relative security of an application it's considering deploying. This might prove to be the deciding factor in choosing one technology over another.

Situation: Third-party preliminary evaluation
Description: A third party performs an independent review of a product before it goes to market.
Advantage: Venture capitalists can get an idea of the viability of a prospective technology for investment purposes. Vendors might also conduct this type of evaluation to ensure the quality of a product they intend to market.

Situation: Independent research
Description: A security company or consulting firm performs a software audit independently.
Advantage: Security product vendors can identify vulnerabilities and implement protective measures in scanners and other security devices. Independent research also functions as an industry watchdog and provides a way for researchers and security companies to establish professional credibility.


As you can see, code auditing makes sense in quite a few situations. Despite the demand for people with these skills, however, few professionals have the training and experience to perform these audits at a high standard. It's our hope that this book helps fill that gap.

Auditing Versus Black Box Testing

Black box testing is a method of evaluating a software system by manipulating only its exposed interfaces. Typically, this process involves generating specially crafted inputs that are likely to cause the application to perform some unexpected behavior, such as crashing or exposing sensitive data. For example, black box testing an HTTP server might involve sending requests with abnormally large field sizes, which could trigger a memory corruption bug (covered in more depth in Chapter 5, "Memory Corruption"). This test might involve a legitimate request, such as the following (assume that the "..." sequence represents a much longer series of "A" characters):

GET AAAAAAAAAAAAAAAAAAA...AAAAAAAAAAAAAAAAAAA HTTP/1.0


Or it might involve an invalid request, such as this one (once again, the "..." sequence represents a much longer series of "A" characters):

GET / AAAAAAAAAAAAAAAAAAA...AAAAAAAAAAAAAAAAAAAA/1.0


Any crashes resulting from these requests would imply a fairly serious bug in the application. This approach is even more appealing when you consider that tools to automate the process of testing applications are available. This process of automated black box testing is called fuzz-testing, and fuzz-testing tools include generic "dumb" and protocol-aware "intelligent" fuzzers. So you don't need to manually try out every case you can think of; you simply run the tool, perhaps with some modifications of your own design, and collect the results.
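To make this concrete, the following is a minimal sketch of a "dumb" fuzzer along the lines just described: it sends a single HTTP request with an oversized request URI and checks whether the server still responds. The target host, port, and payload size are illustrative assumptions, not details from the text, and a real fuzzer would loop over many payload sizes and field positions while monitoring the target far more carefully.

/*
 * Minimal "dumb" fuzzer sketch (illustrative assumptions: target
 * host/port and payload size are arbitrary choices for the example).
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

#define TARGET_HOST "127.0.0.1"   /* hypothetical test server */
#define TARGET_PORT 8080
#define URI_LENGTH  65536         /* abnormally large field size */

int main(void)
{
    char *request, response[512];
    struct sockaddr_in addr;
    int sock;
    ssize_t n;

    /* Build "GET AAAA...AAAA HTTP/1.0\r\n\r\n" with an oversized URI. */
    request = malloc(URI_LENGTH + 32);
    if (request == NULL)
        return 1;
    strcpy(request, "GET ");
    memset(request + 4, 'A', URI_LENGTH);
    strcpy(request + 4 + URI_LENGTH, " HTTP/1.0\r\n\r\n");

    sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) {
        perror("socket");
        return 1;
    }

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(TARGET_PORT);
    inet_pton(AF_INET, TARGET_HOST, &addr.sin_addr);

    if (connect(sock, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }

    send(sock, request, strlen(request), 0);

    /* A zero-byte read or an error here often means the server died or
       dropped the connection, which merits closer investigation. */
    n = recv(sock, response, sizeof(response) - 1, 0);
    if (n <= 0)
        printf("no response: possible crash, investigate further\n");
    else
        printf("server responded with %zd bytes\n", n);

    close(sock);
    free(request);
    return 0;
}

The point of the sketch is how little machinery basic fuzzing requires; real fuzz-testing tools simply automate and vastly expand this send-and-observe loop.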

The advantage of black box testing an application is that you can do it quickly and possibly have results almost immediately. However, it's not all good news; there are several important disadvantages of black box testing. Essentially, black box testing is just throwing a bunch of data at an application and hoping it does something it isn't supposed to do. You really have no idea what the application is doing with the data, so there are potentially hundreds of code paths you haven't explored because the data you throw at the application doesn't trigger those paths. For instance, returning to the Web server example, imagine that it has certain internal functionality if particular keywords are present in the query string of a request. Take a look at the following code snippet, paying close attention to the sprintf() call near the end:

struct keyval {
    char *key;
    char *value;
};

int handle_query_string(char *query_string)
{
    struct keyval *qstring_values, *ent;
    char buf[1024];

    if(!query_string)
        return 0;

    qstring_values = split_keyvalue_pairs(query_string);

    if((ent = find_entry(qstring_values, "mode")) != NULL)
    {
        /* The flaw: sprintf() copies an attacker-controlled value into
           a fixed-size stack buffer without any length check. */
        sprintf(buf, "MODE=%s", ent->value);
        putenv(buf);
    }

    ... more stuff here ...
}


This Web server has a specialized nonstandard behavior: if the query string contains the sequence mode=xxx, the environment variable MODE is set with the value xxx. This specialized behavior has an implementation flaw, however: a buffer overflow caused by careless use of the sprintf() function. If you aren't sure why this code is dangerous, don't worry; buffer overflow vulnerabilities are covered in depth in Chapter 5.
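For comparison, here is a hedged sketch of one common repair, bounding the write with snprintf(); it illustrates the general technique and isn't presented as the original application's actual patch:

    if((ent = find_entry(qstring_values, "mode")) != NULL)
    {
        /* snprintf() bounds the write to sizeof(buf), so an oversized
           "mode" value is truncated instead of overflowing the buffer. */
        snprintf(buf, sizeof(buf), "MODE=%s", ent->value);
        putenv(buf);
    }

Note that putenv() retains a pointer to the string passed to it, so production code would also want setenv() or heap-allocated storage; the sketch above addresses only the overflow itself.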

You can see the bug right away by examining the code, but a black box or fuzz-testing tool would probably miss this basic vulnerability. Therefore, you need to be able to assess code constructs intelligently in addition to just running testing tools and noting the results. That's why code auditing is important. You need to be able to analyze code and detect code paths that an automated tool might miss as well as locate vulnerabilities that automated tools can't catch.

Fortunately, code auditing combined with black box testing provides maximum results for uncovering vulnerabilities in a minimum amount of time. This book arms you with the knowledge and techniques to thoroughly analyze an application for a wide range of vulnerabilities and provides insight into how you can use your understanding and creativity to discover flaws unique to a particular application.

Code Auditing and the Development Life Cycle

When you consider the risks of exposing an application to potentially malicious users, the value of application security assessment is clear. However, you need to know exactly when to perform an assessment. Generally, you can perform an audit at any stage of the Systems Development Life Cycle (SDLC). However, the cost of identifying and fixing vulnerabilities can vary widely based on when and how you choose to audit. So before you get started, review the following phases of the SDLC:

  1. Feasibility study: This phase is concerned with identifying the needs the project should meet and determining whether developing the solution is technologically and financially viable.

  2. Requirements definition: In this phase, a more in-depth study of requirements for the project is done, and project goals are established.

  3. Design: The solution is designed and decisions are made about how the system will technically achieve the agreed-on requirements.

  4. Implementation: The application code is developed according to the design laid out in the previous phase.

  5. Integration and testing: The solution is put through some level of quality assurance to ensure that it works as expected and to catch any bugs in the software.

  6. Operation and maintenance: The solution is deployed and is now in use, and revisions, updates, and corrections are made as a result of user feedback.

Every software development process follows this model to some degree. Classical waterfall models tend toward a strict interpretation, in which the system makes only a single pass through the model over its life span. In contrast, newer methodologies, such as agile development, tend to focus on refining an application through repeated iterations of the SDLC phases. So the way the SDLC model is applied might vary, but the basic concepts and phases are consistent enough for the purposes of this discussion. You can use these distinctions to help classify vulnerabilities, and in later chapters, you learn about the best phases in which to conduct different classes of reviews.



