Lists, Piles, and Collections


The idea of collecting and organizing information about computer security vulnerabilities has a long history (see the box Academic Literature). More recently, a number of practitioners have developed "top ten" lists and other related collections based on experience in the field. The taxonomy introduced here negotiates a middle ground between rigorous academic studies and ad hoc collections based on experience.

Two of the most popular and useful lists are the "19 Sins" and the "OWASP top ten." The first list, at one month old as I write this, is carefully described in the new book 19 Deadly Sins of Software Security [Howard, LeBlanc, and Viega 2005]. The second is the "OWASP Top Ten Most Critical Web Application Security Vulnerabilities" available on the Web at <http://www.owasp.org/documentation/topten.html>. Both of these collections, though extremely useful and applicable, share one unfortunate property: an overabundance of complexity. My hard constraint to stick to seven things helps cut through the complexity.

By discussing the 19 Sins and OWASP top ten lists with respect to the taxonomy here, I hope to illustrate and emphasize why simplicity is essential to any taxonomy. The main limitation of both lists is that they mix specific types of errors and vulnerability classes and talk about them all at the same level of abstraction. The 19 Sins include both "Buffer Overflows" and "Failing to Protect Network Traffic" categories at the same level, even though the first is a very specific coding error (see the sketch below), while the second is a class comprising various kinds of errors. Similarly, OWASP's top ten includes "Cross Site Scripting (XSS) Flaws" and "Insecure Configuration Management" at the same level. This is a serious problem that leads to confusion among practitioners.
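To make the difference in abstraction concrete, consider what "a very specific coding error" looks like in practice. The fragment below is a minimal sketch of a classic buffer overflow (the function and buffer names are invented for illustration). The defect can be pinned to a single line of code; a class like "Failing to Protect Network Traffic" cannot be pointed to in any one statement.

    #include <string.h>

    /* A classic buffer overflow: a specific, line-level coding error.
     * If name is longer than 15 characters plus the terminator,
     * strcpy writes past the end of buf. */
    void greet(const char *name) {
        char buf[16];
        strcpy(buf, name);  /* no bounds check: the defect lives on this line */
    }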

My classification scheme consists of two hierarchical levels: kingdoms and phyla. Kingdoms represent classes of errors, while the phyla that comprise the kingdoms represent collections of specific errors. Even though the structure of my classification scheme is different from the structure of the 19 Sins and OWASP top ten lists, the categories that comprise these lists can be easily mapped to the kingdoms (as I show next).
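One way to picture the two-level structure is as a lookup from specific errors (phyla) to the class-level kingdoms that contain them. The sketch below uses the kingdom names introduced in this chapter, but the phylum entries are a small illustrative sample chosen for the example, not the full taxonomy.

    /* Kingdoms: classes of errors (the seven kingdoms plus Environment). */
    enum kingdom {
        INPUT_VALIDATION_AND_REPRESENTATION,
        API_ABUSE,
        SECURITY_FEATURES,
        TIME_AND_STATE,
        ERROR_HANDLING,
        CODE_QUALITY,
        ENCAPSULATION,
        ENVIRONMENT
    };

    /* Phyla: specific errors, each belonging to exactly one kingdom. */
    struct phylum {
        const char  *name;
        enum kingdom kingdom;
    };

    /* A few sample entries for illustration. */
    static const struct phylum phyla[] = {
        { "Buffer Overflow", INPUT_VALIDATION_AND_REPRESENTATION },
        { "SQL Injection",   INPUT_VALIDATION_AND_REPRESENTATION },
        { "Race Condition",  TIME_AND_STATE },
        { "Memory Leak",     CODE_QUALITY },
    };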

Academic Literature

All scientific disciplines benefit from a method for organizing their topic of study, and software security is no different. The value of a classification scheme is indisputable. A taxonomy is necessary in order to create a common vocabulary and an understanding of the many diverse ways computer security fails. The problem of defining a taxonomy has been of great interest since the mid-1970s. Several classification schemes have been proposed since then [Bishop 2003]. An excellent Web resource at UC Davis can be found at <http://isis.cs.ucdavis.edu/vuln/links.php>.

Vulnerabilities

One of the first studies of computer security and privacy was the RISOS (Research into Secure Operating Systems) project [Abbott et al. 1976]. RISOS proposed and described seven categories of operating system security defects. The purpose of the project was to understand security problems in existing operating systems, including MULTICS, TENEX, TOPS-10, GECOS, OS/MVT, SDS-940, and EXEC-8, and to determine ways to enhance the security of these systems.

The categories proposed in the RISOS project include the following:

  • Incomplete Parameter Validation

  • Inconsistent Parameter Validation

  • Implicit Sharing of Privileges/Confidential Data

  • Asynchronous Validation/Inadequate Serialization

  • Inadequate Identification/Authentication/Authorization

  • Violable Prohibition/Limit

  • Exploitable Logic Error

The study shows that a small number of fundamental defects recur in different contexts.

The objective of the Protection Analysis (PA) project was to enable anybody, with or without knowledge of computer security, to discover security errors in a system by using a pattern-directed approach [Bisbey and Hollingworth 1978]. The idea was to use formalized patterns to search for corresponding errors. The PA project was the first to explore automation of security defect detection. However, the procedure for reducing defects to abstract patterns was not comprehensive, and the technique could not be properly automated. The database of vulnerabilities collected in the study was never published.
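The PA pattern database itself was never published, but the pattern-directed idea survives in modern static analysis tools: reduce a defect class to a recognizable pattern, then scan code for matches. Here is a deliberately naive sketch of the idea, assuming simple substring patterns (real tools match against parsed program representations, so a scanner like this both over- and under-reports):

    #include <stdio.h>
    #include <string.h>

    /* Toy pattern-directed scan: flag source lines containing calls
     * associated with a defect pattern. */
    static const char *patterns[] = { "strcpy(", "gets(", "sprintf(" };

    static void scan(FILE *src) {
        char line[1024];
        int lineno = 0;
        while (fgets(line, sizeof line, src)) {
            lineno++;
            for (size_t i = 0; i < sizeof patterns / sizeof patterns[0]; i++)
                if (strstr(line, patterns[i]))
                    printf("line %d: matches %s\n", lineno, patterns[i]);
        }
    }

    int main(int argc, char **argv) {
        FILE *f = (argc > 1) ? fopen(argv[1], "r") : stdin;
        if (!f) return 1;
        scan(f);
        return 0;
    }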

Landwehr, Bull, and McDermott classified each vulnerability from three perspectives: genesis (how the problem entered the system), time (at which point in the production cycle the problem entered the system), and location (where in the system the problem is manifest) [Landwehr, Bull, and McDermott 1993]. Defects by genesis were broken down into intentional and inadvertent, where the intentional class was further broken down into malicious and non-malicious. Defects by time of introduction were broken down into development, maintenance, and operation, where the development class was further broken down into design, source code, and object code. Defects by location were broken down into software and hardware, where the software class was further broken down into operating system, support, and application.

The advantage of this type of hierarchical classification is the convenience of identifying strategies to remedy security problems. For example, if most security issues are introduced inadvertently, increasing resources devoted to code reviews becomes an effective way of increasing the security of the system. The biggest disadvantage of this scheme is the inability to classify some existing vulnerabilities. For example, if it is not known how the vulnerability entered the system, it cannot be classified by genesis at all.
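In code, the scheme amounts to three independent axes per flaw. A minimal sketch follows, with an explicit "unknown" value on each axis to capture the classification gap just described (the example flaw is invented for illustration):

    /* Landwehr, Bull, and McDermott: three classification axes. */
    enum genesis  { GEN_UNKNOWN, GEN_MALICIOUS, GEN_NONMALICIOUS,
                    GEN_INADVERTENT };
    enum phase    { PHASE_UNKNOWN, PHASE_DESIGN, PHASE_SOURCE_CODE,
                    PHASE_OBJECT_CODE, PHASE_MAINTENANCE, PHASE_OPERATION };
    enum location { LOC_UNKNOWN, LOC_OS, LOC_SUPPORT, LOC_APPLICATION,
                    LOC_HARDWARE };

    struct flaw {
        const char   *description;
        enum genesis  genesis;   /* how it entered the system         */
        enum phase    phase;     /* when in the life cycle it entered */
        enum location location;  /* where in the system it manifests  */
    };

    /* The weakness noted above: with genesis unknown, the flaw cannot
     * be placed on that axis at all. */
    static const struct flaw example = {
        "unchecked length in network parser",
        GEN_UNKNOWN, PHASE_SOURCE_CODE, LOC_APPLICATION
    };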

The schemes discussed here share several limitations. One is the breadth of the categories, which makes classification ambiguous: in some cases, one issue can be classified in more than one category. The category names, while useful to some groups of researchers, are too generic to be quickly intuitive to a developer in day-to-day work. Additionally, these schemes focus mostly on operating system security problems and do not classify those associated with user-level software. Finally, these taxonomies mix implementation-level and design-level defects and are not consistent about defining categories with respect to the cause or the effect of a problem.

Attacks

A good list of attack classes is provided by Cheswick, Bellovin, and Rubin [2003]. The list includes the following:

  • Stealing Passwords

  • Social Engineering

  • Bugs and Back Doors

  • Authentication Failures

  • Protocol Failures

  • Information Leakage

  • Exponential Attacks: Viruses and Worms

  • Denial-of-Service Attacks

  • Botnets

  • Active Attacks

A thorough description with examples is provided for each class. These attack classes are applicable to a wide range of software, including user-level enterprise software. This fact distinguishes the list from other classification schemes. The classes are simple and intuitive. However, this list defines attack classes rather than categories of common coding errors that cause these attacks.

A similar but more thorough list of attack patterns is introduced in Exploiting Software [Hoglund and McGraw 2004]. Attack-based approaches rely on knowing your enemy and assessing the possibility of a similar attack; they represent the black hat side of the software security equation. A taxonomy of coding errors is, strangely, more positive in nature and is most useful to the white hat side of the software security world. In the end, both kinds of approaches are valid and necessary.

Toward a Taxonomy

The classification scheme proposed by Aslam is the only precise scheme discussed here [Aslam 1995]. In this scheme, each vulnerability belongs to exactly one category. The decision procedure for classifying an error consists of a set of questions for each vulnerability category. Aslam's system is well defined and offers a simple way to identify defects by similarity. Another contribution of Aslam's taxonomy is that it draws on software fault studies to develop its categories. However, it focuses exclusively on implementation issues in the UNIX operating system and offers categories that are still too broad for my purpose.
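Aslam's decision procedure can be pictured as an ordered series of yes/no questions in which the first "yes" fixes the category, which is what guarantees that each vulnerability lands in exactly one bucket. The questions, fields, and category names in the sketch below are illustrative placeholders, not Aslam's actual decision procedure:

    #include <stdbool.h>

    /* Details of the vulnerability being classified; the fields here
     * are invented for illustration. */
    struct report {
        bool violates_spec;          /* code departs from its specification  */
        bool env_assumption_failed;  /* an environment assumption was broken */
    };

    enum category { CODING_FAULT, EMERGENT_FAULT, UNCLASSIFIED };

    /* Ordered questions: the first "yes" determines the single category. */
    enum category classify(const struct report *r) {
        if (r->violates_spec)         return CODING_FAULT;   /* question 1 */
        if (r->env_assumption_failed) return EMERGENT_FAULT; /* question 2 */
        return UNCLASSIFIED;
    }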

The most recent classification scheme on the scene is the unpublished PLOVER (Preliminary List of Vulnerability Examples for Researchers) project [Christey 2005]. With twenty-eight main categories comprising almost three hundred subcategories, Christey's scheme sits at the opposite end of the ambiguity spectrum from mine. Not surprisingly, its vulnerability categories are much more specific than those in any of the other taxonomies discussed here.

PLOVER is an extension of Christey's earlier work in assigning CVE (Common Vulnerabilities and Exposures) names to publicly known vulnerabilities. An attempt to draw parallels between theoretical attacks and vulnerabilities known in practice is an important contribution and a big step forward from most of the earlier schemes.


Nineteen Sins Meet Seven Kingdoms

  1. Input Validation and Representation

    Sin: Buffer Overflows

    Sin: Command Injection

    Sin: Cross-Site Scripting

    Sin: Format String Problems

    Sin: Integer Range Errors

    Sin: SQL Injection

  2. API Abuse

    Sin: Trusting Network Address Information

  3. Security Features

    Sin: Failing to Protect Network Traffic

    Sin: Failing to Store and Protect Data

    Sin: Failing to Use Cryptographically Strong Random Numbers

    Sin: Improper File Access

    Sin: Improper Use of SSL

    Sin: Use of Weak Password-Based Systems

    Sin: Unauthenticated Key Exchange

  4. Time and State

    Sin: Signal Race Conditions

    Sin: Use of "Magic" URLs and Hidden Forms

  5. Error Handling

    Sin: Failure to Handle Errors

  6. Code Quality

    Sin: Poor Usability

  7. Encapsulation

    Sin: Information Leakage

  • Environment

The 19 Sins are an extremely important collection of software security problems at many different levels. By fitting them into the seven kingdoms, a cleaner organization begins to emerge.

Seven Kingdoms and the OWASP Ten

Top ten lists are appealing, especially given the cultural phenomenon that is David Letterman. The OWASP top ten list garners much attention because it is both short and useful. Once again, a level-blending problem is apparent in the OWASP list, but this is easily resolved by appealing to the seven kingdoms.

  1. Input Validation and Representation

    OWASP A1: Unvalidated Input

    OWASP A4: Cross-Site Scripting (XSS) Flaws

    OWASP A5: Buffer Overflows

    OWASP A6: Injection Flaws

  2. API Abuse

  3. Security Features

    OWASP A2: Broken Access Control

    OWASP A8: Insecure Storage

  4. Time and State

    OWASP A3: Broken Authentication and Session Management

  5. Error Handling

    OWASP A7: Improper Error Handling

  6. Code Quality

    OWASP A9: Denial of Service

  7. Encapsulation

  • Environment

    OWASP A10: Insecure Configuration Management



