Touchpoint Process: Abuse Case Development


Unfortunately, abuse cases are only rarely used in practice even though the idea seems natural enough. Perhaps a simple process model will help clarify how to build abuse cases and thereby fix the adoption problem. Figure 8-1 shows such a model.

Figure 8-1. A simple process diagram for building abuse cases.


Abuse cases are to be built by a team of requirements people and security analysts (called RAs and SAs in the picture). This team starts with a set of requirements, a set of standard use cases (or user stories), and a list of attack patterns.[3] This raw material is combined by the process I describe to create abuse cases.

[3] Attack patterns à la Exploiting Software [Hoglund and McGraw 2004] are not the only source to use for thinking through possible attacks. A good low-octane substitute might be the STRIDE model list of attack categories: Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. Cycling through this list of six attack categories one at a time is likely to provide insight into your system. For more on STRIDE, see [Howard and LeBlanc 2003].
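Cycling through the six STRIDE categories can be made mechanical. Here is a minimal sketch of the exercise; the STRIDE category names come from Howard and LeBlanc, but the system elements and the prompt format are made-up illustrations, not anything prescribed by the model:

```python
# Sketch: cycle the six STRIDE attack categories over each system element,
# producing one brainstorming prompt per (element, category) pair.
# The element names below are hypothetical examples.
STRIDE = [
    "Spoofing",
    "Tampering",
    "Repudiation",
    "Information disclosure",
    "Denial of service",
    "Elevation of privilege",
]

def brainstorm(elements):
    """Yield one analysis prompt per element/category combination."""
    for element in elements:
        for category in STRIDE:
            yield f"Could {category} apply to the {element}?"

prompts = list(brainstorm(["login form", "session token", "audit log"]))
# Three elements times six categories gives eighteen prompts to work through.
```

The value is in forcing the team to consider every combination rather than only the attacks that come to mind first.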

The first step involves identifying and documenting threats. Note that I am using the term threat in the old-school sense. A threat is an actor or agent who carries out an attack. Vulnerabilities and risks are not threats.[4] Understanding who might attack you is really critical. Are you likely to come under attack from organized crime like the Russian mafia? Or are you more likely to be taken down by a university professor and the requisite set of overly smart graduate students all bent on telling the truth? Thinking like your enemy is an important exercise. Knowing who your enemy is likely to be is an obvious prerequisite.

[4] Microsoft folks use the term threat incorrectly (and also very loudly). When they say "threat modeling," they really mean "risk analysis." This is unfortunate.

Given an understanding of who might attack you, you're ready to get down to the business of creating abuse cases. In the gray box in the center of Figure 8-1, the two critical activities of abuse case development are shown: creating anti-requirements and creating an attack model.

Creating Anti-Requirements

When developing a software system or a set of software requirements, thinking explicitly about the things that you don't want your software to do is just as important as documenting the things that you do want. Naturally, the things that you don't want your system to do are very closely related to the requirements. I call them anti-requirements. Security analysts, working with requirements analysts (both business and technical), generate anti-requirements by analyzing requirements and use cases against the list of threats in order to identify and document attacks that will cause requirements to fail. The object is explicitly to undermine requirements.

Anti-requirements provide insight into how a malicious user, attacker, thrill seeker, competitor (in other words, a threat) can abuse your system. Just as security requirements result in functionality that is built into a system to establish accepted behavior, anti-requirements are established to determine what happens when this functionality goes away. When created early in the software development lifecycle and revisited throughout, these anti-requirements provide valuable input to developers and testers.
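One way to keep anti-requirements tied to the requirements they undermine is to record them as paired artifacts. The following is a minimal sketch of that bookkeeping, under my own assumptions; the `Requirement` class, the sample requirement text, and the threat/attack labels are all illustrative, not part of any standard notation:

```python
# Sketch: pair each requirement with the anti-requirements that describe
# how a threat could make it fail. All names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Requirement:
    text: str
    anti_requirements: list = field(default_factory=list)

    def undermine(self, threat: str, attack: str) -> None:
        """Record an anti-requirement: this threat, using this attack,
        causes the requirement to fail."""
        self.anti_requirements.append(
            f"{threat} defeats '{self.text}' via {attack}"
        )

req = Requirement("Sessions expire after 15 minutes of inactivity")
req.undermine("malicious insider", "clock manipulation")
req.undermine("thrill seeker", "session token replay")
```

Revisiting these pairs throughout the lifecycle gives developers and testers a concrete list of failure stories to design and test against.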

Because security requirements are usually about security functions and/or security features, anti-requirements are often tied up in the lack of or failure of a security function. For example, if your system has a security requirement calling for use of crypto to protect essential movie data written on disk during serialization, an anti-requirement related to this requirement involves determining what happens in the absence of that crypto. Just to flesh things out, assume in this case that the threat in question is a group of academics. Academic security analysts are unusually well positioned to crack crypto relative to thrill-seeking script kiddies. Grad students have a toolset, lots of background knowledge, and way too much time on their hands. If the crypto system fails in this case (or better yet, is made to fail), giving the attacker access to serialized information on disk, what kind of impact will that have on the system's security? How can we test for this condition?
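The question "How can we test for this condition?" can be answered directly in code. Here is a hedged sketch of an abuse-case-style test for the serialization example above; the toy XOR "cipher," the `write_to_disk` function, and the plaintext-leak check are stand-ins I invented for illustration, not the system's real crypto layer:

```python
# Sketch: probe what happens when the crypto layer is absent or fails
# during serialization. The XOR "cipher" below is a deliberate toy
# stand-in for real cryptography.
SECRET = b"essential movie data"

def toy_encrypt(data: bytes, key: int) -> bytes:
    """Toy XOR transform standing in for the real crypto function."""
    return bytes(b ^ key for b in data)

def write_to_disk(data: bytes, crypto_enabled: bool) -> bytes:
    """Serialize data, applying crypto unless it has failed (or was
    made to fail by an attacker)."""
    return toy_encrypt(data, key=0x5A) if crypto_enabled else data

def leaks_plaintext(on_disk: bytes) -> bool:
    """Anti-requirement check: does the on-disk form expose the secret?"""
    return SECRET in on_disk

# With crypto working, the secret is not recoverable by inspection.
assert not leaks_plaintext(write_to_disk(SECRET, crypto_enabled=True))
# With crypto failed, the anti-requirement is realized: plaintext leaks.
assert leaks_plaintext(write_to_disk(SECRET, crypto_enabled=False))
```

A test like this turns the anti-requirement into something a test harness can exercise every build, rather than a story that lives only in a document.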

Abuse cases based on anti-requirements lead to stories about what happens in the case of failure, especially security apparatus failure.

Coder's Corner

Here is a systematic approach to anti-requirements suggested by Fabio Arciniegas. This approach formalizes the idea of anti-requirements by focusing on the three key aspects of requirements:

  1. Input

  2. Output

  3. Importance

Use cases and functional specifications are often presented as shall/given duets. For example: The system shall produce a unique identifier valid for N days into the future given a present time, a valid authorization token, and N. One way of creating anti-requirements from requirements is to validate the limits of the given part against a set of weighted failures in the shall part. The game of systematically approaching what can go wrong can be played by defining the goal (distance 0) and a weighted perimeter of failure around it:

Distance 0: Valid response

Distance 1: Denied request

------------------------------------- Threshold

Distance 2: Non-unique ID returned

Distance 3: System crash

The combinatory game involves breaking assumptions in the given part of the requirement by asking various questions: What if N < 0? What if N < 0 and authorization is invalid? and so on. Any combination of failed input that results in an output beyond the threshold is a major concern.
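Arciniegas's distance-and-threshold game can itself be sketched in code. In the sketch below, the distance table mirrors the perimeter above; the `issue_id` function is a toy model of the example requirement (produce a unique ID valid N days out), and its failure behavior under broken input is a hypothetical assumption for illustration:

```python
# Sketch of the weighted-perimeter game: score each outcome by its
# distance from the goal and flag input combinations whose outcome
# lands beyond the threshold. The toy system's behavior is assumed.
THRESHOLD = 1  # outcomes at distance > 1 are major concerns

DISTANCE = {
    "valid response": 0,
    "denied request": 1,
    "non-unique id": 2,
    "system crash": 3,
}

def issue_id(n: int, token_valid: bool) -> str:
    """Toy model of 'produce a unique ID valid N days out'."""
    if not token_valid:
        return "denied request"
    if n < 0:
        return "system crash"  # hypothetical buggy behavior when N < 0
    return "valid response"

def major_concerns(cases):
    """Return the broken-input combinations landing past the threshold."""
    return [c for c in cases if DISTANCE[issue_id(*c)] > THRESHOLD]

# Systematically break assumptions in the "given" part: N < 0,
# invalid token, and their combination.
cases = [(30, True), (-1, True), (30, False), (-1, False)]
concerns = major_concerns(cases)
```

Here only the N < 0 case with a valid token crosses the threshold (a crash, distance 3); the denied requests stay inside the acceptable perimeter.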

This approach not only provides a systematic way to develop anti-requirements from requirements but is also useful for establishing a contractual basis for unacceptable misbehavior. That basis is fundamental if you are outsourcing development, at least if you want to avoid the retort, "But it does what you said it should, given the input you said it would have!"


Creating an Attack Model

An attack model comes about by explicit consideration of known attacks or attack types. Given a set of requirements and a list of threats, the idea here is to cycle through a list of known attacks one at a time and to think about whether the "same" attack applies to your system. Note that this kind of process lies at the heart of Microsoft's STRIDE model [Howard and LeBlanc 2003]. Attack patterns are extremely useful for this activity. An incomplete list of attack patterns can be seen in the box Attack Patterns from Exploiting Software [Hoglund and McGraw 2004] on pages 218 through 221. To create an attack model, do the following:

  • Select those attack patterns relevant to your system. Build abuse cases around those attack patterns.

  • Include anyone who can gain access to the system because threats must encompass all potential sources of danger to the system.
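The selection step above can be sketched as a simple filter over a pattern catalog. In the sketch below, the pattern names are a small sample in the spirit of the Exploiting Software list, but the tagging scheme that maps patterns to system traits is my own illustrative assumption:

```python
# Sketch: select attack patterns relevant to the system's traits and
# seed one abuse-case stub per (threat, pattern) pair. The tag scheme
# is hypothetical.
ATTACK_PATTERNS = {
    "Make the client invisible": {"client-server"},
    "Leverage executable code in nonexecutable files": {"file-upload"},
    "Argument injection": {"cli", "web"},
    "Session ID, resource ID, and blind trust": {"web"},
}

def select_patterns(system_traits: set) -> list:
    """Keep only the patterns whose tags intersect the system's traits."""
    return [name for name, tags in ATTACK_PATTERNS.items()
            if tags & system_traits]

def abuse_case_stubs(system_traits: set, threats: list) -> list:
    """One abuse-case stub per (threat, relevant pattern) pair."""
    return [f"{threat} uses '{pattern}'"
            for pattern in select_patterns(system_traits)
            for threat in threats]

stubs = abuse_case_stubs({"web"}, ["organized crime", "academic team"])
```

Each stub then gets fleshed out into a full abuse case describing how the system reacts to that attack.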

Together, the resulting attack model and anti-requirements drive out abuse cases that describe how your system reacts to an attack and which attacks are likely to happen. Abuse cases and stories of possible attacks are very powerful drivers for both architectural risk analysis and security testing.

The simple process shown in Figure 8-1 results in a number of useful artifacts. The simple activities are designed to create a list of threats and their goals (which I might call a "proper threat model"), a list of relevant attack patterns, and a unified attack model. These are all side effects of the anti-requirements and attack model activities. More important, the process creates a set of ranked abuse cases: stories of what your system does under those attacks most likely to be experienced.

As you can see, this is a process that requires extensive use of your black hat. The more experience and knowledge you have about actual software exploits and real computer security attacks, the more effective you will be at building abuse cases (see Chapter 9).




Software Security: Building Security In
Author: Gary McGraw
ISBN: 0321356705
Year: 2004
Pages: 154
