Chapter 9: Performance Requirement Patterns


Overview

By "performance" we mean the same as in the Olympic Games: how fast, how long, how big, how much. But there are no medals for success, only boos from the crowd for failure. Performance deals with factors that can be measured, though that doesn't mean performance requirements always specify goals using absolute numbers. In fact, it's best to avoid numbers if you can (for reasons discussed soon). It also helps to avoid stating performance requirements in terms that are hard to measure-in particular, in terms that would take an unreasonable length of time to test, such as mean time between failure. As with the rest of this book, we're talking here about typical commercial software systems, not anything life-critical such as aircraft or medical instruments.

This chapter contains requirement patterns for five types of performance encountered often in commercial systems, as shown in Figure 9-1. (Note that the dynamic and static capacity requirement patterns are separate because their characteristics are distinctly different.) You might come across others. When specifying a requirement for another performance factor, consider the issues that apply to all (or most) types of performance, which are discussed in the "Common Performance Issues" subsection that comes next.

Figure 9-1: Requirement patterns in the performance domain

Unfortunately, there are no agreed definitions for the main terms used in this chapter, especially performance, capacity, and quality. I base the usage here upon the meanings we'd expect in a brochure for a new car: performance figures for top speed, acceleration, engine power, passenger capacity, load capacity, and so on. Its quality refers to intangibles: how well-built it is, how comfortable, what a pleasure to drive. These don't lend themselves to being quantified; they're the attributes the car company must convey with extravagant adjectives and flowery language. This chapter doesn't deal with quality requirements.

If an aspect of performance is worth specifying, it's worth specifying well, which demands thought and care. If it's not worth that effort, leave it out (or express it informally, not as requirements), because it will just waste everyone's time. Performance requirements are important because they can have a profound effect upon the architecture of the whole system; it's not always a matter of throwing in more hardware until it works well enough. We face a dilemma: either specify requirements in a way that's easy to write but a nightmare to build to and test, or formulate requirements that might look twisted and convoluted to a nontechnical audience. There are genuine difficulties to overcome; it's up to you whether you tackle them in the requirements or brush them under the carpet for the poor developers and testers to sort out.

Common Performance Issues

This section describes a number of issues that recur in the performance requirement patterns in this chapter. Some are likely to apply to all types of performance (not just those covered here); the rest apply to most of them. These issues are important and can have a profound impact on how performance should be specified and whether a performance requirement you write is meaningful at all. They are presented in rough order of impact (highest impact first).

Issue 1: Easy to Write Equals Hard to Implement

Most kinds of performance can be expressed very neatly, but when they are, they tend to be unhelpful. "The system shall be available 24×7, give users a one-second response time, handle 1,000 simultaneous users, process 200 orders per minute, and store 1,000,000 customers." A piece of cake to write! But for each performance target you set, ask yourself: what do you expect developers to do with it? Numerical performance targets like these are often so remote from the job of the software that it's reasonable to ask how developers are supposed to react to them: what should they do differently (assuming they code professionally as a matter of course)? If there are no obvious steps they can take, they can hardly be held responsible if the system fails to reach the target. Also, it's usually not possible to test whether a system achieves numerical performance targets until after it's been built (sometimes not until it's installed and running live), by which time it will take much fuss and rework to fix. Nevertheless, you should always get an early feel for the order of magnitude of each prospective performance target. For example, are we talking about hundreds of customers or millions?

Instead, if you can, specify requirements for steps to be taken to contribute to good performance in the area in question. All the performance requirement patterns in this chapter are "divertive" patterns (see Chapter 3, "Requirement Pattern Concepts") that try to steer you away from the obvious. (But be aware that this is the opposite of what other authors advise. They like the precision and apparent certainty of numeric performance targets. I will present my arguments and leave it to you to decide.)

The situation might appear a little different when you intend to purchase a solution: any off-the-shelf product either satisfies quantitative performance requirements or it doesn't. But if a third party is building a solution just for you, it's just as unfair to present them with purely quantitative targets as it would be to your own developers. And are you prepared to take their word that their solution performs as promised? Finally, it's untidy for the requirements to make assumptions about the nature of the solution.

Issue 2: Are We Specifying a Complete, Running System or Just the Software?

To go anywhere, software needs hardware to drive it, and the performance of the whole system (hardware plus software) depends on the power of the hardware. Software is to hardware as a trailer is to a tractor. Setting performance targets for software in isolation is meaningless and silly, yet it happens (and is worth a quiet chuckle when you see it). If any component that affects a performance target is outside your control, you can't promise to achieve it, so don't make it a requirement. But you can state it informally in the requirements specification, if you like. One way out is to define an indicative hardware set-up and specify performance requirements for it. (See the "Step 3: Choose Indicative Hardware Set-up" section in the throughput requirement pattern later in this chapter for further details.)


System performance can also depend on how third-party software products behave. If a particular call to such a product turns out to be slow, you could be unable to meet performance targets. If there is any third-party software, is it under your control or not? If it is, reassure yourself that it performs well enough. If it's not under your control (that is, it's outside the scope of the system as specified), don't hold your performance goals hostage to how well it performs.

Issue 3: Which Part of the System Does This Performance Target Apply To?

For most kinds of performance, a performance requirement can apply to a single function, a group of functions, a single interface, and so on, or it can apply to everything (all functions). Always make clear what the requirement applies to. Also, don't make it apply to more than it needs to, because satisfying it could be difficult (that is, expensive) or impossible for some things we might not care about anyway. For example, demanding a one-second user response time for everything might be impossible to achieve for some processing-intensive functions, and as soon as they're treated as exceptions, respect is lost for the whole requirement. (Developers also lose respect for anyone who writes unachievable requirements.)

Issue 4: Avoid Arbitrary Performance Targets

If someone gives you a performance goal, ask them where it came from and ask them to justify it. "Plucked out of thin air" isn't a good enough reason. Performance targets can result from a mixture of assumptions, reasoning, and calculations. If so, make all this background information available to your readers, either by including it in the requirements specification or by telling them where it can be found (for example, in a sizing model). Too many performance requirements are arbitrary. If there isn't a good enough reason for them, leave them out.
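To make this concrete, here is a minimal sketch (in Python, using entirely invented figures) of the kind of calculation that could sit behind a throughput target. The point is that every number traces back to a stated, checkable business assumption rather than thin air.

# Hypothetical derivation of a throughput target from stated business
# assumptions. Every figure is an assumption to confirm with the business.

orders_per_day = 10_000   # assumption: projected daily order volume
busy_hours_per_day = 8    # assumption: orders arrive within business hours
peak_to_average = 3.0     # assumption: peak load is three times the average

average_per_minute = orders_per_day / (busy_hours_per_day * 60)
peak_per_minute = average_per_minute * peak_to_average

print(f"Average load: {average_per_minute:.1f} orders/minute")
print(f"Peak target:  {peak_per_minute:.0f} orders/minute")

A reader can challenge any assumption and see the target change accordingly, which is exactly the transparency this issue asks for.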

Issue 5: How Critical Is This Performance Factor to the Business?

The severity of the damage done if an aspect of a system performs inadequately varies enormously: from disastrous to mildly irritating (or perhaps not even noticed). If the system runs out of free storage capacity (disk space), it could fail completely; if response times grow a little, it might not matter. So ask yourself how critical this performance factor is. What's the worst that could happen? If we risk serious damage, place extra stress on measuring and monitoring actual performance (which is the subject of the next issue). At the other end of the scale, if the potential damage is negligible, why bother to specify it at all?

If you have difficulty ascertaining from your customer how important this performance level is to them, ask how much extra they're prepared to pay to achieve it: an extra 10 percent of the total system cost? Fifty percent? One hundred percent? (These are the sorts of figures we could be talking about.) The answer doesn't translate directly into a priority or justify particular steps, but it does give a good idea of how seriously to treat it.

Issue 6: How Can We Measure Actual Performance?

Setting a target isn't much use unless you can tell how well you're doing against it. Who'd buy a car without a speedometer? Measuring actual performance is often left as a testing activity, with external tools wheeled in like the machines that monitor patients in a hospital. But it's much more convenient to have this ability built into the system itself. Then it can be used in a production system, and by developers. Some types of performance cannot be determined by the system itself (for example, the response time perceived by a remote user); other types of performance cannot easily be perceived externally (for example, how long an internal system process takes). Monitoring functions are a common subject of extra requirements in the performance requirement patterns in this chapter. Note that for some kinds of performance (such as response time), the act of measuring and recording performance could itself take time and effort and so affect the result a little-though we can be reassured that performance can only get better if such measuring is removed (or switched off).
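As an illustration of building measurement into the system itself, here's a minimal Python sketch. The "timed" helper and the monitoring switch are invented names for this example, and a real system would record the figures somewhere durable rather than print them.

import time
from contextlib import contextmanager

# Hypothetical built-in monitoring: record how long named operations take,
# with a switch so the (small) measurement overhead can be turned off.
MONITORING_ENABLED = True
_timings = {}  # operation name -> list of durations in seconds

@contextmanager
def timed(operation):
    if not MONITORING_ENABLED:
        yield
        return
    start = time.perf_counter()
    try:
        yield
    finally:
        _timings.setdefault(operation, []).append(time.perf_counter() - start)

# Usage: wrap the processing that a performance requirement applies to.
with timed("place_order"):
    time.sleep(0.05)  # stand-in for the real order-placement work

for name, samples in _timings.items():
    print(f"{name}: {len(samples)} calls, average {sum(samples) / len(samples):.3f}s")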

Monitoring functions are always useful in letting a system administrator see how well a system is running, but they're not usually seen as contributing to the system's business goals, so they're usually given a low priority or dropped altogether (and perhaps built quietly by developers for their own use). Arguing that they play a key part in meeting performance targets provides the solid justification they need in order to earn their rightful place in the requirements.

Issue 7: By When Does This Performance Target Need to Be Met?

Some performance targets reflect planned business volumes that will take time (perhaps years) to achieve. Always state the timeframe in such cases. This allows optimizations to be made at the best time for the business and the development team. In particular, it can save unwarranted effort being devoted to performance during the initial implementation phase, which is usually the busiest.

Issue 8: Put Only One Performance Target in Each Requirement

Don't lump several targets together. Separating them gives each target the prominence and respect it deserves, lets you give each one its own priority, and makes it easier (for testers in particular) to track each one.

Issue 9: What Can Be Done If This Performance Target Is Not Met?

Pondering this question can give you a useful insight into how seriously to treat this aspect of performance, that is, how much care it deserves and how big a mess we risk if we don't give it that care. Can it be improved by beefing up the hardware? Is tweaking the software likely to help? If the problem lies with a third-party product, are we stuck with it? If this aspect of performance isn't good enough, where does the responsibility lie: hardware, software, a bit of both, or will it be impossible to tell? Don't treat this issue as a way to assign blame (because blame comes into play only after a mess occurs and doesn't help prevent it) but as a way to understand the performance needs better so that there won't be a mess in the first place.

Sizing Model

The system aspects for which performance targets are commonly set aren't independent of one another: if your number of customers grows, you can expect a corresponding increase in transactions, disk storage used, and so on. It's useful to build a model of how these things relate to each other. A spreadsheet is the most convenient and flexible tool to use. Include explanations, state your assumptions, and generally make the whole model as transparent as possible.

The values in the sizing model can be divided into three types: variables, parameters, and conclusions. Variables are the key business numbers that drive everything else (such as how many customers we have). Keep the number of variables to a minimum, perhaps just one. Parameters reflect assumptions about how the business works in practice (such as the number of order inquiries for each order placed, how frequently an average customer visits our Web site, and how long they stay each time). Conclusions are values calculated by the model, which can be as numerous and sophisticated as you care to make them. The variables and parameters describe the volume of business, and the conclusions show the resulting load the system must carry, which is the basis for its performance goals.
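A spreadsheet is the natural home for such a model, but the structure is easy to show in code. This Python sketch uses invented business figures purely to illustrate the variable/parameter/conclusion split.

# Illustrative sizing model: one variable, a few parameters, and the
# conclusions calculated from them. All figures are invented.

# Variable: the key business number that drives everything else.
customers = 100_000

# Parameters: assumptions about how the business works in practice.
visits_per_customer_per_week = 2.0
orders_per_visit = 0.3
inquiries_per_order = 4
storage_per_customer_kb = 50

# Conclusions: the load the system must carry.
weekly_visits = customers * visits_per_customer_per_week
weekly_orders = weekly_visits * orders_per_visit
weekly_inquiries = weekly_orders * inquiries_per_order
customer_storage_gb = customers * storage_per_customer_kb / (1024 * 1024)

print(f"{weekly_orders:,.0f} orders/week, {weekly_inquiries:,.0f} inquiries/week")
print(f"{customer_storage_gb:.1f} GB of customer data")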

A sizing model lets you play with the variables and parameters, which is especially fruitful when discussing projected business volumes with senior executives and sales and marketing wizards.

A sizing model created at requirements time can be transformed and refined after development to reflect how the actual system behaves. Extra knowledge is available then that can be incorporated into the model, enabling it to be used to calculate the actual size of machine(s) needed to accommodate a particular volume of business. This is particularly useful if you're building a product, because you can plug in each customer's business volumes and have the sizing model work out what hardware they need.




