Exploring Nonfunctional Requirements

   

In Chapter 20 we introduced the concept of nonfunctional requirements and noted that they can play a crucial role in defining a system. To assist with reasoning, discovery, and completeness, we suggested that these requirements be organized into four categories: usability, reliability, performance, and supportability. In the next sections we take a closer look at each of these categories and provide some guidelines to help you discover and record such requirements. Later, we'll briefly discuss one final requirements category, "other," the catchall placeholder for everything else we need to identify and record but that did not fit conveniently anywhere else.

Usability

In today's software products, ease of use may rank as one of the top criteria for commercial success and/or successful user adoption. However, since usability tends to be in the eye of the beholder, specifying usability can present a formidable challenge for the requirements team. How do we specify such a fuzzy set of requirements? There is no simple solution to this problem, and the best we can do is to offer a set of guidelines (or at least things to think about) to help your team address the usability requirements challenge. Some suggestions follow.

  • Specify the required training time for a user to become minimally productive (able to accomplish simple tasks) and operationally productive (able to accomplish normal day-to-day tasks). This may need to be further described in terms of novice users, who may never have seen a computer or an application of this type before, as well as normal users and "power" users.

  • Specify measurable task times for typical tasks or transactions that the end user will be carrying out. If we're building a system for order entry, it's likely that the most common tasks carried out by end users will be entering, deleting, or modifying orders and checking on order status. Once the users have been trained to perform those tasks, how long should it take them to enter a typical order? Of course, this could be affected by performance issues in the technical implementation (such as network speed, network capacity, memory, and CPU power) that collectively determine the response time provided by the system, but task-performance times are also strongly affected by the usability of the system, and we should be able to specify that separately.

  • Compare the usability of the new system with other state-of-the-art systems that the user community knows and likes. Thus, the requirement might state, "The new system shall be judged by 90 percent of the user community to be at least as usable as the existing XYZ system."

  • Specify the existence and required features of online help systems, wizards, tool tips, context-sensitive help, user manuals, and other forms of documentation and assistance.

  • Follow conventions and standards that have been developed for the human-to-machine interface. Having a system work "just like what I'm used to" can be accomplished by following consistent standards from application to application. For example, you can specify a requirement to conform to common usability standards, such as IBM's Common User Access (CUA) standards or the Windows applications standards published by Microsoft.

Several interesting attempts to strengthen the fuzzy notion of usability have been made. One of the more interesting efforts has resulted in the "User's Bill of Rights" [Karat 1998]. The bill contains ten key points.

  1. The user is always right. If there is a problem with the use of the system, the system is the problem, not the user.

  2. The user has the right to easily install and uninstall software and hardware systems without negative consequences.

  3. The user has a right to a system that performs exactly as promised.

  4. The user has a right to easy-to-use instructions (user guides, online or contextual help, and error messages) for understanding and utilizing a system to achieve desired goals and recover efficiently and gracefully from problem situations.

  5. The user has a right to be in control of the system and to be able to get the system to respond to a request for attention.

  6. The user has the right to a system that provides clear, understandable, and accurate information regarding the task it is performing and the progress toward completion.

  7. The user has a right to be clearly informed about all system requirements for successfully using software or hardware.

  8. The user has a right to know the limits of the system's capabilities.

  9. The user has a right to communicate with the technology provider and receive a thoughtful and helpful response when raising concerns.

  10. The user should be the master of software and hardware technology, not vice versa. Products should be natural and intuitive to use.

Note that some of the topics covered in the Bill of Rights are essentially unmeasurable and are probably not good candidates for requirements per se. On the other hand, the bill should be useful as a starting point in developing questions and defining requirements for the usability of the proposed product.

Reliability

Of course, nobody likes bugs, defects, system failures, or lost data, and in the absence of any reference to such phenomena in the requirements, the user will naturally assume that none will exist. But in today's computer-literate world, even the most optimistic user is aware that things do go wrong. Thus, the requirements should describe the degree to which the system must behave in a user-acceptable fashion. This typically includes the following issues:

  • Availability. The system must be available for operational use during a specified percentage of the time. In the extreme case, the requirement(s) might specify "nonstop" availability, that is, 24 hours a day, 365 days a year. It's more common to see a stipulation of 99 percent availability, or of 99.9 percent availability between the hours of 8 A.M. and midnight. Note that the requirement(s) must define what "availability" means. Does 100 percent availability mean that all of the users must be able to use all of the system's services all of the time?

  • Mean time between failures (MTBF). This is usually specified in hours, but it also could be specified in days, months, or years. Again, this requires precision: the requirement(s) must carefully define what is meant by a "failure."

  • Mean time to repair (MTTR). How long is the system allowed to be out of operation after it has failed? A range of MTTR values may be appropriate; for example, the user might stipulate that 90 percent of all system failures must be repairable within 5 minutes and that 99.9 percent of all failures must be repairable within 1 hour. Again, precision is important: the requirement(s) must clarify whether "repair" means that all of the users will once again be able to access all of the services or whether a subset of full recovery is acceptable.

  • Accuracy. What precision is required in systems that produce numerical outputs? Must the results in a financial system, for example, be accurate to the nearest penny or to the nearest dollar?

  • Maximum bugs, or defect rate. This is usually expressed in terms of bugs/KLOC (thousands of lines of code) or bugs per function point.

  • Bugs per type. This is usually categorized in terms of minor, significant, and critical bugs. Definitions are important here, too: the requirement(s) must define what is meant by a "critical" bug, such as complete loss of data or complete inability to use certain parts of the system.
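To sanity-check figures like those above, note that an availability target translates directly into a downtime budget, and that steady-state availability follows from MTBF and MTTR. The sketch below illustrates both relationships; the function names and sample figures are ours, not part of any particular requirements document.

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def downtime_budget(avail: float, period_s: int = SECONDS_PER_YEAR) -> float:
    """Maximum downtime (seconds) permitted by an availability fraction."""
    return (1.0 - avail) * period_s

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability implied by MTBF and MTTR."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# 99 percent availability permits about 87.6 hours of downtime per year;
# 99.9 percent permits about 8.8 hours.
print(downtime_budget(0.99) / 3600)
print(downtime_budget(0.999) / 3600)

# A system that fails every 1,000 hours on average and takes 1 hour to repair:
print(availability(1000, 1))  # roughly 0.999, i.e., "three nines"
```

Working the numbers this way early often reveals that a casually stated "99.9 percent" target implies a far tighter repair regime than the stakeholders intended.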

In some cases, the requirements may specify some "predictor" metrics for reliability. A typical example of this is the use of a complexity metric, such as the cyclomatic complexity metric, which can be used to assess the complexity, and therefore the potential "bugginess," of a software program.
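A simplified form of the cyclomatic metric is one plus the number of decision points in a routine. The sketch below computes this for Python source using the standard `ast` module; the set of node types counted is a simplification of full McCabe counting, so treat it as illustrative rather than as a production metric tool.

```python
import ast

# Node types treated as decision points (a simplified McCabe rule set).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """One plus the number of decision points; each `and`/`or` adds one."""
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, BRANCH_NODES):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            decisions += len(node.values) - 1
    return decisions + 1

code = """
def classify(n):
    if n < 0 and n % 2:
        return "negative odd"
    for i in range(n):
        if i == 3:
            break
    return "other"
"""
print(cyclomatic_complexity(code))  # 5: two ifs, one `and`, one loop, plus 1
```

A requirement might then cap this metric per routine (say, at 10) as a crude but checkable predictor of maintainability.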

Performance

Performance requirements usually cover such categories as the following:

  • Response time for a transaction: average, maximum

  • Throughput: transactions per second

  • Capacity: the number of customers or transactions the system can accommodate

  • Degradation modes: the acceptable mode of operation when the system has been degraded
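A response-time requirement phrased as "average, maximum, and 95th percentile" is only verifiable if everyone agrees on how those figures are computed from measured samples. The sketch below shows one common convention (nearest-rank percentile); the sample data and millisecond units are illustrative assumptions.

```python
import math
import statistics

def response_time_report(samples_ms: list) -> dict:
    """Summarize measured transaction response times (milliseconds)."""
    ordered = sorted(samples_ms)
    # Nearest-rank 95th percentile: the value at rank ceil(0.95 * n).
    p95_index = math.ceil(0.95 * len(ordered)) - 1
    return {
        "average": statistics.mean(ordered),
        "maximum": ordered[-1],
        "p95": ordered[p95_index],
    }

samples = [120, 95, 110, 480, 105, 98, 102, 115, 99, 101]
print(response_time_report(samples))
```

Note how a single outlier (480 ms) dominates both the maximum and the 95th percentile here, which is exactly why a requirement that specifies only the average can hide unacceptable worst-case behavior.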

If the new system has to share hardware resources with other systems or applications, it may also be necessary to stipulate the degree to which the implementation will make "civilized" use of such scarce resources as the CPU, memory, channels, disk storage, and network bandwidth.

Supportability

Supportability is the ability of the software to be easily modified to accommodate enhancements and repairs. For some application domains, the likely nature of future enhancements can be anticipated in advance, and a requirement could stipulate the "response time" of the maintenance group for simple enhancements, moderate enhancements, and complex enhancements.

For example, suppose we are building a new payroll system. One of the many requirements of such a system is that it must compute the government withholding taxes for each employee. The user knows, of course, that the government changes the algorithm for this calculation each year. This change involves two numbers: instead of withholding X percent of an employee's gross salary up to a maximum of $P, the new law requires the payroll system to withhold Y percent up to a maximum of $Q. As a result, a requirement might say, "Modifications to the system for a new set of withholding tax rates shall be accomplished by the team within 1 day of notification by the tax regulatory authority."
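One common way to make the one-day turnaround achievable is to keep the rate and cap in a data table rather than in code, so the annual change becomes a configuration edit. The sketch below is hypothetical: the rates, caps, and years are invented placeholder values, not real tax figures.

```python
# Hypothetical tax parameters, kept in data rather than code so that an
# annual rate change is a table edit, not a program modification.
WITHHOLDING_RULES = {
    2023: {"rate": 0.20, "cap": 8000.00},  # placeholder values
    2024: {"rate": 0.22, "cap": 8500.00},  # placeholder values
}

def withholding(gross_salary: float, year: int) -> float:
    """Withhold `rate` percent of gross salary, up to a maximum of `cap`."""
    rule = WITHHOLDING_RULES[year]
    return min(gross_salary * rule["rate"], rule["cap"])

print(withholding(30000.00, 2024))  # 22 percent of 30,000
print(withholding(50000.00, 2024))  # capped at the 8,500 maximum
```

Structuring the calculation this way is exactly the kind of design constraint a supportability requirement might impose, as the text goes on to discuss.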

But suppose that the tax authority also periodically introduced "exceptions" to this algorithm: "For left-handed people with blue eyes, the withholding tax rate shall be Z percent, up to a maximum of $R." Modifications of this kind would be more difficult for the software people to anticipate. Although they might try to build their system in as flexible a manner as possible, they would still argue that the modification for left-handed employees falls into the category of "medium-level" changes, for which the requirement might stipulate a response time of 1 week. Assuming that such a "requirement" made any sense at all, it could probably be stated only in terms of goals and intentions; it would be difficult to measure and verify such a requirement.

However, what the requirement statement can do, in order to increase the chances that the system will be supportable in the manner just described, is stipulate the use of certain programming languages, database management system (DBMS) environments, programming tools, table-driven support utilities, maintenance routines, programming styles and standards, and so on. (In this case, these really become design constraints, as we'll see below.) Whether this produces a system that can be maintained more easily is a topic for debate and discussion, but perhaps we can get closer to the goal.

   


Managing Software Requirements: A Use Case Approach
ISBN: 032112247X
Year: 2003
Pages: 257