Three factors affect how serious a problem is for users:
Frequency: How many users will encounter the problem? If a relatively small number of users are hurt by it, it's a lower severity problem.
Impact: How much trouble does the problem cause to those users who encounter it? This can range from almost imperceptible irritation to losing hours of work or even deciding to leave a Web site.
Persistence: Is the problem a one-time impediment to users or does it cause trouble repeatedly? Many usability problems have low persistence because once people figure them out, they can overcome them in the future. Other designs are so confusing that people get lost over and over again. Design mistakes of this kind deserve a higher severity rating than those that bite only once.
To calculate the total severity score of a usability problem, we multiply the frequency rating by the impact rating, then multiply that number by the square root of the persistence rating and divide the result by the square root of 10. (Dividing by the square root of 10 simplifies the rating by keeping the maximum possible score at 100.)
It's obvious why we multiply frequency by impact: Essentially we're multiplying how many users are hurt by how much they are hurt, and the result is an estimate of total harm done. It may be a bit of a surprise, though, that we then multiply that answer by the square root of the persistence score instead of by the full persistence score. This is because we are dealing with Web sites, where there is not that much persistent use. Users usually visit Web sites only a few times, and if the site has sufficiently hurtful design mistakes, they won't return at all. Thus, we can't give full weight to the idea that users would hypothetically continue to be hurt on subsequent visits, because for the most part they won't be revisiting.
For each usability problem, we rate each of the three attributes on a scale of 1 to 10, with 10 indicating those that cause the most trouble for the most people. From these scores, we can calculate how severe the problem is. These screen shots illustrate low- and high-severity problems.
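The severity calculation above can be sketched as a short function. This is simply the stated formula expressed in code; the two sample ratings are hypothetical, chosen to mirror the low- and high-severity examples that follow.

```python
import math

def severity(frequency, impact, persistence):
    """Severity score: frequency * impact * sqrt(persistence) / sqrt(10).

    Each attribute is rated on a scale of 1 to 10. Dividing by sqrt(10)
    caps the maximum possible score at 100 (10 * 10 * sqrt(10) / sqrt(10)).
    """
    return frequency * impact * math.sqrt(persistence) / math.sqrt(10)

# Hypothetical ratings for a minor layout glitch: few users hit it,
# the impact is small, and it doesn't trip people up twice.
low = severity(2, 1, 1)    # about 0.6

# Hypothetical ratings for a trust-destroying "About Us" page:
# nearly all users, high impact, high persistence.
high = severity(9, 9, 9)   # about 76.8

print(round(low, 1), round(high, 1))
```

Note how the square root damps persistence: raising persistence from 1 to 9 only triples the score rather than multiplying it ninefold.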
A low-severity usability problem: The problem here is that the numbers on the list of checkboxes do not appear to be in numerical sequence, making them seem random. The underlying design problem is that the list looks as if it has been broken up into two columns, whereas in fact it's structured by rows. This problem has a very low frequency of occurrence, because most people either click the map or click the name of the area they are interested in; very few people try to match the map and the list. For those users who do try to match them, this is still a very low-impact problem because the list is so small. You need to spend a few extra seconds scanning it, and that's all. Finally, the persistence of the problem is low because if you return to this screen, you know how to deal with it. You are not likely to spend even a few seconds thinking about the mismatch a second time. This layout problem is a minor irritation, and fixing it should not be a high priority.
A high-severity usability problem: The problem on this bank's "About Us" page is that it does not say enough to establish trust and credibility. Yes, the bank says that it is a "home of traditional banking," but it doesn't back that up with facts such as when the bank was founded, how many branches it has, how solid it is, or any other specific information that would make you feel comfortable handing your money over to it. This problem is high frequency because all users will want to know about a company before doing something as scary as giving it money for safekeeping. The problem is also high impact because it will cause a lot of people to simply refuse to use the site. Finally, the persistence of the problem is high, because every time a new user contemplates doing business with the bank, they will want to know more about it, and every time they try to find out, they will be disappointed. This unsatisfying page significantly harms the bank's ability to attract online business.
Hospital Usability: In Critical Condition
Bad user interface can be life threatening in medical applications. In the March 9, 2005 issue of the Journal of the American Medical Association, Ross Koppel and colleagues reported on a field study of a hospital's order-entry system, which physicians used to specify patient medications. The study identified 22 ways in which the system's design flaws caused patients to get the wrong dosage of medicine. Most of these were due to usability problems.
The system screens listed dosages based on the units of medication available through the hospital pharmacy. If a rare medication is usually prescribed in 20- or 30-mg doses, for example, the pharmacy would stock 10-mg pills so that it could cover dosage needs without overstocking. When hospital staff members prescribed infrequently used medications, however, they often assumed the listed unit was a typical dosage. (Years of usability studies in many domains have shown that users tend to assume that the given default or example values are applicable to their own situations.) So a doctor might prescribe 10 mg even though 20 or 30 would be more appropriate. The usability solution here is simple: Each screen should list typical prescription dosages.
Another problem occurred when doctors changed the dosage of a patient's medication. They often entered the new dose without canceling the old one, so the patient received the sum of the old and new doses. This is similar to a common banking-interface error: a customer mistakenly authorizes a payment to the same recipient twice in one day. Many bank Web sites catch this error and ask the customer to double-check their records. In general, if users repeat something they've done, the system should ask them whether both operations should remain in effect or whether the new command should overrule the last.
The article reported that at times staff had to review up to 20 screens to see all of a patient's medications. In a survey, 72 percent of staff reported that they were often uncertain about medications and dosages because they had difficulty reviewing them all. The well-known limits on human short-term memory make it impossible to remember across that many screens. Humans are notoriously poor at remembering exact information, and minimizing users' memory load has long been a top guideline. Rather than require users to remember things from one screen to the next, let alone across the next 19, the system should restate facts for users when and where they need them.
Other aspects of the system that required users to go through numerous screens placed additional burdens on some staff. As a result, they didn't always use the system as intended. For example, it was easier for nurses to keep sets of paper records that they entered into the system at the end of their shifts rather than to update it throughout their shifts. This increased the risk of errors and prevented the system from providing real-time information about the medications patients had received. In general, whenever you see users resorting to sticky notes or other paper-based workarounds, you know you have a failed UI.