4.2. Case Study: Usability Involvement in a Security Application

Clare-Marie Karat was the HCI lead on the development of a security application in 1989. She provides a firsthand account of the process she used to understand the user requirements and improve the usability of this product.[4]

[4] C. Karat, "Iterative Usability Testing of a Security Application," in Proceedings of the Human Factors Society (1989), 272-277.

The project's goal was to improve the dialog of a large data entry and inquiry mainframe application used by 23,000 end users (IBM employees in branch offices across the U.S.). This large mainframe application was composed of many subapplications, and the goal was to eliminate the recurring need for security authorization while performing the discrete but related tasks that made up a business process. However, the project had already entered the design and development phase: coding had begun. So, before I actually joined the team, the project manager and I agreed upon the approach that I would use: I would quickly complete the HCI activities that should have been done in the requirements phase, then move ahead with the design and development phase HCI work.

Prior to the development of this security application, working with the mainframe application was like having a series of disconnected conversations for the end user. Every time a user wanted to perform a transaction, he had to re-identify, provide a password, select the appropriate application, and type a command known as a mnemonic that indicated the type of transaction to be performed. Further, the use of the actual mnemonics was controlled: users were only told about the mnemonics relevant to their individual job responsibilities and were instructed not to write down the mnemonics or share them with anyone.

I met with the project team to understand the goals of the project and define the overall usability goal. They were ambitious. The old security application was to be taken down on a Friday, the new application was to be installed over the weekend, and employees needed to be able to walk up and use the new security application the next Monday. The transition had to be smooth, without disrupting the end users or business. End users signed on to the target mainframe application about a dozen times a day.

After interviewing key team members, I drafted the measurable usability objectives for the security application and discussed them with the team during a regular status meeting. Consensus was reached that the usability objective for the security application was that "95% of the users will complete the sign-on task error-free within the first three attempts at the task." No specific usability objective was set for the time to complete the sign-on task itself; however, the team members thought users ought to be able to complete it within about 8 seconds.

4.2.1. The Field Study

I went into the field, conducting a field study of the end-user work context at several branch offices, where I observed the general workplace setting and the way the employees worked with the current system. I met with and talked to a number of the target users, collected the necessary information, and was able to create the user profile definition. The users had an average of two years of work experience in their current jobs, and five years overall with IBM. They all had college degrees, good computer skills, and reasonable experience with the major business applications they used.

I observed their work environment and the transactions they needed to complete on the mainframe application. These users accessed the application to complete a variety of tasks and transactions related to customer relationship management, sales, and fulfillment. Employees in the branch offices were working at close quarters in open office settings with no partitions. Phones rang constantly; people carried on conversations. When sales were completed, big bells were rung to celebrate. It was a busy, exciting, fast-paced, and sometimes rather loud environment in which to work. The end users were typically attending to their terminal screens while juggling a variety of other tasks simultaneously, including communicating with other staff and customers through telephone calls and talking to people stopping by and requesting information. It was clear that the end-user requirements for the security application's user interface and interaction methods needed to take these social and physical environment factors into account.

Interviews and observation of the employees identified that the number of mnemonics they needed to remember (for some employees, this number approached 60, with an average in the high 30s) was more than human memory could reliably handle. As might be expected, the users had devised their own methods of keeping track of the mnemonics they needed for their tasks. I conducted a task analysis to identify the key security sign-on tasks the users completed related to the various transactions they worked on each day, and then defined the set of core user scenarios that the security application would have to handle. These scenarios also identified a number of end-user requirements.

4.2.2. The User Tests

Over the course of the next three months, I completed three iterations of design and usability testing of the security application's proposed user interface. Iterative testing provides an opportunity to test the impact of design changes made to the interface.[5] The initial test was a field test designed to collect data in the context of the work environment and included 5 participants. The second test was a laboratory prototype test involving 10 participants, and the third test was a laboratory integration test of the live code on a test system with 12 participants. The same set of tasks was used in all three tests. (In the examples that follow, the system and applications have been de-identified and the generic mnemonic "CMINQ" is used.)

[5] J. Gould, "How to Design a Usable System," in M. Helander (ed.), Handbook of Human Computer Interaction (Amsterdam: Elsevier, 1988).

4.2.2.1 Test 1

For the first test, I built a low-fidelity prototype based on the initial user interface design that the project team had developed before I joined the effort. The team was very confident that the design would be fine and that users would be able to complete tasks with it in a few seconds. The low-fidelity prototype was constructed from printed screenshots covered with reusable clear laminate so that the users could write with washable pens what they would normally type into the entry fields. The reusable "paper" prototype provided a realistic approximation of the interface screens and navigation and was developed in 20% of the time that an online prototype would have required. The participants in the usability test completed a set of four sign-on tasks that covered different aspects of the changes to the sign-on process. For example, one sign-on task was as follows: "Please sign on to the XXX system to perform an authorized inquiry function (e.g., CMINQ) on the XXX application."

Because the usability objective concerned the users' ability to quickly learn to use the new system, all three usability tests were designed to measure learning across repetition of the same task. Therefore, the participants completed the set of four typical sign-on tasks three times (called three trials). The sets of tasks were presented in random order in each trial. The same quantitative performance and qualitative opinion measures were collected in all three usability tests. The quantitative data included participant time on task, task completion rate (regardless of the number of errors), and cumulative error-free rate across participants. The qualitative data was collected through use of a debriefing questionnaire that captured user comments on needed changes, what was confusing, and a forced-choice End User Sign-Off rating (a measure I created) about whether the product they had worked with was good enough to install without any changes.
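As an illustration of how these measures combine, the sketch below tabulates the per-trial median time on task and the cumulative error-free rate from a small set of made-up session records. The data, field layout, and the exact operational definition of "error-free" here are assumptions for illustration only, not the study's actual records.

from statistics import median

# Hypothetical session records: for each participant, one
# (time on task in seconds, number of errors) pair per trial.
sessions = {
    "P1": [(190, 2), (75, 1), (48, 0)],
    "P2": [(160, 1), (60, 0), (41, 0)],
    "P3": [(210, 3), (90, 1), (52, 0)],
}

num_trials = 3

# Median time on task per trial.
for trial in range(num_trials):
    times = [records[trial][0] for records in sessions.values()]
    print(f"Trial {trial + 1}: median time on task = {median(times)} s")

# Cumulative error-free rate: the share of participants who have completed
# the task without errors at least once by the end of each trial.
for trial in range(num_trials):
    error_free = sum(
        1 for records in sessions.values()
        if any(errors == 0 for _, errors in records[: trial + 1])
    )
    print(f"By trial {trial + 1}: {100 * error_free / len(sessions):.0f}% error-free")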

As a way of educating the project team, I invited the lead programmer to work with me in conducting the field test (Test 1). We worked with one participant at a time. I introduced the participant to the purpose of the usability session, and then each participant attempted to perform the three trials of four sign-on tasks, completing a total of 12 tasks (see Figure 4-1). At the end of the third trial, the participant completed a debriefing conversation with me. During the sessions, the lead programmer sat to the side of the participants and unobtrusively timed their performance on the tasks with a stopwatch, while I ran the session and acted as the "computer" by replacing the screen on which the participants entered their text with the one they would see next on their computer screen, based on their previous input. We were able to move quickly through the branch office and collect the session data, completing the field usability test in about 25% of the time a laboratory test would have required, given the time needed to recruit and schedule participants to come to the usability lab.

I presented the results to the development team as well as to the managers of associated technical and operational areas who were invited to attend the meeting. The results were shocking to those present. Only 20% of the participants (one person) could complete the sign-on task error-free. The average time on task was over 3 minutes in Trial 1, improving with learning to 48 seconds in Trial 3 (see Figures 4-2 and 4-3). Not one of the participants thought the product was good enough to install without changes. The senior managers in attendance started talking with each other about setting up a help desk for three months to handle user questions from across the country as the system was rolled out. Nobody had the funds for a help desk in their budget, though, and the conversation became heated. The team was stunned; they had been fully prepared to implement their initial user interface design as the final design for the system.

Figure 4-1. Reenactment of the field usability test of the low-fidelity prototype of the security system with a target user

I spoke up and said that I had recommendations for how we could resolve the usability problems and eliminate the need for a help desk. During the session debriefs, the participants talked about how they were typically multitasking, described the rote manner in which they worked at the terminals, and stated that the system needed to accommodate their need to work in this type of atmosphere. I had analyzed all the data and identified error patterns that were the most serious for us to address using the Problem Severity Classification Matrix (PSC). The PSC matrix provides a ranking of usability problems by severity and is used to determine allocation of resources for addressing user interface issues. The PSC ratings are computed on a two-dimensional scale, where one axis represents the impact of the usability problem on the user's ability to complete a task, and the other axis represents the frequency of occurrence as defined by the percentage of users who experienced the problem. Ratings range from 1 to 3, where 1 is most severe.[6] The developers were comfortable with the PSC matrix and ratings, as they were used to classifying functional problems in the code in this manner.

[6] C. Karat, R. Campbell, and T. Fiegel, 397-404.
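To make the two-dimensional scale concrete, the sketch below shows one way such a severity lookup could be expressed. The impact categories, frequency thresholds, and individual cell values are illustrative assumptions, not the published PSC matrix.

def psc_rating(blocks_task: bool, causes_errors: bool, pct_users_affected: float) -> int:
    """Return a severity rating from 1 (most severe) to 3 (least severe)."""
    # Impact axis: 0 = prevents task completion, 1 = causes errors or delay, 2 = minor annoyance.
    impact = 0 if blocks_task else (1 if causes_errors else 2)
    # Frequency axis: 0 = majority of users affected, 1 = a substantial minority, 2 = few users.
    frequency = 0 if pct_users_affected >= 50 else (1 if pct_users_affected >= 20 else 2)
    matrix = [
        [1, 1, 2],  # prevents task completion
        [1, 2, 3],  # causes errors or significant delay
        [2, 3, 3],  # minor annoyance
    ]
    return matrix[impact][frequency]

# Example: a navigation problem that blocked most participants would rate severity 1.
print(psc_rating(blocks_task=True, causes_errors=True, pct_users_affected=80))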

I made four recommendations for changes to resolve the severity 1 and 2 level end-user usability issues. The redesign work included improving the clarity and visibility of information displayed on screens, providing content in error messages that enabled users to recover gracefully from those situations, and simplifying two types of navigational complexity. Three of the recommendations were quickly implemented in the system code by the development staff, and a redesigned low-fidelity prototype incorporating these design changes was tested in Test 2. The fourth recommendation could not be implemented at that time, as the team could not determine a technically feasible way to address the navigational problem.

Figure 4-2. Cumulative percentage of participants performing tasks error-free, and ideal performance


Figure 4-3. Median time on task in seconds for participants in the three trials, and ideal performance


4.2.2.2 Test 2

I redesigned the low-fidelity prototype and ran Test 2 in the usability laboratory at the development site. The usability laboratory consists of a control room and a usability studio. There is a wall with a one-way mirror between the two rooms. The development team was very engaged in the usability test and took turns watching from the control room (so that the users were not disturbed) while I ran the sessions in the studio with the participants. The extra time necessary to recruit and schedule participants was offset by the significant value to the development team in their education about usability and user interface design.

The results of Test 2 were very encouraging; 90% of the participants signed on correctly after seven attempts. This was close to the 95% level specified in the usability objective, but the number of learning attempts required to achieve that level remained unacceptable. The average (median) time on task for participants in Test 2 was less than half of what it was in Test 1, and ranged from about 80 seconds in Trial 1 to about 20 seconds in Trial 3. These times were well above the project team's goal of 8 seconds.

To understand the ideal time on task that was possible with the prototype, we had an employee who had expertise in the security system and the branch office applications complete the usability test. This expert completed each task error-free and in 6 seconds. Clearly, improvement was possible.

When asked, 60% of the Test 2 participants thought the security system was good enough to install without any changes. While the team was heartened by the improvement in the usability of the security system, the low participant opinion scores in Test 2 pushed the team to find a solution to the remaining usability problem.

One severe usability problem that had been identified in Test 1 recurred in Test 2. This problem involved a complicated navigational flow in which users navigated down two different paths depending on the mnemonic chosen at sign-on. One path was clear to the users. The second path was confusing and could not be addressed at the end of Test 1 because of technical problems. At the end of Test 2, however, the project architect, who had been thinking about the problem since the end of Test 1, created a simple and innovative solution to finesse the situation, and it was implemented. The solution involved the creation of a bridge that replaced a central navigational fork. This change both simplified and standardized the navigational flow. All users could utilize the bridge regardless of the mnemonic chosen at sign-on. The complexity in the security system was transparent to users: after authenticating with a user ID and password, they now saw only the types of transactions they were authorized to make.
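Conceptually, the bridge turns sign-on into a single authentication step followed by an authorization-driven menu. The following sketch is only an illustration of that idea; the function names, data layout, and authorization table are hypothetical and bear no relation to the actual mainframe code.

# Hypothetical illustration of the post-bridge flow: authenticate once, then
# present only the transaction types the user is authorized to perform,
# instead of branching on a mnemonic typed for each transaction.

AUTHORIZATIONS = {
    "user01": ["customer inquiry", "order entry"],
    "user02": ["customer inquiry"],
}

def authenticate(user_id: str, password: str) -> bool:
    # Placeholder check; a real system would verify credentials securely.
    return bool(user_id and password)

def build_transaction_menu(user_id: str, password: str) -> list[str]:
    """Return the transactions this user may run after a single sign-on."""
    if not authenticate(user_id, password):
        return []
    return AUTHORIZATIONS.get(user_id, [])

print(build_transaction_menu("user01", "example-password"))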

4.2.2.3 Test 3

For Test 3, I ran the usability sessions in the lab, and the participants worked with the live code on a test system. In parallel, the project team was conducting the functional testing of the system. The solution implemented for the remaining major usability problem in Test 2 worked very well. In Test 3, 100% of the participants signed on error-free after the third attempt. The user performance met the usability objective set at the beginning of the project. User time on task ranged from 24 seconds to 7 seconds in Test 3, nearly meeting the ideal performance measure. In Test 3, 100% of the participants thought the new security system was good enough to install without any changes. The solution implemented at the end of Test 2 created one minor usability problem in Test 3, and this issue was resolved prior to the rollout of the code to the branches.

The system was rolled out to the branches on time, under budget, and with high user satisfaction. Data analysis of support calls regarding the product, along with collection and analysis of survey data from users, documented that the system was very usable and that the users were delighted not to have to remember or keep track of all of the individual mnemonics anymore. The mnemonics for related business application tasks were now grouped under one master mnemonic. The security application simplified the sign-on process and provided the users the seamless environment across the transactions they needed to complete in order to provide high-quality service to IBM customers while maintaining the security of sensitive data. The architectural changes (the design and layout of the user interface, navigation, and error messages) all contributed to a simplification of the security system for the users in the branch offices.

4.2.3. The Return on Investment (ROI) Analysis

The collaboration between the HCI lead and the security technical staff was essential in creating the final design. The success in designing and deploying the new security application, and the resulting return on investment, was achieved through this partnership and the shared team goal of creating a usable and effective security solution. The simple cost-benefit analysis provides clear data on the value of usability in the development of a security application.[7]

[7] For more information on calculating the cost benefit of usability, see ibid and R. Bias, Cost-Justifying Usability: An Update for the Internet Age, 2nd Edition (London: Academic Press, 2005).

Postrelease costs were also significantly lower than would normally be anticipated. There were no change requests after installation, and no updates were required to the code. These achievements were unusual, and they were rewarded by the organization. The product manager communicated the project results up the management chain. The entire project team was awarded bonuses for the high-quality deliverable. The division senior executive called together the team of senior product managers and reviewed the positive outcome that resulted from the inclusion of HCI work in the development project. Usability became part of the critical path in project planning and execution. A number of the developers became very skilled practitioners in this field. One of my colleagues from that initial experience is now the manager of an HCI consulting group within the company, and we remain in regular contact.

Before I joined the project team, they had planned to implement the initial design for the system (the one evaluated in Test 1). I analyzed the cost benefit of the usability work by simply calculating the financial value of the improvement in user productivity on the security tasks for the end-user population for the first three sign-on attempts on the security system after release.[8], [9] Then the value of the increased productivity was compared to the cost of the HCI activities on the project. This resulted in a 1:2 cost-benefit ratio; for every dollar invested in usability, the organization gained two back. With the inclusion of cost savings based on further sign-on attempts, the savings related to the greatly reduced disruption of the users, and the reduced burden on the help desk staff, the ratio became 1:10.

[8] C. Karat, "Cost-Benefit Analysis of Usability Engineering Techniques," Proceedings of the Human Factors Society, 839-843.

[9] C. Karat, 2005.
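The shape of that calculation can be sketched as follows. Apart from the 23,000-user population and the reported 1:2 ratio, every figure below (time saved per sign-on, loaded labor cost, HCI cost) is a hypothetical placeholder used only to show the form of the arithmetic.

# Hypothetical sketch of the cost-benefit arithmetic; the numeric inputs are
# placeholders, not the project's actual figures.
users = 23_000                   # end-user population (from the case study)
signons_counted = 3              # only the first three sign-on attempts per user
seconds_saved_per_signon = 60    # hypothetical productivity gain per attempt
loaded_cost_per_hour = 50.0      # hypothetical fully loaded labor cost, USD/hour
hci_cost = 30_000.0              # hypothetical cost of the project's HCI activities

hours_saved = users * signons_counted * seconds_saved_per_signon / 3600
benefit = hours_saved * loaded_cost_per_hour
print(f"Estimated benefit: ${benefit:,.0f}")
print(f"Cost-benefit ratio: 1:{benefit / hci_cost:.1f}")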

This security application case study described a fairly small and well-defined effort: the project goals were clear, the target users were easily identified and accessible for usability design and evaluation work, and the functionality was designed and implemented using existing technology. The second case study, described in the next section, provides an overview of a project with greater complexity in the goals of the privacy tools, the target users, the context of use of the privacy functionality to be designed, and the use of innovative technology that needs to integrate and work smoothly with legacy systems.


