Usability/Suitability Evaluation Methods for IOIS Applications

An important prerequisite for applying usability and suitability evaluation methods is a thorough understanding of the users’ requirements and desires. Thus, the first part of this section provides a brief discussion of how to acquire such an understanding.

The remainder of this section provides an overview of the three broad categories of evaluation methods: formal methods (analytic methods such as dialogue and task modeling techniques), empirical methods (experiments and user studies involving human test subjects), and inspection methods (expert examination of user interfaces). Despite the difficulties enumerated above, some usability evaluation methods have been developed for collaborative systems.

Understanding Users’ Needs

There are many techniques for learning about users’ collaboration needs. Some of them are as follows:

  • Task analysis: A family of techniques that decompose users’ tasks, in terms of either cognitive or physical activities, at a high level of abstraction or in great detail, depending upon the particular task analysis technique chosen. For practical advice on performing task analyses, we recommend Mayhew (1999).

  • Ethnographic observation: A broad-based approach originating in anthropology in which users are observed while they pursue their normal activities; observers become participants by immersing themselves in the users’ environment. For examples of ethnographic observation applied to adoption of collaborative applications, see the work of Bonnie Nardi (e.g., Nardi & O’Day, 1999).

  • Contextual inquiry: An ethnographic-based technique in which the observer becomes an apprentice of the person being observed; besides observation, contextual inquiry involves focused interviews, discussion, and reconstruction of past events (Holtzblatt & Jones, 1993).

  • Critical incident interviews: A method in which users are interviewed about the events and activities surrounding an unusual or high-impact event. Klein (2000) described the use of critical incident interviews for collaborating teams.

Without understanding users’ characteristics and work environments, it is impossible to determine whether an IOIS would be “natural” or “intuitive” for those users, or whether the IOIS would be compatible with the users’ normal work practices. Consider an application targeted at scientists and mathematicians, such as Mathematica™. Mathematicians expect to see terminology in the interface such as “factorial” and “cosine”; they do not need definitions of these terms. Factorial and cosine functions also exist in Excel™, which was designed for a general audience. In Excel, mathematical terms are defined, and the definitions are readily visible (they are not buried in a “help” file, for example).

Formal Methods

An example of a formal method that can be used to evaluate collaborative systems is Critical-Path-Method GOMS (CPM-GOMS), a variant of the Goals, Operators, Methods, and Selection rules (GOMS) family (John & Kieras, 1994). CPM-GOMS is also known as Cognitive-Perceptual-Motor GOMS because it models the parallel, multi-stage processor nature of human information processing. CPM-GOMS is a task modeling technique that allows the analyst to break down a task at a very fine level of granularity, such as individual eye and hand movements. The method does not assume that each subtask happens serially; it takes into account the parallel nature of performing activities (e.g., both hands can be moving at the same time while eye movement is also occurring). The end result is a set of predicted task execution times. While CPM-GOMS was originally envisioned as a method for analyzing a task performed by an individual, its assumption of parallelism enables its use in analyzing a task performed by a team of individuals.
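
To make the flavor of a CPM-GOMS prediction concrete, the following sketch computes the critical (longest) path through a small network of operators. The operator names, durations, and dependencies are invented for illustration; they are not taken from John and Kieras (1994), and a real analysis would use empirically grounded operator times.

    # Minimal critical-path sketch in the spirit of CPM-GOMS.
    # Operator names, durations (ms), and dependencies are illustrative only.

    # Each operator maps to (duration in ms, operators that must finish first).
    operators = {
        "perceive_target":    (100, []),
        "eye_move_to_target": (30,  ["perceive_target"]),
        "decide_response":    (50,  ["perceive_target"]),
        "right_hand_move":    (200, ["decide_response"]),   # runs in parallel...
        "left_hand_home":     (150, ["decide_response"]),   # ...with this operator
        "click_target":       (100, ["right_hand_move", "eye_move_to_target"]),
    }

    def predicted_execution_time(ops):
        """Length of the longest (critical) path through the operator network."""
        finish = {}
        def finish_time(name):
            if name not in finish:
                duration, predecessors = ops[name]
                finish[name] = duration + max(
                    (finish_time(p) for p in predecessors), default=0)
            return finish[name]
        return max(finish_time(name) for name in ops)

    print(predicted_execution_time(operators))  # 450 ms, versus 630 ms if serial

Because the two hand movements and the eye movement can overlap, the predicted time (450 ms in this toy network) is well below the 630 ms a purely serial model would give, which is exactly the property that lets CPM-GOMS model several people acting in parallel.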

An advantage of using a formal method such as CPM-GOMS is that little or no user participation is required, which avoids the problems of recruiting and scheduling users and of either replicating a realistic work environment in the lab or capturing all facets of the users’ environment in the field. A disadvantage of using a formal method is that evaluators normally require extensive training, because the methods are usually complex and can require grounding in specific theories. CPM-GOMS is useful for obtaining a detailed understanding of how quickly a particular task can be done using an interface, but it cannot answer broader questions such as, “How satisfied will the users be with this interface?”

Empirical Methods

We are not aware of any empirical methods that have been tailored or created specifically for collaborative systems. Some researchers have successfully applied empirical methods developed for single-user systems to small-scale collaborative systems. For example, Gutwin and Greenberg (1998) performed usability testing to compare two different interface approaches for a collaborative computer-assisted welding application. In general, usability tests consist of typical users performing typical tasks under controlled, but realistic, conditions, either in a laboratory or in the field. In Gutwin and Greenberg’s test, the subjects worked in pairs in two different locations and performed their tasks over the course of a few hours. The study goals were focused enough to involve only two people at a time performing a few tasks with minimal training. As a result, the researchers could obtain a rich set of data and insights within a manageable time period.
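
As a small illustration of the kind of data such a test yields, the sketch below summarizes per-pair task completion times from a hypothetical session log; the task names and timings are invented and are not drawn from Gutwin and Greenberg (1998).

    # Hypothetical usability-test log for paired subjects; data are invented.
    from statistics import mean

    # (pair id, task, start time in seconds, end time in seconds)
    log = [
        ("pair-1", "set up shared workspace", 0,   310),
        ("pair-1", "complete joint task",     310, 840),
        ("pair-2", "set up shared workspace", 0,   275),
        ("pair-2", "complete joint task",     275, 990),
    ]

    durations = {}
    for pair, task, start, end in log:
        durations.setdefault(task, []).append(end - start)

    for task, times in durations.items():
        print(f"{task}: mean completion time {mean(times):.0f} s across {len(times)} pairs")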

Usability tests are often considered the “gold standard” in terms of the amount of data and the subtlety of the problems that can be uncovered; thus, they are worth performing whenever it is practical to do so. The difficulties normally arise in duplicating a realistic environment of use, recruiting appropriate users, and scheduling them in groups. The challenges only increase as “typical” user group sizes rise; testing with two to five people at a time is much more tractable than testing with 50, for example.

Although it is highly desirable to perform realistic usability tests, such tests have often proven too difficult and expensive to perform on collaborative applications. Baker, Greenberg, and Gutwin (2001, p. 123) stated, “we have not yet developed techniques to make groupware [collaborative] evaluation cost-effective within typical software project constraints.” Thus, collaborative applications are often developed or chosen without any evaluation whatsoever. Recent work has therefore focused on tailoring inspection methods to collaborative applications, in an attempt to provide a practical means of evaluating them.

Inspection Methods

Inspection methods are promising because they can often be performed more quickly and inexpensively than the other usability evaluation methods. Savings accrue because they do not involve scheduling users (as empirical methods do) or extensively training evaluators (as formal methods often require). They are often used when there is insufficient time or budget to perform usability testing or to analyze an interface using a formal method. Further, they are often used early in the development process on low-fidelity prototypes to gain early insight into whether the proposed design is consistent with general principles of human–computer interaction (even if empirical evaluations are scheduled for later versions of the application). The disadvantage of inspection methods is that they do not always uncover the subtle problems that arise from mismatches between the application design and the user’s mental model of how the application works.

The classic inspection method is heuristic evaluation (Molich & Nielsen, 1990). It is useful to describe it here, because several methods developed for evaluating multiuser systems are adaptations of heuristic evaluation. When performing an heuristic evaluation, inspectors (often, but not necessarily, usability specialists) judge whether each user interface element conforms to established usability principles known as heuristics. Examples of heuristics are “The interface should be consistent” and “The interface should provide clearly marked exits.” To apply the heuristics, individual evaluators independently step through all parts of a user interface, noting cases where the interface violates the heuristics. After looking at the interface, each evaluator may assign a score to how well the interface meets each heuristic in general. Once each individual assessment is complete, evaluators normally discuss their findings and agree upon a joint set of problems and scores. The power of this method comes from combining the observations of several inspectors, because people normally find somewhat different subsets of the problems. Heuristic evaluation is straightforward enough that people other than human–computer interaction experts or human factors engineers can successfully perform an heuristic evaluation with as little as an hour’s training.
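
The combining step can be pictured as a simple aggregation over each evaluator’s independent findings. The sketch below is illustrative only: the two heuristics are the examples quoted above, and the findings and scores are invented.

    # Illustrative aggregation of independent heuristic-evaluation findings;
    # the problems and scores are invented for the sketch.
    from collections import defaultdict
    from statistics import mean

    # Each evaluator reports (heuristic, interface element, problem) tuples
    # plus a 1-5 score of how well the interface meets each heuristic overall.
    reports = [
        {"problems": [("consistency", "toolbar", "icon styles differ between views")],
         "scores":   {"consistency": 3, "clearly marked exits": 4}},
        {"problems": [("consistency", "toolbar", "icon styles differ between views"),
                      ("clearly marked exits", "wizard", "no cancel button on step 3")],
         "scores":   {"consistency": 2, "clearly marked exits": 4}},
    ]

    # Union of problems: evaluators usually find different subsets.
    joint_problems = {p for report in reports for p in report["problems"]}

    # Average score per heuristic as a starting point for the group discussion.
    score_lists = defaultdict(list)
    for report in reports:
        for heuristic, score in report["scores"].items():
            score_lists[heuristic].append(score)
    joint_scores = {h: mean(s) for h, s in score_lists.items()}

    print(len(joint_problems), "distinct problems;", joint_scores)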

Other inspection methods compare an application against a set of guidelines (either general or application-specific) or a “capabilities” (function) checklist tailored to the users’ needs. An example of a tailorable function checklist for collaborative applications can be found in Drury, Damianos, Fanderclai, Hirschmann, Kurtz, and Linton (1999).
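
A capabilities checklist can be as simple as comparing the functions a user group needs against the functions an application offers. The capability names below are hypothetical and are not taken from Drury et al. (1999).

    # Hypothetical capabilities checklist tailored to one user group;
    # the capability names are invented, not those of Drury et al. (1999).
    required = {"shared whiteboard", "text chat", "session recording"}
    provided = {"text chat", "file transfer", "shared whiteboard"}

    missing = required - provided
    coverage = len(required & provided) / len(required)
    print(f"required-capability coverage: {coverage:.0%}; missing: {missing}")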

Three inspection methods developed for collaborative systems employ heuristics-based inspection: benchmarks for workspace awareness (Villegas & Williams, 1997), the Locales Framework heuristics (Greenberg, Fitzpatrick, Gutwin, & Kaplan, 2000), and the “Mechanics of Collaboration” (Baker, Greenberg, & Gutwin, 2001). An additional inspection method, Synchronous Collaborative Awareness and Privacy Evaluation (SCAPE) (Drury, 2001), provides a means of specifying awareness and privacy requirements and of evaluating, via an heuristic approach, whether the application satisfies those requirements. We describe two methods in more detail, the Mechanics of Collaboration and SCAPE, because they are more recent and more mature than the others.

Gutwin and Greenberg (2000) maintained that there are some basic collaboration activities that should be supported by any collaborative application:

These activities, which we call the mechanics of collaboration, are the small scale actions and interactions that group members must carry out in order to get a shared task done. Examples include communicating information, coordinating manipulations, or monitoring one another. (p. 98)

Gutwin and Greenberg proposed that the mechanics of collaboration framework can be used to construct heuristics. They formed eight heuristics (Baker, Greenberg, & Gutwin, 2001, p. 125):

  • Provide the means for intentional and appropriate verbal communication

  • Provide the means for intentional and appropriate gestural communication

  • Provide consequential communication of an individual’s embodiment

  • Provide consequential communication of shared artifacts

  • Provide protection

  • Provide management of tightly and loosely coupled collaboration

  • Allow people to coordinate their actions

  • Facilitate finding collaborators and establishing contact

The idea behind the Mechanics of Collaboration method is that evaluators inspect the interface using the heuristics from Baker, Greenberg, and Gutwin (2001) instead of the ones developed by Molich and Nielsen (1990) or, more recently, Nielsen (1994). Otherwise, the method is essentially the same as that developed by Molich and Nielsen (1990).
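
Because only the heuristic set changes, the inspection procedure can be thought of as being parameterized by a list of heuristics. The sketch below abbreviates the eight heuristics listed above; the judge function stands in for the human evaluator’s judgment and is, of course, a simplification.

    # The inspection walk stays the same; only the heuristic set is swapped.
    # Wordings are abbreviated from Baker, Greenberg, and Gutwin (2001).
    MECHANICS_OF_COLLABORATION = [
        "intentional verbal communication",
        "intentional gestural communication",
        "consequential communication of an individual's embodiment",
        "consequential communication of shared artifacts",
        "protection",
        "management of tightly and loosely coupled collaboration",
        "coordination of actions",
        "finding collaborators and establishing contact",
    ]

    def inspect(interface_elements, heuristics, judge):
        """Check every element against every heuristic; judge returns a
        problem description or None. Works the same with Nielsen's heuristics."""
        findings = []
        for element in interface_elements:
            for heuristic in heuristics:
                problem = judge(element, heuristic)
                if problem is not None:
                    findings.append((element, heuristic, problem))
        return findings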

Note that the heuristics of Baker, Greenberg, and Gutwin (2001) are broad and make the assumption that the role of the user is not a factor in the evaluation. The SCAPE method was developed to provide a finer-grained evaluation technique, acknowledging that users of an application may have different awareness and privacy needs.
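
One way to picture that finer granularity is as a requirements matrix keyed by user role, which is then checked against what the application actually exposes. The roles, information types, and awareness levels below are hypothetical and are not drawn from Drury (2001).

    # Hypothetical role-based awareness/privacy check in the spirit of SCAPE;
    # roles, information types, and levels are invented for illustration.
    LEVELS = ["none", "coarse", "detailed"]   # "none" doubles as a privacy requirement

    # Awareness each role should have of each kind of information.
    required = {
        ("facilitator", "participant location"): "detailed",
        ("participant", "participant location"): "coarse",
        ("observer",    "participant location"): "none",
    }

    # Awareness the application actually provides to each role.
    provided = {
        ("facilitator", "participant location"): "detailed",
        ("participant", "participant location"): "detailed",   # over-exposes
        ("observer",    "participant location"): "none",
    }

    for key, needed in required.items():
        actual = provided.get(key, "none")
        if LEVELS.index(actual) < LEVELS.index(needed):
            print("awareness gap for", key)
        elif LEVELS.index(actual) > LEVELS.index(needed):
            print("privacy concern for", key)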


