Information technology can be used to support collaborations and partnerships among organizations for competitive purposes. Organizations have developed the notion of inter-organizational systems (IOSs), also known as inter-organizational information systems (IOISs), to support these collaborations and partnerships. An IOIS is defined as an automated information system shared by two or more organizations (Cash & Konsynski, 1985) in a collaborative fashion.
Compare the definition of an IOIS with the definition of computer-supported collaborative work (CSCW) applications: applications that support coordinated activity carried out by groups of collaborating individuals (Greif, 1988). CSCW applications are also known as multiuser, groupware, or collaborative applications.
Collaborative applications normally provide capabilities beyond simple information access to facilitate communication and collaboration among partners. Depending upon the collaborative application, both synchronous and asynchronous communications may be supported, and documents can be shared. Some collaborative applications incorporate video to support communications and negotiations. These coordination mechanisms are essential to efficient collaboration among cooperating organizations. In fact, because IOISs are computer-based systems used to collaborate across organizations, they are a subset of collaborative applications.
Hong and Kim (1998) built on Cash and Konsynski’s (1985) work by developing a framework for classifying the various types of IOISs. Their classification scheme is based on three categories: vertical linkage, horizontal linkage, and cross linkage. Vertical systems connect suppliers with sellers with the goal of more efficient marketing. This type of system gives sellers, for example, the capability to place orders quickly and gives suppliers sales data to help them plan production. Horizontal systems link homogeneous groups of businesses. Partnerships within an industry, often consisting of smaller businesses, benefit from improved access to information. Cross systems are an attempt to integrate horizontal and vertical links into one complete system.
It is necessary to understand the roles of the participants or collaborators in IOISs in order to provide the necessary system capabilities to support a variety of tasks. For example, consider a vertical IOIS that links a manufacturer with a number of suppliers. A subset of those suppliers may be competitors who are negotiating terms with the manufacturer. Suppliers may want to use this system to share contractual information with the manufacturer but not with each other. Vertical and cross IOISs will need to support the most diverse set of users (e.g., suppliers, manufacturers, and retailers), though horizontal IOISs might also need to support differing groups of collaborators (e.g., manufacturers from the Eastern United States versus manufacturers from the Western United States). The roles of participants and their different information sharing needs should be taken into account when evaluating which IOIS is appropriate for a set of cooperating organizations.
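The visibility rules described above can be sketched in code. The following is a minimal, hypothetical illustration (the organization names, the `IOISWorkspace` class, and the simple owner-plus-recipients rule are all assumptions for illustration, not features of any particular IOIS product) of how a vertical IOIS might let each supplier share contractual information with the manufacturer while keeping it hidden from competing suppliers:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: role-based document visibility in a vertical IOIS.
# All names below are illustrative assumptions, not a real IOIS API.

@dataclass
class Document:
    owner: str                                       # organization that shared it
    shared_with: set = field(default_factory=set)    # explicit recipients

@dataclass
class IOISWorkspace:
    documents: list = field(default_factory=list)

    def share(self, owner: str, recipients: set) -> Document:
        doc = Document(owner=owner, shared_with=set(recipients))
        self.documents.append(doc)
        return doc

    def visible_to(self, org: str) -> list:
        # An organization sees its own documents plus those explicitly
        # shared with it; competing suppliers never see each other's terms.
        return [d for d in self.documents
                if d.owner == org or org in d.shared_with]

workspace = IOISWorkspace()
workspace.share("SupplierA", {"Manufacturer"})   # SupplierA's contract terms
workspace.share("SupplierB", {"Manufacturer"})   # SupplierB's contract terms

# The manufacturer sees both contracts; each supplier sees only its own.
assert len(workspace.visible_to("Manufacturer")) == 2
assert len(workspace.visible_to("SupplierA")) == 1
```

A real IOIS would of course need richer roles and audit controls, but even this sketch shows why evaluators must map out who may see what before selecting a system.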
To help organizations evaluate which IOIS they should adopt, and provide guidance for developing an IOIS, this chapter includes an assessment of the advantages and disadvantages of using different types of evaluation methods for determining the suitability of IOIS applications. The other contributions of this chapter are as follows:
An explanation of why IOISs are difficult, yet important, to evaluate
A description of how the Synchronous Collaborative Awareness and Privacy (SCAPE) awareness framework could be used to evaluate an IOIS application
A case study of evaluating the Groove™ application’s suitability for use by a collaborating team that includes members from organizations with different goals
The rest of this section discusses the importance of evaluating IOISs, the difficulties of doing so, and the critical distinctions between evaluating single-user computing applications versus multiuser applications, such as IOISs. The second section describes evaluation methods for multiuser applications in general, and one method in particular, SCAPE, will be described in the third section. The fourth section presents a case study of using SCAPE to evaluate Groove, a popular tool that aids inter-organizational information sharing. Finally, a discussion of what can be learned by evaluating IOISs completes the chapter.
Much research has centered on evaluating the usability of collaborative applications, because it is extremely important to ensure that these applications can be effectively and efficiently used by their intended audiences. The success of a collaborative application normally depends on a “critical mass” of users accepting and making proper use of the application. For example, picture:
An inventory control system so cumbersome to use that some of the staff receiving inventory neglect to log the inventory into the system or log it incorrectly
An instant messaging application that users do not find easy or rewarding to use, so few of a user’s business or social contacts bother to remain accessible via instant messaging
An automated calendar management application that makes entering activities so laborious that many people within a workgroup do not bother to keep their calendars up-to-date
Clearly, each of these situations constitutes a recipe for failure. These cases illustrate the fact that a collaborative application is likely to fail if the work people need to put into the application exceeds the perceived value of their benefits from using the application (Grudin, 1988).
Besides an imbalance in work versus benefits, there are other reasons why adoption of an IOIS may fail. For example, we have seen adoption of a collaborative application fail when:
Users did not perceive a need to collaborate: Such a finding is consistent with Rogers’ (1995) work on diffusion of innovations, which notes that the rate of adoption of innovations is related to the extent to which the innovation (e.g., a new collaborative application) satisfies users’ needs.
The application did not provide functionality that users felt was relevant: Relevant functionality depends on the tasks the users need to perform and the conditions under which they normally perform those tasks. For example, an IOIS intended for use by personnel driving delivery trucks that requires a substantial amount of typing (instead of, say, using a bar code reader) would likely be unsuccessful.
Users were not available to log in frequently: Users who often attend meetings or engage in other activities that preclude access to the IOIS, for example, would be unlikely to embrace use of the IOIS.
Users did not develop a well-articulated communications strategy: An example of an incomplete communications strategy is one that does not define situations in which to use the collaborative application versus e-mail.
The application was not easy to learn or use: Rogers (1995) noted that adoption of innovations is related to the complexity or ease with which an innovation can be understood.
There is normally a large financial difference to a collaborating group of organizations between failure to adopt an IOIS and adopting—and making good use of—an IOIS that is well-suited to those organizations. When adoption failure occurs, the organizations must count as a loss the purchase price of the IOIS plus the loss in productivity represented by the hours spent installing, training, and experimenting with the IOIS. Collectively, these costs could be substantial, especially for large organizations that may have asked hundreds of people to try the IOIS. In addition, there is an intangible cost: members of the organizations may be less open to use of IOISs in the future once they have had a negative experience with an IOIS.
Contrast this failure situation with the case in which organizations choose an IOIS that meets their needs and streamlines their business processes. Depending on how an IOIS is used, an effective IOIS may result in a decreased need to travel (because IOIS technology can often mitigate the need for face-to-face meetings), shorter document production and review cycles, decreased time-to-market, increased sales, and better customer support.
The difference between adoption failure and success hinges on defining collaboration requirements that take into account users’ work characteristics, the likely benefits to them, and ease of use from the point of view of the intended set of users. A further prerequisite for success is an evaluation program that examines how well an IOIS is likely to meet those requirements.
Evaluation goals differ depending on whether an IOIS is being chosen from among a set of existing products, or a custom (bespoke) IOIS product is being developed. If an IOIS is being chosen, the candidate applications are each examined against a tailored set of requirements, using one or more evaluation methods such as those discussed later in this chapter. Because the commercial IOISs cannot normally be modified substantially, the one that comes closest to meeting the requirements is chosen. If a custom IOIS is being developed, the goal of evaluation is to find problems that can be corrected as early as possible in the product design and development life cycle. The later in the development process that interface problems are found, the more costly they are to correct. Mantei and Teorey (1988) found that changes made to the interface designs of systems after production coding had begun cost four times as much as changes to the designs made during prototyping phases.
There are several reasons why evaluating collaborative applications is more difficult than evaluating single-user computing applications. Malone (1985) cited the difficulties in assembling a group of people in a lab that reflect the social, motivational, economic, and political characteristics of typical users—yet these characteristics are likely to affect performance when using the collaborative system. If evaluation is attempted in the users’ normal work environments (“in the field”), Grudin (1988) observed that it is extremely difficult to disperse evaluators to the various locations of the collaborators as well as take into account the wide variation in user group composition and work environments. Regardless of whether they occur in a lab or in the field, Grudin (1988) noted that evaluations of collaborative applications take much more time than evaluations of single user applications, because the relevant group interactions “typically unfold over weeks.”
Adding to the difficulty of collaborative application evaluation is the fact that sophisticated applications allow users to take on a number of different roles. Users’ expectations of an application’s behavior may change depending upon the roles users are playing at the time and the specific tasks they are performing. More generally, collaborative applications are challenging to evaluate due to the need to take into account how the application mediates users’ interactions with each other.
Although difficult to perform, evaluations of collaborative applications are extremely important due to the cost implications described above. The purpose of this chapter is to provide insight into the state-of-the-art in evaluating collaborative applications in general, and into one evaluation method in particular (which will form the basis for the case study presented later in this chapter).
The preceding subsection touched upon the crucial distinction between evaluations of single-user systems, such as word processors, and multiuser systems, such as IOIS applications. An evaluation of multiuser (collaborative) systems needs to investigate whether the application adequately supports collaborators’ awareness of each other’s presence, identities, and activities. Awareness is important in collaborative applications because it aids coordination of tasks and resources, and it assists in transitions between individual and shared activities (Dourish & Bellotti, 1992). Dourish and Bellotti (1992, p. 107) defined awareness as “an understanding of the activities of others, which provides a context for your own activities.” “Workspace awareness” was defined by Gutwin et al. (1995) as the up-to-the-minute knowledge of other participants’ interactions with the shared workspace, such as where other participants are working, what they are doing, and what they have already done in the workspace.
To understand the importance of awareness, picture trying to use a chat application without being able to read the contributions of the other participants; clearly, the application is useless without an understanding of the other participants’ activities. Chat is an example of a synchronous collaborative application (those that are used by collaborators at the same time, although not necessarily at the same place). An “up to the moment” awareness of others’ activities is especially pertinent to the class of synchronous (as opposed to asynchronous) collaborative applications. An example of an asynchronous collaborative application is e-mail.
Awareness and privacy are in tension with one another, as are awareness and information overload. Hudson and Smith (1996) expressed the trade-offs very well:
This dual tradeoff is between privacy and awareness, and between awareness and disturbance. Simply stated, the more information about oneself that leaves your work area, the more potential for awareness of you exists for your colleagues. Unfortunately, this also represents the greatest potential for intrusion on your privacy. Similarly, the more information that is received about the activities of colleagues, the more potential awareness we have of them. However, at the same time, the more information we receive, the greater the chance that the information will become a disturbance to our normal work. This dual tradeoff seems to be a fundamental one. (p. 247)
Any evaluation method that pertains to collaborative applications should be sensitive to issues of privacy and awareness. For example, a student using a distance-learning application may want to make the instructor aware of all of his or her online activities, while the instructor would want to keep grading activity private (except to the student directly affected).
It is difficult to apply evaluation techniques developed for single-user applications to multiuser applications, because those techniques do not address the issues of awareness and privacy. Only recently has there been a push toward developing evaluation methods specifically for collaborative applications.
The identification of any commercial product or trade name does not imply endorsement or recommendation. Groove is a trademark of Groove Networks.