CASE DESCRIPTION

Perception of Failure and Call for Clarification

Comate was put into operation in January 1998. In the spring of 1999, approximately a year and a half after its introduction, the reception of Comate proved disappointing. The data in the login database of the system showed that only a few dozen of the 250 people authorized to use the system did so on a regular basis. The data also showed that users typically inspected only a few pages per visit and that the duration of an average stay in Comate was short. Although the central CMI department did not keep track of the number of e-mail and hardcopy requests for information, the undisputed impression was that, contrary to intentions and expectations, these numbers had not decreased during the period of Comate's operation. These data led Central CMI to conclude that the introduction of Comate was a failure and that the system did not live up to the expectations of its designers. As described in the introduction, this assessment induced the staff responsible for Comate, and more particularly the head of Central CMI, Hans Broekmans, to ask for an explanation of this failure and to inquire what users would regard as a useful and usable system. These questions formed the starting point for the investigation by Johan van Breeveldt and his team. Their task was to uncover the information needs of designated system users, present or potential, both by looking in retrospect at the reasons for the current lack of usage and by identifying variables influencing a broader acceptance of the system in the future.
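To make concrete how figures of this kind can be derived from a login database, the following minimal sketch (in Python, using pandas) computes the three indicators mentioned above. It is purely illustrative: the file name, column names, and the cut-off used for "regular" use are assumptions, not details taken from the Comate case.

    # Minimal sketch: deriving usage indicators from a hypothetical login log.
    # Assumed columns: user_id, session_id, page, timestamp.
    import pandas as pd

    AUTHORIZED_USERS = 250          # people authorized to use Comate
    REGULAR_VISITS_PER_MONTH = 4    # assumed cut-off for "regular" use

    log = pd.read_csv("comate_login_log.csv", parse_dates=["timestamp"])

    # 1. Regular users: average number of distinct sessions per month above the cut-off.
    visits_per_month = (
        log.groupby(["user_id", log["timestamp"].dt.to_period("M")])["session_id"].nunique()
    )
    regular_users = (visits_per_month.groupby("user_id").mean() >= REGULAR_VISITS_PER_MONTH).sum()

    # 2. and 3. Pages inspected per visit and duration of an average stay.
    per_session = log.groupby("session_id").agg(
        pages=("page", "nunique"),
        start=("timestamp", "min"),
        end=("timestamp", "max"),
    )
    avg_pages_per_visit = per_session["pages"].mean()
    avg_duration = (per_session["end"] - per_session["start"]).mean()

    print(f"{regular_users} of {AUTHORIZED_USERS} authorized users visit regularly")
    print(f"average pages per visit: {avg_pages_per_visit:.1f}")
    print(f"average session duration: {avg_duration}")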

The problem that faced the project team at the start of its work was how to find an appropriate and workable restriction of its domain and how to provide the best direction to its work. The team members were well aware of the fact that the success and failure of information systems (ISs) are matters of great complexity, linked to a great diversity of individual issues, and addressed in divergent ways in multiple IS development approaches and methodologies (e.g., see Currie & Galliers, 1999). The team decided first of all to focus on the acceptability of Comate to users and to direct the investigation towards reaching an understanding of the elements that determine acceptability. Following Grudin (1992) and Nielsen (1993; 1999), the acceptability of ISs can be split into social acceptability (standards, existence or absence of pressure to use the system, etc.; see also Venkatesh & Speier, 1999) and practical acceptability (costs, reliability, usefulness, etc.). The project team then decided to concentrate on the latter concept, because it felt that understanding matters of practical acceptability had greater urgency. The next question was how to define this domain and how to expand the definition into researchable issues and, eventually, questions to be asked of the actual and intended system users. The domain of practical acceptability is usually broken down into the concepts of usefulness and ease-of-use (e.g., Nielsen, 1993, 1999). When these two concepts surfaced in the initial meetings of the project team, they met with considerable enthusiasm, as team members were well aware that these concepts constitute the cornerstones of the well-known Technology Acceptance Model (TAM; see next section). The reason for this enthusiasm was that TAM was recognized as a well-established, robust model, thus providing the investigation with a strong theoretically based rationale for identifying relevant variables. The decision was quickly made to use the two concepts of usefulness and ease-of-use as the main vehicles for establishing the information needs vis-à-vis Comate.

TAM and TTF

As indicated above, the project team decided to start its work by exploring the concepts of perceived usefulness (PU) and perceived ease-of-use (PEU) in order to establish how a definition and elaboration of these concepts might enable it to identify reasons for the failure of Comate and specify the diagnostic questions that the team should answer. These two concepts are the key independent variables influencing the attitude towards IT and the intention to use IT, as specified by the Technology Acceptance Model (TAM; see Davis, 1989; Davis, Bagozzi, & Warshaw, 1989). PU is defined as "the prospective user's subjective probability that using a specific application system will increase his or her job performance within an organizational context" (Davis et al., 1989, p. 985). PEU refers to "the degree to which a person believes that using a particular system would be free from effort" (Davis et al., 1989, p. 985). The project team decided to study the vast literature on TAM to establish whether or not the model could provide an appropriate perspective for answering the evaluative and diagnostic questions Hans Broekmans had asked. The team found that TAM is a generally accepted and successful model (selective overviews of TAM research are, for instance, available in Lederer, Maupin, Sena, & Zhuang, 2000; Venkatesh & Davis, 2000), undoubtedly owing to its common-sense nature, appealing simplicity, and robustness (empirical tests invariably show significant relations between the independent and dependent variables in the model; compare Lederer et al., 2000; Szajna, 1996; Venkatesh & Speier, 1999). However, it was also noted that the explanatory power of the original model is not very high, not to say mediocre, with a typical value for explained variance of around 40% (Dillon, 2000). In addition, the team found multiple equivocalities with regard to the nature of the relationships and interactions between PEU, PU, and usage (for an overview, see Lederer et al., 2000), the importance of new constructs that some researchers introduced, and the various ways new variables appeared to affect the relationships among the original variables (e.g., Gefen & Straub, 1997; Veiga, Floyd, & Dechant, 2001). This, it decided, was bad news for the investigation, because it implied that TAM alone could not provide the firm ground it needed for detecting weaknesses in the current Comate and for directing the prospective diagnosis. A quote from Doll, Hendrickson and Deng (1998, p. 839) may serve as an accurate characterization of the general opinion of the team at that time, as these authors note: "Despite its wide acceptance, a series of incremental cross-validation studies have produced conflicting and equivocal results that do not provide guidance for researchers or practitioners who might use the TAM for decision making." From its study of the accumulated writings on TAM, the project team drew two conclusions. First, it felt the need for further elaboration of the two concepts of PU and PEU at the conceptual level in order to establish their constituent elements. Second, the team decided that an exploration of other explanatory variables in addition to PU and PEU was called for.
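In schematic form, the causal structure of TAM as commonly presented (Davis, 1989; Davis et al., 1989) links the two belief constructs to attitude (A), behavioral intention (BI), and actual usage. The additive, regression-style rendering below is a simplification for illustration only; the coefficients are unspecified, and error terms and external variables are merely indicated:

$$
\begin{aligned}
\mathrm{PU} &= \beta_{1}\,\mathrm{PEU} + \gamma\,X_{\mathrm{external}} + \varepsilon_{1},\\
\mathrm{A} &= \beta_{2}\,\mathrm{PU} + \beta_{3}\,\mathrm{PEU} + \varepsilon_{2},\\
\mathrm{BI} &= \beta_{4}\,\mathrm{A} + \beta_{5}\,\mathrm{PU} + \varepsilon_{3}, \qquad \mathrm{Usage} \approx f(\mathrm{BI}).
\end{aligned}
$$

The explained variance of around 40% mentioned above typically refers to the proportion of variance in intention or reported usage accounted for by such a set of equations.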

In an additional literature review of the broader class of technology acceptance models, the project team found particularly interesting ideas, useful for both these purposes, in the task-technology fit (TTF) model (e.g., Goodhue, 1995, 1998; Keil, Beranek, & Konsynski, 1995; Lim & Benbasat, 2000; Marcolin, Compeau, Munro, & Huff, 2000). The basic suggestion of TTF is that whether or not the qualities of the system will induce people to use it depends on the task concerned. As Goodhue (1995, p. 1828) puts it: "A single system could get very different evaluations from users with different task needs and abilities." While TTF is newer than TAM and has not attracted as much research attention, research results for this model likewise demonstrate its robustness and explanatory power (see references above). Just like TAM, TTF has a strong common-sense appeal in its suggestion that IT usage can only be understood if the reason to use the IT, i.e., the task, is included in the picture. The project team concluded that while TTF involves a different perspective on utilization behavior than TAM, the two models appear to be complementary rather than contradictory. For instance, it found that Mathieson and Keil (1998; see also Keil et al., 1995) had shown that neither task characteristics nor technology features in their own right can explain variations in PEU, but the interaction between the two classes can. TTF therefore influences or defines PEU. Similar suggestions have been made as to the relationship between TTF and PU (e.g., see Dishaw & Strong, 1999; see also Venkatesh & Davis, 2000: their "interaction between job relevance and output quality" closely resembles TTF). Research by Dishaw and Strong (1999) corroborates the fruitfulness of the idea of integrating the basic concepts of TAM and TTF, as these authors show that a combined TAM/TTF model outperforms an individual TAM model as well as an individual TTF model.
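One way to express the complementarity the team saw is to let task-technology fit enter the TAM structure as an antecedent of both belief constructs. The functional forms below are schematic and are not taken literally from the cited studies:

$$
\begin{aligned}
\mathrm{TTF} &= f(\text{task characteristics},\ \text{technology functionality}),\\
\mathrm{PEU} &= g(\mathrm{TTF}, \ldots), \qquad \mathrm{PU} = h(\mathrm{TTF},\ \mathrm{PEU}, \ldots),
\end{aligned}
$$

with PEU and PU then feeding into attitude, intention, and usage as in TAM.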

Rethinking Comate

The project team decided to use the combined insights of TAM and TTF to direct its evaluative and diagnostic work. It reached this stage of its investigation some three months after its inception, which was a bit later than anticipated mostly due to the large amount of IT acceptance literature it encountered. The task it faced at this stage was to find a useful interpretation and combination of the conceptual foundations of both models and the cumulative outcomes of studies applying the models. The team was well aware of the fact that these studies do not translate automatically into design directives for ISs. IT acceptance studies pay much attention to issues of significance in assessing the contributions of variables explaining IT usage, which was not the main concern of the investigation at TopTech. In one of the meetings where – again – numerous figures and statistics representing the explanatory power of the models crossed the table, Johan van Breeveldt stood up and exclaimed: "I am not the least interested in how things work in 90, 95 or 99% of the cases! My only interest is in finding out how things work in one case—ours!" These discussions led the project team to define the following agenda: first, it needed to specify and elaborate on the concepts of usefulness and ease-of-use within the context of TopTech's Consumer and Market Intelligence. Next, it needed to identify indicators to serve as hooks for two task realms: the diagnosis of the appropriate organizational context and the redesign and evaluation of the system. The third issue on the agenda concerned the translation of these indicators into questions to be put to selected staff. The fourth task it set was to identify, define, and specify other factors in addition to PU, PEU and TTF. As to this class of additional variables, the team adopted the pragmatic approach of not defining these beforehand but identifying them by inviting respondents to name such factors after considering PU-, PEU-, and TTF-inspired questions. The remainder of this section will focus on the first item on this agenda. The other items will be addressed in the next two sections, describing the data collection strategy and the outcomes of the empirical part of the investigation.

The challenge facing the investigators, given their decision to use TTF as a key component in the definition of perceived usefulness and ease-of-use, was to link the functionalities of Comate to a description of the tasks involved. They decided upon the following three-step procedure for meeting this challenge: the identification of an appropriate model of the tasks, the recognition of a suitable model of the technology functionalities, and the connection of both models. For the first step—identifying the classes of tasks involved in gaining and enhancing the intelligence of markets and customers—the team adopted the commonly accepted model of the Business Intelligence (or BI) Cycle (e.g., Kahaner, 1996; Pollard, 1999; Prescott & Miller, 2001). The BI cycle typically includes four stages: planning and direction (identifying the mission and policies of BI, etc.), collection (data collection and initial processing of these data), analysis (processing data so they can be used for BI-related decisions), and distribution (getting the analysis outcomes on the right desks). The first stage of the BI cycle, planning and direction, falls outside the scope of the Comate case, which only relates to the tasks of collection, analysis, and distribution. As to the second step in defining TTF—modeling the functionalities of the technology—the project team decided to build its elaboration on the 4C framework of groupware functionalities (Vriens & Hendriks, 2000), which is an adaptation of the 3C framework (Groupware White Paper, 1995). The four C's are circulation, communication, coordination, and collaboration. Circulation involves the distribution of information to a broader audience, not aimed at establishing some form of interactivity with that audience. Communication concentrates on the establishment of interaction between senders and receivers of information. Coordination refers to matters of sharing resources, sequential and other correspondence among the subtasks of a larger task, and overlap between individual tasks that are not constituent elements of some overarching task. Collaboration occurs when two or more people are working together on the same task. Functionalities of Comate implemented at that time or considered for future implementation may refer to any of these four classes.

While it had not taken the team long to come up with the three-step procedure and to decide that it would provide a good and useful structure for its definition work, it encountered some irksome problems when it got to the third step of the procedure: how were the BI cycle and the 4C framework to be connected, and where did the distinction between usefulness and ease-of-use come into the picture? Should these two concepts be treated on a stand-alone basis, leading to two separate applications of the whole procedure, or could they be included in one procedure through some mutual connection point? It took the team several rounds of sometimes heated discussions to work towards a solution to these problems. The breakthrough moment in these discussions occurred when Maartje Zijweg, one of the marketing specialists, proposed to distinguish between the content and process sides of the CMI tasks. This distinction, so she argued, would provide the basis for two different but related perspectives on tasks and their connection to the functionalities of the technology. Examining this connection from a task-content perspective would lead to the recognition of issues of usefulness. Starting from a task-process perspective would enable the team to recognize issues of ease-of-use in the connection between these tasks and the functionalities of the technology. The other team members applauded this suggestion.

There is no longer any way of telling who made the second suggestion that helped the project team out of its deadlock. Several team members claimed authorship of the suggestion, leading to endless back-and-forth discussions. This suggestion was to detach the distribution stage from the BI cycle, to reintroduce it within and between the other stages of the BI cycle, and to elaborate it using the 4C framework. The reinterpreted BI cycle that emerged as the result of this reshuffling is shown in Figure 1. The four C's come into the picture when the question is asked how an application such as Comate may support the tasks within the main classes of the BI cycle (the upper sequence in the figure) and between the stages of the cycle (the lower sequence in the figure). The concepts of circulation, communication, coordination, and collaboration then appear as an elaboration of the way in which connecting to other individuals with similar or related tasks may enhance the task performance of an individual. The four C's are four different ways in which these connections may materialize. They are also the classes of functionality in which the Comate application may prove valuable. When these functionality classes are studied in terms of whether they lead to more effective task performance, the usefulness of the application is at stake. Ease-of-use issues are at stake when the question is whether using Comate leads to more efficient task performance.

Figure 1: An Adaptation of the BI Cycle
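As an illustration of the structure that emerged, the sketch below (Python) lays out the grid of task links, 4C functionality classes, and the two evaluation perspectives around which, as described in the next section, the interview questions were organized. It is a hypothetical paraphrase of Figure 1, not the team's actual instrument; all labels are illustrative.

    # Minimal, hypothetical sketch of the evaluation grid implied by Figure 1:
    # the 4C functionality classes applied within and between the BI-cycle stages
    # that remain in scope, each examined from a task-content perspective
    # (usefulness: more effective task performance) and a task-process perspective
    # (ease-of-use: more efficient task performance).
    from itertools import product

    TASK_LINKS = [
        "within collection",
        "collection -> analysis",
        "within analysis",
        "analysis -> consumers",
    ]
    FUNCTIONALITIES = ["circulation", "communication", "coordination", "collaboration"]
    PERSPECTIVES = ["usefulness (effectiveness)", "ease of use (efficiency)"]

    # Each cell of the grid can later hold interview findings for that combination.
    evaluation_grid = {cell: [] for cell in product(TASK_LINKS, FUNCTIONALITIES, PERSPECTIVES)}

    # Example: record one finding for a single cell of the grid.
    evaluation_grid[("collection -> analysis", "circulation", "usefulness (effectiveness)")].append(
        "timeliness and clarity of delivery of external reports"
    )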

Data Collection Strategy

The data in the case study—both for the evaluation and the diagnosis/redesign steps—were collected by means of interviews with several classes of interested parties: actual users, designated users who appeared to use the system rarely or not at all, potential users who had not been included in the Comate-related efforts before, and system designers and content specialists at the central CMI department. As to the subclass of actual or potential users, the group of interviewees consisted of intermediate users and end-users of the system. Most of the intermediate users were marketing managers at the corporate, regional, or business-unit level. The end-users included product and marketing managers for individual classes of products and other staff members of the local consumer and market intelligence departments.

As to the content of these interviews, a distinction was made between the assessments of usefulness and ease-of-use. Research has shown that users are better equipped to establish beforehand what they want an individual system to do than how they want it to do that (e.g., see Venkatesh, 2000; Venkatesh & Davis, 1996, 2000). The project team saw this as a justification for separating the data collection procedures for the concepts of PU and PEU. As to the usefulness of Comate, the general direction of the interviews followed the sequence diagnosis—evaluation—redesign. As to ease-of-use, they followed the sequence evaluation—diagnosis—redesign. To identify factors other than those directly related to ease-of-use and usefulness, the wrap-up phase of each interview contained questions aimed at uncovering the relevance of such factors—both from scratch and on the basis of a list of named factors (such as awareness of the existence of the system). Separate questionnaires were prepared for intermediate and end-users.

The questions concerning usefulness were clustered into five domains of potential usefulness. The groupware functionality "circulation" was split into two domains: (1) circulation within the collection stage and in the connection of this stage with the subsequent analysis stage, and (2) circulation within the analysis stage and in the subsequent connection between the producers and consumers of these analyses. The other groupware functionalities "communication," "coordination," and "collaboration" were treated as separate domains, because Central CMI deemed their importance secondary to that of circulation. For each domain, the following subjects were addressed through a sequence of closed and open questions:

  • characterization of the tasks involved (e.g., domain 1: receiving sources, offering sources to others), specification of elements of the task, general evaluation of the task

  • identification of problems related to the task and its elements

    • designating such problems

      • identifying problems from scratch ("What problems occur?")

      • scoring listed problems ("Do these problems occur?")

      • recognizing problems that should be included in the list ("What other problems occur?")

    • assessing the importance of named problems

    • finding ways to address these problems and other issues to improve task handling

  • evaluation of Comate in relation to the problems and suggested solutions (for people familiar with the system only)

  • solicitation of ideas on potential (new) functionalities for an intranet application with reference to problems and suggested solutions.

The interviews on ease-of-use started from the evaluation of the current system ("How do you like the way the system works?") and worked towards diagnostic and redesign-oriented questions concerning ease-of-use ("How would you want the system to work?"). They opened with questions addressing issues at the global level of the system (registration procedures, home page of the system, instruction, manuals and utilities, general search facilities, switching between applications, etc.). The remainder of these interviews was organized around the five applications that made up the system (Market Data, Research Projects, etc.). Respondents were asked to establish the link with the groupware functionalities "circulation," "communication," etc.; to this end, they were presented with open questions relating individual functionalities to task elements (e.g., "Does the response button facilitate communication?") and open questions relating the overall application to task domains (assessing the ease-of-use of circulation, coordination, etc., via the applications Market Data, Research Projects, etc.). Ease-of-use related questions were put only to actual users of the system.

Results

The outcomes of the rounds of interviews held by the investigators are presented here following the structure of these interviews, which were organized around the five TTF-based domains of potential usefulness and the ease-of-use issues described above. The outcomes for these domains are then summarized, leading to the final picture of the perceived usefulness and ease-of-use of the system.

As to the first domain, the collection of reports to be circulated and their distribution to the analysts, the potential value of Comate appeared undisputed among those who were aware of the existence of the system, even if they themselves used it rarely or not at all. The main problems they faced as to the availability of sources appeared to be the timeliness of their delivery, the lack of clarity in delivery procedures, and the lack of time the end-users usually had at their disposal when facing tasks for which the use of sources was indispensable. While people recognized that solving these problems would involve more than the introduction of ICT, the general feeling was that Comate, with some adaptations, could do a good job in easing the pain. The criticisms of Comate leading to this call for adaptations included: lack of clarity in the organization of files and location of data, problems with the accessibility of data, problems of authorization, the awkwardness and limitations of the query and search facilities of the system, and the response time for some queries. One respondent observed that the external research bureau that had triggered most of the criticism, owing to delays and vague delivery dates and procedures, could do a much better job if it were to publish its reports in batches via Comate instead of in one go. At the same time, it should be noted that many people appeared to be unaware of the existence of the system, either because they had forgotten that they had been granted permission to use the system or because they had not been included in the circle of initial users in the first place. One respondent remarked: "The concept of ‘Intelligence’ these people at Central CMI appear to have would fit better in the CIA than in our company. If these people had wanted the existence of the system to remain a secret, they could not have done a better job." Several CMI staff members reported that, on several occasions, they had wanted to offer their sources on Comate, but had refrained from doing so. The reasons they mentioned varied: some of them had no idea whether or not this was allowed or even possible; others complained about the lack of transparency in the uploading procedures, especially where updating existing sources was concerned.

The second domain involves the equivalent of the first domain for the analysis stage of the BI cycle. It refers to questions as to how to support the inbound and outbound flows of sources in the analysis networks and the distribution of sources throughout these networks. Again, people recognized the potential value of Comate in this domain. They pointed to particular problems arising from the confidentiality of some of their analyses and from the difficulty of fully understanding the "ins and outs" of these analyses when applied in contexts other than the original. Several people mentioned that risks of misinterpretation and potential loss of status kept people from offering their analysis outcomes to others and from using the analysis work of others. In the words of one of the marketing managers interviewed: "What it really comes down to is sharing knowledge about how, when, and why a particular analysis is useful. Sharing knowledge is much more than distributing a set of PowerPoint files." Calls for adjustments, related to problems occurring in the processing of analyses, concerned several elements of these analyses: their number, form, time frame, and method. There were many complaints about the low availability of the work of other analysts, via Comate or other channels; this even led some people to question the raison d'être of Central CMI, as that department hardly offered any analyses. When analysis outcomes did become available, most of the time they appeared in a format that was not suited for use outside the context for which they had been generated. Long-term analyses in particular appeared to be lacking, which was considered unfortunate, as these could provide a kind of organization-wide backbone into which department-level analyses could be plugged. Several critical comments were inspired by doubts as to the scientific stature of analyses that had been put on Comate. In short, many comments involved the suggestion to reconsider Comate from the position of the potential consumers of these analyses rather than from the producers' viewpoint.

The third domain concerns the communication aspects within all stages of the BI cycle considered in the investigation. It was hardly surprising that the interviewers found multiple examples of communication in all stages of the BI cycle, between parties within and between departments, at the same geographical location and across locations, and concerning a wide variety of subjects and situations. Typical means used in these communications were the telephone, e-mail, fax, presentations, and face-to-face contact. But not Comate! Most people indicated that they experienced no insurmountable barriers to communication, apart from some occasional problems of time-zone differences that could well be bypassed by using e-mail. The main place where communication support had been introduced in Comate was the response button mentioned above. All the people who knew of the existence of Comate were also aware of the existence of this function in the system. Apparently, in the limited advertising for Comate, the response button had played a significant role in highlighting the potential surplus value of the system. The assessments of this surplus value were, without exception, negative. People indicated they never used it and had no intention of doing so in the future. They offered several explanations. Getting feedback from the authors of the document would simply take too long if they used the response button; they preferred to pick up the phone. Also, the fact that remarks entered via the response button would become publicly available met with much criticism: this, it was felt, could do undue harm to both the authors of the documents and the authors of the comments. Also, most questions people appeared to have did not concern an individual document but were of a more general nature. Several people noted that if any type of communication-support functionality would be useful in Comate, it would be the establishment of some form of electronic discussion group or database. Such a discussion platform might, for instance, support locating relevant documents, which people identified as a more relevant topic for communication in an electronic environment than discussing the contents of those documents.

The fourth domain addresses questions as to whether and how coordination within and between the stages of the BI cycle calls for support. While several people did experience problems of coordination—both within their own department and in their relationships with departments elsewhere—the general feeling was that using Comate, or an adapted version of the system, for solving these problems did not make much sense. As one of the interviewees commented: "What sense is there in offering a Porsche to a baby, if it can hardly walk? They had better spend their time on making the things that are available now work, instead of offering all kinds of exotic new things."

As to the fifth and final domain, which involved matters of collaboration within and between groups of collectors and analysts of CMI-related information, summarizing the opinions of people outside the Central CMI department was not very difficult, as these proved to be unanimous. None of the actual or would-be users of Comate saw the point of supporting collaboration through a computer system such as Comate. The general feeling was that supporting collaboration within their own departments through an application such as Comate was neither necessary nor even possible. Nor did they see the point of dressing up Comate with specific functionalities aimed at supporting collaboration outside their own departments: either they did not work together with people outside their own departments, or they did have collaborative relationships with people elsewhere but experienced no problems or challenges for which Comate could be valuable.

Summarizing the findings as to the usefulness of Comate, the conclusion was that the system was, or could be turned into, an appropriate system for circulating information, provided that all parties involved were willing to publish their sources. The primary function for which Comate appeared to be used was searching for information. Comate appeared not to be used as a communication system, and respondents indicated that they had no intention of using it as such in the future. The main reasons for this were a generally felt preference for personal contact, the resistance to broadcasting personal remarks to an anonymous audience, the fact that hardly any questions that people had were related to an individual document, and the tediousness of writing down questions. Comate was not considered useful as a coordination or collaboration system either, because respondents indicated they did not experience problems in these realms that the system could help resolve. As to the content of the system, a key element of usefulness, respondents stated that they missed information about competitors and distribution. They also asked for an increase in the number of analyses offered on Comate. Dedicated presentations linking several sources to a specific research goal were considered even more useful than sources by themselves, either as such or as templates for performing new analyses leading to new presentations.

As to ease-of-use, the interviews showed that the user-friendliness of Comate left a lot to be desired. The respondents complained that the overviews in the system were not clear. They did not consider the system to be attractive. Comate was even characterized as tedious and uninviting to work with. Also, several functions fell short: not a single respondent appeared to use the response button, and many people complained about the search functionality, which they considered below par and badly in need of improvement. Three facets of the system related to ease-of-use were mentioned in particular. First, the unclear and intricate registration procedure appeared to deter people from requesting access to the system. Second, updating, while recognized as crucial for the system to be useful, was generally considered a cumbersome procedure, particularly because it was unclear what the responsibilities of individual users and departments were regarding updating, and which documents specific users could and could not update. Third, respondents complained about deficient explanation facilities within the system, the lack of a help desk for handling individual problems, and the absence of short training courses. Giving such explanations, several respondents suggested, could clearly demonstrate that using Comate would save time and could, as a result, help convince people to supply their own information.


