Introduction

Computing platforms continue to increase in performance. These increases can be attributed to many changes, including more powerful processors, additional memory, and improvements in many other components central to the storage, processing, and display of information. As computers become more powerful, additional features that take advantage of this power are added to standard applications, such as word processors, spreadsheets, and presentation packages.

Ultimately, the goal of using computers is to make individuals and organizations more productive. In this chapter, we explore the relationship among the demands placed on users by their tasks, the performance of the computing platform being utilized, the application being used, productivity, and user perceptions. We explore these issues in the context of clerical tasks typical of the activities that lower-level organizational workers may engage in, as opposed to the tasks that managers working in decision-making environments may encounter. Our goal is to begin answering the following question in the context of traditional office tasks such as document creation and modification:

Under what circumstances does a more powerful computing platform enhance user productivity?

While users often express a preference for more powerful computers, and new features that utilize the increased computing power are added to applications, there is little evidence to support the claim that more powerful computers enable office workers to be more productive when completing common office tasks. Anecdotal evidence exists, but empirical results are limited. Therefore, our research incorporates several computing platforms to investigate the relationships among hardware performance, productivity, and user perceptions.

Toward this objective, we acknowledge that different tasks may place different cognitive and physical demands on the user. Therefore, we simultaneously address this issue by having participants complete several tasks that impose varied cognitive demands, with the current chapter focusing on a subset of these tasks. The effects of differing physical demands are addressed through a detailed analysis of the results.

The applications being utilized and the nature of the documents being manipulated can also affect the users' ability to take advantage of increased computing power. To begin exploring these issues, three common applications were used throughout this research. Results that differ among applications, when users complete similar tasks under the same working conditions, would confirm that differences between the applications also affect the interactions.

Finally, the physical environment users experience may affect their ability to complete tasks effectively. However, due to the number of factors being addressed in the current research and the relative consistency of office environments, the current research employed a single work environment designed to match that which office workers would typically encounter.

Background

For over 30 years, researchers have been investigating the relationship among system performance, user productivity, and user perceptions (e.g., Goodman & Spence, 1978, 1981; Weiss, Boggs, Lehto, Shodja & Martin, 1982; Barber & Lucas, 1983; Dannenbring, 1984; Lambert, 1984; Martin & Corl, 1986; Kuhmann, Boucsein, Schaefer & Alexander, 1987; Planas & Treurniet, 1988; Kuhmann, 1989; Schleifer & Amick, 1989; Schaefer, 1990; Thum, Boucsein & Kuhmann, 1995; Kohlisch & Kuhmann, 1997). Much of this research has used system response time (SRT) as the measure of system performance, where SRT was defined as the period of time between when a user submits a request to a computer and when the computer begins to display its response. Earlier studies were often conducted using time-sharing systems, where the delay between a user's action and the computer's response could be precisely defined and easily measured. However, significant changes in system architecture, combined with increases in processor speeds, have resulted in much shorter delays between user requests and system responses. Given the differences in performance between earlier time-sharing systems and current computers, it is unclear whether results from earlier studies still apply. More recent studies often utilize laboratory-based experiments. Laboratory studies provide valuable insights, but the environment in which tasks are completed may also affect productivity. Our research differs from previous efforts in that it explores the relationship among hardware performance, productivity, and user perceptions when modern PC-based computing platforms are used in realistic work environments.
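To make the SRT construct concrete, the following minimal Python sketch computes SRT from two timestamps; the timestamp values are hypothetical and serve only to illustrate the definition given above.

# Illustrative computation of system response time (SRT). The two
# timestamps are hypothetical values an instrumented system might record.
t_request = 12.340         # moment the user submits a request (seconds)
t_first_response = 12.525  # moment the computer begins to display its response

srt = t_first_response - t_request
print(f"SRT: {srt * 1000:.0f} ms")  # prints "SRT: 185 ms"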

While system performance may affect both productivity and user perceptions, researchers have also established a strong connection between task complexity and performance (e.g., Card, Moran & Newell, 1983; Vidulich & Wickens, 1986; Kieras, 1988; Ogawa, 1989; Olson & Olson, 1990; Conway & Engel, 1994; Davis & Davis, 1996; Anderson, Reder & Lebiere, 1996; Kohlisch & Schaefer, 1996). For example, tasks that require users to remember larger quantities of information, or to process more complex information (e.g., more complex formulas), often result in poorer performance. Therefore, our study also manipulates task complexity, as described below.

Definitions

An industry-standard benchmark, SYSmarkNT (Intel, 2002), was used to assess the power of the computing platforms. One computing platform is considered more powerful than another if it results in higher scores on this standardized test. By using a standardized measure, we enhance the ability of other researchers to interpret and replicate our results. This decision was also motivated by the numerous components that can affect hardware performance, the variety of alternatives available for each of these components, and the difficulty this would create if hardware performance were described based upon the individual components that make up the computer. Finally, our decision was motivated by the belief that subtle differences, which may not result in noticeable changes in the performance of common applications, may still affect user behaviors and perceptions.

User productivity was defined as the amount of work users could complete in a given amount of time. In the current study, productivity is assessed using two measurements: time and errors. First, we measure the time required to complete a single modification, with shorter task completion times corresponding to higher productivity. Second, we count the number of errors per modification, with each extra, missing, or incorrect alteration counting as a single error.
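To illustrate these two measures, the following Python sketch computes the completion time and error count for a single modification. The data layout, a mapping from document location to the text placed there, is our own assumption for illustration and is not part of the study's instrumentation.

def completion_time(start: float, end: float) -> float:
    # Seconds required to complete one modification; shorter means higher productivity.
    return end - start

def error_count(expected: dict, observed: dict) -> int:
    # Each extra, missing, or incorrect alteration counts as a single error.
    errors = 0
    for location, text in expected.items():
        if location not in observed:
            errors += 1                  # missing alteration
        elif observed[location] != text:
            errors += 1                  # incorrect alteration
    errors += sum(1 for location in observed if location not in expected)  # extra alterations
    return errors

# Example (hypothetical locations): one required change made incorrectly,
# plus one extra change, yields two errors.
expected = {"cell_B2": "1500", "para3_word7": "quarterly"}
observed = {"cell_B2": "1500", "para3_word7": "quartely", "para5_word2": "new"}
print(error_count(expected, observed))  # prints 2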

The cognitive demands users experience as they complete computer-based tasks can be difficult to isolate when work is completed in real-world environments. In this chapter, we report experimental results for six tasks that require users to modify existing documents. Two tasks were designed for each of three applications: Word, Excel, and PowerPoint. For each application, users complete one low-demand task and one high-demand task. Each task requires users to delete specified items and insert new items at predefined locations. The low-demand tasks involve modifying an average of 1.1 items (e.g., words, numbers) while the high-demand tasks involve an average of 3.5 items. Additional details are provided below in the section describing the experimental tasks.

Each task requires users to navigate to a predefined location and subsequently insert or remove some text. As a result, the two primary activities in which users engage can be described as navigation and data entry. The data gathered during this study allows user activities to be divided into three groups: data entry, navigation, and miscellaneous other activities. Data entry includes all activities that result in text being inserted or deleted. Navigation includes activities that alter which text is currently selected (e.g., moving the cursor). Finally, any activities that could not be definitively classified as either data entry or navigation are placed into the miscellaneous other category (e.g., pressing function keys, menu selections). It is important to note that the amount of navigation required does not differ between the low- and high-demand tasks, as each task requires a single navigation action (i.e., moving the cursor from the current location to the text that must be modified). In contrast, high-demand tasks do require more data entry than low-demand tasks. Additional details are provided in the discussion of dependent variables in the section on Experimental Design.
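This three-way grouping can be illustrated with a short Python sketch. The event names below are hypothetical placeholders for whatever vocabulary an actual activity logger would produce.

from collections import Counter

# Hypothetical event vocabularies; a real logger would define its own.
DATA_ENTRY = {"insert_char", "delete_char", "paste", "cut"}          # text inserted or deleted
NAVIGATION = {"arrow_key", "mouse_reposition", "page_up", "scroll"}  # alters the current selection

def classify(event: str) -> str:
    # Assign a logged event to data entry, navigation, or miscellaneous other.
    if event in DATA_ENTRY:
        return "data_entry"
    if event in NAVIGATION:
        return "navigation"
    return "other"  # e.g., pressing function keys, menu selections

# Example: tally the activity mix for one logged task.
session = ["mouse_reposition", "delete_char", "insert_char", "function_key"]
print(Counter(classify(e) for e in session))
# Counter({'data_entry': 2, 'navigation': 1, 'other': 1})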


