Research Methodology and Results

Data gathering for our study was carried out in two phases. The first phase utilized a survey instrument, while the second phase was an in-depth case study using grounded theory techniques. Data gathering consisted of unstructured and semi-structured interviewing, documentation review, and observation. This triangulation across various data collection techniques is beneficial because it provides multiple perspectives and yields stronger substantiation of constructs (Orlikowski, 1993).

Phase One: The objective of this phase was to measure the effectiveness of the computer systems at Otis while also measuring the level of end user satisfaction with those systems. As mentioned earlier, the two most common instruments used to measure satisfaction with end user computing are the Doll & Torkzadeh (1988) instrument and the Ives, Olson & Baroudi (1983) instrument. The Doll & Torkzadeh instrument was developed to measure the "computing satisfaction" of an end user with a specific application. In our research, the intent was to measure end users' overall satisfaction with end user computing, not with a specific application. We therefore used a more general measure, the short form of the User Information Satisfaction (UIS) questionnaire originally developed by Ives et al. (1983) and later modified by Mirani & King (1994) for the EUC context (see Appendix A). A service-quality instrument developed by Remenyi & Money (1994) was used to establish the effectiveness of the computer service and to identify key problem areas with EUC (see Appendix B). Several additional questions were included to gather information on the users' self-rated computing expertise, prior computing experience and training, and current computing usage patterns.

Phase Two: Fourteen respondents, a mix of satisfied and dissatisfied users identified during Phase One, were interviewed. Techniques for qualitative analysis (Miles & Huberman, 1994) and grounded theory (Glaser & Strauss, 1967; Martin & Turner, 1986; Strauss, 1987) were employed in developing the descriptive categorizations used for the technological frames of reference. The software package NUD*IST was used to assist in the content analysis of the interviews.

Analysis of Data—Phase One

The site consisted of approximately 85 end users running a variety of IBM-compatible PCs on a Novell NetWare LAN. The majority of the end users used Microsoft applications in a Windows environment. Fifty-seven survey instruments were returned, yielding a response rate of 67%. The purpose of the survey was to develop a general profile of the end users, determine their support needs, and rate the performance of the IS department in meeting those needs. Most respondents had between five and 10 years of experience with personal computers and two to six years of experience in a networked environment. Most respondents used their computers three to six hours a day and rated their general level of PC expertise in the intermediate to advanced range. The respondents were also asked to rate their level of expertise for the applications they use at Otis. The applications with the highest number of users (Word Processing, Spreadsheets, Electronic Mail, and Presentation Software) also had the highest mean levels of expertise. In contrast, the applications used by fewer people (Databases, Internet Browser, Electronic Fax, and Flowcharting) showed lower mean levels of expertise. The general user profile is summarized in Table 2.

Table 2: General User Profile (Otis Elevator Tre Ltd)

| Number of Respondents                        | 57    |
| Gender (if supplied)                         |       |
|   • Male                                     | 19    |
|   • Female                                   | 5     |
| Job Function (if supplied)                   |       |
|   • Manager                                  | 11    |
|   • End-User                                 | 15    |
| Years PC Experience (Mean)                   | 8.24  |
| Years PC Network Experience (Mean)           | 4.43  |
| Hours/Day on PC                              | 5.2   |
| Self Efficacy Rating (General PC Expertise)  |       |
|   • Beginner                                 | 1.9%  |
|   • Novice                                   | 7.4%  |
|   • Intermediate                             | 38.9% |
|   • Advanced                                 | 48.1% |
|   • Expert                                   | 3.7%  |

The respondents were asked to evaluate 22 separate support items (see Appendix B). Each item was first rated on a five-point Likert scale in terms of its importance to the user in the performance of his or her job. The same items were then rated according to the performance of the IS department in providing them. The difference between the performance score and the importance score indicates the effectiveness of the IS department in performing the various functions. A zero gap indicates an exact match between importance and performance. A positive gap indicates that the IS department is committing more resources than are required, whereas a negative gap indicates that performance falls short of importance; that is, the IS department is underperforming. Since the gap is calculated by subtracting the importance score from the performance score, a positive gap implies user satisfaction with that item, while a negative gap implies user dissatisfaction with that item.

Analysis of this dataset surfaced several significant issues. A simple ranking of the data by order of importance indicated which specific support areas were most important to the users and which were considered less important. Service quality gap analysis indicated which support areas were being delivered satisfactorily or unsatisfactorily, and correlation of the service quality gap with satisfaction indicated which support factors affected end user satisfaction. Each of these issues was important on its own, but when combined they provided a much richer picture of the support environment at any organization.

A basic analysis of the importance and performance scores was performed. The mean and standard deviation were calculated for each of the 22 attributes, along with the mean perceptual gap score and its standard deviation for each item. The gap was calculated by subtracting the importance score from the performance score. The correlations between the gap scores and the overall satisfaction scores were then determined. The items in Table 3 are listed in rank order of importance.
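The computation itself is straightforward. The following is a minimal sketch, assuming the survey responses sit in a pandas DataFrame with columns named importance_1..importance_22, performance_1..performance_22, and satisfaction; these column names, and the use of Python, are illustrative assumptions rather than the procedure actually used in the study.

```python
# Illustrative sketch of the gap analysis; column names ("importance_i",
# "performance_i", "satisfaction") are assumptions for demonstration only.
import pandas as pd


def gap_analysis(df: pd.DataFrame, n_items: int = 22) -> pd.DataFrame:
    """Compute, for each support item, the mean/SD of importance and
    performance, the perceptual gap (performance - importance), and the
    correlation of that gap with overall satisfaction."""
    rows = []
    for i in range(1, n_items + 1):
        imp = df[f"importance_{i}"]
        perf = df[f"performance_{i}"]
        gap = perf - imp  # positive = over-delivery, negative = under-delivery
        rows.append({
            "item": i,
            "importance_mean": imp.mean(),
            "importance_sd": imp.std(),
            "performance_mean": perf.mean(),
            "performance_sd": perf.std(),
            "gap_mean": gap.mean(),
            "gap_sd": gap.std(),
            "gap_corr_satisfaction": gap.corr(df["satisfaction"]),
        })
    # Rank the items by mean importance, as in Table 3
    return (pd.DataFrame(rows)
            .sort_values("importance_mean", ascending=False)
            .reset_index(drop=True))
```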

Table 3: Service Quality Gap Analysis - Listed by Rank Order Importance Score

| # | Descriptor | Imp. Rk | Imp. Mean | Imp. SD | Perf. Rk | Perf. Mean | Perf. SD | Gap | Gap SD | Gap Corr. w/ Satis. |
| 10 | Data security and privacy | 1 | 4.25 | .815 | 7 | 3.855 | .678 | -.400 | 1.116 | .1333 |
| 13 | Fast response time from IS staff to remedy problems | 2 | 4.20 | .911 | 14 | 3.545 | .919 | -.611 | 1.235 | .2771** |
| 6 | High degree of tech. competence of IS staff | 3 | 4.10 | .867 | 6 | 3.857 | .554 | -.250 | 1.014 | .1727 |
| 11 | System's response time | 4 | 4.07 | .858 | 9 | 3.764 | .769 | -.296 | 1.002 | .2167 |
| 5 | Low percentage of hardware and software down time | 5 | 4.05 | 1.017 | 2 | 3.909 | .727 | -.148 | 1.035 | -.0934 |
| 1 | Ease of access for users to computing facilities | 6 | 4.03 | .962 | 1 | 3.964 | .571 | -.073 | .959 | -.0161 |
| 19 | Ability of the system to improve personal productivity | 7 | 4.01 | .782 | 8 | 3.836 | .688 | -.185 | 1.047 | -.1489 |
| 7 | User confidence in system | 8 | 4.01 | .924 | 3 | 3.893 | .412 | -.125 | 1.046 | .0588 |
| 16 | Positive attitude of IS staff | 9 | 3.89 | .809 | 11 | 3.745 | .645 | -.130 | 1.133 | .2720** |
| 12 | Extent of user training | 10 | 3.81 | .779 | 18 | 3.182 | .819 | -.604 | 1.115 | -.0942 |
| 9 | System responsiveness to changing users' needs | 11 | 3.76 | .769 | 13 | 3.585 | .865 | -.189 | 1.241 | -.0610 |
| 17 | Users' understanding of the system | 12 | 3.65 | .844 | 12 | 3.618 | .652 | -.037 | .990 | .2017 |
| 3 | New software upgrades | 13 | 3.64 | .841 | 5 | 3.875 | .689 | .232 | 1.027 | .2457* |
| 2 | New hardware upgrades | 14 | 3.48 | .831 | 10 | 3.750 | .815 | .268 | 1.152 | .2346* |
| 8 | Degree of personal control users have over their systems | 15 | 3.44 | .933 | 4 | 3.889 | .697 | .407 | 1.125 | .2979** |
| 15 | Flexibility of the systems to produce prof. reports | 16 | 3.41 | 1.013 | 16 | 3.364 | 1.238 | -.074 | 1.226 | -.1677 |
| 18 | Overall cost effectiveness of IS | 17 | 3.41 | .956 | 17 | 3.327 | 1.150 | -.078 | 1.246 | .1124 |
| 22 | Standardization of hardware | 18 | 3.30 | 1.086 | 15 | 3.444 | 1.093 | .132 | 1.301 | -.0039 |
| 20 | Documentation to support training | 19 | 3.30 | .836 | 19 | 3.154 | .894 | -.176 | 1.108 | .1451 |
| 14 | Participation in planning of the systems requirements | 20 | 3.03 | .962 | 20 | 3.111 | 1.093 | .075 | 1.253 | .0037 |
| 21 | Help w/ database or model development | 21 | 2.98 | .991 | 22 | 2.585 | 1.232 | -.423 | 1.433 | -.0545 |
| 4 | Access to external databases through the system | 22 | 2.70 | 1.110 | 21 | 2.611 | 1.250 | -.094 | 1.260 | .0146 |

** Correlation is significant at the 5% level.
* Correlation is significant at the 10% level.

Only five support items scored a positive gap, indicating satisfaction with those items. The item with the highest positive gap was "degree of personal control" (gap of .407). The other four items that indicated user satisfaction were "new hardware upgrades" (.268), "new software upgrades" (.232), "standardization of hardware" (.132), and "participation in planning system requirements" (.075). The remaining 17 items had a negative gap, indicating underperformance of the IS department, or dissatisfaction. The items with the largest negative gaps (indicating the highest levels of dissatisfaction) were "fast response time from IS staff" (-.611), "extent of user training" (-.604), and "help with database or model development" (-.423).

The service quality gaps for three support items (fast response time from IS staff, positive attitude of IS staff, and degree of personal control) were positively correlated with satisfaction (r = .277, .272, and .298 respectively, significant at the 5% level). These support items therefore have the strongest influence on satisfaction in this environment.
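For readers who wish to check these significance flags, a conventional approach is a two-tailed t-test on the Pearson correlation. The sketch below is an assumption about the test and the usable sample size (roughly 55 responses); the chapter does not report its exact procedure.

```python
# Rough check of the significance flags in Table 3. The choice of test
# (two-tailed t-test on the Pearson r) and the sample size (~55 usable
# responses) are assumptions, not the authors' reported procedure.
from math import sqrt

from scipy import stats


def pearson_r_pvalue(r: float, n: int) -> float:
    """Two-tailed p-value for a Pearson correlation r from a sample of size n."""
    t = r * sqrt(n - 2) / sqrt(1 - r ** 2)
    return 2 * stats.t.sf(abs(t), df=n - 2)


# r = .277 -> p ~ .04 (5% level); r = .246 -> p ~ .07 (10% level),
# consistent with the ** and * flags in Table 3.
print(pearson_r_pvalue(0.277, 55), pearson_r_pvalue(0.246, 55))
```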

The results of the Mirani & King EUC satisfaction portion of the survey showed that as a whole, there was a high level of EUC satisfaction at Otis Elevator (7-point Likert scale, Mean 5.32, SD 1.07). With a score of 4 indicating neither satisfied nor dissatisfied, we have interpreted scores below 4 to indicate varying degrees of dissatisfaction and scores above 4 to indicate varying degrees of satisfaction. One individual scored below three, indicating a high degree of dissatisfaction. Six individuals scored between three and four, indicating a lesser degree of dissatisfaction. Twenty individuals scored between four and five, indicating a lesser degree of satisfaction. Twenty-two individuals scored between five and six, indicating a higher degree of satisfaction, and seven individuals scored above six, indicating the highest degree of satisfaction. One respondent did not answer this portion of the survey.
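As an illustration of this banding, the following sketch groups mean satisfaction scores into the same categories. The column name "satisfaction", the use of pandas, and the handling of exact boundary values are assumptions made for the example, not details taken from the study.

```python
# Illustrative banding of mean EUC satisfaction scores (7-point scale,
# 4 = neutral). Column name and boundary handling are assumptions.
import pandas as pd


def satisfaction_bands(scores: pd.Series) -> pd.Series:
    """Group mean satisfaction scores into the bands described in the text."""
    bins = [1, 3, 4, 5, 6, 7]
    labels = [
        "high dissatisfaction (<3)",
        "mild dissatisfaction (3-4)",
        "mild satisfaction (4-5)",
        "high satisfaction (5-6)",
        "highest satisfaction (>6)",
    ]
    return pd.cut(scores.dropna(), bins=bins, labels=labels, include_lowest=True)


# Usage: satisfaction_bands(df["satisfaction"]).value_counts() would reproduce
# the 1 / 6 / 20 / 22 / 7 split reported for the 56 respondents who answered.
```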

Analysis of Data—Phase Two

While the overall score from the user information satisfaction portion of the survey revealed a generally high level of satisfaction, gap analysis of the 22 support items clearly showed specific support areas where there was user dissatisfaction. The interviews therefore concentrated on those specific areas, and the views of the end users were gathered to assist in assessing their technological frames of reference. All seven respondents who scored below four and seven randomly selected end users who scored above four were interviewed. Analysis of the interviews resulted in the development of the technological frames of reference of the satisfied and the dissatisfied user.

The principal author conducted the interviews, which were tape recorded, transcribed, and systematically examined for patterns in the frames of satisfied and dissatisfied users. The initial content analysis was carried out through open coding (Corbin & Strauss, 1990) of the interview transcripts by a research team of four colleagues, and an initial set of patterns (categories) emerged from this analysis. The transcripts were then formatted to conform to the requirements of the qualitative analysis software package NUD*IST. In NUD*IST, the initial categories were organized into a hierarchical tree structure, and the transcripts were closed-coded and documented. The software's search function was used to interrogate the transcripts in order to verify and substantiate the categories identified during open coding. As further questions were translated into queries, a second set of categories emerged that addressed aspects of the technology not covered in the initial analysis. Category frameworks were then iteratively developed, applied to the data, and revised. The tree structure continued to grow and was refined as additional nodes were added and nodes without theoretical significance were deleted; several nodes were also merged as the categories crystallized and became clearer.

The results revealed three basic categories that could be used to group satisfied and dissatisfied users: type of learner, their view of the role of the PC, and the complexity level of the applications they used. Using these categories, Table 4 outlines the frames of the two groups: satisfied and dissatisfied users.

Table 4: Satisfied vs. Dissatisfied Users
 

|                   | Type of Learner    | Role of PC     | Complexity of Application |
| Satisfied User    | self-directed      | task completer | more complex              |
| Dissatisfied User | non self-directed  | task enhancer  | less complex              |

In general, the satisfied user is a self-directed learner who continually seeks out additional learning opportunities, views the PC simply as a tool for getting work done, and uses more complex applications. Conversely, the dissatisfied user does not actively seek out additional learning opportunities and generally takes IT-related courses only when they are required for the job. This user views the PC as a tool that should enhance job performance and contribute to productivity, and generally uses less complex types of applications.

These differing frames were not easy to explain at first. In particular, we had originally assumed that a satisfied user would view the PC as a "task enhancement" tool rather than a "task completion" tool, and would not necessarily be the one using more complex applications; the findings contradicted these assumptions. However, because the satisfied users viewed the PC as a tool to "get the job done" as opposed to "getting the job done better," they expected less of the PC and were therefore more easily satisfied. Research on users' expectations of technology finds that users who hold realistic expectations of its benefits tend to be more satisfied (Compeau et al., 1999). In addition, because they used applications with a higher complexity level, they were more likely to raise complex technical queries, and their interactions with the MIS staff were at a "higher" technical level than those of users with less complex applications. From the analysis of the MIS staff interviews as well as the user interviews, we concluded that the MIS personnel had little patience with "routine" queries; in contrast, when interactions with the MIS support staff dealt with technical issues, the staff responded more readily and with a more positive attitude. The satisfied users thus had a self-directed learning style that facilitated the adoption of more complex applications and, as a result of using those applications, their interactions with the MIS staff were frequent and positive, which in turn produced a more satisfied user.

The technological frame of reference of the dissatisfied user can be explained in a similar fashion. Because these users were not self-directed and took only the courses required for their jobs, they did not adopt more complex applications. Their interactions with the MIS staff were at a more "routine" level, and these interactions were not positive. Several users who shared this frame of reference expressed a high level of dissatisfaction with the "superior" attitude of the MIS staff when routine queries were posed to them. In addition, these users expected more from the PC, viewing it as a "task enhancing" tool that should contribute substantially to the productivity of their jobs. Because they expected more of the PC, they were more easily dissatisfied.


