Designing the Tests

Service trials were run from March to July 1998. The setup was arduous: it included installing client/server systems, selecting and training end users, defining and implementing the service provisioning chain, issuing equipment to end users, and planning questionnaires, statistical methods, and help desk services. Several company-internal pretrials were therefore conducted to fine-tune the process.

The trials were organized as a series of test periods about two weeks long. The total number of participating end users was 70. We wanted as many users as possible for statistical reliability, but the number was limited in practice by how many people could be managed during the trials; 70 was judged sufficient for our purpose (a rough sampling-error sketch follows the selection criteria below). Since the number of users was limited, and segmentation by country was already a given, we agreed to focus on one user profile instead of delving into user segmentation according to gender, age, profession, and other criteria. In line with the business context of the services to be tested, the following target profile of an end user was defined:

  • Male; in the product development units of the participating companies, men appeared to dominate test user selection.

  • Midrange age, mostly between 20 and 40 years.

  • High level of mobility.

  • High level of education.

  • High demand for communication.

Internet experience and GSM usage were also part of the selection criteria. Thus, the whole end-user view was very business-focused, reflecting the developers’ expectations of the likely order of adoption for mobile multimedia services.
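
As a back-of-the-envelope illustration of the sample-size trade-off, consider the sampling error implied by a 70-person panel. The sketch below is our own and was not part of the trial documentation; it applies the standard worst-case confidence-interval formula for a proportion:

    import math

    def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
        """Half-width of a 95% confidence interval for a proportion.
        p = 0.5 is the worst case (largest error); z = 1.96 is the
        normal-approximation critical value for 95% confidence."""
        return z * math.sqrt(p * (1.0 - p) / n)

    # For a 70-person panel, a questionnaire item split 50/50
    # carries a margin of roughly 12 percentage points:
    print(f"{margin_of_error(70):.1%}")  # ~11.7%

A panel of this size can reveal broad preferences and major problems, but not fine-grained differences between user segments, which is consistent with the decision to use a single target profile.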

Typically, the first series of precommercial equipment and services has not gone through field testing, and instability is likely given the varying and demanding conditions of the mobile environment. We therefore began with a test approach that favored technically experienced users: participants were selected from those with experience of the Internet, mobile phones, and related services, so they would be more prepared to cope with technically demanding situations (e.g., by contacting help desk services) rather than quitting the trial in despair. In the first phases of the trials, the subjects were personnel of the participating mobile operators; in later phases, company-external users were included as test subjects.

We applied user-centered design approaches to improve the system quality as experienced by the end user. A number of design methods were chosen to support the development in various design, evaluation, and user support tasks. Focus group discussions were conducted in the early development phases to provide an indication of users’ service preferences. Around 40 to 50 rough service ideas were then generated in several brainstorming sessions. Storyboards were used for more detailed service concept design after the core ideas had been identified. Content providers developed use cases to help us understand the users’ tasks on a step-by-step level. Low-fidelity prototypes were constructed to evaluate early UI versions for service access, and usability tests were carried out to assess the solutions.

In addition to designing user interface solutions, usability evaluation criteria were developed to suit the specific needs of the project. Examples of usability criteria were user interface self-descriptiveness and feedback, controllability, and flexibility; efficiency, safety (mental, physical, and property), error-freeness, and error recovery; clear exits; and functional consistency. However, the field tests in MOMENTS were not intended primarily for optimizing the usability of a new device, but for assessing the overall acceptability of novel technologies and new services provided through them.

The user’s perception of the quality and acceptability of a mobile service is always influenced by a multitude of factors. First, there are the mobile client, the applications in it, the wireless connection to a server, the service presentation, and the content. On top of these come the real performance of the system and the expectations the users have of it.

Expectations about brand-new technologies are influenced by another set of issues. Personal experience with similar or comparable technologies forms an obvious base of reference. (In the case of wireless services, the user’s experience with the wired Internet was a natural and relevant comparison.) Personal needs, whether professional or private, vary between individuals and set up different criteria for acceptability. The type of service itself has some effect on how it will be evaluated; information services, for example, are assessed on a different basis from entertainment services. Finally, there are market-driven expectations about services, which are conditioned by their pricing and the reputation of the provider. We needed a conceptual model to link all these factors affecting service quality (see Figure 10.6).

Figure 10.6: Quality evaluation model of MOMENTS services.9

At this stage we had no way of knowing which quality dimensions would be the most influential, so the project needed to address a wide scope of customer experiences. Our challenge was made all the more difficult because we did not know whether quality criteria would vary across different services and test sites. The following list of evaluation criteria provides some idea of the complexity of assessing mobile service quality.

  • Usefulness of the service to the consumer, and the concomitant ability to tailor the service to personal needs and link it to the user’s business and private objectives. Here we asked, “Is the content what you needed? Does the service provide something just for you?”

  • The amount, versatility, and accuracy of content can be judged only in the context of the particular service. In a location service, for example, accuracy may be a critical quality criterion. The timeliness of the information provided by the service is another potentially important dimension related to accuracy.

  • Response times in interactions are contingent on connection establishment, information download, and release time. Most people have expectations for the speed of service based on their experiences with the wired Internet. Mobility can be an acceptable excuse for somewhat slower response times, but there is no doubt a limit that cannot be exceeded. We were also aware that the perception of acceptable delay varies with the type of interaction; a real-time game application, for example, does not tolerate the same delays as an email service (see the sketch following this list).

  • Problems in connection reliability can undermine an otherwise perfect service. A mobile service in a business context has to be perceived as reliable; if it is not, users will cleave to conventional information sources. The usage patterns that do develop may take the service in a completely unexpected direction.

  • The quality of content presentation in different media formats such as audio, video, animation, images, text, and graphics will probably be compared to expectations set by other known media. In addition to the user’s previous media experiences, synchronization of different media formats affects the overall quality perception.

  • The way the user interface is designed can influence the user’s perception of security and privacy in a service, but the public image of the company providing that service also plays a role in determining that person’s level of trust in the service.

  • The mobility of the terminal in practical usage situations is an important assessment criterion, but differences between the test set and late-model compact terminals can cause biases that need to be acknowledged.

  • How easy it is to find required information and how appropriate the access methods are—these factors cut close to the core of basic usability.
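
To make the response-time criterion above concrete, the following sketch decomposes perceived delay into the three phases named earlier and checks the total against a per-service threshold. It is our own illustration; the service names and threshold values are assumptions, not figures measured in the trials:

    # Illustrative thresholds only: acceptable end-to-end delay in seconds.
    THRESHOLDS_S = {
        "real_time_game": 0.5,
        "information_browsing": 10.0,
        "email": 30.0,
    }

    def perceived_delay(connect_s: float, download_s: float, release_s: float) -> float:
        """Total delay the user experiences for one interaction."""
        return connect_s + download_s + release_s

    def acceptable(service: str, connect_s: float, download_s: float, release_s: float) -> bool:
        return perceived_delay(connect_s, download_s, release_s) <= THRESHOLDS_S[service]

    # The same measured delay can pass for email yet fail for a game:
    delay = (1.2, 6.5, 0.3)  # connection, download, release (seconds)
    print(acceptable("email", *delay))           # True
    print(acceptable("real_time_game", *delay))  # False

The point of the decomposition is that a single overall “speed” figure hides which phase dominates, which matters when deciding what to optimize and what to excuse as an inherent cost of mobility.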

The evaluation pack for MOMENTS was designed to give us a good idea about the acceptability of services with reference to these criteria in broad strokes, rather than going into deep detail with any individual issue. Usability data in all three trial countries—Italy, Germany, and the United Kingdom—were gathered using pretrial and post-trial questionnaires, face-to-face interviews, and questionnaires filled out by help desk personnel. The effective use of those approaches called for a commitment from all the partners in the project. Since MOMENTS was a large international project with multiple partners at different sites, the responsibility for usability assessment was spread over several locations and development teams. Nokia prepared the usability and quality evaluation guidelines for the project, and operator partners carried out the fieldwork. The mobile operators applied the guidelines to suit local conditions (e.g., operator-specific requirements). The original objective was to gather comparable results from all participating countries, but it turned out that the evaluation approaches had to be localized just like the services themselves.


