Index


S

Sample

bias reduction, 333–335

defined, 328

random selection JavaScript, 337–339

sampling frame, 328–331

size of, 331–333

Sampling rate for diary studies, 370

Satisfaction surveys

questions, 308

timing for, 305

Scenarios

for contextual inquiry, 166

creating with user profiles, 149–150

defined, 149

Schedule

for contextual inquiry, 162

for focus groups, 207, 208

for recruiting, 84

for research plan, 65–75

for specialists, 456

for surveys, 306–307

for task analysis, 183

for usability testing, 264, 267

for user profiling, 131

Schedule for research plan, 65–75

after release, 67, 68

asking questions across multiple projects, 73–74

design and development, 67, 68

example, 79–80

existing development system and, 65–66

organizing questions into projects, 68–73

requirement gathering, 66–67, 68

when starting in the beginning of the development cycle, 66–67

when starting in the middle of the development cycle, 67–68

Scheduling research participants, 103–109

building and space preparation, 113

confirmation and reconfirmation, 106–108

for contextual inquiry, 164–165

double-scheduling, 110–111

invitations, 104–106

no-shows, avoiding, 108, 110–111

participant-driven scheduling, 104

scheduling windows, 103–104

sequence of tasks, 104

"snow days" and, 111

teenagers, 112

See also recruiting

Scheduling service example

creation, 38, 39, 40, 41

cycle 1, 37–38

cycle 2, 38–39

cycle 3, 39–40

cycle 4, 40–41

cycles 5, 6, and 7, 41–42

cycle 8, 42

definition, 38, 39, 40, 41

examination, 37, 38, 39–41

Scheduling service iterative development example, 36–42

Schroeder, Will, 267

Scope of focus group research, 207, 213–214

Screeners for recruiting, 95–103

email screeners, 102–103

general rules, 95–96

importance of, 95

telephone screener example, 97–102

for usability testing, 267

Script for usability testing, 275–285

competitive usability testing, 429

evaluation instructions, 279–280

first impressions, 280–281

introduction, 275–277

preliminary interview, 277–279

tasks, 281–284

wrap-up, 284–285

Search engines

removing hits from log files, 415

statistics in log analysis, 410

Seating order for focus groups, 224

Self-promotion. See promotion

Self-reporting, issues for, 385

Self-selection bias in surveys, 334, 336

Sell-throughs, 22

Sequence models, 180

Sequences

contextual inquiry regarding, 171

for tasks in usability testing, 271

Session-based statistics in log analysis, 411

Session cookies

clickstream analysis using, 408, 413–414

defined, 407

expiration times, 407

identity cookies vs., 407–408

session-based statistics using, 411

Severity

organizing customer feedback by, 400

rating observations in reports by, 489

user severity measures in usability testing, 296

Shared vision, as iterative development benefit, 33–34

Sharing user profiles, 153

Shiple, John, 420

Shopping cart abandonment, clickstream analysis of, 413–414

Snyder, Carolyn, 111, 236

Software

for coding data, 401

EZSort, for cluster analysis, 196–198

free Web survey tools, 325

log analysis tools, 414–418

Sound

microphone types and placement, 225–226

videotaping and, 174, 286

Spam, 91

Sparklit ad-based polling service, 325

Specialists

contacting by email and phone, 454

defined, 447

finding, 449–454

guidelines for managing, 456–457

hiring, 447–457

for independent analysis, 441

RFPs (requests for proposal) for, 450–454

setting expectations, 454–457

timing for using, 448–449

for traffic/demographic information, 442

See also consultants

Spiral Development, 32

Spool, Jared, 267

Stakeholders

collecting issues from, 59–60

conflicting goals of, 60

creating a user-centered corporate culture and, 511–512

identifying, 59

participatory design by, 468–469

presenting issues as goals, 60–61

for Web sites, 17

Standard deviation

calculating, 351

confidence interval, 351–352

sample size for 95% confidence, 332

Standard error

calculating, 350

confidence interval and, 351–352

decreasing, 350–351

sample size for 5% standard error, 332

Statement of informed consent. See informed consent statement

Statistical significance, 501

Stealth problems, 503

Stories, extracting from focus group data, 245

Structured diary studies, 375–381

defined, 371

problem report diaries, 379–381

survey-structured diaries, 376–377

unstructured diary studies vs., 371

usability test diaries, 377–379

Success

advertisers' criteria for, 20–23

balancing criteria for, 28

companies' criteria for, 23–27

company departments' measures for, 58

usability as critical to, 20

users' criteria for, 18–20

Survey Monkey Web site, 325

Survey Research Methods, 305, 309, 327

Survey-structured diaries, 376–377

Surveys

accurate results from, 305

after release, 67, 68

analyzing responses, 340–357

attitudinal questions and subcategories, 308

behavioral questions and subcategories, 308

benefits and pitfalls, 70

bias reduction, 333–335

bimodal distribution, 344, 345

brainstorming questions, 307–309

characteristic questions and subcategories, 308

common problems, 354–356

common questions, 533–538

comparing variables, 345–349

competitive research, 308, 430–431

contact information, 320, 321, 322

contextual inquiry as follow-up, 357–358

counting results, 340–345

cross-tabulation, 345–349

defined, 303

described, 70

descriptive goals, 307

diary studies structured on, 376–377

drawing conclusions, 354–357

editing and ordering questions, 319–321

error checking, 325

error estimation, 350–352

example for news site, 360–366

explanatory goals, 307

fielding, 328–340

focus group as follow-up, 357

focus groups combined with, 472–473

focus groups vs., 204

follow-up qualitative research, 357

form tricks, 326–327

free Web survey tools, 325

general instructions for, 321–322

goals, 305, 307, 323

for identity design, 52

in-person, 339–340

incentive for, 328

for information architecture development, 47

invitations, 335–339

laying out the report, 323–324

mail, 339–340

mean calculation, 342–343

measurement errors, 352–354

median calculation, 344

missing data for, 345

mode calculation, 343–344

mortality tracking, 326

ongoing research, 358–360

pre/post, 359–360

profiles, 305

proportion chart, 348–349

question grid, 311–313

question instructions, 322–323

random selection JavaScript, 337–339

refined, 358

for requirement gathering, 66

response rate, 326, 335

sample and sampling frame, 328–331

sample size, 331–333

satisfaction surveys, 305

schedule for, 306–307

scheduling service example, 40, 42

sweepstakes laws and, 322

systematic error in, 353

tabulating results, 340–345

telephone, 339–340

testing, 327

timing for, 304–305

tracking surveys, 358

tracking timing of responses, 325–326

usability testing as follow-up, 358

usability testing for, 325, 327

uses for, 303–304

value surveys, 305

virtual usability tests and, 464

Web survey tips, 324–327

Web use questions, 308, 534–536

writing instructions, 321–323

writing questions, 309–319

See also writing survey questions

Sweepstakes laws, surveys and, 322




Observing the User Experience: A Practitioner's Guide to User Research
ISBN: 1558609237
Year: 2002
Pages: 144
