Credibility and Computers


“Are computers credible?” That’s a question I like to ask students in my Stanford classes. It invariably generates a lively debate.

There’s no easy answer, but the question is an important one. When it comes to believing information sources—including computers—credibility matters. Credible sources have the ability to change opinions, attitudes, and behaviors, to motivate and persuade. In contrast, when credibility is low, the potential to influence also is low. [1]

Throughout most of the brief history of computing, people have held computers in high esteem[2]—a view that is reflected in popular culture. Over the past several decades, computers often have been portrayed as infallible sidekicks in the service of humanity, from Robby the Robot in Forbidden Planet, the 1956 movie classic, to B-9, the robot in the 1960s television program Lost in Space, to R2-D2 in Star Wars. [3]

In the consumer realm, computer-based information and services have been marketed as better, more reliable, and more credible sources than humans. Marketers assured the buying public that if a computer said it or produced it, then it was believable.

Due in part to the emergence of the Internet and the proliferation of less-than-credible Web sites, the cultural view of computers as highly credible sources has been seriously challenged. (Web credibility, which deserves special attention, is the subject of Chapter 7.) As consumers become more skeptical, it’s important for designers of persuasive technology to understand the components of credibility, the contexts in which credibility matters, and the forms and dynamics of credibility—the focus of this chapter.

[1] C. Hovland and W. Weiss, The influence of source credibility on communication effectiveness, Public Opinion Quarterly, 15: 635–650 (1951).

[2] For more information on how people have viewed computers, see the following:

a. T. B. Sheridan, T. Vamos, and S. Aida, Adapting automation to man, culture and society, Automatica, 19(6): 605–612 (1983).

b. L. W. Andrews and T. B. Gutkin, The effects of human versus computer authorship on consumers’ perceptions of psychological reports, Computers in Human Behavior, 7: 311–317 (1991).

[3] For more on the subject, see J. J. Dijkstra, W. B. G. Liebrand, and E. Timminga, Persuasiveness of expert systems, Behaviour and Information Technology, 17(3): 155–163 (1998). Of course, computers also have been depicted as evil—most notably, the computer HAL in 2001: A Space Odyssey—but even those portrayals suggested computers are credible.

What Is Credibility?

Scenario 1: A man wearing a suit knocks at your door. His face is familiar, and he says, “You’ve won our sweepstakes!” He hands you a big check, and the TV cameras are rolling. Outside your house, three reporters compete for your attention.

Scenario 2: You receive a letter in the mail, sent using a bulk mail stamp. The letter inside says, “You’ve won our sweepstakes!” The letter has your name spelled incorrectly, and you notice the signature at the bottom is not an original.

Credibility is a perceived quality that has two dimensions: trustworthiness and expertise.

Even though the overt message in both scenarios is exactly the same (“You’ve won our sweepstakes!”), the elements of Scenario 1—a personal contact, a famous face, media attention, and even the cliché oversized check—make the message believable. In contrast, under the second scenario, you’d probably trash the letter without giving it a second thought. It’s not credible.

A Simple Definition

Simply put, “credibility” can be defined as believability. In fact, some languages use the same word for these two English terms. [4] The word credible comes from the Latin credere, to believe. In my research I’ve found that “believability” is a good synonym for “credibility” in virtually all cases.

The academic literature on credibility dates back five decades, arising primarily from the fields of psychology and communication. As a result of research in these areas, scholars agree that credibility is a perceived quality; it doesn’t reside in an object, a person, or a piece of information. [5]

In some ways, credibility is like beauty: it’s in the eye of the beholder. You can’t touch, see, or hear credibility; it exists only when you make an evaluation of a person, object, or piece of information. But credibility isn’t completely arbitrary. Much like agreement in evaluating beauty, people often agree when evaluating a source’s credibility.

Some studies suggest there may be a dozen or more elements that contribute to credibility evaluations. [6] However, most researchers and psychologists agree that there are just two key dimensions of credibility: trustworthiness and expertise (Figure 6.1). People evaluate these two elements, then combine them to develop an overall assessment of credibility. [7]

Figure 6.1: The two key dimensions of credibility.


Trustworthiness is a key factor in the credibility equation. The trustworthiness dimension of credibility captures the perceived goodness or morality of the source. Rhetoricians in ancient Greece used the term ethos to describe this concept. In the context of computers, a computer that is “trustworthy” is one that is perceived to be truthful, fair, and unbiased.

People in certain professions, such as judges, physicians, priests, and referees, are generally perceived to be trustworthy. These individuals have a professional duty to be truthful, unbiased, and fair. If it’s perceived that they are not trustworthy, they lose credibility. (The controversy over judging of pairs figure skating at the 2002 Winter Olympics offers a good example. So does the Enron accounting debacle, which called the credibility of accountants into question.)

Principle of Trustworthiness

Computing technology that is viewed as trustworthy (truthful, fair, and unbiased) will have increased powers of persuasion.

What leads to perceptions of trustworthiness? Research doesn’t provide concrete guidelines, but a few key points seem clear. First and most obvious, the perception that a source is fair and unbiased will contribute to trustworthiness. [8] That’s one reason we have independent audits, turn to the opinions of respected third parties, and conduct double-blind studies.

Next, sources that argue against their own interest are perceived as being credible. [9] If a UPS representative told you FedEx is faster (or vice versa), you would probably consider this a credible opinion, since the rep ostensibly would have nothing to gain (and something to lose) by telling you that a competitor is more efficient. In general, the apparent honesty of sources makes them highly credible and therefore more influential.

Finally, perceived similarity leads to perceived trustworthiness. [10] People tend to think other people (or other computers, as we discovered in the Stanford Similarity Study) are more trustworthy when they are similar to themselves in background, language, opinions, or in other ways. As noted in Chapter 5, the similarities don’t even have to be significant to be effective.

Principle of Expertise

Computing technology that is viewed as incorporating expertise (knowledge, experience, and competence) will have increased powers of persuasion.


The second dimension of credibility is expertise—the perceived knowledge, skill, and experience of the source. Many cues lead to perceptions of expertise. Among them are labels that proclaim one an expert (such as the title “professor” or “doctor”), appearance cues (such as a white lab coat), and documentation of accomplishments (such as an award for excellent performance). In general, a source that is considered an expert on a given topic will be viewed as more credible than one that is not.

Combinations of Trustworthiness and Expertise

Trustworthiness and expertise don’t necessarily go hand in hand. A car mechanic may have the expertise to know exactly what’s wrong with your car, but if he has a reputation for charging for unneeded repairs, he’s not trustworthy and therefore is not perceived as credible.

Similarly, there can be trustworthiness without expertise. A friend might suggest you try acupuncture for your back pain, although she only read about it. Your friend’s good intentions probably would not be enough to persuade you to pursue the ancient tradition because she lacks the credibility of an expert.

The most credible sources are those perceived to have high levels of trustworthiness and expertise.

Given that both trustworthiness and expertise lead to credibility perceptions, the most credible sources are those that have high levels of trustworthiness and expertise—the car mechanic who is also your brother, the close friend who has spent years practicing Eastern medicine.

The same is true for computing products. The most credible computing products are those perceived to have high levels of trustworthiness and high levels of expertise.

If one dimension of credibility is strong while the other dimension is unknown, the computing product still may be perceived as credible, due to the “halo effect” described in Chapter 5 (if one virtue is evident, another virtue may be assumed, rightly or wrongly). However, if one dimension is known to be weak, credibility suffers, regardless of the other dimension. If a computerized hotel advisor contained more information than any other system in the world, you’d rightfully assume the system is an expert. However, if you learned that the system was controlled by a single hotel chain—an indication of possible bias— you might question the trustworthiness of any hotel suggestion the system offers.

Credibility versus Trust

In the academic and professional literature, authors sometimes use the terms credibility and trust imprecisely and interchangeably. Although the two terms are related, trust and credibility are not synonyms. Trust indicates a positive belief about the perceived reliability of, dependability of, and confidence in a person, object, or process. [11] If you were planning to bungee jump off a bridge, you’d need to have trust in your bungee cord. Credibility wouldn’t apply.

People often use the word trust in certain phrases when they really are referring to credibility (e.g., “trust in the information” and “trust in the advice”[12]). When you read about “trust” and computers, keep in mind that the author may be referring either to dependability or to credibility.

One way to avoid confusion: when you see the word trust applied to technology, replace it first with the word dependability and then with the word believability, and see which meaning works in that context. In my lab, we have an even better solution: we simply never use the word trust. We’ve settled on words that have more precise meanings to us, such as entrustable, dependable, and credible.

[4] In Spanish, for example, the word creíble means both “believable” and “credible.”

[5] C. I. Hovland, I. L. Janis, and H. H. Kelley, Communication and Persuasion (New Haven, CT: Yale University Press, 1953).

[6] P. Meyer, Defining and measuring credibility of newspapers: Developing an index, Journalism Quarterly, 65: 567–574 (1988). See also C. Gaziano and K. McGrath, Measuring the concept of credibility, Journalism Quarterly, 63: 451–462 (1986).

[7] C. S. Self, Credibility, in M. Salwen and D. Stacks (eds.), An Integrated Approach to Communication Theory and Research (Mahwah, NJ: Lawrence Erlbaum, 1996).

[8] For more on how being fair and unbiased contributes to perceived credibility, see C. S. Self, Credibility, in M. Salwen and D. Stacks (eds.), An Integrated Approach to Communication Theory and Research (Mahwah, NJ: Lawrence Erlbaum, 1996).

[9] E. Walster, E. Aronson, and D. Abrahams, On increasing the persuasiveness of a low prestige communicator, Journal of Experimental Social Psychology, 2: 325–342 (1966).

[10] For a discussion of the effects of similarity on trustworthiness and, consequently, on credibility, see J. B. Stiff, Persuasive Communication (New York: Guilford Press, 1994).

[11] For more about how to define “trust,” see the following:

a. J. K. Rempel, J. G. Holmes, and M. P. Zanna, Trust in close relationships, Journal of Personality and Social Psychology, 49 (1): 95–112 (1985).

b. J. B. Rotter, Interpersonal trust, trustworthiness, and gullibility, American Psychologist, 35 (1): 1–7 (1980).

[12] For examples of phrases that are synonymous with the idea of credibility, see the following:

a. B. H. Kantowitz, R. J. Hanowski, and S. C. Kantowitz, Driver acceptance of unreliable traffic information in familiar and unfamiliar settings, Human Factors, 39 (2): 164–176 (1997).

b. B. M. Muir and N. Moray, Trust in automation: Part II, Experimental studies of trust and human intervention in a process control simulation, Ergonomics, 39(3): 429–460 (1996).

When Credibility Matters in Human-Computer Interaction

In some cases, it doesn’t matter whether or not a computing device is perceived as being credible. [13] In many situations, though, credibility does matter; it helps to determine whether or not the technology has the potential to persuade. I propose that there are seven contexts in which credibility is essential in human-computer interactions.

Credibility Matters When Computers

  1. Instruct or advise users
  2. Report measurements
  3. Provide information and analysis
  4. Report on work performed
  5. Report about their own state
  6. Run simulations
  7. Render virtual environments

If a computing technology operating in one of these seven contexts is not perceived as credible, it likely will not be persuasive. Suppose a computer system reports measurements, such as air quality in a “take the bus” initiative or body fat percentage in a weight control system. If the measurements are credible, the system will be more likely to influence. If they are not credible, they’re not likely to persuade people to take the bus or motivate them to lose weight.

These seven contexts, discussed below, are not mutually exclusive. A complex computing product, such as an aviation navigation system, may incorporate elements from various categories—presenting information about weather conditions, measuring airspeed, rendering a visual simulation, and reporting the state of the onboard computer system.

Instructing or Advising

Credibility matters when computers give advice or provide instructions to users. If the instruction or advice is poor or biased, the computer will lose credibility. For instance, several search engines have been criticized for sorting systems that are driven by advertising revenues rather than relevancy. [14] Their credibility has been called into question.

In some cases, it’s clear that a computer is giving instructions or advice, such as when an in-car navigation system gives advice about which route to take. If the directions are faulty, the system will lose credibility. [15]

But it’s not always obvious when a computing product is giving instructions or advice. Think of default buttons on dialog boxes. The fact that one option is automatically selected as the default suggests that certain paths are more likely or profitable. This is a subtle form of advice. If the default options are poorly chosen, the computer program could lose credibility because the dialog boxes, in essence, offer bad advice.

In some cases, the loss of credibility can threaten the marketability of a product. Chauncey Wilson, a colleague who is director of the Design and Usability Testing Center of Bentley College in Waltham, Massachusetts, tells the story of working on a team to develop a new software product. An early alpha version of the product went out with a dialog box that asked users if they wanted to delete tables from a critical database. The project team started getting calls from some early adopters who reported that tables were mysteriously vanishing. The team tracked the problem to a poorly chosen default option. When asking users if they wanted to delete a table, the system offered “Yes” as the default. From past experience with other software, users had become accustomed to choosing the default option as a safe choice. The fix took only a few minutes, but this minor coding mistake cost the product credibility with early adopters, who then were somewhat reluctant to use the beta version.
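The fix in Wilson’s story amounts to making the safe, non-destructive option the default. A minimal, hypothetical sketch of that design rule (the function name, prompt text, and behavior are illustrative, not taken from the product described above):

```python
# Illustrative sketch: a confirmation prompt for a destructive action
# whose default is the safe choice. Users who habitually accept the
# default (by just pressing Enter) keep their data.

def confirm_destructive(prompt: str, answer: str) -> bool:
    """Return True only if the user explicitly answers yes.

    An empty answer -- the user pressed Enter, accepting the default --
    counts as "no," the safe, non-destructive choice.
    """
    normalized = answer.strip().lower()
    return normalized in ("y", "yes")

# Pressing Enter without typing anything does NOT delete the table.
assert confirm_destructive("Delete table 'orders'? [y/N] ", "") is False
assert confirm_destructive("Delete table 'orders'? [y/N] ", "yes") is True
```

The [y/N] convention in the prompt signals the default visually: the capitalized option is what pressing Enter will do, so the habit of accepting defaults stays harmless.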

Reporting Measurements

Imagine how users would respond to the following:

  • A GPS device that reported the user was somewhere in Arizona when she clearly was in Oregon.
  • A heart rate monitor that indicated the user’s heart was beating 10 times per minute.
  • A UV ray monitor that reported a person’s sun exposure to be very low, even as she could feel and see that she was getting a severe sunburn.
  • A Web-based typing tutor that reports a typist’s speed as more than 500 words per minute.

As these examples make clear, credibility is key when computing products report measurements. If reported measurements are questionable or obviously inaccurate, the products will lose credibility. If the product were designed to influence or motivate, it likely would fail because of the inaccurate measurements it had reported.

Providing Information and Analysis

A friend of mine is an avid golfer. If she has a round of golf scheduled for a Monday afternoon and the weather looks questionable Monday morning, she’ll turn to an online weather service she’s bookmarked to get hourly updates on local weather conditions. But over time, she’s lost faith in the system, which too often shows a sun icon when the sky is dark with clouds, or rain when the sun is peeking through. She likes the hourly updates, but she no longer views them as entirely credible.

Credibility matters when computers provide data or information to users. Whether a technology product provides investment information, reports on local weather conditions, or does comparison shopping to find the lowest airfare for your next business trip, if the information is not accurate, the product will not be credible.

If a computing product offers dynamic information, tailored to users in real time, not only is the credibility of the information at stake, so is the method used to tailor the information. Amazon and a host of successful e-commerce sites analyze users’ purchase histories and draw on those analyses to suggest other products that users may want to buy. The credibility of such systems depends on how the information is analyzed to develop recommendations. Such systems are far from perfect. (Amazon recently recommended that a friend purchase a lightweight gardening book because she had previously purchased The Botany of Desire—a philosophical treatise on how plants might view humans.)

Another example is a goal-setting Web site. The site offers to help users set and achieve their goals, from remodeling their home to finding a new job. The system coaches users in setting specific goals and milestones, drawing on information from experts in relevant domains. This expert knowledge is accessible on demand, and the success of the site hinges on users believing that the information provided is credible. While the system uses automated reminders and other interactive features, the aspect that relates to credibility is the expert knowledge stored in the system.

Reporting on Work Performed

A colleague of mine uses popular antivirus software. He’s diligent about downloading updated virus definitions twice a month. In downloading the updates, the system asks which files he wants to update, including program files as well as virus definitions. He checks only the definitions, then clicks to proceed. The downloading and updating begins.

When the system has finished updating and installing his virus definitions, it gives him the following message: “You chose not to install any 1 of the available update(s).” This message apparently refers to other updated files available, not the definitions files. But the message always makes my colleague worry that somehow the definitions didn’t get downloaded. He checks the date of the virus definition list installed, just to be sure. It always seems to be correct, reflecting the most recent update. But the confusing message makes my colleague question the credibility of the program.

As this anecdote illustrates, if the report on work performed does not match the actual outcome, the credibility of a product may be questioned. In some cases, the product’s survival may be jeopardized, as the following example shows.

In the late 1990s, a now defunct company was a leader in creating lasers for eye surgeries to improve vision. The company’s sophisticated, expensive laser surgery machine lost credibility because it would, at times, print out incorrect reports about the procedure it had just performed. (This mistake was limited to a special set of circumstances: if the patient was undergoing a double toric optical correction, the device would report whatever was done on the first eye for both eyes, rather than giving the real report for each eye.) Although the machine would carry out the surgical procedure (fortunately) according to the surgeon’s specifications, the report the machine gave about the surgery it had just performed would be incorrect. [16]

Although this reporting error did not change the clinical outcome for the patients, it’s understandable that ophthalmologists would not want to risk their reputation or their patients’ vision by using a product that was known to be flawed. Ultimately, the manufacturer took the product off the market. Clearly, credibility matters when computers report on work performed.

Reporting on Their Own State

Similarly, credibility is at issue when computers report on their own state: how much disk space they have left, how long their batteries will last, how long a process will take. You would assume that a computer should be able to report about itself accurately, but as many frustrated PC users will testify, this is not always the case. If a computer indicates that no printer is attached when one is, or that it must shut down a program to conserve space when you have only one program running, you may question how much the computer knows about itself—or anything else, for that matter. Any future reporting from the computer will be less believable.

For example, Figure 6.2 shows the message a user received when trying to edit a large file in Microsoft Notepad. In this example, the user was able to open the file but received the error message upon trying to edit it. The message itself is false. The problem is not the size of the computer’s memory—you would get the same message if you closed all other applications—but the fact that Notepad can’t deal with files larger than 32,000 bytes. [17]

Figure 6.2: This error message incorrectly reports the status of computer memory.
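A more trustworthy message would report the editor’s own limit rather than blaming system memory. A hypothetical sketch of the difference (the 32,000-byte figure comes from the Notepad example above; the function and message text are illustrative, not Notepad’s actual code):

```python
# Illustrative sketch: an editor with a fixed file-size limit should
# report the real constraint, not a misleading "not enough memory" claim.

MAX_EDITABLE_BYTES = 32_000  # the editor's own limit, per the Notepad example

def open_for_editing(size_in_bytes: int) -> str:
    """Report accurately why a file can't be edited, if it can't."""
    if size_in_bytes > MAX_EDITABLE_BYTES:
        # Accurate self-report: name the editor's limit, not the
        # computer's memory, which is not the problem.
        return (f"This editor cannot edit files larger than "
                f"{MAX_EDITABLE_BYTES} bytes (file is {size_in_bytes} bytes).")
    return "OK"

assert open_for_editing(10_000) == "OK"
assert "32000" in open_for_editing(50_000)
```

The point is not the code itself but the reporting rule it encodes: when a product describes its own state, the stated reason must match the actual constraint, or credibility erodes with every error message.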

Running Simulations

Credibility also is important when computers run simulations, a topic discussed in Chapter 4. Computers can simulate everything from chemical processes and the progress of a disease in a population to aircraft navigation, nuclear disasters, and the effects of global warming. For simulations to be persuasive, they must be credible.

If users perceive that a computer simulation designed to convey a real-world experience doesn’t closely match reality, the application won’t be credible. An expert surgeon using a computer simulation to teach surgical procedures would notice where a silicon-based simulation doesn’t match flesh-and-blood reality. If the technology diverged too far from the real experience, the computer product will lose credibility in the eyes of the surgeon.

Rendering Virtual Environments

Virtual environments must be credible as well if they are to persuade. A credible virtual environment is one that matches the user’s expectations or experiences. Often this means making the virtual environment model the real world as closely as possible—at least for issues that matter. In some cases, though, virtual environments don’t need to match the physical world; they simply need to model what they propose to model. Like good fiction or art, a virtual world for a fantasy arcade game can be highly credible if the world is internally consistent. It may not match anything in the real world, but if the virtual world seems to follow a consistent set of rules, the digital reality may appear credible to users. If it is inconsistent, it will not be credible.

[13] Exceptions include when users are not aware of the computer (e.g., an automobile fuel injection system); don’t recognize the possibility of computer bias or incompetence (e.g., using a pocket calculator); don’t have an investment in the interaction (e.g., surfing the Web to pass the time); and when the computer acts only as a transmittal device (e.g., videoconferencing).

[14] See, for example, “Google unveils new program for pay-per-click text ads,” The Wall Street Journal, February 20, 2002.

[15] For a study on user reactions to a navigation system that provided incorrect directions, see R. J. Hanowski, S. C. Kantowitz, and B. H. Kantowitz, Driver acceptance of unreliable route guidance information, Proceedings of the Human Factors Society 38th Annual Meeting (1994), pp. 1062–1066.

[16] I learned about the problem with this machine from my brother, an ophthalmologist, and confirmed the problem by talking with a specialist at the company that acquired the manufacturer (after the product had been taken off the market).

[17] From the “Interface Hall of Shame” section of the Isys Information Architects site.

Four Types of Credibility

Within each of the seven contexts of credibility outlined above, different types of credibility may come into play. Although psychologists have outlined the main factors that contribute to credibility—perceptions of trustworthiness and expertise—no research has identified various types of credibility. This is surprising, considering that credibility plays such a large role in everyday life as well as in computing products. For other common dynamics, such as “friendship,” there are various flavors: best friends, old friends, acquaintances, and more.

I will attempt to fill this research gap by proposing a taxonomy of credibility. I believe that four types of credibility—presumed, reputed, surface, and earned— are relevant to computing products. The overall assessment of computer credibility may hinge on a single type, but the assessment can draw on elements of all four categories simultaneously (Table 6.1).

Table 6.1: Credibility of Computing Products

Type of credibility    Basis for believability
Presumed               General assumptions in the mind of the perceiver
Surface                Simple inspection or initial firsthand experience
Reputed                Third-party endorsements, reports, or referrals
Earned                 Firsthand experience that extends over time

Presumed Credibility

“Presumed credibility” can be defined as the extent to which a person believes someone or something because of general assumptions in the person’s mind. People usually assume that their friends tell the truth, so they presume their friends are credible. People typically assume that physicians are good sources of medical information, so they are credible. In contrast, many people assume car salespeople may not always tell the truth; they lack credibility. Of course, the negative view of car salespeople is a stereotype, but that’s the essence of presumed credibility: assumptions and stereotypes contribute to credibility perceptions.

Principle of Presumed Credibility

People approach computing technology with a preconceived notion about credibility, based on general assumptions about what is and is not believable.

When it comes to computing technology, at least until recently, people have tended to assume that computers are credible. [18] Computers have been described in the academic literature as

  • “Magical” [19]
  • Having an “‘aura’ of objectivity” [20]
  • Having a “scientific mystique” [21]
  • Having “superior wisdom” [22]
  • “Faultless” [23]

In short, researchers have proposed that people generally are in “awe” of computers and that people “assign more credibility” to computers than to humans. [24] This provides an advantage to designers of persuasive technology, as people may be predisposed to believe that these products are credible.

As noted earlier, with the emergence of the Internet and the widely varying credibility of Web sites, this traditional view of computers may be changing. In the future, designers of persuasive technology may have to work harder to persuade users that their products are credible.

Surface Credibility

“Surface credibility” is derived from simple inspection. People make credibility judgments of this type almost every day, forming an initial judgment about credibility based on first impressions of surface traits, from a person’s looks to his or her dress. The same holds true for computing products. A desktop software application may appear credible because of its visual design. The solid feel of a handheld device can make people perceive it as credible. A Web site that reports it was updated today will have more credibility than one that was updated last year. Users assess the credibility of computing products based on a quick inspection of such surface traits.

In some contexts, the surface credibility of a computing product is critical because it may be the only chance to win over a user. Think about how people surf the Web. Because there are so many Web pages to choose from, and there may not be clear guidelines on which pages are “best,” it’s likely that Web surfers seeking information will quickly leave sites that lack surface credibility. They may not even be aware of what caused their negative view of the site’s surface credibility. Was it the visual design? The tone of the text? The domain name? Many factors can enter into these instant credibility assessments.

A study at my Stanford research lab has demonstrated the key role that surface credibility can play. As part of the lab’s research in 2002 on Web credibility (a topic discussed in more detail in Chapter 7), we asked 112 people to evaluate the credibility of 10 health-related Web sites. [26] We were mainly seeking people’s qualitative assessments of what made these health Web sites credible or lacking in credibility.

Among the sites we chose for this particular study, participants ranked one site as the most credible and Thrive Online as the least credible (Figure 6.3). Some of their comments about the sites reflect how surface credibility works.

Figure 6.3: Study participants perceived Thrive Online to have the lowest level of surface credibility among the 10 health sites in the study.

After viewing the Thrive Online site, participants generally had negative comments, some of which related to surface credibility:

  • “Pop-health look and feel, like one of those covers at the Safeway magazine rack”
  • “Too cartoony”
  • “Has ads right at top so makes me think it’s not committed to the topic”
  • “Seems kind of flashy”
  • “Too many ads”
  • “Online greeting cards don’t seem very health-oriented”
  • “A lite health site”

In contrast to Thrive Online, the most credible site received positive comments relating to surface credibility, including

  • “Very professional looking”
  • “Laid out in a very matter-of-fact manner”
  • “It looks like it’s intended for doctors and researchers”
  • “Addresses important issues”
  • “Lack of marketing copy makes it more credible”
  • “Gov[ernment] affiliation makes it credible”
  • “Site owners don’t have ulterior motives for presenting the information”

The cues that shape perceptions of surface credibility are not the same for everyone. They differ according to user, culture, situation, or target application.

Principle of Surface Credibility

People make initial assessments of the credibility of computing technology based on firsthand inspection of surface traits like layout and density of ads.

After renting a car in San Diego, I went over to the kiosk that provides computerized directions. The kiosk seemed outdated to me, lacking the latest interface elements and hardware. I hesitated before using it; I almost chose another source of information: the rental agency employees. For other customers, the kiosk may have appeared new and therefore more credible. (Notice how presumed credibility also comes into play. My assumption: old computing products are less credible than new ones. In another setting—say, in a developing country—I might have viewed the kiosk as the best available technology and therefore highly credible.) Fortunately, the kiosk I used in San Diego gave me just the information I needed to drive to my destination. But I was a bit skeptical, I’ll admit.

My research at Stanford has shown that computing products are likely to be perceived as credible when they are aesthetically pleasing to users, confirm their positive expectations, or show signs of being powerful. But a comprehensive formula for surface credibility has yet to be developed.[27]

Reputed Credibility

Reputed credibility can be defined as the extent to which a person believes someone or something because of what third parties—people, media, or institutions—have reported. These third-party reports may come in the form of endorsements, reports, awards, or referrals. Reputed credibility plays a big role in human interactions. Prestigious awards, endorsements, or official titles granted by third parties make people appear more credible.

The reputed credibility effect also holds true for computing products. If an objective third party publishes a positive report on a product, the product gains credibility.

On the Web, reputed credibility is common. A link from one Web site to another may be perceived as an endorsement, which can increase perceived credibility. In addition, a site’s credibility can be bolstered if the site receives an award, especially if it’s a recognized award such as a Webby.[29]

Principle of Reputed Credibility

Third-party endorsements, especially from respected sources, boost perceptions of credibility of computing technology.

In the future, we will likely see computer agents[30] that endorse one another.[31] For instance, a computer agent that searches online for travel deals that match my interests and budget may refer me to another agent, one that can give restaurant suggestions for the locations where I’m planning to travel. The restaurant agent, in this case, benefits from enhanced credibility because of the endorsement. Agent endorsement may become an important and influential form of reputed credibility, especially if the agent that makes the recommendation has a good track record.

Earned Credibility

If your tax accountant has shown herself to be competent and fair over many years, she will have a high level of credibility with you. This earned credibility is perhaps the most powerful form of credibility. It derives from people’s interactions with others over an extended period of time.

Earned credibility can apply to interactions with computing products as well. If an ATM reported an unexpectedly low balance in a man’s bank account, he might change his weekend vacation plans rather than question the credibility of the machine, especially if he has a long history of getting accurate information from the device. If a runner used a heart rate monitor for two years and its measures always matched her own manual count of her heartbeats, the monitor would have a high level of earned credibility in her eyes. She would believe almost any measure it offered, within reason.

Principle of Earned Credibility

Credibility can be strengthened over time if computing technology performs consistently in accordance with the user’s expectations.

Earned credibility strengthens over time. But sometimes the opposite also is true: extended firsthand experience can lead to a decline in credibility. A traveler using an information kiosk may eventually discover that it provides information only for restaurants that have paid a fee. This pay-for-listing arrangement may only become apparent over time, as the person becomes more familiar with the service. In that case, the credibility of the service may decline rather than increase with extended use.

Earned credibility is the gold standard, both in human-human interactions and in human-computer interactions. It is the most solid form of credibility, leading to an attitude that may not be easily changed (although in some cases, one misstep can instantly destroy credibility, as in the example of the laser surgery machine described earlier). Creating products that will earn rather than lose credibility over time should be a primary goal of designers of persuasive technologies.

The four types of computer credibility are not mutually exclusive; they represent different perspectives in viewing elements of computer credibility. And they can overlap. For example, presumed credibility, which is based on assumptions, also plays a role in surface credibility, which is based in part on making quick judgments, which in turn can be based on underlying assumptions about credibility.

[18] For researchers’ conclusions about presumed credibility, see the following:

a. B. M. Muir and N. Moray, Trust in automation: Part II, Experimental studies of trust and human intervention in a process control simulation, Ergonomics, 39(3): 429–460 (1996).

b. Y. Waern and R. Ramberg, People’s perception of human and computer advice, Computers in Human Behavior, 12(1): 17–27 (1996).

[19] J. A. Bauhs and N. J. Cooke, Is knowing more really better? Effects of system development information in human-expert system interactions, CHI 94 Companion (New York: ACM, 1994), pp. 99–100.

[20] L. W. Andrews and T. B. Gutkin, The effects of human versus computer authorship on consumers’ perceptions of psychological reports, Computers in Human Behavior, 7: 311–317 (1991).

[21] L. W. Andrews and T. B. Gutkin, The effects of human versus computer authorship on consumers’ perceptions of psychological reports, Computers in Human Behavior, 7: 311–317 (1991).

[22] T. B. Sheridan, T. Vamos, and S. Aida, Adapting automation to man, culture and society, Automatica, 19(6): 605–612 (1983).

[23] T. B. Sheridan, T. Vamos, and S. Aida, Adapting automation to man, culture and society, Automatica, 19(6): 605–612 (1983).

[24] L. W. Andrews and T. B. Gutkin, The effects of human versus computer authorship on consumers’ perceptions of psychological reports, Computers in Human Behavior, 7: 311–317 (1991).

[26] Our Web credibility study is described in more detail in Chapter 7.

[27] I believe the closest thing to a formula for surface credibility stems from my Stanford lab’s work. See, for example, B. J. Fogg and H. Tseng, The elements of computer credibility, Proceedings of ACM CHI 99 Conference on Human Factors in Computing Systems (New York: ACM Press, 1999), vol. 1, pp. 80–87.

[29] Webby awards are presented annually by the International Academy of Digital Arts and Sciences to acknowledge “the best of the Web both in quality and in quantity.” I am a judge for Web sites in the Science category. For more information, visit

[30] I use “agent” in the same sense that Kurzweil defines the term: “An intelligent agent (or simply an agent) is a program that gathers information or performs some other service independently and on a regular schedule.” Source:

[31] Nikos Karacapilidis and Pavlos Moraïtis, Intelligent agents for an artificial market system, Proceedings of the Fifth International Conference on Autonomous Agents (New York: ACM Press, 2001), pp. 592–599. See also C. Wagner and E. Turban, Are intelligent e-commerce agents partners or predators? Communications of the ACM, 45(5) (2002).

Dynamics of Computer Credibility

Credibility perceptions are not fixed; they can strengthen or weaken over time. How is credibility gained over time? How is it lost? And how can it be regained? A small body of research examines these questions and provides some limited answers. Specifically, research confirms what seems obvious: computers gain credibility when they provide information that users find correct, and they lose credibility when they provide information that users find incorrect.[32]

Credibility perceptions can strengthen or weaken over time, but once lost, credibility may be hard to regain.

If the treadmill at your gym reports that your heart rate is just 60 beats per minute when you’re puffing and panting after running two miles, you’d be less inclined to believe other information from the machine: maybe you didn’t really cover two miles; perhaps you didn’t run at 8 miles per hour after all. If you believe one piece of information is in error, you will be less likely to believe other information the machine offers.

Another factor in perceptions of credibility is the magnitude of errors, and how much an error matters depends on the context of use. In some contexts, computer users are more forgiving than in others.

In a study of automobile navigation systems, error rates as high as 30% did not cause users to dismiss the onboard system.[33] Stated differently, even when the system gave incorrect directions 30% of the time, people still consulted it for help in arriving at their destinations, probably because they didn’t have a better alternative. In this context, getting correct information 70% of the time is better than having no information at all.

In other situations, a small error from a computing product may have devastating effects on perceptions of credibility. Again, my earlier example of the defective reporting of a laser surgery machine illustrates this point.

As these examples suggest, it’s not the size but the significance of the error that has the greatest impact on credibility. Most studies show that small but significant errors from computers have disproportionately large effects on perceptions of credibility.[34] But even simple, seemingly insignificant mistakes, such as typographical errors in a dialogue box or a Web page, can damage credibility.

Once a computing product loses credibility, it may be possible to regain some credibility by one of two means. First, the product can win back credibility by providing accurate information over an extended period of time.[35] A blood pressure monitor that gives an inaccurate reading at one point may regain credibility if the next 20 readings seem accurate.

Principle of (Near) Perfection

Computing technology will be more persuasive if it never (or rarely) commits what users perceive as errors.

The other pathway to regaining some credibility is to make the same error repeatedly (if it is not a critical error). In such cases, users may learn to anticipate and compensate for the error,[36] and the computer wins credibility points just for being consistent. Every time I use the word “bungee,” my spellchecker says it’s not in its dictionary. But I’ve come to expect this now, and the spellchecker doesn’t lose any additional credibility for suggesting I’ve spelled the word incorrectly. (I also could add the correct spelling to the program’s “custom dictionary,” which would further compensate for the spellchecker’s error.)

Although there are two paths to regaining credibility, in many cases the point is moot. Once people perceive that a computing product lacks credibility, they may stop using it, which provides no opportunity for the product to regain credibility through either path.[37] If a laser surgery system makes an error, it’s doubtful that an ophthalmologist would give it a second chance.

[32] To read more about how computers gain or lose credibility, see the following:

a. B. H. Kantowitz, R. J. Hanowski, and S. C. Kantowitz, Driver acceptance of unreliable traffic information in familiar and unfamiliar settings, Human Factors, 39(2): 164–176 (1997).

b. J. Lee, The dynamics of trust in a supervisory control simulation, Proceedings of the Human Factors Society 35th Annual Meeting (1991), pp. 1228–1232.

c. B. M. Muir and N. Moray, Trust in automation: Part II, Experimental studies of trust and human intervention in a process control simulation, Ergonomics, 39(3): 429–460 (1996).

[33] B. H. Kantowitz, R. J. Hanowski, and S. C. Kantowitz, Driver acceptance of unreliable traffic information in familiar and unfamiliar settings, Human Factors, 39(2): 164–176 (1997).

[34] For more on the disproportionate credibility cost of small errors, see the following:

a. B. H. Kantowitz, R. J. Hanowski, and S. C. Kantowitz, Driver acceptance of unreliable traffic information in familiar and unfamiliar settings, Human Factors, 39(2): 164–176 (1997).

b. J. Lee, The dynamics of trust in a supervisory control simulation, Proceedings of the Human Factors Society 35th Annual Meeting (1991), pp. 1228–1232.

c. B. M. Muir and N. Moray, Trust in automation: Part II, Experimental studies of trust and human intervention in a process control simulation, Ergonomics, 39(3): 429–460 (1996).

[35] B. H. Kantowitz, R. J. Hanowski, and S. C. Kantowitz, Driver acceptance of unreliable traffic information in familiar and unfamiliar settings, Human Factors, 39(2): 164–176 (1997).

[36] B. M. Muir and N. Moray, Trust in automation: Part II, Experimental studies of trust and human intervention in a process control simulation, Ergonomics, 39(3): 429–460 (1996).

[37] B. M. Muir and N. Moray, Trust in automation: Part II, Experimental studies of trust and human intervention in a process control simulation, Ergonomics, 39(3): 429–460 (1996).

Errors in Credibility Evaluations

In a perfect world, humans would never make errors in assessing credibility—but they do. These mistakes fall into two categories: gullibility and incredulity (Table 6.2).

Table 6.2: Errors in Credibility Evaluations

                            User perceives             User perceives
                            product as credible        product as not credible

Product is credible         Appropriate acceptance     Incredulity error

Product is not credible     Gullibility error          Appropriate rejection
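The four outcomes in Table 6.2 can be expressed as a small decision function. This sketch is simply an illustration of the table’s logic, not code from any study; the function name and labels are my own.

```python
def evaluate_credibility_judgment(product_is_credible: bool,
                                  user_perceives_credible: bool) -> str:
    """Classify a user's credibility judgment per the 2x2 in Table 6.2."""
    if product_is_credible and user_perceives_credible:
        return "appropriate acceptance"
    if product_is_credible and not user_perceives_credible:
        return "incredulity error"   # rejecting accurate output
    if not product_is_credible and user_perceives_credible:
        return "gullibility error"   # accepting inaccurate output
    return "appropriate rejection"
```

For example, a user who trusts a body-fat reading of 4% from a miscalibrated device lands in the gullibility cell: `evaluate_credibility_judgment(False, True)` returns `"gullibility error"`.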

If a body-fat measuring device reports that your body fat is 4%, it’s probably not accurate unless you are a world-class athlete spending most of your time training. If you accept the 4% figure as factual, you’re probably committing the gullibility error. People commit this error when they perceive a computing product to be credible, even though it is not.

At the opposite extreme is the “incredulity error.”[38] People—often experienced computer users—commit this error when they reject information from a computer, even though the computer’s output is accurate. Sometimes when I seek the lowest fares on the Internet, I don’t believe what I find at the first travel Web site I consult. I go to another travel Web site and check again. Almost always, I find the same fares for the dates I want to travel. For some reason, I don’t completely believe the first site, even though I find out later it gave me the best information possible.

The gullibility error has received a great deal of attention. Those in education—especially librarians—have set out to teach information seekers to use credibility cues, such as the authority of content authors and frequency of site updating, when searching for online information. [39 ]

The incredulity error has not been given equal attention. People seldom advocate being less skeptical of computer technology. As a result, the burden for boosting the credibility of computing products seems to rest with the creators of these products.

To minimize the incredulity error, designers should strive not to give users any additional reasons, beyond their preconceived notions, to reject the information their products provide. They can do this in several ways, such as highlighting the aspects of their products that relate to trustworthiness and expertise—the key components of credibility—and focusing on the credibility perceptions they can impact. For example, while designers don’t have much control over presumed credibility, they may be able to affect surface and earned credibility.

[38] For concepts related to the incredulity error, see the following:

a. J. Lee, The dynamics of trust in a supervisory control simulation, Proceedings of the Human Factors Society 35th Annual Meeting (1991), pp. 1228–1232.

b. T. B. Sheridan, T. Vamos, and S. Aida, Adapting automation to man, culture and society, Automatica, 19(6): 605–612 (1983).

[39] To see an example of how librarians have taken an active role in helping people determine the credibility of online information, see

Appropriate Credibility Perceptions

A key challenge for developers of computing products, then, is to reduce incredulity errors without increasing gullibility errors. The goal is to create computing products that convey appropriate levels of credibility—that is, products that make their performance levels clear. This may be too lofty a goal, since companies that create computing products are unlikely to disparage what they bring to market. It doesn’t make good business sense to undermine your own product.

Or does it?

In some cases a computing product that exposes its own shortcomings may be a winner in the long run. You’ve probably been in situations where people have done something similar: a taxi driver who says he can’t quite remember how to get to your destination, a sales representative who confides that she makes a bigger commission if she closes your deal today, or a professor who says she’s not sure about the answer. In all three cases, the overall credibility of the person is likely to go up in your estimation. Paradoxically, admitting a small shortcoming gives a person greater credibility. [1]

No research has been done to determine if this same dynamic applies to computers, but I suspect it does (as long as the “shortcoming” isn’t a fundamental flaw in the software). Consider a fitness device that calculates the number of calories burned in a single workout session. Today such devices give an exact number, such as 149 calories. Those with a grasp of physiology know this precise number is almost certain to be incorrect. What if the device instead suggested a plausible range of calories burned, such as “140 to 160 calories”? This would show that the product is designed to report information as accurately as possible. As a result, it may appear more credible than a machine that reports an exact figure that is likely to be false.
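The range-reporting idea can be sketched in a few lines of code. This is a hypothetical illustration, not from any actual fitness device; the ±5% relative uncertainty and the round-outward-to-tens rule are assumptions I chose so that a point estimate of 149 calories yields the “140 to 160 calories” range described above.

```python
import math

def calorie_range(estimate: float, rel_uncertainty: float = 0.05) -> str:
    """Report a plausible range instead of a falsely precise point estimate.

    The +/-5% relative uncertainty is an assumed figure for illustration.
    """
    low = estimate * (1 - rel_uncertainty)
    high = estimate * (1 + rel_uncertainty)
    # Round outward to the nearest 10 calories so the range stays honest.
    low = math.floor(low / 10) * 10
    high = math.ceil(high / 10) * 10
    return f"{low} to {high} calories"

print(calorie_range(149))  # prints "140 to 160 calories"
```

The design choice here mirrors the argument in the text: widening and coarsening the output signals that the device reports only what it can actually support, which may make it appear more credible than one that reports a single, likely wrong, number.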

[1] The classic psychology study on how credibility increases when people reveal information that works against them, such as revealing a weakness or a bias, is E. Walster, E. Aronson, and D. Abrahams, On increasing the persuasiveness of a low prestige communicator, Journal of Experimental Social Psychology, 2: 325–342 (1966).

A more recent publication explains how the credibility-boosting dynamic works: When premessage expectations are disconfirmed (as in the case of a person or a software product admitting a bias or a shortcoming), the receiver of the message (in this case, the user) perceives the sender (in this case, the software) as unbiased (discussed in Stiff 1994, p. 96), which is a key contributor to credibility. For more discussion on this concept, see J. B. Stiff, Persuasive Communication (New York: Guilford, 1994).

More directly related to technology and credibility, a study of hate speech on Web sites showed that sites that reveal their biases appear more rational, even if their views are extreme. See M. McDonald, Cyberhate: Extending persuasive techniques of low credibility sources to the World Wide Web, in D. Schumann and E. Thorson (eds.), Advertising and the World Wide Web (Mahwah, NJ: Lawrence Erlbaum, 1999), pp. 149–157.

The Future of Computer Credibility

The credibility of computing products should be a growing concern for designers. Thanks in part to notable cases of misinformation on the Web, such as the case of the 20-year-old hacker who changed the quotes in Yahoo’s news stories,[2] people seem less inclined to believe information from computing products. Computers are losing their aura, their mystique, their presumed credibility. But this might be a good thing. Ideally, computing products of the future will be perceived to have appropriate levels of credibility, and they will be, in the end, appropriately persuasive.

As computers become ubiquitous, the question “Are computers credible?” will become even more difficult to address. Increasingly, computing technology will be too diverse for a single answer. As our ability to evaluate credibility matures, we’ll examine computer credibility according to specific functions and contexts.

A reasonable approach is to design for and evaluate computer credibility in each of the seven contexts outlined in this chapter—tutoring, reporting measurements, and so on. It also will be useful to distinguish among the four categories of credibility—presumed, reputed, surface, and earned. As designers begin to understand and differentiate among these contexts and categories, they will be taking a big step forward in designing credible computing products.

For updates on the topics presented in this chapter, visit


Persuasive Technology: Using Computers to Change What We Think and Do (Interactive Technologies)
ISBN: 1558606432
Year: 2005
Pages: 103
Author: B.J. Fogg