28.2. The Five Golden Rules

It's too easy for developers to throw up their hands and say, "Users are the weakest link." In fact, the blame frequently lies with developers for not understanding how users interact with technology and for designing interfaces that facilitate mistakes. When designing interfaces that need to both serve users and protect them, a few questions come up again and again: When should the program turn to the user for input? And when it does, how should it solicit that input? This section offers answers to these questions as we have fine-tuned them during our work on Firefox.

28.2.1. Identifying "The User"

There are undoubtedly people who will take offense at some of the statements made in the following sections. It's important to realize that "the user," as discussed here, is meant to represent the intended majority of our Firefox audience: the dentists, lawyers, teachers, stay-at-home dads, and other members of society who aren't in the computer profession and don't know anything about security beyond what they read in the news. Although Firefox is becoming increasingly popular in enterprises, this chapter does not apply to corporate customers, who usually have more interest in their own security and frequently have trained IT departments with security as an explicit responsibility. Firefox offers different solutions for corporations, without harming Firefox's consumer users, often by simply putting advanced options behind a technical wall that's impenetrable to most home users.

28.2.2. 1. Enforce the Officer/Citizen Model

Society has constructed a variety of organizations to regulate and protect itself. The police maintain order and enforce the law. Insurance companies protect against unpredictable disasters. Lawyers defend personal rights and freedoms. People want to delegate their security to experts who specialize in the field and are able to guarantee it.

It's easy to forget that the same is true of their online security. The old Mozilla software is a chronicle of indecision within the walls of Netscape. Every option, confirmation window, and question to the user marks another case where two internal camps couldn't agree on the most secure way to proceed and instead deferred to the user's decision. This approach, of course, is absurd. Consider arriving home to an unsolicited package on your doorstep. You call the bomb squad, and after some analysis, the lead officer comes to brief you: "An X-ray of the package was perfectly normal, and the organization that sent it seems to be reputable, but a chemical swab of the exterior yielded some concerning figures. Gee, I dunno. Your call."

The example seems over the top until you consider the questions the average user encounters every day on the Internet. Figure 28-1 illustrates a particularly mystifying one that IE users can encounter while attempting to navigate to certain sites, many of which are in fact trustworthy. The "View Certificate" button opens a window containing what amounts to Greek to a majority of web users; in our real-world analogy, it's the equivalent of the officer providing you with the raw data output of the chemical swab.

Figure 28-1. Internet Explorer 6.0 users can encounter this baffling dialog in day-to-day surfing


Before coding up such confirmation windows, software developers (the group most notorious for encouraging them as a suitable compromise) should ask themselves a few questions: If I can't figure it out, how are users supposed to? What additional qualifications do users possess that will help them arrive at a better conclusion? What evidence is there to suggest that most users even know what a certificate is in the digital sense?

In developing Firefox, we challenge ourselves daily to do something that has proven difficult in the software industry: make decisions. We want users to know that they can go about their business on the Internet with security in the back of their minds, because it's on the front of our minds. The intent is not to be covert, or to pretend that security isn't a concern; we do want our users to know that we're making decisions on their behalf. But we want them to have confidence in our ability to make such judgments (hence our slogan, "the browser you can trust").

So, how do you discern the bombs from the fruitcakes when there's not enough information to make the call? You need to err on the side of caution, but only if the circumstances justify it. That means that if 99.9% of sites matching the profile outlined in Figure 28-1 are reputable sites that forgot to dot the i's in their certificates, you owe it to your users to just navigate to the site, especially because usability studies indicate that almost all of them will click "Yes" without reading the question.[2] On the other hand, if there's a strong likelihood that users are in danger, you owe it to them to take protective action on their behalf, again especially because they're likely to make the wrong choice ("Yes") if you ask them. In other words, if you're doing things right, you should be making the decision users would make anyway when it's probably the right one, and correcting them in the cases where they're likely making the wrong one. You've achieved the status quo or better, and you're down one annoying confirmation window.

[2] Mary Ellen Zurko, Charlie Kaufman, Katherine Spanbauer, and Chuck Bassett, "Did You Ever Have to Make Up Your Mind? What Notes Users Do When Faced with a Security Decision," 18th Annual Computer Security Applications Conference (2002).
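
To make this policy concrete, here is a minimal sketch in TypeScript. The risk estimates, thresholds, and names are all invented for illustration; this is not actual Firefox code, and any real implementation would weigh far richer signals.

    // Hypothetical sketch of the officer/citizen decision policy.
    type Verdict = "proceed" | "block" | "ask";

    // Invented input: an estimate of how likely the site is to be hostile,
    // from 0.0 (certainly benign) to 1.0 (certainly hostile).
    interface SiteAssessment {
      estimatedRisk: number;
    }

    const LOW_RISK = 0.001; // the "99.9% are reputable" case: just proceed
    const HIGH_RISK = 0.5;  // strong likelihood of danger: protect the user

    function decide(site: SiteAssessment): Verdict {
      if (site.estimatedRisk <= LOW_RISK) {
        // The user would click "Yes" anyway, and would be right; do it for them.
        return "proceed";
      }
      if (site.estimatedRisk >= HIGH_RISK) {
        // The user would likely click "Yes" here too, and be wrong; correct them.
        return "block";
      }
      // Only the genuinely ambiguous middle ground ever reaches the user.
      return "ask";
    }

    console.log(decide({ estimatedRisk: 0.0005 })); // "proceed"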

Skeptics will point out that mistakes will be made and we will have broken our contract with the user, but that's only partly true. Our promise to users is that they can trust us to do the due diligence on their behalf, and they understand that we are more likely to choose "Yes" or "No" correctly than they are. We purport to be the browser they can trust, not the browser without security problems. While we believe (and evidence suggests) that our products are more secure than our competitors' out of the box, trust is a more nuanced reward that must be won on many fronts.

The media is chiding Microsoft not because IE has so many exploits, but because IE has so many exploits as a result of prioritizing other things above security (as its manager conceded[3]), because vulnerabilities are often public for weeks before being patched, and because Microsoft is denying some security features to users who won't upgrade to Windows XP. The Microsoft attacks are motivated by a perception that the company isn't doing everything it could be to protect users, in the same way that a doctor will come under fire for malpractice if his perceived negligence injures his patient. Because we design Firefox with security in mind, proactively seek out vulnerabilities by paying experts who discover them, and generally provide 24-hour turnaround to patch them when they arise, our users do not feel cheated when exploits are discovered, and continue to trust us to make decisions on their behalf.

[3] Chor.

28.2.3. 2. Don't Overwhelm the User

Put a usability guru in a room with a developer who wants to add a dialog box, and sparks will fly. What's the big deal about asking the user for a decision, anyway?

There's nothing inherently wrong with turning to the user for input, but our own usability studies corroborate the widely held belief that doing so too often is detrimental, both to users and to developers. As discussed earlier, Firefox is built entirely on the principle that people want to use the Web, and not the web browser. Dialogs and confirmation windows, especially when they pose questions as inane as the one pictured in Figure 28-1, interrupt the user's thoughts and delay the payoff (the loading of the web site). Even before the user has a chance to begin reading the question, he is already frustrated and predisposed to dismiss it in the quickest way possible. This is not the state of mind you want your users to be in when making a security decision with potentially devastating consequences.

So, dialog abuse can harm users, but how does it harm us as developers? When users are inundated with questions that are cryptic or of little interest, they begin to lose faith in our commitment to pester them only when absolutely necessary, and they no longer find it incumbent upon themselves to read future questions, even when they have not seen them before. Consequently, we as developers lose another tool in our toolbox of ways to warn users. A particularly apt example from modern society is the U.S. Department of Homeland Security's color-coded threat level. The government bumps the national threat level up and down so frequently that studies indicate many citizens have stopped reacting to the changes. The tool no longer acts to mobilize public awareness, and thus the government must resort to even more pervasive means to grab their attention. This only frustrates and interrupts them even more, and a vicious cycle ensues.

With Firefox, we seek to break this cycle by distinguishing the problem from its symptoms. The problem is that users do not want to be bothered unless they perceive it to be necessary; that they grow weary of the interruptions and begin to ignore them is just a symptom, and should not be perceived as an invitation to interrupt them more forcefully. Instead of introducing dialogs that animate, change color, use bold text, or otherwise try to one-up the traditional dialog and capture the user's attention, we acknowledge our users' preference to surf the Web without a shadow, and try to make decisions on their behalf in line with the officer/citizen model described earlier. There is growing evidence that the software industry is moving in this direction. For example, antivirus programs that once (ridiculously) notified the user of a virus and asked whether to delete it now simply delete the virus and notify the user of the action taken.

Of course, it would be disingenuous to suggest this as the be-all and end-all solution to the great dialog problem. There are circumstances that mandate user input and can't be avoided. In Firefox, for example, a user who encounters a web site that is trying to install software (such as extensions) must decide whether to permit the installation. To enforce an indiscriminate ban on software installation would be to marginalize one of Firefox's greatest advantages and destroy an ecosystem that has subsisted since the project's inception. On the other hand, allowing all software installation would put our users at great risk.

The dialog problem isn't binary, though. Just because we can't avoid asking the user a question doesn't mean a dialog is the only venue to pose the question. Wherever possible, we try to replace dialogs with UI widgets that are less jarring to the user experience while still maintaining the better features of a dialog from a usability perspective: easy to notice and difficult to ignore. In the case of software installation, for example, we replaced the dialog asking users whether to install the software with a transient toolbar of sorts (Figure 28-2). The toolbar appears automatically upon installation attempt, and it is distracting only to the extent that the page shifts slightly to allocate space for it. This has proven to be a good compromise both from a pure usability perspective and in the context of security. As far as usability is concerned, our users report that the toolbar is appreciably less intrusive than a dialog appearing when the page loads, and say they like being able to install the software at their leisure instead of being forced to decide immediately. This explains the security win as well. Because the interface respects the user's workflow in not demanding a decision, users are not inclined to dismiss it instinctively.

Figure 28-2. Firefox 1.0 displays this nonintrusive toolbar when an unknown site tries to install software
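
The pattern behind the toolbar is easy to describe in code. Here is a minimal TypeScript sketch; the class and method names are invented for this example and do not reflect Firefox's actual implementation.

    // Hypothetical non-modal notification bar for software installation.
    interface InstallRequest {
      siteOrigin: string;
      packageUrl: string;
    }

    class NotificationBar {
      private pending: InstallRequest | null = null;

      // Called when a site attempts an installation. Instead of a modal
      // dialog that blocks the page, record the request and slide in a bar.
      show(request: InstallRequest): void {
        this.pending = request;
        console.log(
          `${request.siteOrigin} wants to install software. ` +
            `[Allow] [Dismiss] (the page keeps loading underneath)`
        );
      }

      // The user can act whenever they like; nothing forces an immediate choice.
      allow(install: (req: InstallRequest) => void): void {
        if (this.pending) {
          install(this.pending);
          this.pending = null;
        }
      }

      dismiss(): void {
        this.pending = null; // leaving it be is always safe
      }
    }

    // Usage: the bar appears on the install attempt; the user decides at leisure.
    const bar = new NotificationBar();
    bar.show({ siteOrigin: "update.mozilla.org", packageUrl: "ext.xpi" });
    bar.allow((req) => console.log(`Installing ${req.packageUrl}...`));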


Spyware is a major problem for computer users today: a 2004 study conducted by the ISP Earthlink found that the average computer user has 28 spyware programs installed on his computer.[4] We believe this is because most versions of Internet Explorer require the user to allow or reject software installation immediately upon visiting a site, prompting the user to just hit OK in frustration. Our interface is there if they want to install the software, and if not, they can just leave it be. As of Windows XP Service Pack 2, Internet Explorer uses a similar toolbar, but again, this benefits only Windows XP users. Our challenge in the future will be to resist the temptation to use this toolbar for other, less important purposes, because it would then incur the same user apathy as the dialog.

[4] Earthlink, Inc., "Inaugural Report Charts 28 Spies per Computer" [cited Jan. 2005]; http://www.earthlink.net/spyaudit/press/.

Dialogs still have their place in end-user software, and indeed Firefox has plenty. While usability studies tend to agree on the scenarios outlined thus far, there is evidence that users are more receptive to dialogs that directly result from their own deliberate actions. In other words, most users probably wouldn't perceive a dialog that appears on page load to be a consequence of their choosing to navigate to that page. But a dialog that appears upon clicking a button is more reasonable and more likely to be read if it seems to exist to help the user achieve his goal.

Although they often don't realize it, many developers add security controls and dialogs because they want the user to know the steps they've taken to keep the user safe. There is, after all, something disheartening about spending all day implementing an unbreakable encryption scheme only to end up with a product that is, as far as most users are concerned, exactly the same as yesterday's. There's just one problem with developers wanting to show off their accomplishments: most users simply don't care.

Still, there's something to the idea that users want occasional reassurance of their safety. The key is to provide it in a way that doesn't annoy them. People want to see the uniformed security at the airport checkpoints, but they don't want to be singled out for a strip search. You can count on your marketing department to play up security for you, but it's still important to have some feedback built into the product itself. Just make sure you don't overload users with details they don't care about, and make sure the information doesn't interrupt their actually using the product.

28.2.4. 3. Earn Your Users' Trust

Perhaps the most compelling impetus to use dialogs in a product is liability. Dialogs force the user to decide so that the company doesn't have to, a sleight of hand that the in-house lawyers appreciate later when it turns out that the user's decision was wrong.

The problem is that while your users may know next to nothing about security or computers, they know plenty about corporate dishonesty. They are aware of and disgusted by the fine print, immune to the psychology of "$39.99," and wary of privacy policies the size of novels. They are intuitively aware of when a company is furthering its own goals at their expense, and the moment they suspect this of you and your product, you have ceded their loyalty to your competitor.

Google is a perfect (if overused) case study of a company that ostensibly has valued its user trust above all else. Google has decided that cultivating its brand image over time will eventually yield greater revenues than instant-revenue schemes, and the result is a front page that's as pristine and free of advertising as the day the service was launched. With its stated policy of "don't be evil" and its congenial public image, it seems easy for people to forget that, at the end of the day, Google is a multibillion-dollar business.

So, how does this apply to security? In conversations with our users, we've learned two things. The first is that most users simply don't understand complicated security questions, as already discussed. But the second is that some mentally categorize such questions in the same way they do the fine print on advertising: as intentionally obfuscated warnings that companies can point to later and say "Look, we warned you!" if it becomes necessary. People recognize the deceptiveness of fine print, and whether they perceive a company as a group of human beings or a faceless monolith depends in part on whether the company recognizes it, too.

Without delving too much into human psychology, our basic premise in designing Firefox is that most users really don't want to be responsible for their own security, an idea that again invokes the officer/citizen model discussed earlier. The thought of having to fend for yourself in a Web that has been deemed "harmful" and "corrupt" by the press is daunting and overwhelming. Our users expect us, as Internet security experts, to make decisions that keep them safe, instead of passing the buck to them and washing our hands if they mess up.

This probably sounds a bit pie-in-the-sky to lawyers: sure, products would be nice and usable if we could just leave the user alone forever, but what happens when we make the wrong security decision on their behalf? Given the heat Microsoft is taking for its security problems in IE, this is very much a concern of ours with Firefox, and we certainly haven't excised all security dialogs. But one thing we've learned is that if users come to perceive your company as a group of human beings, as discussed earlier, then they are much more willing to tolerate your mistakes. The key is handling such mistakes properly, as discussed in the following section.

28.2.5. 4. Put Out Fires Quickly and Responsibly

The release of Firefox sparked a mass exodus of Microsoft Internet Explorer users, driving IE's market share below 90% for the first time in years.

We believe that the exodus from IE was driven as much by disillusionment with Microsoft as by the heavily publicized security exploits in the company's browser. The security problems are largely viewed as a symptom of a more dire problem: Microsoft just doesn't seem concerned about its users. Whereas Google has built a reputation for intimately caring about customers, Microsoft has created an image of apathy by leaving security exploits unpatched for weeks after they have been announced, and by telling users of its older operating systems that they need to upgrade. Users can accept security exploits if it's clear that you work diligently to prevent them and then work diligently to respond to them when they do arise. Likewise, provided that the police work hard to deter criminals, the public isn't outraged when a crime does occur, unless the police completely botch the investigation afterward.

Many companies seem to forget that announcing a security problem is only half the battle: what the company does next matters just as much. When a security bug is found in Firefox, there's no time for a sigh and a postmortem. The public eye is upon us, watching us closely to see how we respond. The Mozilla security team generally strives for a 24-hour turnaround on security patches, a policy that has been lauded by users and the media alike. It's important to recognize that at this point, the principles of usability extend beyond your product to your web site. Can users find your security update easily? Does it seem like you are being publicly accountable, or are you trying to brush it under the rug, burying it in a remote section of your web site? Remember: the user is expecting corporate tricks.

Admittedly, security response is an area where Firefox lagged behind for much of its history. Security information used to be difficult to find on the Mozilla web site, and only recently did we introduce a built-in update system that automatically notifies Firefox users of security patches and installs them. The usability of such a system is critical: if it's too aggressive in trying to persuade users to get patches (as critics of Windows XP's system have complained), users will ignore it for the same reasons they ignore persistent security dialogs. On the other hand, if it's too passive, users won't get the critical updates.
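
One way to walk that line is to pace the reminders rather than repeat them at a fixed drumbeat. The following TypeScript sketch is purely illustrative; the intervals are invented, and this is not Mozilla's actual update scheduler.

    // Hypothetical pacing for update reminders.
    const REMINDER_INTERVALS_HOURS = [1, 8, 24, 72];

    function hoursUntilNextReminder(dismissals: number): number {
      // Each dismissal backs off further, so the reminder never becomes the
      // kind of nagging users learn to ignore; yet it never stops entirely,
      // so a critical patch is never silently forgotten.
      const i = Math.min(dismissals, REMINDER_INTERVALS_HOURS.length - 1);
      return REMINDER_INTERVALS_HOURS[i];
    }

    console.log(hoursUntilNextReminder(0)); // 1
    console.log(hoursUntilNextReminder(5)); // 72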

And if you've read this far, this should just be a reminder: users really don't care that you're patching exploit #25F104A in the handling of XML remote procedure calls. Give them the option to view details about the update, but don't overwhelm them with information in the initial screen. They just want to be safe.

28.2.6. 5. Teach Your Users Simple Tricks

Many of the reasons given thus far to explain why products offer security dialogs and other controls have been rather cynical: because they are easy to code, because they are the path of least resistance between two warring camps, and because they reduce a company's liability. To be fair, many well-intentioned developers add dialogs or other interface text for a more noble purpose: to educate users and make them better citizens in your product's community (in Firefox's case, the Web). Alas, most people are more interested in using a product to get stuff done than in learning security trivia.

Distinguished companies like eBay and Citibank learned this lesson the hard way in their fight against phishers. Many scam artists direct users to a fraudulent web site that appears to be legitimate because the artist is using a sophisticated technique to mask the web address. For years, the corporate victims of such attacks tried to educate their users about the intricacies of the Internet URL scheme so that they could discern the masked addresses from the real ones. This was largely a failure, and recently companies have resorted to developing and deploying software that automatically recognizes phishing sites for users, a strategy that is proving more effective. eBay, for example, rolled out its Web Caller ID toolbar and enjoyed a resulting decline in successful phishing attacks.
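
One address-masking trick common at the time embedded a trusted-looking hostname in the userinfo portion of the URL, as in "http://www.ebay.com@evil.example/". A single heuristic against that trick can be sketched in a few lines of TypeScript; real anti-phishing tools combine many such signals, and the function here is our illustration, not eBay's code.

    // Hypothetical heuristic: a hostname-shaped username in a URL is a
    // strong sign that the address is masquerading as another site.
    function looksMasked(rawUrl: string): boolean {
      try {
        const url = new URL(rawUrl);
        // Legitimate retail sites have no reason to put credentials in the
        // address; "www.ebay.com" appearing as a username is a red flag.
        return url.username.includes(".");
      } catch {
        return true; // unparseable addresses are suspicious by default
      }
    }

    console.log(looksMasked("http://www.ebay.com@evil.example/signin")); // true
    console.log(looksMasked("http://www.ebay.com/"));                    // false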

So, what went wrong? Users, it seems, are willing to learn only as much as they need to know to get the job done using the product, and nothing more. They are willing to learn that the odd convention of wrapping "CNN" in "www." and ".com" gets them to the CNN web site, but they don't know or care what magic makes that happen. And despite the best efforts of many different organizations to popularize the term "URL," most of our users still have no idea what that acronym means. Accordingly, we don't use it anywhere in our interface. While it would certainly make our lives easier if our users were well versed in Internet technologies, it's not their job to know such trivia, nor is it our job to lecture them about it.

The fact that most of our users are baffled by the Web's navigation presents a number of challenging usability and security concerns. For example, the "s" in "https" indicates that the given site uses SSL to encrypt its transmissions, but many users don't know that and won't ever learn it. How do we let them know when they are at a secure site without burdening them with such detail or disturbing them with a dialog? Other browsers have historically solved this problem by displaying a small lock icon in the status bar on secure pages. That solution is on the right track, but it has two problems: many users don't notice it, and those that do don't make the connection between the icon and the page. In Firefox, our solution is to remove all doubt and make it as dead simple as possible: we turn the entire address bar a bright shade of yellow at secure sites. It's impossible to miss; the connection with the page is clear because it highlights the page address; and it's obvious what it means because it's punctuated by a large lock.
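
Stripped to its essentials, the signal amounts to a few lines. The following TypeScript sketch assumes a hypothetical AddressBar object; the real Firefox chrome is built quite differently.

    // Hypothetical chrome object standing in for the real address bar.
    interface AddressBar {
      background: string;
      lockVisible: boolean;
    }

    function updateSecurityIndicator(bar: AddressBar, pageUrl: string): void {
      const secure = new URL(pageUrl).protocol === "https:";
      bar.background = secure ? "#ffffcc" : "#ffffff"; // bright yellow when secure
      bar.lockVisible = secure; // the lock sits inside the bar it describes
    }

    const addressBar: AddressBar = { background: "#ffffff", lockVisible: false };
    updateSecurityIndicator(addressBar, "https://www.example.com/");
    console.log(addressBar); // { background: "#ffffcc", lockVisible: true }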

Such simplicity is crucial in trying to make something as complex and varied as security a usable aspect of your product. Whenever you feel compelled to educate users about your support for 128-bit encryption or TLS 1.0, ask yourself: will users be able to use the product without this knowledge? If the answer is yes, odds are that they won't bother reading what you have to say.

That sounds blasé, but it's a very frustrating problem. There are many severe security problems in Firefox that we could fix easily if we could educate our users about certain concepts. For example, a serious problem we're currently battling is spoofing at the user interface level: sites spawn a new browser window that replicates the entire browser interface, with a fake address bar that displays a legitimate web address like "http://www.ebay.com/". The window loads a spoofed eBay page in a frame, so for all intents and purposes, it looks exactly as if the user were at the real eBay site. This is all possible due to a common browser feature that allows sites to hide the standard browser UI from windows they open (which is useful for web applications that want to provide their own interface).

Our original solution was to disallow sites from hiding the address bar, so that even if they replicated the entire interface with a fake address bar, our real address bar (with the real address) would show as well. But although this was foolproof deterrence, it proved completely useless, because users perceived the duplicate address bar as a Firefox glitch and still believed they were at the correct site. In response, one Mozilla developer came up with another brilliant, foolproof idea: on each launch of Firefox, paint the Firefox interface with a nonintrusive, randomly generated pattern. Because sites wouldn't be able to replicate this pattern, users would know when they were viewing spoofed UI. Again, though this approach would make any developer drool with delight, it is just not a concept that most of our users would grasp, and so it fails to be a viable solution.

In a sense, our security battles are not just against malicious hackers, but against our own users. Both have proven stubborn foes.
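
For the curious, the per-launch pattern idea reduces to deriving some visible chrome detail from a secret chosen at startup. The sketch below is our TypeScript illustration of the concept, which, as noted, never shipped.

    import { randomBytes } from "crypto";

    // Chosen once per browser launch; a spoofing page cannot know it.
    const sessionSecret = randomBytes(3);

    function sessionChromeTint(): string {
      // Derive a visible chrome detail from the secret. A real design would
      // use a subtle texture or pattern; a hex color keeps the sketch short.
      return "#" + sessionSecret.toString("hex");
    }

    console.log(`This session's chrome tint: ${sessionChromeTint()}`);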


