

3.2. Balance Security and Usability

Balance is key to all security efforts. The same phenomenon that happens with cars happens with computers. Unless you stand over them with a loaded gun, users will disable, evade, or avoid any security system that proves to be too burdensome or bothersome. Preventing your system from becoming the victim of such "empowered" users requires a combination of engineering and education. The engineering builds security systems that are safe and usable, and the education informs users about the actual risks so that they will be motivated to use your security (or at least so that they won't disable it).

How do you find the right balance? You begin by examining and exploiting, in each new situation, the differences between the two groups.

3.2.1. Exploit Differences Between Users and Bad Guys

Let's return to those "marching dots" that I mentioned in my introduction. The purpose of those dots is to protect a password from "shoulder surfing" as it is being entered. In theory, if you have a strong password that's sent over an encrypted link, shoulder surfing is the main threat that passwords face.

But the user who actually types the password is in a fundamentally different position from a potential shoulder surfer. Consider:

  • The user knows what he is typing; he is only looking for errors.

  • The user is close to the screen where the password is being entered; he can read text that is printed with relatively low contrast.

  • The user is always present.

Now, compare that with a potential attacker:

  • An eavesdropper needs to accurately reconstruct every character in the password.

  • An eavesdropper is probably several feet away from the screen, if that.

  • An eavesdropper might not even be present; the user might be entering the password in the privacy of his own home, for example, or on a desert island.

You can take advantage of these differences to produce an interface that promotes complex passwords, while still leaving the eavesdropper in the dark. We did that when we designed Tresor 2.2 (see the upcoming sidebar). Alternatively, you can design an interface that allows the user to choose the amount of security that he wants when entering a password. As shown in Figure 3-1, the designers of GNU Keyring followed this approach.

3.2.2. Exploit Differences in Physical Location

One user may be working in an airport lounge, surrounded by people who eavesdrop from either boredom or darker motivation. Another user may be working in her private study at home, with nothing but two blank walls behind her. However, our current "one size fits all" security systems tend to ignore that difference. They arise from a single assumption: the bad guy may be standing behind you this minute!

3.2.3. Vary Security with the Task

The task at hand is a vital component in security decision-making; security practitioners call this threat analysis. Different kinds of security measures are called for when protecting information in transit than when protecting information that is to be stored permanently on a hard drive. Sure, both data streams might be protected with 128-bit AES encryption. But in the

CASE STUDY: TRESOR 2.2

Tresor is a file encryption application that offers very high security. The application makes it easy to type a long "passphrase," even for users who perpetually make typographical errors. Designed by Roland Blaser and myself, Tresor 2.2's passphrase entry design replaces "marching dots" with a new metaphor: the rolling blackout.


Users in the light; eavesdropping in a blackout

Like most web browsers, Tresor 2.2 uses those cute little dots to replace the characters in the user's password. But in Tresor 2.2, the dots are slow to act: the last few characters remain visible for a few seconds as the user types, long enough for the user to catch a typo.

But how many characters should be visible? In our first user test, we had three password characters visible and discovered that this provided enough information for an eavesdropper standing a few feet away to read the password as it was typed. The effect was startling: eavesdroppers' brains "saw" more than what was before their eyes. The security of the scheme was completely compromised.

In a follow-up test, we tried revealing just one or two characters. This foiled the eavesdropper, but users were crippled as well: they frequently didn't catch their own errors.


Error correction

Error correction worked perfectly from the beginning. Pressing Delete would delete the last character and reveal one more, so that three would always be visible. Yes, the eavesdropper might now pick up a fragment of a passphrase, but we decided that such revelations (happening only when the user made a mistake) were insignificant when one considered the added security that could come with a 50- or 100-character passphrase.


Final polish

We added a few extra safety features, and we were done: the first four characters are always hidden, so eavesdroppers can't get a running start, and the Delete key's reveal times out, protecting users who wander off.
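The mechanics described in this case study (a small trailing reveal window, a wider window after Delete, a permanently hidden prefix, and a timeout) can be sketched as display logic. This is a minimal illustration in Python; the window sizes and the two-second timeout are assumptions for the sketch, not Tresor's actual values.

```python
import time

class RollingBlackout:
    """Passphrase field that reveals only the last few characters, briefly."""

    def __init__(self, reveal=2, hide_first=4, timeout=2.0, clock=time.monotonic):
        self.chars = []               # characters typed so far
        self.reveal = reveal          # trailing characters kept readable
        self.hide_first = hide_first  # opening characters are never shown
        self.timeout = timeout        # seconds until revealed text blacks out
        self.clock = clock            # injectable for testing
        self.window = reveal
        self.last_key = clock()

    def type_char(self, c):
        self.chars.append(c)
        self.window = self.reveal     # normal typing: small reveal window
        self.last_key = self.clock()

    def delete_char(self):
        if self.chars:
            self.chars.pop()
        self.window = 3               # Delete widens the reveal (case-study value)
        self.last_key = self.clock()

    def display(self):
        n = len(self.chars)
        if self.clock() - self.last_key > self.timeout:
            return "•" * n            # timed out: protect users who wander off
        shown = min(self.window, max(0, n - self.hide_first))
        return "•" * (n - shown) + "".join(self.chars[n - shown:])
```

The injectable clock is only there to make the timeout testable; a real widget would redraw on a timer.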


User control

Users are permitted to set a number of preferences, such as the length of time before the "rolling blackout" hides the typed characters, with all ranges validated through user testing. Help files teach users how to select secure passphrases and encourage them to set preferences they are comfortable with, making life difficult both for eavesdroppers and for bad guys mounting an offline attack.


Results

We were able to develop a system that encouraged the use of very long passphrases. In one speed test, a user was able to type in a passphrase of approximately 50 characters 20 times in a row without a single surviving error.


Figure 3-1. The GNU Keyring application allows the user to choose whether to veil the password (left) or show it (right); users in busy airports probably want to veil their passwords, and users working in the privacy and safety of their own home can have their passwords exposed, which makes it much easier to enter them using the Palm's Graffiti system


first case, it's appropriate to use an ephemeral encryption key that is destroyed when it is no longer used; with stored documents, you might want to provide for key escrow, secret splitting, or even a secondary encryption key so that the document's contents can be recovered in the event of a problem.
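Secret splitting, one of the recovery options just mentioned, is easy to illustrate: a storage key can be split into two random-looking shares, held by different parties, such that neither share alone reveals anything about the key. Below is a minimal two-way XOR split in Python; real deployments would more likely use a threshold scheme such as Shamir's secret sharing, which tolerates a lost share.

```python
import secrets

def split_secret(key: bytes) -> tuple:
    """Split a key into two shares; both are required to recover it."""
    share1 = secrets.token_bytes(len(key))            # uniformly random pad
    share2 = bytes(a ^ b for a, b in zip(key, share1))
    return share1, share2

def recover_secret(share1: bytes, share2: bytes) -> bytes:
    """XOR the shares back together to reconstruct the key."""
    return bytes(a ^ b for a, b in zip(share1, share2))
```

Because `share1` is uniformly random, each share on its own is statistically independent of the key.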

Likewise, different security measures and interfaces are appropriate if your intention is to protect a laptop from a competitor or a co-worker. In the first case, you might be happy with a password that needs to be entered when the computer is powered on after it has been asleep for more than a few minutes. In the second case, you probably want a password on the screensaverand a cable lock attaching the laptop to the table.

3.2.4. Increase Your Partnership with Users

Security forces and their users are at war today, with many users in open rebellion. Security administrators need to understand that users frequently write down passwords because they cannot possibly remember all of the different codes and "secret handshakes" required to get through the day. Users, for their part, need to learn to be discreet in their rule breaking: store passwords on an unlabeled page of a day planner instead of posting them on the side of the monitor.

3.2.4.1 Trust the user

Users are not the enemy; the bad guys are. Form a partnership with your users, treat them as intellectual equals, and they will respond. Adams and Sasse found that users do a much better job of implementing and following security policies when they are given cogent explanations of both the goals of the policies and the real security threats that the organization faces.[1] Of course, you don't have to actually believe that users are your intellectual equals. However, if you follow such a path, you'll be amazed at how swiftly their minds will improve.

[1] Anne Adams and M. Angela Sasse, "Users Are Not the Enemy," Communications of the ACM (Dec. 1999), 40–46; reprinted as Chapter 32, this volume.

Make reasonable compromises in your design, offer users the flexibility to conform your application or service to their current conditions, and give users the information they need to make these decisions.

3.2.4.2 Exploit the special skills of users

As one example, elderly users, while experiencing failing memory of the present, often retain strikingly vivid memories of the distant past, such as the name of their second-grade schoolmarm. Encouraging them to form passwords from such information, such as "MissMorrison2", can produce passwords they are ideally suited to remember, while confounding all but the most aggressive attacks and guesses.

Look for similar special skills among your specific user population that you can use to their advantage.

3.2.4.3 Remove or reduce the user's burden

We have systems that allow free and easy access; we have systems that provide high levels of security. Until recently, we haven't had many systems that do both well. For example:

WHY WE OVERPROTECT

For someone who has spent much of his career giving people lots of rope, I was amazed at how working on the Tresor project made me reverse roles. In less than a week, I found myself looking for new tricks, ploys, and techniques that I could use to limit the power of a user. All of these limitations, of course, were for the user's own good.

The engineer and I ended up in full role-reversal, with him as advocate for the users and me struggling to protect them, even if it killed them. I felt positively righteous about my efforts. The goal was, after all, security! It had suddenly seemed to me sacrilegious to implement anything that could even remotely help an eavesdropper, regardless of what it cost the user. I was gripped with a terrible fear that someday, some document would "get out," and it would be my fault.

When I pulled back, I stopped considering only the worst-case scenario, which for me was the user typing in passwords while standing in Grand Central Station with 12 people in trench coats hovering over her shoulder while another battery of 12 people trained high-tech cameras on her screen. I instead strove for balance between user and bad guy, and then gave the user the power to control her own use of the product.

(One might think that after slaving under the yoke of such systems for 30 years, as well as having a hand in designing more than a few of them in earlier years, I would have been a little more sympathetic toward my users. I might have been if the users weren't so stupid as to start typing in secret codes in the middle of Grand Central Station with trench-coated men floating above their shoulders!)


  • Today, you can buy a secure USB drive that requires a fingerprint to "open" it. This works great when the fingerprint reader is cooperative, but it's maddening when it isn't. Although there is potential for biometrics-based systems to be dramatically easier to use than today's password-based systems, designers shouldn't assume that replacing a password with a fingerprint reader automatically makes a system more "usable." Much work still needs to be done to make the dream of free and easy access on the part of the authorized user a reality.

  • Portable systems can be more "aware" of their environments. Are they at home, at the office, or in a restaurant or airline terminal? Knowing this, the security system can smoothly change methodologies and requirements without user intervention. (My laptop could determine that it is at home by seeing the WiFi MAC address of my home gateway, for example.) A more sophisticated approach would allow the user to choose among high, medium, and low security needs for each new environment as it is detected.
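The gateway-address idea in the last bullet can be sketched directly: look up the currently visible gateway MAC in a table of environments the user has already classified, and fall back to the most paranoid profile when the location is unrecognized. The addresses and profile names below are invented for illustration; a real system would also need to guard against MAC spoofing before trusting the answer.

```python
# Map known gateway MAC addresses to security profiles.  These MACs and
# profile names are hypothetical; a real system would let the user
# classify each new environment the first time it is detected.
KNOWN_ENVIRONMENTS = {
    "a4:2b:b0:11:22:33": "low",     # home gateway
    "00:1a:1e:44:55:66": "medium",  # office access point
}

def security_profile(gateway_mac: str) -> str:
    """Choose a profile for the current network; unknown places get 'high'."""
    return KNOWN_ENVIRONMENTS.get(gateway_mac.lower(), "high")
```

An unknown airport hotspot thus gets the strictest settings without the user doing anything.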

3.2.5. Achieve Balanced Authentication Design

Authentication came late to the personal-computer party. Before the explosion of the Internet in the 1990s, few people needed more than a single password for their email. The personal computer was primarily a tool for developing documents that would be printed for distribution. Authentication was provided by physical access: if you could touch the keyboard, you could access the documents!

Everything changed with the arrival of the Web, and it caught the security world by surprise. Practices that had worked since the 1960s suddenly failed. Why? Because the users changed. Instead of being a few trained, dedicated employees, suddenly millions of people, with no instruction whatsoever, were faced with signing up for a dazzling array of usernames and passwords. Simultaneously, the potential for attack on protected information went up astronomically.

Unfortunately, many of the solutions that were pressed into widespread use actually prevent the user's most valiant attempts to comply.

3.2.5.1 Remove unnecessary password restrictions

Web sites invariably set a lower bound for password selection, such as no fewer than four or six characters; that is a vital requirement. Some, however, go further, imposing restrictions such as prohibiting more than six characters in total or requiring digits only. This prevents people from using the secure passwords they've already committed to memory. My personal solution to this problem has been to create a database listing each site's username and password (currently 134 records). I have a shorthand for my usual password, but all the others I'm forced to create are "in the clear," typed right in there for anyone with access to my machine to see. (I hesitate to reveal such a secret, lest someone break into my house some night so that they can access my free subscription to the Podunk Shopping News.)

Few users go to the trouble of building a database. They either avoid sites that won't accept their standard password, or they register to read the one thing they want, immediately forget the password they made up, and never intend to visit again.

Many password restrictions exist because the password entered in a web site is crunched, munched, and fed into some ancient application running on an IBM mainframe or the like. The programmers who created the web interface felt they had a responsibility to be faithful to the AS/400. A better solution is to allow users to enter long or strong passwords, and then to silently drop the characters that can't be sent to the legacy system.
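The "silently drop illegal characters" approach is a few lines of code. The allowed character set and length limit below are guesses standing in for whatever the back-end system actually accepts; the crucial point is that the same filtering must run both when the password is set and when it is checked, or the two will disagree.

```python
# Characters the hypothetical legacy back end accepts; adjust to match
# the real system's rules.
LEGACY_ALLOWED = set(
    "abcdefghijklmnopqrstuvwxyz"
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "0123456789"
)

def to_legacy_password(password: str, max_len: int = 8) -> str:
    """Let users type long, strong passwords; send the legacy system only
    what it can handle.  Apply identically at enrollment and at login."""
    filtered = "".join(c for c in password if c in LEGACY_ALLOWED)
    return filtered[:max_len]
```

The user keeps one memorable passphrase everywhere; only the legacy system sees the truncated form.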

And if a password doesn't work, instead of telling the user to check his Caps Lock key, a better approach is to simply flip the case and try resubmitting. This cuts down on tech support calls without significantly impacting the security of the system as a whole.
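The case-flip retry can wrap whatever verification function already exists; `check_password` below is a hypothetical stand-in for it. The retry happens server-side, the user never sees the second attempt, and an attacker gains at most one extra guess per try.

```python
def login_with_capslock_fallback(username, password, check_password):
    """Try the password as typed; if that fails, retry with the case
    flipped, which is exactly what an accidental Caps Lock produces.
    `check_password(username, password) -> bool` is assumed to be the
    site's existing verifier."""
    if check_password(username, password):
        return True
    return check_password(username, password.swapcase())
```

Note that `str.swapcase()` inverts every letter's case, matching what Caps Lock does to a mixed-case password.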

3.2.5.2 The Doctor and password madness

My wife, the Doctor, was working over the summer at a local hospital. This hospital is fiercely into security, requiring no fewer than four sets of passwords to navigate its system. And why not? There are confidential patient records on those systems! By golly, they ought to have eight sets of passwords, and really make things secure!

But wait! It gets worse! After being on the job for six weeks, my wife had received only two of the four sets of usernames/passwords that she needed to do her job, and she had spoken to no fewer than seven people to get them. Two weeks of further extreme effort finally produced the last two sets.

What was she doing in the meantime? Instead of spending all her time repairing people, she wasted hours camping out in another doc's office, using his computer (and his passwords, thanks to the sticky notes) to do her work.

Meanwhile, the other doc, bumped from his office, would go and get an extra cup of coffee. The security system so carefully put in place had thus not only opened up your medical records to anyone schooled in the use of sticky notes, but also had the hospital pouring money down the drain in the form of lost productivity and company-supplied coffee.

Things get worse if my wife doesn't log in to a particular system every 90 days. This happens more than you might think, because my wife works at this particular hospital only during the summer and over the winter holidays. If she is gone for more than three months, the system will decide that her usernames and passwords are idle and will expire them.

It's almost as bad for full-time doctors. They get to keep their usernames, but have to select (and post on their computer monitors) new passwords every 90 days.

Expiration is the only way this security crew has been able to prevent doctors from memorizing their passwords. You might think that memorizing passwords would be a good thing. One of the official reasons to expire passwords is to limit how long an undetected attacker can use a compromised password. But this security measure is defeated easily by any attacker who can read sticky notes.

3.2.6. Balance Resource Allocation

Hospitals all over the country have been panicking because of new security regulations suddenly hitting them by surprise with no more than about six years' notice. My wife called down to Emergency a couple of days after the last law struck to ask them to fax a few pages from the record of a patient they had just sent up, but they refused. Someone could steal the fax off the machine that sits right out in the hall, with easy patient access. The previous week, that was an acceptable risk. This week, it was against the law.

While the security forces had spent years staring at their computer screens, thinking up ways to require four sets of auto-expiring usernames and passwords for all the doctors, they had failed to set up the most rudimentary physical security for either computers or fax machines. The most casual field study (a walk through the hospital, informal chats with personnel on duty) would have revealed the problem years before, giving them plenty of time to move the fax machines 5 or 10 feet into a secure area.

Balance is also a critical factor in deciding where to expend resources. Cybersecurity in the absence of physical security is useless. Any new project should be launched with a thorough field study of the people who will be using the system, the places where they will use it, and the nature of the tasks they will be accomplishing. Existing efforts should go through the same kind of field review at least annually. Look specifically for aspects that are out of balance, whether they involve technology or an unsecured fax machine in the hall.



Security and Usability: Designing Secure Systems That People Can Use
ISBN: 0596008279
Year: 2004
Pages: 295