2.1. Introduction

The need for people to protect themselves and their assets is as old as humankind. Peoples' physical safety and their possessions have always been at risk from deliberate attack or accidental damage. The increasing use of information technology means that individuals and organizations today have an ever-growing range of physical (equipment) and electronic (data) assets that are at risk. To meet the increasing demand for security, the IT industry has developed a plethora of security mechanisms that can be used to make attacks significantly more difficult or to mitigate their consequences.

A series of surveys has shown that, despite ever-increasing spending on security products, the number of businesses suffering security breaches is increasing rapidly. According to the United Kingdom's Department of Trade and Industry's Information Security Breaches Surveys,[1] 32% of UK businesses surveyed in 1998 suffered a security incident, rising to 44% in 2000 and 74% in 2002, and reaching a massive 94% in 2004. The 2004 CSI/FBI Computer Crime and Security Survey[2] reports that U.S. companies spend between $100 and $500 per employee per annum on security. But purchasing and deploying security products does not automatically lead to improved security. Many users do not bother with security mechanisms, such as virus checkers or email encryption, or do not use them correctly. Security products are often ineffective because users do not behave in the way necessary for those products to work. For example, users disclose their passwords, fail to encrypt confidential messages, and switch virus checkers off. Why? Because most users today:

[1] Department of Trade and Industry, Information Security Breaches Survey (2004); http://www.security-survey.gov.uk/.

[2] Ninth Annual CSI/FBI Survey on Computer Crime and Security (2004); http://www.gocsi.com/.

  • Have problems using security tools correctly (for an example, see the classic paper on PGP by Whitten and Tygar, reprinted in Chapter 34 of this volume)

  • Do not understand the importance of data, software, and systems for their organization

  • Do not believe that the assets are at risk (i.e., that they would be attacked)

  • Do not understand that their behavior puts assets at risk

Whitten and Tygar have identified a "weakest link property," stating that attackers need to exploit only a single error. Frequently, human frailty provides this error: humans are invariably described as the "weakest link" in the security chain. But until recently, the human factor in security has been neglected both by developers of security technology and by those responsible for organizational security. Kevin Mitnick[3] points out that to date, attackers have paid more attention to the human element in security than security designers have, and they have managed to exploit this advantage prodigiously.

[3] Kevin D. Mitnick and William L. Simon, The Art of Deception: Controlling the Human Element of Security (New York: John Wiley & Sons Inc., 2003).

The aim of this chapter is to show how human factors knowledge and user-centered design principles can be employed to design secure systems that are workable in practice and prevent users from being the "weakest link." We view secure systems as socio-technical systems; thus, we are not just concerned with improving the usability of security mechanisms for individual users: our aim is to improve the effectiveness of security, and reduce the human and financial cost of operating it.

Security of any socio-technical system is the result of three distinct elements: product, process, and panorama.


Product

What do current security policies and mechanisms require from the different stakeholders? Is the physical and mental workload that a mechanism requires from individual users acceptable? Is the behavior required from users acceptable? What is the cost of operating a specific mechanism for the organization, in both human and financial terms?

Currently, we have a relatively small set of general security mechanisms, and general policies that mandate user behavior when operating those mechanisms. Usable security requires a wider range of security policies and mechanisms, which can be configured to match the security requirements and capabilities of different users and different organizations.


Process

How are security decisions made? Currently, security is seen to be the responsibility of security experts. During the system development process, security is frequently treated as a nonfunctional requirement, and is not addressed until functionality has been developed. We argue that the organization's security requirements need to be determined at the beginning of the design process, and that the development of security mechanisms should be an integral part of design and development of the system, rather than being "added on." When deciding on a security mechanism, the implications for individual users (workload, behavior, workflow) need to be considered.

Usability and security are often seen as competing design goals. But in practice, security mechanisms have to be usable to be effective: mechanisms that are not employed in practice, or that are used incorrectly, provide little or no protection. To identify appropriate policies and usable mechanisms, all stakeholders have to be represented in the process of designing and reviewing secure systems.


Panorama

What is the context in which security is operated? Even a very usable security mechanism is likely to create extra work from the users' point of view. It is human nature to look for shortcuts and workarounds, especially when users do not understand why their behavior compromises security. User education and training have a role to play, but changing individuals' behavior requires motivation and persuasion, especially when the users' own assets are not at risk. A positive security culture, based on a shared understanding of the importance of security for the organization's business, is the key to achieving desired behavior. Effective and usable security requires effort beyond the design of user interfaces to security tools, specifically:

  • Security training in operating security mechanisms correctly. Effective security training goes beyond instruction: it includes monitoring and feedback. Monitoring of security performance is an ongoing activity; action needs to be taken when policies are breached. For instance, mechanisms that are too difficult to operate need to be redesigned, or sanctions need to be carried out against users who refuse to comply.

  • Security education. Users' motivation to comply is based on understanding why their behavior can put organizational assets at risk. Education needs to instill personal and collective responsibility in users, but also in security designers, administrators, and decision makers.

  • Political, ethical, legal, and economic constraints surrounding the system. Currently, decision-making on security is largely driven by technical considerations (e.g., security mechanisms are selected according to technical performance). However, other requirements may conflict with or override technical performance.

In this chapter, we explore each of these points in more depth. As stated at the outset, our aim is to broaden the view of what is involved in building usable secure systems. We cannot yet offer a blueprint for building such systems, but we identify relevant knowledge and techniques that will help designers build systems that are secure in practice.



Security and Usability: Designing Secure Systems That People Can Use
ISBN: 0596008279
Year: 2004
Pages: 295
