Software applications often have a social context. The features a software supplier includes in an application or in infrastructure software often have implications for people other than its own customers. Conversely, others' actions may affect an application and its users. This is an economic externality: the actions of others can profoundly influence the operators and users of an application, often without compensatory payments. Two specific issues in this category are security and privacy, and both raise significant management challenges. In section 5.4, a security example is used to illustrate some management challenges in distributed applications.
Ensuring good security—preventing cracking where possible, and cleaning up in its wake when it does occur—is a major function in operation (see section 3.2.8). Crackers can operate inside or outside an organization, but the problem has certainly become more acute with the connection of most computer systems to the public Internet, opening them up to attack from anywhere in the world. While security efforts may focus on crackers coming from the Internet, it is important to realize that attacks may originate within an organization as well: outside contractors or even employees in good standing have opportunities to access, destroy, or change information without authorization.
Security technologies, operational vigilance and measures, and laws and law enforcement all work in concert as essential components of an overall security system (see figure 5.2). The party most directly affected by security lapses is the user, but the user is largely dependent on the other three players—software suppliers, operators of the application and infrastructure, and law enforcement—to prevent and contain losses. Users also bear responsibility, for example, for maintaining the secrecy of their passwords. Security requires a systems approach, and its effectiveness tends to be only as strong as its weakest link. Often that weakest link is the user: even with training, it cannot be assumed that individual users will always take proper care. Thus, the management of the end-user organization, not just the software creators and operators, must be sensitive to and act on these concerns (Keliher 1980).
Figure 5.2: Dependence of the user on other players in an overall security system.
Some cracking activities depend on security defects in application or infrastructure software. Software offering tighter security provides greater value but also increases development costs and typically reduces usability.
Example Many large organizations manage mission-critical information, such as customer and employee lists, inventory, and orders, using a database management system (DBMS). Major vendors include IBM, Informix, Microsoft, Oracle, and Sybase. The DBMS is infrastructure software that provides structured storage and retrieval of information. Because many users are sensitive to the opportunities for theft and vandalism of this crucial information, commercial vendors compete partly on security features. For example, a DBMS can capture authorization lists of who may access or modify which information and enforce access policies based on these lists.
Many security issues arise in all aspects of an application.
Example It is common for information to be communicated across a network, sometimes among applications, and in that context there are at least four major security threats (see table 5.4).
Threat: The delivery of information is prevented.
Example: An employee precludes his manager from submitting a poor performance evaluation.
Threat: An unauthorized party views the information.
Example: An employee improperly views the performance evaluation of another employee as a manager submits it.
Threat: The information is modified before it reaches the recipient.
Example: An employee modifies her performance evaluation to remove unfavorable comments before it is submitted.
Threat: False information is created by an unauthorized source.
Example: An employee submits a false (and presumably positive) performance evaluation, pretending it came from his manager.
Source: Stallings (1999).
The composition of applications raises security issues as well. This makes improved security a natural aim of infrastructure software developers and equipment manufacturers as well as application software creators.
Example A firewall is specialized equipment installed on all connections between a protected enclave (like a company's internal network) and the public network, and it protects applications within the enclave or operating through the firewall from certain attacks. The virtual private network (VPN) is offered by some networking service providers, or it can be provisioned by an end-user organization using software added at the network boundaries (usually within firewalls) that provides secure transfer of information between sites. When provisioned as a service, it often includes other features like guaranteed performance characteristics.
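The packet-filtering core of a firewall can be sketched as an ordered rule list with first-match semantics. The addresses, ports, and rules below are invented for illustration, not drawn from any particular product:

```python
# Toy packet filter: first matching rule wins; the final rule denies by default.
# Rule fields are illustrative; real firewalls match on many more attributes.

RULES = [
    # (source-address prefix, destination port or None for any, action)
    ("10.0.", None, "allow"),  # traffic originating inside the enclave
    ("",      80,   "allow"),  # anyone may reach the public web server
    ("",      None, "deny"),   # default: block everything else
]

def filter_packet(src_ip: str, dst_port: int) -> str:
    """Return 'allow' or 'deny' for a packet, using first-match semantics."""
    for prefix, port, action in RULES:
        if src_ip.startswith(prefix) and (port is None or port == dst_port):
            return action
    return "deny"  # unreachable given the catch-all rule; kept for safety

print(filter_packet("10.0.3.7", 22))     # internal host -> allow
print(filter_packet("203.0.113.9", 80))  # outside host to web port -> allow
print(filter_packet("203.0.113.9", 22))  # outside host to other port -> deny
```

Ordering matters: placing the catch-all deny rule last implements the common "that which is not explicitly permitted is forbidden" stance.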
Where cracking depends on a security defect or weakness in software, the larger the market share for that software, the greater the collective vulnerability. Thus, greater diversity in deployed software can improve security. On the other hand, widely used software may economically justify more maintenance and testing resources and receive more attention from benign hacking activities, reducing its vulnerability.
Organizations systematize and control security by defining security policies, which specify what actions should and should not be allowed.
Example Access control illustrates policy-driven security. For some protected resource like a database or a protected enclave, access control establishes a list of user identities and associates with each identity attributes of access, such as no access, full access, access to examine but not change, and so forth. Alternatively, access control may be based on roles (manager, administrator, clerk) rather than on individual user identities. Security mechanisms built into the software enforce these policies (see section 5.4).
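The example above can be sketched in code as a minimal access check combining identity-based and role-based policies. All users, roles, and permissions here are invented for illustration:

```python
# Sketch of policy-driven access control, combining per-identity and
# per-role policies. Names and permissions are illustrative only.

IDENTITY_POLICY = {
    "alice": {"read", "write"},  # full access
    "bob":   {"read"},           # may examine but not change
}

ROLE_POLICY = {
    "manager": {"read", "write"},
    "clerk":   {"read"},
}

USER_ROLES = {"carol": "manager", "dave": "clerk"}

def allowed(user: str, action: str) -> bool:
    """Grant the action if permitted by the user's identity or by the user's role."""
    if action in IDENTITY_POLICY.get(user, set()):
        return True
    role = USER_ROLES.get(user)
    return action in ROLE_POLICY.get(role, set())

print(allowed("bob", "write"))    # False: bob may only read
print(allowed("carol", "write"))  # True: managers may write
```

Role-based policies scale better than per-identity lists because permissions need only be revised when a role's duties change, not for every individual.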
Policies are difficult to establish and enforce because they attempt to separate legitimate actions by users from illegitimate actions by others, a distinction that is often hard to draw (for people, let alone software) from observing impersonal actions over the Internet. We say impersonal because Internet actions lack physical proximity, which would provide many clues to legitimacy or intent beyond the actions themselves.
Returning to figure 5.2, how do the different parties interact to provide security, and what incentives encourage them to contribute to overall security in a cost-effective way? From an economics perspective, the private costs (penalties or liabilities) of security lapses should accrue to those in a position to prevent them and should approximate the social costs. When this is accomplished, voluntary actions tend to balance threats efficiently against the costs of measures to counter those threats. Both software suppliers and operators are in a similar position vis-à-vis the user. Their actions can mitigate security threats, but they are less likely to suffer a loss directly, and they are motivated primarily by users' desire for security and willingness to pay for it. As a rule, security policies should be defined by the user or end-user organization and, to the extent possible, enforced by the software and hardware. There is a trade-off between what can be accomplished by operational vigilance and by software and hardware means, but generally the latter is more cost-effective because it avoids recurring labor costs.
Thus, a software supplier will view security protections as important but will also offer configuration options that allow the user to adjust security policies to local needs. Technological mechanisms to support security policies can range from simple declarations or warnings at entry points to total physical containment and separation. The ability of software by itself to enforce security policies is limited; stronger security measures can be achieved in hardware as well as by physical separation.
Example A cracker who gains access to a user account on a computer is in a much stronger position to cause harm than one who relies on access to applications (such as a Web server) running on that computer over the network. A cracker who has physical access to a computer is in an especially strong position. For example, he may be able to reboot the computer from an operating system on a removable medium and gain complete control.
As shown in figure 5.2, laws and law enforcement play a role, particularly in dealing with serious breaches of security policies. Laws and associated sanctions are appropriate where direct harm is inflicted on the user, such as information stolen for commercial gain or vandalism that causes economic loss, provided the user has taken reasonable precautions to prevent such loss. It is arguably not society's responsibility to compensate software suppliers and users for the cost of security measures—this is the price of joining an open society like the Internet—but society can help in several other ways. First, legal remedies help deter crackers, reducing the scale of the problem and preventing it from inflicting great harm on society as a whole. Second, one role of society generally is to step in when one person harms another. Third, society might compensate victims for their losses, although this creates a moral hazard because it removes the economic incentives to invoke appropriate security measures or to demand them from suppliers and operators. An alternative approach is insurance. Although insurance also dilutes incentives and thus creates a moral hazard, the underwriting process adjusts premiums in accordance with the credibility of the security measures taken.
There is also a trade-off between technology and laws in enforcing security. At one extreme, Draconian security policies may be able to reduce the role of laws and the importance of law enforcement but may unduly harm usability. At the other extreme, relying heavily on laws can encourage lax technological security and is not advisable, given the high costs and uncertainty of law enforcement. Further, law and law enforcement are relatively ineffective when tackling a global threat, because of jurisdictional issues, and cracking over the Internet is truly a global issue. International agreements and collaborative enforcement help but are even more costly and less certain.
Example The 1986 Computer Fraud and Abuse Act makes unauthorized access and theft of data or services from financial institution computers, federal government computers, or computers used in interstate commerce a crime (Marsh 1987). To prosecute offenders, their actions must be documented, and thus companies must maintain complete audit trails of both authorized and unauthorized data accesses. This is an illustration of how the software application and laws must work in concert to facilitate law enforcement and appropriate remedies.
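A minimal sketch of such an audit trail, recording both authorized and unauthorized access attempts as the act's documentation requirement demands. The in-memory log is an illustrative simplification; real systems would use tamper-evident, append-only storage:

```python
# Sketch: an audit trail that records every access attempt, whether or
# not it is permitted, so that offenders' actions can be documented.
import datetime

AUDIT_LOG = []  # stand-in for durable, tamper-evident storage

def audited_access(user: str, resource: str, authorized: bool) -> bool:
    """Record the attempt, then return whether access is granted."""
    AUDIT_LOG.append({
        "time": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "resource": resource,
        "outcome": "granted" if authorized else "refused",
    })
    return authorized

audited_access("intruder", "payroll", False)
audited_access("hr_clerk", "payroll", True)
for entry in AUDIT_LOG:
    print(entry["user"], entry["resource"], entry["outcome"])
```

Note that refusals are logged as well as grants: an unauthorized attempt that was successfully blocked is still evidence for prosecution.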
Specific capabilities that are available in software to enhance security are described in the next section, and law enforcement challenges are discussed further in chapter 8.
Privacy is another example of a social issue that raises significant management issues (see section 3.2.9). As a user invokes the features of an application, there is an opportunity for the operator of that application to capture user profile information (see figure 5.3). That profile can include personal information directly provided by the user as well as other information inferred from the user's actions.
Figure 5.3: Operators can capture user profiles and potentially aggregate that information over multiple applications.
Example The discussion forums a user visits, as well as the messages the user reads and posts, could suggest hobbies, interests, or political views. The Yellow Pages entries and maps and driving directions accessed by a user could suggest her physical locality over time. The products viewed in an e-commerce catalog could suggest products the user may be susceptible to buying.
Customization based on personal profile information can make applications more usable and valuable to the user. However, this depends heavily on what information is collected, how it is used, and whether it is disseminated to third parties. Particularly disturbing to privacy advocates is the ability to trace and correlate multiple activities. This can occur in at least a couple of ways. First, the user profile for a user's visit to one application can be aggregated over multiple visits to that application. Second, the user profile can potentially be aggregated over multiple applications (see figure 5.3). Over time, there is the potential to learn a lot about an individual's activities, perhaps more than she has perceived or would assent to.
Example If an individual's accesses to automatic teller and vending machines can be traced and captured, information can be gathered about location and activities. The more of those traces that can be aggregated, the more comprehensive the picture. Similarly, aggregated traces of an individual's activity across multiple e-commerce sites are more comprehensive and revealing than a single trace. Already there are specialized firms (such as DoubleClick) that offer such aggregation services, usually in the context of targeted Web advertising.
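The aggregation the example describes amounts to a join on a shared identifier. The sketch below uses a hypothetical cookie identifier and invented activity traces to show why the combined profile reveals more than either site alone:

```python
# Sketch: traces from separate applications become more revealing when
# joined on a shared identifier. Sites, identifier, and events are invented.

site_a = {"cookie-123": ["viewed hiking boots", "viewed tents"]}
site_b = {"cookie-123": ["searched maps near Lake Tahoe"]}

def aggregate(*traces):
    """Merge per-site activity lists keyed by the shared identifier."""
    profile = {}
    for trace in traces:
        for ident, events in trace.items():
            profile.setdefault(ident, []).extend(events)
    return profile

print(aggregate(site_a, site_b))
# Each site alone sees a narrow slice of activity; the aggregate suggests
# a person planning a camping trip -- more than either site knew by itself.
```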
These examples illustrate that, as with security, the popularity of the Internet raises new privacy issues and makes privacy a more serious issue for software suppliers, operators, service providers, application providers, and users. There are several legitimate tensions in privacy rights. Many users demand a right to privacy but also derive legitimate benefits from sharing information with applications. Application providers have the potential to derive significant revenues from utilizing user profiles to aim advertising at certain consumers or to sell that information. Government has a need to gather personal information in law enforcement investigations and in national security (see chapter 8).
Given the importance of these issues, it is useful to see how software applications and infrastructure might enable invasions of privacy or prevent or defeat them. Two elements of an application determine the level of privacy: anonymity and control.
The best way to ensure complete privacy is to ensure the anonymity of the user. In this regard, there are two forms of identity. Personal identity includes information associated with an individual, such as name, address, e-mail address, or credit card number. Anonymous identity uniquely distinguishes one individual from all other individuals but doesn't reveal personal identity.
Example When visiting a delicatessen, a customer may be asked to "take a number." This number uniquely identifies that individual among all customers in the deli (in order to serve customers in order of arrival) but in and of itself offers no hint of personal identity.
In this context, there are three levels of anonymity:
Complete anonymity. No identity information (personal or anonymous) is available to applications or service providers. There is no feasible way to capture traces of user activity over time or across applications.
Anonymous identification. While no personal identity information is available, it can be inferred when the same user revisits an application or provider (using an anonymous identifier, as in the deli example). Traces can be captured of a single user's activity, but those traces cannot be matched to personal identity.
Personal identification. Applications or providers are aware of at least some personal identity information. Often, even with incomplete information, it is possible to correlate across distinctive sets of personal information if there is some commonality (e.g., an e-mail address).
There are also intermediate cases.
Example Anonymous identification of a network access point or a computer used to access an application is common, for example, the network address used for access. This may approximate the anonymous identification of a user over relatively short periods of time, when the user is at a single computer and network access point, but not over longer times if the user switches computers or network access points (this is called nomadic access; see chapter 10). In addition, multiple users sharing a single computer will be identified erroneously as a single user. This form of anonymous identification would be more complete and accurate for mobile terminals like personal digital assistants (PDAs) or cell phones, which are less commonly shared.
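Though not a technique described here, one common way to realize anonymous identification is to derive a pseudonym from a personal identifier using a keyed hash: the same user maps to the same pseudonym across visits, enabling traces, but the pseudonym reveals no personal identity to anyone lacking the secret key. The key and e-mail addresses below are illustrative:

```python
# Sketch: anonymous identity derived from personal identity with a keyed
# hash (HMAC). Key and addresses are invented for illustration.
import hashlib
import hmac

SECRET_KEY = b"operator-held secret"  # must be withheld from applications

def pseudonym(personal_id: str) -> str:
    """Map a personal identifier to a stable, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, personal_id.encode(), hashlib.sha256).hexdigest()[:16]

a = pseudonym("alice@example.com")
b = pseudonym("alice@example.com")
c = pseudonym("bob@example.com")
print(a == b)  # True: stable across visits, so activity can be traced
print(a == c)  # False: distinct users remain distinguishable
```

The guarantee is only as strong as the secrecy of the key: whoever holds it can regenerate the mapping and re-link pseudonyms to personal identities.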
Anonymous identification is relatively benign and offers many possibilities to application and information providers, for example, in collecting anonymous but comprehensive statistical profiles of customers so as to better position products or services. The direct benefit to users is more questionable, although in many cases they may not mind anonymous identification provided they are certain it is not associated with personal identification. On the other hand, in many situations anonymity is not the right answer.
Example It is necessary to identify buyers and sellers in an e-commerce application in order to enforce the terms and conditions of a sale and for fulfillment (shipping the goods or providing the service). A user may appreciate a Web site that automatically personalizes its screen (this requires at least anonymous identification) or sends e-mail notifications (this requires personal information, e.g., e-mail address). In practice, most Web sites demand personal information (which can be fabricated by the user) for personalization because this is in their business interest.
Privacy rights and policies allow the user some control over the types of personal information collected and to whom that information can be disseminated. Or, at minimum, they require disclosure of collection and dissemination, so that the user can make an informed choice to use the application or not.
What does software technology offer in the way of privacy protection? It can enable some control over what information is shared with an application provider in the first place. Once personal information is shared, however, software cannot unilaterally determine whether and how information is further disseminated, because in the process of legitimate collection and use of personal information it is inherently revealed. If the application provider chooses to restrict further dissemination (because of self-imposed policies or legal requirements), certain security technologies can help (see section 5.4). In fact, this is no different from any other situation in which confidential information is to be protected from disclosure. A similar issue arises in rights management (see chapter 8): to legitimately use information is to disclose it, rendering most direct technological limitations ineffective.
What does technology offer in relation to anonymous or personal identity? Some technologies offer full anonymity if that is what the user desires. Whether these technologies are reliable or can be defeated depends on the strength of the security and the integrity of a service provider.
Example An anonymous remailer is a service that accepts e-mail messages, strips off all identity information, and forwards those messages to their intended destination. By keeping track of identity for each message, the remailer can support replies to the anonymous sender. However, the preservation of anonymity depends on the integrity of the remailer and the effectiveness of security measures. The sender is not anonymous to the remailer and must trust the remailer to preserve anonymity. Laws that force the retention and disclosure of personal information by the remailer may intervene in some circumstances.
Anonymous identity can be insidious because it has the potential to allow violations of privacy without the user's knowledge or permission, since the user has not provided explicit information. Later, if the user does reveal personal information, this can potentially be coupled with all the information collected under the anonymous identity. Some technology companies have been blindsided by being oblivious to privacy issues connected with anonymous identity. Fortunately, there is a growing awareness of these issues, and they are increasingly being addressed in advance by software and hardware suppliers.
Example In 1999, Intel proposed to add a unique serial number to each of its Pentium microprocessor chips, not in the time-honored way of stamping it on the outside but embedded electronically, so that applications could potentially access this serial number and anonymously identify a computer participating in an application. Intel relented when privacy advocates pointed to the potential for abuse (Markoff 1999a). Yet, telephone companies have long retained unique personal identity information for each telephone and have collected detailed records of telephone calls. A U.S. law even mandates that cell phones must capture accurate location information by 2002 so that emergency services can be dispatched more reliably and expeditiously (Romero 2001). Not all users may be aware when anonymous identity information is collected and disseminated. A significant difference between these cases is that telephone companies are subject to specific legal restrictions on revealing information.
Technology can help limit the ability of applications to correlate users' activities across multiple visits or applications. The key issue is what personal identity information is disclosed to operators or service providers by an application.
Example Each time a credit card is used for a purchase in the physical world, the merchant obtains personal identity information (e.g., the person is physically present and could be photographed by a security camera). This isn't necessary when a user provides a credit card number to a merchant on an e-commerce site, because it is the merchant's bank, not the merchant, that needs this personal information (the credit card number) to secure payment, and the merchant's bank doesn't need to know anything about what was purchased other than the payment information. A standard for Secure Electronic Transactions (SET) promulgated by credit card companies provides this enhanced privacy by allowing merchants to authorize a sales transaction without access to the credit card number, and it passes no information about the transaction other than payment to the merchant's bank. The credit card number is hidden from the merchant using encryption technologies (see section 5.4).
Example The World Wide Web Consortium's P3P standard (Platform for Privacy Preferences) allows software and services to explicitly support a user's privacy preferences. Version 6 of Microsoft's Internet Explorer added several features based on P3P that help users understand and control the privacy implications of their browsing. One feature allows selective blocking of cookies from third-party sites; this information is typically deposited by aggregating advertising firms. Another feature reveals the privacy policies of visited Web sites, and users can configure the browser to make decisions based on the absence or presence of a standard policy document.
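The cookie-blocking decision can be caricatured as a small rule: accept first-party cookies, and accept third-party cookies only when the setting site presents a machine-readable policy. This is a deliberate simplification for illustration, not Internet Explorer's actual algorithm:

```python
# Simplified sketch of P3P-style cookie handling. Hosts are invented;
# real browsers apply richer rules (policy content, user preferences).

def accept_cookie(page_host: str, cookie_host: str, has_policy: bool) -> bool:
    """Accept first-party cookies; require a policy from third parties."""
    first_party = (cookie_host == page_host)
    return first_party or has_policy

print(accept_cookie("news.example", "news.example", False))  # True: first party
print(accept_cookie("news.example", "ads.example", False))   # False: third party, no policy
print(accept_cookie("news.example", "ads.example", True))    # True: policy present
```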
Any privacy policy must address several questions:
Is the personal identity of the user known or included in the user profile?
Is the user allowed some degree of control over attributes of the policy, what information is collected, and how it is used?
If there are user-selected options, what are the defaults if the user makes no explicit choice? Extreme cases are "opt in," where no information is collected unless the user explicitly so chooses, or "opt out," where all information is collected unless the user explicitly says no.
Who owns and exercises control over information that is captured?
With whom is a user's personal information shared, and how may they disseminate it further? What happens if a company merges or is acquired?
Over what period of time is personal information captured, and how long is it retained?
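The questions above suggest the attributes a machine-readable policy record might carry. A sketch follows, with field names and the conservative opt-in default chosen purely for illustration:

```python
# Sketch: the policy questions above rendered as fields of a record.
# Field names, defaults, and values are illustrative choices.
from dataclasses import dataclass, field

@dataclass
class PrivacyPolicy:
    collects_personal_identity: bool = False
    user_controllable: bool = True          # may the user adjust what is collected?
    default_choice: str = "opt-in"          # no collection unless the user agrees
    information_owner: str = "user"         # who owns the captured information
    shared_with: list = field(default_factory=list)  # third-party recipients
    retention_days: int = 90                # how long information is retained

policy = PrivacyPolicy(shared_with=["payment processor"])
print(policy.default_choice, policy.retention_days)
```

Making opt-in the default means a user who expresses no choice has nothing collected, which is the privacy-protective extreme described above.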
In summary, ensuring appropriate levels of privacy requires a combination of policies, security technologies, operations, law enforcement, and legal remedies. The essence of achieving privacy is offering the user control over what kind of information is gathered and how it is disseminated, and providing technological and nontechnological means to enforce privacy policies (see section 5.4).
Although a user's deliberate or inadvertent revealing of a password is beyond the control of the software supplier, the software can include heuristics to check password quality (Is it long enough? Is it in the dictionary? Does it include numbers as well as alphabetic characters?). The software can also maintain a history of previously used passwords and prevent the reuse of old passwords within some (long) time window.
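These heuristics can be sketched directly. The word list and length threshold below are illustrative stand-ins for a real dictionary and a site's actual policy:

```python
# Sketch of password-quality heuristics plus a reuse check against a
# password history. Thresholds and word list are illustrative only.

COMMON_WORDS = {"password", "letmein", "dragon"}  # stand-in for a dictionary

def password_ok(candidate: str, history: list) -> bool:
    """Apply quality heuristics and reject reuse of an old password."""
    long_enough = len(candidate) >= 8
    not_in_dictionary = candidate.lower() not in COMMON_WORDS
    has_digit = any(ch.isdigit() for ch in candidate)
    has_alpha = any(ch.isalpha() for ch in candidate)
    not_reused = candidate not in history  # real systems compare stored hashes
    return all([long_enough, not_in_dictionary, has_digit, has_alpha, not_reused])

print(password_ok("password", []))                    # False: dictionary word, no digit
print(password_ok("blue42parrot", []))                # True: passes all checks
print(password_ok("blue42parrot", ["blue42parrot"]))  # False: reuse of old password
```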
A firewall can also be realized as software running on commodity hosts or within standard network routing equipment. Firewall software is also available to run on a single host (like a home desktop computer) to isolate that host from the network.
The AT&T Privacy Bird (<http://privacybird.com/>) is an Internet Explorer plug-in that translates P3P documents into easily understood text and can issue privacy warnings to the user.