The discussion in section 5.1 presumed that an application exists in a monolithic administrative domain with control over an application-global scope. Increasingly prominent are distributed management applications whose scope can include multiple autonomous organizations with an arms-length relationship as well as individual networked users.
Example E-commerce displays a rich set of distributed management applications. An example is supply chain management, in which suppliers and customers (autonomous companies) coordinate production and inventories, schedule shipments, and make payments through a distributed application. E-commerce also enables individual consumers to participate in a distributed application to purchase goods and services from online merchants.
Such distributed management applications are also distributed software systems, and hence all the issues discussed in section 4.5 arise. Distributed management, however, introduces many new challenges arising from the lack of centralized control, administration, and authority.
Example The public telephone network is a distributed management sociotechnical system. It is global in extent and spans virtually all national boundaries. It is also jointly owned and operated by a number of private and government-owned telephone companies. These companies cooperate to support phone calls between any two citizens, to ensure adequate quality, and to collect revenue and distribute it to the participating companies. Although this is challenging enough, some aspects of this example make it less challenging than many distributed management software applications. Telephone companies exist for the primary purpose of operating this network and selling a single telephone service, whereas a distributed software application is often operated by heterogeneous firms whose primary businesses have little to do with software. And an individual's interaction with a distributed management application is often intricate compared to using a telephone (dialing a number and talking).
As this example illustrates, the challenges of distributed management are not unique to distributed networked applications. Society has established many mechanisms and processes directly applicable to distributed management. These include a wealth of experience and best practices, an existing infrastructure for commercial transactions (e.g., insurance, escrow services, funds transfer), and an extensive legal infrastructure (laws, courts, law enforcement). Thus, the challenge for the software industry is at least threefold:
The technology must support and complement existing mechanisms and processes that organizations use to coordinate and manage distributed decision making and responsibility, accommodating heterogeneity in the management styles and processes of participating organizations.
The technology should accommodate different administrative styles and practices within a common distributed application. In particular, the management tools mentioned in section 5.2 must work adequately in an environment of distributed decision making and responsibility.
The technology should allow different organizations some autonomy in their software acquisitions—including choosing different vendors for infrastructure and application software—and still allow them to participate jointly in applications (see chapter 7).
The challenges inherent in distributed management are diverse and extensive, so we use security (see section 5.3.1) to illustrate some representative challenges (Anderson 2001). Security serves as a good illustration because it brings in both technical and organizational issues and demonstrates their interdependence, and it is strongly affected by distributed management. Some specific security technologies described here also point to the potential and limitations of technological remedies for social issues like privacy (see section 5.3.2) and intellectual property rights management (see chapter 8).
The general participants in a security system and their roles were described in section 5.3.1, where it was emphasized that security technologies are only one pillar of a sociotechnical system that also incorporates individuals (both users and administrators) and end-user organizations. Technology cannot be fully effective by itself—the human element of the sociotechnical security system is critically important as well. A discussion of some specific security issues follows. Afterwards, we return to reflect on the general challenges facing distributed management applications.
A distributed management security system must address at least two challenges. The first is preventing deliberate attacks from crackers (who usually aren't legitimate users of the application). The second is determining a framework to manage the activities of legitimate users. The latter is actually more difficult because legitimate users have conditional rights, rather than no rights, within the application context. Security must deal with many nuances, such as a user who resides in one management domain and has limited (rather than no) rights within another.
What distinguishes a potential cracker from a legitimate user? Among the legitimate users, what distinguishes their different roles (such as administrator, user, customer, and supplier), and how do those roles lend them distinct privileges within the application?
Example A firm using a consumer e-commerce application allows all network citizens to examine the merchandise catalogs (it wants to advertise goods for sale very broadly) but allows only those who present valid credentials (like a verifiable credit card number) to submit a purchase order (because it wants to be paid for goods shipped). It trusts only its own workers to change prices in the catalog (because otherwise users could lower the prices before submitting a purchase order); in fact, it trusts only authorized workers to do so (others may have no understanding of its pricing policies).
Again, these issues are not unique to software. The challenge is to realize mechanisms in distributed applications similar to those in the physical world.
Example A bank branch is established specifically to allow bank customers access to their money and various other banking services. However, the customers must deal with the bank only through tellers at the window; access to the vaults where the money is stored is prohibited. Similarly, access to the vaults is restricted to those employees who need it to meet a legitimate job function. Meticulous audit records of vault access are kept in case any question arises.
These examples illustrate that privileges are typically not ascribed to all individuals or to organizations as a whole, but rather are tied to specific actions or information in relation to specific roles in the organization. A general principle is to restrict users' privileges to actions and information legitimately ascribed to their roles.
Thus, the security framework focuses on three concerns: need, trust, and privileges. Each user is prescribed privileges, and the security system must prohibit access to anyone not explicitly possessing those privileges. Privileges are based on the following concerns: What does a particular user need to do, or what information does he need to see, as part of his prescribed role? If we want to expand privileges beyond explicitly anticipated needs (for example, to encourage individual initiative or enable response to an emergency), what level of trust are we willing to place in a user? Usually, trust of individuals comes in gradations, not extremes (an exception is a cracker, in whom we place no trust). Privileges are usually not generalized but pertain to specific actions or information.
At the level of technology, privileges are translated into access. Based on the privileges assigned to an individual and referenced to a specific action or content, restrictions are placed on his access to actions and information. Access itself is nuanced: in the case of information, it may be divided into access to see, or access to see and to change, or access to replicate and distribute, and so on.
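These gradations of access can be captured directly in code. The following is a minimal sketch (the class and member names are illustrative, not from any particular system) of how access to a single piece of information might be divided into see, change, and redistribute rights:

```python
from enum import Flag, auto

# Graded access rights to one item of information; the gradations named
# here (see, change, redistribute) follow the distinctions in the text.
class Access(Flag):
    NONE = 0
    SEE = auto()
    CHANGE = auto()
    REDISTRIBUTE = auto()

# A user's access to an item can combine gradations.
granted = Access.SEE | Access.CHANGE

assert Access.SEE in granted
assert Access.REDISTRIBUTE not in granted
```

Flags compose naturally, so a policy can grant "see and change" without implying "redistribute," mirroring the nuanced divisions of access described above.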
A key element of security in a distributed application is enforcing access policies, which is the role of access control. The scope, specifics, and configurability of available access policies are determined during application development, and the enforcement of those policies is implemented by access control mechanisms within an application, within the infrastructure, or both. The specific access restrictions placed on an individual user can be configured (within the policy limits afforded by the developers) by a system administrator (who is given the privilege of controlling access policies and the responsibility to maintain security), or they may be determined dynamically by the application itself.
Example In a consumer e-commerce application, a system administrator will configure a worker's permanent access privileges to change the catalog information if that is the worker's intended role. On the other hand, the right of a consumer to submit a purchase order will be determined by the application itself, based on credit information it acquires from external bank or credit-rating agencies.
An important design issue is the granularity at which access is restricted. This issue comes up at both ends of the access: the people doing the access, and the information or application functionality being accessed. Considering first the people, access policies may be established for each individual, but more likely access will be role-based, that is, classes of individuals with similar roles are granted the same access privileges. Introducing roles leads to enhanced flexibility: an individual may play multiple roles and roles may be reassigned.
Example While she is on vacation, an employee may delegate some of her roles to a co-worker. Or to rebalance workload, a manager may reassign roles or assign more people to a demanding role. Any such change in a system directly associating individual users with access rights would be tedious: all access control lists on all relevant resources would have to be inspected and, if necessary, changed. This is costly, and opportunities for mistakes abound. In a role-based approach such complications only arise when it becomes necessary to redefine the roles themselves, not their association with access control lists or their assignment to individuals.
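The advantage of indirection through roles can be sketched in a few lines. This is an illustrative fragment, not a production access control system; the role and privilege names are hypothetical, loosely following the e-commerce example:

```python
# Role-based access control sketch: privileges attach to roles,
# and users are mapped to roles rather than directly to privileges.
roles = {
    "catalog_editor": {"catalog:read", "catalog:change_price"},
    "customer":       {"catalog:read", "order:submit"},
}

# Reassigning a role (e.g., to cover a vacation) is a one-line change
# here; no access control list on any resource needs to be touched.
user_roles = {"alice": {"catalog_editor"}, "bob": {"customer"}}

def has_privilege(user: str, privilege: str) -> bool:
    return any(privilege in roles[r] for r in user_roles.get(user, set()))

assert has_privilege("alice", "catalog:change_price")
assert not has_privilege("bob", "catalog:change_price")
```

Delegation during a vacation becomes `user_roles["carol"].add("catalog_editor")`; the association between roles and resources is untouched.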
Regarding the information being accessed, distinct access control policies might be implemented for each elemental action or data item (fine granularity), or a common access control policy might be enforced across broad classes of actions or data (coarse granularity).
Example In the e-commerce example, the granularity may range from all sales employees' being granted the right to change all prices in the catalog (coarse granularity) to granting a particular individual the right to change prices in a particular category of merchandise (fine granularity). A customer may be granted the right to purchase any merchandise in the catalog, or only merchandise below a specific maximum price based upon his credit rating.
The form of access control policies appropriate to a particular application context would typically be established and configured during provisioning, and the application of those policies to individual users would be configured (and occasionally updated) during operation.
In recognition of their importance, access control mechanisms are a standard feature of most infrastructures, although the Internet itself is a notable exception that has been addressed by some add-on technologies (such as the firewall and the virtual private network; see section 5.3.1). Some major access control capabilities provided by the infrastructure are listed in table 5.6.
The ability of the infrastructure to meet the special access control (and other security) needs of individual applications is limited because by design the infrastructure can know nothing of the internal workings of an application (see chapter 7). Thus, an application typically configures generic infrastructure capabilities to its needs and may add its own access control capabilities.
The access control mechanisms of table 5.6 actually predate the rising prevalence of distributed management applications. There remains much for research and development to do to arrive at solutions that meet emerging needs.
Table 5.6 Access control capabilities provided by the infrastructure

File system permissions: Each file has a set of attributes, one of which lists the permissions of specific users to read, or to read and change, that file.

Firewall: Firewalls can create a protected enclave based on the topological point of connection to the network. Only users accessing the network from within that enclave can fully participate in the application. The firewall enforces security policies such as which applications can or cannot be accessed from outside the enclave. Finer-grain restrictions on how applications are used are not the province of a firewall, which may know of an application but not about it.

Virtual private network: Creates a protected enclave that incorporates the public Internet. This is typically done by connecting intranets through the public Internet via secure links or by subscribing to virtual private networking (VPN) service from a networking service provider.

Database user access control: Permissions to read, or to read and change, individual tables in the database can be set for individual users. Finer-granularity access control within tables is also available.[a]

[a] Column-based access control is usually available, and some databases support predicate-based access control, e.g., a user can see all records of employees with salaries below some limit. One way to achieve both is to create a "view" that only displays the allowed rows and columns and then limit access to the view, not to the underlying tables.
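The view technique from the footnote can be demonstrated with an in-memory SQLite database (the table, columns, and threshold are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT, salary REAL)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [("Alice", "sales", 90000), ("Bob", "sales", 45000)])

# A view exposing only the allowed rows (salary below a limit) and
# columns (no salary); access would then be granted to the view
# rather than to the underlying table.
conn.execute("""CREATE VIEW junior_staff AS
                SELECT name, dept FROM employees WHERE salary < 50000""")

rows = conn.execute("SELECT * FROM junior_staff").fetchall()
print(rows)  # only Bob appears, and the salary column is hidden
```

SQLite itself has no per-user permissions, so this only shows the view mechanics; in a multiuser database the administrator would additionally grant users SELECT on the view and revoke it on the table.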
Example By themselves, the intranet and extranet concepts restrict access only on the basis of presence within the protected enclave they create. This is too coarse a granularity, given the need to maintain role-based gradations of trust within a protected enclave. Worse, firewalls do not provide sufficiently nuanced control over access for workers while traveling or at home (leading to some well-publicized security leaks), nor do they take sufficient account of the need to accommodate conditional access policies for business partners or customers.
A fundamental issue in access control is verifying the identity of a user (or alternatively a host or an organization) across the Internet—this is called authentication. In the absence of the ability to authenticate reliably, access control can be circumvented by an imposter. In contrast to the physical world, authentication cannot rely on normal physical cues like sight, hearing, and smell. It is not sufficient to substitute electronic sensors for our senses, because an impostor can often defeat them, for example, by substituting a picture from storage for a picture captured in real time by a camera.
This challenge is not unique to networked computing—business has been conducted by mail for years. Some of the mechanisms that have been developed in the postal system, such as envelopes, permanent ink, and signatures, have direct analogies in digital security systems. Similar challenges and mechanisms arise in the physical world, such as identity badges.
Authentication of an individual, on the network or in person, can be based on something he knows (and only he knows, i.e., a secret), an artifact he possesses (and only he possesses), or something only he is. The third option—some physical characteristic that distinguishes one person from another—is known as biometric authentication.
Examples A password or a codeword (reputedly used by spies) is a distinguishing secret. A door key or a credit card is a unique artifact possessed by one person only. A person's face or fingerprint is a unique physical characteristic. A signature in handwriting is unique because it depends on physiological characteristics and experience unique to an individual. All these are used for authentication in the physical world.
Authentication across the network can leverage similar ideas. A secret is usually a password chosen by the user, or a collection of random-looking bits, called a cryptographic key, provided to the user. (The larger the number of bits in the key, the less easily guessed it is or the more keys would have to be exhaustively tried. Keys on the order of 56 to 1,024 bits are common.)
A common artifact used for authentication is a smartcard, which looks much like a credit card but contains an encapsulated microprocessor and storage. One smartcard can be distinguished from all others because of a unique secret cryptographic key encapsulated within it. Thus, the smartcard is merely a variation on the secrets approach, a convenient way for a user (particularly one who needs to travel and doesn't always have a computer around) to carry a cryptographic key and use it for authentication. (Smartcards can do much more than this, of course, like store medical or financial information.) Authentication based on possession of an artifact like a smartcard is reliable only if it is not easily replicated or forged. In the case of a smartcard, this means that the unique secret it harbors must be encapsulated so that it cannot be extracted (or is at least very difficult to extract).
How would you actually exploit a secret for authentication (including a secret in a smartcard)? Your first inclination might be to check whether the entity being authenticated can produce the secret. Seems simple enough, but there is a fatal flaw in this simplistic protocol. It requires that you know the secret, in which case it isn't secret. Woops! It also requires that the secret be sent over the network, where it could be intercepted by a cracker. These are weaknesses of most password authentication protocols.
A solution to this conundrum is a challenge-response protocol. A challenge is issued that requires the entity being authenticated to respond in a way that confirms it possesses the secret while not revealing that secret. Verification requires enough information to conclude an entity possesses the secret. For our purposes, call this the corroboration information for the secret (although this is not standard terminology). The secret and its corroboration information must be coordinated, that is, they must effectively originate from the same source. The secret is given to the entity that wants to be authenticated in the future, and the corroboration information is given to everybody who wants to authenticate that entity. It must not be feasible to infer the secret from its corroboration information.
The core technology that accomplishes this is cryptography, which consists of an encryption algorithm applied to an original message (the plaintext) based on a cryptographic key that yields a gibberish version of the message (the ciphertext). A coordinated decryption algorithm (based on another cryptographic key) can recover the original plaintext. When the encryption and decryption algorithms use identical keys, the algorithm is symmetric; otherwise, it is asymmetric (see figure 5.4).
Figure 5.4: An asymmetric encryption algorithm uses two coordinated encryption keys.
Example A standard challenge-response protocol can be based on the asymmetric encryption algorithm shown in figure 5.4. This workhorse uses two coordinated keys, key-1 and key-2. The algorithm and keys have the properties that (1) the plaintext cannot be recovered from the ciphertext without knowledge of key-2, (2) key-1 and key-2 are coordinated so that decryption with key-2 recovers the plaintext encrypted with key-1, and (3) key-1 cannot be determined from knowledge of key-2, or vice versa. All these statements must be prefaced by the qualification "within a reasonable time using the fastest available computers."
With these properties in mind, the asymmetric encryption algorithm shown in figure 5.4 can be used in a challenge-response authentication protocol as follows. Key-1 is the secret held by the entity being authenticated, and key-2 is the corroboration information for that secret, which can be made public so that anybody can do an authentication. To authenticate that entity, you issue the following challenge: "Here is a random plaintext, and you must calculate the resulting ciphertext and return the result to me." You can verify that the entity possesses the secret key-1 by applying decryption to the returned ciphertext using key-2. If the result is the original random plaintext, authentication is successful. Note that the secret, key-1, is not revealed and cannot be determined from the corroboration information, key-2, or the encrypted ciphertext.
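This protocol can be sketched with textbook RSA, using the classic tiny parameters (p = 61, q = 53). This is an illustration of the flow only; such small keys are trivially breakable, and real systems use vetted cryptographic libraries with keys of the sizes quoted earlier:

```python
import secrets

# Textbook RSA with tiny classic parameters -- insecure, illustration only.
n = 61 * 53                  # public modulus
phi = 60 * 52                # Euler's totient of n
key_2 = 17                   # public exponent: the corroboration information
key_1 = pow(key_2, -1, phi)  # secret exponent held by the entity

# Challenger: "here is a random plaintext; encrypt it and return the result."
challenge = secrets.randbelow(n)

# Entity being authenticated: computes the ciphertext using secret key-1.
response = pow(challenge, key_1, n)

# Challenger: decrypts with public key-2 and compares with the original.
# The secret key-1 is never revealed and cannot be derived from key-2.
assert pow(response, key_2, n) == challenge
print("authentication succeeded")
```

Note that an eavesdropper sees only the random challenge and its ciphertext, neither of which reveals key-1.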
Important management issues arise in using a secret for authentication. How is a secret to be kept secret? There are two ways a secret could be improperly divulged. First, it can be lost or stolen. Means must be taken to prevent this (like encapsulating the secret in a smartcard). As in all areas of security, human issues loom large. Many security breaches result from deliberate actions on the part of people, like the theft of a secret by an unscrupulous employee or inadvertent revealing of a secret through carelessness. Second, the holder of a secret might deliberately divulge it for economic gain or other advantage. This is unlikely in the case of secrets used for authentication, since rarely is there a motivation or incentive to deliberately empower an imposter. The situation is entirely different in other cases, like rights management (see chapter 8).
Authentication by biometric identification requires a sensor to gather the biometric information, like a fingerprint or a retinal scan. This has the inherent problem mentioned earlier, namely, biometric information can be faked, although sophisticated sensors measure dynamic information that is difficult to fake. This can be overcome to some degree by using a trusted system (see section 4.4.5) to gather the biometric data. A trusted system is relied upon to enforce certain security policies, in this case to gather information from a sensor (like a fingerprint scanner or camera) that is encapsulated in the trusted system and to transmit that information over the network without modification. Biometric authentication also requires corroboration information to interpret the biometric information that is gathered.
All these means of authentication face serious obstacles to being widely adopted, among them network effects (see section 3.2.3). How do we ensure that everybody who needs to be authenticated is supplied the needed secret (or trusted system) and that everybody else is supplied corroboration information they can trust? We can appreciate these obstacles further by examining some additional issues in authentication.
Cryptography, smartcards, and challenge-response protocols are impressive technologies, but technologies alone cannot solve the access control and authentication problem. A social system or organization must back up the technology by establishing the level of trust to be placed in each individual (or host or organization) and verifying identities for the benefit of authentication. There must be someone or something to whom we grant the authority to tell us whom to trust. Not surprisingly, this entity is called a trusted authority.
An immediate need for a trusted authority arises in authentication. Suppose you are to authenticate an entity using a secret and its corroboration information. Where is the corroboration information obtained? Clearly it has to come from a trusted authority, not from the entity being authenticated; otherwise, you could easily be fooled.
Example Consider the security associated with entering a country. If Eve could manufacture her own passport, she could fool the immigration authorities by manufacturing a forged passport. That passport might associate the identity of someone else (say Alice) with Eve's picture, and Eve could then enter the country as Alice. The immigration authority accepts Alice's passport only because it is manufactured by a trusted authority (e.g., the U.S. State Department), and it trusts that this authority has taken appropriate precautions to ensure that Alice's (and not Eve's) picture appears in Alice's passport and to ensure that the passport cannot be easily modified.
Thus, authentication is fundamentally a social phenomenon arising from two or more entities (people, organizations, hosts, smartcards) participating in some application over the network with the assistance of one or more trusted authorities. The role of a trusted authority is (1) to establish to its own satisfaction the identity of and trust to be placed in some entity, (2) to provide that entity with a secret (one that the authority creates, for example, by generating a large random number) to be used to authenticate the entity in the future, and (3) to assist others in authenticating that entity by providing corroboration information for that secret. The authority is legally responsible for negligence in any of these roles.
Example In a closed administrative domain (like a corporate intranet), the system administrator may serve as a trusted authority. In a distributed management (but sensitive and closed) application like an electronic funds network or credit card verification network, the operator of the network may serve as a trusted authority. In Internet e-commerce today, a certificate authority is a company that specializes in serving as a trusted authority for merchants and other providers, and the acquirer bank who supplies a credit card to an individual acts as the trusted authority for that individual. Serving as a trusted authority is potentially a role for government as well, as in other contexts (driver's license, passport). For a trusted system, its manufacturer is a trusted authority.
There are several methods of distributing and utilizing secrets and using them for authentication (see figure 5.5). These different methods are appropriate in different circumstances, illustrating how technology can be molded to social needs. Specifically, the three methods shown can apply to a closed environment (e.g., a departmental application where the trusted authority is a local system administrator), to a proprietary public environment (e.g., where a set of users subscribe to an authentication service), and to an open public environment (e.g., where there is a desire for any network citizen to be authenticated, as in e-commerce).
Figure 5.5: Three methods of secret distribution in security systems.
In the closed environment it is feasible for the authority to create and distribute secrets to all entities needing to be authenticated (see figure 5.5a). In this case, the challenge-response protocol can be based on a single shared secret, shared between the entity being authenticated and the entity doing the authentication (the secret is its own corroboration information in this case).
Example If entity A needs to authenticate entity B, a trusted authority anticipating this need could create and distribute a single secret to be shared between entity A and entity B. The secret might be distributed by physically visiting each party rather than communicating it over the network, where it could be intercepted. Entity A can then authenticate entity B by issuing a challenge (such as "encrypt this plaintext using our shared secret" or "decrypt this ciphertext using our shared secret"), thereby avoiding sending the secret over the network.
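A sketch of this shared-secret challenge-response, using a keyed hash (HMAC) rather than encryption, which is a standard way to realize the same idea with the Python standard library (the distribution of the secret "out of band" is simulated by both parties holding the same variable):

```python
import hashlib
import hmac
import secrets

# The trusted authority creates one secret and distributes it to both
# A and B out of band (simulated here by a shared variable).
shared_secret = secrets.token_bytes(32)

# Entity A (the challenger) issues a random challenge over the network.
challenge = secrets.token_bytes(16)

# Entity B proves possession of the secret by returning a keyed hash of
# the challenge; the secret itself never crosses the network.
response = hmac.new(shared_secret, challenge, hashlib.sha256).digest()

# A recomputes the keyed hash with its own copy and compares in a way
# that resists timing attacks.
expected = hmac.new(shared_secret, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)
print("B authenticated")
```

Because the challenge is random and fresh, a cracker who records one exchange cannot replay the response later; and because the hash is one-way, the response does not reveal the secret.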
A shared secret approach does not scale to large numbers of users because of the proliferation of secrets to distribute and manage. If there are n entities, then each entity has to store n − 1 secrets (one it shares with each of the other entities) and the authority has to create and distribute n × (n − 1) secrets total (roughly a trillion secrets for a million entities, for example). The number of secrets can be dramatically reduced by the shared secure server approach (see figure 5.5b). (This is used in a popular public domain protocol called Kerberos.) Here, a trusted authority creates a secret shared between itself and each entity (n secrets for n entities) and operates a secure server to help any two entities authenticate one another. (The server becomes a single point of security vulnerability, which is why it must be "secure.") If entity A wants to authenticate entity B, then A can first authenticate the secure server (using the secret shared between the secure server and entity A) and then ask the secure server to authenticate entity B (using the secret shared between the secure server and entity B).
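The scaling argument is easy to check with a quick computation comparing the two approaches for a million entities:

```python
n = 1_000_000  # number of entities to be authenticated

# Pairwise shared secrets: each entity stores n - 1 secrets, and the
# authority distributes n * (n - 1) secret copies in total.
per_entity = n - 1
total_distributed = n * (n - 1)

# Shared secure server (Kerberos-style): one secret per entity.
server_secrets = n

print(per_entity)         # 999999 secrets stored by each entity
print(total_distributed)  # 999999000000, roughly a trillion
print(server_secrets)     # 1000000
```

The secure server reduces the authority's burden by a factor of n − 1, at the price of a single point of vulnerability that participates in every authentication.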
The shared secure server approach is not practical when the community of entities to be authenticated is very large or dynamic. An extreme case is the public Internet, where it would be desirable for any network citizen to be able to authenticate any other citizen, or at a minimum, any application or service to be able to authenticate any citizen. It is hard to believe that the entire citizenry could agree on a single trusted authority, unless such an authority were imposed (say, by some global government), and use a single trusted server to assist in every authentication. There is also the question of a business model: How does an authentication service recover its costs? On the other hand, authentication is affected by direct network effects (see section 3.2.3), which reduce the value of any fragmented solution that prevents users or applications employing one authentication service from authenticating entities employing a different service. Desirable is a solution that appears uniform to users (any entity can authenticate any other entity) but that allows distributed trusted authorities (each dealing with a local population with which it is familiar), competition among competing solutions, and the flexibility for technology and capability to advance. The first two objectives are described as federation of a distributed system, the idea being that the federated system appears uniform and homogeneous to the user, but without imposing centralized control or requirements. Successful federation must be based on interoperability standards (see section 4.3 and chapter 7) that allow heterogeneous technical solutions and competitive options to work in concert toward a common goal through standardized interfaces and network protocols.
It must also be recognized that authentication does not exist in isolation; it is tied to the more general issue of personal identity and privacy (see section 5.3.2). Desirable is a service that allows users to enter identity information once and only once (rather than at every application and every site), allows users and applications to validate their mutual identity through authentication, and deals with privacy concerns by giving users control over which personal information is disclosed to applications and sites. Services with most of these characteristics based on a secure server have been offered for some time.
Example AOL's Magic Carpet, Microsoft's Passport, and Yahoo! Wallet are services that allow individuals to provide personal identity information (including credit card information) to a secure server. Today the servers use a password provided by the user as a shared secret for authentication. The user can then access a number of partner e-commerce sites without entering the information again, and the secure server assists the site in authenticating the user without her reentering the password. The trusted authority (from the perspective of validating the user identity) is the credit card company, which checks the consistency of personal identity information against its records and authorizes the use of the credit card. These services compete on the basis of the quantity and quality of partner e-commerce sites, and each brings a natural user community to the table, giving them natural advantages in competing for merchant partners. However, they are unnecessarily fragmented, and recognizing the importance of direct network effects, initiatives are under way to create federated solutions. Sun Microsystems initiated the Liberty Alliance, an industry standards group dedicated to creating open standards, which includes AOL/Time Warner (as well as other software companies), online merchants, and banks. Microsoft, while not participating in an industry standards group, has created its own open standards that allow other services to federate with Passport. Thus, there are at present two major federated alliances forming, and individual users will have the option of joining one or both.
A third technical solution to the authentication challenge is the credentialing approach (see figure 5.5c), which eliminates the need for a secure server to participate in every authentication. In this approach, each entity is associated with its own trusted authority. In the case of entity B, this authority is B's certificate authority (CA). This CA must be public, meaning that all entities wishing to authenticate B (potentially all network citizens) must recognize this CA's judgment as to the level of trust to be placed in entity B. The CA initially establishes the identity of entity B to its satisfaction, and also establishes a level of trust to be placed in entity B, taking necessary measures that account for its legal responsibility. The CA then creates a secret that it supplies to B, as well as corroboration information it is prepared to provide to anyone wishing to authenticate B.
Example A merchant who wishes to sell items on the Internet will seek out a CA that is widely known and whose authority is respected by the potential customers of this firm. To establish the legitimacy and responsibility of this merchant, the CA will likely take a number of actions. It will inspect its physical premises, talk to executives, and check the reputation and financial resources of the firm with ratings sources (such as Dun and Bradstreet). Having convinced itself that its own reputation (or pocketbook) will not be harmed by its endorsement of this merchant to potential customers, the CA will create a secret and physically convey this secret to the firm. It is the solemn responsibility of the firm to preserve secrecy because any other party obtaining this secret could masquerade as this merchant, potentially defrauding customers and sullying its reputation.
Once B has established a relationship with this CA, the CA assists other entities in authenticating B. Entity A, acknowledging the authority of the CA, requests the assistance of this CA in authenticating entity B. The CA might provide the necessary information in the form: "Here is the corroboration information necessary to authenticate B, and here is the protocol you should use to issue a challenge to B. If B's response conforms to this, then you can be sure that B possesses the secret I gave it." In practice, this direct CA involvement in every authentication is avoided by credentialing. The CA issues a credential for entity B (analogous to a government passport) that can be used by anybody who respects the authority of the CA to authenticate entity B. This credential (called a digital certificate; hence "certificate authority") includes identity information for entity B, the level of trust that can be placed in B, and corroboration information for entity B's secret. This digital certificate, together with a challenge-response protocol that the CA makes publicly available, can be used to authenticate B. In practice, entity B provides its own certificate as a credential to whoever wishes to authenticate it.
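To make the challenge-response idea concrete, here is a minimal sketch in Python. It is a simplification of the shared-secret flavor of the protocol: the "corroboration information" held by the verifier is the secret itself, and the response is an HMAC over a fresh random nonce. The names and the choice of HMAC-SHA256 are our illustrative assumptions, not part of any deployed protocol.

```python
import hashlib
import hmac
import secrets

# Secret established out of band between entity B and its authority.
SECRET = b"entity-B-secret"

def issue_challenge() -> bytes:
    # The verifier sends a fresh random nonce so that an eavesdropper
    # cannot replay an old response.
    return secrets.token_bytes(16)

def respond(challenge: bytes, secret: bytes) -> bytes:
    # Entity B proves possession of the secret without revealing it:
    # only the secret holder can compute this keyed digest.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, corroboration: bytes) -> bool:
    # In this shared-secret simplification the corroboration information
    # is the secret itself; compare_digest avoids timing leaks.
    expected = hmac.new(corroboration, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
response = respond(challenge, SECRET)
assert verify(challenge, response, SECRET)
assert not verify(challenge, response, b"impostor-secret")
```

Note that the nonce is what distinguishes a challenge-response exchange from simply sending a password: the response is useless to an eavesdropper once the challenge changes.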
Example The Netscape Communicator and Microsoft Internet Explorer Web browsers implement an authentication protocol within the secure socket layer (SSL), originally developed by Netscape. SSL allows any browser to authenticate (and communicate confidentially with) secure Web servers that implement SSL. A secure Web server must obtain a secret and a digital certificate from a public certificate authority. Which CAs are recognized as authoritative by the browser? Today the browser suppliers largely make this determination by building a list of CAs whose authority is recognized in the browser software distributed to users. (Internet Explorer allows for the addition of new CA root certificates, typically distributed through a trusted channel such as the Windows Update mechanism.)
The public infrastructure of certificate authorities and digital certificates is called a public key infrastructure (PKI). Unlike the shared server approach, credentialing requires special software in the user's client, in this example incorporated into a Web browser. It combines authentication with secure personal identity information; the latter can be included in the certificate. However, it is relatively inflexible in that any change to this information must pass through the CA, and it is also relatively cumbersome to incorporate privacy features into this protocol. One detail: if entity B is to provide its own certificate, there must be a reliable way to establish that the certificate was created by the CA, not by entity B, and that the certificate has not been modified. This is the question of accountability (see section 5.4.6).
Confidentiality makes it possible to share information with someone or something without making it available (in usable form at least) to unauthorized third parties. Technologies can help ensure confidentiality, but legal restrictions (laws and contracts) play an important role also, particularly in ensuring that the recipients of confidential information do not voluntarily distribute it to third parties. The primary role of technological measures is preventing third parties from benefiting from information that is stored or communicated in a public place, or as a second line of defense against unauthorized access to information.
Example A consumer may need to transmit a credit card number to an online merchant over the Internet to pay for merchandise. The conversation of a user talking on a cell phone can be monitored or recorded with appropriate radio equipment. A cracker may gain unauthorized access to an intranet and view information stored on servers there. In all these cases, confidentiality technologies can prevent the unauthorized party from using or gaining any benefit from the information.
Confidentiality is important to digital rights management. As discussed in section 2.1.1, information in digital form is easily replicated. This creates difficulties in enforcing the privileges of ownership to information, as conveyed by the copyright laws (see chapter 8). The owners of information often use confidentiality technologies (as well as the laws) to restrict information to paid subscribers.
Example A provider wanting to sell subscriptions to music or movies downloaded or streamed over the network faces two challenges. First, this information may be stolen while it traverses the network, and second, the legitimate subscriber may replicate it and provide it to others (this is termed "second use" in the copyright laws). The provider can attempt to prevent both these actions by using confidentiality technologies.
Confidentiality can help enforce privacy policies.
Example An online merchant maintains a repository of information about customers, including their purchases and credit card numbers. The merchant may have privacy policies that restrict the dissemination of this information, or government laws may impose some restrictions. Employee training will hopefully prevent the unauthorized divulging of this information, but it cannot preclude a cracker from gaining unauthorized access. Access control will defend against crackers, but should information be stolen in spite of this, confidentiality technologies can prevent the use of the stolen information.
Like authentication, confidentiality is based on secrets. Suppose that information is to be sent from entity A to entity B (users, hosts, or organizations). We can allow entity B to use the information while preventing the use by any other entity if entity B possesses a secret not available to anybody else.
Example A plaintext can be sent confidentially to an entity possessing key-2, which is presumed to be secret (see figure 5.4). Anybody possessing key-1, which can be public information, can encrypt the message using key-1 to yield the ciphertext, but only the entity possessing key-2 can recover the plaintext from the ciphertext.
The use of asymmetric encryption for confidentiality does not require knowledge of entity B's secret to encrypt the information. The same secret that is distributed to entity B to aid in authentication can be used by entity B for decryption, and the same form of corroboration information that is used for authentication can be used for encryption. Entity B's digital certificate serves the double purpose of enabling authentication and allowing information to be conveyed confidentially to entity B.
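The asymmetry between key-1 and key-2 can be illustrated with textbook RSA. The sketch below uses tiny primes (p=61, q=53) so the arithmetic is visible; real systems use keys of 2048 bits or more together with padding schemes, so this is a toy illustration of the structure, not a usable implementation.

```python
# Toy RSA illustrating asymmetric confidentiality (figure 5.4).
# key-1 = (e, n) can be public; key-2 = (d, n) must be kept secret.
p, q = 61, 53
n = p * q                      # modulus 3233, part of both keys
e = 17                         # public exponent: anyone may hold key-1
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: key-2, the secret

def encrypt(plaintext: int, public=(e, n)) -> int:
    # Anybody possessing key-1 can produce the ciphertext...
    exp, mod = public
    return pow(plaintext, exp, mod)

def decrypt(ciphertext: int, private=(d, n)) -> int:
    # ...but only the holder of key-2 can recover the plaintext.
    exp, mod = private
    return pow(ciphertext, exp, mod)

message = 65                   # plaintext must be smaller than n
assert decrypt(encrypt(message)) == message
assert encrypt(message) != message   # the ciphertext reveals nothing directly
```

The key point is that encryption requires no knowledge of entity B's secret: key-1 can be published, for instance inside B's digital certificate.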
Another form of cryptography (called symmetric encryption) requires a shared secret for confidentiality: the same secret cryptographic key is used for encryption and decryption. An advantage of symmetric cryptography is that it requires much less (roughly a thousandfold less) processing power. Thus, a common form of confidentiality is for one entity to create a random key for temporary use (called a session key) and convey it confidentially to another entity using asymmetric cryptography. Subsequently the session key can be used for symmetric encryption until it is discarded when no longer needed.
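The hybrid pattern just described can be sketched end to end. The following toy Python example wraps a random session key under textbook RSA (tiny primes, as an illustration only) and then uses that key with a makeshift symmetric stream cipher built from a hash function. The keystream construction and all names are our illustrative choices; real systems use standardized ciphers such as AES with authenticated modes.

```python
import hashlib
import secrets

# Textbook RSA parameters (p=61, q=53); illustrative only.
n, e, d = 3233, 17, 2753

def keystream(key: bytes, length: int) -> bytes:
    # Toy stream cipher: hash key||counter to stretch the session key.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Symmetric: the same operation both encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# The client generates a random session key (small enough for the toy
# modulus) and conveys it confidentially using asymmetric encryption.
session_key_int = secrets.randbelow(n - 2) + 2
wrapped = pow(session_key_int, e, n)    # sent over the network
unwrapped = pow(wrapped, d, n)          # only the server can recover it
assert unwrapped == session_key_int

# Both sides now share the cheap symmetric key for the rest of the session.
key = session_key_int.to_bytes(2, "big")
message = b"order: one book"
ciphertext = xor_cipher(message, key)
assert xor_cipher(ciphertext, key) == message
```

The expensive asymmetric operation happens once per session; every subsequent message pays only the much cheaper symmetric cost, which is the economic motivation for the hybrid design.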
Example The secure socket layer (SSL) (see section 5.4.4) can maintain confidentiality (in both directions) throughout a session with a secure Web server. After authentication of the server by the client using the server's digital certificate, the client generates a random session key and sends it confidentially to the server. Subsequently, this session key is used to ensure confidentiality for information sent in both directions using this shared secret session key with symmetric encryption. Note that SSL does not assume that the client possesses a secret initially or a digital certificate, and authenticates the server but not the client. This is a pragmatic concession—only the relatively few secure Web servers need to possess secrets and certificates.
The authentication and confidentiality protocol of this example is often combined with password authentication of the user of a client, as in the Magic Carpet and Microsoft Passport example (see section 5.4.4). This addresses two issues: equipping all users with digital certificates is a daunting problem, and passwords serve to authenticate a user rather than a computer, a desirable feature because two or more users may share a computer.
Another important social issue in distributed management applications is accountability. When entities (users, organizations, or hosts) initiate actions, trust by itself is sometimes inadequate, and it is important to hold users accountable. Society sanctions mechanisms to enforce accountability, like contracts, laws, and courts, but they work only if the technology can produce credible evidence. Nonrepudiation is a security technology that provides evidence preventing an entity from repudiating an action or information that it originated.
Example In a consumer marketplace, a merchant needs to enforce the terms and conditions of a sale, including promises of payment once goods have been shipped. If the purchaser tries to repudiate the order ("somebody else must have ordered and downloaded this software"), then the merchant can produce proof (satisfactory to a court) to collect payment.
Nonrepudiation raises two issues. First, credible evidence must be gathered that an action or information originated from an accountable entity (user, organization, or host) and not an impostor. Second, it must be established that there has been no modification of that action or information beyond the control of the accountable entity. (Otherwise, repudiation would be as simple as claiming that something was changed elsewhere.)
Nonrepudiation depends on a secret available only to the accountable entity. The accountable entity produces a digital signature that can only be produced by someone possessing that secret. To complete the protocol, the digital signature must be checked for validity, using corroboration information for the secret. If it is valid, this provides lasting evidence that the action or information was generated by the entity possessing the secret. Repudiation is still possible by claiming that the secret was lost or stolen. This requires a policy that lost or stolen secrets be reported, and in the absence of such a report the presumption is that the secret was not lost.
Example The situation is similar with credit card purchases. The purchaser can later repudiate a purchase by claiming that his or her credit card number had been stolen. If the theft had not previously been reported, then this form of repudiation may not be accepted by the credit card issuer. (Of course, credit card issuers may be required to pay even if a lost or stolen card is not reported, but that is a separate issue.)
The digital signature works much like a paper signature. Given a piece of information, the signature is a piece of data determined by a calculation performed on the information and the signing entity's secret. The same secret can be used for authentication, confidentiality, and a digital signature. The signature is validated using corroboration information for the secret included in the signer's digital certificate. The signature can be retained along with the digital certificate, as evidence that the information originated with the entity possessing the secret associated with corroboration information in the digital certificate. The credibility of this evidence depends on the credibility of the certificate authority.
Example A digital signature algorithm can be based on an asymmetric encryption algorithm (see figure 5.4). The entity that wishes to sign a plaintext is assumed to possess a secret key-1. The signature is simply the ciphertext resulting from the encryption of the plaintext. Then the plaintext, the signature, and the digital certificate are sent to the recipient, who verifies the signature by decrypting it using key-2 (obtained from the digital certificate) and comparing the result to the plaintext. If they agree, the signature must have been created by an entity possessing key-1.
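The sign-then-validate structure of this example can be sketched with the same textbook RSA numbers. One common refinement, shown below, is to sign a short digest of the message rather than the message itself; the digest reduction mod n is a toy step, and practical schemes add padding (e.g., RSA-PSS), so this illustrates only the shape of the protocol.

```python
import hashlib

# Toy RSA parameters (p=61, q=53); illustrative only.
# sign_key is the signer's secret; verify_key comes from the certificate.
n, sign_key, verify_key = 3233, 2753, 17

def digest(message: bytes) -> int:
    # Reduce the message to a short integer fingerprint. The "mod n" here
    # is a toy step forced by the tiny modulus.
    return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

def sign(message: bytes) -> int:
    # Only the possessor of sign_key can produce this value.
    return pow(digest(message), sign_key, n)

def validate(message: bytes, signature: int) -> bool:
    # Anyone holding the corroboration information (verify_key) can check it.
    return pow(signature, verify_key, n) == digest(message)

order = b"ship one book to Alice"
signature = sign(order)
assert validate(order, signature)
assert not validate(b"ship two books to Alice", signature)
```

The final assertion illustrates the integrity property discussed next: changing the signed information invalidates the signature, because recomputing a matching signature would require the secret.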
A digital signature also establishes that the signed information has not been modified since it was created. Lacking the secret, no other entity could have modified the information and signature in such a way that the latter would still validate the former.
The digital signature solves another problem: How do we verify that a digital certificate came from the certificate authority it claims, and that the certificate has not been modified since it left the authority? The certificate includes a body of information together with a digital signature created by the authority. The authenticity and integrity of the certificate can be verified by validating the signature, requiring the corroborating information for the authority's secret. Ultimately there has to be a root authority that is trusted but not substantiated by any other authority.
Example As explained earlier, the Web browser recognizes a set of certificate authorities. Provided in the browser software distribution is corroboration information for each of those certificate authorities recognized by the browser's supplier. Another approach is a hierarchy of trust, in which one authority's certificate is signed by another (higher) authority, and its certificate is signed by another (even higher) authority, and so on. In this manner, all authority can be referenced back to a root authority, such as an agency of the United Nations. Corroboration information for the root authority must be known to the user.
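The hierarchy of trust can be sketched as a chain of toy certificates, each signed by the authority above it and checked back to a root whose verification key is known in advance (as when shipped with a browser). For brevity all authorities below share one toy RSA modulus, which would be insecure in practice; the names, key values, and certificate fields are all illustrative assumptions.

```python
import hashlib

N, PHI = 3233, 3120   # toy RSA modulus (p=61, q=53), shared for brevity

def keypair(e: int):
    # Returns (public verify key, secret signing key) for exponent e.
    return e, pow(e, -1, PHI)

def digest(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % N

def make_cert(subject: str, subject_pub: int, issuer_secret: int) -> dict:
    # A certificate binds a subject's name to its public key, under the
    # issuing authority's signature.
    body = f"{subject}|{subject_pub}".encode()
    return {"body": body, "pub": subject_pub,
            "sig": pow(digest(body), issuer_secret, N)}

def check_sig(cert: dict, issuer_pub: int) -> bool:
    return pow(cert["sig"], issuer_pub, N) == digest(cert["body"])

root_pub, root_secret = keypair(17)      # root authority, trusted a priori
ca_pub, ca_secret = keypair(23)          # intermediate authority
merchant_pub, _ = keypair(7)             # end entity's public key

ca_cert = make_cert("Demons-CA", ca_pub, root_secret)
merchant_cert = make_cert("Bobs-Books", merchant_pub, ca_secret)

# Walk the chain: the merchant's certificate validates against the CA's
# key, and the CA's certificate validates against the trusted root key.
assert check_sig(merchant_cert, ca_cert["pub"])
assert check_sig(ca_cert, root_pub)
```

Every link is verified by the public key one level up; only the root's key must be distributed through some out-of-band trusted channel.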
Security is a sociotechnical system (see section 3.1.4), and all the elements of a security system must work in concert. Trusted authorities are an essential element of this system, together with all the security technologies that have been described.
After browsing the catalog awhile, Alice finds a book she wants to buy and submits her order to Bob's confidentially. However, before Bob's will accept this order, it imposes two requirements. First, it authenticates Alice using information obtained from a certificate she supplies to Bob's. (That certificate was issued by Demon's Authoritative Certificates, which Bob's accepts as a trusted authority, and Bob's of course first validates Demon's signature on that certificate.) Second, Bob's requires that Alice attach a digital signature to her order, and validates that signature using corroboration information obtained from her certificate. Bob's permanently stores the order, Alice's signature, and her certificate so that Bob's can later produce them as credible evidence that the order originated from Alice in the unlikely event that she later tries to repudiate it. Finally, Bob's ships the book and charges Alice's credit card through Bob's acquiring bank.
It may appear from this example that security places a great burden on users. In fact, these detailed protocols are encapsulated in application software and are thus largely transparent to users. Of course, users do have a solemn responsibility to avoid the accidental or deliberate divulgence of their respective secrets.
Security is only one issue distinguishing a distributed management application from one that is closed or centralized. However, security does illustrate the types of issues arising, and in particular the strong influence of distributed management on software requirements. Trust, access, administration, laws, and commercial relationships are all issues. The analysis leading to software requirements must recognize and take into account these and many other issues.
In provisioning, the opportunities for coordination of technology choices may be limited by the participation of multiple autonomous end-user organizations. New end-user organizations may even join an application during operation. Even where end-users acquire technology from the same supplier, it is difficult to avoid distinct versions of that software (see section 5.1.2). This places an additional composability burden on the participating suppliers, and is also an opportunity to expand the role of infrastructure to facilitate this composability (see chapter 7). Testing becomes much more complex in both development and provisioning, because it potentially involves multiple customers and suppliers.
At the operational stage, differences in procedures and processes across autonomous end-user organizations must be accommodated. Where administrative functions cross organizational boundaries, as with establishing and enforcing trust and access, arms-length coordination mechanisms must be supported by the application. Distributed administration may be made easier and more effective through improved tools.
Distributed management applications offer an opportunity for the software industry because they force it to reconceptualize software in a way that is much more distributed and decentralized: administered and provisioned by multiple autonomous or semiautonomous agents, and coordinated through arms-length compromise, adaptation, and negotiation rather than by centralized planning and administration. Like an evolution from a centrally planned to a market economy, these techniques can penetrate and benefit all large-scale software systems, allowing software to reach a scope and complexity that can only be imagined today.
The usual explanation for this is that the Internet arose from an academic and scholarly environment that was trusting of all users. A more accurate, second explanation is that keeping the network simple while adding capability at the endpoints makes the technology more flexible (this is called the end-to-end principle; see Saltzer, Reed, and Clark 1984). A third argument is economic: security features should be an add-on because not all users and applications desire them.
Some approaches based on the secure server can ensure that a password is not revealed outside a secure enclave. Also, it is possible to check a password using equivalent information without knowing that password directly. Thus, it is possible (but arguably not the norm) to build fairly secure authentication techniques around passwords.
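Checking a password "using equivalent information without knowing that password directly" typically means storing a salted, iterated hash rather than the password itself. The sketch below uses Python's standard PBKDF2; the function names and parameters are illustrative, and modern practice favors deliberately slow, memory-hard functions (e.g., scrypt or Argon2) for this purpose.

```python
import hashlib
import hmac
import os

def enroll(password: str):
    # Store only a random salt and the derived hash; the password itself
    # is never retained, so a stolen record does not reveal it directly.
    salt = os.urandom(16)
    derived = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, derived

def check(password: str, salt: bytes, stored: bytes) -> bool:
    # Re-derive from the attempt and compare in constant time.
    attempt = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(attempt, stored)

salt, stored = enroll("correct horse battery staple")
assert check("correct horse battery staple", salt, stored)
assert not check("incorrect guess", salt, stored)
```

The salt ensures that identical passwords yield different records, and the iteration count makes bulk guessing expensive, which is why such schemes can be "fairly secure" despite being built on passwords.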
Note that the secret is provided by and thus is known to the authority, which is responsible for destroying it or not divulging it to others. There are many complications not dealt with here, such as how to recover from the loss of a secret.
Practical and secure authentication protocols are considerably more complex than our description here, which tries to convey the essence of the idea without getting into details. There are many subtleties to consider, and designing these protocols is best left to experts.