Securing Web Services with WS-Security. Demystifying WS-Security, WS-Policy, SAML, XML Signature, and XML Encryption
Authors: Rosenberg J., Remy D.
Published year: 2004
Public Key Technologies
Public key technologies, including public key encryption and digital signatures, are your tools for delivering integrity, non-repudiation, and authentication to XML messages and, more generally, to Web services. The following sections begin with an explanation of the concepts behind public key encryption. We then build on that foundation to explain digital signatures and apply these concepts specifically to digital signatures in XML. A discussion of public key technologies is not complete without a description of public key infrastructure and the issues of establishing trust, which are covered at the end of this section.
Public Key Encryption
Public key encryption is also referred to as asymmetric encryption because there is not just one key used in both directions, as with the symmetric form of encryption used in shared key algorithms. In public key encryption, there is a matched set of two keys; whichever one is used to encrypt requires the other be used to decrypt. In this book, we use the term public key encryption to help establish context and contrast it with shared key encryption.
The two keys in public key encryption are different from each other, but they are mathematically related. Either key can be used for encryption; once it is, that same key is useless for decryption, and only its mate can decrypt. This property provides the critical facility you need for secure key exchange: a way to establish and transport a shared key.
The diagram from Chapter 1 that shows how basic public key encryption works is reproduced here in Figure 3.3.
Figure 3.3. Public key encryption uses one key to encrypt the incoming plaintext, and only the other key can be used to successfully decrypt that ciphertext back into its original plaintext.
Although it is true that Kerberos provides a mechanism for distributing shared keys, Kerberos applies only to a closed environment where all principals requiring keys share direct access to trusted Key Distribution Centers (KDCs) and all principals share a key with that KDC. Thus, Kerberos does not provide Web services with a general mechanism for shared key distribution; public key systems without the restriction of working only in closed environments take that role. Public key systems work with paired keys, one of which (the private key) is kept strictly private and the other (the public key) is freely distributed; in particular, the public key is made broadly accessible to the other party in secure communications.
Parties requiring bidirectional, confidential communication must each generate a pair of keys (four keys in total). One of the keys, the private key, will never leave the possession of its respective creator. Each party to the communication passes his public key to the other party. The associated public key encryption algorithms are pure mathematical magic because whatever is encrypted with one half of the key pair can only be decrypted with its mate. Figure 3.4 shows how the key pairs are each used for one direction of the confidential message exchange.
Figure 3.4. Two key pairs are involved in bidirectional, confidential message exchange. One pair is used for each direction. The sender always uses the recipient's public key for encryption, and the recipient always uses his own private key for decryption.
For Alice to send a confidential message to Bob, Alice must obtain Bob's public key. That's easy because anyone can have Bob's public key at no risk to Bob; it is just for encrypting data. Alice takes Bob's public key and provides it to the standard encryption algorithm to encrypt her message to Bob. Because of the nature of the public-private key pair and the fact that Alice and Bob agree on a public, standard encryption algorithm (such as RSA), Bob can use his private key to decrypt Alice's message. Most importantly, only Bob can decrypt Alice's message, because no one else will ever get hold of Bob's private key. Alice has just sent Bob a confidential message. Any outsiders intercepting it will see useless scrambled data because they don't have Bob's private key.
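The arithmetic behind this exchange can be sketched with textbook RSA and deliberately tiny primes. This is illustrative only; real keys use primes hundreds of digits long, and real systems add padding and use vetted libraries.

```python
# Textbook RSA with toy primes -- for illustration only, never for real use.
p, q = 61, 53
n = p * q                  # public modulus (part of both keys)
phi = (p - 1) * (q - 1)    # Euler's totient of n
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent: modular inverse of e mod phi

message = 42                        # must be numerically less than n
ciphertext = pow(message, e, n)     # Alice encrypts with Bob's PUBLIC key
plaintext = pow(ciphertext, d, n)   # Bob decrypts with his PRIVATE key

assert plaintext == message         # only the matching private key recovers it
```

Anyone can run the encryption step with the public pair (e, n); only the holder of d can reverse it.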
We will describe digital signatures in a moment, but first notice something interesting about reversing Alice's confidential exchange. If Alice encrypts a message with her private key, which only Alice possesses, and if Alice makes sure Bob has her public key, Bob can see that Alice, and only Alice, could have encrypted that message. In fact, because Alice's public key is, in theory, accessible to the entire world, anyone can tell that Alice and only Alice encrypted it. The identity of the sender has been established. That is the principle behind digital signatures.
The simple fact that in public key cryptography, whatever is encrypted with one half of the key pair can only be decrypted with its mate, combined with the strict rule that private keys remain private and only public keys can be distributed, leads to a very interesting and powerful matrix of how public key encryption interrelates to confidentiality and identity. This matrix is shown in Table 3.1.
Table 3.1. The Matrix of Uses for Key Pairs in Public Key Cryptography Showing What Security Principle Applies and What Part of Web Services Security It Affects
Public key encryption (RSA in particular) is based on the mathematics of factoring large numbers into their prime factors, a problem thought to be computationally intractable if the numbers are large enough. But a limitation of public key encryption is that it can be applied only to small messages. For the goal of distributing shared keys, this is no problem; those keys fall well within the message size limitation of public key algorithms. For the goal of digitally signing arbitrarily large messages, you can apply a neat trick to stay within this size limitation, as discussed next.
Limitations of Public Key Encryption
Even when implemented in hardware, shared key algorithms are many orders of magnitude faster than public key algorithms. For instance, in hardware, RSA is about 1,000 times slower than DES.
The first performance hit comes from key generation. You must find two multi-hundred-bit prime numbers of roughly the same length. Each candidate must be tested for primality, a very expensive operation consisting of a series of probabilistic steps, each of which can detect only with some probability that the candidate is composite. These steps must be run several times to drive the risk of accepting a non-prime down to an acceptably infinitesimal level.
The second reason that public key encryption is so much slower than shared key algorithms is that RSA encryption/decryption is based on the mathematics of modular exponentiation. This means you take each input value, raise it to a power (requiring a large number of multiplications), and then perform the modulo operation (the remainder after integer division). Shared key ciphers, by contrast, are based on much faster logical operations on bit arrays. Public key algorithms are called asymmetric for a reason: Because the private key has a much larger exponent than the public key, private key operations take substantially longer than public key operations. In confidentiality applications, where the public key is used for encryption, decryption therefore takes substantially longer than encryption. In digital signatures, where the private key is used to encrypt the message digest, it is the other way around: Signing is the slow operation, and verification is fast. This imbalance would be a problem if applied to large messages but is not an issue for small ones, such as the 200-bit key being exchanged to enable shared key encryption or the 20-byte message digests used in digital signatures, which will be discussed shortly.
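The cost of modular exponentiation can be sketched with the classic square-and-multiply algorithm, which does one squaring per bit of the exponent plus one multiplication per set bit. This is why the size of the exponent drives the running time (Python's built-in pow(base, exp, mod) implements the same idea far more efficiently).

```python
def modexp(base: int, exp: int, mod: int) -> int:
    """Square-and-multiply modular exponentiation: work grows with
    the bit length of exp, which is why private-key operations
    (large exponent d) cost more than public-key ones (small e)."""
    result = 1
    base %= mod
    while exp:
        if exp & 1:                     # multiply step for each set bit
            result = (result * base) % mod
        base = (base * base) % mod      # square step for each bit
        exp >>= 1
    return result

# Matches Python's built-in three-argument pow
assert modexp(42, 65537, 3233) == pow(42, 65537, 3233)
```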
The third reason to be concerned about the computational complexity of public key encryption is the padding issue. The input to RSA encryption operations is interpreted as a number, so special padding is required to make the input consistent. Data is processed in blocks no larger than the modulus: A 1,024-bit RSA key has a 128-byte modulus, so data is encrypted in blocks of up to 128 bytes, and each block, interpreted as a number, must be numerically less than the modulus. Padding consumes part of each block, and this scheme places a critical restriction on the size of data that RSA can encrypt. This is why RSA is never used to encrypt an entire plaintext message but only the shared key being exchanged between communicating parties. After the shared key is established safely between the parties, RSA (public key) encryption is no longer used; instead, AES (shared key) encryption is applied to the plaintext message itself.
Digital Signature Basics
Digital signature is the tool you use to achieve the information security principle of integrity. Creating one involves a one-way mathematical function called hashing followed by public key encryption. The basic idea is to use a hash function to create a message digest of short, fixed length and then to encrypt this short digest. A message digest is a short representation (usually 20 bytes) of the full message. You need this step because, as you have just seen, public key encryption is slow and limited in the size of message it can encrypt. A hash is a one-way mathematical function that creates a fixed-size message digest from an arbitrary-size text message. One-way means that you can never take the hash value and re-create the original message. Hash functions are designed to be very fast and to make it infeasible for two different messages to produce the same result value (they avoid collisions). This uniqueness is critical: An attacker must never be able to replace one message with another and have the message digest come out the same, because that would ruin the goal of providing message integrity through digital signatures.
You know that public key encryption works only on small-size messages. You also know that if Alice encrypts a small message with her private key and sends the message to Bob, Bob can use Alice's public key to prove that the message could only have come from Alice (as long as you are sure she protected her private key). This identification of the sender is one half of what digital signature is all about. The other half relates to obtaining the goal of verifying the integrity of the message. By integrity, we mean that you can tell whether the message has changed by even one bit since it was sent. The key to integrity in the digital signature design is the use of a hash function to create a message digest.
Hashing the Message to Create a Message Digest
A hash function creates a message digest that can be used as a proxy for the original message. You want this function to be very fast because, as you will see, you need to run it on both the sending and verifying ends of a communication. Most importantly, it must be computationally infeasible for two messages to produce the same output hash value; if two messages could create the same message digest, someone could substitute a new message for the original and fool the recipient into thinking the fraudulent message is the correct one. For integrity's sake, it must also be infeasible to reverse the function and regenerate the original message from the small output value (that is, it must truly be a one-way function).
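These properties are easy to observe with SHA1 from Python's standard hashlib module (the messages here are made up for illustration):

```python
import hashlib

# SHA1 maps a message of any length to a fixed 20-byte digest;
# changing even one character of input yields a completely different digest.
d1 = hashlib.sha1(b"Transfer $100 to Bob").hexdigest()
d2 = hashlib.sha1(b"Transfer $900 to Bob").hexdigest()

assert len(bytes.fromhex(d1)) == 20   # always 20 bytes, regardless of input size
assert d1 != d2                       # small input change, unrelated digest
```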
Functions that avoid duplicate values are harder to design than you might expect. The birthday paradox demonstrates this point: With any random 23 people in a room, the chance that two have the same birthday is 50%. Think of these people as representing 23 different plaintext messages. If a hash function applied to just 23 messages had a 50% chance that two of them would produce the same message digest, you would never accept that hash function for delivering integrity.
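The birthday arithmetic is easy to check directly: The probability that n people all have distinct birthdays is the product (365/365)(364/365)...((365-n+1)/365), and the collision probability is its complement.

```python
def birthday_collision_prob(n: int, days: int = 365) -> float:
    """Probability that among n people at least two share a birthday."""
    p_unique = 1.0
    for k in range(n):
        p_unique *= (days - k) / days   # k-th person avoids the first k birthdays
    return 1.0 - p_unique

assert birthday_collision_prob(23) > 0.5   # crosses 50% at exactly 23 people
assert birthday_collision_prob(22) < 0.5
```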
Several one-way hash functions with excellent collision avoidance properties have been designed and deployed, including MD4, MD5, and SHA1. Weaknesses have been found in the first two, and currently most security systems, and all the standards for Web services security, use SHA1. (Interestingly, SHA1 is a replacement for SHA, correcting a flaw in the original SHA algorithm.)
SHA1 stands for Secure Hash Algorithm 1, the 1 marking it as the first (and implying there may be a need for variants in the future). NIST and the NSA designed the algorithm for use with the Digital Signature Algorithm (DSA).
SHA1 meets all the requirements for a secure one-way hash algorithm. In particular, it is not reversible. A 20-byte (160-bit) hash gives an attacker only a 1 in 2^160 chance of guessing a message that matches it.
By way of scaling how unlikely a 1 in 2^160 chance is:
On a computer running 1 billion hashes per second (still beyond computing capacity today), brute-force guessing of a 20-byte hash would take on the order of 10^22 billion years, which is far more than the lifetime of the universe.
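A quick back-of-the-envelope check of that claim:

```python
# Rough scale of brute-forcing a 160-bit digest at 10^9 hashes per second.
hashes = 2 ** 160                          # ~1.46e48 candidate guesses
seconds = hashes / 1e9                     # at one billion hashes per second
years = seconds / (365 * 24 * 3600)

# Vastly more than the ~1.4e10-year age of the universe
assert years > 1e31
```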
If you hash the entire plaintext message and then protect the resulting message digest from modification, and if the sender and receiver use the exact same input message and hash algorithm, the recipient can verify the integrity of the message without the huge expense of encrypting the entire message. So now let's discuss how to protect the message digest.
Public Key Encryption of the Message Digest
Protecting the message digest simply involves encrypting it with the private key of the sender and sending the original message along with the encrypted message digest to the recipient. Public key encryption of the message digest provides non-repudiation because only the holder of the private key that performed the encryption could have originated the message. Protecting the message digest by encrypting it so that no man-in-the-middle attacker could modify it provides message integrity.
Digital Signature Signing Process
You are now ready to put all this information together and create a digital signature. You will combine the hashing function (SHA1) used to convert the plaintext message into a fixed-size message digest with public key encryption (RSA or DSA) of that message digest to create a digital signature you can send to the selected recipient. The basic steps for creating a digital signature are as follows:
1. Compute the message digest of the plaintext message using the hash algorithm (SHA1).
2. Encrypt the message digest with the sender's private key; the result is the digital signature.
3. Send the original plaintext message along with the digital signature to the recipient.
These steps are shown in Figure 3.5, where the hash algorithm being used is SHA1.
Figure 3.5. Digital signature signing process.
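The signing process can be sketched by combining SHA1 with a textbook RSA key pair built from tiny primes (p=61, q=53; illustrative only). Because this toy modulus is far smaller than a real 1,024-bit one, the digest must be reduced modulo n here, which a real implementation would never do; a real modulus easily holds a padded 20-byte digest.

```python
import hashlib

# Toy RSA parameters (from primes 61 and 53) -- illustration only.
n, e, d = 3233, 17, 2753   # modulus, public exponent, private exponent

def sign(message: bytes) -> int:
    digest = hashlib.sha1(message).digest()   # step 1: hash the message
    m = int.from_bytes(digest, "big") % n     # shrink digest to fit the toy modulus
    return pow(m, d, n)                       # step 2: encrypt with the PRIVATE key

signature = sign(b"Pay Bob $100")             # step 3: send message + signature
```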
Digital Signature Verification Process
Signature verification is the process the message recipient goes through to determine the identity of the sender and to determine that the message arrived intact and unaltered. In other words, signature verification is necessary to achieve the security principles of message integrity and non-repudiation. The steps in signature verification are as follows:
1. Compute the message digest of the received message using the same hash algorithm (SHA1) the sender used.
2. Decrypt the received signature with the sender's public key to recover the original message digest.
3. Compare the two digests; if they match, the message is intact and could only have been signed by the holder of the matching private key.
The verification process just outlined is shown in Figure 3.6.
Figure 3.6. Digital signature verification process.
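Verification mirrors signing: Recompute the digest and compare it with the digest recovered by decrypting the signature with the signer's public key. The sketch below uses the same toy textbook RSA parameters as before (n=3233, e=17, d=2753, from primes 61 and 53; the mod-n reduction of the digest is an artifact of the tiny modulus and purely illustrative).

```python
import hashlib

n, e, d = 3233, 17, 2753   # toy RSA parameters -- illustration only

def verify(message: bytes, signature: int) -> bool:
    expected = int.from_bytes(hashlib.sha1(message).digest(), "big") % n
    recovered = pow(signature, e, n)   # decrypt with the PUBLIC key
    return recovered == expected       # match => intact and signed by key holder

# A signature produced with the matching private key verifies successfully.
sig = pow(int.from_bytes(hashlib.sha1(b"Pay Bob $100").digest(), "big") % n, d, n)
assert verify(b"Pay Bob $100", sig)
```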
You now know for sure that the unique private key that matches this public key is the one that encrypted the message digest, which tells you the identity of the signer if you are certain she has protected her private key, all of which gives you non-repudiation. You also know that the message was sent unaltered, so you have integrity. What you still need, and will be discussed in a few moments, is assurance that you know the identity of the owner of the public key you just used.
RSA is the most commonly accepted digital signature algorithm used in Web services security, although officially DSA is also allowed.
Integrity Without Non-Repudiation
When non-repudiation is not a goal, a very different approach to verifying message integrity, called a Message Authentication Code (MAC), is used. This approach is like creating a cryptographic checksum of a message. The MAC class of algorithms provides pure message integrity protection based on a secret shared key. To remove any limit on the size of message a MAC can operate on, Web Services Security combines a MAC with a hash, so the acronym becomes HMAC (Hashed MAC). Figure 3.7 shows how an HMAC functions.
Figure 3.7. The Hashed Message Authentication Code (HMAC) algorithm.
Think of an HMAC as a key-dependent one-way hash function: Only someone with the identical key can verify the hash. Hashing is a very fast operation, so these functions are useful for guaranteeing message authenticity when secrecy and non-repudiation are not important but speed is. They differ from a straight hash in that the digest is computed with, and protected by, a secret key. The algorithm is symmetric: The sender and recipient possess a shared key. You will see HMAC again in Chapter 4, "Safeguarding the Identity and Integrity of XML Messages," as part of the XML Signature discussion.
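Python's standard hmac module shows the symmetric nature of the scheme (the key and messages here are made up for illustration):

```python
import hashlib
import hmac

key = b"shared-secret"            # known to both sender and recipient
msg = b"order: 100 widgets"

# Sender computes an HMAC-SHA1 over the message with the shared key.
mac = hmac.new(key, msg, hashlib.sha1).hexdigest()

# Recipient recomputes with the same key and compares in constant time.
assert hmac.compare_digest(mac, hmac.new(key, msg, hashlib.sha1).hexdigest())

# Without the shared key, a forger cannot produce a matching code.
assert mac != hmac.new(b"wrong-key", msg, hashlib.sha1).hexdigest()
```

Because both parties hold the same key, either could have produced the MAC; this is why HMAC gives integrity but not non-repudiation.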
A Digital Signature Expressed in XML
Web services messages are XML-based. Message-level security in Web services requires that you apply digital signatures in an XML setting. Therefore, you need a digital signature expressed in XML. That is the simplest description of what the XML Signature standard is. XML Signature was designed to have a great deal of flexibility. An XML Signature can be placed inside an XML document, or it can refer to external elements that need to be signed. For example, an XML Signature could be applied just to a credit card number inside the XML document, or it could be applied to a complete personal medical history. It is also important to be able to sign external elements like online resources such as Web pages to prevent defacement. XML Signature supports that option as well.
The structure of an XML Signature is shown in Listing 3.1:
Listing 3.1. The Structure of an XML Signature
<Signature>
  <SignedInfo>
    (CanonicalizationMethod)
    (SignatureMethod)
    (<Reference (URI=)?>
      (Transforms)?
      (DigestMethod)
      (DigestValue)
    </Reference>)+
  </SignedInfo>
  (SignatureValue)
  (KeyInfo)?
  (Object)*
</Signature>
We expand on this structure and explore the full richness of XML Signature in Chapter 4.
Public Key Infrastructure
As you saw earlier, digital signature verification requires that the recipient obtain the sender's public key. We were not specific about how that happens. This step is critically important to establishing trust between sender and recipient. The recipient needs assurance that the public key really belongs to the sender and that the sender has maintained custody and sanctity of his private key. This is the domain of Public Key Infrastructure (PKI). PKI encompasses certificates, certificate authorities, and trust.
In all our discussions of public key encryption and its application to digital signatures, we oversimplified to the extreme when we said the public key is just sent to a recipient. In fact, the key alone is not enough. You need more than just the public key itself if the public key is from someone you don't know well. You need identity information associated with the public key. You also need a way to know whether someone you trust has verified this identity so that you can trust this entire transaction. Trust is what PKI is all about.
Digital Certificates Are Containers for Public Keys
A digital certificate is a data structure that contains identity information along with an individual's public key and is signed by a certificate authority (CA). The official designation of the standard digital certificates with which we will be dealing is X.509. By signing the certificate, the CA is vouching for the identity of the individual described in it, so that relying parties (message recipients) can trust the public key the certificate contains.
Bob, the relying party, must be certain that this is really Alice's key. He ensures this by checking the identity of the CA that signed the certificate (how he comes to trust the CA in the first place we will get to in a moment) and by verifying both the identity and integrity of the certificate through the CA's attached signature. A validity date included in X.509 certificates helps guard against compromised (or out-of-date and invalid) keys.
The X.509 digital certificate trust model is a very general one. Each subject identity has a distinct name. The subject must be certified by the CA using some well-defined certification process, which the CA must describe and publish in a Certification Practice Statement (CPS). The CA assigns a unique distinguished name to each user and issues a signed certificate containing the name and the user's public key.
Version 3 of the X.509 standard specifies a certificate structure including certificate extensions. The v3 certificate extensions enable extra functionality. For example, they allow incorporating authorization information in the certificate, providing the possibility of defining special authorization certificates. However, there is no guarantee of uniformity in the use of extensions, which can lead to interoperability problems.
The most important fields in the X.509 structure include the version, serial number, signature algorithm identifier, issuer name, validity period, subject (distinguished) name, and the subject's public key information.
The issuing certificate authority always signs the certificate. Figure 3.8 shows how a digital certificate is represented by the Windows XP operating system.
Figure 3.8. Certificate display screenshot from Windows XP.
Certificate Authorities Issue (and Sign) Digital Certificates
The CA signs the certificate with a standard digital signature using the private key of the CA. Like any digital signature, it allows anyone with the CA's matching public key to verify that this certificate was indeed signed by the CA and is not fraudulent. The signature is an encrypted hash (called the thumbprint) of the contents of the certificate, so standard signature verification guarantees integrity of the certificate data as well. That, in turn, allows you to believe the information contained in the certificate. Of course, what you are really after is trust in the validity of the Subject's (that is, the sender's/signer's) public key contained in the certificate.
So far, so good, provided you trust the CA that signed this certificate.
The entire world may rely on such a signature, so you can be sure the CA goes to extraordinary lengths to protect its private key, including armed guards, copper-clad enclosures, and special hardware protecting the private key.
Typical trusted CAs really do have armed guards protecting their buildings and areas called man traps that protect entry to the rooms in which the keys are contained. Two people must enter at once or not at all. The room contains a copper-clad enclosure, called a tempest room, that prevents stray electromagnetic radiation from emanating where it could be picked up by an intruder. Inside this enclosure are the hardware devices that contain the keys. Changes in temperature or any attempt to move these devices causes them to self-destruct (yes, really self-destruct). Re-creating a destroyed key requires five or more separate individuals, each taking their own special hardware card and combining them all under the observation of a trained auditor. The point is that these private keys are well protected.
The public key of the CA is typically very widely distributed. In fact, the public keys for SSL certificates (X.509 certificates issued to organizations for their Web sites) are found in all Web servers and all Web browsers to make sure relying parties can always verify certificates signed by those CAs.
The key to trusting the signed certificate is the process the CA used to verify the identity of the subject prior to issuing the certificate. It might be based on individuals being employees. It might require that they produce a driver's license. Or it might be that they must correctly answer a set of shared secret questions drawn automatically from databases that know about most individuals, such as those of the telephone company, driver's license bureau, or credit bureau. In extreme cases in which no doubt is tolerable (national security, for example), a blood or DNA sample might have to be produced.
You can think of the CA as a digital notary. An individual's identity is based on the assurance (honesty) of the notary. A certificate policy specifies the levels of assurance the CA has to provide, and the CPS specifies the mechanisms and procedures to be used to achieve a level of assurance. Development of the CPS is the most time-consuming and essential component of establishing a CA. The planning and development of the certificate policies and procedures require the definition of requirements, such as key escrow, and processes, such as certificate revocation, which are covered later in this chapter.
A CA may be the guy down the hall, the HR department of your company, a local external company, a public CA, or the government.
CAs Must Be Trusted or Vouched For by a Trusted CA
If the CA that signed the certificate is not known to or not trusted by the relying party, that CA must itself be vouched for by a more trusted CA that satisfies the relying party. This certificate chain continues until it reaches a trusted CA or ends at a root certificate. In a root certificate, the issuer of the certificate is also the subject of that certificate (that is, the certificate is self-signed); it's a dead end. If the relying party does not trust the root CA, that party is out of luck and will have to reject the transaction being requested. If you are using a certificate issued by GeoTrust, you might have two certificates in a certificate chain, as shown in Figure 3.9.
Figure 3.9. Screenshot showing a certification path. The certificate issued to www.geotrust.com is linked back to a certificate issued to (and by) Equifax Secure, which is self-signed.
Two certificates are involved in this example. One certificate is issued directly to www.geotrust.com: www.geotrust.com is the subject, and Equifax Secure is the issuer. The second certificate is the Equifax Secure root certificate, which is self-signed: The subject is Equifax, and the issuer is Equifax. Typically, you have a "trust store" or "trust list" in some database (it exists in Web servers, Web browsers, and in many cases the operating system itself, and for Web services it will be placed in additional accessible places) containing the certificates of the certificate issuers you are willing to trust. Through a process called certificate path validation, an attempt is made to create a "path" of valid, non-revoked certificates to one of the defined trusted certificate issuers in your trust list. This process can become quite complex; hence, one of the goals of the XML Key Management Specification (XKMS), discussed in Chapter 9, "Trust, Access Control, and Rights for Web Services," is to offload this complexity to a "trust engine" Web service that will handle validation for other Web services.
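The chain-walking idea at the heart of path validation can be sketched with hypothetical certificate records keyed by subject name. Real path validation also checks each signature, validity dates, and revocation status; this sketch covers only the issuer-following step.

```python
# Hypothetical certificate records: subject -> issuer (illustration only).
certs = {
    "www.geotrust.com": {"issuer": "Equifax Secure"},
    "Equifax Secure":   {"issuer": "Equifax Secure"},   # self-signed root
}
trust_store = {"Equifax Secure"}   # issuers the relying party trusts

def path_to_trust_anchor(subject, certs, trust_store, max_depth=10):
    """Follow issuer links until a trusted issuer is reached.
    Returns the path of subject names, or None if no anchor is found."""
    path = [subject]
    for _ in range(max_depth):
        issuer = certs[subject]["issuer"]
        if issuer in trust_store:
            return path if issuer == subject else path + [issuer]
        if issuer == subject:          # untrusted self-signed root: dead end
            return None
        subject = issuer
        path.append(subject)
    return None                        # chain too long

path = path_to_trust_anchor("www.geotrust.com", certs, trust_store)
```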
A reason for this complexity, and for the long discussion of it here, is the expectation that there will eventually be hundreds or thousands of registration authorities. Registration authorities (RAs) are the organizations that establish and maintain the identities of individuals known to them. Company HR departments, universities, retailers, and hundreds of other types of organizations either already are or plan to become registration authorities. CAs are registration authorities themselves, or they can accept input from RAs and just be certificate issuers. Given this explosion in the number of RAs/CAs, it becomes quite clear why certification chains become complex and important in establishing trust.
If the relying party does not have the public key of the CA that signed the presented certificate, that party must acquire it from someone further up the trust hierarchy who vouches for that CA. This concept is shown in the diagram in Figure 3.10. Here, relying parties are end-entities (EEs) that are presented with certificates from sub-CAs. Because the sub-CAs are not known or trusted by the EEs, the EEs require the certificate of the CA that signed the sub-CAs' certificates. These CAs further up the trust hierarchy vouch for the sub-CAs. The root CAs are the trust anchors for the EEs, allowing them to trust the certificates from the sub-CAs.
Figure 3.10. Certificate authority trust hierarchy.
Root CAs Are Trusted by Everyone
There are not many root CAs because, for their certificates to be understood by your tools, the public key for their self-signed certificates must already be accessible to those tools. For the most common kinds of certificates (those used for SSL), this has been accomplished by embedding the root keys for the root CAs right in the browser. In fact, browsers typically carry dozens of root keys and another batch of intermediate CA keys as well. A portion of the pre-installed roots on Windows XP is shown in Figure 3.12.
Figure 3.12. Pre-installed roots in Windows XP.
With Web services, it is not the browser having the keys that matters. What matters are the nodes or termination points of the Web service, where signature validation or decryption must occur. As a Web service is deployed, a crucial step is making sure the appropriate public keys of all CAs whose certificates may be seen by that Web service are installed at all endpoint servers.
Key Escrow for Recovering Lost Private Keys
Key escrow is a very important and controversial aspect of PKI. This technology concerns the storage and retrieval of private keys so that data can be recovered in the absence of the private key owner. Key escrow goes against the very idea of a private key: The key may ultimately be accessed by someone other than its owner, which weakens the case for non-repudiation. Key escrow is often considered a necessary evil because critical information may be encrypted with a private key, and loss of the key, death of its owner, or some sort of fraud would otherwise make that information forever irretrievable. Requirements for key escrow/recovery systems may come from customer support or from legal or policy requirements. International PKI implementations may require key escrow to comply with government and law enforcement restrictions.
The concept of a private key is crucial to the effectiveness of PKI because so many downstream PKI concepts depend heavily on the assumption that the private key is never compromised. And yet, humans are fallible and do indeed lose keys. If a critical private key is lost, there truly is no way to decrypt any data its matching public key encrypted. That data is lost forever, even to the CIA, NSA, or FBI. The risk of losing that data is too high for many reasons, which is what drives the need for a key recovery scheme. There are other reasons for a server-based key escrow scheme as well.
Key escrow may be an important consideration in Web service deployments. Critical information flowing through the Web services infrastructure may have been encrypted, and the governing organization, although it wants to maintain the confidentiality of this data, cannot afford to lose access to it forever. If the private keys used to encrypt it are lost, that is what would happen.
When an enterprise entrusts employees or agents with confidential data, this data is at risk of being lost forever if an employee loses her private key. Therefore, the employer may want a way to recover the private key in that eventuality. If an employee uses her key to encrypt email messages and then violates company policy, or if law enforcement subpoenas the email, key recovery is needed to decrypt the email. Finally, if an employee leaves the company, he may not cooperate in returning the private key, and that key may need to be recovered.
Certificate Revocation for Dealing with Public Keys Gone Bad
Key escrow is an optional feature of PKI, but certificate revocation is not. Revocation is an essential part of the certificate process to establish and maintain trust. Authentication of clients and servers requires a way to verify each certificate within the chain, as well as a way to determine whether a certificate is valid or revoked. A certificate could be revoked if a key is compromised or lost or as a result of modification of privileges, misuse, or employment termination. It is essential, especially for Web services, that near real-time revocation of certificates be achieved.
Currently, two technologies are used for revocation: certificate revocation lists (CRLs) and online certificate status protocol (OCSP).
CRL Certificate Revocation Checking
The CRL is an up-to-date list of all revoked certificates; a CA must keep this list accessible to relying parties. It goes without saying that the CA must make it easy for registration authorities to revoke any given certificate (while proving that they have the right to do so). With CRLs, relying parties have the burden of checking this list each time a certificate is presented. Best practices call for a CRL distribution point (CDP) URL to be embedded in the certificate. A CDP is a pointer to the location of the CRL on the Internet, accessible programmatically by any relying party's applications.
CRLs are usually updated once per day because the process of generating them is non-trivial and time-consuming. When an organization is dealing with a compromised key or a rogue employee, once-per-day updates can mean a huge loss during the day the compromise occurred, especially when interactions are automated with Web services. Currently, almost no one checks revocation lists. Although CRLs are created obediently by the sponsoring CAs and numerous tools can and do process them, there are so many unsolved problems with them that, in our view, CRLs on the Internet are a technological failure.
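The staleness problem can be made concrete with a toy relying-party cache. The sketch below (our own illustration, not a real X.509 CRL parser; a real CRL is a CA-signed structure, and `fetch_crl` here is a hypothetical stand-in for downloading it from the CDP URL) shows why a certificate revoked today can keep passing checks until the next daily refresh:

```python
import time

class CrlCache:
    """Toy relying-party CRL cache. The "CRL" here is just a set of
    revoked serial numbers plus the time it was fetched, to show the
    staleness window created by daily CRL issuance."""

    MAX_AGE_SECONDS = 24 * 60 * 60  # CRLs are typically reissued daily

    def __init__(self, fetch_crl):
        self._fetch_crl = fetch_crl   # callable returning a set of serials
        self._revoked = set()
        self._fetched_at = 0.0        # forces a fetch on first use

    def is_revoked(self, serial: int) -> bool:
        # Refresh the cached list only when it is older than one
        # issuance period; anything revoked in between is missed.
        if time.time() - self._fetched_at > self.MAX_AGE_SECONDS:
            self._revoked = self._fetch_crl()
            self._fetched_at = time.time()
        return serial in self._revoked

# Example: the CA "publishes" a CRL containing serials 1002 and 1007.
cache = CrlCache(fetch_crl=lambda: {1002, 1007})
print(cache.is_revoked(1001))  # False: still trusted
print(cache.is_revoked(1002))  # True: on the revocation list
```

A serial revoked moments after the fetch would still report `False` for up to a full day, which is exactly the exposure described above.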
OCSP Certificate Revocation Checking
OCSP was an attempt to create a much finer-grained protocol for essentially real-time revocation checking. But as with CRLs, the trust information returned must be signed by the originating CA, which is an expensive operation to perform in real time. In our tests, the best case on an unloaded system of moderate speed was a 26ms response time for a single OCSP request. In our view, this makes standard OCSP so limited in scope that it will continue to be only a bit player in revocation solutions.
In our opinion, there is much promise in emerging techniques that are scalable and respond in microseconds and still conform to the OCSP standard without requiring time-consuming digital signatures on each request. One such approach is Silvio Micali's technique based on chains of hashed secret codes. Web services will require such a high-speed revocation system. For details, see the section on Silvio Micali's High-Speed Validation/Revocation in Appendix A.
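The core idea of Micali's scheme can be sketched in a few lines of pure Python. The sketch below is our simplified illustration (function names and the seed value are ours): at issuance the CA embeds the tip of a hash chain in the certificate; each day the certificate remains valid, the CA releases the next preimage, and a relying party verifies it with nothing but fast hashing, with no per-request signature:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def hash_chain(seed: bytes, n: int) -> bytes:
    """Apply h() n times: returns h^n(seed)."""
    out = seed
    for _ in range(n):
        out = h(out)
    return out

# --- CA side, at certificate issuance -------------------------------
DAYS = 365
x0 = b"ca-secret-validity-seed"          # kept secret by the CA
validity_target = hash_chain(x0, DAYS)   # embedded in the certificate

def daily_proof(day: int) -> bytes:
    """Token the CA publishes on `day` if the cert is still valid.
    Hashing it `day` more times must land on the embedded target."""
    return hash_chain(x0, DAYS - day)

# --- Relying-party side ---------------------------------------------
def verify(proof: bytes, day: int, target: bytes) -> bool:
    # Only hashing is needed: no CA signature to check per request.
    return hash_chain(proof, day) == target

print(verify(daily_proof(37), 37, validity_target))  # True
print(verify(b"forged", 37, validity_target))        # False
```

Because SHA-256 is one-way, nobody but the CA can produce tomorrow's preimage, yet any relying party can check today's in microseconds; this is what makes the approach attractive for high-volume Web services.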
What this discussion about CAs, trust chains, cross-certification, key escrow, and certificate revocation leads to is an overriding need for Web services that offer trust services to other Web services. The developer of a Web service does not want to be burdened with all the issues we have just outlined. Furthermore, companies will consider how these issues are handled as core corporate security policies and will want consistent, reliable, and centralized administration of these policies.
Having trust services available as Web services means developers can avoid writing key management and signature processes in every application. All the critical trust services will be encapsulated into a reusable service. Trust services must deal with the complexities of PKI, keys, signatures, encryption, and the like. Trust services are Web services that provide these and other security services for any application that needs them. Because they are just Web services themselves, they are accessed through SOAP messages. With this approach, special PKI client code does not need to be deployed any longer.
Key Management Services
The first trust service needed is one that manages the registration, distribution, and life cycle of public keys. Like any Web service, it needs to be a service that has a WSDL and is accessed via SOAP.
The W3C XML Key Management Specification (XKMS) specifies protocols for distributing and registering public keys suitable for use in conjunction with XML Signature and XML Encryption. See Chapter 9 for details on XKMS.
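As a taste of what an XKMS exchange looks like, the sketch below builds a bare `LocateRequest` asking a trust service for the public key bound to an S/MIME identity. The element names and namespace follow the W3C XKMS specification as we understand it, but the service URL and identity are hypothetical, and the surrounding SOAP envelope is omitted:

```python
import uuid
import xml.etree.ElementTree as ET

XKMS_NS = "http://www.w3.org/2002/03/xkms#"
ET.register_namespace("xkms", XKMS_NS)

def build_locate_request(email: str) -> bytes:
    """Build a minimal XKMS LocateRequest (SOAP envelope omitted)."""
    req = ET.Element(
        f"{{{XKMS_NS}}}LocateRequest",
        {"Id": f"I{uuid.uuid4().hex}",
         "Service": "http://trust.example.com/xkms"})  # hypothetical URL
    # Ask the service to respond with the raw key value.
    respond = ET.SubElement(req, f"{{{XKMS_NS}}}RespondWith")
    respond.text = f"{XKMS_NS}KeyValue"
    # Bind the query to an S/MIME email identity.
    binding = ET.SubElement(req, f"{{{XKMS_NS}}}QueryKeyBinding")
    ET.SubElement(binding, f"{{{XKMS_NS}}}UseKeyWith",
                  {"Application": "urn:ietf:rfc:2633",  # S/MIME
                   "Identifier": email})
    return ET.tostring(req)

xml_bytes = build_locate_request("alice@example.com")
print(xml_bytes.decode())
```

The point is architectural: the requesting application never touches key stores or certificate parsing; it sends this small XML query over SOAP and lets the trust service do the PKI work.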
Digital Signature Services
The OASIS Digital Signature Services (DSS) technical committee is developing a set of services to help manage signature creation and validation. This will completely hide the PKI complexities of signature processing.
Single Sign-On Services
Single sign-on (SSO) services help facilitate authentication for Web services. Using SAML as the base standard (the subject of Chapter 6, "Portable Identity, Authentication, and Authorization"), a user can log in to the SSO service, answer one- or two-factor authentication challenges (user ID and password, and perhaps a smart card or biometric), obtain a credential (or security token), and present that credential in the SOAP headers of all subsequent Web service requests to supply the authentication information those services require. Because Web services are not interactive (they are computer-to-computer), this approach is usually required for Web services that need authentication. Single sign-on services are being developed by the Liberty Alliance and by Microsoft Passport.
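The authenticate-once, present-everywhere flow can be sketched as follows. This is our toy illustration, not SAML: the HMAC-signed JSON claims stand in for a signed SAML assertion, and the key, password, and function names are all hypothetical:

```python
import base64
import hashlib
import hmac
import json
import time

SSO_KEY = b"sso-service-signing-key"  # held by the SSO service (toy value)

def issue_token(user: str, password: str) -> str:
    """Authenticate once and return a signed credential
    (a stand-in for a SAML assertion)."""
    assert password == "s3cret"  # toy one-factor credential check
    claims = json.dumps({"sub": user, "exp": time.time() + 3600})
    sig = hmac.new(SSO_KEY, claims.encode(), hashlib.sha256).hexdigest()
    return base64.b64encode(f"{claims}|{sig}".encode()).decode()

def verify_token(token: str) -> dict:
    """What each downstream Web service does with the SOAP-header token."""
    claims, sig = base64.b64decode(token).decode().rsplit("|", 1)
    expected = hmac.new(SSO_KEY, claims.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad token signature")
    return json.loads(claims)

token = issue_token("alice", "s3cret")
# The same token rides in the SOAP header of every subsequent call:
print(verify_token(token)["sub"])  # alice
```

The user authenticates once against `issue_token`; every downstream service only verifies the signature, never the password, which is the essence of single sign-on.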
Access Control Services
Sometimes referred to as Entitlement Services, Access Control Services (ACS) provide centralized access control policies. They support one of several evolving OASIS access control standards such as XACML and XrML. These standards are covered in Chapter 9.
Security in a Box
Because of its compute-intensive nature, application-level XML security is being built into hardware for acceleration. Such security will be required in high-volume applications. The box will be the termination point for encryption, signature, and transport security. These hardware accelerators will provide dramatic acceleration of time-consuming, compute- intensive cryptography tasks for Web services.
Billing and Metering Services
The last of the security-related Web services is billing and metering services. These services keep track of a user's service utilization and accrue billing information. They are vital to the business models that Web services will enable and encourage. You can't charge for the use of Web services if you cannot authenticate legitimate users and bill them for their usage.
SSL Transport Layer Security
Secure Sockets Layer (SSL), whose standardized successor is Transport Layer Security (TLS), was invented by Netscape to provide for secure e-commerce transactions between a Web browser and a Web server. SSL is arguably the most widely used implementation of PKI. It is important and relevant to a discussion of Web Services Security because it is so easy to use, it is already deployed in virtually every organization, and it is so effective for certain types of Web service deployments.
A Description of the SSL Protocol
SSL security is most commonly used for browser-to-server security in e-commerce transactions. Virtually all browsers support SSL. Likewise, virtually all Web servers (or more correctly, Web application containers, such as Microsoft Internet Information Server, BEA's WebLogic Server, and IBM's WebSphere) do as well. Because most, if not all, of the current Web services containers (for example, Microsoft's .NET framework, BEA's WebLogic Workshop, IBM's WebSphere Studio Application Developer) are also Web application containers, you can use the existing built-in Transport Layer Security for Web services without modifying a thing.
SSL is effective at maintaining confidentiality of transactions. It implements shared key encryption between its endpoints after first transporting the shared key via public key cryptography. SSL will prove to be broadly useful for Web Services Security, especially in early implementations, because those early Web services require only simple point-to-point encryption and possibly the level of authentication SSL can provide. As Chapter 7 on WS-Security explains, SSL also will be useful as an added layer of transport security underneath the message-level security this book describes.
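From the caller's side, turning this on is a matter of configuring the transport. The sketch below uses Python's standard `ssl` module (our choice of illustration; the certificate file names are placeholders) to show the difference between one-way SSL, where only the server is authenticated, and two-way SSL, where the client presents a certificate too:

```python
import ssl

# One-way SSL: the client authenticates the server only.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: server cert is checked
print(ctx.check_hostname)                    # True: and matched to the host

# Two-way (mutual) SSL additionally presents a client certificate;
# the paths below are placeholders for real key material.
# ctx.load_cert_chain(certfile="client.pem", keyfile="client-key.pem")

# An HTTPS Web service call would then wrap its socket with this
# context, and the SSL handshake performs the public-key exchange of
# the shared session key automatically.
```

Everything after the handshake, including the shared-key encryption of each SOAP message on the wire, happens below the application, which is why early Web service deployments can adopt SSL without code changes.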
Four options are available when you are using SSL Transport Layer Security over HTTP (which you will see as HTTPS):
We will briefly explain how the SSL protocol works using an example in which the "client" is the Web service requestor and the "server" is the Web service provider. The steps outlined in Figure 3.13 are as follows:
Figure 3.13. The full SSL Protocol showing two-way authentication.
When people talk about SSL Transport Layer Security, they sometimes use the term secure pipe as a metaphor. This means that after the SSL endpoints go through their protocol (either one-way or two-way SSL), a cryptographic pathway is created between the two endpoints (see Figure 3.14). Web services that are based on HTTP as the transport flow through this secure pipe, making all messages sent back and forth confidential. Remember, though, that the messages are encrypted just during transport. At the receiving endpoint, the messages are decrypted by the server. If the Web service uses multiple hops on its way to its real destination or if persistent confidentiality (encryption) is needed for the messages, SSL does not provide a solution. Hence, we still need to learn a lot more about message-level security applied to Web services messages.
Figure 3.14. The SSL "secure pipe."