10.5 Transport Layer Security (TLS)


The TLS protocol is designed to establish a reliable connection that is secure from eavesdropping [TLS]. It was the enabler of e-commerce and TV-commerce, as it allowed financial information such as social security numbers, credit card numbers and account numbers to be exchanged securely.

10.5.1 History

TLS had a long development history before becoming a widely accepted industry standard (see Figure 10.3). The first version, called Secure Sockets Layer (SSL), was developed at Netscape in 1994; this was merely a prototype. The specification of the next version, SSL v2, was released for public comment in November 1994. The original Netscape implementation of the SSL specification was weak to the point that a secure connection could be broken within an hour.

Figure 10.3. The history of TLS.

The SSL specification was developed without vendor input. Therefore, a number of incompatible independent variants were developed to fix problems with the protocol. The most important of these implementations is Microsoft's Private Communications Technology (PCT), developed by Josh Benaloh, Butler Lampson, Daniel Simon, Terence Spies and Bennet Yee and published in October 1995.

Although backwards compatible with SSL, PCT had better security properties. It included a non-encrypted operation mode, providing only data authentication. It tightened up the key transform, and had an export mode in which key sizes were limited to 40 bits to satisfy U.S. export restrictions; this mode allowed weak encryption to coexist with strong authentication. Finally, PCT improved performance by reducing the number of round trips required.

Although work began on v2.1 to fix the problems with v2, Netscape decided to start the design of a new version from scratch (partly due to competition with Microsoft's PCT). It tasked Paul Kocher, Alan Freier and Phil Karlton with the development of a new version of SSL to be called SSL v3 [SSL]. It improved on PCT by introducing a number of new ciphers, including DSS, Diffie-Hellman (DH), and the National Security Agency's FORTEZZA. Additional improvements included support for a closure handshake to avert truncation attacks, closing a security hole in SSL v2 that allowed a TCP connection closure to be forged without detection. The new version had a new record type and other features that were not backwards compatible with the previous version. As expected, Microsoft responded with an improved version, called Secure Transport Layer Protocol (STLP), which introduced a number of new features, such as UDP support and support for client authentication using shared secret keys; STLP was released in 1996.

In May 1996, the IETF chartered the TLS working group to try to standardize an SSL-like protocol. It was commonly expected that this standard would harmonize the Netscape and Microsoft versions. Netscape submitted SSL v3 and Microsoft submitted its STLP protocol. As the group met, it became apparent that there was a lot of resistance to modifications of either protocol. It was decided to settle on minor modifications to SSL v3, and to require implementers to support DH, DSS, and 3DES in addition to RSA. The inclusion of 3DES was implied by the Danvers Doctrine, established at an April 1995 IETF meeting in Danvers, Massachusetts, which stated that the IETF would design protocols that embodied good engineering principles regardless of exportability.

In late 1997, as the TLS working group completed its work, it submitted its documents to the Internet Engineering Steering Group (IESG), which sent them back with instructions to add DSS for authentication, DH for key agreement, and 3DES for encryption. After a prolonged standoff, these changes were implemented through grudging consensus.

In the meantime, the IETF Public Key Infrastructure X.509 (PKIX) working group, which was tasked with standardizing a profile for X.509 certificates, was in the final stages of its work. Since TLS depended on certificates, IETF rules prohibited advancing TLS to RFC status before PKIX itself was promoted to RFC status. Finally, in January 1999, over two years late, TLS was published as RFC 2246.

Following TLS, Wireless TLS (WTLS) emerged. In 1996 mobile phone vendors were rushing to sell wireless information services. Nokia had Smart Messaging for GSM (SMS), Unwired Planet had HDML+HDTP and Ericsson had ITAP in its labs. Although these proprietary solutions had limited success in some areas, they did not gain market acceptance: being tied to a single vendor was unacceptable to carriers, and the user base was not large enough to attract third-party content and services. Consequently, in June 1997, Phone.com (formerly Unwired Planet) cofounded the Wireless Application Protocol (WAP) Forum with Ericsson, Motorola and Nokia (the world's three largest wireless handset manufacturers) to provide a worldwide standard for the delivery of Internet-based services to mass-market mobile phones [WAP].

Today, after many standardization discussions and debates, the wireless industry seems to have settled on a four-layer architecture in which the Wireless Datagram Protocol (WDP) is used to encapsulate WTLS, which encapsulates the Wireless Transaction Protocol (WTP), which encapsulates the Wireless Session Protocol (WSP), which in turn supports the Wireless Application Environment (WAE). WTLS is modeled after SSL and, as such, allows data to be transmitted over a secure connection. WTLS handles authentication and protection against denial of service, while ensuring privacy through encryption and decryption.

10.5.2 SSL and TLS Integration

The primary utility of SSL and TLS is to protect HTTP traffic. In HTTP, a TCP connection is created and the client sends a request. The server responds with a document. When SSL is used, the client creates a TCP connection and establishes an SSL channel over that TCP connection. The HTTP request is then sent through the SSL connection rather than directly through the TCP connection. Ordinary HTTP servers do not understand the SSL and TLS handshake; therefore, the URL scheme https is used instead of http (e.g., https://www.xyz.com) to indicate a request to open a TLS connection for the submission of the HTTP request. The combination of HTTP running over SSL is often referred to as HTTPS.
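As a simple illustration, the following Python sketch (using the placeholder host www.example.com) issues an HTTPS request with the standard library: HTTPSConnection creates the TCP connection, establishes the TLS channel over it, and the ordinary HTTP request then travels inside that channel.

    import http.client
    import ssl

    # Standard context: verifies the server's certificate chain and host name.
    context = ssl.create_default_context()

    # The TCP connection and the TLS channel over it are created here.
    conn = http.client.HTTPSConnection("www.example.com", 443, context=context)
    conn.request("GET", "/")               # plain HTTP request, carried inside TLS
    response = conn.getresponse()
    print(response.status, response.reason)
    conn.close()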

Many protocols run over TCP, and an SSL connection is essentially a secure version of a TCP connection. Therefore, securing existing protocols by adding an SSL layer between TCP and those protocols is a simple and attractive proposition. As a result, in addition to HTTP, other protocols such as NNTP [SNEWS], SMTP and FTP [SFTP] run over SSL.

10.5.3 Transmission and Reception Processes

All security protocols are designed to securely transmit messages. To transmit a secure message, the following steps are required (see Figure 10.4):

  1. The emitter computes a digest of the message. This is performed by applying a digest algorithm, which takes a message as input and produces a message digest which is much smaller and has a fixed length, e.g., 4 bytes, regardless of the length of the input.

  2. The emitter signs the message together with the digest, resulting in a digital signature and a certificate, both associated with that specific message.

  3. The emitter produces a random session key and uses it to encrypt the signed message, certificate and signature.

  4. Finally, the emitter encrypts the session key using the receiver's public key and attaches the wrapped session key to the message. The resulting message is sent out.

Figure 10.4. Secure message emission process.
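The following Python sketch walks through the four emission steps above using the widely available cryptography package; the certificate handling of step 2 is omitted, and the key sizes and algorithms (SHA-256, RSA, AES-GCM) are illustrative choices, not requirements of TLS.

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    emitter_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    message = b"order: 42 widgets"

    # Step 1: compute a fixed-length digest of the message.
    digest = hashes.Hash(hashes.SHA256())
    digest.update(message)
    message_digest = digest.finalize()

    # Step 2: sign the message (sign() recomputes the digest internally).
    signature = emitter_key.sign(
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256())

    # Step 3: encrypt the signed message under a fresh random session key.
    session_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, message + signature, None)

    # Step 4: wrap the session key with the receiver's public key and attach it.
    wrapped_key = receiver_key.public_key().encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))
    envelope = (wrapped_key, nonce, ciphertext)   # what is actually sent out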

The message reception process mirrors the transmission process. Initially, the receiver uses its private key to decrypt the session key. If the decrypted session number matches the session number established through the handshake process, then the session keys, digest, certificate, and signature are obtained and the message is decrypted. Next, the receiver recomputes the message digest and verifies the result against the received digest. The emitter's certificate is then verified. Finally, the emitter's signature is verified with the emitter's public key.
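Continuing the sketch above (and reusing emitter_key, receiver_key, wrapped_key, nonce and ciphertext from it), the receiving side could look roughly as follows; note that verify() recomputes the message digest internally.

    from cryptography.exceptions import InvalidSignature

    # Unwrap the session key with the receiver's private key.
    session_key = receiver_key.decrypt(
        wrapped_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))

    # Decrypt the payload and split off the 256-byte RSA-2048 signature.
    plaintext = AESGCM(session_key).decrypt(nonce, ciphertext, None)
    received_message, received_signature = plaintext[:-256], plaintext[-256:]

    # Recompute the digest and verify the emitter's signature.
    try:
        emitter_key.public_key().verify(
            received_signature, received_message,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256())
        print("message authentic:", received_message)
    except InvalidSignature:
        print("signature verification failed")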

10.5.4 The Handshake Protocol

The lifecycle of a TLS connection comprises the steps of setup, data transfer and teardown. During the setup step, a handshake protocol is used to authenticate the server, optionally authenticate the client, and establish the cryptographic keys and algorithms used to encode and decode the transmission. Once the keys and algorithms are determined, the data transfer mode is entered. During teardown, the client alerts the server that it is about to close the connection, and subsequently closes the connection; the notification ensures that the server does not keep waiting for data from the client using the same parameters as those used in the closed session.

The TLS Handshake Protocol consists of a suite of three subprotocols that are used to allow peers to agree on security parameters for the record layer, authenticate themselves, instantiate negotiated security parameters, and report error conditions to each other. Section 7.3 of RFC 2246 presents a short overview of the handshake process. A simplified implementation view of the handshake protocol is as follows (see Figure 10.5):

  • Step 1: ClientHello: The client sends to the server a list of cipher algorithms it is capable of supporting, along with a random number used as input to the key generation process (see the first arrow from the client to the server in Figure 10.5). This step is implemented using the ClientHello message. This is part of the first step specified in Section 7.3 of RFC 2246 and of the first step in Figure 1 of RFC 2246.

  • Step 2: ServerHello: The server selects a cipher algorithm from the list and sends it back along with a random number used for the key generation process, and a certificate (i.e., data structure) containing the server's public key (see the first arrow from the server to the client in Figure 10.5). The certificate provides the client all information needed to authenticate the server. The server performs this step by sending the ServerHello message, followed by a ServerCertificate message, and terminated by ServerHelloDone (the ServerCertificate and ServerHelloDone are not part of the ServerHello message). Often, more complicated handshake mechanisms are used in which additional messages are sent before the ServerHelloDone message. This is part of the first step specified in Section 7.3 of RFC 2246 and of the first step in Figure 1 of RFC 2246.

  • Step 3: PreMasterSecret: The client verifies the server's certificate and extracts the server's public key. The client then generates a random secret string called the pre_master_secret, encrypts it with the server's public key, and sends it to the server (see the arrow from the client to the server labeled 'key step' in Figure 10.5). This is part of the second and third steps specified in Section 7.3 of RFC 2246, and of the third step in Figure 1 of RFC 2246. It is also part of the client key exchange message specified in Section 7.4.7 of RFC 2246.

  • Step 4: CalculateKeys: The client and server independently compute the encryption and MAC keys from the pre_master_secret and the random numbers exchanged in steps 1 and 2. This is part of the fourth step specified in Section 7.3 (and the third step in Figure 1) of RFC 2246.

  • Step 5: ClientFinished: This step is performed by the client by sending the Finished message, which is the first message sent utilizing the negotiated algorithms and keys. To protect the handshake from tampering, it contains the MAC of all handshake messages and is sent to the server.

  • Step 6: ServerFinished: The server sends a Finished message containing the MAC of all handshake messages to the client.

Figure 10.5. Simplified Handshake protocol.

Once the handshake is complete, the two parties have shared secrets that are used to encrypt records and compute keyed MACs on their contents (see the last step in Figure 1 of RFC 2246).
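In practice an application rarely implements this exchange itself. In the following Python sketch the standard ssl module runs the complete handshake against a placeholder host, and the code merely inspects the outcome: the negotiated protocol version, the chosen cipher suite and the server's certificate.

    import socket
    import ssl

    context = ssl.create_default_context()    # verifies the server certificate
    with socket.create_connection(("www.example.com", 443)) as sock:
        with context.wrap_socket(sock, server_hostname="www.example.com") as tls:
            print("protocol version:", tls.version())    # e.g., 'TLSv1.3'
            print("cipher suite:", tls.cipher())          # (name, protocol, bits)
            print("server subject:", tls.getpeercert().get("subject"))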

10.5.5 Data Transfer Protocol

The data transfer protocol is based on breaking the data into chunks and wrapping each chunk as a record; the security protocols transfer records only (see Section 6.2 in RFC 2246). On the receiving end, each record is decoded, decrypted and verified independently. Once received, the records are reassembled to reconstruct the data from which they were constructed.

To protect each record from an attack, a MAC is computed and attached to the tail of the data fragment (Figure 10.6); this MAC must be verified by the receiver. Prior to transmission, the concatenated data and MAC are encrypted, resulting in the encrypted payload, to which a record header is added. Subsequently, the records comprising the header and payload are transmitted. The receiver uses the session keys to decrypt each payload, the MAC to verify the integrity of each payload, and the record headers to assemble the data.

Figure 10.6. The fragmentation of data within a transport.
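The following Python sketch illustrates this record construction (fragment, MAC, encrypt, prepend header), loosely following the MAC-then-encrypt layout of SSL v3/TLS 1.0. The key sizes, cipher and fragment size are illustrative, so the output is not an interoperable record layer.

    import os
    import struct
    from cryptography.hazmat.primitives import hashes, hmac, padding
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    MAC_KEY = os.urandom(32)     # in TLS these come from key derivation (10.5.6)
    ENC_KEY = os.urandom(16)
    APPLICATION_DATA = 23        # content-type code for application_data

    def build_records(data: bytes, fragment_size: int = 16384) -> list:
        records = []
        for i in range(0, len(data), fragment_size):
            fragment = data[i:i + fragment_size]

            # Compute a MAC over the fragment and append it to the tail.
            mac = hmac.HMAC(MAC_KEY, hashes.SHA256())
            mac.update(fragment)
            protected = fragment + mac.finalize()

            # Encrypt fragment + MAC (AES-CBC with PKCS7 padding; IV in clear).
            padder = padding.PKCS7(128).padder()
            padded = padder.update(protected) + padder.finalize()
            iv = os.urandom(16)
            encryptor = Cipher(algorithms.AES(ENC_KEY), modes.CBC(iv)).encryptor()
            payload = iv + encryptor.update(padded) + encryptor.finalize()

            # Prepend the header: content type, protocol version, payload length.
            header = struct.pack("!BHH", APPLICATION_DATA, 0x0301, len(payload))
            records.append(header + payload)
        return records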

The record header provides all the information needed to decode the record. It specifies the content type, length and protocol version. Four content types are supported: application_data, alert, handshake and change_cipher_spec. A distinction is made between the application and the protocol implementation: whereas the application_data type is used by the application, the other three types, change_cipher_spec, alert and handshake (see Section 6.2.1 in RFC 2246), are used by the protocol implementation itself. The alert content type is used for signaling errors such as problems in the handshake, authentication or decryption errors. An alert message also carries the close_notify message as an early warning that the connection is about to close. The change_cipher_spec type has a special role, as it indicates a change in the session parameters used for the encryption and authentication of records.
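A small Python sketch of decoding these header fields from the first five bytes of a record (the content-type codes are those assigned in RFC 2246):

    import struct

    CONTENT_TYPES = {20: "change_cipher_spec", 21: "alert",
                     22: "handshake", 23: "application_data"}

    def parse_record_header(record: bytes):
        content_type, version, length = struct.unpack("!BHH", record[:5])
        return CONTENT_TYPES.get(content_type, "unknown"), hex(version), length

    # Example: the header of a TLS 1.0 handshake record carrying 512 bytes.
    print(parse_record_header(bytes([22, 3, 1, 2, 0]) + b"\x00" * 512))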

The handshake content type is used to carry the handshake messages only. This includes the initial connection setup messages. Records of this type, which carry data before the ciphers and keys are established, may be in the clear and unauthenticated. However, when a handshake is initiated over an established session with established keys, those records are encrypted.

10.5.6 Key Derivation

Key derivation has two phases: the pre_master_secret is converted into the master_secret, which is then converted into the individual cryptographic keys. These conversions are performed using a Key Derivation Function (KDF), which takes as input three parameters, a client random, a server random, and a key, and produces a derived key. In the first step it takes the pre_master_secret as the key, and in the second step it takes the master_secret (see Figure 10.7).

Figure 10.7. The two step key derivation process and the use of KDF.
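The sketch below illustrates the two-phase derivation in Python, with a simplified HMAC-SHA-256 expansion standing in for the KDF (the actual TLS 1.0 PRF combines MD5 and SHA-1; see Section 5 of RFC 2246). The labels and the slicing of the key block into individual keys are likewise only illustrative.

    import hmac
    import hashlib
    import os

    def kdf(secret: bytes, label: bytes, rand1: bytes, rand2: bytes,
            length: int) -> bytes:
        # Expand the secret, mixed with both random values, to `length` bytes.
        seed = label + rand1 + rand2
        out, a = b"", seed
        while len(out) < length:
            a = hmac.new(secret, a, hashlib.sha256).digest()
            out += hmac.new(secret, a + seed, hashlib.sha256).digest()
        return out[:length]

    client_random, server_random = os.urandom(32), os.urandom(32)
    pre_master_secret = os.urandom(48)

    # Phase 1: pre_master_secret -> master_secret (48 bytes).
    master_secret = kdf(pre_master_secret, b"master secret",
                        client_random, server_random, 48)

    # Phase 2: master_secret -> key block, sliced into the individual keys.
    key_block = kdf(master_secret, b"key expansion",
                    server_random, client_random, 72)
    client_mac_key, server_mac_key = key_block[:20], key_block[20:40]
    client_enc_key, server_enc_key = key_block[40:56], key_block[56:72]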

10.5.7 Protecting Memory

Unless the SSL implementation is in hardware, the master_secret lives in the host's memory. This means that any attacker who can read the memory location at which the master_secret is stored (e.g., using administrator privileges) can break the security system.

In general, implementations should refrain from writing secret data to disk at all. However, when this is necessary, the implementation of a middleware execution environment should be careful when granting access permissions to that information, to prevent unauthorized users from accessing the data. An implementation should ensure that the application's files are deleted, beyond recovery, before it terminates. A good practice is to scrub the data using three overwrite passes: the first of zeros or ones, the second of random data, and the third of zeros.
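A minimal Python sketch of such a three-pass scrub is shown below; whether the overwrites actually reach the physical medium depends on the file system and the storage device.

    import os

    def scrub_and_delete(path: str) -> None:
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for fill in (b"\x00" * size, os.urandom(size), b"\x00" * size):
                f.seek(0)
                f.write(fill)
                f.flush()
                os.fsync(f.fileno())   # push each pass toward the device
        os.remove(path)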

In execution environments supporting virtual memory, it is useful to store the master_secret in memory locations that are never swapped to disk. Once that memory has been swapped out, any subsequent operation on the data requires swapping it back in, and a copy of the secret may remain in the swap file. This can often be avoided by requesting that the operating system lock the memory location for as long as the master_secret is in scope; such locking indicates to the memory manager that the memory is in use and prevents it from being swapped. Note, however, that this merely protects the secret from being recovered from disk after the process has terminated; it does not protect against an attacker who can read the live memory of the process.
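On POSIX systems such locking can be requested with mlock(2); the following ctypes sketch is illustrative and platform dependent (the buffer holding the secret is pinned in RAM, then scrubbed and unlocked when no longer needed).

    import ctypes
    import ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

    secret = ctypes.create_string_buffer(48)         # would hold the master_secret
    if libc.mlock(ctypes.addressof(secret), ctypes.sizeof(secret)) != 0:
        raise OSError(ctypes.get_errno(), "mlock failed")
    # ... use the secret ...
    ctypes.memset(ctypes.addressof(secret), 0, ctypes.sizeof(secret))   # scrub
    libc.munlock(ctypes.addressof(secret), ctypes.sizeof(secret))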

Finally, a common and often overlooked security hole is core dumps. On many operating systems, when an application generates certain errors, a core image is written to disk. This image contains the entire memory state of the process, including any secret information such as the master_secret. Such uncontrolled core dumps can be prevented by ensuring that appropriate exception handlers catch these error events and selectively (or never) write error images to disk.
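On POSIX systems a simple additional precaution, sketched below, is to set the core-file size limit of the process to zero so that a crash cannot write its memory image to disk at all.

    import resource

    # Disallow core files for this process (soft and hard limits set to zero).
    resource.setrlimit(resource.RLIMIT_CORE, (0, 0))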

10.5.8 Securing Servers

Servers that use static key pairs to establish their master_secret are the most vulnerable. The most common case is that the server has a single RSA key pair with the public key in its certificate. The key found in that certificate is used by the client to transmit the pre_master_secret from which the session keys are established. An attacker who gains access to the server's static private key can recover the master_secret of any session established with it and decrypt all messages sent to that server. Utility programs, such as ssldump, are designed to record all incoming and outgoing messages. A compromise of the server's static private key may therefore imply the compromise of all sessions that are established with that key.

To address this vulnerability, ephemeral keying can be used. The server generates an ephemeral key pair that is used for key establishment and uses its static key pair to authenticate the ephemeral public key to the client. With ephemeral keying, knowledge of an ephemeral key is only useful for decrypting the information protected with that key, and therefore has only local, temporary implications. Knowledge of the static key has little value for decrypting traffic, because it is not used for traffic protection; only the ephemeral keys are used for traffic encryption. Although compromise of the static key does allow the ephemeral keys of a given active session to be compromised, it has no impact once the session is complete.
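The following Python sketch illustrates the idea with an ephemeral X25519 key pair authenticated by a static RSA key; the algorithm choices are illustrative (the SSL/TLS cipher suites of that era typically used ephemeral Diffie-Hellman authenticated with RSA or DSS signatures).

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    static_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # A fresh ephemeral key pair is generated for this session only.
    ephemeral_key = X25519PrivateKey.generate()
    ephemeral_public = ephemeral_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

    # The static key only signs the ephemeral public value; it never protects traffic.
    signature = static_key.sign(
        ephemeral_public,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256())

    # The client contributes its own ephemeral key; the shared secret feeds the KDF.
    client_ephemeral = X25519PrivateKey.generate()
    shared_secret = ephemeral_key.exchange(client_ephemeral.public_key())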

10.5.9 The Choice of Signature Algorithms

When deploying SSL, the choice of signature algorithm often depends on intellectual property or compatibility factors. In cases where performance is the dominant factor, however, RSA is the algorithm of choice over DSA or DH because it is much faster. Note that the computational cost is paid mainly by the client, and thus it impacts mainly latency rather than throughput. A detailed comparison and analysis can be found in [SFAQ].

Although both RSA and DSA can be used in ephemeral mode, the DSA cipher suites are nearly always deployed with ephemeral keys. This requires substantial extra work on both sides and dramatically reduces throughput. In practice, the DSA cipher suites tend to be between 2 and 10 times slower than RSA cipher suites of comparable strength.


