10.2 The Public Key Infrastructure (PKI)


10.2.1 Types of Cryptographic Algorithms (Ciphers)

Cryptographic algorithms, also known as ciphers, can be grouped into one-way and two-way algorithms: two-way methods are intended to allow recovery of the original content, whereas one-way methods are not. The two-way methods can be further organized into three categories: symmetric, asymmetric, and hybrid. Due to key management issues, asymmetric methods are typically more secure than symmetric methods; the trade-off is that asymmetric methods are typically much more computationally intensive.

10.2.1.1 Two-way Symmetric Methods

These methods use the same key for encryption and decryption. This means that all people and software agents with access to the key can perform both encryption and decryption. These methods are therefore often referred to as private key methods: if the key were made public, it would no longer be possible to control the membership of the group that has access to information encrypted with symmetric methods.

Symmetric methods can be further divided into block-based and stream-based methods. Stream-based methods can effectively process data in small chunks (bits or individual bytes), one chunk at a time. Block-based methods can effectively process larger chunks of data, typically 1 KB to 10 KB, much larger than the chunks processed by stream-based methods.
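The symmetric property can be illustrated with a minimal Python sketch: a toy stream cipher that XORs the data with a keystream derived from the shared key. The SHA-256-based keystream here is an illustrative stand-in, not a standardized cipher; the point is that the very same key drives both encryption and decryption.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream of the requested length by
    repeatedly hashing the key with a counter (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_stream(key: bytes, data: bytes) -> bytes:
    """Encrypt or decrypt: XOR with the keystream is its own inverse,
    so one shared key performs both operations."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared secret"
ciphertext = xor_stream(key, b"attack at dawn")
assert xor_stream(key, ciphertext) == b"attack at dawn"
```

Anyone holding `key` can both produce and undo the ciphertext, which is exactly why the key must remain private to the group.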

10.2.1.2 Two-way Asymmetric Methods

Asymmetric methods use one key for encryption and another for decryption, where the decryption key cannot be derived from the encryption key (e.g., the Rivest Shamir Adleman (RSA) cipher). Asymmetric methods are also referred to as public key methods, because making the encryption key public does not release control over the membership of the group that has access to the information. The encryption key is therefore called the public key, and the decryption key is called the private key or secret key. With this method, message senders encrypt the message using the public key of the intended recipients.

It is possible to generate a new key pair for every message sent; the recipient uses that new public key to reply, and once the private key has been used to decrypt the content, it can be destroyed. This approach gives rise to a family of key exchange techniques having various degrees of forward secrecy. The degree of forward secrecy is the confidence that the compromise of a long-term private key does not compromise any earlier session keys. The following degrees of forward secrecy are defined:

  • Perfect : A degree of perfect means that even the exposure of the private keys of all parties does not reveal the exchanged keys.

  • Good : A degree of good means that the exposure of only one party's private key does not reveal the exchanged key but the exposure of both private keys does reveal it.

  • None : A degree of none indicates that if one party's private key is exposed, then all the keys exchanged with that party are revealed.

10.2.1.3 Two-way Hybrid Methods

Hybrid algorithms use a combination of symmetric and asymmetric algorithms in a layered architecture, whereby a public key method is used to encrypt a randomly generated encryption key that is then used with a symmetric method. To exchange a message with this method, the sender generates a random key, uses a symmetric method to encrypt the message with this key, and encrypts the random key separately using the public key of the recipient group, so that only the intended recipients can recover it. The encrypted message is then transmitted along with the encrypted key. On reception, the recipients use their private key to recover the random key, which is then used to decrypt the message using the same symmetric method used to encrypt it at the time of transmission. The TLS standard (described in this chapter) is an example application of hybrid methods [TLS].

The hybrid method combines the best of both worlds, as it exhibits the strength of the public key method with the efficiency of the private key method. Strength is achieved by using the public key method to manage private keys; obviously, the strength of the message encryption depends on the strength of the symmetric encryption method, which can be controlled through the length of the private key it utilizes. Efficiency is achieved by applying the symmetric method to the data stream or blocks instead of applying the computationally intensive public key method; using the public key method to process the relatively short keys of the symmetric method (compared to the length of the message) requires relatively little computational resources.

10.2.1.4 One-way Digest Methods

Two-way methods are not appropriate for all applications. Certain applications, such as digital signatures, require a short representation of the content, known as a digest, which is often confused with the signature itself. In principle, any content can be digested (and therefore signed), including streams, files, electronic business cards, and individual email messages.

A digest of a message is a data structure, or a sequence of bits, of a pre-defined length, representing some calculation performed on the message one chunk (typically one character or byte) at a time. A function that computes a digest is known as a digest function, or a hash function, and the resulting value is referred to as the message digest code. For example, one could break a message into 16-bit words (i.e., one UTF-16 character, or two bytes, at a time), and add the unsigned integer represented by each word to a 32-bit unsigned integer sum that is initialized to 0. When the sum exceeds the maximum value representable by a 32-bit unsigned integer (i.e., 2^32 - 1), the carry bit is simply ignored and the sum continues to accumulate; CRC is an example of a digest method based on such a principle. Other digest algorithms include Message Digest 5 (MD5) and the Secure Hashing Algorithm (SHA-1). It is generally possible to compute a digest of a message more efficiently than encrypting that same message.
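The word-summing digest described above can be sketched directly. Like CRC, this toy checksum is for illustration only and is unsuitable for cryptographic use:

```python
def checksum32(message: bytes) -> int:
    """Toy digest from the text: sum 16-bit words into a 32-bit
    accumulator, discarding carries past 32 bits."""
    if len(message) % 2:              # pad odd-length input with a zero byte
        message += b"\x00"
    total = 0
    for i in range(0, len(message), 2):
        word = int.from_bytes(message[i:i + 2], "big")
        total = (total + word) & 0xFFFFFFFF   # ignore the carry bit
    return total

# The digest has a fixed length regardless of message length ...
digest = checksum32(b"hello world")
assert 0 <= digest < 2**32
# ... and distinct messages can collide (here, swapping two words):
assert checksum32(b"\x00\x01\x00\x02") == checksum32(b"\x00\x02\x00\x01")
```

The final assertion demonstrates the non-uniqueness discussed below: because addition is order-independent, reordering the words produces the same digest.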

The requirement for a short, fixed-length digest exists for a variety of reasons; typically, efficiency of computation and low-bandwidth transmission overheads are the primary considerations. It is certainly cheaper to encrypt (and decrypt) only a short digest of the message than to encrypt (and decrypt) the entire message. Although it would be ideal if the digest uniquely identified the message, the digest of a message is not guaranteed to be unique, for a simple reason: the length of the digest is fixed, whereas the message is much longer and may have variable length. Therefore, the total number of possible messages is much larger than the total number of distinct digest values. This means that given a specific digest method, there is a great number of messages for which the same digest is computed; note that the name hash implies exactly that: there are groups of messages that have the same hash code. However, although uniqueness is not guaranteed, because the digest (and the signature) is derived from the message and must match its content, the possibility of non-unique digests does not make it easy to recycle or forge digital signatures derived from them. Given a digest value and a message, it is difficult to generate a revised message that has the same interpretation as the original message yet produces the prespecified digest (or signature). Nevertheless, the longer the digest value, the less likely it is that two independently authored messages have the same digest value.

10.2.2 Standard Ciphers

10.2.2.1 Data Encryption Standard (DES)

Early versions of the security systems used today date back to DES, also known as FIPS 46-2, originally developed in the 1970s [DES]. At the time it was introduced, it was so difficult to crack that it was restricted from exportation to other countries. It is a block cipher, encrypting a fixed-size block of data. DES works by processing 64-bit chunks of data through 16 rounds, each using a subkey derived from the key. The 56-bit key length used by DES generates 2^56, or over 72,000,000,000,000,000 (i.e., 7.2E16), possible encryption keys. Due to the relatively small key size, with current technology DES is easy to break with the right hardware (a current ASIC chip can test 200 million combinations every second and costs about $10).

Recently, the National Institute of Standards and Technology (NIST) has suggested that FIPS 46-2 be superseded by the Triple-DES (3DES) algorithm, which applies DES three times with different keys [DES] and is considered to be more secure than DES. 3DES consists of three applications of the DES cipher in an Encrypt-Decrypt-Encrypt (EDE) configuration with independent keys. 3DES has a block size of 64 bits and a key length of 168 bits (3 x 56). Because of its construction, 3DES is thought to offer strength equivalent to a cipher with a 112-bit key.

10.2.2.2 International Data Encryption Algorithm (IDEA)

A more advanced system, IDEA, is a strong block cipher [IDEA]. It is an 8-round cipher with a 64-bit block and 128-bit keys. The same algorithm is used for both encryption and decryption, and it is considered to be very secure. The strength of the cipher comes from mixing operations from different algebraic groups, which makes it resistant to both differential and linear cryptanalysis. Currently, there is no known way of breaking IDEA short of brute force.

10.2.2.3 Diffie Hellman (DH)

DH was the first openly published public key system [DH]. DH, along with derivatives such as ElGamal, was covered by U.S. patent number 4,200,770, which expired in September 1997.

DH is an algorithm for agreeing on keys over an insecure channel. DH can be used with more than two parties, but it does not provide any authentication of those parties; a number of extensions were developed to add authentication (e.g., EKE by Bellovin and Merritt), at the cost of significantly increased computational complexity.

The security of the DH system is based on the DH Problem (DHP), which is conjectured (but not proven) to be equivalent to the Discrete Logarithm Problem (DLP); under the Diffie-Hellman assumption, it is infeasible to compute g^ab knowing only g^a and g^b. With DH, assuming a well-known generator g and prime modulus p, the parties communicate as follows:

  1. A selects a random number a and sends the result of g^a mod p to B.

  2. B selects a random number b and sends the result of g^b mod p to A.

  3. B receives the number from A and computes the shared key (g^a)^b mod p.

  4. A receives the number from B and computes the shared key (g^b)^a mod p.

  5. Through their independent calculations, both A and B have agreed on the key g^ab mod p.
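The five steps above can be sketched in a few lines of Python. The parameters here are illustrative; real deployments use carefully chosen primes of 2048 bits or more:

```python
import random

# Small illustrative parameters: a Mersenne prime modulus and generator 2.
p = 2**61 - 1   # 2305843009213693951, prime
g = 2

a = random.randrange(2, p - 1)    # A's secret exponent
b = random.randrange(2, p - 1)    # B's secret exponent

A_public = pow(g, a, p)           # step 1: A sends g^a mod p to B
B_public = pow(g, b, p)           # step 2: B sends g^b mod p to A

key_at_B = pow(A_public, b, p)    # step 3: B computes (g^a)^b mod p
key_at_A = pow(B_public, a, p)    # step 4: A computes (g^b)^a mod p

assert key_at_A == key_at_B       # step 5: both hold g^ab mod p
```

An eavesdropper sees only `A_public` and `B_public`; recovering the shared key from those values is the DHP described above.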

10.2.2.4 Digital Signature Standard (DSS)

The DSS, formally defined in [DSS], employs the ElGamal and Schnorr public key systems to produce a fixed-width signature (irrespective of the public/private key size); in contrast, the length of an RSA signature is a function of the key length employed. The DSS uses discrete exponentiation modulo a prime p, where the exponents are computed modulo a prime q. A signature produced with DSS is likely to remain safe at least until 2015 [FACTOR]; if longer-term signature verification is required, time stamping and document trail mechanisms can be used.

10.2.2.5 Rivest Shamir Adleman (RSA)

The RSA algorithm is a very commonly used asymmetric algorithm, employed for both encryption and signing [RSA]. It relies on the difficulty of factoring large numbers. The strength of the encryption increases with the key length; a 1024-bit key is considered safe. The public and private keys are functions of a pair of large prime numbers, and recovering the plaintext from the ciphertext is equivalent to factoring the product of the two primes. Although the algorithm is patented, its licensing requirements allow for wide use. The algorithm is as follows:

  1. Take two large prime numbers, P and Q.

  2. Find their product N = PQ; N is called the "modulus".

  3. Choose a number, E, where E < N and E is relatively prime to (P-1)(Q-1).

  4. Find the inverse of E modulo (P-1)(Q-1), called D; i.e., ED = 1 mod (P-1)(Q-1).

  5. E and D are called the public and private exponents, respectively.

  6. The public (encryption) key is the pair (N,E) and the private (decryption) key is D.

  7. The factors P and Q must be kept secret or destroyed.
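The seven steps can be traced with deliberately tiny primes. Real RSA keys use primes hundreds of digits long, so the numbers below are for illustration only:

```python
# Toy RSA key generation following the steps above (illustration only).
P, Q = 61, 53                # step 1: two primes
N = P * Q                    # step 2: the modulus, 3233
phi = (P - 1) * (Q - 1)      # (P-1)(Q-1) = 3120
E = 17                       # step 3: E < N, relatively prime to phi
D = pow(E, -1, phi)          # step 4: D = inverse of E mod phi (Python 3.8+)

# Steps 5-6: (N, E) is the public key; D is the private key.
message = 65                          # a plaintext integer smaller than N
ciphertext = pow(message, E, N)       # encrypt with the public key
assert pow(ciphertext, D, N) == message   # decrypt with the private key
```

Step 7 holds here as well: anyone who learns P and Q can recompute `phi` and hence D, which is why the factors must be kept secret or destroyed.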

10.2.2.6 Advanced Encryption Standard (AES)

The AES resulted from the US Government's search for a replacement for the aging DES [AES]. NIST first called for algorithms in September 1997. Several very well-known cryptographers responded, including Rivest, Schneier, Knudsen, Biham, Rijmen, and Coppersmith, among others, and developed AES candidate algorithms that met the specified criteria. Of the 15 initial algorithms meeting the criteria, five (MARS, RC6, Rijndael, Serpent, and Twofish) were selected to enter the second round. Eventually, Rijndael prevailed [AES].

Rijndael is a block cipher designed by Joan Daemen and Vincent Rijmen as a candidate algorithm for the AES; the complete specification and AES proposal are available from the NIST Web site [AES]. The cipher has a variable block length and key length. The specification currently defines keys with a length of 128, 192, or 256 bits encrypting blocks with lengths of 128, 192, or 256 bits, where all nine combinations of key length and block length are possible. Both block length and key length can be extended very easily to multiples of 32 bits. Rijndael can be implemented very efficiently on a wide range of processors and in hardware. The design of Rijndael was strongly influenced by the design of the block cipher Square, a 128-bit block cipher designed by the same authors, whose original work concentrated on resistance against differential and linear cryptanalysis.

10.2.2.7 OpenPGP

Pretty Good Privacy (PGP) is a family of software systems developed by Philip R. Zimmermann. OpenPGP, RFC 2440 [OpenPGP], based on PGP 5.x (formerly known as PGP 3) and higher, is a hybrid method that uses a combination of strong public key and symmetric cryptography to provide security services for electronic communications and data storage. These services include confidentiality, key management, authentication, and digital signatures.

With OpenPGP, a new session key is generated as a random number for each message. This key is bound to that message and transmitted with it. To protect the key, it is encrypted with the receiver's public key. The sequence is as follows:

  1. The sender creates a message.

  2. The sending OpenPGP generates a random number to be used as a session key for this message only.

  3. The session key is encrypted using each recipient's public key; these encrypted session keys start the message.

  4. The sending OpenPGP encrypts the message using the session key, which forms the remainder of the message.

  5. The receiving OpenPGP decrypts the session key using the recipient's private key.

  6. The receiving OpenPGP decrypts the message using the session key.
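The six-step sequence can be sketched end to end. The XOR keystream cipher and the tiny RSA key below are illustrative stand-ins for the strong symmetric and public key ciphers OpenPGP actually uses:

```python
import os, hashlib

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Stand-in symmetric cipher: XOR with a SHA-256-derived keystream."""
    ks = b""
    i = 0
    while len(ks) < len(data):
        ks += hashlib.sha256(key + i.to_bytes(4, "big")).digest()
        i += 1
    return bytes(a ^ b for a, b in zip(data, ks))

# Stand-in recipient RSA key (tiny primes 61 x 53, illustration only).
N, E, D = 3233, 17, 2753

message = b"meet at noon"                     # step 1: create the message
session_key = os.urandom(16)                  # step 2: random session key
# step 3: encrypt the session key with the recipient's public key,
# one byte at a time (a real system encrypts the whole key at once)
enc_key = [pow(byte, E, N) for byte in session_key]
enc_msg = xor_cipher(session_key, message)    # step 4: encrypt the message

recovered_key = bytes(pow(c, D, N) for c in enc_key)    # step 5
assert xor_cipher(recovered_key, enc_msg) == message    # step 6
```

The transmitted bundle is `enc_key` followed by `enc_msg`; only a holder of the private exponent D can recover the session key and hence the message.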

The digital signature uses a hash code or message digest algorithm, and a public key signature algorithm. The sequence is as follows:

  1. The sender creates a message.

  2. The sending software generates a hash code of the message.

  3. The sending software generates a signature from the hash code using the sender's private key.

  4. The binary signature is attached to the message.

  5. The receiving software keeps a copy of the message signature.

  6. The receiving software generates a new hash code (using the algorithm specified in the signature) for the received message and verifies it using the message's signature. If the verification is successful, the message is accepted as authentic.
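The signing sequence can be sketched in the same toy style. The one-byte truncated hash and tiny RSA modulus are illustrative stand-ins for a full-length hash and real key sizes:

```python
import hashlib

# Stand-in sender RSA key (tiny primes 61 x 53, illustration only).
N, E, D = 3233, 17, 2753

def hash_code(message: bytes) -> int:
    """Hash truncated to one byte so it fits below the tiny modulus N."""
    return hashlib.sha256(message).digest()[0]

message = b"hello"                    # step 1: create the message
h = hash_code(message)                # step 2: hash the message
signature = pow(h, D, N)              # step 3: sign with the private key
# step 4: the signature travels attached to the message

# steps 5-6: the receiver recomputes the hash and checks it against
# the signature using the sender's public key (N, E)
assert pow(signature, E, N) == hash_code(message)
```

Only the holder of D could have produced a value that "decrypts" under the public exponent E to the message's hash, which is what makes the signature binding.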

Both digital signature and encryption services may be applied to the same message. First, a signature is generated for the message and attached to the message. Then, the message plus signature is encrypted using a symmetric session key. Finally, the session key is encrypted using public key encryption and prefixed to the encrypted block.

10.2.3 PKI Architecture

The PKI assumes an underlying infrastructure architecture comprising a Certificate Authority (CA), Registration Authority (RA) and a certificate user (see Figure 10.1) [X509]. Users of public keys require confidence that the associated private key is owned by the correct remote subject (person or system) with which an encryption or digital signature mechanism is used. This confidence is obtained through the use of public key certificates, which are data structures that bind public key values to subjects. The binding is asserted by having a trusted CA digitally sign each certificate. The CA may base this assertion on a challenge-response protocol, presentation of the private key, or an assertion by the subject. A certificate has a limited valid lifetime indicated in its signed contents. Because a certificate's signature and timeliness can be independently checked by a certificate-using client, certificates can be distributed via untrusted communications and server systems, and can be cached in unsecured storage in certificate-using systems.

Figure 10.1. The architecture assumed by X.509.

10.2.3.1 Certificates and Certificate Authorities

A public key certificate is a digitally signed statement from a CA, saying that the public key (and some other information) of another entity has some specific value. The certificate provides assurance that the holder of the private key can be trusted. One CA can issue a certificate for another CA, and so on, in a chain-like or hierarchical structure. Verifying the certificate of an issuer requires going further up in the structure until a trusted party is found. At the root level, there is no higher authority to issue a certificate, so the root certificate is self-signed. There exist some large CAs (e.g., VeriSign) that are self-signing and widely trusted.

CAs are the digital world's equivalent of passport offices. They issue digital certificates and validate the holder's identity and authority. CAs embed an individual's or an organization's public key along with other identifying information into each digital certificate and then cryptographically sign it as a tamperproof seal, verifying the integrity of the data within it and validating its use. Typically, a certificate store is used, which is persistent storage where certificates, CRLs, and Certificate Trust Lists (CTLs) are stored.

A CA hierarchy contains multiple CAs. It is organized such that each CA is certified by another CA at a higher level of the hierarchy, until the top of the hierarchy, also known as the root authority, is reached. Organizations can run their own CA, as part of a CA hierarchy, to support a wide range of applications with stronger security than user names and passwords provide. Establishing a CA in an organization allows using digital certificates to manage people and facilities. Digital certificates can be used as an integral part of email and voice-mail systems, and even for controlling access to buildings or rooms with a digital certificate embedded in a token (such as an access card).

Setting up a CA is not significantly more complex than setting up a Web server. One should use a dedicated server to serve as a CA box where the CA software resides. A Hardware Security Module (HSM) should be used to hold the private keys. A second computer or adequate computer resources are also needed for the RA software. Another key component is a database or directory of people or software agents to whom certificates are issued. Finally, there is a need for a secured physical area in which to store the CA and RA servers with controlled and monitored access.

10.2.3.2 Usage

The certificate can be used as follows:

  • A sends her certificate to B.

  • B checks the time-stamp to see if the certificate is valid.

  • B checks the identity of the CA to see if the CA is someone to trust.

  • B decrypts the signature with the CA's public key to establish who the owner of this certificate is and to obtain the owner's public key.

  • B sends a challenge encrypted with A's public key to A, to ensure that A really is A.

  • A decrypts the challenge and sends it back, proving that she holds the private key matching the public key.

  • Now B can be certain that A is A and that A holds A's private key.

10.2.3.3 Policy

Certificate policies are often needed to manage the applicability of certificates. A policy is a set of rules indicating the applicability of certificates to a specific class of applications with common security requirements. Such a policy might, for example, limit certain certificates to transactions in specific regions or price ranges.

10.2.3.4 Chains

All certificates are signed by a CA, which may in turn be signed by another CA. A root certificate is a self-signed certificate found at the top of a certificate chain. If the root certificate can be verified, the entire certificate chain can be trusted. The root certificate is trusted because the owner claims to be trustworthy. The owner of a root certificate establishes this trust by protecting the signing root private key from theft; if it is stolen, it can be used to falsely sign certificates. A root certificate is in turn a signing certificate, that is, a certificate whose private key is used to sign other certificates, thereby proclaiming that the public key of those certificates is the equivalent of the owner name and that anything done with the matching private key is performed for that named entity.

All root certificates are signing certificates, but not all signing certificates are root certificates. Any certificate can be designated as a signing certificate, in effect creating a signing hierarchy. In a hierarchy, trust is established by walking the certificate chain back to the common root certificate.

In essence, certificate chaining extends the domain of trust between CAs. This trust is established through a cross-certification process. Cross-certificates are special certificates created by the owner of a signing certificate, where the name is the same as that of the signing certificate but the certificate is signed by a different signing certificate. Together, the root, signing, and cross certificates establish trust among other certificates. For example, if the certificate chain contains three certificates and the root certificate is from VeriSign, then the client trusts VeriSign certificates and allows VeriSign to vouch for other certificates. In this case, the client trusts anyone with a VeriSign certificate.
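Chain walking can be sketched with a toy verifier. The certificates below are plain records signed with tiny RSA keys; the one-byte truncated hash and small moduli are illustrative stand-ins for real certificate formats and key sizes:

```python
import hashlib

def h(payload: bytes) -> int:
    """Hash truncated to one byte so it fits below the tiny moduli."""
    return hashlib.sha256(payload).digest()[0]

def sign(key, payload):
    return pow(h(payload), key["D"], key["N"])

def verify(pub, payload, sig):
    return pow(sig, pub["E"], pub["N"]) == h(payload)

# Two toy RSA CA key pairs (tiny primes, illustration only).
ROOT = {"N": 3233, "E": 17, "D": 2753}    # 61 x 53
INTER = {"N": 3127, "E": 3, "D": 2011}    # 53 x 59

root_cert = {"subject": "Root CA", "issuer": "Root CA",
             "pub": {"N": ROOT["N"], "E": ROOT["E"]},
             "payload": b"Root CA key material"}
root_cert["sig"] = sign(ROOT, root_cert["payload"])   # self-signed root

inter_cert = {"subject": "Inter CA", "issuer": "Root CA",
              "pub": {"N": INTER["N"], "E": INTER["E"]},
              "payload": b"Inter CA key material"}
inter_cert["sig"] = sign(ROOT, inter_cert["payload"])

leaf_cert = {"subject": "Leaf", "issuer": "Inter CA",
             "payload": b"Leaf key material"}
leaf_cert["sig"] = sign(INTER, leaf_cert["payload"])

def verify_chain(chain, trusted_root):
    """Check each certificate against its issuer's public key,
    walking back to the trusted (self-signed) root."""
    issuers = chain[1:] + [trusted_root]
    return all(verify(issuer["pub"], cert["payload"], cert["sig"])
               for cert, issuer in zip(chain, issuers))

assert verify_chain([leaf_cert, inter_cert], root_cert)
```

Trust flows downward: the verifier holds only the root, and each certificate vouches for the next one down the chain.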

10.2.3.5 Transmission

When a certificate is transmitted, it is accompanied by all of the certificates required to verify it back to the root. Therefore, the first level certificate is accompanied by all higher level certificates, each verified using the next higher level certificate. That is, the first level certificate is verified using the second level certificate, the second level is verified using the third level, and so on until finally the root CA is verified using the root key that is hard-coded in the receiver.

10.2.3.6 Reception

Certificates should not be cached by a receiver system unless they remain embedded within the signature resource in which they were delivered. If a non-root certificate is not yet valid or has expired, a receiver system may deny permissions to the application, or alternatively, query the viewer to determine whether to accept the certificate to authenticate the application that references it. However, a secure receiver should not use a non-CA certificate that is not yet valid or has expired unless the viewer expressly permits this use. A receiver system may cache the viewer's response to such a query for a given certificate and a given invocation of the application; use of different certificates or different applications should repeat this query procedure.

10.2.3.7 Authentication

Authentication is the process of identifying the organization signing, or vouching for, the data. It is important to note that there need not be a one-to-one correspondence between signer and broadcaster. Any organization or individual can sign code, not only broadcasters or those producing the content they transmit. Consider the case of Acme Xlet Corporation (AXC). AXC is under contract to exclusively develop iTV applications for NBC. If AXC's private key is compromised before its expiration date, someone else could use it to sign applications appearing in non-NBC broadcasts. In such a situation, it would clearly be in AXC's interest to revoke the compromised key as widely and as quickly as possible.

The possibility that a broadcast contains data signed by multiple originators raises the need to process multiple certificate chains, which are typically stored in hierarchical file systems. The content of a file system is authenticated recursively. The root of the authenticated tree can be the root directory of the file system or the top directory of a subtree.

10.2.3.8 Issuing Certificates

The CA certificate serves as a widely recognized trust point for validation of certificates that chain to it. To issue certificates that are automatically trusted by third party software, one needs to have the CA certificates embedded in the third party software or to cross-certify with one of the existing trusted roots. Without such cross-certification, receivers may fail to track the certificate trust chain and, as a result, viewers may be presented with a warning message stating that the certificate issuer is not trusted.

Typically, a certificate is issued by a CA as a response to a certificate request, which is a specifically formatted electronic message sent to a CA. The request must contain the information required by the CA to authenticate the request, plus the public key of the entity requesting the certificate.

10.2.3.9 Certificate Revocation List (CRL)

As part of maintaining the integrity of an organization's PKI, the administrator of a CA may have to revoke a certificate. Revocation invalidates a certificate as a trusted security credential prior to the natural expiration of its validity period. Revocation could be needed when the subject of a certificate leaves the organization, when the certificate subject's private key has been compromised, when it is discovered that a certificate was obtained fraudulently, or when some other security-related event dictates that it is no longer desirable to have the certificate considered valid.

PKI scalability relies on distributed verification of credentials, in which there is no need for direct communication with the central trusted entity that vouches for the credentials. This creates a need to distribute certificate revocation information to the individuals and software agents that verify the validity of certificates. Therefore, when a certificate is revoked by a CA, it is added to that CA's CRL, which is published by that CA and lists certificates that have been issued but are no longer valid.

10.2.3.10 The ITU-T X.509 Standard

The standard certificate format ITU-T X.509 (formerly CCITT X.509), also published as ISO/IEC 9594-8 and profiled in RFC 3280 (which obsoletes RFC 2459), was first published in 1988 as part of the X.500 directory recommendations. The certificate format in the 1988 standard is called the version 1 format. When X.500 was revised in 1993, two more fields were added, resulting in the version 2 format.

The Internet Privacy Enhanced Mail (PEM) RFCs, published in 1993, include specifications for a PKI based on X.509 v1 certificates (RFC 1422). The experience gained in attempts to deploy RFC 1422 made it clear that the v1 and v2 certificate formats were deficient in several respects. Most importantly, more fields were needed to carry information that PEM design and implementation experience had proven necessary. In response to these new requirements, ISO/IEC, ITU-T, and ANSI X9 developed the X.509 version 3 certificate format. The v3 format extends the v2 format by adding provision for additional extension fields. Particular extension field types may be specified in standards or may be defined and registered by any organization or community. In June 1996, standardization of the basic v3 format was completed [X.509].

ISO/IEC, ITU-T, and ANSI X9 have also developed standard extensions for use in the v3 extensions field [X.509]. These extensions can convey such data as additional subject identification information, key attribute information, policy information, and certification path constraints.

The X.509 standard defines what information can go into a certificate and describes how to write it down (the data format). All X.509 certificates have the following data, in addition to the signature:

  • Version : This identifies which version of the X.509 standard applies to this certificate, which affects what information can be specified in it. Thus far, three versions are defined.

  • Serial Number : The entity that created the certificate is responsible for assigning it a serial number to distinguish it from other certificates it issues. This information is used in numerous ways, for example when a certificate is revoked its serial number is placed in a CRL.

  • Signature Algorithm Identifier : This identifies the algorithm used to sign the certificate.

  • Issuer Name : The X.500 name of the entity that signed the certificate. This is normally a CA. Using this certificate implies trusting the entity that signed this certificate. In some cases, such as root or top-level CA certificates, the issuer signs its own certificate.

  • Validity Period : Each certificate is valid only for a limited amount of time. This period is described by a start date and time and an end date and time, and can be as short as a few seconds or almost as long as a century. The validity period chosen depends on a number of factors, such as the strength of the private key used to sign the certificate or the amount one is willing to pay for a certificate. This is the expected period that entities can rely on the public value, if the associated private key has not been compromised.

  • Subject Name : The name of the entity whose public key the certificate identifies. This name uses the X.500 standard, so it is intended to be unique across the Internet. This is the Distinguished Name (DN) of the entity, composed of a Common Name, Organizational Unit, Organization, and Country.

  • Subject Public Key Information : This is the public key of the entity being named, together with an algorithm identifier which specifies which public key crypto system this key belongs to and any associated key parameters.

Version 1 of the X.509 standard has been available since 1988, is widely deployed, and is the most generic. Version 2 introduced the concept of subject and issuer unique identifiers to handle the possibility of reuse of subject or issuer names over time. Most certificate profile documents strongly recommend that names not be reused, and that certificates should not make use of unique identifiers. Version 2 certificates are not widely used.

X.509 Version 3 is the most recent and supports the notion of extensions, whereby anyone can define an extension and include it in the certificate. Some common extensions in use today are: KeyUsage (limits the use of the keys to particular purposes, e.g., signing-only) and AlternativeNames (allows other identities to also be associated with this public key, e.g., DNS names, Email addresses, and IP addresses). Extensions can be marked critical to indicate that the extension should be checked and enforced. For example, if a certificate has the KeyUsage extension marked critical and set to keyCertSign then if this certificate is presented during Secure Socket Layer (SSL) communication, it should be rejected, as the certificate extension indicates that the associated private key should only be used for signing certificates and not for SSL use [SSL].

All the data in a certificate is encoded using two related standards called ASN.1/DER. Abstract Syntax Notation One (ASN.1) describes data. The Distinguished Encoding Rules (DER) describe a single way to store and transfer that data. People have been known to describe this combination simultaneously as powerful and flexible and as cryptic and awkward.
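To give a flavor of what DER looks like on the wire, the sketch below encodes a non-negative INTEGER as a DER tag-length-value triple. This is a minimal illustration under stated assumptions (short-form lengths only, no negative values); real ASN.1 toolkits handle far more:

```python
def der_encode_integer(n: int) -> bytes:
    """Minimal DER encoding of a non-negative INTEGER (tag 0x02)."""
    assert n >= 0
    # Value: big-endian, minimal number of octets (at least one).
    body = n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")
    if body[0] & 0x80:          # a set high bit would read as negative, so
        body = b"\x00" + body   # prepend a zero octet (two's complement form)
    # Short-form length suffices for values under 128 octets.
    return bytes([0x02, len(body)]) + body

print(der_encode_integer(5).hex())    # 020105
print(der_encode_integer(200).hex())  # 020200c8
```

Note that DER, unlike the more permissive BER, allows exactly one encoding for each value, which is why it is used where byte-for-byte reproducibility matters, such as signed certificate data.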

10.2.3.11 Embedded Certificate Issues

Receivers are likely to be shipped with their own certificate repositories, carrying the root and signing certificates they trust. These roots need to be published to various participants of the iTV food chain, including broadcasters. Without middleware upgrades and transmission of certificates, there is a concern that one of these certificates may expire. This model has a number of shortcomings: it has no support for certificate revocation and provides no way to acquire a viewer certificate based on the user name. Addressing these issues typically requires support for CRLs and certificate repositories that are linked by signing or cross-certifying and publishing all of their information in business-class repositories.

10.2.3.12 PKI Interoperability and Deployment

Clearly, without the appropriate open standards it is not possible to achieve interoperability across the food chain and address issues such as the use of embedded certificates. To enable interoperability, X.509 (see later section) standardizes the certificate and CRL format.

Although the PKI in general, and X.509 in particular, is being fielded in increasing size and numbers, there are still many unanswered questions about the ways in which PKI is organized and operated in large scale systems. Some of these questions involve the ways in which individual CAs are interconnected. Others involve the ways in which revocation information is distributed. In a 1994 report, the MITRE Corporation suggested that the distribution of revocation information has the potential to be the most costly aspect of running a large scale PKI [AES].

The MITRE report assumed that each CA would periodically issue a CRL that listed all of the unexpired certificates that it had revoked. Since the MITRE report was published, several alternative revocation distribution mechanisms have been proposed. Each of these mechanisms has its own relative advantages and disadvantages in comparison to the other schemes. NIST has created mathematical models of some of the proposed revocation distribution mechanisms. These models were used to determine under what circumstances each of the mechanisms is most efficient.

Most of the proposed revocation distribution mechanisms have involved variations of the original CRL scheme. Examples include the use of segmented CRLs and delta-CRLs. However, some schemes do not involve the use of any type of CRL (e.g., on-line certificate status protocols and hash chains).
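The delta-CRL idea can be sketched with a simplified model: a full (base) CRL lists all revoked serial numbers, and each delta-CRL lists only the serials revoked since the base was issued, so clients download far less data per update. The function name and the use of plain serial-number sets are assumptions of this sketch; real delta-CRLs also carry issue numbers and can drop entries for certificates that have since expired:

```python
def effective_revoked(base_crl: set, delta_crls: list) -> set:
    """Combine a base CRL with the delta-CRLs issued after it."""
    revoked = set(base_crl)          # copy; the base CRL itself is unchanged
    for delta in delta_crls:
        revoked |= delta             # each delta adds newly revoked serials
    return revoked

base = {101, 202, 303}               # serials revoked in the full CRL
deltas = [{404}, {505, 606}]         # two subsequent delta-CRLs
revoked = effective_revoked(base, deltas)
print(505 in revoked)  # True: revoked via the second delta
print(707 in revoked)  # False: never revoked
```

The bandwidth saving is the point: a relying party refreshes only the small deltas between full-CRL downloads, which is one of the trade-offs the NIST models compare.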

10.2.4 Application Trust Issues

iTV applications should be authorized to perform certain privileged operations, such as changing the display configuration, automatically tuning the receiver, or accessing the Internet, only if the application has been determined to be trusted and granted permission to perform these operations. Before granting authorization to perform privileged operations, the middleware must therefore determine whether the application is trusted.

10.2.4.1 Application Authentication and Trust Determination

Most iTV standards require that trusted applications be transmitted with a file containing signatures; this is referred to as the signature resource. Further, a reference (e.g., a URL or LID) to such a resource is expected to be signaled in the transport stream by the broadcaster. The determination of whether an iTV application is trusted should be made only once, by the middleware, after loading the application code into memory but before it is executed. The trust determination should remain in effect until the application is terminated or replaced.

Once an application is designated to be trusted, the receiver's middleware should determine the permissions to be granted to the application. Each time an iTV application attempts to perform a privileged operation, the middleware should determine if the operation is authorized (i.e., permitted) by examining the application's effective permissions (this is essentially supported by the fine-grained Java security model). Although privileged operations should only be granted to trusted applications, a trusted application need not be granted all of the privileged operations.
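The authorization step above can be modeled with a short sketch. The class, method, and permission names are hypothetical, chosen only to illustrate the check against an effective permission set:

```python
class AppContext:
    """Per-application security context recorded at trust determination."""

    def __init__(self, trusted: bool, granted: set):
        self.trusted = trusted
        # Untrusted applications receive no privileged permissions at all;
        # trusted applications receive only the permissions explicitly granted.
        self.granted = set(granted) if trusted else set()

    def check(self, operation: str) -> bool:
        """True if this privileged operation is in the effective permissions."""
        return self.trusted and operation in self.granted

app = AppContext(trusted=True, granted={"tune", "display.config"})
print(app.check("tune"))             # True: trusted and granted
print(app.check("internet.access"))  # False: trusted, but not granted
```

This mirrors the point in the text: trust is a prerequisite for any privileged operation, but each operation is still checked individually against the granted set.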

10.2.4.2 Resource Authentication

For a trusted application to access an application resource (e.g., a file), the resource may require authentication. A resource is authenticated by verifying that the digest of the resource has not changed from when the digest was recorded in the effective signature or in a manifest resource that can be reached by one or more steps of indirection from the effective signature.
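Digest-based resource authentication can be sketched as follows. The manifest layout, resource name, and the choice of SHA-1 are assumptions of this example, not requirements of any particular iTV standard:

```python
import hashlib

def authenticate_resource(data: bytes, expected_digest: str) -> bool:
    """Accept a resource only if its digest matches the recorded value."""
    return hashlib.sha1(data).hexdigest() == expected_digest

# The (signed) manifest records a digest per resource at authoring time.
manifest = {"app/main.class": hashlib.sha1(b"original bytes").hexdigest()}

# At access time, the middleware recomputes and compares the digest.
print(authenticate_resource(b"original bytes", manifest["app/main.class"]))  # True
print(authenticate_resource(b"tampered bytes", manifest["app/main.class"]))  # False
```

Because the manifest digests are themselves covered (directly or through indirection) by the effective signature, a match proves the resource is the one the signer vouched for.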

If an attempt is made to access a resource during the processing of a trusted application and resource authentication fails, then the application environment may abort the application.



ITV Handbook: Technologies and Standards
ISBN: 0131003127
Year: 2003
Pages: 170