7.2 Mobile Security


Mobile security is often overlooked [3]. The lack of data security on current mobile application platforms is one of the biggest hindrances to mobile commerce adoption, especially in the corporate world. Even with mandatory HTTPS support in Mobile Information Device Profile (MIDP) 2.0, the standard Java 2 Platform, Micro Edition (J2ME) platforms still cannot support versatile security solutions such as those in the Java 2 Platform, Standard Edition (J2SE) world. Currently, to develop advanced mobile security solutions, we must rely on toolkits from third-party vendors.

7.2.1 An Introduction to Certificates

Certificates come in many shapes and sizes, but they all play the same role. At the barest minimum, a certificate is a document that contains information identifying an entity (in the case of X.509 certificates, using that entity's X.500 Distinguished Name [DN]) and the entity's public key. Another entity digitally signs and therefore certifies both pieces of information. If you believe that the signing entity is honest and that its private key has not been compromised, then you can safely assume that the signing entity believes that the public key belongs to the named entity. Depending on how much you trust the signing entity, that certification may be all that's required for conducting business.

In order to validate the signature of the signing entity, you need the signing entity's public key. That public key is often stored in a self-signed certificate: a certificate digitally signed by the same entity whose public key is contained within the certificate. That certificate, to be effective, must be generally distributed (as part of a software package, for example) and easily verified (via a published SHA-1 or MD5 hash, for example). The simplest practical arrangement consists of a chain of two certificates. One certificate contains the public key of the entity with which you wish to communicate. The second certificate, or root certificate, contains the public key of the entity that certified the first certificate. That arrangement is illustrated in Figure 7.2.


Figure 7.2: A certificate chain.

The root certificate must be generally available and its validity easily verified. Furthermore, all parties planning to take part in the secure interaction must trust the issuer. In light of that trust, and as an indication of their ability and willingness to create signed certificates for other entities, the root certificate's creator is called a Certificate Authority (CA). In practice, transmitting the entire certificate chain between entities is often unnecessary.

Many applications (popular Web browsers and servers, for example) are preconfigured with a set of acceptable root certificates from well-known CAs. As such, the entities represented by those applications only need to send the certificate containing their public key, as illustrated in Figure 7.3.


Figure 7.3: A solo certificate.

The certificate chain could also contain intermediate certificates between the root certificate and the ultimate certificate containing the public key of interest. Figure 7.4 illustrates this arrangement. In such a case, each certificate in the chain may be validated by the next in the chain until the root certificate is reached. Typically, you only encounter certificate chains of that length in situations involving CA mutual authentication.


Figure 7.4: A long certificate chain.

Certificates come in multiple formats. Popular types include X.509 certificates, Pretty Good Privacy (PGP) certificates, and Simple Distributed Security Infrastructure (SDSI) certificates. The PGP certificate format was the first to achieve widespread usage. Java supports the X.509 format, an international standard created by the International Telecommunication Union (ITU).

7.2.2 Intermediate Certificates

On the Internet, the public has come to accept that identity is established by trusted authorities that vouch for other entities' identities. These authorities perform various checks on entities wishing to get a Secure Socket Layer (SSL) certificate. There are really just two types of certificate issuances: root and chained. Root certificates are issued from the highest level of the trust hierarchy, and most are stored directly in the Web browser software. The owners of those root certificates are commercial entities that are generally recognized as trustworthy. These entities sometimes sell subordinate certificates, called chained or intermediate certificates, which sit (at least) one step removed from the top of the trust hierarchy.

For users, these certificates are (generally speaking) of equal strength; however, chained certificates require a more complicated installation process. A chained certificate is validated during the SSL handshake by checking it first against the intermediate certificate authority and then against the corresponding root certificate authority, which means installing multiple certificates. Intermediate certificates are only compatible with browsers and Web server software that are SSLv3 compliant, and their use requires you to install the intermediate certificate as well as your SSL certificate. This can require additional technical expertise and resources.

7.2.3 Certificate Chain

Certificate validation is a recursive process. It begins with the need to verify the signature on some data presented by an End Entity (EE), which involves checking that the key pair is actually owned by that EE. To do this, the EE's public signing key is acquired by getting its certificate. That certificate would have been signed by the EE's CA, so the signature on the certificate can be verified by getting the CA's public signing key. In turn, the CA's certificate may need verifying, in which case the process repeats until it bottoms out at an entity that is already trusted; that entity's certificate is usually self-signed.

The set of certificates from an EE up to a trusted root CA certificate is called a certificate chain. Once a certificate chain has been constructed, the EE's key pair at the start can be validated. This process is illustrated in Figure 7.5.

Figure 7.5: Certificate validation process.
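The recursion described above can be sketched in ordinary Java. The example below is a deliberately simplified model, not real X.509 handling: each "certificate" is reduced to a subject name, a public key, and the issuer's signature over that key (parsing, expiry, and revocation checks are all omitted), but the link-by-link verification loop mirrors the process just described.

```java
import java.security.*;
import java.util.*;

// Simplified model of certificate-chain validation. Class and field
// names here are illustrative, not from any real toolkit.
public class ChainValidationSketch {

    /** Minimal stand-in for a certificate. */
    static final class SimpleCert {
        final String subject;
        final PublicKey subjectKey;   // the key being certified
        final byte[] issuerSignature; // issuer's signature over subjectKey
        SimpleCert(String subject, PublicKey subjectKey, byte[] issuerSignature) {
            this.subject = subject;
            this.subjectKey = subjectKey;
            this.issuerSignature = issuerSignature;
        }
    }

    static byte[] sign(PrivateKey issuerKey, PublicKey subjectKey) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(issuerKey);
        s.update(subjectKey.getEncoded());
        return s.sign();
    }

    /** Walk the chain from the EE toward the trusted root, verifying each link. */
    public static boolean validate(List<SimpleCert> chain, PublicKey trustedRootKey)
            throws Exception {
        // The issuer of certificate i is the subject of certificate i+1;
        // the last certificate is checked against the trusted root key.
        for (int i = 0; i < chain.size(); i++) {
            PublicKey issuerKey = (i + 1 < chain.size())
                    ? chain.get(i + 1).subjectKey
                    : trustedRootKey;
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initVerify(issuerKey);
            s.update(chain.get(i).subjectKey.getEncoded());
            if (!s.verify(chain.get(i).issuerSignature)) return false;
        }
        return true;
    }

    public static boolean demo() throws Exception {
        KeyPairGenerator g = KeyPairGenerator.getInstance("RSA");
        g.initialize(2048);
        KeyPair root = g.generateKeyPair(); // trusted (self-signed in practice)
        KeyPair ca = g.generateKeyPair();   // intermediate CA
        KeyPair ee = g.generateKeyPair();   // end entity

        List<SimpleCert> chain = new ArrayList<>();
        chain.add(new SimpleCert("EE", ee.getPublic(), sign(ca.getPrivate(), ee.getPublic())));
        chain.add(new SimpleCert("CA", ca.getPublic(), sign(root.getPrivate(), ca.getPublic())));
        return validate(chain, root.getPublic());
    }

    public static void main(String[] args) throws Exception {
        System.out.println("chain valid: " + demo());
    }
}
```

In real applications the same bottom-up walk is performed over parsed X.509 certificates, typically via java.security.cert.CertPathValidator rather than by hand.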

7.2.4 Management of Certificates with a PKI

In order to protect critical applications and data and comply with new regulatory requirements, public key infrastructure (PKI) has recently increased in popularity in the banking, financial, and health care industries and in areas where the protection of proprietary data is imperative. In the wired world, it is used to provide privacy, integrity, authentication, and nonrepudiation. Increasingly, wireless networks will be required to provide the same basic security functionality as the wired world to meet the minimum accepted standards for security expected by users, customers, and partners. As the support and demand for WLAN PKI solutions increase and the cost, complexity, and interoperability issues decrease, it is important for the wireless practitioner to be aware of the function and use of PKI.

Figure 7.6: Public key encryption and decryption as used in a PKI.

What is PKI?

A PKI is a set of technologies that enables an organization to ensure that levels and forms of trust similar to those in the physical world are implemented in the digital world. It includes the hardware, software, people, policies, and procedures needed to create, manage, store, distribute, and revoke certificates. Public-key cryptography is a form of encryption based on the use of two mathematically related keys (the public key and the private key) such that one key cannot be derived from the other. PKI provides a security architecture offering user authentication, data confidentiality, message authentication and integrity, and nonrepudiation. At a minimum, an effective PKI should address the following issues [4]:

  • Privacy. Ensures that two parties can send and receive data without any other party gaining access to it.

  • Authentication. Ensures that the communicating individuals or entities are who they claim to be.

  • Nonrepudiation. Ensures that electronic events, such as signed contracts and wire transfers, cannot be disavowed.

  • Access. Provides 24-hour-a-day, 7-day-a-week availability to people with authorized access to particular services.

  • Scalability. Allows a secured network to expand as demand increases.

  • Security. Beyond privacy, ensures confidentiality using physically and electronically impregnable links over private and public networks, such as intranets, extranets, and the Internet.
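The underlying mechanism for these guarantees is public key encryption and decryption, as pictured in Figure 7.6. The following minimal sketch uses the standard JCA/JCE APIs; the algorithm name, padding, and key size are illustrative choices, not prescribed by PKI itself.

```java
import java.nio.charset.StandardCharsets;
import java.security.*;
import javax.crypto.Cipher;

// Sketch of public key encryption/decryption as used in a PKI.
public class PkiCryptoSketch {
    public static String roundTrip(String plaintext) throws Exception {
        KeyPairGenerator g = KeyPairGenerator.getInstance("RSA");
        g.initialize(2048);
        KeyPair pair = g.generateKeyPair();

        // Anyone holding the public key (e.g., from a certificate) can encrypt...
        Cipher enc = Cipher.getInstance("RSA/ECB/PKCS1Padding");
        enc.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] ciphertext = enc.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

        // ...but only the private-key holder can decrypt.
        Cipher dec = Cipher.getInstance("RSA/ECB/PKCS1Padding");
        dec.init(Cipher.DECRYPT_MODE, pair.getPrivate());
        return new String(dec.doFinal(ciphertext), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("confidential order"));
    }
}
```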

General Overview of the PKI Process

PKI conducts transactions by attaching digital certificates to messages between different parties; digital signatures, built from cryptographic ciphers, authenticate these transactions. The Electronic Signatures Act, in force since October 2000, is the law [5] that allows digital signatures to carry the same contractual weight and effect as traditional signatures in the United States. A CA issues a digital certificate. After a person or company is certified, it is usually identified in an initial communication, and the receiving party associates it with a given digital signature as part of a digital certificate. This encrypted identifier prevents hackers from reading or altering the content of the transmission. The digital signature acts as a private key to unlock an encrypted message, but the process also requires a public key to transmit the message through the network; the corresponding public key is held in a public directory.

The Certificate Authority

The CA issues the actual certificates and implements the defined policies and procedures on how those certificates are to be utilized. These policies and procedures are detailed in the Certificate Policy (CP) and Certificate Practice Statement (CPS). A CA generates, updates, and manages certificates, signs certificates, stores users' private keys, generates and publishes the Certificate Revocation List (CRL), and cross-certifies other CAs. Depending on the application or role, any organization that has the ability to verify the binding between a public key and an entity can be a CA. The relationship of the CA hierarchy is shown in Figure 7.7.

Figure 7.7: Certificate authority process hierarchy.

The Registration Authority

The Registration Authority (RA) is an optional component in the PKI and is a subordinate server to which a CA can delegate management functions. A variety of authentication tasks can be performed by the RA, including reporting on revoked certificates, generating keys, and archiving key pairs. The RA can be used to distribute functionality across the network to increase the scalability of the implementation across an organization. Delegation has a drawback, however: it lengthens the security loop that must be managed.

The Digital Signature

A digital signature is the encryption of a message (or, more commonly, a hash of the message) with a private key. The problem is that while a digital signature authenticates the document up to the point of the signing computer, it does not authenticate the link between that computer and the signer [6]. Figure 7.8 illustrates how digital signatures work.

Figure 7.8: The digital signature process.
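The sign-then-verify flow can be shown concretely with the standard java.security.Signature API, which performs the hash-and-encrypt step internally. This is a J2SE-style sketch (algorithm name and key size are illustrative); on MIDP devices the equivalent would come from a third-party toolkit, as discussed later in this section.

```java
import java.nio.charset.StandardCharsets;
import java.security.*;

// Sketch of the basic digital signature process.
public class DigitalSignatureSketch {
    public static boolean signAndVerify(byte[] document) throws Exception {
        KeyPairGenerator g = KeyPairGenerator.getInstance("RSA");
        g.initialize(2048);
        KeyPair pair = g.generateKeyPair();

        // Signer: hash the document and encrypt the hash with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(document);
        byte[] sig = signer.sign();

        // Verifier: recompute the hash and check it against the signature
        // using the signer's public key (normally taken from a certificate).
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(document);
        return verifier.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        byte[] doc = "wire transfer: $100".getBytes(StandardCharsets.UTF_8);
        System.out.println("signature valid: " + signAndVerify(doc));
    }
}
```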

The Digital Certificate

A digital certificate is an electronic document that binds together pieces of information, including a name, a serial number, expiration dates, a copy of the certificate holder's public key, and the digital signature of the CA, so that a recipient can verify that the certificate is authentic. In this way, the identity of the message sender or the document signer is authenticated, and there is assurance that the original content of the message or document has not been altered. A user's private/public key pair and a passcode are used to protect the certificate. The CA that issues a certificate determines its value. For example, just as it is possible for a teenager to obtain a worthless identification card, it is also possible to get a worthless, cryptographically strong digital certificate.

Certificate Distribution

The CA is the trusted third party and must have a means to distribute certificates, so users and applications can use them. A directory is a certificate repository that stores certificates so applications can retrieve them on behalf of users. The Lightweight Directory Access Protocol (LDAP), discussed elsewhere in the book, has become the directory of choice for many PKI systems. LDAP is popular because it can support a huge number of users, is very scalable and distributed, responds efficiently to search requests, and is an open standard (RFC 1777). LDAP also has value in that it is an alternative to X.500, which is more complex than what is needed for PKI. Directories are an efficient means for certificate storage and retrieval within a PKI system. The CA populates its directories with certificates and CRLs. The directory can then be used by client applications to retrieve the certificate based on a parameter such as name or e-mail address. Clients can also check the CRL to determine whether an individual certificate is revoked or not.

Certificate Revocation

Many certificates have a long lifetime, and it is possible for certificates that are no longer trustworthy and that have not expired to still be in the system. The CA must revoke untrustworthy certificates. Reasons that a certificate may be revoked include a compromised or stolen private key, a forgotten user passphrase, a user who resigns or is terminated, or a change in corporate policy. The revocation status of a certificate must be checked before each use, and the users and applications must be informed that the continued use of the certificate is no longer considered secure. This requires that PKI must incorporate some type of revocation system. The CA must securely publish information regarding the status of each certificate in the system. The application software, on behalf of users, must then verify the revocation information before each use of a certificate. In most cases, the CA creates secure CRLs through the use of a digital signature, and pushes these CRLs to the directory. The CRL will specify the unique serial numbers of all revoked certificates. The client-side application must check the appropriate CRL before using a certificate to determine if the certificate is still trustworthy. This process must be done consistently and transparently on behalf of users.

The CRL can become very large because the CRL issued by a CA must include every certificate issued by that CA that has subsequently been revoked but has not yet expired. The lifetime of the certificates and the size of the user base factor into the size of the CRL. As a result, for a large CA the bandwidth required to distribute the CRL can become very high, making it impractical for large organizations to support standard CRLs. This is an important consideration when designing a PKI solution for use in a WLAN.
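The client-side check can be modeled in a few lines. A real application would download a signed java.security.cert.X509CRL from the directory and call its isRevoked method; the toy model below reduces the CRL to a set of revoked serial numbers plus a next-update timestamp, purely to show the two checks a client must perform (freshness, then membership). All names here are illustrative.

```java
import java.math.BigInteger;
import java.util.*;

// Toy model of client-side CRL checking.
public class CrlCheckSketch {
    final Set<BigInteger> revokedSerials;
    final Date nextUpdate; // after this, the CRL is stale and must be refreshed

    CrlCheckSketch(Set<BigInteger> revokedSerials, Date nextUpdate) {
        this.revokedSerials = revokedSerials;
        this.nextUpdate = nextUpdate;
    }

    /** Returns true if the certificate with this serial may still be used. */
    public boolean isUsable(BigInteger serial, Date now) {
        if (now.after(nextUpdate)) {
            throw new IllegalStateException("CRL is stale; fetch a fresh one");
        }
        return !revokedSerials.contains(serial);
    }

    public static void main(String[] args) {
        Set<BigInteger> revoked = new HashSet<>();
        revoked.add(BigInteger.valueOf(1042)); // e.g., a compromised key
        CrlCheckSketch crl = new CrlCheckSketch(
                revoked, new Date(System.currentTimeMillis() + 3_600_000));
        System.out.println(crl.isUsable(BigInteger.valueOf(7), new Date()));    // true
        System.out.println(crl.isUsable(BigInteger.valueOf(1042), new Date())); // false
    }
}
```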

PKI Policy

Policy is a critical element in the effective and successful operation of a PKI. PKI cannot be effective if it is deployed without a set of working policies that govern the use, administration, and management of certificates. The certificate policy defines the controls for the use of certificates. The X.509 standard defines certificate policy as "a named set of rules that indicates the applicability of a certificate to a particular community and/or class of application with common security requirements." The CPS defines the practices that a CA employs in managing the certificates that it issues. The CPS should describe how the certificate policy is interpreted in the context of the system architecture and operating procedures of the organization. The lack of a CPS in a PKI will result in ambiguity and confusion regarding who is responsible for what.

Risk Analysis: Do You Really Need PKI?

As with other technologies, before you get started, conduct a risk analysis to see whether you can really benefit from a PKI. The process and information flows involved in the proposed PKI system should be studied, focusing on the weakest links, both systems and personnel. Personnel must be assessed because human weakness can always be exploited at a cost far lower than that of attacking the technology.

7.2.5 Ubiquity

Ostensibly, ubiquity is the percentage of browsers that will recognize an SSL certificate as valid. Browsers know which certificates can be trusted because a list is embedded within the browser software, and certificate authorities lobby browser makers to include their certificates. As software changes, certificates are chained, SSL itself changes, and old code is left behind; when presented with newer certificates, older browsers may generate transaction errors. Some vendors make lots of noise about ubiquity, in some cases to distract buyers from other product deficiencies. There is precious little science to determining these values, and really, anything greater than 95 percent will provide more than adequate acceptance. Very old browsers will always have a problem.

7.2.6 Authentication and Java

One way to provide authentication is to use SSL, which is available for Java in the Java Secure Socket Extension (JSSE). JSSE handles authentication among communicating processes using X.509 technology and provides encryption support using various encryption algorithms of assorted strengths. For many applications, this is the way to go if you want an out-of-the-box solution and can guarantee that both sides provide SSL support.
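On the J2SE side, getting started with JSSE takes little code: the platform's default SSLContext already carries a trust store preloaded with well-known CA root certificates. The sketch below only inspects the default context (opening a real connection would use the factory's createSocket method with a host and port); CLDC devices would instead need a third-party SSL implementation, as noted later in this section.

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;

// Minimal JSSE sketch: obtain the platform's default SSL machinery.
public class JsseSketch {
    public static int supportedSuiteCount() throws Exception {
        SSLContext ctx = SSLContext.getDefault();       // default trust/key managers
        SSLSocketFactory factory = ctx.getSocketFactory();
        return factory.getSupportedCipherSuites().length;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("supported cipher suites: " + supportedSuiteCount());
    }
}
```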

7.2.7 Authentication

Consider the following interactions between a client and a server, which are typical of both SSL-enabled applications (although hidden from view) and the custom applications built using X.509 technology:

  1. The client opens a connection to the server and asks the server to authenticate itself.

  2. The server authenticates itself and, optionally, asks the client to authenticate itself. Client authentication, while possible with SSL, is seldom used in most SSL transactions; however, for enterprise applications in which auditing of all transactions is important, client authentication provides the only way to determine for sure that the client's claimed identity is legitimate.

  3. The client authenticates itself. If the client desires an encrypted connection, it takes steps to establish one. Server authentication and client authentication essentially mirror each other.

  4. The client begins the transaction.

7.2.8 Content-Based Security

HTTPS, SSL, and Transport Layer Security (TLS) are connection-based security protocols. The basic idea is to secure communication channels and, hence, secure everything that passes through those channels. This approach has several problems. Figure 7.9 illustrates a mobile transaction involving multiple intermediaries.

Figure 7.9: A mobile transaction involving multiple intermediaries.
  • A direct connection between client and server must be established. If our application has multiple intermediaries to provide value-added services, multiple HTTPS connections must be piped together. That not only opens potential security holes at connecting nodes, but also creates a public key certificate management nightmare.

  • All content is encrypted. In some application scenarios, such as broadcasting stock quotes or getting multilevel approval of a transaction, parts of the communication should be open. Yet, we still want to verify the authenticity of those quotes and approval signatures. Connection-based security is of no use here. Unnecessarily encrypting all content also introduces more processing overhead.

  • HTTPS is inflexible for applications with special security and performance requirements. It lacks support for custom handshake or key exchange mechanisms. For example, HTTPS does not require clients to authenticate themselves. Another example is that any minor digital certificate-formatting problem causes the entire HTTPS handshake to fail. The developer has no way to specify which errors can be tolerated. Other connection channel-based security technologies, such as VPN, have similar problems. For future mobile commerce applications, we must secure content rather than channels.
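Content-based security can be illustrated with standard JCE primitives: sign the whole message so any intermediary can verify authenticity, but encrypt only the sensitive field, leaving the rest readable in transit. The sketch below keeps everything in one process for simplicity; the field names, AES/RSA choices, and key handling are illustrative assumptions, not a prescribed protocol.

```java
import java.nio.charset.StandardCharsets;
import java.security.*;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Sketch: sign the whole message, encrypt only the confidential part.
public class ContentSecuritySketch {

    public static boolean demo() throws Exception {
        String openPart = "QUOTE IBM 84.50";    // stays readable by intermediaries
        String secretPart = "account=12345678"; // must remain confidential

        // Symmetric key shared between sender and final recipient.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey aesKey = kg.generateKey();

        Cipher enc = Cipher.getInstance("AES");
        enc.init(Cipher.ENCRYPT_MODE, aesKey);
        byte[] encryptedSecret = enc.doFinal(secretPart.getBytes(StandardCharsets.UTF_8));

        // Sign the open part plus the ciphertext so any party can check authenticity.
        KeyPairGenerator g = KeyPairGenerator.getInstance("RSA");
        g.initialize(2048);
        KeyPair pair = g.generateKeyPair();
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(openPart.getBytes(StandardCharsets.UTF_8));
        signer.update(encryptedSecret);
        byte[] sig = signer.sign();

        // An intermediary verifies the signature without decrypting anything.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(openPart.getBytes(StandardCharsets.UTF_8));
        verifier.update(encryptedSecret);
        boolean authentic = verifier.verify(sig);

        // Only the recipient, holding the AES key, recovers the secret field.
        Cipher dec = Cipher.getInstance("AES");
        dec.init(Cipher.DECRYPT_MODE, aesKey);
        String recovered = new String(dec.doFinal(encryptedSecret), StandardCharsets.UTF_8);
        return authentic && recovered.equals(secretPart);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("content secured and verified: " + demo());
    }
}
```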

7.2.9 Distributed Access Control

Mobile applications often interact with multiple back-end servers, pull data from them as needed, and assemble personalized displays for users. Each information service provider might have its own user authentication and authorization protocols, as seen in Figure 7.10. It is a major inconvenience for mobile users to sign on to each back-end server manually. One way to combat this problem is with single sign-on services. Single sign-on servers manage user profiles and provide time-stamped access tokens, such as Kerberos tickets, to authenticated users. Service providers interact with single sign-on servers to validate tokens. Being a one-to-one protocol, HTTPS is ill-suited to single sign-on schemes.

Figure 7.10: Sign-on process involving an authentication server.

Single sign-on domains can form alliances and federations. Allied domains recognize tokens from each other. Important single sign-on alliances include Microsoft .Net Passport and Sun Microsystems' Liberty Alliance Project. Figure 7.11 illustrates the structure of federated single sign-on domains. To integrate into single sign-on service domains, smart mobile clients must be able to handle security tokens. Those tokens are often cryptographic hashes with attached digital signatures.

Figure 7.11: Federation of single sign-on domains.
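The structure of such a time-stamped token can be sketched with a keyed hash. The toy scheme below (the token format, field layout, and HmacSHA1 choice are all illustrative assumptions, not taken from Passport or Liberty) lets a sign-on server issue "user|expiry|hmac" strings that a service provider holding the shared key can validate without contacting the server for every request.

```java
import java.nio.charset.StandardCharsets;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Toy time-stamped access token: HMAC over user name and expiry time,
// keyed with a secret shared by the sign-on server and service provider.
// (Toy limitation: user names must not contain the '|' separator.)
public class SsoTokenSketch {
    static byte[] hmac(byte[] key, String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key, "HmacSHA1"));
        return mac.doFinal(data.getBytes(StandardCharsets.UTF_8));
    }

    /** Sign-on server issues "user|expiryMillis|hex(hmac)". */
    public static String issue(byte[] key, String user, long expiryMillis) throws Exception {
        String payload = user + "|" + expiryMillis;
        return payload + "|" + toHex(hmac(key, payload));
    }

    /** Service provider validates the token against the shared key and clock. */
    public static boolean validate(byte[] key, String token, long nowMillis) throws Exception {
        String[] parts = token.split("\\|");
        if (parts.length != 3) return false;
        long expiry = Long.parseLong(parts[1]);
        String payload = parts[0] + "|" + parts[1];
        return nowMillis < expiry && toHex(hmac(key, payload)).equals(parts[2]);
    }

    static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder();
        for (byte b : bytes) sb.append(String.format("%02x", b & 0xff));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "shared-secret".getBytes(StandardCharsets.UTF_8);
        String token = issue(key, "alice", System.currentTimeMillis() + 60_000);
        System.out.println("token valid: " + validate(key, token, System.currentTimeMillis()));
    }
}
```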

7.2.10 Device Security

Mobile devices are easy to steal or lose. We must prevent unauthorized personnel from accessing a device's sensitive data. For example, your company's financial data or private keys should not be recovered from a stolen mobile device. On-device information security is one of the most important challenges we face today. HTTPS does not support on-device information security. Mobile clients are responsible for protecting their own data. Strong passwords usually protect on-device information.
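Password-protected on-device storage is commonly built by deriving an encryption key from the user's password. The sketch below uses the standard PBKDF2 key-derivation and AES cipher from JCE; the salt, iteration count, and method names are illustrative choices, and a constrained MIDP device would use a third-party toolkit's equivalents instead.

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

// Sketch of password-based protection for on-device data.
public class OnDeviceStorageSketch {

    /** Derive an AES key from the password with PBKDF2 (salted, iterated). */
    public static byte[] deriveKey(char[] password, byte[] salt) throws Exception {
        SecretKeyFactory f = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1");
        return f.generateSecret(new PBEKeySpec(password, salt, 10_000, 128)).getEncoded();
    }

    public static byte[] encrypt(byte[] key, byte[] iv, byte[] plaintext) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return c.doFinal(plaintext);
    }

    public static byte[] decrypt(byte[] key, byte[] iv, byte[] ciphertext) throws Exception {
        Cipher c = Cipher.getInstance("AES/CBC/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
        return c.doFinal(ciphertext);
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16], iv = new byte[16];
        SecureRandom rnd = new SecureRandom();
        rnd.nextBytes(salt); // store salt and IV alongside the ciphertext
        rnd.nextBytes(iv);
        byte[] key = deriveKey("correct horse".toCharArray(), salt);
        byte[] ct = encrypt(key, iv, "Q3 financials".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(decrypt(key, iv, ct), StandardCharsets.UTF_8));
    }
}
```

Without the password, a thief who extracts the stored bytes recovers only ciphertext, salt, and IV.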

7.2.11 Lightweight Mobile Cryptography Toolkits

To take advantage of advanced security technologies, mobile developers must have programmatic access to cryptographic algorithms. Here we discuss third-party J2ME cryptography toolkits. Those toolkits let us implement flexible solutions meeting the aforementioned requirements. They prove crucial to the Connected Limited Device Configuration (CLDC)/MIDP platform because standard CLDC/MIDP does not provide any cryptography APIs. High-end J2ME platforms, such as profiles based on the Connected Device Configuration (CDC) or PersonalJava, can optionally support the java.security package in the Java Cryptography Architecture (JCA) but not the javax.crypto package. As a result, crucial security APIs such as encryption/decryption ciphers are missing from all of these standard profiles. Even for APIs in the java.security package, the bundled JCA provider might not implement the proprietary algorithm we need or might have an inefficient implementation. So, for high-end J2ME devices, lightweight toolkits also prove essential. Here are the general requirements for a toolkit suitable for mobile commerce:

  • Fast. Mobile devices are personal devices that must be responsive. On the other hand, they have slow CPUs, and Java is not known for its raw performance. Handling CPU-intensive cryptography tasks, especially public key algorithms, at an acceptable speed on J2ME devices is a big challenge.

  • Small footprint. Most modern comprehensive cryptography packages consume several megabytes of storage space; however, an MIDP phone might have only 100 KB of storage space. We must balance features with footprint.

  • Comprehensive algorithm support. A cryptography package's goal is to support flexible security schemes. Such flexibility comes from the ability to choose from a range of algorithms. Important cryptographic algorithms include the following:

    • Symmetric key encryption

    • Public key encryption

    • Digital signatures

    • Password-based encryption

  • Sensible APIs. To support a wide range of algorithms through a consistent interface, cryptography package APIs often have multiple layers of abstraction and complex inheritance structures; however, a complex API will hinder adoption.

  • Easy key identification and serialization. In a general-purpose cryptography package, keys for different algorithms must be identified and matched properly on both communication ends. The public key pair generation process is often too slow on devices. Therefore, we must pregenerate keys on the server side and then transport keys to devices. The API should provide the means to ease and secure this process.

  • Good reputation. A security solution provider must be trustworthy and have a good track record. In addition, no algorithm is secure if the implementation is poorly conceived.

  • Timely bug fixes. Security holes and algorithm weaknesses are discovered frequently around the world. The security solution provider must track this information and provide fixes or patches promptly.
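The key serialization requirement above has a standard-API analogue worth seeing: in JCA, keys are pregenerated on the server, exported in their encoded forms (X.509 SubjectPublicKeyInfo for public keys, PKCS #8 for private keys), and rebuilt on the receiving side with a KeyFactory. A lightweight toolkit would need an equivalent, ideally simpler, mechanism. Method names below are illustrative.

```java
import java.security.*;
import java.security.spec.PKCS8EncodedKeySpec;
import java.security.spec.X509EncodedKeySpec;
import java.util.Arrays;

// Sketch: pregenerate a key pair server-side and serialize it for transport.
public class KeyTransportSketch {
    public static boolean roundTrip() throws Exception {
        KeyPairGenerator g = KeyPairGenerator.getInstance("RSA");
        g.initialize(2048);
        KeyPair pair = g.generateKeyPair(); // done server-side: too slow on-device

        byte[] pubBytes = pair.getPublic().getEncoded();   // safe to publish
        byte[] privBytes = pair.getPrivate().getEncoded(); // must travel over a secure path!

        // Device side: rebuild the keys from the encoded bytes.
        KeyFactory kf = KeyFactory.getInstance("RSA");
        PublicKey pub = kf.generatePublic(new X509EncodedKeySpec(pubBytes));
        PrivateKey priv = kf.generatePrivate(new PKCS8EncodedKeySpec(privBytes));
        return Arrays.equals(pub.getEncoded(), pubBytes)
                && Arrays.equals(priv.getEncoded(), privBytes);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("keys survive serialization: " + roundTrip());
    }
}
```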

Bouncy Castle Lightweight API

Bouncy Castle (BC) started out as a community effort to implement a free, clean-room, open-source JCE provider. BC developers created their own lightweight API (the BC lightweight crypto API) to be wrapped in BC JCE provider classes. The BC lightweight API can also be used stand-alone, with minimal dependence on other J2SE classes. The BC J2ME download package contains the implementation of the BC lightweight API as well as two core Java classes not supported in J2ME/CLDC: java.math.BigInteger and java.security.SecureRandom. BC's strength comes from its open-source development model:

  • When security holes or bugs are found, they are fixed quickly.

  • BC's flexible API design and community development model allow anyone to contribute new algorithm implementations. BC supports a range of well-known cryptographic algorithms.

  • The BC community is constantly optimizing existing implementations. For example, BC 1.16 has three AES implementations that provide a range of compromises between speed and memory usage. From BC 1.11 to 1.16, the BigInteger implementation has improved so much that the time needed for Rivest-Shamir-Adleman (RSA) encryption is only one-fortieth of what it used to be.

  • Because BC implements an open-source JCE provider, you can look at the BC JCE source code to figure out how to use the lightweight API for various tasks. This provides a powerful learning tool for advanced developers.

  • Best of all, it is free.

However, the ad hoc development model also brings some problems:

  • Many BC algorithm implementations come straight from textbooks. There are simply too many algorithms and too few volunteer developers to optimize everything. The lack of optimization results in relatively poor performance, especially for some public key algorithms. As of version 1.16, BC public key performance proves sufficient only for high-end phones or PDAs.

  • The BC API design is flexible but quite complex, and beginners find it difficult to learn. Some developer-friendly API features are missing. For example, although BC provides full support for Abstract Syntax Notation One (ASN.1), it lacks a set of ready-to-use general-key serialization APIs.

  • The community support via mailing list often works well; however, there is no guarantee that someone will answer your question, much less in your specified time frame.

  • To support so many algorithms, BC has a large footprint. The lightweight API jar file is nearly 1 MB, but most mobile applications use only a small subset of BC algorithms. BC's free license terms allow you to pack and redistribute only the classes required by your application. Some J2ME postprocessing tools and IDEs (e.g., IBM WebSphere Device Developer) can automatically find class dependencies and delete unused files from your jar file. Those tools prove handy when you develop with BC.

Phaos Technology Micro Foundation Toolkit

Phaos Technology is a Java and XML security solution provider. It offers toolkits for secure XML Java APIs, J2ME lightweight crypto APIs, and one of the first implementations of the SSL protocol on J2ME/CLDC. Here we focus on the Phaos Micro Foundation (MF) lightweight crypto API. Phaos XML security packages do not currently work with J2ME, but they are at a leading position to provide future secure Web services tools for mobile applications. Phaos toolkits are available for free evaluation. You must e-mail the company to get a 30-day license key, which comes with tech support. Phaos is a reputable security company with a good record of accomplishment. The technical support staff is also very knowledgeable and responsive.

The Phaos MF runs on both CLDC and CDC. The CDC version also runs under J2SE. The toolkit footprint is 187 KB for the CLDC version and 169 KB for the CDC version. The Phaos API is intuitive and comes with excellent documentation and code examples. Phaos MF supports a set of frequently used cryptographic algorithms to strike a balance between performance and features. Those algorithms include symmetric ciphers, such as AES, DES, RC2, and RC4; PKI ciphers and signature schemes, such as DSA and RSA; and Password-based Encryption Schemes (PBES), such as Public Key Cryptography Standard (PKCS) #5 and #12. Phaos MF also supports X.509 certificate parsing, ASN.1 encoding, and efficient memory pooling.

For RSA and DSA algorithms, Phaos implementations are better optimized than BC 1.16. Nevertheless, public key tasks still take seconds on even high-end mobile phones. In fact, no matter how much optimization you do, those classic PKI algorithms might just prove too heavy for the smallest devices. Novel algorithms and approaches are needed. NTRU and a startup company called B3 Security provide such solutions.

NTRU Neo for Java Toolkit

NTRU PKI algorithms include an encryption algorithm, NTRUEncrypt, and a signature algorithm, NTRUSign, invented and developed by four math professors at Brown University. In sample programs, NTRU algorithms perform 5 to 30 times faster than other public key algorithms of similar cryptographic strength. NTRU algorithms are published and on their way to becoming IEEE and Internet Engineering Task Force (IETF) standards. NTRU patented the algorithms to protect its business interests. NTRU algorithm patents have been licensed by a variety of mobile software, smart card, and Digital Signal Processor (DSP) chip vendors, including Sony and Texas Instruments.

Cryptographic algorithms are scrutinized and improved repeatedly before being considered mature and ready for general public adoption. Although NTRU algorithms have been inspected many times by both the academic and business worlds, they are still relatively new. Security weaknesses were identified in NTRUEncrypt as late as May 2001. Those weaknesses do not undermine NTRU algorithm fundamentals and have since been fixed. As you should with any critical project, research NTRU security before licensing it.

NTRU provides an implementation of its algorithms in a Java package called NTRU Neo for Java. You must work out an agreement with NTRU before you can evaluate the package. In addition to NTRU public key algorithms, Neo for Java also includes an implementation of the AES Rijndael symmetric key algorithm. The Neo for Java package runs on CLDC, CDC, and J2SE platforms. It has a memory footprint of 37 KB without signature key generation classes, which have a footprint of 35 KB. The Neo for Java API is simple and easy to use. In fact, it might be too simplistic. For example, the block encryption method requires users to divide plaintext data into blocks.

Using Neo for Java, NTRUEncrypt key pairs can be generated quickly from passphrases. The same passphrase always produces the same key pair. For that reason, Neo for Java does not provide a password-based key store facility. NTRUSign keys, however, are slow to generate and require floating-point support. The Neo for Java package provides a floating-point emulation class, which can support NTRUSign key generation on a CLDC device. Nevertheless, on-device NTRUSign key pair generation takes a long time to complete. Generating and distributing NTRUSign keys from a server computer is a better approach. Fortunately, a signature key can sign thousands of messages before it needs replacement.

B3 Security

B3 Security is a company based in San Jose, California, that specializes in developing new lightweight security infrastructures that minimize the current overhead associated with PKI. Its flagship products are B3 Tamper Detection and Digital Signature (B3Sig) SDK and B3 End-to-End (B3E2E) Security SDK. Both are available for J2ME. The B3E2E SDK (still in beta) provides features equivalent to SSL in the PKI world, but with a shorter handshake, faster session key establishment, and less management overhead, especially for pushed messages.

The B3Sig SDK runs on the J2ME/CLDC platform. The B3 digital signature scheme is based on a keyed hash, known as a Hash-based Message Authentication Code (HMAC). HMAC has been around for many years and has proven security. B3 uniquely exploits HMAC properties, instead of more computationally intensive public key algorithms based on problems such as large-integer factoring, to form a B3 tamper-proof block of bytes and digital signature. In its preferred mode of operation for mobile enterprise applications, the B3Sig SDK uses two pairs of shared and nonshared secrets. By analogy with the PKI world, a shared secret acts like a public key with a targeted distribution scope, and a nonshared secret acts like a private key.

Each user knows his or her own password in an existing enterprise identity management system. The system stores only a hash (e.g., Message Digest 5 [MD5]) of the password; no cleartext password is stored anywhere. The B3 SDK uses that hash as the first shared secret. The first nonshared secret comes from a different hash (e.g., Secure Hash Algorithm-1 [SHA-1]) of the same password.
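The first pair of secrets described above can be sketched with the standard java.security.MessageDigest class (available on J2SE and CDC; a CLDC device would need a third-party digest implementation). The specific MD5/SHA-1 split follows the example in the text; B3's actual derivation is proprietary.

```java
import java.security.MessageDigest;

public class B3Secrets {
    // Derive the two first-pair secrets from one password, as the text
    // describes: a shared secret (the hash the identity system already
    // stores) and a nonshared secret (a different hash of the same password).
    static byte[] digest(String algorithm, String password) throws Exception {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        return md.digest(password.getBytes("UTF-8"));
    }

    public static void main(String[] args) throws Exception {
        byte[] shared = digest("MD5", "s3cret");      // stored by the server
        byte[] nonshared = digest("SHA-1", "s3cret"); // known only to the user
        System.out.println(shared.length + " " + nonshared.length); // prints "16 20"
    }
}
```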

B3 software on a device generates a private root key and the corresponding shared secret. They form the second pair of secrets, which ensures stronger authentication. The second shared secret can be used for third-party verification. A B3 protocol can also use the second pair to efficiently reset a forgotten password. Key points of this approach include the following:

  • Nonshared secrets are used together with the message (or transaction) itself and user ID to generate a unique signing key for every message.

  • A B3 algorithm then generates a digital signature containing three interrelated parts.

  • A B3 algorithm can verify message and user ID integrity with shared secrets. The receiving party can query the password system to verify the sender's authenticity.
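The per-message signing key in the list above can be approximated with the standard javax.crypto.Mac HMAC class (J2SE; a CLDC device would use a toolkit implementation). The derivation used here, an HMAC over the user ID and message keyed by the nonshared secret, is only an illustration, not B3's actual three-part construction.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class PerMessageSig {
    static byte[] hmac(byte[] key, byte[] data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(key, "HmacSHA1"));
        return mac.doFinal(data);
    }

    // Derive a unique signing key from the nonshared secret, the user ID,
    // and the message itself, then MAC the message with that key. This
    // layout is a sketch; B3's real signature format is proprietary.
    static byte[] sign(byte[] nonsharedSecret, String userId, byte[] message)
            throws Exception {
        byte[] context = (userId + new String(message, "UTF-8")).getBytes("UTF-8");
        byte[] perMessageKey = hmac(nonsharedSecret, context);
        return hmac(perMessageKey, message);
    }

    public static void main(String[] args) throws Exception {
        byte[] secret = "nonshared".getBytes("UTF-8");
        byte[] sig = sign(secret, "alice", "pay $10".getBytes("UTF-8"));
        System.out.println(sig.length); // HmacSHA1 output: prints 20
    }
}
```

Because the key is bound to the message and user ID, two different messages never share a signing key, which is the property the list above highlights.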

The B3 scheme has the following advantages:

  • Speed. Cryptographic hash and HMAC algorithms can run 1,000 times faster than public key algorithms.

  • Seamless integration with existing enterprise authentication infrastructure. Various password-based identity management systems are already widely deployed in enterprises (the simplest example is a password file). Utilizing these existing systems avoids the expensive certificate management overhead associated with a PKI digital signature.

  • Strong two-factor authentication. Only a person who has access to the specific device and knows his or her application password can generate the correct shared and nonshared secrets to sign messages. That also helps prevent password guessing and dictionary attacks.

  • Tamper detection. The B3Sig SDK has a conservative design: it assumes that no algorithm is permanently secure, including its own. In the event of a successful cryptographic attack on the B3 signature and verification algorithms, the sender can still prove that he or she did not send the forged message, because part of the B3 signature is linked to the nonshared secrets through well-established non-B3 one-way algorithms (HMACs).

B3 solutions do not dictate complete replacement of the current PKI infrastructure. Rather, B3 solutions can coexist and interoperate with the current system. For example, we can use HTTPS as well as B3E2E SDK to pass shared secrets during setup. The application can add delegated PKI signatures on top of the B3 signatures if desired. To use B3 signatures without an existing identity/password management system, one would also need to set up a B3 shared secrets store.

Leading security experts in the financial services industry, such as Larry Suto, FTCS co-chair at Wells Fargo Bank, and Jim Anderson, a vice president of information security at Visa, have agreed to be references for the B3 solutions. If B3 can deliver on its promises, it could become one of the most important security solutions for mobile enterprise applications; however, B3 is still a young company, and its approach has not been tested in large-scale, real-world environments.

7.2.12 Device-Specific APIs

MIDP device vendors (e.g., Motorola iDEN phones) also provide device-specific cryptography API extensions. Those packages utilize device native cryptography libraries and special hardware features. Thus, they likely have good on-device performance; however, applications using vendor-specific APIs are no longer portable to other devices. That causes J2ME platform fragmentation and defeats one of Java's greatest advantages. One way to avoid such fragmentation, yet still take advantage of native performance, is to develop standard J2ME cryptography API specifications and allow device vendors to plug in their own implementations. The CDC/FP/PP device vendors can implement native JCE solutions. At the time of this writing, no MIDP-compatible lightweight crypto API Java Specification Request (JSR) is being developed in the Java Community Process.
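The pluggable approach suggested above already exists on CDC/J2SE in the form of the JCA/JCE provider architecture. The snippet below simply enumerates the providers registered with the runtime; a vendor's native-backed provider, once registered via Security.addProvider(), would appear in the same list.

```java
import java.security.Provider;
import java.security.Security;

public class ListProviders {
    // Enumerate installed JCA providers. On J2SE a device vendor could
    // register a native-backed provider with Security.addProvider(); no
    // equivalent standard API exists for MIDP at the time of writing.
    public static void main(String[] args) {
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName() + " " + p.getVersion());
        }
    }
}
```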

7.2.13 Secure Your Mobile Data

Advanced mobile commerce applications require content-based and single sign-on security solutions that protect both communication and on-device data. Today's popular HTTPS solution does not meet those requirements because of its point-to-point nature, inflexible protocol design, and slow algorithms. Third-party vendors have come up with excellent security tools that will meet mobile commerce requirements. Those toolkits give developers programmatic access to cryptographic algorithms, especially algorithms specifically designed for mobile applications. For an excellent example of high-level mobile code security, we need look no further than Java applets, which use digital signatures and the security sandbox to ensure code safety over the Web. Here's a rough outline of how applet security works:

  • Before transmission, the applet server signs an applet jar file using its digital certificate.

  • Upon receipt, the browser-side Java security manager verifies the signature and decides whether the origin and integrity of the application module can be trusted.

  • If the digital signature cannot be verified, the runtime exits with an error. If the signature can be verified, the security manager uses the digital certificate to determine the permission domain for that entity, either by querying the client or by using a table to look up permissions for trusted entities.

  • Once the verification process has been successfully completed, the application code is delivered to the client.

Note that each permission domain contains a set of rules to access specific APIs. For example, an application from a lesser-known source might not be allowed to read/write local storage devices or make arbitrary network connections.
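As a sketch of the lookup-table approach mentioned above, a permission domain can be modeled as a map from a signer's distinguished name to a set of allowed operations. Real Java security managers use policy files and protection domains; the class and permission names here are hypothetical.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class PermissionTable {
    private final Map<String, Set<String>> domains =
            new HashMap<String, Set<String>>();

    // Map a signer's distinguished name to an allowed operation.
    void grant(String signerDn, String permission) {
        Set<String> perms = domains.get(signerDn);
        if (perms == null) {
            perms = new HashSet<String>();
            domains.put(signerDn, perms);
        }
        perms.add(permission);
    }

    // Unknown signers fall into an untrusted domain with no permissions.
    boolean isAllowed(String signerDn, String permission) {
        Set<String> perms = domains.get(signerDn);
        return perms != null && perms.contains(permission);
    }

    public static void main(String[] args) {
        PermissionTable table = new PermissionTable();
        table.grant("CN=Trusted Vendor", "network.connect");
        System.out.println(table.isAllowed("CN=Trusted Vendor", "network.connect")); // true
        System.out.println(table.isAllowed("CN=Unknown", "storage.write"));          // false
    }
}
```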

J2ME/CDC-based mobile code can be signed and delivered in the same way as Java applets. In theory, MIDP applications could be secured by the same methods. Because of limited processing power and memory, however, a domain-based security manager is not available in the MIDP 1.0 specification. The current MIDP virtual machines (VMs) can provide only a minimal security sandbox. For example, a MIDlet suite can access only persistent record stores created by itself.

The upcoming MIDP 2.0 specification will require support for the domain security model, including a domain-based security manager, application code signing, and digital certificate verification functionality. To better support secure mobile code provisioning, MIDP 2.0 will also formally include an over-the-air (OTA) provisioning specification. The MIDP 2.0 OTA specification describes who has the authority to install and delete wireless applications, which operations must be confirmed by the user versus which can be done automatically, what alerts must be presented to the user, and what data is shared when updating applications.

7.2.14 Code Signing

Code signing relies on a public key algorithm, usually RSA, a type of cryptography built on two different but mathematically related keys: a private key (for signing code) and a public key (for verifying the signed code). The public key is indeed "public" and can be published for anyone to access. The private key, then, is "private" and must be protected by its owner.

When the private key is successfully used for signing, the signature proves that the content really came from the owner of the private key on two levels: (1) we trust the authority that distributed the certificate; and (2) we trust that the owner of the private key protected it and kept it a secret. When we put our trust in this system, we can believe that when a public key verifies a signature, it really came from the owner who signed it and whatever has been signed was not modified.

7.2.15 Verification

When developers apply this technology to signing their code, they actually create a signed data object with the public key algorithm, used in combination with a hashing algorithm, following the Public-Key Cryptography Standards #7 (PKCS#7) format. Signing code happens in two steps. First, the hashing algorithm is applied to the code to be distributed. A hashing algorithm produces an effectively unique binary message digest of its input; if the hash is computed over the code a second time and the message digest changes, the code was modified after being signed. Second, the developer signs the digest with his or her private RSA key. The signed message digest can be appended to the code and distributed along with the developer's public certificate, which contains the signer's public key used during verification.

The verification process works in reverse using the signer's public key. First, the public key is applied to the signature to verify its authenticity. If that succeeds, the same hashing algorithm is applied to the code. When the two message digests match, we know that the code has not changed. If either step fails, the code is not trustworthy.
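The hash-then-sign and verify steps described above can be demonstrated with the standard java.security.Signature class; the SHA1withRSA algorithm performs both steps internally (hash the input, then apply the RSA private key to the digest).

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignAndVerify {
    public static void main(String[] args) throws Exception {
        // Generate an RSA key pair at the era-recommended 1,024-bit size.
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
        kpg.initialize(1024);
        KeyPair pair = kpg.generateKeyPair();

        byte[] code = "MIDlet jar contents".getBytes("UTF-8");

        // Sign: SHA1withRSA hashes the code, then signs the digest.
        Signature signer = Signature.getInstance("SHA1withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(code);
        byte[] sig = signer.sign();

        // Verify: rehash the code and check it against the signature.
        Signature verifier = Signature.getInstance("SHA1withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(code);
        System.out.println(verifier.verify(sig)); // prints true

        // Any modification to the code breaks verification.
        code[0] ^= 1;
        verifier.initVerify(pair.getPublic());
        verifier.update(code);
        System.out.println(verifier.verify(sig)); // prints false
    }
}
```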

Why are there two steps to signing? Public key cryptography is computationally intensive because it is based on complex math, and that computation slows the process. A PKCS#7 signature therefore combines algorithms efficiently: the expensive public key operation is applied only to the short message digest, not to the entire body of code. If it's so slow, you may wonder, why use public key cryptography at all? We use it because it is a highly secure scheme that has not been broken when used at the recommended key size (currently 1,024 bits), and because it provides a way to establish trust and authenticity between two parties who have never met and must communicate over insecure channels.

Nevertheless, code signing cannot work on its own. It has to work within a system big enough to encompass the Internet. For starters, the client has to know how to verify a signature. Luckily, this functionality is currently supported, albeit handled differently, in both the Internet Explorer and Netscape browsers. These browsers also contain several trusted CAs' public root keys. When they receive a developer's public certificate, it must also be signed by one of these CA root keys, an arrangement known as certificate chaining. Before a public certificate used "in the wild" can be trusted, it must be verified against the root key of the CA that signed it, which is exactly what the public root keys in browsers are used for. The entire system depends on how well each CA protects its private root key; many CA vendors seal them away in vaulted safes in undisclosed locations.




Wireless Operational Security
ISBN: 1555583172
Year: 2004
Pages: 153
