Common Problems

There tend to be two common problems with the use of cryptography in secure applications. First, some applications developers mistakenly believe that simply incorporating strong cryptography into their applications makes them secure. Second, some applications developers invent their own (proprietary) cryptographic algorithms. This section addresses both these problems.

Cryptography by Itself

Designing a secure application is about making decisions, and these decisions often affect how the application uses cryptography. If an application uses cryptography incorrectly (for example, using an encryption algorithm when it should use a message authentication algorithm), it will probably be insecure. The design decisions of applications developers also affect whether there will be holes in an application's design that allow an attacker to attack the application itself (instead of the cryptography). An example of a design flaw would be an e-mail encryption program that uses strong cryptography to encrypt a user's e-mail but stores the user's pass phrase unencrypted on the user's disk. Another common mistake is to assume that an attacker will not reverse-engineer an application to extract its embedded cryptographic keys or learn the details behind its proprietary cryptographic protocol. An example of this is the copy protection schemes employed by game manufacturers. As soon as a new game or protection scheme is released (and sometimes sooner), there is a crack, or some site has an unprotected version available for download.

The point is that although cryptography is a powerful tool, the designer of an application must not consider cryptography a silver bullet. Application security is more than just cryptography and must be designed in from the beginning.

Proprietary Cryptographic Protocols

A common rule-of-thumb phrase in the security industry is "Never roll your own cryptography" (unless, of course, you are a cryptographer, and even then, never use what you have designed until others have thoroughly evaluated it). Although the expression has become trite and is often laughed off, it is fundamentally true.

Companies invent their own proprietary algorithms for several reasons. Some believe that cryptography is not that difficult or that they can invent revolutionary new protocols more secure than existing solutions. Others believe that an attacker will have a more difficult time breaking the company's secret protocols than published, well-known, academically accepted protocols.

Although there is some truth to both these beliefs, they can be quite dangerous. (Unfortunately, it is the end user who is often affected and not the developers.) To address these beliefs, we would like developers to consider the following.

Companies and developers who create their own cryptographic protocols seldom have seasoned cryptographers review their designs. This is especially true when companies keep their protocols secret. Consequently, unless a company employs its own world-class cryptographers, its proprietary protocols may not be as secure as it believes. Likewise, although the underlying cryptographic primitive or protocol design may be secure, the implementation may be flawed, negating any security or advantage in using the proprietary protocol.

A well-known cryptographer named Kerckhoffs said something to the effect that if an algorithm is secure against an attacker who does know the inner workings of that algorithm, it will certainly be secure against an attacker who does not. We further observe that one of the best ways to understand the security of an algorithm is to have seasoned cryptographers try to break it (or try to prove something about the security of that algorithm with respect to a trusted primitive). If, after a reasonable effort, the world's best public cryptographers cannot break that algorithm, you can have confidence that the algorithm is secure.

With this in mind, if a publicly accepted algorithm is secure against the world's best public cryptographers, obviously, that algorithm should be secure against the casual attacker. Furthermore, the preceding observations suggest that application developers should choose older, more analyzed algorithms over flashy new algorithms that have yet to be proven. This is provided that the older algorithms are secure, given your situation and intended use.

Despite the preceding advice (which is not being presented here for the first time by any means), the development industry is still infused with insecure cryptographic algorithms. As you might expect, people eventually get around to breaking these algorithms. The heavily publicized flaw with 802.11's Wired Equivalent Privacy (WEP) protocol is an example of why the preceding advice should be heeded. Some of the flaws with WEP are classic, and almost any experienced cryptographer could have identified and avoided them. If the authors of the 802.11 Specification had properly involved cryptographers during the design of WEP, these issues could have been identified and resolved before the specification was implemented and used commercially.

Common Misuses

In addition to creating their own protocols, one of the most common mistakes people make when designing secure applications is not to use cryptographic protocols the way they were intended. This might be analogous to not following the directions included with a house smoke detector or burglar alarm: the smoke detector will not be of much use if the batteries are not replaced, and the burglar alarm will not be of much use if it is not connected properly to the siren and phone line. The same thing is true for cryptography: cryptographic protocols come with rules that must be followed.

The following is a list of common misuses of cryptographic protocols. Although not exhaustive, it will help wireless and application developers understand and avoid some of the most common pitfalls.

Key Generation

When we first introduced encryption, we said that an encryption protocol consists of two parts: an encryption algorithm and a decryption algorithm. That is not completely true. It actually consists of three parts: an encryption algorithm, a decryption algorithm, and a key generation algorithm. The same can be said for many other protocols. For example, a message authentication code actually consists of a tagging algorithm, a verification algorithm, and a key generation algorithm.

Although we did not mention the key generation algorithm earlier, the key generation algorithm is among the most critical portions of a secure cryptographic protocol. To see this, recall that one way for an attacker to break an encryption protocol is to find the decryption key. In the symmetric setting, if a k-bit encryption key was chosen randomly, the attacker might have to guess 2^k keys before she stumbles on the right one. Thus, in the symmetric setting, we argued that the strength of a secure encryption algorithm correlates roughly to the size of that algorithm's encryption key.

Unfortunately, that is true only if the encryption key was generated correctly or randomly. What if the encryption key was not generated randomly? That is, what if certain encryption keys are more probable than others? For example, what if the encryption key depends on the user (perhaps as a function of the user's social security number or birthday)? What if the encryption key consists of a number of zeros followed by a few random bits? In both cases, an attacker would be able to guess the encryption key in far fewer than 2^k tries. This means that although an encryption algorithm with randomly generated keys might be secure (because an attacker can't possibly make 2^k guesses), the algorithm may not be secure in practice (because the user's application doesn't properly generate random keys). Developers and users should be aware that if anything other than random numbers is used, the security of the encryption is adversely affected.
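As a minimal sketch of this point, the following (hypothetical) example contrasts a key drawn from a cryptographically secure random source with a "key" derived from a user's birthday. Both are 16 bytes long, but the attacker's effective search space for the second is tiny:

```python
import secrets

# A 128-bit key from a CSPRNG: the attacker's search space is 2**128.
strong_key = secrets.token_bytes(16)

# A "key" derived from a birthday (YYYYMMDD), padded to 16 bytes.
# Even over a whole century, there are only ~36,500 possible dates,
# so an attacker needs at most ~36,500 guesses -- nowhere near 2**128.
def weak_key_from_birthday(yyyymmdd: str) -> bytes:
    return yyyymmdd.encode().ljust(16, b"\x00")

weak_key = weak_key_from_birthday("19801231")
assert len(weak_key) == len(strong_key) == 16  # same length, wildly different entropy
```

The two keys are indistinguishable by length alone, which is exactly why this mistake so often goes unnoticed.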

Randomization

Key generation algorithms are not the only algorithms that require random numbers; many cryptographic protocols use random numbers to circumvent certain attacks. For example, the CBC and stateless CTR encryption modes rely on random numbers to prevent a single plaintext from always encrypting to the same ciphertext. To use these protocols properly, developers must ensure that applications correctly supply the protocols with random numbers.
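The effect of this randomization can be sketched with a toy cipher (illustration only, not real CBC or CTR): a fresh random IV keys an HMAC-derived keystream, so encrypting the same plaintext twice yields two different ciphertexts.

```python
import hmac
import hashlib
import secrets

def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Toy randomized cipher: a random 16-byte IV seeds an HMAC-based
    # keystream. Because the IV is fresh each call, identical
    # plaintexts never encrypt to identical ciphertexts.
    iv = secrets.token_bytes(16)
    stream = b""
    counter = 0
    while len(stream) < len(plaintext):
        stream += hmac.new(key, iv + counter.to_bytes(4, "big"),
                           hashlib.sha256).digest()
        counter += 1
    ct = bytes(p ^ s for p, s in zip(plaintext, stream))
    return iv + ct  # the IV is sent in the clear alongside the ciphertext

key = secrets.token_bytes(16)
c1 = toy_encrypt(key, b"attack at dawn")
c2 = toy_encrypt(key, b"attack at dawn")
assert c1 != c2  # identical plaintexts, different ciphertexts
```

If the application instead supplied a fixed or predictable IV, `c1` and `c2` would be identical, and an eavesdropper could tell when the same message is sent twice.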

Any algorithm for generating random numbers is only a pseudo-random number generator (PRNG) because truly random behavior cannot, by definition, be captured by an algorithm. Worse, many of the PRNGs provided as library function calls are not even close to random; they are quite predictable. Developers should ensure that, if a random number is called for, the source of this number is known and its randomness is acceptable for the usage.
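The following sketch illustrates the difference in Python: the standard `random` module is a Mersenne Twister, so anyone who learns (or guesses) its seed can reproduce every "random" key it will ever produce, whereas the `secrets` module draws from the operating system's CSPRNG.

```python
import random
import secrets

# random.Random is a Mersenne Twister: given the seed, its entire
# output sequence is reproducible. An attacker who recovers the seed
# recovers every key the victim generates.
victim = random.Random(1234)
attacker = random.Random(1234)
victim_key = victim.getrandbits(128)
assert attacker.getrandbits(128) == victim_key  # fully predictable

# secrets uses the OS CSPRNG and is the appropriate source for keys.
key = secrets.randbits(128)
```

Seeding a statistical PRNG with the time of day (a common pattern) makes the seed itself guessable, which is precisely the kind of flaw this section warns against.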

Key Management

As its name implies, key management is concerned with the way applications handle cryptographic keys. Key management addresses issues such as key generation, key agreement, and key lifecycle. Each cryptographic protocol has different requirements for the way its keys are handled.

There are some general (though often ignored) principles for key management. One principle is that the same cryptographic key should not be used for multiple purposes. For example, you should not use the same key pair for the RSA encryption primitives as you use for the RSA signature primitives. Similarly, the same symmetric key should not be used for both symmetric encryption and a symmetric message authentication code. Multiple usages may inadvertently leak information useful to an attacker.
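One common way to honor this principle without managing many independent secrets is to derive distinct subkeys from a single master key using labeled key derivation. The sketch below uses a simple HMAC-based derivation (HKDF-like in spirit); the labels are hypothetical:

```python
import hmac
import hashlib
import secrets

def derive_subkey(master: bytes, purpose: bytes) -> bytes:
    # Distinct purpose labels yield independent-looking subkeys, so the
    # encryption key and the MAC key are never the same bytes even
    # though both derive from one master secret.
    return hmac.new(master, b"subkey:" + purpose, hashlib.sha256).digest()

master = secrets.token_bytes(32)
enc_key = derive_subkey(master, b"encryption")
mac_key = derive_subkey(master, b"authentication")
assert enc_key != mac_key
```

Derivation is deterministic, so both parties holding the master key compute the same subkeys, yet compromise or misuse of one subkey does not directly expose the other.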

A similar principle is that multiple parties should not share the same key. Although this principle may seem intuitive (because the more parties that know a key, the more likely it is for an attacker to learn the key), many modern applications (including most WEP installations) still distribute identical keys to multiple parties.

Another often ignored principle is that keys should be changed over time. Regularly changing keys limits an attacker's ability to learn about a key by placing a time period on that key's usability. For example, if a key can be learned by capturing a certain number of WEP packets, the key should be changed before sending that many packets. With poorly implemented schemes, this period of time may be very short indeed, rendering the encryption practically useless.
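A minimal sketch of this idea is a sender that rotates its key after a fixed packet count, bounding how much traffic an attacker can ever collect under any single key. The threshold and class below are hypothetical, and the actual encryption step is elided:

```python
import secrets

MAX_PACKETS_PER_KEY = 10_000  # hypothetical rekey threshold

class RekeyingSender:
    """Rotate the key once the per-key packet budget is exhausted."""

    def __init__(self):
        self.key = secrets.token_bytes(16)
        self.sent = 0

    def send(self, packet: bytes) -> bytes:
        if self.sent >= MAX_PACKETS_PER_KEY:
            self.key = secrets.token_bytes(16)  # fresh key, fresh budget
            self.sent = 0
        self.sent += 1
        return packet  # encryption under self.key elided in this sketch

s = RekeyingSender()
first_key = s.key
for _ in range(MAX_PACKETS_PER_KEY + 1):
    s.send(b"data")
assert s.key != first_key  # the budget was exceeded, so the key rotated
```

In a real protocol the rekeying would of course have to be coordinated with the receiver, which is itself a key management problem.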

Keystream Reuse

When introducing stream ciphers, we emphasized that you should be extremely careful not to reuse a keystream. Although most people are aware of this restriction, numerous applications still reuse a keystream. If an application reuses a keystream, an attacker could learn information about the plaintexts encrypted with the reused portions of the keystream. As a visible example of this problem in common applications, the 802.11 WEP Specification does not properly prevent keystream reuse.
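The leak is easy to demonstrate. If two plaintexts are XORed with the same keystream, XORing the two ciphertexts cancels the keystream entirely, handing the attacker the XOR of the plaintexts without her ever seeing the key:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = secrets.token_bytes(16)

p1 = b"send 100 dollars"
p2 = b"send 999 dollars"
c1 = xor(p1, keystream)  # two messages encrypted with the SAME keystream
c2 = xor(p2, keystream)

# The keystream cancels: c1 XOR c2 == p1 XOR p2.
assert xor(c1, c2) == xor(p1, p2)
```

Given the XOR of two plaintexts, standard techniques (known headers, dictionary words, crib dragging) typically recover both messages, which is why keystream reuse is considered a fatal flaw.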

Encryption versus Authentication

When most people think about cryptography, they think about encryption. Although there may be a historical basis for equating cryptography with encryption, this is no longer the case. Modern cryptography is concerned with more than just encryption; it is concerned with other concepts as well (such as integrity and authentication).

This bias of regarding encryption as synonymous with cryptography causes many people to use encryption when they should use some other cryptographic protocol. For example, many people make statements like this: "We'll encrypt this message. If the recipient of the encrypted message can decrypt it, the recipient will know that the message really came from us." Unfortunately, this is not necessarily correct.

The problem comes down to following a protocol's directions: the directions for encryption protocols say that they are tools to accomplish privacy, the directions for authentication protocols say that they are tools to accomplish authenticity, and the directions for integrity protocols say that they are tools to accomplish integrity. By misusing encryption to provide message integrity, application developers can unintentionally open their applications to attack. This problem is exemplified in the 802.11 WEP Specification: Because the WEP protocol does not include any formal message integrity code, an attacker can make controlled, precise, and undetectable modifications to WEP-encrypted packets.
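The sketch below shows why encryption alone does not provide integrity. With any XOR-style cipher, flipping chosen ciphertext bits flips exactly the corresponding plaintext bits, so an attacker can rewrite a message precisely without knowing the key; a message authentication code over the ciphertext catches the change. The message contents are invented for illustration:

```python
import hmac
import hashlib
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = secrets.token_bytes(16)
plaintext = b"PAY BOB $0000100"
ciphertext = xor(plaintext, keystream)

# Bit-flipping attack: XORing the ciphertext with (old XOR new) turns
# the decrypted amount from $0000100 into $9999100 -- no key needed.
delta = xor(b"PAY BOB $0000100", b"PAY BOB $9999100")
tampered = xor(ciphertext, delta)
assert xor(tampered, keystream) == b"PAY BOB $9999100"

# An HMAC over the ciphertext detects the modification.
mac_key = secrets.token_bytes(32)
tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
forged = hmac.new(mac_key, tampered, hashlib.sha256).digest()
assert not hmac.compare_digest(forged, tag)  # tampering is caught
```

This is essentially the class of attack WEP permits: its CRC-based checksum is linear and offers none of the protection a keyed MAC provides.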

The Man-in-the-Middle Attack

When introducing the asymmetric cryptographic protocols, we mentioned the motivation for certificates: Certificates enable a user to verify that her copy of someone's public key is legitimate (not a public key created by an attacker). Unfortunately, many applications fail to properly verify the authenticity of other entities' public keys. When this happens, an attacker can mount a man-in-the-middle attack. In the office complex case study, Kathleen can pretend to be Louis to NitroSoft and NitroSoft to Louis. In this way, she can undetectably listen to and modify the communications between Louis and NitroSoft.

Buffers and Information Leakage

One final area, often overlooked by developers, is the handling of the plaintext information after it has been encrypted. If plaintext information is left in memory or on disk, it may be accessible to an attacker, either directly through physical access to the user's machine or by leakage of remnant information in buffers used as temporary storage during the encryption or decryption process. If these memory areas are not cleared after processing the plaintext information, another process may be able to access this information and break the application's security without ever looking at the extremely secure encryption protecting the information being sent over the network.
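One partial mitigation is to keep sensitive material in a mutable buffer and overwrite it as soon as it is no longer needed. The sketch below is best-effort only: a garbage-collected language such as Python may have made intermediate copies the program cannot reach, which is exactly the remnant-data risk described above.

```python
# Immutable objects (bytes, str) cannot be erased in place, so keep
# secrets in a bytearray and zero it after use.
passphrase = bytearray(b"correct horse battery staple")

# ... use the passphrase (e.g., to derive an encryption key) ...

# Best-effort zeroization: overwrite every byte before releasing the
# buffer. Copies made elsewhere by the runtime may still linger.
for i in range(len(passphrase)):
    passphrase[i] = 0

assert all(b == 0 for b in passphrase)
```

Languages with explicit memory control (such as C) can do this more reliably, though even there compilers may optimize away "dead" writes unless a secure-memset routine is used.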

 



Wireless Security and Privacy: Best Practices and Design Techniques
ISBN: 0201760347
Year: 2002
Pages: 73
