WEP has some serious shortcomings, as listed in Table 11.1.
TKIP introduces a set of measures that address each one of these weaknesses. In some ways, it's like the way in which the Hubble telescope problem was repaired. The Hubble telescope is a huge optical telescope placed in space and orbiting the earth. Because it is in orbit, above the atmosphere that distorts the view of ground-based telescopes, the results were expected to be spectacular. Excitement turned to despair as the telescope was tested and it was found that the main mirror had been manufactured with a defect and was not perfectly shaped. The solution was not to replace the main mirror, which was not possible in space; it was to add some corrective lenses in the light path to the optical receiver after the main mirror.
TKIP solves the problems of WEP in a similar way. It cannot change the major items such as the way RC4 is implemented in hardware, but it can add a series of corrective tools around the existing hardware. The changes applied to WEP to make TKIP are summarized in Table 11.2. The numbers in parentheses indicate which weaknesses in Table 11.1 are addressed by each change.
Message integrity is an essential part of security. If an attacker is able to modify a message, there are many ways in which your system can be compromised. WEP had a method for detecting modification called the integrity check value (ICV), but the ICV offers no real protection at all. The ICV is not considered part of TKIP security (although its value is still checked).
If you want to detect tampering, one simple method would be to add together all the bytes in the message to create a "checksum" value and then send this along with the message. The receiving party can perform the same computation and verify that it gets the same result. If any bit of the message changes, the checksum no longer matches.
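The naive checksum idea fits in a few lines of Python (all names here are illustrative):

```python
def checksum(data: bytes) -> int:
    # Add all the bytes together; the receiver repeats this and compares.
    return sum(data) & 0xFFFF  # keep the result to a 16-bit value

message = b"transfer 100 dollars"
# The sender transmits (message, checksum(message)); the receiver recomputes
# checksum(message) and rejects the frame if the values disagree.
```

Flipping any bit changes the sum, but, as the next paragraph explains, an attacker can simply recompute the sum over a modified message, so this gives no security.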
Such a simple approach is of no use for security because the attacker can simply recompute the checksum to match any changes he makes to the message. However, the general idea is similar: Combine together all the bytes in the message to produce a check value called the MIC (message integrity code) and then send the MIC along with the message. The MIC is computed using a special nonreversible process that incorporates a secret key. As a result, an attacker cannot recompute the MIC value unless he knows the secret key. Only the intended receiver can recompute and check the result.
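A conventional keyed MIC can be illustrated with HMAC from Python's standard library. To be clear, TKIP does not use HMAC (its MIC is Michael, described below); this sketch only shows the "combine all the bytes with a secret key, nonreversibly" idea:

```python
import hashlib
import hmac

def compute_mic(key: bytes, message: bytes) -> bytes:
    # HMAC-SHA256 truncated to 8 bytes: nonreversible, and unforgeable
    # without the key. (Illustrative only; not the MIC TKIP uses.)
    return hmac.new(key, message, hashlib.sha256).digest()[:8]

key = b"shared-secret"              # hypothetical key for the example
mic = compute_mic(key, b"some message")

# The receiver, holding the same key, recomputes and compares. An attacker
# who alters the message cannot produce a matching MIC without the key.
assert hmac.compare_digest(mic, compute_mic(key, b"some message"))
```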
There are several well-known secure methods for computing the MIC. Such methods have been well tried and tested in other protocols and security applications. However, for TKIP there was a problem. All the well-known methods require introduction of a new cryptographic algorithm or require computations using fast multiply operations. The ones that use multiply operations were considered, but they generally need at least one multiply operation for each group of four bytes. That would mean, for a typical 1514-byte data packet, at least 379 multiply operations. The microprocessor inside the MAC chip of most Wi-Fi cards is not very powerful; typically, it doesn't have any sort of fast multiplication hardware. In a small microprocessor a 32-bit multiply might take 50µs to complete. That would mean that it would take nearly 20ms to compute the MIC, reducing the data throughput from 11Mbps down to less than 1Mbps.
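The arithmetic behind that estimate can be checked in a few lines (the 50µs multiply cost is the figure assumed above):

```python
packet_bytes = 1514
multiplies = -(-packet_bytes // 4)   # one multiply per 4-byte group -> 379

us_per_multiply = 50                 # slow 32-bit multiply on a small MCU
mic_time_ms = multiplies * us_per_multiply / 1000
# ~19 ms per packet: enough to throttle an 11Mbps link to under 1Mbps
```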
One proposal was to move the computation of the MIC up to the software driver level. After all, if you are using a laptop with a modern PC processor, such computations can be done in almost no time. But what about the poor old access point? Most access points do not have such a high-power processor and are already hard pressed to keep up with a multitude of connected stations.
What was needed was a method that was as secure as the well-known approaches but that could be done without either multiplication or new cryptographic algorithms. This was a tough goal. It was like saying, "I want a car as fast as a Porsche but with a 500cc engine": not practical. However, a good compromise solution came from cryptographer Niels Ferguson using a method that he called Michael. Michael is a method of computing the MIC that uses no multiplications, just shift and add operations, and is limited to a fairly short checkword. Michael can be implemented on existing access points without ruining their performance. However, the cost of the simplicity is that Michael is vulnerable to brute force attacks in which an attacker is able to make many attempts rapidly, one after the other. Michael makes up for this vulnerability by introducing the concept of countermeasures.
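The Michael block function really does use only XORs, rotates, byte swaps, and 32-bit additions. The sketch below is one reading of the published algorithm (little-endian words, 0x5a-plus-zeros padding); it should be checked against the official IEEE 802.11i test vectors before any real use:

```python
def _rol32(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def _ror32(x, n):
    return _rol32(x, 32 - n)

def _xswap(x):
    # Swap the two bytes within each 16-bit half of the word.
    return ((x & 0xFF00FF00) >> 8) | ((x & 0x00FF00FF) << 8)

def michael(key8: bytes, msg: bytes) -> bytes:
    """Sketch of the Michael MIC over a message, given a 64-bit MIC key."""
    l = int.from_bytes(key8[0:4], "little")
    r = int.from_bytes(key8[4:8], "little")
    # Pad with 0x5a plus 4-7 zero bytes so the length is a multiple of 4.
    msg = msg + b"\x5a" + b"\x00" * 4
    msg += b"\x00" * (-len(msg) % 4)
    for i in range(0, len(msg), 4):
        l ^= int.from_bytes(msg[i:i + 4], "little")
        # Four rounds of rotate/swap/add -- no multiplications anywhere.
        r ^= _rol32(l, 17); l = (l + r) & 0xFFFFFFFF
        r ^= _xswap(l);     l = (l + r) & 0xFFFFFFFF
        r ^= _rol32(l, 3);  l = (l + r) & 0xFFFFFFFF
        r ^= _ror32(l, 2);  l = (l + r) & 0xFFFFFFFF
    return l.to_bytes(4, "little") + r.to_bytes(4, "little")
```

Each 32-bit message word is XORed in and churned through the four rounds, which even a small MAC processor can do quickly.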
The concept of countermeasures is straightforward: Have a reliable method to detect when an attack is being made and take action to slam the door in the attacker's face. The simplest countermeasure is just to shut the whole network down when an attack is detected, thus preventing the attacker from making repeated attempts.
We will look at the details of Michael later; but for now, it is enough to know that the method allows the computation of a MIC value that is added to the message prior to encryption and checked by the receiver after decryption. This value provides the message integrity missing in WEP.
Michael operates on MSDUs; the computation to produce the MIC is done on the MSDU rather than on each MPDU. This has two advantages. First, for the mobile device, it allows the implementer to perform the computation in the device driver running on the host computer prior to forwarding the MSDU to the Wi-Fi LAN adapter card. Second, it reduces the overhead because it is not necessary to add a MIC value to every fragment (MPDU) of the message. By contrast, TKIP encryption is done at the MPDU level.
Michael requires its own secret key, which must be different from the secret key used for encryption. Creating such a key is easily accomplished when you are generating temporal keys from a master key, as described in Chapter 10.
IV Selection and Use
Chapter 6 explained the purpose of the IV (initialization vector) and how it is prepended to the secret key to generate the encryption key for each packet. We also noted that there are three fundamental weaknesses with the way IVs are used in WEP: the 24-bit IV is too short, so values inevitably repeat; nothing prevents an attacker from replaying captured frames; and the IV is sent in the clear, advertising when a weak key is in use.
For a further explanation of why IV reuse is a problem, see Chapter 6.
In the WEP standard, there were no requirements to avoid IV reuse. In fact, it was not mandatory to change the value of IV at all! Many vendors picked a random value for IV for each packet, which seems intuitively to be a good idea; but randomly chosen IVs produce duplicates surprisingly quickly (the birthday paradox). If you want to avoid IV reuse for the longest time possible, the simple approach is to start with a value of one and count up by one for each packet. In TKIP, this behavior has been mandated as part of the standard.
Incrementing the IV delays IV reuse, but it doesn't get you out of the problem of a "too short" IV. Even counting up, all the values will be used after about 16 million frames, which happens surprisingly fast in a busy network. Like the odometer on a car, when a binary counter reaches its maximum value, the next increment returns it to 0; this is called rollover. TKIP introduces several new rules for using the IV. Essentially, there are three differences in how IVs are used compared to WEP: the IV is doubled in size to 48 bits, the IV doubles as a sequence counter to prevent replay, and the IV is combined with the secret key by per-packet key mixing rather than simple concatenation.
The WEP IV, at 24 bits, allowed only 16,777,216 values before a duplicate IV would be used. This is unacceptable in practice because this number of frames can be sent in a few hours. Chapter 6 discussed the FMS attack, the most effective attack against WEP, and showed how WEP is very susceptible because its IV appears first in the frame and advertises when a weak key is being used. Some vendors try to reduce the potency of this attack by skipping IV values that would produce weak keys. However, this strategy just reduces the number of possible IV values still further, making one problem better but another one worse!
In designing TKIP, the security experts recommended that the IV must be increased in size to provide a robust solution. There was some debate about how to do this; but in the end, the IEEE 802.11 Task Group i decided to insert 32 extra bits between the existing WEP IV and the start of the encrypted data. This was a contentious decision because not all vendors can upgrade their legacy systems to meet this requirement. However, most can.
Potentially, the extra 32 bits added to the original 24 give a new IV of 56 bits; however, in practice only 48 bits are used because 1 byte must be "thrown away" to avoid weak keys. The advantages of going to a 48-bit IV are startling. Suppose you have a device sending 10,000 packets per second. This is feasible using 64-byte packets at 11Mbps, for example. The 24-bit IV would roll over in less than half an hour, while the 48-bit IV would not roll over for nearly 900 years! Moving to a 48-bit IV effectively eliminates the IV rollover problem, although you still have to be careful to avoid two devices separately using the same IV value with the same key.
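The rollover arithmetic is easy to verify:

```python
PACKETS_PER_SECOND = 10_000   # plausible for 64-byte packets at 11Mbps

iv24_rollover_s = 2**24 / PACKETS_PER_SECOND
# ~1678 seconds: under half an hour before the 24-bit IV repeats

iv48_rollover_years = 2**48 / PACKETS_PER_SECOND / (365 * 24 * 3600)
# ~890 years before the 48-bit IV repeats
```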
Increasing the IV length sounds straightforward, but it introduces some real problems for practical implementation. Remember that the original WEP IV is joined on the front of the secret key to create the RC4 encryption key. Thus, a 40-bit secret key is joined to the 24-bit IV to produce a 64-bit RC4 key. The hardware in legacy systems assumes this type of key structure and can't be upgraded to suddenly deal with an 88-bit RC4 key created by joining the new 48-bit IV to the old 40-bit secret key. In TKIP this problem is solved in an interesting way. Instead of simply forming a new RC4 key by joining the secret key and the IV, the IV is split into two pieces. The first 16 bits of the new IV are padded out to 24 bits in a way that avoids known weak keys. This 24-bit value is used in the same way as for WEP systems. However, rather than joining this value to the ordinary secret key, a new mixed key is generated by merging together the secret key and the remaining 32 bits of the IV (and some other stuff, too). The way in which the long IV is incorporated into the key, called per-packet key mixing, is described in more detail later in this chapter and is shown in Figure 11.3. The important thing to note here is that this approach achieves two objectives: the 48-bit IV can be used without changing the legacy encryption hardware, and a known class of weak keys is avoided.
Figure 11.3. Creating the RC4 Encryption Key
Both objectives are achieved while still delivering the benefit of a 48-bit IV value.
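The overall key construction can be sketched as follows. The IV split and the dummy-byte padding reflect the description above; the mixing function here is a hash stand-in purely for illustration, not TKIP's real two-phase S-box algorithm:

```python
import hashlib

def split_iv(tsc48: int):
    iv16 = tsc48 & 0xFFFF   # low 16 bits travel in the clear 24-bit IV field
    iv32 = tsc48 >> 16      # high 32 bits are folded into the mixed key
    return iv16, iv32

def clear_iv_bytes(iv16: int) -> bytes:
    hi, lo = (iv16 >> 8) & 0xFF, iv16 & 0xFF
    # Pad 16 bits out to 24 with a middle "dummy" byte whose bit pattern
    # avoids the known weak-key class (two bits are forced).
    return bytes([hi, (hi | 0x20) & 0x7F, lo])

def mixed_key(secret_key: bytes, iv32: int, iv16: int) -> bytes:
    # STAND-IN for TKIP's per-packet key mixing: a hash is used only to show
    # that the mixed key depends on the secret key and the IV bits.
    # This is NOT the real TKIP mixing function.
    seed = secret_key + iv32.to_bytes(4, "little") + iv16.to_bytes(2, "little")
    return hashlib.sha256(seed).digest()[:13]

def rc4_per_packet_key(secret_key: bytes, tsc48: int) -> bytes:
    # Legacy hardware still sees the familiar shape: 24-bit IV + key bytes.
    iv16, iv32 = split_iv(tsc48)
    return clear_iv_bytes(iv16) + mixed_key(secret_key, iv32, iv16)
```

The point of the structure is visible in the last line: the hardware's view of the RC4 key is unchanged, even though the key material now varies with every packet.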
IV as a Sequence Counter: The TSC
WEP had no protection against replay attack. An enemy could record a valid packet and play it back again later, in the expectation that, if it decrypted correctly the first time, it probably would do so again. In a replay attack, the enemy doesn't attempt to decode your messages, but she does try to guess what you are doing. In an extreme example, by recording messages while you delete a file and replaying them later, she could cause a file of the same name to be deleted without ever breaking the encryption. Replay prevention is designed to block old messages used in this way. TKIP has a mechanism to enforce this called the TKIP sequence counter (TSC).
In reality, the TSC and the IV value are the same thing. This value always starts at 0 and increments by 1 for each packet sent. Because the IV is guaranteed not to repeat for a given key, we can prevent replay by ignoring any messages with a TSC that we have already received. These rules mean that it is not possible to mount a replay attack by recording earlier messages and sending them again.
The simplest way to prevent replay attacks is to throw out any received messages in which the TSC has not increased by 1 compared to the last message. However, there are a number of reasons why this simple approach would not work in practice. First, it is possible for frames to be lost in transmission due to interference and noise. If a frame with TSC 1234 is received and then the next frame (1235) is lost, the subsequent arriving frame would have a TSC of 1236, a value that is 2 greater than the last TSC seen. Because of the lost frame, all subsequent frames would get rejected on the basis that the TSC did not increase by 1.
So let's revise the rule: throw out any messages that have a TSC less than or equal to that of the last message. But what about retransmission? According to the standard, frames must be acknowledged by the receiver with a short ACK message. If a frame is not acknowledged, it should be retransmitted with a bit set to indicate that it is a duplicate. Being a repeat of the original attempt, the retransmission carries the same TSC. In practice, this works okay because only one valid copy is needed at the receiving end, so it is no problem to throw out the duplicates when checking the TSC at the receiver. The possibility of retransmissions illustrates that duplicate TSCs should not always be treated as evidence of an attack.
A more difficult problem arises due to a new concept called burst-ack. In the original IEEE 802.11 standard, each data frame sent must be acknowledged separately. While this requirement is effective, it is somewhat inefficient because the transmitter has to keep stopping and waiting for the ACK message to be received before proceeding. The idea of burst-ack is to send up to 16 frames in quick succession and then allow the receiver to acknowledge all 16 in one message. If some of the messages were not received successfully, the receiver can specify which ones need to be resent. Burst-ack has not yet been added to the standard, but it is likely to be included in the future.
Can you see the problem from the perspective of the TSC? Suppose 16 frames are sent, each with a TSC greater than the last, and the receiver fails to get the first one. It then requests that the first frame be resent, which would happen with its original TSC value. This value would be 15 less than the last frame received and would thus be rejected, according to the rule of "TSC must be greater."
To accommodate this burst-ack, TKIP uses the concept of a replay window. The receiver keeps track of the highest TSC received and also the last 16 TSC values received. When a new frame is received, it classifies the frame's TSC as one of three types: larger than any TSC yet seen (accepted, and the window moves forward), more than 16 below the highest TSC seen (rejected outright), or in between (the WINDOW category).
For the WINDOW category, it checks to see whether a frame with that TSC has been received before. If so, it rejects it. The receiver must keep a record of the last 16 TSC values and check them off as they are received.
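A minimal sketch of such a replay window follows; the bookkeeping is simplified and the class and method names are illustrative:

```python
class ReplayWindow:
    """Track the highest TSC seen plus the individual TSCs in a 16-value window."""
    SIZE = 16

    def __init__(self):
        self.highest = -1
        self.seen = set()

    def accept(self, tsc: int) -> bool:
        if tsc > self.highest:
            self.highest = tsc                  # window slides forward
            self.seen.add(tsc)
            # Forget TSCs that have fallen out of the 16-value window.
            self.seen = {t for t in self.seen if t > self.highest - self.SIZE}
            return True
        if tsc <= self.highest - self.SIZE:
            return False                        # below the window: replay
        if tsc in self.seen:
            return False                        # in window, already received
        self.seen.add(tsc)                      # a burst-ack resend filling a gap
        return True
```

A resent frame whose TSC falls inside the window is accepted once and rejected on any repeat, which is exactly the behavior burst-ack needs.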
This set of rules is more complicated than the simple test we started with, but it effectively prevents replay attacks while allowing the protocol to run efficiently.
Countering the FMS Attack
The most devastating attack against WEP was described in a paper by Scott Fluhrer, Itsik Mantin, and Adi Shamir and is usually referred to as the FMS attack. This weakness allows script tools to deduce the secret key simply by monitoring the link. Let's take a (very) quick look at the attack.
The RC4 cipher generates a pseudorandom stream of bytes that can be combined with the data to be encrypted so the whole stream of data looks like random noise. To generate the random sequence, RC4 uses a 256-byte array of values that is reorganized into a different pattern between each byte of random output. The 256-byte array is initialized using the secret encryption key.
Certain key values, called weak keys, create a situation in which the first few bytes of pseudorandom output are not all that random. In other words, if a weak key were being used, there would be a higher probability of certain values coming out in the first few bytes. This fact is exploitable: If you know a weak key is being used, you can work backwards to guess the entire encryption key value just by looking at the first few bytes from a collection of packets. WEP has this particular weakness because it uses the IV value as the first bytes of the key, and the IV value is visible to everyone. As a result, a weak key will eventually be used, thereby giving an enemy the basis for an attack.
Ron Rivest, the designer of RC4, recommended a simple solution to this problem. You simply generate, and throw away, 256 bytes of the random stream before you start encrypting. This approach denies the attacker a chance to get hold of the dangerous first few bytes and ensures that the whole 256-byte array is fully churned up and no longer contains any accessible key information. Well, this would be a simple answer except that, for most of the Wi-Fi cards shipped with WEP, the hardware assist in the MAC chip does not support this solution. It is programmed to start using the first random bytes as soon as they are ready.
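A minimal RC4 in Python makes both the mechanism and Rivest's fix concrete; the `drop` parameter discards leading keystream bytes, which is exactly what legacy WEP hardware could not do:

```python
def rc4_keystream(key: bytes, n: int, drop: int = 0) -> bytes:
    # Key-scheduling algorithm (KSA): scramble the 256-byte array using the key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Output generation (PRGA), optionally discarding the first `drop` bytes.
    # Discarding 256 bytes is Rivest's recommended defense against FMS.
    out = bytearray()
    i = j = 0
    for k in range(drop + n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        if k >= drop:
            out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Encryption is then just XORing the keystream with the data; the FMS attack lives entirely in those first few PRGA bytes that `drop=256` would throw away.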
Given this severe known weakness, and denied the opportunity to resolve it in the recommended way, the designers of TKIP decided on a two-pronged defense: change the encryption key for every packet, and avoid IV values that generate known weak keys.
The FMS attack depends on the ability of the attacker to collect multiple samples of frames with weak keys. Only about 60 such frames are needed before the first bits of information emerge, and complete recovery of the key can occur after a few million packets. So how to defeat the attack? The approach adopted in TKIP is to change the secret key with every packet. If you take this approach, the attacker can never get enough samples to attack any given key. This sounds impractical, but in the next section you will see how you can make this change even on older adapter cards.
Another defense against FMS attack is to avoid using weak keys at all. The problem is that no one knows for sure what all the weak keys are. Cryptographers can say that a certain type of key is weak but they can't say that all the others are strong. However, the following section on key mixing shows that two bits in the IV are always fixed to a specific value to avoid a well-known class of weak keys.
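Concretely, the middle byte of the 24-bit IV field is derived with two bits forced, so it can never take values such as 0xFF that appear in the classic FMS weak-key pattern. This is a sketch of that derivation; the exact weak-key class targeted is my characterization and worth double-checking against the specification:

```python
def dummy_byte(tsc_hi: int) -> int:
    # Bit 5 forced on, bit 7 forced off: whatever the counter value, the
    # middle IV byte can never be 0xFF, dodging the (x, 0xFF, y) pattern
    # of the well-known FMS weak-key class.
    return (tsc_hi | 0x20) & 0x7F

# No input byte can produce 0xFF in the forced position.
assert all(dummy_byte(b) != 0xFF for b in range(256))
```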
Some vendors have modified their WEP implementations to avoid IVs that produce weak keys. However, there is another problem with this approach. We already know that there are not enough IV values available when the IV is 24 bits long. If we now reduce the IV space still further, we reduce one problem but make another one worse! This problem is removed in TKIP simply by doubling the length of the IV.
This section focuses on the changes to the use of the IV in TKIP. There are three significant changes: The length is increased to 48 bits, the IV doubles up as a sequence counter (the TSC), and the IV is combined with the secret key in a more complicated way than WEP. The last feature achieves two results: It enables the 48-bit IV to be incorporated without changing the implementation hardware and it avoids the use of a known class of weak keys. The changes to the IV provide a significant amount of extra security over WEP.