Overview of IEEE 802.11 PHYs

There are a remarkable number of Physical layer choices that may be used in the context of IEEE 802.11 WLANs. The large number of choices arises because there are a number of different physical media over which the frames can be transmitted, including diffuse infrared optical, as well as several different radio frequency bands: a set of channels in the 2.4 GHz unlicensed ISM[2] band (2.450 ± 0.050 GHz, a total RF bandwidth of 100 MHz), and several other unlicensed bands in the neighborhood of 5 GHz, whose usability depends on the regulatory domain (i.e., country or geographical region) in which the product is being used.

[2] There are numerous unlicensed Industrial, Scientific, and Medical (ISM) bands scattered throughout the RF spectrum in the United States. The FCC manages the spectrum and defines how each slice is used. Other countries have their own regulatory agencies, and they may or may not have parallel spectrum assignments that are functionally equivalent to the U.S. ISM bands.

The IEEE 802.11b-1999 and IEEE 802.11g-2003 standards, in the United States (and in most of the world), actually use a subset of the 2.4 GHz band, ranging from 2.401 to 2.4835 GHz. However, in Japan, frequencies up to 2.495 GHz may be used.


The fact that WLANs operate in unlicensed spectrum is a critical distinction from other wireless data standards such as PCS, 3G, or 4G wireless [WAN] networks. The spectrum for those services was sold at a very high price, and it must be frustrating to the operators who spent literally billions of dollars building those networks that it is now possible to deploy technologies that have much better performance and can operate in spectrum that has no associated cost. Operators of WLAN-based access networks may have a substantial advantage over operators of competing technologies.

Due to the richly inter-related nature of the concepts and issues involved in WLAN PHYs, including the physical RF spectrum allocation, the modulation schemes, and the regulatory issues surrounding product development and deployment, the author will first discuss the PHYs together, to show their similarities, and then summarize each individual PHY in its own section. The author believes that the similarities between the IEEE 802.11 PHYs are more important than their differences and that the differences are best appreciated when you don't focus on them alone.

This chapter does not go into substantial detail with respect to Frequency Hopping Spread Spectrum (FHSS) techniques such as those specified in IEEE 802.11-1999 for use in the 2.4 GHz band to achieve speeds of 1 or 2 Mbps. FHSS technology has been widely deployed in the context of other wireless communication systems, and there is a large installed base of FHSS-based systems deployed today. However, the frequency hopping variety of spread spectrum is effectively a dead-end technology insofar as WLANs are concerned. The Direct Sequence Spread Spectrum (DSSS) techniques covered in this chapter are scalable to much higher throughput rates. This sidebar gives a "just the facts" level of coverage of FHSS.

The IEEE 802.11-1999 FHSS PHY uses the 2.4 GHz band in a completely different way than the Barker code, a form of Direct Sequence Spread Spectrum (DSSS) that also achieves 1 or 2 Mbps. The FHSS PHY uses 2GFSK (two-level Gaussian Frequency Shift Keying) coding to achieve 1 Mbps and 4GFSK (four-level GFSK) coding to achieve 2 Mbps.

Regardless of the speed at which it is operating, the FHSS PHY relies on a pseudo-random frequency-hopping sequence that is communicated in advance to all the FHSS STAs. In North America and Europe (although not Spain or France), there are 79 channels within the 2.4 GHz band when it is used for FHSS operation, and they are mapped by a simple formula: multiply the channel number by 1 MHz and add the result to 2400 MHz; the sum is the channel's center frequency in MHz. There is no channel 1; the first channel is 2 and the 79th is 80, so the range of frequencies available to the FHSS PHY is from 2.402 GHz to 2.480 GHz. The random hopping sequence mentioned earlier is advertised by the AP so that all the STAs will be able to synchronize themselves with the hopping pattern.
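As a quick illustration, the channel-to-frequency rule above is simple enough to express in a few lines of Python (the helper name below is hypothetical, purely for illustration):

    # FHSS channel mapping for North America/Europe, as described above:
    # center frequency (MHz) = 2400 MHz + (channel number x 1 MHz), channels 2..80.
    def fhss_channel_to_mhz(channel: int) -> int:
        if not 2 <= channel <= 80:
            raise ValueError("channel out of range for this regulatory domain")
        return 2400 + channel

    assert fhss_channel_to_mhz(2) == 2402    # lowest channel -> 2.402 GHz
    assert fhss_channel_to_mhz(80) == 2480   # highest channel -> 2.480 GHz

Other regulatory domains use the same formula with different channel ranges, as noted below.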

In Japan, 23 channels are available (from 2.473 to 2.495 GHz, numbered as shown previously, except instead of 2 through 80, Japan uses 73 through 95). Finally, Spain uses channels 47 through 73, and France uses channels 48 through 82.

No discussion of FHSS would be complete without mentioning that the author has never seen an IEEE 802.11-1999 product that implements the FHSS PHY. The author would not want to give the impression that he has seen everything; however, he believes that FHSS is not worth covering in detail, as it is distracting from the IEEE 802.11 WLAN PHYs that readers are likely to encounter in the real world (e.g., IEEE 802.11a-1999, IEEE 802.11b-1999, and IEEE 802.11g-2003).

While FHSS may be important to someone studying signal processing, it is the author's opinion that FHSS is not of direct practical interest, nor is it the basis for future progress in WLANs. There are numerous books on signal processing and wireless communications theory in which a sufficiently motivated reader can find arbitrarily detailed information on FHSS technology.


The IEEE 802.11 wireless LAN standards have a number of different physical layers, most of which operate in various channels within certain well-known RF bands. It probably goes without saying, but a product that operates in the 5 GHz band has a 5 GHz radio subunit, and only receives or transmits energy in this part of the radio frequency spectrum. In particular, unless the product also has a 2.4 GHz radio subunit, it cannot receive or transmit 2.4 GHz signals.

One reason that a 2.4 GHz system can't receive a 5 GHz signal (or vice versa) is that it takes completely different Physical-layer radio circuitry (including tuners, mixers, amplifiers, etc.) to receive or transmit at each frequency; in other words, each frequency band requires its own analog radio subunit.

In addition, each frequency band calls for a different antenna length. For reasons that are too complex to describe here, antennas are most efficient when they are approximately one quarter as long as the wavelength of the radio waves that they are designed to receive. Basic physics of wave phenomena tells us that the frequency of a wave times its wavelength equals the speed at which the wave propagates.

In the case of electromagnetic radiation, that propagation speed is "c" (the speed of light, which is accepted to be 299,792,458 meters per second, or 299,792,458,000 millimeters per second). By simple algebra, we find the quarter-wavelength by dividing c by four times the frequency. By such a calculation, an optimal antenna for the 2.4 GHz band is about 30.75 mm (299,792,458,000 / (4 × 2,437,000,000)).[3] A similar calculation shows that an optimal antenna for the 5 GHz band is about 14.28 mm (299,792,458,000 / (4 × 5,250,000,000)).[4]

[3] 2.437 GHz is the center frequency of Channel 6 in the 2.4 GHz band, which is the default channel for most products based on IEEE 802.11b.

[4] 5.25 GHz is the center frequency of the 5.15–5.35 GHz "U-NII lower band," which is where IEEE 802.11a's eight contiguous non-overlapping channels are located (there are four other non-overlapping channels in a disjoint portion of the U-NII spectrum (the upper band)).

For those of us who are more comfortable with English units of measure, the ideal 2.4 GHz antenna is just over 1 3/16 inches long, and the length of the ideal 5 GHz antenna is about 9/16 of an inch.
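The quarter-wavelength arithmetic is easy to reproduce; the short sketch below simply evaluates c / (4 × f) for the two center frequencies used in the footnotes:

    # Quarter-wavelength antenna estimate: length = c / (4 * frequency)
    C_MM_PER_SEC = 299_792_458_000                    # speed of light in mm per second

    def quarter_wavelength_mm(freq_hz: float) -> float:
        return C_MM_PER_SEC / (4 * freq_hz)

    print(round(quarter_wavelength_mm(2_437_000_000), 2))   # ~30.75 mm (Channel 6, 2.4 GHz band)
    print(round(quarter_wavelength_mm(5_250_000_000), 2))   # ~14.28 mm (center of lower U-NII band)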

Beyond the actual differences in the radio subunits, and the need for different antennas for each of the RF subunits, there are other, related issues, such as the fact that certain analog RF components, such as power amplifiers, are designed to work efficiently only across a certain frequency range.

Some products may appear to have two antennas, but they are not always both in use, because the radio may (on a frame-by-frame basis) select the antenna with the best signal quality when receiving data. This feature is known as antenna diversity. In some early products, one of the "antennas" was just a dummy (i.e., an empty piece of plastic), since presumably a product with two antennas looked more capable than a product that only had one. In a product with two RF subunits (i.e., one at 2.4 GHz and another at 5 GHz), each plastic antenna housing could easily accommodate two discrete antennas (since they are so short), one for 2.4 GHz and one for 5 GHz.

There are, in fact, dual-band products on the market that support both IEEE 802.11a-1999 and IEEE 802.11b-1999, and now that IEEE 802.11g-2003 has been standardized, we can expect to see dual-band products incorporating all three standards, and such devices are often known by the shorthand moniker "a/b/g."

A dual-band wireless STA will presumably always associate with the fastest AP available, unless the signal quality from that AP would not result in a good-quality connection. Such decisions are out of the scope of the standard, and are an implementation choice for the vendor of the dual-band device.

One important thing to keep in mind about the different modulations that can be used with IEEE 802.11 is that (in general) higher speed modulations have shorter range. A wireless AP using IEEE 802.11b might in theory be able to service a circular area[5] that is 300 feet (90–100 meters) in radius, but the closer a STA is to an AP, the stronger the signal will be, and the more successful the STA will be in using a higher-speed modulation. For STAs based on IEEE 802.11a, the core area in which the signal is strong enough to support the fastest modulations may only be 60–90 feet in radius (i.e., up to 30 meters or less, depending on the number of users). This smaller radius is due to the different propagation characteristics of 5 GHz RF energy relative to the propagation of 2.4 GHz RF energy. Paradoxically, lowering a STA's transmit rate may actually improve throughput, since slower modulation schemes are more robust.


[5] Due to physical obstructions, the design of the radiating antenna, multipath effects, and other nonlinear effects, the coverage area will not be exactly a circle. As can be seen in Chapter 8, common large-scale WLAN designs create a hexagonally packed array of circles with a radius of between 60 and 120 feet (18 to 37 meters).

For the RF-oriented[6] PHYs that have been defined for use with IEEE 802.11, Figure 3-1 shows the valid combinations of frequency band (e.g., 2.4 or 5 GHz), and the associated modulations that are usable within that band to achieve the speeds on the left.

[6] IEEE 802.11-1999 also defined a PHY based on diffuse infrared technology that could operate at 1 or 2 Mbps. The author is not aware of any mass-market products that implemented this type of PHY.

Figure 3-1. Valid IEEE 802.11 operating parameters[7]

graphics/03fig01.gif

[7] Adapted from IEEE Std. 802.11-1999, copyright 1999, IEEE Std. 802.11a-1999, copyright 1999, IEEE Std. 802.11b-1999, copyright 2000, and IEEE 802.11g-2003, copyright 2003. All rights reserved.

Several observations are immediate, the first being that there are a large number of optional modulations for IEEE 802.11g. An implementation of IEEE 802.11g that supported all these optional modulations would be more complex, but the mandatory parts are relatively easy for anyone who has implemented IEEE 802.11a and IEEE 802.11b. In IEEE 802.11g, the modulation scheme of IEEE 802.11a, and thus the baseband processor (BBP) chip design, has effectively been grafted onto an IEEE 802.11b radio.[8]

[8] This is not actually as straightforward as it might sound…the RF components for IEEE 802.11g require better linearity and in general need to be of higher quality than those needed for IEEE 802.11b alone. Clearly, such a higher-quality RF subunit can still support IEEE 802.11b.

The tricky part of IEEE 802.11g is interoperating with legacy IEEE 802.11b devices, which cannot decode the modulations of all the frames that might be flowing across the 2.4 GHz medium. The changes needed to support this coexistence are primarily in two places, namely the MAC sublayer and the Physical Layer Convergence Protocol (PLCP) sublayer, which is part of the PHY layer. However, for now, think of IEEE 802.11g as essentially the modulation scheme from IEEE 802.11a (OFDM[9]) grafted onto the radio from IEEE 802.11b.

[9] OFDM stands for Orthogonal Frequency Division Multiplexing.

OFDM modulation is employed in the same way in both IEEE 802.11a and IEEE 802.11g. Wherever an OFDM-based speed is mandatory in IEEE 802.11a, that speed is also mandatory in IEEE 802.11g. Likewise, wherever an OFDM-based speed is optional for IEEE 802.11a, it is also optional for IEEE 802.11g. Also, within IEEE 802.11g, observe that wherever OFDM is either optional or mandatory to achieve a given speed, CCK[10] -OFDM is an optional modulation that can be used to achieve that same speed (although CCK-OFDM is always optional in IEEE 802.11g, regardless of whether OFDM is mandatory for that speed).

[10] CCK stands for Complementary Code Keying, another form of direct sequence spread spectrum (DSSS) communications, and the most common modulation used with IEEE 802.11b. In CCK-OFDM (also known as DSSS-OFDM), the PLCP header of the frame uses the CCK form of DSSS, while the PLCP payload (the MAC frame) is modulated using OFDM.

The implementation rules for both IEEE 802.11a and IEEE 802.11g are the same as well, in that if an implementer wants to support the highest rate (54 Mbps), which is not mandatory, then that implementation must support all slower optional rates. Every OFDM-based implementation, in order to be compliant with the standard, must include support for the mandatory rates of 6, 12, and 24 Mbps.

Thus, an implementation that supported only 6, 12, 24, and 54 Mbps would not be considered compliant with either IEEE 802.11a or IEEE 802.11g because an implementation that supports 54 Mbps must also support the complete set of optional rates, viz. 9, 18, 36, and 48 Mbps.
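That rule is easy to express as a check on the set of supported OFDM rates. The following is a minimal sketch, assuming an implementation is represented simply as the set of rates (in Mbps) that it supports; the helper name is hypothetical:

    # OFDM rate rules for IEEE 802.11a and the OFDM modes of IEEE 802.11g, as stated above.
    MANDATORY_RATES = {6, 12, 24}

    def ofdm_rate_set_is_compliant(supported: set) -> bool:
        if not MANDATORY_RATES <= supported:             # 6, 12, and 24 Mbps are always required
            return False
        if 54 in supported and not {9, 18, 36, 48} <= supported:
            return False                                 # 54 Mbps drags in all slower optional rates
        return True

    assert ofdm_rate_set_is_compliant({6, 12, 24})               # minimal compliant rate set
    assert not ofdm_rate_set_is_compliant({6, 12, 24, 54})       # the non-compliant example above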

Within a given channel, a modulation scheme will either use the center frequency of the channel as its sole carrier frequency over which its transmissions are modulated, or it will break up the channel into multiple [orthogonal] subcarrier frequencies. For example, in IEEE 802.11a and IEEE 802.11g, OFDM actually breaks up a channel into 52 subcarriers, of which 48 actually carry data. This is true whether OFDM is being used within one of the channels in the 2.4 GHz band (i.e., in a product based on IEEE 802.11g-2003), or within one of the channels in the 5 GHz band (in a product based on IEEE 802.11a-1999).
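To see how 48 data subcarriers relate to the advertised data rates, consider the back-of-the-envelope calculation below. The per-rate modulation orders, coding rates, and the 4-microsecond OFDM symbol duration are details of the IEEE 802.11a/g OFDM PHY that this chapter does not spell out, so treat the sketch as an illustrative cross-check rather than a normative table:

    # Illustrative cross-check: rate = 48 subcarriers * coded bits per subcarrier
    #                                  * coding rate / OFDM symbol duration
    DATA_SUBCARRIERS = 48
    SYMBOL_SECONDS = 4e-6                      # 4 microsecond OFDM symbol (assumed here)

    def ofdm_rate_mbps(bits_per_subcarrier: int, coding_rate: float) -> float:
        data_bits_per_symbol = DATA_SUBCARRIERS * bits_per_subcarrier * coding_rate
        return data_bits_per_symbol / SYMBOL_SECONDS / 1e6

    print(ofdm_rate_mbps(1, 1/2))   # BPSK,   rate 1/2 ->  6.0 Mbps (mandatory)
    print(ofdm_rate_mbps(4, 1/2))   # 16-QAM, rate 1/2 -> 24.0 Mbps (mandatory)
    print(ofdm_rate_mbps(6, 3/4))   # 64-QAM, rate 3/4 -> 54.0 Mbps (optional)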

This "carrier" frequency concept is not the same thing as the "non-overlapping channels" that will be discussed in more detail shortly. With respect to the number of carriers in each modulation, the only multicarrier schemes are the ones based on OFDM (DSSS-OFDM[11] included). All the other modulation schemes are based on a single carrier frequency:

[11] DSSS-OFDM is a hybrid of CCK and OFDM in which the usual OFDM PHY header is replaced with a CCK-style PHY header. The term CCK-OFDM may be used interchangeably with DSSS-OFDM (the IEEE 802.11g-2003 standard uses the latter terminology).

  • Barker with DBPSK or DQPSK

    • Supports speeds of 1 Mbps (using DBPSK) and 2 Mbps (using DQPSK)

    • Defined by IEEE 802.11-1999

      • Mandatory-to-implement

  • Complementary Code Keying (CCK)

    • Supports speeds of 5.5 and 11 Mbps

    • Defined by IEEE 802.11b-1999

      • Mandatory-to-implement

  • Packet Binary Convolutional Coding (PBCC)

    • Supports speeds of 5.5 and 11 Mbps

    • Defined by IEEE 802.11b-1999

      • Optional-to-implement

    • Extended by IEEE 802.11g-2003 to provide speeds of 22 and 33 Mbps[12]

      [12] PBCC was defined in IEEE 802.11b as an optional modulation scheme, offering the same speeds as CCK, namely 5.5 and 11 Mbps, so its appearance in IEEE 802.11g is not surprising, although you may never have heard of it before. The IEEE 802.11g-2003 variant of PBCC achieves higher speeds (e.g., 22 and 33 Mbps).

      • Optional-to-implement

Channels within the 2.4 GHz Band

Within the 2.4 GHz band, there are 14 channels that may be used by either IEEE 802.11b or IEEE 802.11g. Each channel is numbered according to its center frequency, starting with 2412 MHz for Channel 1 and incrementing by 5 MHz for each successive channel up through Channel 13 at 2472 MHz. The channels that would have been centered at 2477 and 2482 MHz are not defined, but Channel 14, at 2484 MHz, has been defined (specifically, Channel 14 is only defined for use in Japan; it is centered 2 MHz above the 2482 MHz slot that a continuation of the 5 MHz pattern would have suggested).

Each of these channels is 22 MHz wide, meaning that they range from the point 11 MHz below the channel's center frequency, to the point 11 MHz above the channel's center frequency. This means that, for example, Channel 1 runs from 2401 to 2423 MHz, since it is centered at 2412 MHz. Channel 2 overlaps almost completely with Channel 1, since it spans the range from 2406 to 2428 MHz (centered on 2417 MHz).

It turns out that one has to move all the way up to Channel 6, which begins at 2426 MHz, is centered on 2437 MHz, and ends at 2448 MHz, to find the first channel above Channel 1 that does not overlap with Channel 1's range of frequencies. Similarly, one must then skip all the way to Channel 11 before one finds a channel that does not overlap with Channel 6. Channel 11 is bounded by 2451 MHz and 2473 MHz, centered on 2462 MHz. This is why the channel allocation within the 2.4 GHz band is said to provide for three non-overlapping channels. The three channels that do not overlap are Channel 1, Channel 6, and Channel 11.
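The channel arithmetic above is simple enough to verify programmatically. Below is a minimal sketch (the helper names are hypothetical) that encodes the 5 MHz numbering, the 22 MHz channel width, and the resulting overlap test:

    # 2.4 GHz band channel arithmetic, as described in the text.
    def center_mhz(channel: int) -> int:
        if channel == 14:
            return 2484                        # Japan-only special case
        return 2412 + 5 * (channel - 1)        # Channels 1 through 13

    def edges_mhz(channel: int):
        c = center_mhz(channel)
        return c - 11, c + 11                  # each channel is 22 MHz wide

    def overlaps(a: int, b: int) -> bool:
        a_lo, a_hi = edges_mhz(a)
        b_lo, b_hi = edges_mhz(b)
        return a_lo < b_hi and b_lo < a_hi

    assert edges_mhz(1) == (2401, 2423)
    assert overlaps(1, 2) and not overlaps(1, 6)         # Channel 6 is the first clear of Channel 1
    assert not overlaps(6, 11) and not overlaps(11, 14)  # Channels 11 and 14 are merely adjacent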

A word on notation

Since 1 GHz is exactly the same as 1000 MHz, it is equally correct to refer to a frequency as either 2472 MHz or 2.472 GHz. By extension, the same frequency could be referred to as 2,472,000 kHz, 0.002472 THz (because 1 terahertz (THz) is exactly 1000 GHz), or 2,472,000,000 Hz. The author may choose to refer to frequencies in both the 2.4 GHz and 5 GHz bands in units of either MHz or GHz, depending on which is more convenient in a given context.


Figure 3-2 clearly shows that while 14 channels are defined inside the 2.4 GHz band for use within the context of IEEE 802.11b, there is no regulatory domain in which all of these 14 channels are concurrently usable.

Figure 3-2. IEEE 802.11b/g Channelization within the 2.4 GHz Band

graphics/03fig02.jpg

Figure 3-2 is actually a synthesis of several tables from IEEE 802.11b-1999 and IEEE 802.11b Corrigendum 1-2001[13] and shows the low, center, and high frequencies for each channel on the left, and on the right shows the defined regulatory domains in which each channel may operate. The left portion of Figure 3-2 shows the frequency boundaries that define each channel in the 2.4 GHz band. The non-overlapping channels 1, 6, and 11 are highlighted in gray.

[13] A "corrigendum" is similar to an errata sheet. Webster's Dictionary defines the word as follows: "an error in a printed work discovered after printing and shown with its correction on a separate sheet."

On the right side of the table, each column corresponds to a different regulatory domain and indicates whether each channel is legally usable in that domain. The leftmost column corresponds to the United States (regulated by the Federal Communications Commission). The second column is Canada (regulated by Industry Canada (IC), formerly the Department of Communications). Those European countries that follow ETSI (European Telecommunications Standards Institute) regulations are grouped together in the third column. Several individual European countries have their own rules, viz. Spain and France. Finally, Japan has two different regulatory domain identifiers: one in which only Channel 14 is usable, and another in which Channels 1 through 13 are all usable, similar to the characteristics of the ETSI domain.

An observant reader will note that there is 3 MHz of separation between both pairs of the non-overlapping channels (i.e., between Channels 1 and 6, and between Channels 6 and 11). As it is currently defined, Channel 14 happens to be adjacent to Channel 11, and strictly speaking, does not overlap with it, although there is no space between Channel 11 and Channel 14. So, it would seem that the reason why Channel 14 was not centered on 2477 or 2482 MHz (the two "unused" channels in Figure 3-2) is that if Channel 14 had been defined to use either of those channels, it would definitely have overlapped with Channel 11.

Curiously, if Channel 14 had been defined to be centered on 2487 MHz, then it would have constituted a fourth non-overlapping channel, since its low-end frequency would have been 2476 MHz, which is 3 MHz above the highest frequency in Channel 11 (2473 MHz). Such a definition for Channel 14 would not have pushed the top of Channel 14 outside the U.S. ISM band, since such a definition of Channel 14 would have topped out at 2498 MHz, which isn't as close to the top of the U.S. ISM band as Channel 1 is to the bottom of the U.S. ISM band. This is mostly a moot point, however, since Channel 14 is only defined for use in Japan. Moreover, in the United States and in many other parts of the world, the spectrum for IEEE 802.11b and IEEE 802.11g tops out at 2.4835 GHz (i.e., just beyond the top of Channel 13). As of this writing, Japan is the only country that has defined a channel beyond 2.4835 GHz.


As can be seen in Figure 3-2, in the majority of countries, Channels 1 through 11 are usable, but in parts of Europe it is possible to use Channels 12 and 13, and in Japan Channel 14 has been defined (Channel 14 is only available in Japan). Almost more important than the "extra" channels that one may be able to use in some locations are the apparently "normal" channels that cannot be used in certain places: note that in Spain and France, Channels 1 through 9 are not usable. Spain would appear to be the most restrictive country with respect to the 2.4 GHz band, as only two channels are available for use in Spain.
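These constraints can be captured in a simple lookup table. The sketch below transcribes Figure 3-2 as just described (the domain labels are informal, not the identifiers defined in the standard):

    # Allowed IEEE 802.11b/g channels per regulatory domain, per Figure 3-2 as
    # summarized in the text (informal labels, not the standard's domain codes).
    ALLOWED_CHANNELS = {
        "FCC (United States)":      set(range(1, 12)),    # Channels 1-11
        "IC (Canada)":              set(range(1, 12)),    # Channels 1-11
        "ETSI (most of Europe)":    set(range(1, 14)),    # Channels 1-13
        "Spain":                    {10, 11},
        "France":                   {10, 11, 12, 13},
        "Japan (Channel 14 only)":  {14},
        "Japan (Channels 1-13)":    set(range(1, 14)),
    }

    def channel_is_legal(domain: str, channel: int) -> bool:
        return channel in ALLOWED_CHANNELS[domain]

    assert not channel_is_legal("Spain", 6)      # one reason Channel 6 is a poor default
    assert channel_is_legal("France", 11)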

Most vendors of equipment based on IEEE 802.11b have chosen Channel 6 as the default for their WLAN devices. However, from a regulatory perspective, this would appear to be a less than optimal choice. Specifically, note that Channel 11 is usable in all but one regulatory domain, while there are several regulatory domains in which Channel 6 is not legally usable.

A case could be made for making Channel 11 the default channel, but in the real world Channel 6 is, in fact, the default channel in most IEEE 802.11b products on the market. However, this has minimal impact, since when a STA is activated in a new location, it uses MAC-layer mechanisms to determine which channel(s) is (are) available for its use, so the choice of default is not as important as it might seem to be (at least for STAs; in an AP, the default channel should be chosen as a result of entering the regulatory domain in which the AP is located).

Channel Selection by a STA

If you have spent any time digging around in the user interface of your WLAN card, you probably found an interface that allows you to display the currently selected channel on which your WLAN card is operating. To be more precise, the control panel or configuration utility allows you to see what channel the driver has selected; the user typically cannot pick the channel arbitrarily. Rather, the driver software dynamically finds a channel in which an AP has a nice strong signal (other criteria may also be important in the selection of the best channel), and then associates with the AP that is in control of that channel.

When a WLAN card is initialized, the driver spends some time looking for an AP, and uses internal rules to pick the best one (strongest and/or best quality RF signal; acceptable security policy; legal authorization to use a certain channel in a certain location; etc.). The control panel shows the user what channel has been selected by the driver. The author's laptop is running Microsoft Windows 2000, and has a WLAN card.

Figure 3-3 shows the user interface that is exposed by the WLAN card's configuration utility under Windows 2000.[14] In this user interface, clicking the Rescan button forces the card to change channels.

[14] Other than showing this generic screen shot, I won't identify the brand here (not that I have anything particularly bad or good to say about the card; it has a Wi-Fi logo, and it works). If you recognize the brand from this screen shot, you should not infer any endorsement of the associated product, product line, or corporation.

Figure 3-3. Windows 2000 User Interface of an IEEE 802.11b STA showing channel selection

graphics/03fig03.gif

For most products based on IEEE 802.11b and IEEE 802.11g, the configuration utility should be able to display Channels 1 through 11 or conceivably Channels 1 through 14, if the card was designed to support up to Channel 14. The range of available channels will depend on the capabilities of the radio in the WLAN card, as well as the driver software (i.e., it's possible that a WLAN card that seemingly doesn't support Channel 14 could be upgraded to support Channel 14 simply by updating its firmware and/or driver software; in other words, it's possible that the radios in many existing IEEE 802.11b products would be able to "tune in" to Channel 14 if their software was capable of directing them to do that).

Figure 3-4 shows the similar configuration utility from Red Hat Linux 9.0's Network Administration Tool (neat). The terminology used within neat is somewhat unique, in that the tool refers to three types of WLAN, viz. "Managed," "Ad-Hoc," and "Auto." "Managed" is what we normally see referred to as "infrastructure" (i.e., basic or extended service set (BSS or ESS)) mode, while "Ad-Hoc" refers to an "independent" basic service set (IBSS). "Auto" mode allows the driver to choose to join whatever WLAN it discovers nearby, if any, perhaps requiring user input if more than one WLAN is available from which to choose.

Figure 3-4. Red Hat Linux 9.0 "neat" User Interface of an IEEE 802.11b STA showing channel selection in "Managed" (infrastructure) mode

graphics/03fig04.jpg

Figure 3-5 shows the same screen with the mode set to Ad-Hoc.

Figure 3-5. Red Hat Linux 9.0 "neat" User Interface of an IEEE 802.11b STA showing channel selection in "Ad-Hoc" mode

graphics/03fig05.jpg

Note that in Red Hat Linux 9.0, as well as in Microsoft Windows 2000, the WLAN configuration utilities both ask for the following information: 1) the SSID, and 2) the mode of operation (essentially, the latter boils down to IBSS (ad-hoc) vs. infrastructure (BSS or ESS) mode).

In the Windows user interface, the indicators of signal strength and quality are on the same screen as the SSID prompt, while the mode switch is on a different screen. Figure 3-6 shows the other configuration screen for the author's WLAN card configuration utility, again under Microsoft Windows 2000.

Figure 3-6. Windows 2000 User Interface of an IEEE 802.11b STA showing SSID and Power Save mode controls

graphics/03fig06.jpg

One other difference between the Red Hat Linux 9.0 and Windows 2000 configuration utilities is their treatment of Wired-Equivalent Privacy (WEP) encryption configuration. The WEP key is displayed on the main neat screen, but it is in a separate screen under Windows 2000.

These differences are only cosmetic, as there is no "standard" arrangement for the configuration screens. Different products might choose to present the configurations in different ways. As long as they all allow the SSID and mode to be chosen, along with (optionally) the WEP key(s), then a given device will be able to participate in the WLAN.

Figure 3-7 shows the encryption configuration screen for the author's WLAN card configuration utility, again under Microsoft Windows 2000.

Figure 3-7. Windows 2000 User Interface of an IEEE 802.11b STA showing encryption mode controls

graphics/03fig07.jpg

For completeness, Figure 3-8 shows the configuration screen of a Macintosh computer equipped with an AirPort WLAN card, using the MacOS 9 operating system. As you can see, the same information is presented, differing only in the layout of the configuration screens.

Figure 3-8. Apple MacOS 9 AirPort User Interface of an IEEE 802.11b STA showing SSID and implicit mode selection

graphics/03fig08.jpg

Note that in the Apple AirPort user interface, the term "ad-hoc" does not appear; it is replaced by the term "Computer to Computer." By not choosing "Computer to Computer" the user is implicitly choosing infrastructure mode (in the author's opinion, Apple has done a good thing by removing the jargon from their configuration utility).

Most APs have similar user interfaces (typically web-based, so their appearance does not depend on the management client's operating system) that allow the user to specify the channel on which the AP will operate, as well as the SSID (in the case of the author's AP, the term used is ESSID, which is also equivalent to the terms BSSID and SSID).

Figure 3-9 shows the main configuration screen of the author's Access Point. This is a cropped screen shot of a portion of a web page exposed by the management utility within the Access Point.

Figure 3-9. Web-based User Interface of an IEEE 802.11b AP showing SSID, encryption, and channel selection controls (among others)

graphics/03fig09.jpg

The particular user interface on the author's AP exposes the active WEP key to the view of the management client, so the author deleted the key before taking the screen shot. Normally, in the particular case of the implementation of the author's AP's web-based configuration utility, the key would be displayed as 26 hex digits (since the key is 104 bits, or 13 bytes, in length, and each byte is represented as two hexadecimal digits).

The main configuration difference between a STA and an AP is that a STA discovers the channel that it thinks is best, while a network manager must configure an AP with the channel on which it will operate, and must define the SSID for the local WLAN centered on the AP (if multiple APs are in the WLAN, it is known as an "Extended Service Set," hence the term ESSID; a single-AP WLAN is a "Basic Service Set," hence the term BSSID).

The STA is not statically configured with a channel number. A STA also can discover which SSIDs are nearby, and present the user with a choice. In principle, a STA needs no a priori settings, unless it desires to use WEP encryption.

Roaming 'Round the World

Different parts of the world have different regulations controlling acceptable use of the radio frequency spectrum. These rules may apply on a scale as small as a country, or be a regional regulation (e.g., being defined uniformly throughout a region, such as Europe). Other rules may apply only in very small areas, such as on a military base.

Ideally, users should be able to roam to any geographical area they choose, and their WLAN adapter should detect its location and automatically adapt its transmitter to comply with local regulatory requirements. Clearly, to help WLAN users adhere to local regulations, a mechanism is needed to help STAs seamlessly discover where they are (in the coarse geographic sense of the word where), so that they know which channels are legally available for their use.

The IEEE 802.11 WG has defined specific procedures that attempt to maximize interoperability, while still providing mechanisms to allow users in many countries to use WLANs, while complying with local laws and/or regulations. End users should never have to wonder what the legal configuration of their STA's WLAN card is. The card should learn from the AP any relevant restrictions on its operation, before it ever transmits anything.[15]

[15] It is possible that a STA may transmit for a brief time while it is trying to find an AP. A STA has two choices…either wait for an AP to identify itself, or actively probe all the channels until an AP answers. Once the AP's response is heard, the STA can restrict future transmissions so that they always use one of the allowed channels.

As shown in Figure 3-2, the original specification for IEEE 802.11b-1999 defined various "handles" to label each regulatory domain. Enabling automatic "domain discovery" based on those labels is exactly the function of the IEEE 802.11d-2001 specification, entitled: "Amendment 3: Specification for operation in additional regulatory domains."

For example, the IEEE 802.11d-2001 specification allows an AP to advertise the regulatory domain in which it is operating (as configured by the network manager; this cannot be automatic!). IEEE 802.11d applies to any PHYs operating in the 2.4 GHz portion[16] of the RF spectrum. STAs within range of an IEEE 802.11d-capable AP use the advertised regulatory domain information, so a suitably configured AP can operate according to whatever legal constraints are imposed on radio frequency devices within that domain, and nearby STAs are given the information they need to decide which channel(s) are legally available to them.

[16] IEEE 802.11h, which is currently under development, but is almost finished being standardized (should be published in 2003), brings similar regulatory domain detection functionality to the 5 GHz domain, as well as enhanced power control to support the various limits on transmitted power in different regulatory domains. The title of IEEE 802.11h is (verbatim): "Spectrum and Transmit Power Management extensions in the 5GHz band in Europe."

IEEE 802.11 Operation at 5 GHz: IEEE 802.11a

The use of the 5 GHz spectrum by PHYs based on IEEE 802.11a-1999 is quite different from the uses of the 2.4 GHz spectrum by either IEEE 802.11b-1999 or IEEE 802.11g-2003. The most obvious difference is that the 5 GHz spectrum is not contiguous with the 2.4 GHz spectrum. Devices that operate at 5 GHz cannot interfere with those that operate in the 2.4 GHz spectrum, which means that one could operate two WLANs in the same physical space such that they would not interfere with each other. In fact, because IEEE 802.11b supports three non-overlapping channels, and because IEEE 802.11a supports twelve non-overlapping channels, one could actually operate 15 unique WLANs in the same physical space.

The IEEE 802.11a-1999 standard was written to allow operation over the United States "Unlicensed National Information Infrastructure" (U-NII) band. The U-NII was allocated to support devices that provide high-speed wireless digital communications for short-range, fixed, and point-to-point applications on an unlicensed basis. The U.S. U-NII band comprises a lower band (5.15–5.35 GHz) and an upper band (5.725–5.825 GHz). Due to different limits on radiated power, each band is suitable for slightly different applications.

Although the U-NII would appear to have two bands[17], the U-NII is actually divided into three logical bands, as shown in Figure 3-10, with each of the three bands occupying 100 MHz. The difference in the bands is the maximum power allowed by the FCC.

[17] The two U-NII bands consist of the 200 MHz "lower band" that ranges from 5.15 to 5.35 GHz, and the 100 MHz "upper band," which spans from 5.725 to 5.825 GHz. The upper and lower bands are not contiguous.

Figure 3-10. Definition of the U-NII band

graphics/03fig10.gif

The 5.15–5.25 GHz portion of the lower U-NII band has the most restrictive power limits, yet is still suitable for indoor or other short-range applications (e.g., WLANs). The power is restricted to this extent to keep these devices from interfering with mobile satellite service operations. The 5.25–5.35 GHz portion of the lower U-NII band has reduced restrictions on radiated power, so it is additionally suitable for use between buildings, as well as within them. Finally, the upper U-NII band (5.725–5.825 GHz) has the least-restrictive regulation with respect to radiated power, allowing for longer distance (on the order of a few kilometers) operation when the signal is guided with directional antennas. A device operating in the upper band need not always use more power than a device designed for the lower bands just because it can. It is up to the designers of an IEEE 802.11a product whether they want to support only the lower U-NII band, or want to provide a radio that is tunable across a much wider range.

As an aside, there is also an ISM band in the 5.0 GHz spectrum (5.8 ± 0.075 GHz, or from 5.725 to 5.875 GHz), of which the upper U-NII band is a subset. The upper end of the ISM band extends beyond the top end of the U-NII band. The spectrum that extends beyond the defined top end of the upper U-NII band is not available for use by IEEE 802.11a products (as of the time of this writing).

Channel Definitions within the U-NII Band

Channel spacing within the IEEE 802.11a specification is based on 5 MHz multiples. Effectively, this defines a grid of 201 possible channel center frequencies, numbered 0 through 200, in the 1000 MHz range between 5 and 6 GHz, where the center frequency of channel n is 5000 MHz + (n × 5 MHz). For example, Channel 40 would be at 5200 MHz, and Channel 41 would be at 5205 MHz.

To keep some separation between the IEEE 802.11a channels and the edges of the U-NII spectrum, the IEEE 802.11a-1999 standard specifies that there be 30 MHz of unused space on either side of the 5.15–5.35 GHz U-NII band (the lower band). As depicted in Figure 3-11, the remaining 140 MHz of spectrum in the lower U-NII band can accommodate eight 20 MHz channels, in which the separation between the centers of two adjacent channels is 20 MHz.
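The lower-band channel layout follows directly from those rules (the 5 MHz numbering grid, 30 MHz of guard space, and 20 MHz channel spacing), as the short sketch below shows:

    # Deriving the eight lower-band IEEE 802.11a channels from the rules above.
    BAND_LOW_MHZ, BAND_HIGH_MHZ = 5150, 5350     # lower U-NII band
    GUARD_MHZ, SPACING_MHZ = 30, 20

    centers = list(range(BAND_LOW_MHZ + GUARD_MHZ,
                         BAND_HIGH_MHZ - GUARD_MHZ + 1,
                         SPACING_MHZ))
    channels = [(mhz - 5000) // 5 for mhz in centers]   # channel number = (f - 5000 MHz) / 5 MHz

    assert centers == [5180, 5200, 5220, 5240, 5260, 5280, 5300, 5320]
    assert channels == [36, 40, 44, 48, 52, 56, 60, 64]  # the eight non-overlapping channels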

Figure 3-11. Eight contiguous IEEE 802.11a channels in the lower U-NII band

graphics/03fig11.gif

These eight channels are all non-overlapping. For readers who have heard some things about IEEE 802.11a, these are the "eight non-overlapping" channels that vendors of IEEE 802.11a products tout as being one reason to claim superiority over IEEE 802.11b products (which are limited to three non-overlapping channels).

In addition, IEEE 802.11a-1999 defines four more channels in the upper U-NII band, with 20 MHz of guard space between the edge of the band and the channels within it. Figure 3-12 shows the layout of these four additional non-overlapping channels.

Figure 3-12. IEEE 802.11a channels in the upper U-NII band

graphics/03fig12.gif

Despite the fact that IEEE 802.11a-1999 offers a total of 12 non-overlapping channels, some early implementations seem to have restricted themselves to the lower band of the U-NII, which only has eight non-overlapping channels.[18]

[18] It is possible to make a radio that can operate within both the upper and lower U-NII bands, but a radio that is tunable over a wider range of frequencies may be slightly more expensive than one that is limited to a smaller frequency range. Such "wideband" radios do exist on the market, and this capability may be a practical necessity for "globally tunable" radios in the 5 GHz neighborhood of the RF spectrum.

The worldwide availability of RF spectrum in the 5 GHz region is not uniform, and as this book is being written, there are numerous efforts underway to "harmonize" spectrum allocations and the standards that depend on them, such that there will be a greater degree of similarity in the end-users' WLAN experience as they roam around the planet. IEEE 802.11h is an emerging standard that can allow IEEE 802.11a devices to operate in Europe, similar to the way that IEEE 802.11d-2001 permits the operation of IEEE 802.11b-1999 devices in different regulatory domains.

The IEEE 802.11h draft is on its way to approval in 2003, but the author is not willing to make any statements about potential spectrum allocations or regulatory changes in Europe, Japan, or elsewhere. Worldwide regulations of devices operating in the 5 GHz spectrum are hopefully going to be converging over the next one to three years, but exactly how the dust will settle is far from clear at this point.

The IEEE 802.11 WG recently formed a new Task Group "j" to define an IEEE 802.11a-like PHY for use in Japan, in their 4.900–5.000 GHz and 5.030–5.091 GHz unlicensed bands. Some manufacturers' "5 GHz" radios are already tunable down into the 4.9 GHz frequency band, but there are not yet defined procedures for using that band in accordance with Japanese law (these procedures are being defined by TGj).

The IEEE 802.11j standard will also support operation in the 5.15–5.25 GHz band in Japan, which happens to be the same frequencies as the lowest U-NII band over which IEEE 802.11a-1999 operates. However, a product using these frequencies in Japan must adhere to the procedures defined in the forthcoming IEEE 802.11j standard. A TGj-compliant PHY may also operate in the middle and upper U-NII bands that are used by IEEE 802.11a-1999 PHYs.

One difference between the forthcoming IEEE 802.11j and IEEE 802.11a-1999 will be that STAs that support the eventual IEEE 802.11j standard will not be able to transmit on those frequencies until they have heard an AP tell them that it is permissible to do so. This implies that STAs based on such PHYs will not be able to operate within an independent BSS (IBSS).[19]

[19] IBSS mode is colloquially known as ad-hoc mode, and in this mode all the STAs associate with each other without relying on a central AP.

Another implication is that any STA incorporating a PHY based on IEEE 802.11j will not support active scanning to join a WLAN, since that would involve transmitting RF energy at those frequencies before the station knew that it was safe to do so.

Again, a STA based on IEEE 802.11j must wait to hear from an AP that it is safe to operate before it can emit any RF energy in these frequency bands. TGj's activity began in 2002, as a study group, but the real business of TGj did not begin until the IEEE 802.11 WG meeting in January 2003. Due to the straightforward nature of the work (adapting IEEE 802.11a's OFDM modulation to operate in a slightly different frequency band, as well as defining a few necessary MAC sublayer protocol changes), TGj appears to have made quick progress. It is possible, bordering on probable, that the IEEE 802.11j draft standard will be ready for Sponsor Ballot by the end of 2003.

The issue with spectrum allocations in the neighborhood of 5 GHz is complicated by the fact that many military and civilian radar systems (and things like microwave landing systems that help planes to land in bad weather) also occupy the same spectral neighborhood. Different countries may have significant issues with a product that works fine in one country, but that interferes with military radar in their country.

In the immediate-to-short term, IEEE 802.11a products are available now, and are being certified by the Wi-Fi Alliance, so that customers can buy 5 GHz products with a Wi-Fi logo with the same confidence that they have had in purchasing IEEE 802.11b products in the 2.4 GHz band. Once the regulatory issues are resolved, the products that people are purchasing today will not become obsolete, although there may be new software that will allow operation in new geographies. Depending on the ability of radios to be tuned to a wide range of frequencies, it may not even be necessary for a user to buy a new product; new software and drivers may suffice.

Bring on the Noise

Much has been made of the fact that radio frequency interference is a problem in the 2.4 GHz ISM band, since microwave ovens, 2.4 GHz cordless phones, and so forth all emit RF radiation in this band. The noisiness of the 2.4 GHz band is one reason that vendors of IEEE 802.11a equipment cite when listing the advantages of their products.

However, the 5 GHz spectrum is not as pristine as it once was, since noise is now creeping into the band (e.g., 5 GHz cordless phones[20] ). Moreover, besides interference from new categories of home-based devices, there are many other types of devices that have been designed to operate in the U-NII.

[20] This is another marketing triumph. The vendors sell 5 GHz cordless telephones to customers based on the unstated assumption that 5 is bigger than 2.4, so it must be better. The same thing happened when cordless phone vendors moved from the 900 MHz ISM band to the 2.4 GHz ISM band. Now, the ironic thing is that as end users have moved out of the 900 MHz band into higher bands, the 900 MHz band has become much cleaner for people who still have "old" 900 MHz cordless phones. Ah, progress.

For example, there are devices that bridge an Ethernet LAN to a "T-3" circuit over the air (for point-to-point applications of the U-NII, the author has seen products that can operate in either the 5.8 GHz portion of the U-NII, or in both the 5.2 and 5.8 GHz portions), and several network access providers are using the 5.8 GHz portion of the U-NII band to provide broadband Internet access (DataCentric Broadband, for just one example).

As IEEE 802.11a products begin to be operated in the 5.8 GHz portion of the U-NII, there is the potential that they will be exposed to interference from some of these other devices that take advantage of the unlicensed spectrum in the U-NII band. It is even the case that point-to-point wireless bridges operating in the lower (5.2 GHz) portion of the U-NII band might possibly interfere with IEEE 802.11a products operating along the line of sight from the transmitter. Granted, these devices might not be that common, but neither are 5 GHz cordless phones…yet. The assertion that the 5 GHz band is cleaner than the 2.4 GHz band will be less true as time goes by.

The fact is that noise can be a problem in any frequency band. In practice, if end users know that they are deploying a WLAN in a commercial kitchen, then it might be wise for them to use IEEE 802.11a, since it won't be affected by the RF background noise from the microwave ovens. Similarly, if a facility uses a lot of cordless phones, then a wise choice of WLAN technology would be one that does not interfere with their installed phones. The cordless phone users will also appreciate the lack of interference (the author can hear static on his cordless phone when his WLAN is active).

If an installation is not near an obvious source of noise, in either the 2.4 GHz or 5 GHz band, then noise need not be the driving criterion when making a choice of WLAN technology.

Remember that in some countries the 5 GHz band also overlaps with military and commercial radars, which do present significant, high-powered sources of noise in the 5 GHz spectrum. A signal that is considered noise by one person may be a valuable signal to another person. One of the features of the IEEE 802.11h specification is that it will enable IEEE 802.11a devices to peacefully coexist with radars, both by detecting their presence and by adjusting their output power in order to share the RF spectrum, which is preferable to not having access to that spectrum at all. It is not clear how much the WLAN device would affect the operation of the radar, especially at a distance, but the radar's power is sufficient to cause significant interference to the WLAN device for nontrivial bursts of time.

The reason this is mentioned is not to discourage investment in IEEE 802.11a devices, just to dispel the myth that the 5 GHz spectrum is a very low-noise environment. If you are considering deploying a WLAN and you are going to make a purchasing decision, be sure to base it on an actual site survey, not a gut feeling. Especially with the imminent availability of products based on IEEE 802.11g-2003, there is no reason to feel that IEEE 802.11a is the only high-performance option, and if this level of performance is important to you, it would behoove you to determine whether you'd be better off deploying 2.4 GHz WLANs or 5 GHz WLANs (and some users may choose not to decide at all, but to use both frequency bands!).

Author's Note

IEEE 802.11a and IEEE 802.11b were standardized in 1999. Some vendors make an argument that says that IEEE 802.11a is "just as mature" as IEEE 802.11b. Well, maybe…if the only metric is the age of the standard. A significant reason why IEEE 802.11a did not garner market share as quickly as IEEE 802.11b did was that it was far more difficult to design CMOS-based RF circuitry that could operate in the 5 GHz band, in compliance with the IEEE 802.11a standard.

In addition, the modulation techniques that IEEE 802.11a uses to encode bits on the wireless medium were also much more complex than those required by IEEE 802.11b. The fact that IEEE 802.11a implementations had to overcome these technical challenges gave a head start to IEEE 802.11b.

Atheros Communications was one of the first companies to do what many people initially believed to be impossible (or at least too expensive to be cost effective)…they implemented IEEE 802.11a in CMOS. However, by the time IEEE 802.11a components became widely available, IEEE 802.11b products had already been in the marketplace for well over a year, and had increased in popularity very quickly after their debut.

There are some practical reasons why IEEE 802.11a is less attractive. For one thing, 5 GHz radio waves do not penetrate walls as well as 2.4 GHz radio waves do. Such a limitation may be more important in a home setting than in a corporate deployment. Moreover, the 5 GHz band is not uniformly allocated across the different regulatory domains (i.e., countries) of the world, which means that an IEEE 802.11a product that can legally be used in one country might be illegal to operate in another country. This problem is also solvable, by making products that can select their RF operating frequency based on the regulatory domain in which they find themselves.

IEEE 802.11a does have some significant benefits. Most notable among them is that the standard has many more non-overlapping channels in which to operate. As a result, it can support more users in the same area, or the same number of users in a given area at higher speeds, than IEEE 802.11b can.

With all of that said, IEEE 802.11a products are just now appearing on the market that are comparable (in the usability sense) with the existing IEEE 802.11b products. In addition, the Wi-Fi Alliance began certifying IEEE 802.11a products in late 2002, so it is now possible to purchase such products with the same degree of confidence that users have had in IEEE 802.11b products.


The IEEE 802.11 PHY in Context

To help put radio frequency channels and the other concepts discussed so far into context, we can place them into a block diagram of a typical WLAN product, as depicted in Figure 3-13. The RF subunit contains the analog electronics that actually drive energy onto the wireless medium (WM) and receive energy from the WM. The BBP is the subunit that acts like a modem, encoding the digital data from the IEEE 802.11 frames into an analog form. The BBP also performs the reverse operation on received data from the WM. Finally, the MAC subunit is where the frames are generated and received, the inner workings of which will be discussed in the next chapter.

Figure 3-13. Functional blocks of a WLAN adapter

graphics/03fig13.gif

One can envision these modulation techniques as equivalent to similar techniques employed by modems, which also need to send digital data over an inherently analog medium. The job is the same, logically, although the techniques differ due to the different noise characteristics of the wireless medium (vs. a dial-up line), and due to the different speeds that are being achieved. In contrast, modulation onto a wired medium, either fiber or copper (twisted pair), is "baseband" (or carrier-less) style modulation, which is not really modulation at all…the transmitter simply transmits an electrical or optical representation of the digital data stream.[21]

[21] It is also possible to operate broadband (analog) modulations over wired media, which is what modems do over phone lines. The wireless medium is not the only non-baseband medium in use by data networking protocols.

Whether baseband (digital) or carrier-based (analog) modulation is being used, the modulation is not always done on a bit-by-bit basis…some modulation schemes encode multiple bits into a single "data symbol" for transmission onto the medium. Even some digital wired media don't directly transmit bits onto the wire; for example, consider the case of FDDI, which used a 4B/5B code with a line rate of 125 megabaud, where each five-bit symbol encoded four bits of actual data (yielding 100 Mbps of actual throughput).[22]

[22] Gigabit Ethernet is similar, in that some of its PHYs make use of an 8B/10B code (in which data is transmitted eight bits at a time, encoded as 10-bit symbols taken from a 1024-element "alphabet"). Thus the link's clock speed will need to be on the order of 1250 MHz to carry 1000 Mbps (i.e., because a 10-bit symbol only transmits eight bits worth of data, there is a 10/8 (or 1.25) multiplier).

To continue with the FDDI example, there are practical reasons for using such a technique, one being that the 32-symbol (5-bit) "alphabet" contains 16 nondata symbols in addition to the 16 symbols that represent each possible 4-bit group of data. The nondata symbols can be used for various control and signaling functions (since they are not legally able to appear within the data itself). Finally, the clock speed can be reduced from 200 MHz, which is what it might have taken to do certain one-bit-at-a-time baseband modulations, to 125 MHz, which was important at the time that the standard was defined, since it would not have been economically feasible to produce FDDI products unless the clock speed was kept as low as practical, while still achieving the goal of 100 Mbps operation.
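The arithmetic behind these block codes is straightforward, as the small sketch below shows:

    # Line rate vs. data rate for the block codes mentioned above.
    def data_rate_mbps(line_rate_mbaud: float, data_bits: int, code_bits: int) -> float:
        return line_rate_mbaud * data_bits / code_bits

    assert data_rate_mbps(125, 4, 5) == 100.0      # FDDI: 4B/5B at 125 megabaud -> 100 Mbps
    assert data_rate_mbps(1250, 8, 10) == 1000.0   # Gigabit Ethernet: 8B/10B at 1250 megabaud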

The three logical blocks from Figure 3-13 may be integrated into one common piece of silicon that incorporates MAC, BBP, and RF subunits on one chip. There would still be a need for some external chips, such as power amplifiers between the RF output and the antennas, however. The author is not aware of such a highly integrated component, but two other pairings are common, as depicted in Figure 3-14.

Figure 3-14. Common pairings of WLAN PHY subunits

graphics/03fig14.gif

The pairing on the left puts the BBP and RF subunits near each other, or even integrates them together. This kind of makes sense since moving analog signals around inside a computer is a tricky thing to do, especially as the carrier frequencies get higher. The MAC and BBP are exchanging data at, worst case, about 50 Mbps, in only one direction at a time. There are many possible solutions that can make it very easy to move such a low-speed digital signal around inside a PC.

The pairing on the right of Figure 3-14 integrates the MAC and the BBP, which may make sense for certain applications, such as integration into PC core logic chipsets. In fact, this latter approach is being standardized by the JEDEC JC-61 committee, so that there is a common interface between chipsets with integrated BBP (and perhaps MAC as well) with external radio devices.

The signal between the BBP and radio is either analog or digital. If analog, it will probably take the form of a so-called "intermediate-frequency" signal that will be directly used to modulate the carrier wave in the radio, prior to the finished signal being amplified for transmission over the antenna, with a similar process occurring in the receive direction. If digital, then a digital representation of the modulated data stream is sent to the radio chip.

Because of the existence of the JC-61 interface, which will be standardized in the first half of 2003, we can expect to see radio vendors producing chips with such an interface, so their radios can easily be integrated with BBPs that also will have a JC-61 interface. This will give computer designers more flexibility in choosing the best MAC/BBP (or just BBP) and the best radio for their unique application. They can choose to optimize for best features, best cost, or any other metric they consider important.

Supporting Multiple Speeds Simultaneously

WLAN media have one attribute that is unique compared with LANs that are based on physical wires. In a WLAN, it is possible for stations on a shared-medium network to be operating at different speeds. While Ethernet supports 10 and 100 Mbps attachments to switched networks, these interfaces are dedicated collision domains that are limited to the two stations that share the link, one being the attached device, and the other station being the switch to which the device is attached. In short, any given wire only ever runs at 10 or 100 Mbps. Based on the current definition of Ethernet, a wire can't carry both 10 Mbps and 100 Mbps signals simultaneously.

The speed mismatch between ports running at different speeds is handled by buffering within the Ethernet switch. By allowing stations in a WLAN to operate at different speeds, the WLAN is able to optimize itself to the local needs of each station, without requiring that all the stations be forced to go no faster than the "weakest link." Just as wired switches buffer frames to support multiple link speeds, in the wireless world the AP is responsible for buffering frames to each member STA in a WLAN, and the speed used will be dependent on the configuration of the STA, which is communicated to the AP during the "association" procedure.

It would be natural to wonder why a STA wouldn't always want to run at the fastest speed. The fact is that each STA is probably a different distance from the AP, and each STA has a different local noise environment. Also, due to propagation effects such as reflection and absorption of radio waves, even two STAs that are equidistant from an AP may see different signal strength. Based on all these factors, the signal strength, as well as the signal-to-noise ratio, that a STA sees from the AP can be presumed to be different for every station.

The modulations that achieve the fastest speeds generally require the signal strength to be above a certain threshold, and require a signal-to-noise ratio that allows the STA to recover the frame from within the signal. In IEEE 802.11, the goal is to provide a number of modulations of graduated robustness, so that the STA can gracefully scale back its own transmit speed until it achieves an acceptable ratio of successful transmissions. In the subsequent chapters that discuss the IEEE 802.11 MAC layer protocol, specifically its procedures for sharing access to the wireless medium, we will see that there is a capability for retransmission of IEEE 802.11 frames.

The STA expects to see an ACK within a short time after sending a frame, which is how it can tell that it might need to modify its data transmission speed. The lack of a single ACK might not cause the STA to infer that it is now too far away to use a given speed, but if several ACKs are lost in a short time, and especially if the signal strength and/or signal-to-noise ratio is also deteriorating, the STA may choose to "down-shift" and use a slower modulation.

One thing that is important to remember, and can be confirmed by looking back at Figure 3-3, is that the STA's speed is the transmit speed. In Figure 3-3, look for the "Current Tx Rate" parameter, which in my case was 11 Mbps, the fastest transmit speed possible for a device based on IEEE 802.11b-1999. The speed on receive may be different, since it depends on the transmission speed chosen by the communicating peer STA (or chosen by the AP, since in an infrastructure WLAN all STAs communicate with the AP, not directly with each other).

There is also a MAC-layer capability to perform frame-level fragmentation, because it is possible that a large frame will never get through: it takes longer to transmit and is therefore more vulnerable to interference. However, if the MAC can break that frame up into smaller chunks, then each chunk might have a chance to make it to the other side without encountering interference. In the end, it might take longer to send the sequence of frame fragments to the other side than it would have taken to get the entire frame across, but it's possible that the large frame would never have made it across, no matter how many retransmission attempts the STA was willing to make.
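A rough way to see why fragmentation can help is to model the channel as introducing independent random bit errors. The following Python sketch is only a back-of-the-envelope illustration (the bit-error rate and frame sizes are arbitrary assumptions, and real interference is bursty rather than independent), but it shows how quickly the odds of an intact delivery fall off with frame length.

# Illustrative model only: independent bit errors at a fixed bit-error rate.
def frame_success_probability(octets: int, bit_error_rate: float) -> float:
    bits = octets * 8
    return (1.0 - bit_error_rate) ** bits

BER = 1e-4            # assumed channel bit-error rate
p_full = frame_success_probability(1500, BER)   # ~0.30 for a 1500-octet frame
p_frag = frame_success_probability(250, BER)    # ~0.82 for each 250-octet fragment

# Each fragment is acknowledged and retransmitted independently, so a burst of
# six 250-octet fragments will eventually get through (about 1/0.82 attempts
# each), whereas every attempt at the full 1500-octet frame fails roughly 70%
# of the time under this model.
print(round(p_full, 2), round(p_frag, 2))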

In the end, a STA might stay at a higher speed if the retransmission and fragmentation tricks prove sufficient, but a STA can also reduce its own transmit speed as a way to better handle noisy transmission environments. Because the modulation schemes become more robust against noise as the transmit speed is lowered, frames are more likely to be transmitted successfully at lower speeds. In fact, at a lower speed it is even possible that the STA will see higher overall throughput, since most of its frames will get through without needing to be retransmitted or fragmented.

If a STA has responded to frame transmission errors by reducing its transmit speed, it can revert to a higher speed after a suitable time has passed. The standard doesn't specify or recommend how to detect situations in which using a slower speed might be a good choice, nor does it specify how long to wait before trying again at the next-faster speed. It is up to each vendor to decide how to gracefully handle noise, the presence of which is a fact of life in the wireless domain. In the end, a slow transmission is better than no transmission.
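To make the preceding discussion concrete, here is a minimal Python sketch of the kind of vendor-specific rate-adaptation logic just described. Everything about it, including the class name, the ACK-loss threshold, and the probe timer, is a hypothetical illustration; it is not an algorithm from the standard or from any particular vendor.

IEEE80211B_RATES_MBPS = [1.0, 2.0, 5.5, 11.0]

class RateController:
    """Hypothetical down-shift/up-shift logic; all thresholds are arbitrary."""

    def __init__(self, losses_to_downshift=3, seconds_before_upshift=10.0):
        self.rate_index = len(IEEE80211B_RATES_MBPS) - 1   # start at 11 Mbps
        self.missed_acks = 0
        self.losses_to_downshift = losses_to_downshift
        self.seconds_before_upshift = seconds_before_upshift
        self.seconds_at_rate = 0.0

    @property
    def tx_rate_mbps(self):
        return IEEE80211B_RATES_MBPS[self.rate_index]

    def on_ack_received(self, elapsed_s):
        self.missed_acks = 0
        self.seconds_at_rate += elapsed_s
        # After things have been quiet for a while, probe the next-faster rate.
        if (self.rate_index < len(IEEE80211B_RATES_MBPS) - 1
                and self.seconds_at_rate >= self.seconds_before_upshift):
            self.rate_index += 1
            self.seconds_at_rate = 0.0

    def on_ack_missing(self, elapsed_s):
        self.missed_acks += 1
        self.seconds_at_rate += elapsed_s
        # Several lost ACKs in a row suggest the current modulation is too
        # fragile for the present signal strength and SNR: "down-shift".
        if self.missed_acks >= self.losses_to_downshift and self.rate_index > 0:
            self.rate_index -= 1
            self.missed_acks = 0
            self.seconds_at_rate = 0.0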

What's really interesting is how frames at different speeds can coexist on the WM. The trick is that all the modulations that can share the same frequency band begin with a common "prelude" that describes what type of modulation will follow. All STAs must be able to understand this prelude, even if the song that follows is in a language they can't understand. Figuratively, they may be able to tell that music is playing, but they won't be able to understand it. Most importantly for correct operation of the WLAN, however, they will still be able to detect that another STA is active on the channel, and will know how much time that STA has reserved for its transmission. Typically, a STA will reserve enough time for the frame to be transmitted and for the associated ACK to be received, which adds only a small additional amount of time.

The length of the frame-to-be-transmitted is not specified in octets, but in microseconds. For receivers to do anything meaningful with a length in octets, they would need to divide it by the transmission speed to get the expected transmission time; however, receivers may not be able to tell what modulation scheme is in use, which means they would have no idea how fast the frame is being sent. Only the sender knows which modulation it intends to use, and it can therefore predict the expected duration of its transmission with great accuracy.
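As a sketch of the sender-side calculation, the following Python fragment converts a PSDU size in octets into on-air microseconds for each IEEE 802.11b rate. It ignores the PLCP preamble and header overhead discussed below, and the function name is simply illustrative.

import math

def psdu_airtime_us(psdu_octets: int, tx_rate_mbps: float) -> int:
    """Microseconds needed to send the PSDU at the chosen rate (rounded up)."""
    return math.ceil(psdu_octets * 8 / tx_rate_mbps)

# The same 1500-octet PSDU costs very different amounts of air time:
for rate in (1.0, 2.0, 5.5, 11.0):
    print(rate, psdu_airtime_us(1500, rate))   # 12000, 6000, 2182, 1091 us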

The following figures expose the contents of this "prelude" protocol, the Physical Layer Convergence Procedure (PLCP), which is actually a sublayer of the PHY layer. Before delving into the structure of the PLCP header, Figure 3-15 shows the layering of the PLCP within the PHY and relative to the rest of the IEEE 802.11 protocol stack.

Figure 3-15. Logical structure of IEEE 802.11b PHY layer[23]

graphics/03fig15.gif

[23] Adapted from IEEE Std. 802.11-1999, copyright 1999. All rights reserved.

No single protocol architecture diagram applies to all of IEEE 802.11's PHYs; however, Figure 3-15 depicts IEEE 802.11b's protocol architecture, and the other PHYs have similar structures. IEEE 802.11a has an OFDM-based PLCP, and the FHSS and DSSS schemes from IEEE 802.11-1999 both have similar, but simpler, versions of this diagram.

The Physical Medium Dependent (PMD) sublayer handles all the work of transmitting PHY Protocol Data Units (PPDUs) onto and receiving PPDUs from the WM. In addition, the PMD handles the medium-specific configuration and tuning necessary to maintain optimal performance. The PMD Service Access Point is the interface through which the PLCP sublayer sends PPDUs across the PMD sublayer. The PLCP itself acts as the convergence sublayer, exposing a common interface to the MAC regardless of which PMD sublayer is actually in use.

Figure 3-16 shows the names of the PDUs that are exchanged between each layer in Figure 3-15. The job of the MAC service is to exchange MAC Service Data Units (MSDUs). To accomplish this task, the MAC layer adds a MAC header before the MSDU and a Frame Check Sequence (FCS) trailer after it; the result is a MAC Protocol Data Unit (MPDU). The MPDU is accepted as a PHY Service Data Unit (PSDU) by the PHY's PLCP sublayer.[24]

[24] Note that the MAC layer also exchanges Control frames and MAC Management Protocol Data Units (MMPDUs) that are also treated as PSDUs by the PHY's PLCP sub-layer.

Figure 3-16. IEEE 802.11 PDU layering

graphics/03fig16.gif

Just as the MAC layer protocol's job is to exchange MSDUs, the PHY layer's job is to exchange PSDUs. In order to exchange the PSDUs successfully across the WM, the PHY layer adds a PLCP preamble and PLCP header to build a PPDU. Unlike the MPDU, the PPDU has no trailer. The PLCP header is what enables each frame to travel at a different speed. Figures 3-17a and 3-17b show the PLCP header formats for IEEE 802.11b.
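The encapsulation chain in Figure 3-16 can be summarized in a few lines of Python. The field contents below are placeholders for illustration only; the point is simply the nesting: the MPDU produced by the MAC becomes the PSDU that the PLCP wraps in a preamble and header to form the PPDU.

from dataclasses import dataclass

@dataclass
class MPDU:                       # MAC header + MSDU + FCS trailer
    mac_header: bytes
    msdu: bytes
    fcs: bytes

@dataclass
class PPDU:                       # PLCP preamble + PLCP header + PSDU (no trailer)
    plcp_preamble: bytes
    plcp_header: bytes
    psdu: bytes

def mac_encapsulate(msdu: bytes, mac_header: bytes, fcs: bytes) -> MPDU:
    return MPDU(mac_header=mac_header, msdu=msdu, fcs=fcs)

def phy_encapsulate(mpdu: MPDU, preamble: bytes, plcp_header: bytes) -> PPDU:
    psdu = mpdu.mac_header + mpdu.msdu + mpdu.fcs   # the MPDU *is* the PSDU
    return PPDU(plcp_preamble=preamble, plcp_header=plcp_header, psdu=psdu)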

Figure 3-17a. IEEE 802.11b PPDU format Long Preamble

graphics/03fig17a.gif

Figure 3-17b. IEEE 802.11b PPDU format Short Preamble

graphics/03fig17b.gif

As with Ethernet, the frame begins with a preamble that is used to allow the receiver to synchronize itself with the exact bit transmission speed of the sender. The expectation is that a receiver can maintain a "lock" on the sender's clock for the duration of a maximum-sized frame.

In the case of IEEE 802.11b, the PLCP preamble is available in one of two sizes. The first size, depicted in Figure 3-17a, is the "Long Preamble," which is 144 bits in length and is transmitted using the Differential Binary Phase Shift Keying (DBPSK) modulation at 1 Mbps. At 1 Mbps, each bit takes 1 µs to transmit, so the temporal length of the PLCP Long Preamble is 144 µs.

The preamble begins with a SYNC field that is created by sending a 128-bit all-ones pattern through the IEEE 802.11b scrambler, and it concludes with a Start-of-Frame Delimiter (SFD), which is 16 bits long, for a total of 144 bits. The SFD always contains the same value (the rightmost bit is transmitted first): 11110011 10100000, or 0xF3A0 in hexadecimal notation.

The other preamble size, depicted in Figure 3-17b, is the "Short Preamble," which is 72 bits in length (exactly half the length of the Long Preamble). Just like the Long Preamble, it is transmitted using Differential Binary Phase Shift Keying (DBPSK) modulation at 1 Mbps. At 1 Mbps, each bit takes 1 µs to transmit, so the temporal length of the PLCP Short Preamble is 72 µs. The Short Preamble's SYNC field is created by sending a 56-bit all-zeros pattern through the IEEE 802.11b scrambler; the remaining 16 bits are its SFD.

As a final differentiator between the Long and Short Preambles, the Short Preamble concludes with an SFD that is a bit-reversed image of the SFD in the Long Preamble (this is why the graphic in Figure 3-17b shows "(DFS)" below "Short SFD"; the reversed letters are meant to remind us that the SFD in the Short Preamble is in the reverse bit order compared to the SFD in the Long Preamble). The Short Preamble's SFD, then, always contains the same value (the rightmost bit is transmitted first): 00000101 11001111, or 0x05CF in hexadecimal notation.
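It is easy to verify the bit-reversal relationship between the two SFD values with a couple of lines of Python:

long_sfd = 0xF3A0
short_sfd = int(format(long_sfd, "016b")[::-1], 2)   # reverse the 16 bits
print(hex(short_sfd))   # -> 0x5cf, i.e., 0x05CF as stated above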

Supporting the PLCP Short Preamble is not mandatory for STAs based on IEEE 802.11b-1999, so STAs that do not support it will not even be able to detect that a short-preamble frame has started, because they will never see what they consider a valid SFD (valid, that is, for a STA that only supports the PLCP Long Preamble).

Following either the PLCP Long Preamble or the PLCP Short Preamble is the PLCP Header, which consists of four fields. The first field to be transmitted is the "Signal" field, which is one octet in length. This field indicates the speed at which the PSDU will be transmitted, encoded in 0.1 Mbps increments. The minimum value of the field is 0x0A (decimal 10, i.e., 1 Mbps). The other values defined in IEEE 802.11b are 0x14 (decimal 20, or 2 Mbps), 0x37 (decimal 55, or 5.5 Mbps), and 0x6E (decimal 110, or 11 Mbps).
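A tiny Python sketch captures the Signal field encoding; the function names are, of course, just illustrative:

SIGNAL_FIELD_VALUES = {0x0A: 1.0, 0x14: 2.0, 0x37: 5.5, 0x6E: 11.0}   # octet -> Mbps

def decode_signal(octet: int) -> float:
    return octet * 0.1            # e.g., 0x6E -> 110 -> 11.0 Mbps

def encode_signal(rate_mbps: float) -> int:
    return round(rate_mbps * 10)  # e.g., 5.5 Mbps -> 55 -> 0x37

assert all(decode_signal(k) == v for k, v in SIGNAL_FIELD_VALUES.items())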

The PLCP Header following the PLCP Short Preamble is structurally and semantically identical to the PLCP Header that follows the long preamble; however, when the Short Preamble is in use, the PLCP Header is transmitted twice as quickly at 2 Mbps, using Differential Quaternary Phase Shift Keying (DQPSK). Thus, the 48 bits of the PLCP Header are transmitted in only 24 µs.

The PLCP Length field carries the time, in microseconds, required to transmit the PSDU (a.k.a. the MPDU), as derived from the transmit data rate and the size of the MPDU. Finally, the PLCP Header concludes with a CRC-16 checksum that protects the Signal, Service, and Length fields.

The PLCP "Service" field is used to help qualify the exact nature of the modulation scheme that is being used to transmit the PSDU. For example, if PBCC coding is in use, the Service field will be marked to indicate that fact. Figure 3-18 shows the meaning of the bits in the Service field, comparing the definition of the field as defined in IEEE 802.11b-1999 with the version in IEEE 802.11g-2003. The figure shows the meaning and usage of each bit in both IEEE 802.11b and IEEE 802.11g. Note that the only two bits that have revised meanings are bits 5 and 6

Figure 3-18. IEEE 802.11g-2003's PLCP Service field as compared to IEEE 802.11b-1999's PLCP Service field[25]

graphics/03fig18.gif

[25] Adapted from IEEE Std. 802.11b-1999, copyright 2000, and IEEE Std. 802.11g-2003, copyright 2003. All rights reserved.

As usual, in IEEE 802.11 standards, bit 0 is the first bit to be transmitted since it is the least-significant bit.

The "Locked Clocks" bit indicates that the transmit frequency and symbol clocks are derived from the same oscillator. If this is true, then the transmitting STA sets this bit; otherwise, it remains clear.

In IEEE 802.11b, the Length Extension bit in the PLCP Header's Service field is only valid for PPDUs that are transmitted at 11 Mbps (according to the contents of the Signal field). To quote the IEEE 802.11b-1999 standard: "Since there is an ambiguity in the number of octets that is described by a length in integer microseconds for any data rate over 8 Mbps, a length extension bit shall be placed at bit 7 in the PLCP Header's Service field to indicate when the smaller potential number of octets is correct." The calculation that drives the setting of the Length Extension bit in IEEE 802.11b is equally applicable to both the CCK-11 and PBCC-11 modulations.[26]

[26] The two additional Length Extension bits (in bit positions 5 and 6) that have been defined by IEEE 802.11g are applicable only to PBCC-22 and PBCC-33, similar to the IEEE 802.11b LE bit, based on a somewhat more complex algorithm that determines which of the two bits shall be set in any given situation.
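The ambiguity that the standard is describing is easy to see numerically. In the Python sketch below, the rounding and the disambiguation follow the rule quoted above; treat it as an illustrative reading of the standard rather than normative text.

import math

def plcp_length_us(psdu_octets: int, rate_mbps: float = 11.0) -> int:
    # At 11 Mbps each octet takes 8/11 us, so the Length field rounds up.
    return math.ceil(psdu_octets * 8 / rate_mbps)

print(plcp_length_us(14), plcp_length_us(15))   # both print 11: ambiguous!

def psdu_octets_from_length(length_us: int, length_ext: int,
                            rate_mbps: float = 11.0) -> int:
    # The receiver recovers the larger candidate; the Length Extension bit
    # says when "the smaller potential number of octets is correct."
    return math.floor(length_us * rate_mbps / 8) - length_ext

print(psdu_octets_from_length(11, length_ext=0))   # -> 15
print(psdu_octets_from_length(11, length_ext=1))   # -> 14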

Finally, the PLCP Header's Service field has a Modulation Selection (MS) bit (b3) that is used to determine whether CCK or PBCC is in use at any speed where either could be used, in particular 5.5 and 11 Mbps. Along with the value in the Signal field, a receiving STA can use the MS bit to determine the modulation that is being used.

For the purposes of computing the CRC-16 that protects the PLCP Header, all the Service field bits that are not defined are set to zero. This behavior has been carried forward into IEEE 802.11g, although there are only three reserved bits in the Service field as defined by IEEE 802.11g-2003.
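Putting the Service field pieces together, here is a minimal Python sketch of how a transmitter might set the bits discussed above. The positions of the Modulation Selection bit (b3) and the IEEE 802.11b Length Extension bit (b7) come from the text; placing the Locked Clocks bit at b2 is an assumption based on the usual IEEE 802.11b layout, and all reserved bits are left at zero, as required.

def build_service_field(locked_clocks: bool, pbcc: bool, length_ext: bool) -> int:
    service = 0                   # reserved/undefined bits stay zero
    if locked_clocks:
        service |= 1 << 2         # assumed position of the Locked Clocks bit
    if pbcc:
        service |= 1 << 3         # Modulation Selection: 1 = PBCC, 0 = CCK
    if length_ext:
        service |= 1 << 7         # Length Extension (meaningful at 11 Mbps)
    return service

print(hex(build_service_field(locked_clocks=True, pbcc=False, length_ext=True)))  # 0x84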

Based on the setting of the modulation selection bit, and the value in the Signal field, the modulation can be uniquely determined. If the Signal field indicates a speed of 6, 9, 12, 18, 24, 36, 48, or 54 Mbps, then the DSSS-OFDM mixed modulation scheme must be in use (pure OFDM has its own unique PLCP header, so if those speeds are used in the OFDM PLCP header's Signal field, then the modulation is uniquely determined since the only modulation that can follow an OFDM PLCP header is OFDM). Figure 3-19 shows the OFDM PLCP PPDU structure that is used in both IEEE 802.11a and IEEE 802.11g.

Figure 3-19. OFDM PLCP Header and PPDU structure

graphics/03fig19.gif

If the speed in the Signal field is 22 or 33 Mbps, then the PBCC modulation must be in use, and bit 3 of the Service field will also be set in this case. If PBCC were only usable at 22 or 33 Mbps, there would be no need to define bit 3, but since PBCC is also usable at 5.5 and 11 Mbps, and since CCK is also usable at those speeds, bit 3 is needed to determine whether PBCC or CCK is in use at 5.5 or 11 Mbps. For 1 and 2 Mbps, the only valid modulation is Barker.

In summary, if the Signal field tells us that the PSDU is going to be transmitted at 6 Mbps, then there are only two choices for modulation, either OFDM or DSSS-OFDM. It follows that when the CCK-style PLCP Header is used, the only real choice for this modulation speed is DSSS-OFDM. The MS bit must be zero in this case, which does not imply CCK modulation (as MS=0 would only be meaningful if the speed were 5.5 or 11 Mbps).
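The decision logic of the last few paragraphs can be collected into one small Python function. It assumes the frame began with a CCK-style (DSSS) PLCP header; as noted above, a pure-OFDM PLCP header can only be followed by OFDM.

OFDM_RATES_MBPS = {6, 9, 12, 18, 24, 36, 48, 54}

def psdu_modulation(signal_rate_mbps: float, ms_bit: int) -> str:
    """Modulation of the PSDU that follows a CCK-style (DSSS) PLCP header."""
    if signal_rate_mbps in (1, 2):
        return "Barker"                       # only legal choice at 1 or 2 Mbps
    if signal_rate_mbps in (5.5, 11):
        return "PBCC" if ms_bit else "CCK"    # the one place the MS bit decides
    if signal_rate_mbps in (22, 33):
        return "PBCC"                         # the MS bit will also be set
    if signal_rate_mbps in OFDM_RATES_MBPS:
        return "DSSS-OFDM"                    # mixed mode behind a DSSS header
    raise ValueError("rate not defined for a CCK-style PLCP header")

print(psdu_modulation(11, ms_bit=0))   # -> CCK
print(psdu_modulation(6, ms_bit=0))    # -> DSSS-OFDM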

By the way, note that just as the CCK-style PLCP Header is transmitted using one of the mandatory modulations that all IEEE 802.11b devices can be expected to understand,[27] the OFDM PLCP Header is also transmitted at one of the mandatory rates (in this case, 6 Mbps). At this speed, each OFDM symbol carries 24 bits in 4 µs, which works out to 6 Mbps.

[27] All PLCP preambles using CCK modulation are transmitted at 1 Mbps using the Barker/DBPSK modulation. When long preambles are used (the default for IEEE 802.11-1999 and IEEE 802.11b-1999), the PLCP header (which is situated between the PLCP preamble and the PSDU's header) is also transmitted at 1 Mbps. However, when short preambles are used, the PLCP header is transmitted at 2 Mbps using Barker/DQPSK modulation.


