With limited capacity, wireless networks are often accused of poor performance. After receiving such a complaint, the first task is to find a way of assessing performance objectively before developing a strategy to improve it.
Many commercial network analyzers will report channel utilization. It may be reported either as a percentage of time spent transmitting and receiving frames, or in megabits per second. The former measurement is much more useful because it takes into account the potential variance in speed due to distance from an access point. If all the associated stations are far away, they may only be able to operate at 1 Mbps. A channel fully occupied transmitting at 1 Mbps under the totally ideal conditions described in the previous example will be able to squeak out 0.94 Mbps at 100% channel utilization. An analyzer that reported only bit transmission rate would not report the high utilization.
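The distinction between the two reporting styles can be sketched in a few lines. This is not a real analyzer API; the efficiency figures are illustrative, with the 1 Mbps number taken from the text's example (0.94 Mbps of payload at 100% airtime) and the 11 Mbps number from the 802.11b estimate later in this section:

```python
# Sketch: why airtime utilization is more informative than a raw bit count.
# Maps nominal PHY rate (Mbps) to an assumed maximum net throughput (Mbps).
MAX_NET_THROUGHPUT = {1.0: 0.94, 11.0: 6.0}

def airtime_utilization(observed_mbps, nominal_rate_mbps):
    """Percentage of airtime consumed at the rate stations actually use."""
    return 100.0 * observed_mbps / MAX_NET_THROUGHPUT[nominal_rate_mbps]

# 0.94 Mbps of traffic is a modest load on an 11 Mbps cell...
print(round(airtime_utilization(0.94, 11.0)))  # ~16% airtime
# ...but it fully saturates a cell of distant 1 Mbps stations.
print(round(airtime_utilization(0.94, 1.0)))   # 100% airtime
```

The same 0.94 Mbps figure looks harmless or catastrophic depending on the rate in use, which is why a megabits-only report can hide a saturated channel.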
If there is contention for radio resources, any changes should work to reduce that contention. One of the best ways to increase performance is to reduce the transmit power on access points. By shrinking the coverage areas, stations will be closer to the APs and will (hopefully) operate at higher data rates.
Smaller coverage areas may also help avoid co-channel interference. Capacity for an AP is shared over its entire coverage area. If two APs are on the same channel at high transmission power, it is likely that they will interfere with each other. Although the protocol is designed to gracefully handle two devices transmitting at the same time, doing so often severely diminishes throughput. Reassigning channel numbers may help avoid interference as well, but the options are limited by the number of channels allowed by the technology in use. With 802.11b and 802.11g, there are only three non-overlapping channels, and it is likely that there will be a fair amount of interference from adjacent APs. 802.11a has many more channels, and is therefore much more useful in high-density networks where AP coverage areas may overlap.
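The channel-reassignment idea can be sketched as a greedy coloring of an AP "interference graph" with the three non-overlapping 2.4 GHz channels (1, 6, and 11). The AP names and adjacencies below are invented for illustration, and real planning tools weigh signal measurements rather than a simple yes/no overlap:

```python
# Greedy channel assignment over the three non-overlapping 2.4 GHz channels.
CHANNELS = [1, 6, 11]

def assign_channels(neighbors):
    """neighbors: dict mapping each AP to the APs whose cells overlap it."""
    assignment = {}
    for ap in neighbors:                       # greedy, order-dependent
        used = {assignment[n] for n in neighbors[ap] if n in assignment}
        free = [ch for ch in CHANNELS if ch not in used]
        # With only three channels, dense layouts can force co-channel reuse.
        assignment[ap] = free[0] if free else CHANNELS[0]
    return assignment

# Hypothetical hallway of three APs, each overlapping its neighbor.
floor_plan = {"ap1": ["ap2"], "ap2": ["ap1", "ap3"], "ap3": ["ap2"]}
print(assign_channels(floor_plan))  # adjacent APs land on different channels
```

Note the fallback branch: once the three channels are exhausted among mutually overlapping cells, co-channel interference is unavoidable, which is exactly the constraint that makes 802.11a's larger channel set attractive.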
Using better physical layers may also help. 802.11b networks must divide a scant 6 Mbps of radio capacity between all associated stations. 802.11a and 802.11g can offer much higher throughput, often approaching 30 Mbps. If it is possible, it may help to upgrade all stations to better technology. In doing so, beware of 802.11g protection. Protection drastically cuts throughput. Although no precise figure can be offered, a good rule of thumb is that protection may cut throughput by more than 50%.
If protection causes such a throughput hit, the first reaction is to disable it, though preventing protection is harder than it might first appear. Any 802.11b transmission triggers the activation of protection, whether or not the transmitting station is associated with the network. With the extensive installed base of 802.11b equipment, most 802.11g networks are stuck operating in protected mode, with a consequent reduction in performance.[*]
[*] As a user, this also makes your choice clear. When I helped to build the wireless LAN at Supercomputing 2004, there were 1300 802.11b/g users crammed onto 80 APs, but only 100 802.11a users. With my 802.11a card, I did not face contention for the radio medium, and I was able to use the full speed.
In some cases, the network architecture limits performance. Networks that wall off the wireless LAN from the rest of the network force all traffic through a single choke point. If that choke point is not capable of handling the traffic load, performance will suffer. Security protocols may also have a negative effect on perceived performance. IPsec, for example, accepts maximum-size packets for transmission, but the additional IPsec headers may push such a packet over the maximum size, requiring it to be broken into a maximum-size packet plus a tiny follow-on packet. Both packets require an 802.11 data frame exchange after contention for the medium, which increases the latency of transmission.
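The IPsec fragmentation effect is easy to quantify. The overhead figure below is an assumption for illustration; real ESP overhead depends on the cipher, padding, and whether tunnel or transport mode is in use:

```python
# Back-of-the-envelope sketch of the IPsec fragmentation problem.
MTU = 1500                 # typical Ethernet-style MTU, in bytes
ESP_OVERHEAD = 56          # assumed per-packet IPsec header/trailer bytes

def fragments_needed(packet_len, overhead=ESP_OVERHEAD, mtu=MTU):
    """Number of on-air packets after adding overhead (ceiling division)."""
    total = packet_len + overhead
    return -(-total // mtu)

# A full-size packet fits in one frame exchange without IPsec...
print(fragments_needed(1500, overhead=0))   # 1
# ...but with IPsec headers it spills into a second, tiny fragment,
# doubling the number of contention-plus-frame sequences on the air.
print(fragments_needed(1500))               # 2
```

The tiny second fragment pays nearly the same fixed cost of medium contention and frame overhead as the full-size packet, which is why the latency penalty is larger than the extra byte count suggests.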
Performance may be perceived as poor due to the application in use. Data transmission is generally quite forgiving. Late arrival of a data frame is usually not a problem; at worst, a web page loads slightly slower. Data transmissions are bursty, and it is acceptable for packets to arrive in clumps. If a network needs to support real-time delivery, the engineering becomes much more difficult. Voice is a data stream that must arrive quickly, in order, and with minimal burstiness. Even though a network may support multi-megabit service, it is not acceptable to provide a voice session with what it requires only on average. Quality of service on 802.11 networks is based on emerging specifications. While a few voice calls on an AP will work, the quality will suffer as the network load increases and voice traffic cannot be transmitted immediately upon arriving at the radio. Poor voice performance can occur even if the network is not saturated.
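A small calculation shows why average throughput alone does not capture the voice requirement. The figures assume a G.711 call with 20 ms packetization, a common default, though real deployments vary:

```python
# Illustrative arithmetic: average rate vs. per-packet deadline for voice.
payload_bits = 160 * 8          # 160 bytes of audio per 20 ms packet
packets_per_sec = 50            # one packet every 20 ms
avg_rate_kbps = payload_bits * packets_per_sec / 1000
print(avg_rate_kbps)            # 64.0 kbps: trivial on a multi-megabit link

# But each packet must also arrive within its playout window. A single
# queuing delay spike while the medium is busy blows the budget, even
# though the long-run average rate is easily satisfied.
deadline_ms = 20
queuing_delay_ms = 50           # assumed momentary contention delay
print(queuing_delay_ms > deadline_ms)  # True: this packet arrives late
```

This is the sense in which a network can be nowhere near saturation and still deliver poor voice quality: the problem is the timing of individual packets, not the total bit rate.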
As a last resort, it may be possible to squeeze the last drop of performance out of a network by altering various parameters in the 802.11 specification, which is taken up in the next section of this chapter. In my experience, though, 802.11 parameters do not provide large enough performance gains to spend a great deal of time on tuning. Access points are now cheap enough that adding additional network capacity is extremely cost-effective.