13.2 Precedence and Express Queuing

Chapter 4 examined the use of waiting line analysis, or queuing, to determine an optimum line operating rate to link remote bridges and routers, the effect of single- and dual-port equipment on network performance, and the buffer memory requirements of network devices. In doing so, we assumed that queues were serviced on a first-in, first-out basis. That is, the arrival of frame n followed by frame n+1 into a queue would result in a remote bridge or router transmitting frame n prior to transmitting frame n+1. Although most remote bridges and routers operate on a first-in, first-out service basis, other devices may support precedence and/or express queuing. Devices that do can provide a level of performance during periods of peak network activity that considerably exceeds that of devices lacking such queuing options. To illustrate the advantages of precedence and express queuing, let us first focus attention on the operation of first-in, first-out queuing and some of the problems associated with this queuing method.

13.2.1 First-In, First-Out Queuing

When first-in, first-out queuing is used, messages are queued in the order in which they are received. This is the simplest method of queuing, as a single physical and logical buffer area is used to store data and all messages are assumed to have the same priority of service. The software used by the bridge or router simply extracts each frame based on its position in the queue. Figure 13.3 illustrates first-in, first-out queuing.

Figure 13.3: First-In, First-Out Queuing
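
To make the servicing order concrete, the following short Python sketch models a single buffer serviced strictly in arrival order. It is an illustration only and does not represent any particular vendor's implementation.

from collections import deque

# Single logical buffer: frames are stored and serviced strictly in
# arrival order, regardless of frame size or the application that sent them.
fifo_buffer = deque()

def enqueue(frame):
    fifo_buffer.append(frame)          # frame n+1 always queues behind frame n

def service_next():
    if fifo_buffer:
        return fifo_buffer.popleft()   # the oldest frame is always transmitted first
    return None

# Three frames arrive and leave in the same order in which they arrived.
for name in ("frame-1", "frame-2", "frame-3"):
    enqueue(name)
print([service_next() for _ in range(3)])   # ['frame-1', 'frame-2', 'frame-3']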

13.2.2 Queuing Problems

The major problems associated with first-in, first-out queuing involve the effect of this queuing method on a mixture of interactive and batch transmission during peak network utilization periods. To illustrate the problems associated with first-in, first-out queuing, consider the network illustrated in Figure 13.4 in which a Token Ring network is connected to an Ethernet network through the use of a pair of remote bridges.

Figure 13.4: First-In, First-Out Queuing Delays

13.2.2.1 Mixing File Transfer and Interactive Sessions

Now assume that station E on the Ethernet initiates a file transfer to station C on the Token Ring. Also assume that the Ethernet network is a 10BASE-T network operating at 10 Mbps and the wide area network transmission facility operates at 56 Kbps.

If the stations on the Ethernet are IBM PC or compatible computers with industry standard architecture (ISA) Ethernet adapter boards, the maximum transfer rate is normally less than 300,000 bytes per second. Using that figure, and assuming station E transmits for one second with each frame carrying the maximum-length information field of 1500 bytes (not including frame overhead), a total of 300,000 bytes/1500 bytes per frame, or 200 frames, would be presented to the remote bridge. At 56 Kbps, the bridge would forward 56,000/(8 × 1500), or approximately 5 frames, not considering the WAN protocol and frame overhead. Thus, at the end of one second, there could be 195 frames in the bridge's buffer, as illustrated in the lower-right portion of Figure 13.4.
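
The backlog can be verified with a few lines of Python that simply restate the arithmetic in the preceding paragraph; the figures are those given in the text.

# Backlog after one second of file transfer.
transfer_rate = 300_000      # bytes/s delivered by the ISA adapter
frame_payload = 1_500        # information field bytes per frame
wan_rate      = 56_000       # bits/s on the WAN circuit

frames_offered   = transfer_rate / frame_payload      # 200 frames offered to the bridge
frames_forwarded = wan_rate / (8 * frame_payload)     # about 4.7, roughly 5 frames forwarded
backlog          = frames_offered - round(frames_forwarded)
print(backlog)                                        # approximately 195 frames left queued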

Now assume that slightly after one second, station G on the Ethernet transmits a query to an application operating on station A on the Token Ring. The 195 frames in the buffer of the remote bridge would require approximately 40 seconds (195 frames × 1500 bytes/frame × 8 bits/byte / 56,000 bps) to be emptied from the buffer and placed onto the line before the frame from station G is placed onto the line. Thus, first-in, first-out queuing can seriously degrade transmission from an Ethernet to another Ethernet or to a Token Ring network when file transfer and interactive transmission occur between networks. The reverse situation, in which transmission is from a Token Ring to an Ethernet or between two Token Ring networks, does not cause problems as severe, because a priority mechanism is built into the Token Ring frame, resulting in a more equitable sharing of network bandwidth than is achievable on an Ethernet network.
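
The delay figure follows directly from the backlog and the line rate, as the following check of the arithmetic shows.

# Time required to drain 195 queued maximum-length frames over a 56 Kbps line.
queued_frames = 195
frame_payload = 1_500        # bytes per frame (information field only)
wan_rate      = 56_000       # bits/s

drain_seconds = queued_frames * frame_payload * 8 / wan_rate
print(round(drain_seconds, 1))     # about 41.8 seconds, roughly 40 in round figures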

13.2.2.2 Workstation Retransmissions

In the preceding example we noted a potential delay of approximately 40 seconds before data from station G is transmitted. In actuality, this delay will rarely occur, because the installation guidelines for most remote bridges and routers suggest configuring buffer memory to store only a few seconds' worth of data. However, doing so forces a workstation attempting a file transfer to retransmit frames that were not accepted by the bridge or router, adding to network traffic and raising the level of network utilization. This has a detrimental effect on stations attempting concurrent file transfer operations, as well as on stations performing interactive client/server activities while file transfer operations are in effect. In certain situations, buffer queuing delays added to frame retransmission time can result in file transfer timeouts that terminate the file transfer session. In other situations, random delays in interactive sessions will frustrate network users. Because of such problems, some remote bridge and router manufacturers have incorporated precedence queuing and express queuing into their products.
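
As a rough illustration of what a "few seconds" of buffering implies, the sketch below computes the memory needed to absorb the burst in the previous example; the two-second window is an assumption chosen for illustration, not a vendor recommendation.

# Buffer memory needed to absorb the difference between the LAN input rate
# and the WAN drain rate over an assumed two-second window.
lan_input_rate = 300_000          # bytes/s offered by a fast station
wan_drain_rate = 56_000 / 8       # bytes/s placed onto the WAN circuit
window_seconds = 2                # assumed guideline window (illustrative)

buffer_bytes = (lan_input_rate - wan_drain_rate) * window_seconds
print(int(buffer_bytes))          # 586000 bytes to hold a two-second burst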

13.2.3 Precedence Queuing

Precedence queuing was probably the first method used by remote bridge and router manufacturers to enhance the transmission of inter- and intra-network traffic through device buffer memory. In precedence queuing, data entering a communications device, such as a remote bridge or router, is sorted by priority into separate queues, as illustrated in Figure 13.5. Because a bridge operates at the MAC sublayer of the OSI Reference Model's data-link layer, a logical question is how a bridge can recognize different frame priorities.

Figure 13.5: Precedence Queuing Based on Frame Length

13.2.4 Methods Used

In actuality, until a few years ago a remote bridge performing precedence queuing did not look for a priority byte, as there was none to look for. Instead, the bridge used one of two methods: examining the DSAP address in a frame or examining the frame length. The examination of DSAP addresses permits the bridge to recognize certain predefined applications and prioritize the routing of frames onto the WAN transmission facility accordingly. The second method, based on frame length, presumes that interactive traffic is carried by shorter frames than file transfer and program load traffic. A few years ago, the IEEE 802.1p standard was promulgated, which, in conjunction with the 802.1Q standard, added priority bits to the VLAN tag. Bridges and routers compatible with these standards can now examine the setting of the frame priority bits to make precedence queuing decisions at layer 2.
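
The following Python sketch illustrates how a layer 2 device might combine the two approaches: use the 802.1p priority bits when a frame carries an 802.1Q VLAN tag, and fall back to a length-based guess when it does not. The function and the length thresholds are hypothetical, chosen only to mirror the discussion above.

# Hypothetical classifier: prefer the 802.1p priority bits carried in an
# 802.1Q VLAN tag; fall back to frame length when no tag is present.
def classify(frame: bytes) -> int:
    # An 802.1Q tag follows the two 6-byte MAC addresses and is identified
    # by the TPID value 0x8100; the top 3 bits of the TCI hold the priority.
    if len(frame) >= 18 and frame[12:14] == b"\x81\x00":
        return (frame[14] >> 5) & 0x07       # 0 (lowest) through 7 (highest)
    # Untagged frame: assume shorter frames carry interactive traffic.
    if len(frame) <= 500:
        return 6                             # treat as high priority
    if len(frame) <= 1000:
        return 4
    return 1                                 # long frames: presumed file transfer

# A tagged frame whose TCI carries priority 6:
tagged = b"\x00" * 12 + b"\x81\x00" + bytes([6 << 5, 0]) + b"\x08\x00" + b"\x00" * 46
print(classify(tagged))                      # 6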

13.2.5 Operation

Figure 13.5 illustrates the operation of precedence queuing based on frame length within the buffer area of a remote bridge. In this example, the physical buffer area is subdivided into logical partitions, with each partition used to queue frames whose length falls within a predefined range of values.

Frames from the attached LAN are placed into logical partitions based on the length of the frame. For an Ethernet network, this could entail subdividing the maximum 1500-byte length of the frame's information field into ranges. Once the logical queues are formed, they can be serviced on a round-robin or priority basis. The lower portion of Figure 13.5 illustrates a priority extraction process. In this example, for every frame whose length exceeds 1000 bytes that is serviced and placed on the transmission line, two frames with a length of 501 to 1000 bytes and three frames whose length is less than or equal to 500 bytes are serviced from their queues and placed on the transmission line.
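
The 3:2:1 extraction pattern just described can be expressed as a simple weighted round-robin over three length-based queues. The sketch below is illustrative only; the queue boundaries and weights follow the example in Figure 13.5.

from collections import deque

# Three logical queues keyed by frame length, serviced in a weighted
# round-robin: three short frames, then two medium, then one long per cycle.
queues  = {"short": deque(), "medium": deque(), "long": deque()}
weights = {"short": 3, "medium": 2, "long": 1}

def enqueue(frame: bytes):
    size = len(frame)
    if size <= 500:
        queues["short"].append(frame)
    elif size <= 1000:
        queues["medium"].append(frame)
    else:
        queues["long"].append(frame)

def service_cycle():
    """Return the frames placed on the line during one servicing cycle."""
    sent = []
    for cls in ("short", "medium", "long"):      # shortest (highest priority) first
        for _ in range(weights[cls]):
            if queues[cls]:
                sent.append(queues[cls].popleft())
    return sent

for size in (100, 700, 1500, 200, 1500, 300):
    enqueue(b"\x00" * size)
print([len(f) for f in service_cycle()])         # [100, 200, 300, 700, 1500]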

13.2.6 Express Queuing

Express queuing is a term used by one bridge and router manufacturer to represent the allocation of transmission bandwidth based on the destination address of frames in the queue as well as the number of frames with the same destination address in the queue. That is, under express queuing, transmission bandwidth is allocated so that in any given interval of time each destination address is assigned either a fixed portion of the WAN bandwidth or a smaller portion if fewer frames with that destination address are queued. This technique recognizes that interactive transmission is represented by a low rate of frame flow and therefore requires only a portion of the total transmission bandwidth to obtain an acceptable flow over the WAN transmission facility. The actual method by which bandwidth is allocated is based on an algorithm designed not only to give interactive transmission priority service, but also to ensure that multiple concurrent file transfers share the majority of the remaining bandwidth in an equitable manner, which prevents timeouts from occurring.
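
The vendor's actual algorithm is not described here, but the general idea can be sketched with per-destination queues serviced in rotation, so that a destination with only one or two queued frames (interactive traffic) is never stuck behind a destination with hundreds of queued file transfer frames. The code below is an assumption-laden illustration, not the manufacturer's implementation.

from collections import OrderedDict, deque

# Illustrative per-destination scheduler: each destination address gets its
# own queue, and the queues are serviced in rotation so interactive
# destinations are not starved by long file transfer backlogs.
per_dest = OrderedDict()

def enqueue(dest: str, frame: bytes):
    per_dest.setdefault(dest, deque()).append(frame)

def service_next():
    """Transmit one frame from the next non-empty destination queue."""
    for dest in list(per_dest):
        queue = per_dest[dest]
        if queue:
            frame = queue.popleft()
            per_dest.move_to_end(dest)   # rotate so other destinations go next
            if not queue:
                del per_dest[dest]       # drop destinations with nothing queued
            return dest, frame
    return None

for i in range(3):
    enqueue("file-server", f"bulk-{i}".encode())
enqueue("host-A", b"interactive-query")
print([service_next()[0] for _ in range(4)])
# ['file-server', 'host-A', 'file-server', 'file-server']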



