

12.4. Resource Allocation

Scheduling algorithms provide QoS guarantees to traffic flows by controlling the order and timing of packet transmissions, but sufficient buffer space also has to be allocated to hold incoming packets, especially in high-speed networks. Some form of protection mechanism has to be implemented to provide flow isolation, preventing misbehaving flows from occupying the entire queueing buffer. In addition, the mechanism must make packet-discard decisions on the basis of the network congestion level. Thus, buffer management is required to provide rate guarantees in a network.

Resource-allocation issues in packet-switched networks are fundamentally different from those in shared-access networks, such as Ethernet, where end hosts are able to directly observe the traffic on the shared medium. End hosts in shared-access networks can thus tailor the rate at which they generate and send traffic based on the traffic they observe on the medium.

Consider the traffic-congestion case shown in Figure 12.15, in which two links with a total traffic volume of α1 + α2 are connected to router R1, whose outgoing link has capacity α, where α < α1 + α2. When it reaches the router, the combined traffic encounters a lower speed at the outgoing gate of the router. Router R1 starts to build up unserved packets and eventually enters a state of congestion. Situations like this, in which traffic from several sources has to be combined onto a common lower-speed link, are frequent on the Internet but are unusual in shared-access networks.

Figure 12.15. The need for network resource allocation when outgoing link capacity (α) is less than the sum of incoming (α1 + α2) traffic at a router


If the same situation occurs at router R2, where β < β1 + β2 + β3, severe congestion at the edge routers of the ISP 1 network becomes possible. This situation is by all means undesirable from the standpoint of network management. Another important situation shown in Figure 12.15 is that router R3, located in a different domain, must be able to handle a large volume of traffic totaling α + β. This situation should convince a network manager that the appropriate type of router depends on the location of the router in the network. The type of router for a specific location in the network must be determined after thorough study and simulation, taking all the necessary network design factors into consideration.
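The congestion buildup at router R1 can be illustrated with a small simulation. This is a hypothetical sketch; the function name and time-step model are illustrative, with arrivals and departures expressed in packets per time step and an unbounded buffer assumed.

```python
# Hypothetical sketch: backlog growth at a router whose outgoing link
# capacity (alpha) is less than the combined incoming rate (alpha1 + alpha2).
# Rates are in packets per time step; the buffer is assumed unbounded.
def queue_buildup(alpha1, alpha2, alpha, steps):
    """Return the router backlog after each time step."""
    backlog = 0.0
    history = []
    for _ in range(steps):
        backlog += alpha1 + alpha2            # arrivals on both incoming links
        backlog = max(0.0, backlog - alpha)   # departures on the outgoing link
        history.append(backlog)
    return history

# With alpha < alpha1 + alpha2, the backlog grows without bound.
print(queue_buildup(3, 4, 5, 4))  # [2.0, 4.0, 6.0, 8.0]
```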

12.4.1. Management of Resources

Connection-oriented networks have a method of congestion control that leads to an underutilization of network resources. In the virtual-circuit approach, end hosts send a connection set-up message, which traverses the network and reserves buffers at each router for the connection. If adequate resources are not available at each router along the path of the connection, the connection is denied. The resources reserved for one connection cannot be used by another connection, leading to an underutilization of buffers at routers.

Service models of resource allocation are of two basic types:

  1. Best effort. A network provides no guarantee to end hosts on the type of service to be delivered.

  2. QoS. An end host may request a QoS connection. The host chooses from a set of QoS classes that the network supports.

In connectionless networks, datagrams are switched independently, and datagrams flowing through a router are called connectionless flows. The router maintains the state of each flow in order to make informed decisions. For example, in Figure 12.15, either router R2 or source host H2 can initiate a flow record. In some cases, the router observes the source/destination address combination of a packet and classifies packets into different flows. In other cases, before a flow begins, the source sends some information to identify a connectionless flow originating from the host.
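The source/destination-based classification described above can be sketched as follows; the packet fields, host names, and function name are hypothetical.

```python
# Hypothetical sketch: a router classifying datagrams into connectionless
# flows keyed by their source/destination address pair, keeping per-flow state.
from collections import defaultdict

def classify(packets):
    """Group packets into flows keyed by (source, destination)."""
    flows = defaultdict(list)
    for pkt in packets:
        flows[(pkt["src"], pkt["dst"])].append(pkt)
    return flows

packets = [
    {"src": "H2", "dst": "H5", "size": 500},
    {"src": "H2", "dst": "H5", "size": 1500},
    {"src": "H1", "dst": "H5", "size": 800},
]
flows = classify(packets)
print(len(flows))                # 2 distinct flows
print(len(flows[("H2", "H5")]))  # 2 packets in the H2-to-H5 flow
```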

12.4.2. Classification of Resource-Allocation Schemes

Resource-allocation schemes can be classified as router based versus host based, fixed versus adaptive, and window based versus rate based.

Router Based versus Host Based

Resource allocation can be classified according to whether a router or a host sets up the required resources. In a router-based scheme, routers have primary responsibility for congestion control. A router selectively forwards packets or drops them, if required, to manage the allocation of existing resources. A router also sends information to end hosts on the amount of traffic they are allowed to generate and send.

In a host-based scheme, end hosts have primary responsibility for congestion control. Hosts observe traffic conditions, such as throughput, delay, and packet losses, and adjust the rate at which they generate and send packets accordingly. In most networks, resource-allocation schemes place a certain level of responsibility on both routers and end hosts.

Fixed versus Adaptive

In fixed reservation schemes, end hosts request resources at the router level before a flow begins. The router then allocates enough resources, such as bandwidth and buffer space, for the flow, based on its available resources. A router also ensures that the new reservation does not affect the quality of service provided to existing reservations. If resources are unavailable, the router rejects the request from the end host. In an adaptive reservation scheme, end hosts send packets without reserving resources at the router and then adjust their sending rates, based on observable traffic conditions or the response from the router.

Observable traffic conditions include the amount of packet loss, delay, and other metrics. A router may also send messages to end hosts asking them to slow down their sending rates. Fixed reservation schemes are router based, as the router is responsible for allocating sufficient resources for the flow; thus, routers have primary responsibility for allocating and managing resources. Adaptive reservation schemes can be either router based or host based. If end hosts adjust the rates of transmitted packets based on observable traffic conditions, the scheme is typically host based.
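A host-based adaptive scheme of this kind can be sketched as a simple increase/decrease rule in the spirit of additive-increase/multiplicative-decrease; all parameter values and names here are illustrative.

```python
# Hypothetical sketch of adaptive reservation at an end host: the sending
# rate is raised gradually while no loss is observed and cut multiplicatively
# on loss. All parameter values are illustrative.
def adapt_rate(rate, loss_observed, increase=1.0, decrease=0.5, floor=1.0):
    """Return the next sending rate given the current loss observation."""
    if loss_observed:
        return max(floor, rate * decrease)  # back off on congestion
    return rate + increase                  # probe for more bandwidth

rate = 10.0
for loss in [False, False, True, False]:
    rate = adapt_rate(rate, loss)
print(rate)  # 10 -> 11 -> 12 -> 6 -> 7
```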

Window Based versus Rate Based

In window-based resource allocation, a receiver chooses a window size that depends on the buffer space available to the receiver and advertises this window size to the sender. The sender transmits packets in accordance with the window advertised by the receiver. In rate-based resource allocation, a receiver specifies a maximum rate in bits per second (b/s) that it can handle, and a sender sends traffic in compliance with the rate advertised by the receiver. A reservation-based allocation scheme might also involve reservations in b/s; in this case, routers along the path of a flow can handle traffic up to the advertised rate.
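The two schemes can be contrasted with a minimal sketch; the function names and units are illustrative.

```python
# Hypothetical sketch: in window-based control the receiver advertises a
# window (in packets); in rate-based control it advertises a maximum rate
# (in b/s). Both limit what the sender may transmit.
def window_limited(pending_packets, advertised_window):
    """Packets the sender may transmit under a window limit."""
    return min(pending_packets, advertised_window)

def rate_limited(bits_queued, advertised_rate_bps, interval_s):
    """Bits the sender may transmit in one interval under a rate limit."""
    return min(bits_queued, advertised_rate_bps * interval_s)

print(window_limited(12, 8))                 # 8 packets
print(rate_limited(1_000_000, 64_000, 1.0))  # 64000.0 bits in one second
```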

12.4.3. Fairness in Resource Allocation

The effectiveness of a resource-allocation scheme can be evaluated by considering two primary metrics: throughput and delay. Throughput should be as large as possible, while delay for a flow should normally be minimal. As the number of packets admitted into the network increases, throughput tends to improve. But as the number of packets increases further, link capacity becomes saturated, and delay also increases, because a larger number of packets are buffered at intermediate routers.

A more effective way of evaluating a resource-allocation scheme is to consider the ratio of throughput to delay, or power. As the number of packets admitted into a network builds up, the ratio of throughput to delay increases. This holds until the network load threshold for low delay is reached. Beyond this limit, the network is overloaded, and the power drops rapidly.
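The behavior of the power metric can be illustrated with toy numbers. Here delay is modeled as 1/(capacity − load), a simplified queueing assumption, so power reduces to load × (capacity − load); the values are illustrative.

```python
# Illustration of the power metric (throughput/delay) with toy numbers.
# Delay is modeled as 1 / (capacity - load), a simplified queueing
# assumption, so power = load * (capacity - load).
def power(throughput, delay):
    return throughput / delay

capacity = 10.0
loads = [2.0, 5.0, 8.0, 9.5]
powers = [power(x, 1.0 / (capacity - x)) for x in loads]
print(powers)  # power peaks at a moderate load, then drops as delay explodes
```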

A resource-allocation scheme must also be fair, which implies that each traffic flow through the network receives an equal share of the bandwidth. However, a notion of fairness that disregards the flow throughputs (flow rates) is itself not fair. Raj Jain proposed a fairness index for n flows f1, f2, ..., fn as

Equation 12.24

ƒ(f1, f2, ..., fn) = (f1 + f2 + ... + fn)^2 / [n (f1^2 + f2^2 + ... + fn^2)]

The fairness index ƒ always lies between 0 and 1, representing the lowest and the best fairness, respectively. With reservation-based resource-allocation schemes, it is always possible for certain traffic, such as voice, to achieve a greater share of the available bandwidth by reservation. This may lead to unfairness in resource allocation. Hence, a fair resource-allocation scheme may not always be the most effective method of resource allocation.
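Jain's index as given in Equation 12.24 can be computed directly; the function name is illustrative.

```python
# Jain's fairness index for n flow throughputs (Equation 12.24):
# (sum of rates)^2 divided by (n times the sum of squared rates).
def fairness_index(rates):
    n = len(rates)
    return sum(rates) ** 2 / (n * sum(r * r for r in rates))

print(fairness_index([5, 5, 5, 5]))   # 1.0  (equal shares: best fairness)
print(fairness_index([10, 0, 0, 0]))  # 0.25 (one flow takes everything: 1/n)
```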

12.4.4. ATM Resource Allocation

Traffic management and congestion control in ATM networks differ from those in regular packet-switched networks. The high-speed multiplexing and the small packet sizes in ATM networks make the task of congestion control difficult. One of the restrictions that ATM networks face in traffic control is the insufficient number of bits available in a cell for network control. Maintaining a constant bit rate helps avoid congestion in ATM networks.

Achieving Constant Bit Rate (CBR)

ATM networks are designed for fast cell switching and efficient routing at switches. One of the important requirements in ATM networks is low cell delay along with a constant rate of cell delivery. This is important because delay-sensitive applications, such as voice communication, cannot tolerate long delays between their cells. Cell-delay variations can occur both at sources and within a network. In either case, a destination user must be able to adapt to cell-delay variations.

In order to maintain a constant bit rate (CBR), the following algorithm is used in ATM networks. Let J_i be the cell-delay variation for cell i that a given application can tolerate. Let c be the constant rate at which cells are delivered to users. Thus, the interval between two similar points of two consecutive arriving cells is 1/c. Let t_i be the time at which the ith cell is received. Then, the variable delay for the ith cell is given by

Equation 12.25

J_i = J_{i-1} - [t_i - (t_{i-1} + 1/c)]

The analysis takes J_1 as the first value and proceeds recursively, using i, where i ≥ 2. For example, for the second cell, we have:

Equation 12.26

J_2 = J_1 - [t_2 - (t_1 + 1/c)]

The cell-delay variation can be reduced by increasing both the available network resources and the data rate at the user-network interface. The cell delay at the network level can be lower because cells are small and of fixed size and have a fixed header format. For ATM networks, as well as other types of networks, resource allocation has to be controlled to avoid congestion. Variable cell delay can occur because of the processing overhead in the ATM layer and the physical layer: the encapsulation process at the ATM layer and the overhead bits added at the physical layer lead to variable cell delays.
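One reconstruction of the recursion behind Equations 12.25 and 12.26 is J_i = J_{i-1} - [t_i - (t_{i-1} + 1/c)]; under that assumption it can be computed as follows, with illustrative names and values.

```python
# Sketch under the assumed recursion J_i = J_{i-1} - [t_i - (t_{i-1} + 1/c)]:
# a cell arriving later than its expected slot reduces the remaining
# delay-variation tolerance. Names and values are illustrative.
def delay_variation(arrivals, c, j1):
    """Return [J_1, J_2, ...] for cell arrival times and delivery rate c."""
    j = [j1]
    for i in range(1, len(arrivals)):
        expected = arrivals[i - 1] + 1.0 / c  # next slot, 1/c after the last cell
        j.append(j[-1] - (arrivals[i] - expected))
    return j

# Cells expected every 1/c = 0.1 s; the third cell arrives 0.02 s late.
print(delay_variation([0.0, 0.1, 0.22], c=10, j1=0.05))  # ~[0.05, 0.05, 0.03]
```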

The key factor that makes it extremely difficult to control wide-area ATM networks is that the cell-transmission time is quite small; hence, feedback control may not be able to react quickly, making adaptive reservation systems ineffective for congestion control. The key factors that influence traffic management in ATM networks are latency and cell-delay variation. The network propagation delay can sometimes be comparable to cell-transmission times. Hence, a source cannot use implicit feedback control, as the feedback control signals arrive too late for any appropriate cell-retransmission strategy.

Resource Management Using Virtual Paths

The main resource management in ATM networks involves the proper use of virtual paths. Several virtual channel connections (VCCs) are grouped into a virtual path connection (VPC). ATM networks provide an aggregate capacity for all virtual channels within a virtual path. The primary parameters for traffic management in ATM networks are cell-loss ratio, cell-transfer delay, and cell-delay variation. If a virtual channel passes through multiple virtual paths, the performance on that virtual channel depends on the performance of all the virtual paths that carry it.

Different VCCs within a VPC may have different QoS requirements. The aggregate capacity and performance characteristics for a virtual path should be set to meet the requirements of the most demanding VCCs. There are two different approaches to resource allocation for virtual paths. In the first approach, an ATM network can set the capacity of the virtual path to meet the peak data rates of all VCCs within that VPC. However, this may lead to an underutilization of network resources. The second approach follows the idea of statistical multiplexing . An ATM network can set the capacity of a VPC to be greater than or equal to the average capacity of all its VCCs. With this approach, the total capacity of the virtual path becomes smaller than the aggregate peak data rate, leading to more efficient utilization of network resources. The second approach works best when a number of VCCs are grouped into a VPC on the basis of similar quality of service.
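The two VPC sizing approaches can be compared with a small sketch; the rates and field names are illustrative.

```python
# Hypothetical sketch of the two VPC sizing approaches: allocate the sum of
# the peak rates of all VCCs, or (statistical multiplexing) size the VPC
# from their average rates. Rates and field names are illustrative.
def peak_allocation(vccs):
    """VPC capacity guaranteeing every VCC its peak rate simultaneously."""
    return sum(v["peak"] for v in vccs)

def statistical_allocation(vccs):
    """Smaller VPC capacity based on average rates of the VCCs."""
    return sum(v["average"] for v in vccs)

vccs = [{"peak": 10, "average": 4}, {"peak": 8, "average": 3}, {"peak": 6, "average": 2}]
print(peak_allocation(vccs))         # 24: guarantees peaks but wastes capacity
print(statistical_allocation(vccs))  # 9: efficient, relies on rate averaging
```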

Connection Admission Control

When requesting a new VCC or VPC in ATM networks, a user must associate the connection with a particular QoS. Generally, a user can choose from a range of QoS classes that the network supports. An ATM network accepts the connection only if it can meet the QoS requirements without compromising on the QoS of existing connections. Once the network accepts a connection, the user and the network enter into a traffic contract. The network continues to provide the QoS as long as the user does not violate the traffic contract. The traffic contract is characterized by four parameters: peak cell rate (PCR), cell-delay variation (CDV), sustainable cell rate (SCR), and burst tolerance.

The peak cell rate of a connection is the maximum rate at which a source generates cells within a connection. The sustainable cell rate represents the upper bound on the average cell rate associated with a given connection. Burst tolerance represents the maximum variability of the cell-arrival rate from the SCR. For a constant bit rate (CBR) source, only the first two parameters are relevant: cell-delay variation may cause the actual cell rate to exceed its peak value, because cell delays can cause cells to clump together.

For variable bit rate (VBR) sources, all four parameters are relevant. A future flow pattern can be predicted from the SCR and burst tolerance. A network characterizes each connection based on these four parameters. The admission-control decision is based primarily on users' QoS requirements. In an ATM connection, a user is also allowed to assign a cell-loss priority (CLP). The CLP bit in the cell header indicates either high priority (CLP = 0) or low priority (CLP = 1).
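A simplified admission-control check might compare the requested sustainable cell rate against the remaining capacity. This sketch is illustrative only: a real ATM switch would also evaluate delay, CDV, and burst-tolerance requirements, which are omitted here.

```python
# Hypothetical admission-control check: accept a new connection only if its
# requested sustainable cell rate (SCR) still fits the link capacity after
# the reservations already granted. Real admission control also checks
# delay and burst tolerance; this sketch omits those.
def admit(existing_scrs, requested_scr, link_capacity):
    """True if the total SCR stays within the link capacity."""
    return sum(existing_scrs) + requested_scr <= link_capacity

existing = [30_000, 45_000]              # SCRs of admitted connections (cells/s)
print(admit(existing, 20_000, 100_000))  # True:  95,000 <= 100,000
print(admit(existing, 30_000, 100_000))  # False: would exceed capacity
```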

Usage Parameter Control

Once a network's admission-control function accepts a connection, the usage parameter control scheme is used to determine whether the connection conforms to the traffic contract. The usage parameter control mechanism is normally accompanied by a QoS traffic-shaping scheme. Usage parameter control involves controlling the peak cell rate and the associated cell-delay variation. The traffic flows are monitored to check whether a flow violates the contract by exceeding the peak cell rate, subject to the maximum specified cell-delay variation.

The CLP header bit defined in Section 2.4.2 is used for tagging or discarding noncompliant cells. Tagged cells and cells sent with CLP = 1 are classified as low-priority cells. If the network becomes congested, these cells are dropped first in order to provide better performance to high-priority cells. Usage parameter control can thus also provide a mechanism of traffic policing, whereby cells that violate the traffic contract are either tagged or discarded.
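A simplified policing sketch follows: cells beyond a contracted count per window are tagged by setting CLP to 1, or discarded under a strict policy. The windowed-count model is an illustrative simplification; a real policer enforces the peak cell rate with a cell-by-cell conformance algorithm.

```python
# Simplified policing sketch: cells beyond the contracted count per window
# are tagged (CLP set to 1) or, under a strict policy, discarded. The
# windowed count is an illustrative simplification of real cell policing.
def police(cells, peak_cell_rate, window_s=1.0, strict=False):
    """Return the cells that survive policing over one window."""
    allowed = int(peak_cell_rate * window_s)
    out = []
    for i, cell in enumerate(cells):
        if i < allowed:
            out.append(cell)                # compliant: passed unchanged
        elif not strict:
            out.append({**cell, "clp": 1})  # noncompliant: tagged low priority
        # strict policy: noncompliant cell is discarded
    return out

cells = [{"id": i, "clp": 0} for i in range(5)]
print([c["clp"] for c in police(cells, peak_cell_rate=3)])  # [0, 0, 0, 1, 1]
print(len(police(cells, peak_cell_rate=3, strict=True)))    # 3
```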

12.4.5. Cell Scheduling and QoS

ATM networks provide a guaranteed frame rate (GFR) service, which requires the network to handle frames in addition to ATM cells. With GFR, all cells of a frame have the same CLP bit setting; frames with the CLP = 1 setting are low-priority frames. This service provides a minimum rate guarantee to a connection by setting up GFR virtual channels. The minimum capacity is ensured only for frames with CLP = 0. The implementation involves tagging and policing.

The tagging process is used to identify frames that violate the GFR traffic contract. The ATM network sets the CLP bit to 1 on all cells in a noncompliant frame. Tagged cells are given a lower priority in the buffer-management and scheduling stages. The buffer management is done when congestion occurs. In such a case, all tagged cells are discarded to give preference to untagged cells. The buffering scheme is set up such that an untagged cell can replace a tagged cell in the buffer when resources are limited.
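The buffering rule described above, in which an untagged cell may replace a tagged one when resources are limited, can be sketched as follows; the function and field names are illustrative.

```python
# Hypothetical sketch of the GFR buffering rule: when the buffer is full,
# an arriving untagged cell (CLP = 0) may replace a tagged cell (CLP = 1);
# a tagged arrival to a full buffer is dropped.
def enqueue(buffer, cell, capacity):
    """Add a cell to the buffer; return True if it was accepted."""
    if len(buffer) < capacity:
        buffer.append(cell)
        return True
    if cell["clp"] == 0:
        for i, queued in enumerate(buffer):
            if queued["clp"] == 1:
                buffer[i] = cell  # untagged cell replaces a tagged one
                return True
    return False                  # buffer full; arriving cell dropped

buf = [{"clp": 1}, {"clp": 0}]
print(enqueue(buf, {"clp": 0}, capacity=2))  # True: replaced the tagged cell
print([c["clp"] for c in buf])               # [0, 0]
```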

As a QoS component, a scheduler gives preference to untagged cells over tagged cells. Scheduling among router queues ensures the minimum rate requirements for each virtual channel. The traffic on each connection is monitored continuously. For a frame to conform to the traffic contract, all cells in the frame must conform. Frames that violate the traffic contract are either tagged to a lower priority or discarded. This process therefore filters out frames that might overload the network and cause congestion. Frames are then checked to determine whether they qualify for guaranteed delivery.



Computer and Communication Networks
Computer and Communication Networks (paperback)
ISBN: 0131389106
Year: 2007
Pages: 211
Authors: Nader F. Mir
