The fundamental service model of the Internet, exemplified by the best-effort delivery service of IP, has remained essentially unchanged for over 20 years. This model has served legacy applications such as file transfer and terminal access well, but routing delays and congestion mean that real-time applications fare poorly on a best-effort Internet. The Integrated Services (IS) model is intended to solve these problems as a key component of a future Internet architecture that supports both the current best-effort service and emerging real-time services. IS is designed to optimize network resources for real-time applications such as videoconferencing, video broadcast, and audioconferencing, all of which require guaranteed QoS in order to offer acceptable quality. IS enables Internet traffic to be separated into legacy best-effort traffic and real-time data flows requiring guaranteed QoS, and defines two service classes specifically designed for real-time traffic, as follows:
Guaranteed service—intended for applications requiring a fixed delay bound.
Predictive service—intended for applications requiring a probabilistic delay bound.
IS integrates (hence the name) all of these services over a common link, using a scheme called controlled link sharing. IS is also designed to work equally well with both unicast and multicast traffic. The IS model is specified by the IETF in .
An implementation of IS requires four main components: the packet scheduler, the admission control routine, the classifier, and the reservation setup protocol. These are discussed briefly below.
Reservation setup protocol. IS uses the Resource Reservation Protocol (RSVP) to signal reservation messages. IS instances communicate via RSVP to create and maintain flow-specific state in the end-point hosts and in routers along the path of a flow. An application that wants to send data packets in a reserved flow communicates with the local RSVP reservation instance. RSVP attempts to set up a flow reservation with the requested QoS, which will be accepted if the application satisfies the policy restrictions and the routers can handle the requested QoS. RSVP advises the packet classifier and packet scheduler in each node to process the packets of this flow appropriately. When the application delivers data packets to the classifier in the first node, which has mapped this flow into a specific service class complying with the requested QoS, the flow is recognized by the sender IP address and passed to the packet scheduler. The packet scheduler forwards the packets, depending on their service class, to the next router or, finally, to the receiving host. Because RSVP is a simplex protocol, QoS reservations are made in one direction only: from the sending node to the receiving node. If the application wants to cancel the reservation for the data flow, it sends a message to the reservation instance, which frees the reserved QoS resources in all routers along the path so that they can be used for other flows.
Admission control. Admission control contains the decision algorithm that a router uses to determine whether there are enough resources to accept the requested QoS for a new flow. If there are not enough free resources, accepting the new flow would impact earlier guarantees, and the new flow must be rejected. If the new flow is accepted, the reservation instance in the router assigns the packet classifier and the packet scheduler to reserve the requested QoS for this flow. Admission control is invoked at each router along a reservation path to make a local accept/reject decision at the time a host requests a real-time service. The admission control algorithm must be consistent with the service model. Admission control should not be confused with policing, a packet-by-packet function performed by the packet scheduler that ensures a host does not violate its promised traffic characteristics. Nevertheless, to ensure that QoS guarantees are honored, admission control is also concerned with enforcing administrative policies on resource reservations. Some policies check user authentication for a requested reservation, so unauthorized reservation requests can be rejected. Admission control will play an important role in accounting costs for Internet resources in the future.
Packet classifier. The packet classifier identifies packets of an IP flow in hosts and routers that will receive a certain level of service, so that each incoming packet is mapped by the classifier into a specific class. All packets that are classified in the same class get the same treatment from the packet scheduler.
Packet scheduler. The packet scheduler manages the forwarding of different packet streams in hosts and routers, based on their service class, using queue management and various scheduling algorithms. The packet scheduler must ensure that the packet delivery corresponds to the QoS parameter for each flow.
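The interplay between the classifier and scheduler described above can be sketched in Python. This is an illustrative toy, not part of any IS implementation: the flow table entries, field names, and strict-priority policy are our own assumptions.

```python
from collections import deque

# Hypothetical flow table: (dst IP, protocol, dst port) -> service class.
# Real classifiers are configured by the reservation setup protocol.
FLOW_TABLE = {
    ("192.0.2.10", "UDP", 5004): "guaranteed",
    ("192.0.2.20", "UDP", 5006): "controlled-load",
}

class Classifier:
    """Maps each incoming packet into a service class; packets that
    cannot be identified as belonging to a flow get best-effort."""
    def classify(self, packet):
        key = (packet["dst_ip"], packet["proto"], packet["dst_port"])
        return FLOW_TABLE.get(key, "best-effort")

class Scheduler:
    """One FIFO queue per class, served in strict priority order
    (a deliberately simple scheduling policy for illustration)."""
    PRIORITY = ["guaranteed", "controlled-load", "best-effort"]

    def __init__(self):
        self.queues = {cls: deque() for cls in self.PRIORITY}

    def enqueue(self, packet, service_class):
        self.queues[service_class].append(packet)

    def dequeue(self):
        # Release the next packet from the highest-priority nonempty queue.
        for cls in self.PRIORITY:
            if self.queues[cls]:
                return self.queues[cls].popleft()
        return None
```

In practice the scheduler would use queue management disciplines such as weighted fair queuing rather than strict priority, but the division of labor is the same: the classifier assigns the class, the scheduler decides the release order.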
In order for an application requiring guaranteed service or controlled-load service to make use of QoS, it must establish the end-to-end path and reserve resources along that path via the resource setup protocol prior to transmitting any data. The decision of whether or not to allocate resources is the responsibility of admission control. If granted, each router along the path must place all incoming packets associated with that flow in specific queues, according to multifield classification performed by the classifier. The scheduler then releases packets in accordance with the QoS specification.
To support the IS model, an Internet router must be able to provide an appropriate QoS for each flow. This function is called traffic control. Figure 8.6 shows the implementation model for a router and host. It is important to note that a router must determine the forwarding path for a packet on a per-packet basis. This procedure must be highly optimized, and in most commercial routers this typically requires hardware assist. For efficiency, a common mechanism should be used for both resource classification and route lookup.
Figure 8.6: IS model for a host and a router.
Before describing these services in more detail we need to enlarge on our earlier discussion of flows.
The IS model, and especially RSVP, relies on the classification of related datagrams into flows (we defined flows earlier in section 8.1.1). There are three basic concepts related to flows that are fundamental to the IS model: sessions, flow specifications, and filter specifications.
A session is a data flow that can be identified by its destination. The term session is used rather than destination to emphasize the soft-state nature of the flow. Once a reservation is made by a router for a particular destination, the router classifies this as a session and allocates resources for its duration. A session is defined as follows:
Destination IP address (unicast or multicast)
IP protocol ID (e.g., TCP, UDP)
Destination port number (e.g., Telnet)
If the destination IP address is multicast, the destination port may not be required, since different multicast applications typically use different addresses rather than different ports. Packets that cannot be identified as belonging to a session are given a best-effort delivery service.
A reservation request issued by a destination end system for a particular flow is called a flow descriptor. A flow descriptor defines the traffic and QoS characteristics for a specific flow of data. The flow descriptor comprises a filter specification (filterspec) and a flow specification (flowspec), as follows:
Flowspec—Used by an application to specify a desired QoS for a flow. Routers will process packets for this flow using a set of preferences based on active flowspecs. The flowspec contains the following elements: service class, Rspec, and Tspec. The service class identifies the type of service requested and includes information used by routers to merge requests. Flowspecs also contain a set of parameters collectively referred to as the invocation information, divided into two groups: Traffic Specification (Tspec) and service Request Specification (Rspec). The Tspec describes the traffic characteristics of the requested service and is represented with a token bucket filter. Rspec specifies the QoS required by the application for the flow and may comprise parameters such as a specified bandwidth, maximum packet delay, and maximum packet loss rate. The flowspec is transported by the reservation protocol, passed to admission control, and then to the packet scheduler. Note that the information derived from Tspec and Rspec and used by the scheduler is not directly visible to RSVP.
Filterspec—The filterspec identifies the set of packets for which the flowspec is requested. Therefore, the filterspec, in combination with the session, defines a flow on which the desired QoS is to be offered. The information from the filterspec is used in the packet classifier. Filterspec comprises two elements: source IP address and source port number (e.g., for UDP/TCP). The filterspec is used to identify a specific subset of the packets associated with a session.
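The structure of a flow descriptor can be sketched as plain data classes. The field names below are illustrative only; the actual wire encodings are defined by the IS and RSVP specifications.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tspec:
    """Traffic specification: the token bucket characterization."""
    token_rate_r: float       # token bucket rate, bytes/s
    bucket_depth_b: float     # token bucket depth, bytes
    min_policed_unit_m: int   # bytes
    max_packet_size_M: int    # bytes

@dataclass
class Rspec:
    """Service request specification."""
    bandwidth_R: float        # requested service rate, bytes/s

@dataclass
class Flowspec:
    service_class: str        # "guaranteed" or "controlled-load"
    tspec: Tspec
    rspec: Optional[Rspec]    # controlled load omits the Rspec

@dataclass
class Filterspec:
    src_ip: str
    src_port: int

@dataclass
class FlowDescriptor:
    """Flowspec (desired QoS) plus filterspec (which packets get it)."""
    flowspec: Flowspec
    filterspec: Filterspec
```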
The IS model uses a token bucket strategy to shape traffic to meet reservation requests. This enables many traffic sources to be easily characterized, provides a concise description of the load imposed by a flow, and simplifies the process of resource reservation. Traffic shaping provides input parameters to the policing function.
In addition to the basic best-effort service, the IS model defines two additional service classes: a guaranteed service and a controlled load service.
The controlled load service is designed to support applications that are highly sensitive to congestion conditions in the Internet (such as real-time applications). The controlled load service is also designed for applications that can tolerate a reasonable amount of packet loss and delay, such as audio- and videoconferencing software. These applications work well on lightly loaded networks but degrade rapidly as network load increases and congestion occurs. If an application selects the controlled load service for a specific flow, then the performance of that flow will not degrade as the network load increases. The controlled load service offers only one fixed service level, with no optional features in the specification. The service simulates a best-effort service over lightly loaded networks. In effect the service offered is equivalent to that experienced by best-effort (uncontrolled) traffic under lightly loaded conditions. This means that a very high percentage of transmitted packets will be successfully delivered to the destination, and the transit delay for a very high percentage of the delivered packets will not greatly exceed the minimum transit delay.
Any router that accepts requests for controlled load services must ensure that sufficient bandwidth and processing resources are available. This can be achieved with active admission control. Before a router accepts a new QoS reservation, represented by the Tspec, it must consider all key resources (link bandwidth, router or switch port buffer space, and computational capacity for packet forwarding). The controlled load service class does not accept or make use of specific target values for control parameters such as bandwidth, delay, or loss. Applications that use controlled load services must be capable of dealing with small amounts of packet loss and occasional packet delays.
QoS reservations using controlled load services must provide a Tspec that consists of the token bucket parameters, r and b, together with the minimum policed unit, m, and the maximum packet size, M. It is not necessary to supply an Rspec, since controlled load services do not provide functions to reserve a fixed bandwidth or guarantee minimum packet delays. Controlled load service provides QoS control only for traffic that conforms to the Tspec provided at setup time. Clearly, the service guarantees apply only to packets that respect the token bucket rule (i.e., over all time periods, T, the amount of data sent cannot exceed rT + b).
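The token bucket rule above can be checked mechanically. The function below is a sketch (the event-list representation is our own): it verifies that over every interval the data sent does not exceed rT + b.

```python
def conforms(arrivals, r, b):
    """Check token bucket conformance: over every interval of length T,
    the amount of data sent must not exceed r*T + b.

    arrivals: time-ordered list of (time_seconds, bytes_sent) events.
    r: token rate in bytes/s; b: bucket depth in bytes.
    """
    for i in range(len(arrivals)):
        total = 0
        for j in range(i, len(arrivals)):
            total += arrivals[j][1]
            interval = arrivals[j][0] - arrivals[i][0]
            if total > r * interval + b:
                return False
    return True
```

A real policer works incrementally (refilling tokens as time passes) rather than scanning all interval pairs, but the rule it enforces is the same.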
The guaranteed service is designed to deliver datagrams from the source to the destination within a guaranteed delivery time. This means that every packet within a flow that conforms to the traffic specification will arrive, at worst, at the maximum delay time specified in the flow descriptor. For example, real-time multimedia applications, such as video and audio broadcasting, can use streaming technology; these applications cannot use datagrams that arrive after their allotted playback time. The guaranteed service is extremely demanding in its specification for end-to-end delay control, and hence it is useful only if it is supported by every router along the reservation path (other service models in the intermediate path may have much weaker delay control mechanisms). It is important to understand that packet delay has two components, as follows:
A fixed transmission delay. The fixed delay depends on the path taken, which is selected not by guaranteed service but by the setup mechanism. All data packets in an IP network have a minimum delay that is limited by the propagation velocity of the media and the turnaround time of the data packets in all routers on the routing path.
A variable queuing delay. The queuing delay is determined by guaranteed service, and it is controlled by two parameters: the token bucket (in particular, the bucket size, b) and the requested bandwidth, R. These parameters are used to construct the fluid model characterizing the end-to-end behavior of a flow.
The fluid model specifies a service level for a flow that is equivalent to having a dedicated link of bandwidth R. In effect each flow has its own independent service specification that is not influenced by other flow requirements or activities. The definition of guaranteed service is based on the premise that the fluid delay of a flow obeying a token bucket (r, b), being served by a line with bandwidth R, is bounded by b / R (where R ≥ r). In practice guaranteed service offers an end-to-end service rate, R, where R represents a proportion of bandwidth reserved along the routing path and not the bandwidth of a dedicated line. In this model, Tspec and Rspec are used to set up a flow reservation. The Tspec is represented by the token bucket parameters. The Rspec contains the parameter, R, that specifies the bandwidth for the flow reservation. Guaranteed service does not minimize jitter, but it does control the maximum queuing delay. Applications that have demanding real-time requirements (such as real-time distribution of share prices) will almost certainly require guaranteed service.
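As a simple worked illustration, the fluid-model bound can be computed directly. The function name and units below are our own; only the b / R relation (with R ≥ r) comes from the model.

```python
def fluid_delay_bound(b, R, r):
    """Queuing delay bound from the fluid model: a flow obeying token
    bucket (r, b), served at rate R >= r, is delayed at most b / R
    seconds (fixed propagation delay excluded)."""
    if R < r:
        raise ValueError("service rate R must be at least token rate r")
    return b / R

# Example: a 64,000-byte burst (b) served at R = 1,000,000 bytes/s
# with r = 100,000 bytes/s gives a bound of 0.064 s.
```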
Resource reservation protocol (RSVP) is a key component of the IS architecture and is defined in . RSVP is a connection-oriented signaling protocol designed to establish QoS-compliant paths through an end-to-end network connection by reserving bandwidth and resources in advance. RSVP is designed to meet the demands of real-time voice and video applications as well as legacy best-effort traffic. For example, a video server may ask RSVP to reserve a path to a specific destination, with bounded delay and jitter, in order to deliver a real-time videoconference feed.
RSVP runs over IP (protocol 46) and is designed to operate in both IPv4 and IPv6 environments. It exploits the Type of Service (ToS) field in the IPv4 header and the flow label field in the IPv6 header. RSVP can be viewed as a Session Layer service; it does not transport user data or perform routing functions. RSVP is designed to operate with both unicast and multicast routing protocols, using the local routing database on each routing node to obtain active path information. Once the path is reserved, data are delivered using a combination of conventional transport and routing protocols. Note that all the hosts, routers, and other network infrastructure elements between the receiver and sender must support RSVP along the end-to-end reservation path to maintain the path state and integrity. All devices in the path must agree to observe the RSVP call request parameters before user traffic is allowed to flow. These parameters may include mandatory flow specifications, such as the maximum frame transmission rate, long-term average frame transmission rate, maximum frame jitter, and maximum end-to-end delay. In order to satisfy these requirements, each intermediate RSVP element must reserve sufficient resources, such as bandwidth, CPU, memory, and buffers, for the specified flow.
RSVP requests are simplex (i.e., unidirectional). Therefore, the RSVP model differentiates between senders and receivers. RSVP also uses the concepts of flows and reservations. Reservations are receiver initiated (i.e., along the reverse delivery path to the sender) and made on behalf of individual packet flows. Routers in the reservation path may merge reservation requests from multiple downstream receivers, as they propagate the requests toward the sender. The receiver maintains the resource reservation for that flow for the duration. RSVP identifies flows by a combination of the destination IP address and destination port. All flows have an associated flow descriptor, which specifies the QoS requirements. Note that RSVP does not understand the contents of the flow descriptor; this object is processed by system-level traffic control functions (i.e., the packet classifier and scheduler). Depending upon the state of the network and the system resources available, any intermediate RSVP element along the upstream path can accept or reject reservation requests.
Most network applications require full-duplex services, since each sender may also act as a receiver (e.g., in an interactive videoconference). In such cases two RSVP sessions must be created, one for each peer. Each receiver sends a reservation request to its associated sender, where the contents of the request depend upon the capabilities of the receiver (e.g., network interface speed, display capabilities, throughput, etc.). This model also accommodates the different QoS requirements of heterogeneous receivers in large multicast groups; the sender does not need to know the characteristics of all possible receivers to structure the reservations. For example, suppose a high-performance workstation and a standard PC wish to receive a high-quality MPEG stream from a video server. The default frame rate for the video stream is 30 fps, with an unconstrained data rate of 1.5 Mbps. The PC does not have enough processing power to decode the video stream at the full rate; it can cope with only 10 fps. Initially the video server uses RSVP to signal to the two receivers that it can offer the video stream at 1.5 Mbps. The workstation issues a reservation request for the full 1.5 Mbps; the PC issues a reservation request for a flow with 10 frames per second and a data rate of 500 Kbps.
RSVP messages comprise a common header, followed by a variable number of objects. The number and type of these objects depend on the message type. Message objects contain the information necessary to make resource reservations—for example, the flow descriptor or the reservation style. In most cases, the order of the objects in an RSVP message is immaterial; the RSVP specification recommends a particular order, but implementers should accept the objects in any order. Figure 8.7(a) shows the common header of an RSVP message. The RSVP objects that follow the common header consist of a 32-bit header and one or more 32-bit words. Figure 8.7(b) shows the RSVP object header.
Figure 8.7: (a) RSVP common header. (b) RSVP object header.
Version—a 4-bit RSVP protocol revision (currently 1).
Flags—a 4-bit field. No flags are defined yet.
Message Type—an 8-bit field that indicates the message type: 1 = Path, 2 = Resv, 3 = PathErr, 4 = ResvErr, 5 = PathTear, 6 = ResvTear, 7 = ResvConf.
RSVP Checksum—a 16-bit field. May be used by receivers of an RSVP message to detect transmission errors.
Send_TTL—an 8-bit field that contains the IP TTL value.
RSVP Length—a 16-bit field that contains the total length of the RSVP message including the common header and all objects that follow. The length is counted in bytes.
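As a sketch, the 8-byte common header can be packed and parsed with Python's struct module. The layout below assumes the standard header, which also includes a reserved byte (between Send_TTL and RSVP Length) not listed in the field descriptions above.

```python
import struct

# Version/Flags (1 byte), Msg Type (1), Checksum (2),
# Send_TTL (1), Reserved (1), RSVP Length (2); network byte order.
RSVP_COMMON_HDR = struct.Struct("!BBHBBH")

def pack_common_header(msg_type, send_ttl, length,
                       version=1, flags=0, checksum=0):
    """Pack the 8-byte RSVP common header."""
    return RSVP_COMMON_HDR.pack((version << 4) | flags, msg_type,
                                checksum, send_ttl, 0, length)

def unpack_common_header(data):
    """Parse the common header fields from the first 8 bytes."""
    vf, msg_type, checksum, send_ttl, _reserved, length = \
        RSVP_COMMON_HDR.unpack(data[:8])
    return {"version": vf >> 4, "flags": vf & 0x0F,
            "msg_type": msg_type, "checksum": checksum,
            "send_ttl": send_ttl, "length": length}
```

Note that a real implementation would also compute the one's-complement checksum over the whole message; it is left at zero here for brevity.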
We have already seen the operation of the RSVP Path and Resv messages. Figure 8.8 shows the RSVP Path and Resv message formats. The integrity object, if used, must follow the common header. The style object and the flow descriptor list must occur at the end of the message. The order of all other objects should follow the recommendation in .
Figure 8.8: (a) Path message format. (b) Resv message format.
Length—a 16-bit field that contains the object length in bytes. This must be a multiple of 4. The minimum length is 4 bytes.
Class-Number—Identifies the object class. The following classes are defined:
Null—set to zero. The length of this object must be at least 4, and any multiple of 4. The object can appear anywhere in the object sequence of an RSVP message. The content is ignored by the receiver.
Session—contains the IP destination address, the IP protocol ID, and the destination port to define a specific session for the other objects that follow. The session object is required in every RSVP message.
RSVP_HOP—contains the IP address of the node that sent this message and a logical outgoing interface handle. For downstream messages (e.g., path messages) the RSVP_HOP object represents a PHOP (previous hop) object, and for upstream messages (e.g., RESV messages) it represents an NHOP (next hop) object.
Time_Values—contains the refresh period for path and reservation messages. If these messages are not refreshed within the specified time period, the path or reservation state is canceled.
Style—defines the reservation style and some style-specific information that is not in flowspec or filterspec. The style object is required in every Resv message.
Flowspec—specifies the required QoS in reservation messages.
Filterspec—defines which data packets receive the QoS specified in the flowspec.
Sender_Template—contains the sender IP address and additional demultiplexing information used to identify a sender. Required in every path message.
Sender_Tspec—defines traffic characteristics of a data flow from a sender. Required in all path messages.
Adspec—advertises information to the traffic control modules in the RSVP nodes along the path.
Error_Spec—specifies an error in a PathErr, ResvErr, or a confirmation in a ResvConf message.
Policy_Data—contains information that allows a policy module to decide whether an associated reservation is administratively permitted or not. It can be used in path, Resv, PathErr, or ResvErr messages.
Integrity—contains cryptographic data to authenticate the originating node and to verify the contents of an RSVP message.
Scope—contains an explicit list of sender hosts to which the information in the message is sent. The object can appear in a Resv, ResvErr, or ResvTear message.
Resv_Confirm—contains the IP address of a receiver that requests confirmation of its reservation. It can be used in a Resv or ResvConf message.
C-Type—specifies the object type within the class number. Different object types are used for IPv4 and IPv6.
Object contents—vary by object type; the maximum length is 65,528 bytes.
For a detailed description of the RSVP message structure and the handling of the different reservation styles in reservation messages, please refer to the RSVP specification.
The core operations in RSVP involve the setup and tearing down of paths and the reservation of resources along those paths. The path is the route taken by a packet flow through one or more routers from the sender to the receiver. All packets that belong to a particular flow will follow the same path through the network (i.e., typically the shortest path [s,d] created by an IGP such as OSPF or DVMRP).
In order to create a path, an RSVP sender uses the following process:
The sender first issues a path message that traverses the network to the intended destination of the flow. The path message contains traffic parameters that describe the QoS requirements for a particular flow. In order to forward this message, RSVP consults routing tables in each router (created by conventional routing protocols).
When the path message reaches the first RSVP router, the router caches the IP address from the RSVP hop field within the message (in this case the address of the sender). Then the router overwrites the last hop field with its own IP address and then forwards the modified path message to the next router in the path.
This process continues until the message has finally reached its destination (the receiver), by which time each router in the path will know the address of the previous router and the path can be accessed in reverse.
At this point the receiver(s) knows that a sender can accommodate any QoS requirements of the flow and that all routers along the end-to-end path are aware that there may be pending resource reservations issued for this flow (which they may or may not accept, depending upon their status).
Figure 8.9 shows the process of the path definition.
Figure 8.9: RSVP path definition process.
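The hop-by-hop rewrite of the RSVP_HOP field during path setup can be simulated in a few lines. This is an illustrative sketch; the addresses and dictionary-based message are hypothetical stand-ins for real protocol state.

```python
def forward_path_message(path_msg, routers):
    """Simulate a Path message traversing a list of routers, in order,
    from sender to receiver. Each router caches the previous-hop
    address from the RSVP_HOP field, then overwrites that field with
    its own address before forwarding. The resulting cache lets Resv
    messages retrace the path in reverse."""
    phop_cache = {}                      # router -> cached previous hop
    for router in routers:
        phop_cache[router] = path_msg["rsvp_hop"]
        path_msg["rsvp_hop"] = router    # overwrite with own address
    return phop_cache
```

After running this over routers R1, R2, R3, the cache at R1 holds the sender's address and the cache at R3 holds R2, which is exactly the reverse-path state a Resv message needs.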
If a receiver now wishes to make QoS requests for the flow, it goes through the following steps:
The receiver transmits a reservation (Resv) message to the sender. This message contains the QoS requested from this receiver for a specific flow, as represented by the flow descriptor (comprising a filterspec and a flowspec). The Resv message is directed to the last router in the path, using the address it received and cached from the path message.
Since each RSVP-enabled router on the end-to-end path has already cached the previous hop IP address (taken from the path message), Resv messages are simply forwarded in the reverse direction to the sender, and each router in the path examines the resource reservation request to see if it can be accommodated.
If required, a receiver may request confirmation that a request was accepted by including a confirmation request in its Resv message. Each router will return a ResvConf message if the reservation was established successfully. Figure 8.10 illustrates this process.
Figure 8.10: RSVP Resv message flow.
At each intermediate node, the following actions are undertaken in order to service the request:
Process the reservation request—The RSVP process passes the QoS request to the admission control and policy control instance within the node. Admission control establishes whether the node has sufficient resource to support the new flow. Policy control checks that the requesting application is authorized to make such requests.
Reject the request—If either admission control or policy control tests fail, then the reservation request is rejected and a ResvErr error message is sent back to the receiver.
Accept the request—If both tests succeed, the node uses the filterspec information to configure the packet classifier and the flowspec information to configure the packet scheduler. The packet classifier will now recognize all packets belonging to this new flow, and the packet scheduler will use the QoS defined by the flowspec to determine how best to queue and schedule packet release on the outgoing interface(s).
Forward the reservation request—If the request is accepted, it is forwarded to the next upstream RSVP node in the direction of the sender.
Note that the admission and policy control utilize information from underlying integrated services mechanisms, which are not part of RSVP and are to some extent implementation dependent (different routers have different queuing strategies).
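The per-node actions above can be sketched as a single decision function. Everything here is a hedged toy: the Node class, its bandwidth-only admission test, and the request fields are our own assumptions standing in for implementation-dependent traffic-control mechanisms.

```python
class Node:
    """Minimal stand-in for a router's traffic-control state."""
    def __init__(self, free_bandwidth):
        self.free_bandwidth = free_bandwidth
        self.filters = []   # installed filterspecs (classifier config)
        self.flows = []     # installed flowspecs (scheduler config)

    def admission_ok(self, flowspec):
        # Toy admission control: bandwidth only. Real routers also
        # consider buffer space and forwarding capacity.
        return flowspec["bandwidth"] <= self.free_bandwidth

    def policy_ok(self, request):
        # Toy policy control: an "authorized" flag on the request.
        return request.get("authorized", True)

def handle_resv(request, node):
    """Process a Resv request at one node: admission control, policy
    control, then configure classifier and scheduler and forward the
    request upstream; on failure, return a ResvErr toward the receiver."""
    if not node.admission_ok(request["flowspec"]):
        return "ResvErr"                 # insufficient resources
    if not node.policy_ok(request):
        return "ResvErr"                 # not administratively permitted
    node.filters.append(request["filterspec"])           # classifier
    node.flows.append(request["flowspec"])               # scheduler
    node.free_bandwidth -= request["flowspec"]["bandwidth"]
    return "forward"
```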
If the reservation request eventually reaches the sender, then the sender knows that the QoS reservation was accepted and configured in each router in the delivery path and, by implication, that all nodes in that path are RSVP enabled (if even a single router in the path does not support RSVP, the service cannot be guaranteed, and only a best-effort service is effected). The application can then begin to send packets downstream to the receivers. The packet classifier and the packet scheduler in each router ensure that these packets are handled and forwarded according to the requested QoS. It is important to understand that resource reservations are maintained as soft state: once a reservation is established, each sender must periodically transmit Path and Resv messages to refresh the path and QoS state data for each flow it originates. This allows route changes to occur dynamically without excessive protocol overhead. A reservation is canceled if refresh messages are not received before the state times out.
Path and reservation states can be explicitly deleted using RSVP teardown messages. There are two message types, as follows:
PathTear messages travel downstream from the point of issue to all receivers, deleting the path state and all dependent reservation states in each RSVP-enabled device along the path.
ResvTear messages travel upstream from the point of issue to all senders, deleting reservation states in each RSVP-enabled device along the path.
Any RSVP-enabled device that detects a state timeout should issue a teardown request automatically, so it is not strictly necessary to explicitly tear down an old reservation. However, it is recommended that all hosts issue a teardown request when an existing reservation is no longer required, since in a busy network this will release resources immediately.
Although RSVP supports conventional unicast operations, it was designed primarily with multicast applications in mind, since multicasting offers a challenging scenario for resource reservation on public networks such as the Internet. For multicast operations a host sends IGMP messages to join a multicast group as standard, and then sends RSVP messages upstream to reserve resources along the delivery path of that group. Note that conventional multicast routing protocols are still responsible for the delivery tree topology. RSVP is designed to scale efficiently for large multicast delivery groups, so reservation requests need travel only to the point where the multicast delivery tree merges with another reservation for the same multicast stream. This receiver-oriented design can accommodate large multicast groups and dynamic group membership.
In a multicast environment, a receiver could be receiving data from multiple senders, and the set of senders to which a Resv message is directed is called the scope of that request. Note that with multicast operations, the Resv message that is forwarded upstream after a successful reservation may differ from the request that was received from the downstream node, for the following two reasons:
In a multicast environment, reservations made for a common multicast source on different downstream branches are merged together as they travel upstream. This is necessary to conserve router resources and promote scalability.
The traffic control mechanism could modify the flowspec on a hop-by-hop basis.
In fact, a reservation request travels upstream along the multicast delivery tree until it reaches a point where an existing reservation is equal to, or greater than, that being requested. At this point, the request is simply folded in with the reservation already in place; there is no need to forward it any farther. For example, in Figure 8.11, H3 sends a Resv message back toward H1 after receiving a path message. Router R7 accepts the request and forwards it upstream to R4. When R4 examines the request, it sees that H2 already has an identical QoS request in place back through R2; it accepts the request (after checking with its own admission control or policy control for the downstream interface) but does not forward it any farther upstream.
Figure 8.11: RSVP behavior with multicast flows.
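The fold-in decision a router like R4 makes can be reduced to a comparison. In this sketch a reservation is represented by a single bandwidth number, which is a simplification of a full flowspec comparison.

```python
def fold_in(existing_bw, requested_bw):
    """Decide whether an upstream-traveling request is folded into an
    existing reservation or must be forwarded farther upstream.

    Returns (new reservation size, forward_upstream flag): a request
    no larger than the reservation already in place is absorbed; a
    larger request enlarges the reservation and keeps traveling."""
    if requested_bw <= existing_bw:
        return existing_bw, False    # folded in; stop forwarding
    return requested_bw, True        # enlarge and forward upstream
```

In the Figure 8.11 example, H3's request matches H2's existing reservation at R4, so fold_in returns the existing size with the forward flag cleared and the request stops there.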
As described previously, receivers of multicast multimedia applications may receive flows from different senders. In the reservation process, a receiver must initiate a separate reservation request for each flow it wants to receive. However, RSVP provides a more flexible way to reserve QoS for flows from different senders. A reservation request includes a set of options that are called the reservation style. One of these options deals with the treatment of reservations for different senders within the same session. The receiver can establish a distinct reservation for each sender or make a single shared reservation for all packets from the senders in one session. Another option defines how the senders for a reservation request are selected. It is possible to specify an explicit list or a wildcard that selects the senders belonging to one session. In an explicit sender-selection reservation, a filterspec must identify exactly one sender. In a wildcard sender-selection, the filterspec is not needed.
Table 8.4 shows the reservation styles that are defined with this reservation option, as follows:
Wildcard-Filter (WF) uses the options shared reservation and wildcard sender selection. This reservation style establishes a single reservation for all senders in a session. Reservations from different senders are merged together along the path so that only the largest reservation request reaches the senders. A wildcard reservation is forwarded upstream to all sender hosts. If new senders appear in the session—for example, new members enter a videoconference—the reservation is extended to those new senders.
Fixed-Filter (FF) uses the options distinct reservations and explicit sender selection. This means that a distinct reservation is created for data packets from a particular sender. Packets from different senders that are in the same session do not share reservations.
Shared-Explicit (SE) uses the options shared reservation and explicit sender selection. This means that a single reservation covers flows from a specified subset of senders. Therefore, a sender list must be included in the reservation request from the receiver.
Table 8.4: Reservation styles defined by the sender-selection and reservation options.

Sender selection    Distinct reservations     Shared reservations
Explicit            Fixed-Filter Style (FF)   Shared-Explicit Style (SE)
Wildcard            (none defined)            Wildcard-Filter Style (WF)
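The effect of the three styles on which senders a receiver's reservation covers can be modeled in a few lines. This is a toy sketch of the semantics described above; the function and host names are illustrative, not part of any RSVP API.

```python
def covered_senders(style, session_senders, filterspec=None):
    """Return the set of senders covered by one reservation request."""
    if style == "WF":
        # Wildcard sender selection: every sender in the session, shared.
        return set(session_senders)
    if style == "FF":
        # Explicit selection, distinct reservation: exactly one sender.
        if filterspec is None or len(filterspec) != 1:
            raise ValueError("FF requires a filterspec naming one sender")
        return set(filterspec)
    if style == "SE":
        # Explicit selection, shared reservation: a listed subset.
        return set(filterspec) & set(session_senders)
    raise ValueError("unknown style: " + style)

session = {"H1", "H2", "H3"}
covered_senders("WF", session)                 # covers all three senders
covered_senders("FF", session, ["H1"])         # covers H1 only
covered_senders("SE", session, ["H1", "H2"])   # covers the listed subset
```

Note that a WF request needs no filterspec at all, matching the wildcard sender selection described above.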
Shared reservations (WF and SE) are generally used for multicast applications. For these applications it is unlikely that several data sources transmit data simultaneously, so it is not necessary to reserve QoS for each sender. For example, an audioconference could be directed to ten identically equipped receivers, each with a 64-Kbps link back to the sender. With a fixed-filter reservation, each receiver must establish nine separate 64-Kbps reservations for the flows from every other sender. This is overkill, since audioconferences generally operate on the principle that only one or two people speak at the same time, and most audioconferencing software today uses silence suppression (if a person does not speak, no packets are sent). Therefore, it would be appropriate in this case to reserve a total bandwidth of perhaps 128 Kbps for all senders, with every receiver making one shared reservation of 128 Kbps. If the shared-explicit style is used, all receivers must explicitly identify all other senders in the conference. If the wildcard-filter style is used, then the reservation applies to every sender that matches the reservation specifications (e.g., if the audioconferencing program transmits data to a particular port, the receivers can make a wildcard-filter reservation using this destination port number).
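The arithmetic behind this example is worth making explicit. The figures below restate the scenario above (ten receivers, 64-Kbps streams) with the assumption that at most two participants speak simultaneously:

```python
receivers = 10
stream_kbps = 64
speakers_at_once = 2   # assumption from typical audioconference behavior

# Fixed-filter: every receiver reserves a distinct flow from each of the
# nine other participants.
ff_per_receiver = (receivers - 1) * stream_kbps       # 576 Kbps

# Shared (WF or SE): one reservation sized for the expected number of
# simultaneous speakers.
shared_per_receiver = speakers_at_once * stream_kbps  # 128 Kbps
```

The shared style thus cuts each receiver's reserved bandwidth from 576 Kbps to 128 Kbps, a better fit for how the application actually behaves.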
A receiver can make a reservation request for itself or on behalf of another application. To do so requires a set of RSVP-relevant APIs. There are currently a number of standard RSVP API proposals that multicast application programmers should be aware of for possible future enhancement of their applications. These include the following:
RSVP Application Programming Interface (RAPI), v4.0—for SunOS/BSD; describes a set of APIs that provide low-level access to the protocol services.
WinSock 2 Application Programming Interface (API) specification—has a set of easy-to-use, protocol-independent QoS-sensitive APIs that will map to RSVP services.
The WinSock 2 protocol-specific annex—provides RSVP-specific APIs for low-level access to the protocol services.
Vendors have implemented host RSVP stacks both above and below the Winsock layer. Another approach is to use an RSVP proxy, which runs independently of the real application, making RSVP reservations on its behalf.
There are a number of issues with the IS model, including the following:
Scalability—One key issue with the IS model is scalability. Fundamentally this is limited by the per-flow state that RSVP requires in the network on links with very high levels of statistical multiplexing. The size of the state tables grows in proportion to the number of flows; on a large, busy network this can consume a significant portion of router processing power and memory. As a direct consequence, at present only high-end router platforms typically support RSVP.
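A rough illustration of why this linear growth matters on highly multiplexed links follows. The bytes-per-flow figure is an assumption chosen for illustration, not a measured value for any real implementation:

```python
STATE_BYTES_PER_FLOW = 500   # hypothetical path + reservation state per flow

def rsvp_state_mb(flows):
    """Estimated reservation-state memory, in megabytes, for a flow count."""
    return flows * STATE_BYTES_PER_FLOW / 1_000_000

# A backbone link multiplexing a million flows would, under this
# assumption, need hundreds of megabytes of soft state -- all of which
# must also be periodically refreshed.
rsvp_state_mb(1_000_000)
```

The refresh traffic compounds the problem: the state is soft, so every entry costs ongoing CPU as well as memory.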
End-to-end RSVP support—A key drawback of the IS model is that it requires RSVP support along the entire path between the end systems in order to offer QoS guarantees. Intermediate routers and routing domains that do not support RSVP can coexist with those that do, but they reduce the guarantee to no better than best effort unless static service-level agreements can be mapped at the intermediate non-IS domains. More router manufacturers are beginning to support RSVP; however, its support is demanding, and it is typically available only on high-end, more expensive platforms.
Pricing model—The issue of pricing structure is fundamental to the widespread deployment of RSVP. It is expected that service providers will charge premiums for RSVP QoS reservations. Consider a flow traversing the global Internet. It is very unlikely that this flow would receive special handling from all of the routers along a path unless those routers have a real incentive to do so, in preference to handling other flows with the same level of care. Furthermore, on a free network most users would eventually request special handling, negating its value, so there needs to be a way of dissuading such practice. The most likely mechanism to promote the required behavior is differential pricing. However, it is difficult to imagine a practical pricing model to handle bandwidth reservation and billing across multiple carrier networks. This issue still requires further research and definition.
Maintaining acceptable best-effort support—If the IS model is to be adopted for the Internet, it must ensure that the current best-effort traffic characteristics are still maintained. It would be unacceptable for some routers to be so busy handling RSVP reservations that they could not process the default best-effort traffic. This could even be a policy decision for ISPs. For example, an ISP could specify that one-half of the routing capacity is reserved for RSVP flow reservations and the other half for the best-effort traffic.
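As a concrete illustration of such a policy, some router implementations let the operator cap the bandwidth available to reservations. On Cisco IOS, for example, the `ip rsvp bandwidth` interface command takes a maximum reservable bandwidth and a per-flow limit; the figures below are illustrative only:

```
interface Serial0
 bandwidth 256
 ! Allow RSVP to reserve at most half the link (128 Kbps in total,
 ! 64 Kbps per flow); the remainder stays available for best-effort
 ! traffic regardless of reservation demand.
 ip rsvp bandwidth 128 64
```

A cap of this kind enforces the fifty-fifty split described above at each interface.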
Performance—An important consideration when running the IS model over an internetwork is the traffic control overhead in RSVP-enabled routers. This may degrade forwarding performance on underpowered devices or devices without hardware assist. As the number of data flows handled by a router increases, more RSVP sessions must be handled by the RSVP agent, and more CPU time and memory capacity will be consumed. The computational resources required for routers to inspect and handle these packets in priority order are likely to be significant. Approaches such as tag switching are being developed to alleviate this issue. Another area of research is enhancing RSVP to use routing services that provide alternate and fixed paths. In the meantime, router manufacturers must ensure that in high-traffic situations a router does not become so busy managing RSVP sessions that it neglects forwarding packets and maintaining network integrity.
Real-time applications—Protocols such as RTP can complement RSVP by allowing applications to respond to the underlying network performance. For multimedia, the audio and video are carried in a separate RTP session with RTCP packets controlling the quality of the session. Routers communicate via RSVP to set up and manage reserved-bandwidth sessions.
These limitations are considerable barriers to IS deployment, and the jury is still out. Even if eventually successful, it will be some time before end-to-end RSVP services are available on the Internet. Currently IS is perhaps best employed in corporate intranets, providing end-to-end QoS for multimedia and other real-time applications.