The Broadband Architecture: Multiservice Networks

Network architectures are in transition. In today's environment, time-division and statistical multiplexers gather customer traffic for additional circuit-based aggregation through a stable hierarchy of edge (that is, local), tandem, and core switching offices in the carrier networks. Overlay networks, such as X.25, Frame Relay, ATM, and the Internet, have been put in place and have created the need to internetwork services, thereby eroding traditional network borders. Additional access and transport options, including cable, DSL, and wireless, have since been introduced, joining traditional modems and bringing their own high-density access aggregation devices into the picture. Meanwhile, in the core, SDH/SONET transport has been layered over DWDM, adding capacity and producing a variety of vendor-specific switching, routing, and management options.

Figure 10.2 puts today's networks into a visual context. Residential customers on POTS connect through their first point of access, the Class 5 (that is, local exchange) switch. Some users are serviced by xDSL, and these lines terminate on a DSL access multiplexer (DSLAM). The DSLAM links back to the local exchange for regular voice traffic, which is diverted out over the PSTN, and it also has connections into the packet-based backbone (which could be an IP, an ATM, a Frame Relay, or an MPLS-based core or backbone network) for data traffic.

Figure 10.2. Today's networks

graphics/10fig02.gif

Some users have dial-in modems that terminate on remote access devices; from there, through digital access cross-connects and routers, they use private lines to access their corporate facilities and work with internal LANs and resources. Some customers have optical networks, so they have a series of multiplexers on the premises that multiplex sub-optical-carrier levels up to levels that can be introduced into an SDH/SONET add/drop multiplexer to carry that traffic through the SDH/SONET ring. Customers also have Frame Relay, ATM, and IP switches and routers that interface into complementary equipment within the carrier network. So between the access and the edge there is a plethora of different equipment, requiring different interfaces; different provisioning, billing, and network management systems; and different personnel to handle customer service, technical support, and maintenance.

The core network is increasingly becoming optical. Routers and switches therefore provide access into the high-speed optical multiplexers. Those optical carrier levels in the SDH/SONET hierarchy are further multiplexed via DWDM systems to take advantage of the inherent bandwidth available in those fibers.

The broadband architecture is an increasingly complicated arena. Many different alternatives in the network have been engineered to support specific voice, data, or video applications, meeting certain performance characteristics and cost characteristics. When we add up all the different platforms and networks that we have, it's quite a costly environment and one that's difficult to maintain and manage cohesively. By building the overlay networks and separating access and transport functions, carriers manage to add capacity and new services without interrupting their existing services. However, the downside of this system is that the new services rarely use the same provisioning, management, and troubleshooting systems as the old network. These operations and management costs can amount to as much as half of the carrier's total cost to provide a service.

The Three-Tiered Architecture

The broadband architecture has three tiers. The first tier involves the access switches; it is the outer tier, associated with delivering broadband service to a customer. The second tier involves the edge switches. This tier is associated with protocol and data service integration. The third tier, the inner tier, involves the core switches. This tier handles transmission of high-speed packet data throughout the backbone. Figure 10.3 shows the components that comprise these three tiers, and the following sections describe them.

Figure 10.3. A multiservice network

graphics/10fig03.gif

The Outer Tier: The Broadband Access Tier

Access tier devices include legacy network infrastructure devices such as Class 5 local exchanges and digital loop carriers. The access tier also includes DSLAMs, which are designed to concentrate hundreds of DSL access lines onto ATM or IP trunks and then route them to routers or multiservice edge switches.

Also in the access environment are integrated access devices (IADs), which provide a point of integration at the customer edge, integrating voice, data, and video networks and supporting broadband access options. Also in the access tier are remote access servers, which typically provide access to remote users via analog modem or ISDN connections, and which include dialup protocols and access control or authentication schemes. Remote access routers are used to connect remote sites via private lines or public carriers, and they provide protocol conversions between the LAN and the WAN.

The Middle Tier: The Intelligent Edge

The second tier involves the intelligent edge devices. These can include next-generation switches, VoIP gateways, media gateways, trunking gateways, ATM switches, IP routers, IP switches, multiservice agnostic platforms, optical networking equipment, and collaborating servers. This tier is also home to the network management stations that manage all these devices.

The edge devices and the intelligent edge in general handle authentication, authorization, and accounting. They identify the specific levels of performance required and map the proper QoS levels into the packet according to the backbone protocol. The intelligence keeps moving closer and closer to the customer, and it is actually being extended to customer premises equipment. We're trying to get away from an environment where we have a lot of single-purpose networks associated with single-purpose boxes and their own individual access lines (see Figure 10.4). As mentioned previously, there are complexities involved with acquisition, with ongoing maintenance, and with the talent pool to administer and maintain these systems.

Figure 10.4. Complexities with single-purpose boxes

graphics/10fig04.gif

The ideal configuration is a multipurpose WAN switch that could facilitate the termination of any type of data protocol, as well as facilitate aggregation at high speeds to the various optical levels (see Figure 10.5). This is what we're striving for with the intelligent edge.

Figure 10.5. Simplicity with multipurpose switches

graphics/10fig05.gif

Most equipment manufacturers today produce one of two types of devices for the edge:

- Access-oriented devices: These devices include multiservice provisioning platforms (MSPPs), which can handle all the popular data protocols and interfaces but are not designed to be optical aggregators.

- Transport-oriented devices: These are optical aggregation systems, and they support a full range of hierarchical aggregation, from DS-3 to OC-48. They offer electrical-to-optical conversion as well, but they don't offer all the data interfaces.

Successful edge devices will have to handle multiprotocol data services as well as multispeed aggregation. Thus, emerging solutions for the intelligent network edge will have to meet three critical objectives. First, there's a need to bridge the bandwidth bottleneck that currently exists between user LANs and the optical core. We have LANs that operate at Gigabit Ethernet, and soon we'll even have 10Gbps Ethernet. We have optical cores that operate at OC-192 (that is, 10Gbps) and are moving beyond that to 40Gbps and 80Gbps. By applying multiple lambdas in a fiber, we can even achieve terabits per second. But our WAN link between the LAN and the optical core is often limited to a link that can handle only 56Kbps to 2Mbps. A severe bottleneck is occurring at the LAN/WAN integration point, and that needs to be resolved. Second, we need to improve the serviceability of the carrier networks; we need to make it easier to define, provision, bill, and manage services and equipment across a converged area. Third, we need to enable converged carrier infrastructures to simplify the carrier networks and to simplify the support of end-user services.
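To put the bottleneck in perspective, consider how long it takes to move the same file across each link. The following is a rough back-of-the-envelope sketch (idealized line rates, no protocol overhead; the figures are illustrative only):

```python
# Rough illustration of the LAN/WAN bottleneck described above:
# transfer time for a 1 GB file at each link rate (idealized).

FILE_BITS = 1 * 10**9 * 8  # 1 GB expressed in bits

link_rates_bps = {
    "56 Kbps dialup WAN link": 56 * 10**3,
    "2 Mbps WAN link": 2 * 10**6,
    "1 Gbps LAN (Gigabit Ethernet)": 1 * 10**9,
    "10 Gbps optical core (OC-192)": 10 * 10**9,
}

for name, rate in link_rates_bps.items():
    seconds = FILE_BITS / rate
    print(f"{name}: {seconds:,.1f} s")
```

The same gigabyte that crosses the LAN in about 8 seconds spends well over a day crawling through a 56 Kbps WAN link, which is exactly the disparity the intelligent edge is meant to resolve.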

Four new developments promise to soon enable equipment providers to deliver broadband access switches that address the service providers' bandwidth and serviceability problems:

- Advanced network processors: Advanced network processors are programmable and enable the required service function in software. This gives carriers the flexibility of "generic cards" instead of having to purchase expensive specialized cards. Some processors will even provide an autodetection feature, automatically detecting the service required. These generic cards can accept any type of native input (TDM connections, IP frames, ATM cells, and so on) and convert that traffic to optically multiplexed flows while maintaining the service requested by each user flow.

- Advanced computing memory: Thanks to inexpensive and reliable memory, equipment vendors can now load large volumes of data and software onto individual cards. This is vital to the intensive data collection and computing required to support increasingly demanding SLAs. Maintaining user-specific information at the edge makes it easier for the carrier to manage each user's individual service level.

- High-capacity switching: High-capacity switching is greatly improving performance by allowing traffic to move at optical speeds without any blocking or buffering. It also allows full protocol processing without affecting network performance. Thus, using these types of switch fabrics in the edge makes it much easier to allocate and manage capacity.

- Standardized software support: Several software vendors, including CrossKeys and Vertel, have been developing standards-based solutions featuring both Telecommunications Management Network (TMN) and Common Object Request Broker Architecture (CORBA) software. Telcordia has also developed Operations Systems Modifications for the Integration of Network Elements (OSMINE) solutions, which are packages that provide a standard means to integrate new software and hardware systems into the existing regional Bell operating company (RBOC) operations support systems.

The next-generation edge must provide media-agnostic service interworking between multiple access technologies. We have the benefit of multiple options for broadband access, but with that comes the challenge and complication of supporting multiple access techniques. The intelligent edge must also support each converged service, recognizing and properly handling all the voice, data, video, and multimedia traffic. That edge should also provide an H.323 or perhaps Session Initiation Protocol (SIP) signaling gateway function between the enterprise network and the POTS network. (Chapter 10, "Internet/IP Applications and Services," describes these signaling systems.) By doing so, the edge will provide convergence between the POTS SS7 signaling network and the IP signaling network.

The new edge systems have many things to accomplish. They also need to reduce the human intervention time required to perform simple network tasks. This is key to addressing the people shortage that's affecting all providers. These days it is very difficult to find and retain the appropriate talent. Half of the job vacancies in information technology and telecommunications each year go unfilled, and in the United States alone, the number of open jobs is 1.6 million! This is not a minor problem, and the newer the technology, the harder it is to find and retain knowledgeable and experienced support.

New network designs promise to address a number of issues, above all eliminating the service-specific and hierarchical aggregation layers that reside in today's edge network. All those layers contribute to cost and complexity over time. Figure 10.6 depicts what the next-generation access edge might look like. You can see that we've replaced separate platforms throughout the edge with more integrated environments; for example, we might have softswitches that enable traditional PSTN telephony-type call features, but over packet backbones. Circuit switches are predicted to remain in the network for another 10 to 20 years, depending on location. Trunking gateways are used to attach multiple media gateways, which are putting voice into IP packets, to the underlying SS7 network. Remote access concentrators enable remote access for telecommuters and people who need to access remote corporate hosts. New generations of broadband access switches enable the multialternative broadband access environment (cable modems, Frame Relay, DSL, wireless alternatives, and so on). We want to reduce the edge environment to a simpler set of agnostic, multiplatform, multiprotocol intelligent edge devices.

Figure 10.6. The next-generation network edge

graphics/10fig06.gif

The main responsibilities of the intelligent edge include broadband access, adaptation of the native traffic to the underlying backbone technique, and concentration of many customer streams onto the bigger pipes within the core. This is the point at which the service attributes will be mapped to QoS mechanisms in order to deliver the requested performance and thereby live up to the SLAs. A major benefit is that it allows rapid and dynamic service provisioning, and it even allows customization for individual users. These service changes can be made without affecting the core, so as new service logic is required, as market segments find demand for new services, we will not necessarily have to reengineer the entire core network to accommodate those changes. Service provisioning is decoupled from service specification and service delivery. The intelligent edge could maintain a policy engine to handle the service provisioning. It could also include features such as encryption, key and certificate distribution, tunneling, accounting, address allocation, and QoS administration.

The Inner Tier: The High-Speed Core

The access and edge switches are designed to be scalable, both in port counts and in their capability to deliver multiservice support, and they are evolving to include more and more intelligence and features that would enable policy-based services management. In contrast, core switches are designed to be incredibly big and incredibly fast, but sometimes quite dumb. Their main objective is to transport the traffic as reliably and quickly as possible at the highest available rate.

Thus, in the emerging environment we see a reversal. In the traditional PSTN, the edges served the network core, and the network core had all the intelligence. Now, the network core is serving the edges, and intelligence is being distributed closer and closer to the customer premises (see Figure 10.7).

Figure 10.7. The network core serving the edges

graphics/10fig07.gif

The Next-Generation Switching Architecture

Next-generation telephony is being introduced into networks, and this is causing a bit of a change in the architecture. In the PSTN, we use traditional Class 5, or local, exchanges and Class 4, or toll, exchanges that are based on a common control architecture. This means that intelligence is centralized in proprietary hardware and software.

Another characteristic of the traditional architecture is the use of remote switching modules or digital loop carriers as a means of extending the range of the local exchange as well as shortening the loop length between the subscriber and the access node. Adjunct boxes were used to provision enhanced services, and data network access was generally handled separately altogether. Network operators had to wait for generic software releases from the manufacturers in order to launch new applications and services, which led to long time frames for applications development. Another big issue was the very high up-front cost, somewhere between US$3 million and US$5 million per deployment. In this traditional common control circuit-switched architecture, all the intelligence resided in monolithic switches at the network edge, and that intelligence was then distributed outward to the dumb customer premises equipment.

With next-generation switch architectures, intelligence resides on the network edge, which facilitates distributed control. The core backbone rests on ATM, or IP, or, going forward, MPLS. At the customer premises, the IAD merges the voice and data streams coming from the various users on-site. A gateway switch provides the Class 5 functionality, with integrated enhanced services and converged network access. Furthermore, switch interfaces with SS7 are accomplished via SS7 gateway switches. This way, more applications can be developed more quickly, generating more revenues. Service creation is occurring at the network edge, closer to the customer, which simplifies and speeds the process. This provides the combined benefits of lower cost and faster applications development, and, as an added bonus, the startup cost is around US$100,000.

In the next-generation switch architecture, as all the intelligence is being pushed out to the edges, to the customer premises environment, internetworking is needed between the legacy circuit-switched network and the emerging packet-switched environments. This internetworking is accomplished through a series of gateway switches and signaling system gateways, as well as gateway control mechanisms, called softswitches. This programmable networking approach requires a routing system of softswitches and media gateways that can convert different media and protocol types.

Programmable Networks

Programmable networking is based on separating applications and call control from the switching platform. It requires separating the service logic that activates, controls, bills for, and manages a particular service from the transmission, hardware, or signaling nodes, overall making call functions easier to manage.

Programmable networking gives providers the chance to offer circuit-switched voice and advanced intelligent network services such as caller ID and call waiting. In addition, it supports packet-switched services and applications such as Internet access, e-mail, Web browsing, and e-commerce; and, of course, programmable networking embraces wireless networks. Programmable networking is catching on among carriers, who are seeking to packetize the core of their networks for greater efficiencies, and this is leading to a rise in the sales of softswitches.

Softswitches

The basic appeal of the softswitch architecture is the capability to use converged voice/data transport and to open the PSTN signaling network. Carriers have been buying softswitches primarily for Class 4 tandem replacement and Internet offload. It now seems that for the softswitch market to grow and prosper, the RBOCs, the cable companies, and the foreign service providers will have to decommission their Class 5 local exchanges.

In the next-generation network, call control intelligence is outside the media gateways and is handled by a media gateway controller or a softswitch (also referred to as a call agent). The softswitch implements service logic and controls external trunking gateways, access gateways, and remote access servers. Softswitches can run on commercial computers and operating systems, and they provide open applications programming interfaces. A softswitch is a software-based, distributed switching and control platform that controls the switching and routing of media packets between media gateways across the packet backbone (see Figure 10.8).

Figure 10.8. The softswitch model

graphics/10fig08.gif

The softswitch controls the voice or data traffic path by signaling between the media gateways that actually transport the traffic. The gateway provides the connection between an IP or ATM network and the traditional circuit-switched network, acting very much like a multiprotocol cross-connect.

The softswitch ensures that a call's or a connection's underlying signaling information is communicated between gateways. This includes information such as automatic number identifiers, billing data, and call triggers. The softswitch architecture has three parts, as shown in Figure 10.9. The switching layer involves the media gateways. The call control layer involves telephony signaling protocols (for example, SIP, MGCP, H.323, SS7), which are discussed shortly. The third layer is the application layer, and this is where services are supported (for example, lifeline services and regulatory issues, such as the ability to perform legal intercept).

Figure 10.9. Softswitch architecture

graphics/10fig09.gif

Evolving Signaling Standards

Softswitches must communicate with packet switches, VoIP gateways, media gateways, and the SS7 networks. To do so, they have to rely on standardized protocols. A number of different technical specifications, protocols, and standards are used to deliver these services and the desired end functions, some of which are briefly reviewed here; you can find further information on them in Chapter 11, "Next-Generation Network Services."

H.323 version 2 is the ITU standard for IP telephony in the LAN and was used as the basis for several early VoIP gateways. Most VoIP gateways support H.323 and thereby ensure interoperability between different vendors' products.

Gateway Control Protocol (GCP) is the ITU extension to H.323 to enable IP gateways to work with SS7.

Another ITU standard is Media Gateway Control (MEGACO). This emerging standard describes how media gateways should behave and function.

SIP, from the IETF, links end devices and IP media gateways. It's a thinner and slightly less robust version of H.323, but it is gaining popularity over H.323. Given the strength of the IETF in Internet-related environments, SIP promises to become quite significant.

Level 3 defined Internet Protocol Device Control (IPDC), and Cisco and Telcordia defined Simple Gateway Control Protocol (SGCP). These two standards have been combined to create the Media Gateway Control Protocol (MGCP), which controls the transport layer and its various media gateways, sending messages about routing priority and quality.
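For a flavor of what these control protocols look like on the wire, MGCP commands are plain text. A hypothetical CreateConnection (CRCX) command from a call agent to a gateway might look roughly like the following (the endpoint name, transaction ID, and call ID are invented for illustration):

```
CRCX 1204 aaln/1@gw.example.net MGCP 1.0
C: A3C47F21456789F0
L: p:10, a:PCMU
M: sendrecv
```

The gateway answers with a numeric response code, and the call agent uses similar one-line verbs (MDCX, DLCX, NTFY, and so on) to modify and tear down connections.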

Key Concerns in the Next-Generation Switch Architecture

The key concerns for providers regarding future deployments of the softswitch environment are scalability, reliability, and security. Current products can scale up to a threshold of 200,000 busy-hour call attempts, whereas Class 5 local exchanges are designed to handle more than 1.5 million busy-hour call attempts. Class 5 exchanges also support more than 3,000 features, whereas softswitches can support perhaps 300 features. Clustering software can help to resolve these scalability issues, but this is yet to be demonstrated. There are also issues of reliability and security, in terms of securing the way to interact with intelligence at the customer premises.

Another critical issue that needs to be addressed is the need for softswitches to support lifeline PSTN services, including 911 and other emergency services and the ability of authorities to wiretap phone conversations. The problem is not reliability or fault tolerance in the systems; the problem is the complexity of the software required. For example, it takes some 250,000 lines of code just to implement emergency services such as 911. To deliver these applications in the same way the PSTN does, tighter integration between the three layers of the softswitch architecture is required. At this point, we are still trying to determine where the best fit is. It would appear that operators new to the environment, who seek to gain access into the local exchange, may be well served by such an architecture. On the other hand, for the incumbent local exchange carriers, the motivation may not be as strong.

QoS

As mentioned throughout this chapter and other chapters, QoS issues play a very important role in the next-generation network. We need to think carefully about how to provide very granular levels of service, thereby enabling very high performance while simultaneously creating platforms for multitudes of new revenue-generating services.

QoS is the capability to provide different levels of service to differently characterized traffic or traffic flows. It constitutes the basis for offering various classes of service to different segments of end users, which then allows the creation of different pricing tiers that correspond to the different CoS and QoS levels. QoS is essential to the deployment of real-time traffic, such as voice or video services, as well as to the deployment of data services.

QoS includes definitions of the network bandwidth requirements, user priority control, control of packet or cell loss, and control of delays, both transit delay (which is end-to-end) and delay variation (that is, jitter). Traffic characterizations include definitions of the delay tolerance and elasticity for that application. They can also associate delay tolerance and elasticity with applications and users, and potentially even with time-of-day or day-of-week scenarios. We have to be able to ensure various levels of service; the availability of bandwidth, end-to-end delay, delay variance, and packet loss that support the application in question; and the relative priority of traffic. QoS is also associated with policy admission control and policing of the traffic streams.
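As a toy illustration of the two delay measures just described, transit delay and jitter can be computed from per-packet send and receive timestamps. The timestamp values below are invented for illustration:

```python
# Toy computation of transit delay and jitter from per-packet
# send/receive timestamps (in seconds). Values are made up.
send_times = [0.000, 0.020, 0.040, 0.060]
recv_times = [0.050, 0.072, 0.089, 0.115]

# End-to-end transit delay per packet: ~[0.050, 0.052, 0.049, 0.055]
transit = [r - s for s, r in zip(send_times, recv_times)]

# Jitter here: variation between consecutive transit delays.
jitter = [abs(b - a) for a, b in zip(transit, transit[1:])]

print("mean transit delay:", sum(transit) / len(transit))
print("max jitter:", max(jitter))
```

A CBR-style voice service cares about bounding the second number (jitter) at least as much as the first (delay).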

There are two ways to implement QoS. Implicit QoS means that the application chooses the required QoS. Explicit QoS means that the network manager controls that decision.

There are three main approaches to QoS. The first is an architected approach, and ATM falls under this category. The second is per-flow services, where the QoS is administered per flow, or per session. This includes the reservation protocol that is part of the IETF IntServ specification, as well as MPLS. The third approach is packet labeling, in which each individual packet is labeled with an appropriate QoS or priority mark, and the techniques that use this approach include 802.1p and the IETF DiffServ specification.
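The packet-labeling approach can be glimpsed even at the application level: on most systems, a program can set the DS field (the former IP TOS byte) on its outgoing packets, and DiffServ-aware routers queue the marked traffic accordingly. A minimal Python sketch, assuming a DiffServ code point of 46 (Expedited Forwarding), chosen purely for illustration:

```python
import socket

EF_DSCP = 46            # Expedited Forwarding code point
tos = EF_DSCP << 2      # DSCP occupies the top 6 bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
# Every datagram sent on this socket now carries DSCP 46 in its
# IP header; the network, not the application, decides what that
# marking is worth (explicit vs. implicit QoS).
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS))
sock.close()
```

Note that marking is only a request: the per-hop behavior each router applies to DSCP 46 is a policy decision made by the network operator.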

The following sections describe various QoS tactics, including ATM QoS, IP QoS, Class-Based Queuing (CBQ), MPLS, Policy-Based Management, COPS, and DEN.

ATM QoS

ATM QoS defines four service classes (one of which has two variations), each specifying QoS parameters that tailor cells to fit video, data, voice, or mixed-media traffic. The following are the four service classes:

- Constant bit rate (CBR): CBR provides a constant, guaranteed rate to real-time applications, such as streaming video, so it is continuous bandwidth. It emulates a circuit-switched approach and is associated with minimum latencies and losses. CBR is the highest class of service you can get, and it's for very demanding applications, such as streaming media (streaming audio, streaming video, and video-on-demand). Initially, CBR was to be used for things like voice and videoconferencing, but we have found that those applications don't necessarily need the continuous bandwidth. As mentioned previously, much of a voice conversation is silence. If we were to carry that voice over CBR service, whenever there was silence the ATM switches would be stuffing in empty cells to maintain that continuous bandwidth, which is overkill and a waste of network resources.

- Variable bit rate (VBR): VBR has two subsets: real-time (VBR-RT) and non-real-time (VBR-NRT). VBR provides a fair share of available bandwidth according to a specific allocation policy, so it has a maximum tolerance for latencies and losses. VBR is the highest class of service in the data realm, and it is also an adequate class of service for real-time voice. VBR-RT can be used by native ATM voice with bandwidth compression and silence suppression: when somebody is silent, VBR-RT makes use of the available bandwidth to carry somebody else's cells, making VBR appropriate for multimedia functions such as videoconferencing. VBR-NRT can be used for data transfer where response time is critical (for example, transaction-processing applications such as airline reservations and banking transactions).

- Available bit rate (ABR): ABR supports VBR data traffic with average and peak traffic parameters (for example, LAN interconnection and internetworking services, LAN emulation, and critical data transfer that requires service guarantees). Remote procedure calls, distributed file services, and computer process swapping and paging are examples of applications appropriate for ABR.

- Unspecified bit rate (UBR): You could call UBR a poor man's ATM. It provides best-effort service with no service guarantee, so you would use it for text data, image transfer, messaging, and distribution of noncritical information, where you don't need a set response time or service guarantee.

ATM provides a very well-planned approach to providing QoS. Table 10.3 shows which parameters each service class allows you to define. The parameters fall into two major categories. The first is QoS parameters: cell error rate (CER; the percentage of errored cells), cell loss ratio (CLR; the percentage of lost cells), cell transfer delay (CTD; the delay between the network entry and exit points), cell delay variation (CDV; the jitter), and cell misinsertion rate (CMR; the number of cells inserted on the wrong connection). The second category is traffic parameters: peak cell rate (PCR; the maximum amount of bandwidth allowed on a connection), sustainable cell rate (SCR; the guaranteed bandwidth during variable transmissions; used only by VBR), maximum burst size (MBS; the maximum number of cells that will be transmitted at PCR; used only by VBR), cell delay variation tolerance (CDVT; the maximum allowable jitter), minimum cell rate (MCR; the rate, in cells per second, at which the source can always transmit; used only by ABR), and allowed cell rate (ACR; which works with ABR's feedback mechanism to determine the current cell rate). As Table 10.3 shows, UBR allows you to define very little, whereas CBR allows you to tightly control most of these parameters.

Table 10.3. ATM classes of service

Parameter              CBR   VBR-NRT   VBR-RT   ABR   UBR
Cell loss ratio        Yes   Yes       Yes      No    No
Cell transfer delay    Yes   Yes       Yes      No    No
Cell delay variation   Yes   Yes       Yes      No    No
Peak cell rate         Yes   Yes       Yes      Yes   Yes
Sustained cell rate    No    Yes       Yes      No    No
Minimum cell rate      No    No        No       Yes   No
Maximum burst size     No    Yes       Yes      No    No
Allowed cell rate      No    No        No       Yes   No

Depending on the service class, you have the option of defining or not defining certain parameters, and that gives you control over the performance of an application within a service level. The transmission path in an ATM virtual circuit comprises virtual paths and their virtual channels. Think of the virtual channel as an individual conversation path and the virtual path as a grouping of virtual channels that all share the same QoS requirement. So all CBR streaming video traffic may go over Virtual Path 1, all bursty TCP/IP data traffic may go over Virtual Path 2, and all MPEG-2 compressed video traffic may go over Virtual Path 3. Again, what we're doing is organizing all the virtual channels that have the same demands on the network into a common virtual path, thereby simplifying the administration of QoS and easing the network management process for the carrier. Within the cell structure, the key identifier in the header is which path and which channel are to be taken between any two ATM switches. Those addresses change depending on which channels were reserved at the time the session was negotiated.
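The class-to-parameter mapping in Table 10.3 (abbreviating each parameter as in the text above) can be captured in a small lookup structure. The following is a hypothetical sketch of how a provisioning tool might check whether a requested parameter applies to a given service class:

```python
# Which parameters each ATM service class lets you define,
# mirroring Table 10.3 (abbreviations as in the text).
ATM_PARAMS = {
    "CBR":     {"CLR", "CTD", "CDV", "PCR"},
    "VBR-NRT": {"CLR", "CTD", "CDV", "PCR", "SCR", "MBS"},
    "VBR-RT":  {"CLR", "CTD", "CDV", "PCR", "SCR", "MBS"},
    "ABR":     {"PCR", "MCR", "ACR"},
    "UBR":     {"PCR"},
}

def can_define(service_class: str, parameter: str) -> bool:
    """Return True if the class allows the given parameter to be set."""
    return parameter in ATM_PARAMS[service_class]

print(can_define("CBR", "CTD"))   # True: CBR tightly controls delay
print(can_define("UBR", "CLR"))   # False: UBR is best effort only
```

This is only a validation sketch; a real provisioning system would also enforce the parameter values themselves against the contracted SLA.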

Remember that because it is connection oriented, ATM gives service providers the traffic engineering tools they need to manage both QoS and utilization. In provisioning a network, the service provider can assign each virtual circuit a specific amount of bandwidth and set the QoS parameters. The provider can then dictate what path each virtual circuit takes. However, it does require that the service provider be managing the ATM switches and whatever else is running over that ATM network (for example, IP routers).

IP QoS

There are two IP schemes for QoS: IntServ and DiffServ. The following sections describe each of these schemes in detail.

IntServ

IntServ was the IETF's scheme to introduce QoS support over IP networks. It provides extensions to the best-effort service model to allow control over end-to-end packet delays. In essence, IntServ is a bandwidth reservation technique that builds virtual circuits across the Internet. Applications running in the hosts request bandwidth.

IntServ first introduces a setup protocol, used by hosts and routers to signal QoS requirements into the network. It also introduces flowspecs, which are definitions of traffic flow according to traffic and QoS characteristics. Finally, IntServ introduces traffic control, which delivers on QoS by controlling traffic flows within the hosts and routers. IntServ is a per-flow resource reservation model, requiring the Resource Reservation Protocol (RSVP). Its key building blocks are resource reservation and admission control. In IntServ, data transmissions are built around a flow, a unidirectional path with a single recipient. Traditional routers examine packets, determine where to send them, and then switch them to output ports. With IntServ, routers must also apply the appropriate queuing policy if packets are part of a flow.

Traditional routers use first-in, first-out (FIFO) queuing. FIFO queuing is fast and easy but can make delay-sensitive applications wait behind long bursts of delay-insensitive data. IntServ instead uses fair queuing, which ensures that a single flow does not use all the bandwidth and provides minimal guarantees to different flows.
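The difference between FIFO and fair queuing can be seen in a minimal sketch. Here each flow gets its own queue, and a round-robin scheduler services one packet per active flow, so a burst on one flow cannot starve another (class and flow names are invented for illustration):

```python
from collections import deque

# Minimal per-flow fair queuing sketch: instead of one FIFO line, each flow
# gets its own queue and the scheduler services them round-robin, so a long
# burst on one flow cannot starve a delay-sensitive flow behind it.
class FairQueue:
    def __init__(self):
        self.queues = {}  # flow_id -> deque of packets

    def enqueue(self, flow_id, packet):
        self.queues.setdefault(flow_id, deque()).append(packet)

    def dequeue_round(self):
        """Take at most one packet from each active flow."""
        sent = []
        for flow_id, q in list(self.queues.items()):
            if q:
                sent.append(q.popleft())
        return sent

fq = FairQueue()
for i in range(5):
    fq.enqueue("bulk-data", f"data-{i}")  # long delay-insensitive burst
fq.enqueue("voice", "voice-0")            # delay-sensitive packet
print(fq.dequeue_round())  # ['data-0', 'voice-0'] -- voice is not stuck behind the burst
```

With a single FIFO, `voice-0` would have waited behind all five `data-*` packets; here it is served in the first round.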

The IntServ model involves a classifier: packets are mapped to a service class, and, based on their service class, a packet scheduler forwards the packets (see Table 10.4). Admission control determines whether the requested QoS can be delivered, and the setup protocol is RSVP. RSVP relies on router-to-router signaling, which allows IP applications to request delay and bandwidth guarantees. Connections are established link by link, and a connection can be denied if a router cannot accept the request (see Figure 10.10). RSVP is particularly well suited for real-time applications and delay-sensitive traffic; its guaranteed service provides bandwidth guarantees and a reliable upper bound on packet delays. But the resource requirements for running RSVP on a router increase proportionately with the number of separate RSVP reservations. This scalability problem makes using RSVP on the public Internet impractical, so it has largely been left to campus and enterprise networks.
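The link-by-link admission-control idea can be sketched as follows. This is a hedged toy model, not the RSVP protocol itself: router names and capacities are invented, and real RSVP is receiver-driven with soft state, which this omits. The point it shows is that a reservation succeeds only if every hop on the path can commit the bandwidth, and a single congested hop denies the whole connection:

```python
# Toy RSVP-style admission control. Link capacities are invented; a real
# implementation would use soft state refreshed by periodic RSVP messages.
link_capacity_mbps = {"R1": 100, "R2": 100, "R3": 10}
reserved_mbps = {"R1": 0, "R2": 0, "R3": 0}

def reserve(path, mbps):
    """Admit the flow only if every hop can commit the requested bandwidth."""
    for router in path:  # first pass: check every hop before committing
        if reserved_mbps[router] + mbps > link_capacity_mbps[router]:
            return False  # one congested hop denies the connection
    for router in path:  # second pass: commit on all hops
        reserved_mbps[router] += mbps
    return True

print(reserve(["R1", "R2"], 50))        # True
print(reserve(["R1", "R2", "R3"], 50))  # False -- R3 cannot accept the request
```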

Figure 10.10. RSVP in hosts and routers

graphics/10fig10.gif

Several other protocols are associated with RSVP (see Figure 10.11). Real-Time Transport Protocol (RTP) is for audio, video, and other real-time traffic. It is based on UDP, to cut down on overhead and latency. RTP is specified as the transport for H.323, and receivers are able to sequence information via the packet headers. The RTP Control Protocol (RTCP) provides status feedback between senders and receivers. Both RTP and RTCP are referenced by the ITU under H.225. Real-Time Streaming Protocol (RTSP) runs on top of IP Multicast, UDP, RTP, and RTCP.

Figure 10.11. RSVP and related protocols

graphics/10fig11.gif

Table 10.4. IntServ Service Classes

                       Guaranteed Service        Controlled Load Service      Best-Effort Service
End-to-end behavior    Guaranteed maximum delay  Best effort on unloaded net  Best effort only
Intended applications  Real time                 Sensitive to congestion      Legacy
ATM mapping            CBR or rtVBR              NRT-VBR or ABR with MCR      UBR or ABR

RSVP is simplex (that is, it is a reservation for unidirectional data flow), it is receiver driven (that is, the receiver of data flows initiates and maintains the resource reservation for that flow), and it supports both IPv4 and IPv6. RSVP is not a routing protocol. Again, key issues regarding RSVP include scalability, security, and how to ensure that policy-based decisions can be followed.

DiffServ

Today, we concentrate more on DiffServ than on its parent, IntServ. The DiffServ approach to providing QoS in networks uses a small, well-defined set of building blocks from which a variety of services can be built (see Figure 10.12). A small bit pattern in each packet in the IPv4 Type of Service (ToS) octet, or the IPv6 Traffic Class octet, is used to mark a packet to receive a particular forwarding treatment or per-hop behavior at each network node. For this reason, DiffServ is really a CoS model; it differentiates traffic by prioritizing the streams, but it does not allow the specification and control of traffic parameters. DiffServ differentiates traffic by user, service requirements, and other criteria. It then marks the packets so that the network nodes can provide different levels of service via priority queuing or bandwidth allocation, or by choosing dedicated routes for specific traffic flows. DiffServ scheduling and queue management enables routers to act on the IP datagram. Service allocation is controlled by a policy management system. Routers can do four things after receiving an IP datagram: manage a queue, schedule interfaces, select which datagram is the logical choice for discard, and select an outbound interface. Most of the current methods for QoS are based on the first three. QoS routing technologies are still in conceptual stages.

Figure 10.12. DiffServ

graphics/10fig12.gif

DiffServ evolved from the IETF's IntServ. It is a prioritization model, with preferential allocation of resources based on traffic classification. DiffServ uses the IP ToS field to carry information about IP packet service requirements. It classifies traffic by marking the IP header at ingress to the network with flags corresponding to a small number of per-hop behaviors. The DiffServ (DS) byte replaces the ToS octet, and routers sort packets into queues based on the DS marking. Queues then get different treatment in terms of priority, share of bandwidth, or probability of discard. The IETF draft stipulates a Management Information Base for DiffServ, which will make DiffServ-compliant products manageable via Simple Network Management Protocol (SNMP).
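The mark-at-ingress, act-per-hop pattern can be sketched in a few lines. The DSCP values below are the standard Expedited Forwarding (46) and default best-effort (0) code points, but the classification rule and queue names are invented for illustration:

```python
# Sketch of DiffServ: classify and mark once at the network edge, then let
# every hop pick a queue from the mark alone. DSCP 46 is the standard
# Expedited Forwarding code point; DSCP 0 is default best effort. The
# classification rule and queue names are illustrative assumptions.
DSCP_EF = 46  # Expedited Forwarding: low-loss, low-latency treatment
DSCP_BE = 0   # default best effort

def mark_at_ingress(packet):
    """Edge router: classify the traffic and stamp the DS field."""
    packet["dscp"] = DSCP_EF if packet["app"] == "voip" else DSCP_BE
    return packet

def select_queue(packet):
    """Core router: per-hop behavior keyed only on the DS field."""
    return "priority" if packet["dscp"] == DSCP_EF else "default"

pkt = mark_at_ingress({"app": "voip"})
print(select_queue(pkt))  # priority
```

Note how the core router never re-examines the application; this is what makes DiffServ scale better than per-flow IntServ state.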

CBQ

Another QoS tactic is CBQ, which is based on traffic management algorithms deployed at the WAN edge. CBQ is a fully open, nonproprietary technology that brings bandwidth-controlled CoS to IP network infrastructures. It allows traffic to be prioritized according to IP application type, IP address, protocol type, and other variables. It allocates unused bandwidth more effectively than do other QoS mechanisms, and it uses priority tables to give critical applications the most immediate access to unused bandwidth.
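The CBQ idea of guaranteed shares plus borrowing of unused bandwidth can be sketched as a two-pass allocator. Class names, shares, and demands below are invented; real CBQ operates on packet scheduling rather than a one-shot calculation:

```python
# Rough sketch of the CBQ idea: each traffic class gets a guaranteed share,
# and unused bandwidth is lent out in priority order. Classes and numbers
# are invented for illustration.
def allocate(link_mbps, classes):
    """classes: list of (name, guaranteed_mbps, demand_mbps), highest priority first."""
    alloc = {}
    spare = link_mbps
    # First pass: every class gets min(guarantee, demand).
    for name, guarantee, demand in classes:
        alloc[name] = min(guarantee, demand)
        spare -= alloc[name]
    # Second pass: unused bandwidth goes to still-hungry classes by priority.
    for name, guarantee, demand in classes:
        extra = min(demand - alloc[name], spare)
        alloc[name] += extra
        spare -= extra
    return alloc

print(allocate(100, [("critical", 40, 70), ("bulk", 30, 90)]))
# {'critical': 70, 'bulk': 30} -- critical borrows the idle bandwidth first
```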

MPLS

A lot of attention is now focused on the emerging MPLS environment, which was born out of Cisco's tag switching. MPLS was designed with large-scale WANs in mind. It was originally proposed by the IETF in 1997, and core specifications were completed in fall 2000. By plotting static paths through an IP network, MPLS gives service providers the traffic engineering capability they require, and it also helps them build a natural foundation for VPNs. Remember that traffic engineering allows service providers to control QoS and optimize network resource use.

Another benefit of MPLS is its potential to unite IP and optical switching under one route-provisioning umbrella. Because IP is a connectionless protocol, it cannot guarantee that network resources will be available. Additionally, IP sends all traffic between the same two points over the same route. During busy periods, therefore, some routes become congested while others remain underused, and without explicit control over route assignments, the provider has no way to steer excess traffic over less-busy routes. This is one key difference between MPLS and IP: in MPLS, packets sent between two points can take different paths based on different MPLS labels. MPLS adds a label to IPv4 or IPv6 packets so that they can be steered over the Internet along predefined routes; the label identifies the type of traffic, the path, and the destination, enabling routers to assign explicit paths to various classes of traffic. Using these explicit routes, service providers can reserve network resources for high-priority or delay-sensitive flows, distribute traffic to prevent network hot spots, and preprovision backup routes for quick recovery from outages.

As shown in Figure 10.13, an MPLS network is composed of a mesh of label-switching routers (LSRs). These LSRs are MPLS-enabled routers and/or MPLS-enabled ATM switches. As each packet enters the network, an ingress LSR assigns it a label, based on its destination, VPN membership, ToS bits, and other considerations. At each hop, an LSR uses the label to index a forwarding table. The forwarding table assigns each packet a new label and directs the packet to an output port. To promote scaling, labels have only local significance. As a result, all packets with the same label follow the same label-switched path through the network. Service providers can specify explicit routes by configuring them into edge LSRs manually, or they can use one of two new signaling protocols. RSVP-TE is RSVP with traffic engineering extensions. The other is the MPLS Label Distribution Protocol (LDP), augmented for constraint-based routing. Most equipment vendors support both.
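The per-hop label swap that an LSR performs can be sketched as a table lookup. Labels and ports below are invented; the point is that labels have only local significance, so each hop consumes the incoming label and writes a fresh one:

```python
# Toy LSR forwarding: labels have only local significance, so each hop
# swaps the incoming label for a new one and forwards out a port.
# Table entries are invented for illustration.
forwarding_table = {
    # in_label: (out_label, out_port)
    17: (42, "port-2"),
    42: (99, "port-1"),
}

def forward(packet):
    """Swap the label and pick the output port for one hop."""
    out_label, out_port = forwarding_table[packet["label"]]
    packet["label"] = out_label  # same incoming label means same path out
    return out_port, packet

port, pkt = forward({"label": 17, "payload": "ip-datagram"})
print(port, pkt["label"])  # port-2 42
```

Because all packets carrying the same incoming label hit the same table entry, they all follow the same label-switched path, exactly as the text describes.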

Figure 10.13. MPLS

graphics/10fig13.gif

With MPLS, network operators don't have to use explicit routing, and they probably won't in networks that have plenty of bandwidth. Instead, they can let ingress LSRs use LDP, without any constraint-based extensions, to automatically associate labels with paths. With plain LDP, MPLS packets follow the same routes as ordinary routed packets. With MPLS, you can support all applications on an IP network without having to run large subsets of the network with completely different transport mechanisms, routing protocols, and addressing plans.

MPLS offers the advantages of circuit-switching technology, including bandwidth reservation and minimized delay variation (both very important for voice and video traffic), as well as the advantages of existing best-effort, hop-by-hop routing. It also enables service providers to create VPNs that have the flexibility of IP and the QoS of ATM.

The MP in MPLS stands for multiprotocol: MPLS is an encapsulating protocol that can transport a multitude of other protocols. LS indicates that the protocols being transported are encapsulated with a label that is swapped at each hop. A label is a number that uniquely identifies a set of data flows on a particular link or within a particular logical link. The labels, again, have only local significance, and they must change as packets follow a path; hence the switching aspect of MPLS.

MPLS can switch a frame from any kind of Layer 2 link to any other kind of Layer 2 link, without depending on any particular control protocol. Compare this to ATM, for example: ATM can switch only to and from ATM and can use only ATM signaling protocols, such as the Private Network-to-Network Interface (PNNI) or the Interim Inter-switch Signaling Protocol (IISP). MPLS supports three different types of label formats. On ATM hardware, it uses the well-defined Virtual Channel Identifier (VCI) and Virtual Path Identifier (VPI) labels. On Frame Relay hardware, it uses a Data-Link Connection Identifier (DLCI) label. Elsewhere, MPLS uses a new generic label, known as a shim, which sits between Layers 2 and 3. Because MPLS allows the creation of new label formats without requiring changes in routing protocols, extending the technology to new optical transport and switching could be relatively straightforward.

MPLS has another powerful feature: label stacking. Label stacking enables LSRs to insert an additional label at the front of each labeled packet, creating an encapsulated tunnel that can be shared by multiple label-switched paths. At the end of the tunnel, another LSR pops the label stack, revealing the inner label. An optimization in which the next-to-last LSR peels off the outer label is known in IETF documents as "penultimate hop popping." Whereas ATM has only one level of stacking (virtual channels inside virtual paths), MPLS supports unlimited stacking. An enterprise could use label stacking to aggregate multiple flows of its own traffic before passing the traffic on to the access provider. The access provider could then aggregate traffic from multiple enterprises before handing it off to the backbone provider, and the backbone provider could aggregate the traffic yet again before passing it off to a wholesale carrier. Service providers could use label stacking to merge hundreds of thousands of label-switched paths into a relatively small number of backbone tunnels between points of presence. Fewer tunnels mean smaller routing tables, and smaller routing tables make it easier for providers to scale the network core.
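Label stacking reduces to push and pop operations on the front of the label list. The sketch below uses invented label values to show a packet entering a shared backbone tunnel and the penultimate hop removing the outer label:

```python
# Label stacking sketch: an LSR pushes an outer tunnel label so many
# label-switched paths can share one backbone tunnel; just before the
# tunnel tail, the outer label is popped, revealing the inner label.
# Label values are invented for illustration.
def push_label(packet, tunnel_label):
    packet["labels"].insert(0, tunnel_label)  # outer label goes on top
    return packet

def pop_label(packet):
    return packet["labels"].pop(0)  # remove and return the outer label

pkt = {"labels": [101], "payload": "ip-datagram"}  # inner LSP label
push_label(pkt, 900)                               # enter shared backbone tunnel
print(pkt["labels"])   # [900, 101]
pop_label(pkt)                                     # penultimate hop popping
print(pkt["labels"])   # [101] -- inner label guides the final hop
```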

Before you get too excited about the MPLS evolution, be aware that there are still a number of issues to be resolved between the IETF and the MPLS Forum. For example, they must reconcile MPLS with DiffServ so that the ToS markings can be transferred from IP headers to MPLS labels and interpreted by LSRs in a standard manner. They must also clarify how MPLS supports VPNs. Right now two models exist, one based on BGP and the other on virtual routers, and which will prevail is unknown. Protocols such as RSVP, OSPF, and IS-IS must be extended in order to realize the full benefit of MPLS.

Major efforts are under way to adapt the control plane of MPLS to direct the routing of optical switches, not just LSRs. This will enable optical switches, LSRs, and regular IP routers to recognize each other and to exchange information. The same routing system can control optical paths in the DWDM core, label-switched paths across the MPLS backbone, and paths involving any IP routers at the edge of the network. So, with MPLS, service providers can simplify their operational procedures, deliver more versatile IP services, and, most importantly to customers, sign meaningful SLAs.

Policy-Based Management, COPS, DEN, and LDAP

A few additional concepts are relevant to QoS: policy-based management, Common Open Policy Services (COPS), Directory Enabled Networking (DEN), and Lightweight Directory Access Protocol (LDAP).

The idea behind policy-based networking is to associate information about individual users, groups, organizational units, entire organizations, and even events (such as the beginning of the accounting department's month-end closing) with various network services, or classes of service. So, on a very granular basis, and on a time-sensitive basis, you can ensure that each user is receiving the QoS needed for the particular application at a specific time and place.

COPS is an IETF query-response-based client/server protocol for supporting policy control. It addresses how servers and clients on a network exchange policy information, and it transmits information between a policy server and its clients, which are policy-aware devices such as switches. The main benefit of COPS is that it creates efficient communication between policy servers and policy-aware devices and increases interoperability among different vendors' systems.

DEN is an industry initiative, led by Microsoft and Cisco, to create a common data format for storing information about users, devices, servers, and applications in a common repository. DEN describes mechanisms that will enable equipment such as switches and routers to access and use directory information to implement policy-based networking. Enterprise directories will eventually be able to represent, as directory objects, all of the following: network elements, such as switches and routers; network services, such as security; class of service; network configurations that implement the network services; and policy services that govern the network services in a coordinated, scalable manner.

LDAP is a standard directory server technology for the Internet that enables retrieval of information from multivendor directories. LDAP 3.0 gives client systems, hubs, switches, and routers a standard interface for reading and writing directory information. Equipment and directory vendors plan to use LDAP for accessing and updating directory information.

QoS and Prevailing Conditions

There's quite a list of potential approaches to implementing QoS. Again, which one makes sense often depends on what's available and what the prevailing conditions are. At this point, ATM is used most frequently because it offers the strongest capabilities to address traffic engineering and resource utilization. Right now, high hopes are also pinned on MPLS because it does a good job of marrying the best qualities of IP with the best qualities of ATM. But, again, we are in an era of many emerging technologies, so stay tuned. This chapter should give you an appreciation of how many issues there are to understand in the proper administration of the emerging business-class services that promise to generate large revenues.

For more learning resources, quizzes, and discussion forums on concepts related to this chapter, see www.telecomessentials.com/learningcenter.

 



Telecommunications Essentials
Telecommunications Essentials: The Complete Global Source for Communications Fundamentals, Data Networking and the Internet, and Next-Generation Networks
ISBN: 0201760320
Year: 2005
Pages: 84
