Packet-Switched Networks

Packet switching was developed as a solution for the communications implications of interactive processing: it was designed to support bursty data traffic, which involves long connect times but low data volumes. Packet switching applies statistical multiplexing, whereby numerous conversations can make use of one common communications channel, significantly increasing transmission efficiency. (Chapter 2, "Telecommunications Technology Fundamentals," discusses statistical multiplexing in more detail.) However, sharing a communications link introduces latency. A key issue that we have to address in the future is how packet-switched networks can support latency-sensitive traffic such as real-time streams.

With packet switching, packets are routed through a series of intermediate nodes, often involving multiple networks; they are routed in a store-and-forward manner through a series of packet switches (that is, routers) that ultimately lead to the destination. Information is divided into packets that include a destination address and a sequence number. Let's look at an analogy. Think about telecommunications as a transportation network in which the physical roadway is a gravelly, potholed, single-lane alley. This is the traditional voice channel: twisted-pair deployed in a limited spectrum to support voice. We can pave that alleyway and make it a slicker surface, which equates to DSL. The road can now accommodate some additional information and move traffic along at a faster rate. We could then build a street with four lanes, the equivalent of coaxial cable. We could even build a much more sophisticated interstate turnpike, with eight lanes in each direction, which we could say is equivalent to fiber; it gives us increased capacities over traditional roadways. And we even have vehicles that travel through the air, which would be the wireless realm.

Over these roadways travel vehicles (that is, packets) such as X.25, Frame Relay, Internet Protocol (IP), and ATM. The vehicles can carry different numbers of passengers, and the sophistication of their navigation controls also varies. The vehicles vary in how quickly they can accelerate and move through the transportation grid. So, for instance, IP is like a bus. What's the advantage of a bus over, say, a Ferrari (that is, a packet that doesn't want to end up queued behind a busload of tourists, much as slick multimedia doesn't want to queue behind unpredictable bursty data)? The bus can hold a large number of passengers, and it needs only one driver to carry those passengers between their stops. However, the bus takes longer than the Ferrari to maneuver through the intersections, or switching points. Whereas the Ferrari can zip through an intersection, the bus lumbers through, which equates to latency. Smaller vehicles can move through the intersections more quickly, and they reduce latency, but larger vehicles reduce the number of drivers that you need. You can move more passengers using a smaller number of controls, but at the cost of higher latencies. Also, if there's congestion at an intersection, the Ferrari can move onto the shoulder of the road to get around that congestion and continue on its way, whereas the bus is forced to wait because it can't navigate as easily.

The secret to understanding the various packet formats is realizing where their strengths and weaknesses lie. They vary in the number of bits they contain, how much control they give you over delays or losses, and the rules they use to address the highways and the destination points.

Remember from Chapter 2 that packet switching deals with containerized, labeled entities we generically call packets, which vary in size. These packets come from different sources: from different users at one customer site or from different users at different customers. All these different packets are statistically multiplexed and sent on to their destinations over virtual circuits. Also remember from Chapter 2 that a virtual circuit is a set of logical connections that create a pathway between two points; it is not a physical connection that you can trace end-to-end and that belongs to just one conversation. So, a virtual circuit is a shared communications link that is set up on demand based on negotiated communications parameters.

Because packet switching is a store-and-forward process of relaying through a series of intermediate nodes, latency and packet loss can considerably degrade real-time applications. In fact, the first generation of packet switching, X.25, dealt with data only. It could not handle voice or video. As discussed later in this chapter, newer generations can handle voice and video because we have found ways to tweak the network.

In general, in the traditional mode, packet switching offered no Quality of Service (QoS) guarantees. It did, however, offer the assurance that packets would make it to their destination because they could be rerouted around trouble points. But because they could be rerouted around trouble points (which might be congestion points or failed points), there could be no guarantees about the latencies or losses that you would experience. Therefore, it's a relatively new concept to try to build in QoS as a metric in packet-switched networks.

A packet-switched network is a data-centric environment; instead of switching millions of physical circuits, as happens in the circuit-switched environment, the data-centric network switches packets over virtual circuits. Aggregation of the physical circuits tends to happen at the edge of the carrier network. The first packet switch in the network immediately converts the physical circuit to a virtual circuit, or a stream of packets. As you can see in Figure 7.12, multiple packets are statistically multiplexed as they come in through the packet switch; a routing table is consulted, an appropriate path is selected, and the packets are sent over the correct virtual circuit, leading to the next most logical stop in the network.

Figure 7.12. Packet switching


The speed of the transmission facilities between the switches directly affects the performance of packet-switched networks; this is why many new-generation packet switches (IP and ATM switches, for instance) are now shipping with high-speed interfaces, such as OC-48 (that is, 2.5Gbps) interfaces. OC-48 interfaces on a switch could potentially eliminate the need for an entire layer of aggregation that we currently perform according to the traditional model of 64Kbps channels. By eliminating that layer of aggregation, we can allow direct connection to an optical network by using DWDM at the full rate of the service and interface. (See Chapter 2 for information on DWDM.) With data traffic growing monthly, transport networks will increasingly rely on data switches to manage and aggregate the traffic, and the transport network will provide low-cost and reliable connections between these switches.

Remember from Chapter 4, "Establishing Communications Channels," that there are two main types of packet-switched networks: connection-oriented and connectionless. In a connection-oriented environment (such as X.25, Frame Relay, ATM, and VPNs based on Frame Relay or ATM networks), a call is set up end-to-end at the onset of the communication. Only one call request packet, which contains the source and destination addresses, is necessary. That initial call request packet establishes a virtual circuit to the destination so that subsequent packets need only be read for the marking information that defines the virtual circuit to be taken. The intermediate nodes do not need to look at the addressing information in order to calculate a path for each packet independently. This reduces delay because routing decisions do not have to be made at the intermediate nodes. Where error control is performed depends on the generation of the network. With X.25, error detection and correction was a value-added feature of the network: X.25 detected and corrected errors while packets were in transport, hence improving data communications. But as networks became more digital and fiber based, noise became less of a problem; thus, the subsequent generations of packet switching (Frame Relay and ATM, for instance) give the endpoints the responsibility for error detection and correction. Not having to stop packets and investigate them in the middle of transmission greatly decreases the delays that would otherwise be encountered. So in a connection-oriented environment, a virtual circuit defines the path end-to-end, and all packets follow the same path throughout the course of the session.
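To make the label-swapping idea concrete, here is a minimal sketch, in Python, of connection-oriented forwarding. The switch and table names are hypothetical, not from any specific product: the setup phase installs one table entry per switch, and every subsequent packet carries only a short virtual-circuit label that is looked up and swapped at each hop.

```python
# Minimal sketch of connection-oriented (virtual circuit) forwarding.
# Hypothetical names throughout; illustrates the principle only.

class VCSwitch:
    def __init__(self, name):
        self.name = name
        self.vc_table = {}  # inbound VC label -> (next switch, outbound VC label)

    def setup(self, vc_in, next_hop, vc_out):
        # Done once, by the call request packet, at session setup.
        self.vc_table[vc_in] = (next_hop, vc_out)

    def forward(self, vc_label, payload):
        # Data packets carry only a short label; no full address lookup is needed.
        next_hop, vc_out = self.vc_table[vc_label]
        print(f"{self.name}: label {vc_label} -> {vc_out}, to {next_hop.name if next_hop else 'endpoint'}")
        if next_hop:
            next_hop.forward(vc_out, payload)

# Build a three-switch path and install the virtual circuit once.
a, b, c = VCSwitch("A"), VCSwitch("B"), VCSwitch("C")
a.setup(vc_in=17, next_hop=b, vc_out=42)
b.setup(vc_in=42, next_hop=c, vc_out=9)
c.setup(vc_in=9, next_hop=None, vc_out=0)

# Every packet of the session follows the same pre-established path.
for chunk in ("hello", "world"):
    a.forward(17, chunk)
```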

As discussed in Chapter 4, the connectionless environment (which includes the public Internet, private IP-based backbones, and LANs) can be likened to the postal service, in which a message is relayed from point to point, with each relay getting one step closer to its ultimate destination. In a connectionless environment, each packet of a message is an independent unit that contains the source and destination addresses. Each packet is independently routed at each intermediate node it crosses. The more hops it goes through, the greater the delays it accumulates, which greatly affects delay-sensitive applications, including any form of real-time voice, real-time audio, real-time video, video-on-demand, and streaming media. But connectionless environments can work around problems, which is why they were so strong in the early days, when there were frequent system failures and links that were too noisy to perform correctly. Connectionless packets could circumvent these system failures or noisy conditions and still arrive at the destination point with high integrity. The connectionless environment offered the flexibility of routing around problem areas, but at the cost of greater overhead associated with the overall transmission, because addressing had to be included in each packet, and also at the cost of greater delays, because each packet had to be independently routed.
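By contrast, here is a minimal connectionless (datagram) sketch, again in Python with hypothetical names: every packet carries the full destination address, and every node performs an independent routing lookup for every packet, which is exactly where the extra per-packet overhead and delay come from.

```python
# Minimal sketch of connectionless (datagram) forwarding.
# Hypothetical names; each packet is routed independently at each node.

class DatagramRouter:
    def __init__(self, name, routes):
        self.name = name
        self.routes = routes  # destination address -> next router (None at the edge)

    def forward(self, dst, payload):
        # Every single packet triggers its own routing decision here.
        next_hop = self.routes[dst]
        print(f"{self.name}: routed packet for {dst}")
        if next_hop:
            next_hop.forward(dst, payload)

edge = DatagramRouter("Edge", {"host-x": None})
core = DatagramRouter("Core", {"host-x": edge})
ingress = DatagramRouter("Ingress", {"host-x": core})

# Each packet repeats the full address, and the lookup, at every hop.
for chunk in ("hello", "world"):
    ingress.forward("host-x", chunk)
```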

X.25

In 1970 Tymnet introduced X.25, which was the first generation of packet switching. X.25 packet-switching networks evolved as an option for data communications and therefore did not compete directly with the telephony providers. The providers of such networks were put in a special category, called value-added network (VAN) providers.

The X.25 packet-switching technique emerged out of a need to address the characteristics of interactive processing, which had been introduced in the late 1960s. As mentioned earlier in the chapter, interactive processing is a bursty data flow that implies long connect times but low data volumes. X.25 provided a technique for many conversations to share a communications channel.

X.25 Networks

Because of when X.25 was created, it was based on an analog network infrastructure. A big problem with analog networks is the accumulation of noise through the amplification points, which leads to the very high error rate associated with analog networks. So, one of the value-added services provided by X.25 networks was error control as a function within the network. Because packet switching is a store-and-forward technique, at every intermediate node where an X.25 packet was halted, the packet would undergo an error check. If everything in the packet was correct, the intermediate node would return an acknowledgment to the transmitting node, requesting it to forward the next packet. If the packet the node received was not correct, the node would send a message requesting a retransmission. Thus, at any point in the routing and relaying of those packets, if noise contributed to errors, the errors could be resolved, which resulted in a much more accurate data flow.
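A minimal sketch of this hop-by-hop error control follows, using Python's standard CRC-32 as a stand-in for X.25's actual frame-check sequence (the real protocol uses a 16-bit CRC and LAPB procedures; the function names here are hypothetical):

```python
import random
import zlib

# Sketch of X.25-style hop-by-hop error control: each intermediate node
# verifies a checksum and requests retransmission on failure.

def noisy_link(frame: bytes, error_rate: float = 0.3) -> bytes:
    """Deliver the frame, occasionally flipping a bit to simulate analog-line noise."""
    if random.random() < error_rate:
        damaged = bytearray(frame)
        damaged[0] ^= 0x01
        return bytes(damaged)
    return frame

def relay_one_hop(payload: bytes) -> bytes:
    """Send payload+CRC across one hop; the receiving node checks and NAKs until clean."""
    frame = payload + zlib.crc32(payload).to_bytes(4, "big")
    while True:
        received = noisy_link(frame)
        data, crc = received[:-4], int.from_bytes(received[-4:], "big")
        if zlib.crc32(data) == crc:
            return data  # node ACKs and forwards the packet toward the next hop
        print("error detected at intermediate node; requesting retransmission")

print(relay_one_hop(b"interactive session traffic"))
```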

Remember that what is beneficial or not beneficial about a particular network depends on the prevailing conditions, so in an analog infrastructure, where noise was an issue, error control was a highly desirable feature. But performing that error control procedure on every packet at every node, in addition to developing routing instructions at each intermediate node for the next point to which to relay the packet, increased the delays encountered end-to-end in the transmission of information. Because X.25 packet-switching networks were for data only, it was not important to be able to tightly control delays or losses.

Another early attribute of X.25 was the size of its packet. It used relatively small packets, generally 128 bytes or 256 bytes long. This is another issue that changes according to the times. Small packets were desirable in the X.25 generation because of the noise factor. If there was noise in the network, there would be errors, and hence fairly frequent retransmissions were necessary. Retransmitting a smaller packet is more efficient than retransmitting very long blocks of information, so X.25 was specifically designed to use small packets. Again, X.25 was designed in an older generation, so it tends to operate over comparatively slow links, largely in the 56Kbps to 2Mbps range.

Packet Size in X.25, Frame Relay, and ATM

The next generation of packet-switched networks after X.25 is Frame Relay. In Frame Relay, the packet sizes are variable and can be up to 4,096 bytes. Frame Relay operates over a digital network, where noise is not much of an issue because regenerative repeaters eliminate the noise that may accumulate on the signal during transmission. As a result, all the error control procedures are removed from Frame Relay networks in order to make them faster. Furthermore, in Frame Relay we're not very concerned about having to retransmit information; there's less likelihood that errors or noise in the network will cause the need for retransmission. Thus, Frame Relay uses a larger packet size than X.25, and the result is bandwidth efficiency: a Frame Relay packet contains less control information for a larger group of bytes, which means the packet makes better use of the available bandwidth.

If we jump ahead one more generation, to ATM switching, we find that packets are called cells. They're small (only 53 bytes), which gives them the capability to cross intersections quickly. They can transition through network nodes very quickly, but because they are so small, control information is applied to very small payloads. This means that there's an underlying assumption that bandwidth isn't an issue, that you're not trying to conserve bandwidth, and that the prevailing condition you're trying to satisfy is low latency.

You can see that the optimum size of the packets really depends on a number of factors, such as the performance of the network, the cost of the bandwidth, and the demand of the applications it's serving.
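A quick back-of-the-envelope comparison makes the trade-off visible. The header sizes below are simplified assumptions (X.25's Layer 3 header is roughly 3 bytes, before link-layer framing; Frame Relay's address field is 2 bytes plus flags and a frame check; ATM's header is a fixed 5 bytes per 53-byte cell), so treat the numbers as illustrative only:

```python
# Illustrative header-overhead arithmetic; header sizes are simplified
# assumptions, not exact protocol accounting.

cases = [
    ("X.25 (128-byte packet)",       128,  3),
    ("X.25 (256-byte packet)",       256,  3),
    ("Frame Relay (4,096-byte max)", 4096, 6),
    ("ATM cell",                     48,   5),
]

for name, payload, header in cases:
    overhead = 100.0 * header / (payload + header)
    print(f"{name}: {overhead:.1f}% of the link carries control information")
```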

Like the PSTN, the X.25 packet-switched network has a hierarchy. There are two main categories of packet switches. Some packet switches (such as the square packet switches in Figure 7.13) are close to the customer; in fact, they are at the points where the customers access the network. These packet switches are involved with routing and relaying packets and with error detection and correction. This is also where added intelligence would reside in order to convert between different protocols being used by different computers, between different operating speeds, or between different coding schemes. In other words, this is another value-added piece of the equation. One benefit of X.25 was that it allowed you to create a network that provided connectivity between unlike equipment because it could perform the necessary conversions on your behalf, as part of the network service. The packet switches in the second category, the inner tier (such as the round packet switches in Figure 7.13), do not provide those sorts of high-level value-added features. They are involved strictly with the routing and relaying of packets and with error detection and correction.

Figure 7.13. An X.25 packet-switched network


Again, the links that join the packet switches throughout the network could be combinations of analog and digital facilities. In advanced network infrastructures, they would be digital, but there are still infrastructures around the world that have not yet been reengineered or upgraded, and this is where X.25 still has an application. If you're only concerned with networking locations that are served by advanced digital infrastructures, you're probably not very interested in X.25; you'll get much better performance from Frame Relay. But if you have a global network with locations in areas that don't have sophisticated Frame Relay services or even digital networks in place, then X.25 will suit your data networking needs very well, on a cost-effective basis, albeit at slower data rates. And if you're only trying to accommodate an automatic teller machine in a banking environment, for example, you don't need broadband capacity to the kiosk. Applications for X.25 continue to prevail, but this is a quickly aging technology, and many successors improve on its performance.

Another of the original benefits of X.25 was that it could handle alternative routing. X.25 was built with the intention of being able to circumvent failed nodes or failed links.

To connect to an X.25 network, you need a packet assembler/disassembler (PAD) to interface non-X.25 devices to the network. PADs convert protocols into packets, as prescribed by the X.25 standard, so that the data can travel across an X.25 packet network. These PADs may reside either at the customer premises or in the network.

X.25 is essentially the ITU-T's standard access protocol between user devices and a packet-switching network. It defines the interface for terminals operating in the packet mode, connected to public data networks by dedicated circuits. Some additional X protocols are commonly used:

         X.28 is the standard protocol between the terminal and the PAD.

         X.29 is the standard protocol between the PAD and the network.

         X.75 is the gateway protocol that defines how to interconnect two or more packet-switched data networks. One could be a private packet data network and the other a public packet data network, or they could be two different network operators' networks, and so on. (Gateway protocols, which imply a means by which you can cross into other people's backyards, are discussed in Chapter 9, "The Internet: Infrastructure and Service Providers.")

Advantages and Disadvantages of X.25

The advantages of X.25 are as follows:

         Powerful addressing facilities, because X.25 is the first approach to providing Layer 3 networking address information to enable routing and relaying through a series of intermediate nodes and networks

         Better bandwidth utilization, thanks to statistical multiplexing

         Improved congestion control because it enables packets to circumvent congested nodes and be rerouted via other links and nodes

         Improved error control, because error checking is performed continually in the network, at each intermediate node

         High availability in the face of node and line failures because rerouting is possible

The disadvantages of X.25 are as follows:

         Queuing delays

         Lower-speed communications links

         Smaller packet sizes, which means it doesn't make use of bandwidth as well as some of the newer protocols that involve larger frames

         No QoS guarantees, so delay-sensitive applications will likely suffer

         For data only, and today we are striving for integrated solutions

Frame Relay

The second generation of packet switching, Frame Relay, was introduced in 1991. Frame Relay assumes that there's a digital infrastructure in place and that few errors will result from network noise. Therefore, the entire error detection and correction process has been removed from the Frame Relay network, and error control is done entirely in the endpoints. This means that traffic is not delayed by being stopped and checked, which translates to much faster throughput over Frame Relay networks than over X.25 networks.

The lack of error control in the network also means that it is possible to carry voice and video over a Frame Relay network. However, Frame Relay is not innately designed to do that. The packet sizes enabled under Frame Relay are large (up to 4,096 bytes) and variable, which means that there could be a 100-byte packet going through a network node with a 4,000-byte packet right behind it. When you have packets of varying sizes, you can't predict the delay in processing those packets through the network, and when you can't predict the delay, you can't properly address the latency requirements of real-time voice or video. Yet we do, in fact, run voice and video over Frame Relay networks, by tweaking the system in one of several ways. For example, we could provision separate links to carry the voice and the data traffic, so that excess data bursting wouldn't affect any real-time telephony that is under way. We could prioritize traffic by application and in that way grant access to bandwidth based on priority. In public Frame Relay networks, we often convert frames to equal-sized cells; at the core of the Frame Relay network is ATM, because ATM, at the moment, offers the strongest suite of tools for traffic management. Thus, many networks, including IP backbones, the Internet, and Frame Relay, have ATM at their core. You can trick the system in order to get added utility out of Frame Relay networks, but keep in mind that when you do this, you lose a little bit of the cost-efficiency you would otherwise have by running all your traffic in the same manner over the same link.

The types of links that connect the Frame Relay switching points operate at high speeds; they run the full range of the wideband portion of the PDH hierarchy. Where a Frame Relay network is running over a T-carrier infrastructure, the links can operate at 1.5Mbps to 45Mbps; for networks served by E-carrier platforms, the links can operate at 2Mbps to 34Mbps.

The standards for Frame Relay come from the ITU-T, which defines Frame Relay as "a conversational communication service provided by a subnetwork for high-speed bursty data." This definition implies that we have a two-way capability (it is "conversational") and that Frame Relay is not an end-to-end solution (it is a "subnetwork"). So we don't look for a Frame Relay device such as a Frame Relay telephone; instead, we look to Frame Relay to serve as the cloud, that is, the WAN solution that links together computer networks distributed across a country or across the world. And "high-speed bursty data" suggests that Frame Relay's primary application is in support of data and, specifically, LAN-to-LAN internetworking.

Frame Relay Applications

What type of environment might be a candidate for Frame Relay? One such environment is a hub-and-spoke network, in which traffic from remote locations travels through a central site. This is similar to the airline system, in which key airports serve as main hubs; the largest of the 777s travel between the main hubs, and to get to a smaller city, you go through a hub to board a smaller aircraft that then takes you to your destination. Frame Relay is also used to replace very expensive leased lines. Depending on the network topology, Frame Relay could potentially reduce costs by up to 50% compared to leased lines.

Frame Relay is also used to give a network some bandwidth flexibility, that is, bandwidth on demand. Because the main application of Frame Relay is LAN internetworking, and because LANs produce highly unpredictable traffic flows, paying for a subscribed set of bandwidth, whether you're using it or not, may not be very cost-effective. Frame Relay provides the capability to burst above what you've committed to financially. (This is discussed later in this chapter, in the section "Frame Relay Networks.")

Frame Relay is also useful in a multiprotocol environment. Although IP seems to rule the world, it is not the only protocol in use; it is a multiprotocol world. There are SNA networks in place, still making use of IBM's Synchronous Data Link Control (SDLC). The largest legacy networks today are some of the billing systems run by the world's telco operators. Frame Relay is used by more than 60,000 enterprises worldwide, and those that are highly focused on multimedia applications use ATM. Few customers use only one protocol. They have multiple protocols in their networks, and Frame Relay can handle them all because it simply encapsulates another protocol into a Frame Relay envelope and carries it through the network; it doesn't care what's inside the envelope.

Closed user groups, where you want to know who has access in and out of your network, can be achieved with Frame Relay, unlike with the public Internet, where you have no idea who's on there at any point in time. Frame Relay also allows you to predict the level of the network's performance, so it enables you to set metrics. This makes it an especially attractive solution if you are operating in countries that have good carrier infrastructures.

Frame Relay Networks

Frame Relay is an interface specification that defines how information must be packaged in order for the Frame Relay network to act on it and deliver it to its destination. Therefore, it is not necessarily associated with a specific piece of equipment; the Frame Relay interface could reside on multiple platforms. As shown in Figure 7.14, the Frame Relay interface resides on DTE, which is most likely a router but could also be a Frame Relay access device (FRAD), used to provide access for Voice over Frame Relay (VoFR). It could be a T-1 or an E-1 multiplexer with a Frame Relay interface. One of the things that is so valuable about Frame Relay is that it doesn't represent an investment in altogether new technology. You can upgrade existing platforms, which can make a lot of economic sense. Frame Relay can be deployed on a wide range of platforms, and today it is predominantly seen on routers.

Figure 7.14. Frame Relay interface definitions


The Frame Relay interface takes the native data stream, no matter what the protocol (for example, TCP/IP, SDLC, X.25), and puts it inside a Frame Relay envelope. Essentially, Frame Relay puts the native data into an encapsulated form, using Link Access Protocol D (LAPD), that the Frame Relay switches can act on.

The Frame Relay frame format, LAPD, is shown in Figure 7.15. A beginning flag essentially starts the communication. The Frame Relay header is the very important part of the envelope that contains the addressing information. The user data is the native block of information. Next, the frame-check sequence performs a cyclic redundancy check, and an ending flag closes the frame. An expanded view of the Frame Relay header includes the data link connection identifier (DLCI), which is the addressing scheme that defines the source and destination addresses. A few fields can be used to manage a minimal amount of QoS. The forward explicit congestion notification (FECN) and backward explicit congestion notification (BECN) fields are used to manage the traffic flow. The FECN tells the receiver, "I'm experiencing delays getting to you, so anticipate those delays. Don't time-out the session." The BECN tells the transmitter, "Whoa! We've got delays ahead. Throttle back or slow down on your introduction of data, or we'll end up losing those frames because of congestion." You use the discard eligibility field to mark a frame as being either discard eligible or not, and to control what occurs between voice and data in, for instance, a period of congestion. Frame Relay thus enables you to control the traffic flow a bit, and you can determine whether to drop a frame. But notice that there is no place in the frame for defining latency requirements or loss tolerances, the stricter QoS traffic measurements. Nonetheless, the switches read the DLCIs to determine how to properly forward the frame.

Figure 7.15. Frame Relay frame format (LAPD)

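The layout of the default two-byte address field is standardized, so those bits can be pulled out with straightforward bit arithmetic. The following is a small sketch, with hypothetical function names, of decoding the DLCI and the congestion/discard bits:

```python
# Sketch: decode the default 2-byte Frame Relay address field.
# Byte 1: DLCI high 6 bits | C/R | EA=0;  Byte 2: DLCI low 4 bits | FECN | BECN | DE | EA=1

def decode_frame_relay_header(b1: int, b2: int) -> dict:
    return {
        "dlci": ((b1 >> 2) << 4) | (b2 >> 4),  # 10-bit circuit identifier
        "fecn": bool(b2 & 0x08),               # congestion ahead (tells the receiver)
        "becn": bool(b2 & 0x04),               # congestion behind (tells the transmitter)
        "de":   bool(b2 & 0x02),               # discard eligible under congestion
    }

# Example: DLCI 100 with the DE bit set.
dlci = 100
b1 = (dlci >> 4) << 2                      # upper 6 DLCI bits, C/R=0, EA=0
b2 = ((dlci & 0x0F) << 4) | 0x02 | 0x01    # lower 4 DLCI bits, DE=1, EA=1
print(decode_frame_relay_header(b1, b2))   # {'dlci': 100, 'fecn': False, 'becn': False, 'de': True}
```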

In a Frame Relay network, the customer environment includes the full complement of information resources that the customer wants to use on the network. Next, the CPE (which could be a router, bridge, FRAD, mux, or switch) contains the interface that formats packets into Frame Relay frames. From the CPE, an access line (called the user network interface [UNI]) connects to the Frame Relay provider switch. That UNI could be a leased line, such as 56Kbps/64Kbps or T-1/E-1, an ISDN line, or an analog dialup line.

The UNI then leads to the Frame Relay switch, which is basically a statistical multiplexer. Based on the type of subscription in place, the traffic is sent out either over a permanent virtual circuit (PVC) or over a switched virtual circuit (SVC). Recall from Chapter 2 that a PVC is analogous to a leased line: it is predetermined, and it is manually configured and entered into a network management system so that it stays between two locations until it is reprogrammed. SVCs, on the other hand, are like the dialup scenario; they are dynamically provisioned via signaling on an as-needed basis. Figure 7.16 illustrates the use of PVCs. When a packet goes through the interface in the DTE (probably a router or a FRAD), it is put into the LAPD format, and then the LAPD frame is passed to the switching point. The switching point reads the DLCI and looks it up in its table to determine over which circuit or virtual circuit to send the message.

Figure 7.16. PVCs in a Frame Relay network


The Frame Relay Forum

If you want to know the latest on what standards are mature, available, and deliverable, consult the Frame Relay Forum (www.frforum.com). The Frame Relay Forum is a pseudostandards body that produces the recommendations to which most of these devices are built and whose recommendations are largely observed throughout the operator community. You can get a full list of the UNIs from the Frame Relay Forum, along with all the published and draft standards associated with Frame Relay.

Subscribers specify the port speed and the committed information rate (CIR) in a Frame Relay network. Port prices are based on bandwidth, which determines the speed of the interface into the network. PVC charges are based on the CIR and the distance. The CIR generally refers to the PVC's minimum guaranteed bandwidth under normal conditions. Generally, the CIR is less than the access rate into the network, and the access rate determines the maximum amount of bandwidth that you can use.

Figure 7.17 illustrates the bandwidth-on-demand flexibility mentioned earlier in this chapter. Say you have an access line that allows 2.048Mbps (an E-1) to your carrier's switching point. Between these two locations of the network, you have contracted for a PVC with a CIR of essentially 1Mbps. In this environment, bandwidth-on-demand works like this: You are allowed to burst above your PVC's CIR of 1Mbps, up to the rate of your access line, or port speed, which is 2Mbps. In other words, you are paying for 1Mbps, but you're actually allowed to transmit at 2Mbps for short periods of time.

Figure 7.17. Frame Relay bandwidth-on-demand


If you try to keep transmitting at your burst rate over a sustained period, the network will do one of two things. It might start dropping frames, which is another reason voice and video might suffer over Frame Relay. Or there might be a software mechanism that captures the excess traffic so that you can be billed for the overage. The carrier is banking on the fact that not everybody is making use of the CIR at all times. Again, LAN traffic is quite unpredictable, so there are lulls in the day when you're not transmitting anything and other times when you need twice your CIR, and ideally, at the end of the day it all balances out. The carrier is making the same gamble, assuming that not everybody is going to try to exercise their CIR at the same time. If they do, whether you still experience your CIR will depend on the integrity of the Frame Relay provider's network engineering. In other words, if the provider oversubscribes a PVC and everyone attempts to burst at the same time, somebody is not going to have capacity available. This is a big issue in terms of vendor selection. Frame Relay networks are much less expensive than other options because the operators are also saving on how they carry that traffic.
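A common way to picture CIR enforcement is a measurement interval Tc with a committed burst Bc (CIR x Tc) and an excess burst Be: traffic within Bc is forwarded, traffic between Bc and Bc + Be is forwarded but marked discard eligible, and anything beyond that is dropped. The sketch below, with hypothetical names and parameter values, illustrates that policing logic only; real switches implement it in hardware, per PVC:

```python
# Illustrative CIR policing over one measurement interval Tc.
# Frames within Bc pass; between Bc and Bc+Be they pass marked DE; beyond, dropped.

CIR_BPS = 1_000_000                   # committed information rate: 1Mbps (hypothetical PVC)
TC_SECONDS = 1.0                      # measurement interval
BC_BITS = int(CIR_BPS * TC_SECONDS)   # committed burst per interval
BE_BITS = 1_000_000                   # excess burst allowed above Bc (up to port speed)

def police(frame_sizes_bits):
    used = 0
    for size in frame_sizes_bits:
        used += size
        if used <= BC_BITS:
            yield size, "forwarded"
        elif used <= BC_BITS + BE_BITS:
            yield size, "forwarded, DE=1"  # eligible for discard under congestion
        else:
            yield size, "dropped"

# A 2Mbps burst sustained for a full second: half fits the CIR, half rides as DE.
burst = [100_000] * 20
for size, action in police(burst):
    print(size, action)
```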

With SVCs, the connections are established on demand, so the routing tables do not store path identifiers, just the address of each site. Users can connect to any site, as long as the address is programmed into the router and SVC capacity is available. Subscribers control call setup via their own routers or FRADs. The router programming, then, controls allocation of the aggregate bandwidth. SVCs share bandwidth, either on a first-come, first-served basis or on a custom basis, where chosen SVCs are disconnected when a higher-priority application needs bandwidth.

Frame Relay Performance Issues

You need to consider a number of performance issues with Frame Relay:

         Likelihood of bottlenecks: This depends on whether the operator has oversubscribed the backbone.

         Ability to handle bursts: Does the operator let you burst above your CIR for sufficient periods, or are the bursts so limited that you really don't get bandwidth-on-demand?

         Level of network delay: Operators commit to different maximum delays on different routes, so if you are going to be handling delay-sensitive traffic, you especially need to address this issue.

         Network availability guarantees: You need to determine to what level you can get a service-level agreement (SLA) that guarantees network availability. This depends on the vendor, not on the technology.

As far as Frame Relay QoS goes, you can expect to be able to have classes of service (CoSs), where you specify your CIR and your maximum burst rate, as well as some minor traffic parameters, such as the discard eligibility bits and the congestion notifiers. Otherwise, Frame Relay has no provisions for the control of latencies and losses.

VoFR

VoFR has been gaining interest among both carriers and users in the past few years. A main driver behind VoFR is more efficient use of Frame Relay bandwidth. The average full-duplex voice conversation consists of about half silence, which indicates that voice has a bursty quality. Data networks have been sharing bandwidth for many years. Voice is just another protocol, so why not let it also share bandwidth, as it is a rather bursty stream, and in this way achieve better use of the Frame Relay resource? The goal of VoFR is not to replace existing voice networks, but rather to make use of what you have available in Frame Relay to carry overflow traffic or additional voice traffic. Voice is compressed in Frame Relay, and then encapsulated into the Frame Relay protocol via a FRAD. Again, the main advantage of this is better use of a single data network and the cost savings derived from this efficiency. But remember that if you run everything over a single network, voice quality may suffer, and, even worse, data performance may suffer.

The Frame Relay Forum has specified the FRF.11 standard for how to deploy VoFR. It provides bandwidth-efficient networking of digital voice and Group 3 fax communications over Frame Relay. It defines multiplexed virtual connections, up to 255 subchannels on a single Frame Relay DLCI, and it defines support of data subchannels on a multiplexed Frame Relay DLCI.

The ITU has defined some VoFR compression standards:

         ITU G.711 (PCM): Regular PCM is the compression standard that was part and parcel of the PDH hierarchy; it carries voice at 64Kbps. That's a very high rate, given what we can achieve today.

         ITU G.726/G.727 (ADPCM): In the PSTN, we also went to Adaptive Differential PCM (ADPCM), which reduced the data rate to 32Kbps.

         ITU G.723.1 (MP-MLQ): With Frame Relay networks we can apply Multipulse-Maximum Likelihood Quantization (MP-MLQ), which reduces voice to 6.3Kbps and can permit up to 10 voice channels on a single 64Kbps connection (a quick arithmetic check follows this list).
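As a rough sanity check on those rates (ignoring framing and header overhead, so the figures are illustrative only):

```python
# Rough voice-channel arithmetic for a single 64Kbps connection.
# Ignores framing/header overhead, so the figures are illustrative only.

codecs = {"G.711 PCM": 64.0, "G.726 ADPCM": 32.0, "G.723.1 MP-MLQ": 6.3}

for name, rate_kbps in codecs.items():
    channels = int(64 // rate_kbps)
    print(f"{name}: {rate_kbps}Kbps per call -> {channels} channel(s) per 64Kbps link")
```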

Another important VoFR feature is voice activity detection (VAD). VAD algorithms reduce the amount of information needed to re-create the voice at the destination end by removing silent periods and redundant information found in human speech; this also helps with compression.

Jitter is another quality issue related to VoFR. Jitter is the variation in delay, from one packet to the next, on the receive side of the transmission. Delay varies depending on the traffic in the switch, and severe jitter can make conversations very difficult to understand. Dropped packets can cause clicks or pops, and a great deal of packet loss can result in altogether unintelligible conversation.
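One widely used way to quantify jitter is the smoothed interarrival estimator from RTP (RFC 3550), which updates a running estimate by 1/16 of each new delay variation. A minimal sketch, with made-up timestamps:

```python
# Smoothed interarrival jitter estimate in the style of RTP (RFC 3550):
# J += (|D| - J) / 16, where D is the change in one-way transit time.

def jitter_estimates(send_times, recv_times):
    j = 0.0
    transits = [r - s for s, r in zip(send_times, recv_times)]
    for prev, cur in zip(transits, transits[1:]):
        d = abs(cur - prev)        # delay variation between consecutive packets
        j += (d - j) / 16.0
        yield j

# Packets sent every 20ms; network delay wobbles between 50ms and 80ms.
send = [i * 0.020 for i in range(10)]
recv = [s + (0.050 if i % 2 == 0 else 0.080) for i, s in enumerate(send)]
for j in jitter_estimates(send, recv):
    print(f"jitter estimate: {j * 1000:.2f} ms")
```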

FRF.12 addresses the fragmentation of both data frames and VoFR frames. It reduces delay variation, segments voice signals into smaller data bundles, and, ultimately, provides better performance. Because bundles are smaller, when some get lost, the network feels less impact.

Another VoFR consideration is the ability to prioritize voice traffic, which, of course, is very delay sensitive. Echo, caused by round-trip delay, is another consideration: echo cancellation is required on voice circuits over 500 miles (800 kilometers) long. A final consideration is voice interpolation: equipment is needed to re-create lost voice information so that retransmissions don't have to be performed, because voice retransmissions would be ineffective. Voice, unlike data, cannot wait for retransmissions to occur.

Advantages and Disadvantages of Frame Relay

The advantages of Frame Relay are as follows:

         Provides cost savings compared to leased lines

         Runs on multiprotocol networks

         Provides control over the user community

         Gives predictable performance and reliability

         Provides minimum guaranteed throughput

         Provides network management and control

         Provides greater bandwidth flexibility

         Currently used by some 60,000 companies, generating about US$8 billion in revenue in 2000

Disadvantages of Frame Relay include the following:

         Provides weak network management ability

         Inherently unsuitable for delay-sensitive traffic, such as voice and video

         Requires high-quality digital circuits, so it does not work everywhere

         Not entirely standardized

Overall, Frame Relay represents a viable and cost-effective solution for data networking, particularly where LAN-to-LAN interconnection is the main goal.

ATM

ATM is a series of standards first introduced by the ITU-T, in 1988, as part of a larger vision for the future of networks called Broadband ISDN. Broadband ISDN defined a new genre of applications, and most of those applications, not surprisingly, involve video or multimedia content, and this is where ATM shines. ATM was designed to be a master integrator: one platform, one infrastructure over which voice, data, video, multimedia, images, and other forms of traffic that we may not have thought of yet can all coexist and all be assigned the appropriate network resources based on their needs. ATM wasn't designed to be a technique for voice; it wasn't designed as a new solution for data. It was designed for multimedia, but it hasn't yet had a chance to really demonstrate its greatest strengths in today's environment.

A huge number of networks (roughly 80% to 85% of all Internet backbones and Frame Relay networks) have ATM at their core. Today, ATM is still the only WAN approach that provides an architected QoS, which gives network operators the opportunity to manage the traffic inside the network, a prerequisite to being able to offer business-class services such as virtual private networks (VPNs), VoFR, Voice over IP, and Voice over ATM. ATM has the capability to provide the appropriate guarantees to delay-sensitive traffic. ATM is working on your behalf more than may be evident in what you read and hear, especially as IP is the public's current darling. (Chapter 11, "Next-Generation Network Services," discusses the possibility and benefits of marrying ATM and IP.)

By definition, ATM is a high-bandwidth, fast packet-switching and multiplexing technique that enables the seamless end-to-end transmission of voice, data, image, and video traffic. It's a high-capacity, low-latency switching fabric that's adaptable for multiservice and multirate connections. The capacities it affords, including low latency, are absolute prerequisites for the advanced applications this switching technology was designed to support.

ATM switches characteristically have large capacities. They range from 10Gbps to 160Gbps, and new products are emerging in the Tbps range. (In comparison, IP routers typically offer capacities ranging from 4Gbps to 60Gbps, although there are also new Tbps switch routers emerging.)

The biggest advantages of ATM are its robust QoS and its high-speed interfaces. ATM was the first networking approach to support high-speed interfaces, both 155Mbps and 622Mbps. Therefore, when an enterprise wanted to reengineer its campus network for higher bandwidth, ATM presented a viable solution. The 1997 introduction of Gigabit Ethernet presented a more economical approach, and today, ATM is implemented in the enterprise because it offers the capability to administer QoS for multimedia and real-time traffic. Of course, over time other solutions and architectures also begin to incorporate the features that people seek, so new-generation IP routers and switches accommodate the same high-speed interfaces that ATM does. Both ATM and IP products today ship with 2.5Gbps (that is, OC-48) interfaces. Today, ATM can administer QoS, and IP is getting close. (QoS and ATM's service classes are discussed in detail in Chapter 10, "Next-Generation Networks.")

ATM enables access bandwidth to be shared among multiple sources, and it enables network resources to be shared among multiple users. It allows different services to be combined within a single access channel (see Figure 7.18).

Figure 7.18. Mapping services into ATM


ATM Applications

There are many key applications for ATM. The ATM standard began in the carrier community, as a means of reengineering the PSTN to meet the demands of future applications. As Frame Relay networks began to see demand to accommodate voice and video, they also began to institute ATM in their cores in order to be able to administer service guarantees. The same goes for the Internet backbone, especially where there's an interest in providing more than just consumer Internet access: business-class services, where the customer wants SLAs tied to QoS and network performance.

There's also a need for ATM in VPNs that must carry multimedia traffic, and where you want to reengineer the network environment to be integrated, for example, by replacing individual PBXs for voice and LAN switches for data with an enterprise network switch that can integrate all your traffic at one point at the customer edge.

Finally, ATM can be used to enhance or expand campus and workgroup networks; that is, it can be used to upgrade LANs. In the early days of ATM, one of the first marketplaces where it saw adoption was the LAN community. If you wanted to move to a campus network that could support 155Mbps or 622Mbps, the only solution was to go to an ATM environment. However, at the end of 1997, the standards for Gigabit Ethernet were formalized and introduced, and that is a much cheaper technology and transition path than ATM. To go from 100Mbps Ethernet to ATM means going to an entirely new technology. It's an investment in an entirely new generation of equipment and a requirement for an entirely new set of technical skills. A great many more programmers are knowledgeable in other techniques, such as IP, than in ATM. Gigabit Ethernet, by contrast, doesn't require learning a new protocol, which is a benefit for network engineers, and it has a much lower cost in terms of the actual components and boards. Therefore, with the formalization of Gigabit Ethernet, people turned away from ATM in the LAN and decided they would simply throw bandwidth at the problem in the campus network. But remember that Ethernet does not itself address QoS, so we can't continue to throw bandwidth at this problem much longer: when applications truly turn to the visual and multimedia realm, Gigabit Ethernet will not suffice, and QoS will need to be included.

Early adopters, such as the U.S. Navy, universities, and health care campuses, today are deploying ATM. ISPs are the biggest customers of ATM, followed by financial institutions, manufacturers, health care, government, education, research labs, and other enterprises that use broadband applications.

ATM drivers include the capability to consolidate multiple data, voice, and video applications onto a common transport network with specified QoS on a per-application basis. ATM is also being used to replace multiple point-to-point leased lines, which were used to support individual applications' networks, and to extend Frame Relay to speeds above T-1 and E-1.

The major inhibitor of ATM is its high service cost. Remember that one of the benefits of Frame Relay is that it is an upgrade of existing technology, so it doesn't require an entirely new set of skills and an investment in new equipment. With ATM, an entirely new generation of equipment needs to be acquired, and skill sets must be built to properly implement and manage ATM, which may have a big financial impact on the overall picture.

ATM Interfaces

ATM is a very high-bandwidth, high-performance system that uses a uniform 53-byte cell: 5 bytes of addressing information and 48 bytes of payload. The benefit of the small cell size is reduced latency in transiting the network nodes. The disadvantage of the small cell size is increased overhead. But remember that ATM was built in support of the vision of Broadband ISDN, and the second set of standards in support of Broadband ISDN was SDH/SONET. In other words, ATM was created with an eye toward the deployment of fiber, which offers tremendous capacities and hence makes bandwidth less of an issue.

ATM is a connection-oriented network, which for purposes of real-time, multimedia, and time-sensitive traffic is very important because it allows controlled latencies. It operates over a virtual circuit path, which leads to great efficiency in terms of network management. Payload error control is done at the endpoints, and some limited error control procedures are performed on the headers of the cells within the network itself. ATM supports asynchronous information access: Some applications consume a high percentage of capacity (for instance, video-on-demand) and others consume much less (for example, e-mail); thus, ATM allows multirate connections. Finally, ATM has a highly defined and structured set of QoS definitions.

The ATM Layers

As discussed in the following sections, ATM has three main layers (see Figure 7.19): the physical layer, the ATM layer, and the ATM adaptation layer.

Figure 7.19. ATM layers


The Physical Layer The physical layer basically defines what transmission media are supported, what transmission rates are supported, what physical interfaces are supported, and what the electrical and optical coding schemes are for the ones and zeros. Like the OSI physical layer, it's a definition of the physical elements of getting the ones and zeros over the network.

The ATM Layer The ATM switch performs activities at the ATM layer. It performs four main functions: switching, routing, congestion management, and multiplexing.

The ATM Adaptation Layer The ATM adaptation layer (AAL) is the segmentation and reassembly layer. The native stream (whether it's real-time, analog, voice, MPEG-2 compressed video, or TCP/IP) goes through the adaptation layer, where it is segmented into 48-byte cells. Those 48-byte cells are then passed up to the first ATM switch in the network, which applies the header information that defines on which path and which channel the conversation is to take place. (This speaks, again, to the connection-orientation of ATM.)

At the onset of the call, there is a negotiation phase, and each switch required to complete the call to the destination gets involved with determining whether it has a path and channel of the proper QoS to deliver the requested call. If it does, it makes a table entry that identifies what path and channel the call will take between the two switches. If along the way one of the switches can't guarantee the requested QoS, the session is denied. ATM provides an end-to-end view of the network and an assurance that all along the way, the proper QoS can be met. Again, the adaptation layer segments the information into 48-byte cells; each switch, in turn, applies the headers that contain the routing information; and at the receiving end, the adaptation layer reassembles the cells into the native stream understood by the end device. There are adaptation layers for various traffic types: for real-time traffic, for connection-oriented data, for connectionless data, for compressed video, and so on.

Within the AAL are a number of options:

         AAL 0: When a customer's network equipment takes care of all the AAL-related functions, the network uses a Null AAL (also known as AAL 0). This means that no services are performed and that cells are transferred between the service interface and the ATM network transparently.

         AAL 1: AAL 1 is designed to meet the needs of isochronous, constant-bit-rate (CBR) services, such as digital voice and video; it is used for applications that are sensitive to both cell loss and delay and to emulate conventional leased lines. It requires an additional byte of header information for sequence numbering, leaving 47 bytes for payload. This adaptation layer corresponds to fractional and full T-1/E-1 and T-3/E-3. AAL 1 provides a timing recovery function to maintain bit timing across the ATM network and to avoid buffer overflow/underflow at the receiver.

         AAL 2: AAL 2 is for isochronous variable-bit-rate (VBR) services, such as packetized video. It allows ATM cells to be transmitted before the payload is full in order to accommodate an application's timing requirements.

         AAL 3/4: AAL 3/4 supports VBR data, such as LAN applications, or bursty connection-oriented traffic, such as error messages. It is designed for traffic that can tolerate delay but not cell loss. This type performs error detection on each cell by using a sophisticated error-checking mechanism that consumes 4 bytes of each 48-byte payload. AAL 3/4 allows ATM cells to be multiplexed, and it supports the segmentation and reassembly process required to carry variable-length frames over the ATM network. It also provides a per-cell cyclic redundancy check (CRC) to detect transmission errors and a per-frame length check to detect the loss of cells in a frame.

         AAL 5: AAL 5 is intended to accommodate bursty LAN data traffic with less overhead than AAL 3/4. It is also known as SEAL (simple and efficient adaptation layer). Its major feature is that it uses information in the cell header to identify the first and last cells of a frame, so it doesn't need to consume any of the cell payload to perform this function. AAL 5 uses a per-frame CRC to detect both transmission and cell-loss errors, and it is expected to be required by the ITU-T for the support of call-control signaling and Frame Relay interworking. (A simplified segmentation sketch follows this list.)
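To make the segmentation-and-reassembly idea concrete, here is a minimal AAL 5-style sketch in Python: pad the frame so that, with an 8-byte trailer carrying the length and a CRC-32, the total is a multiple of 48 bytes, then slice it into cell payloads. This follows the AAL 5 format only in spirit; the trailer fields and CRC coverage are simplified, and real implementations also set an end-of-frame indication in the cell header's payload-type field.

```python
import zlib

CELL_PAYLOAD = 48  # bytes of user payload per ATM cell

def aal5_segment(frame: bytes) -> list[bytes]:
    """Pad frame + 8-byte trailer (length + CRC-32, simplified) to a multiple of 48, then slice."""
    pad_len = (-(len(frame) + 8)) % CELL_PAYLOAD
    padded = frame + b"\x00" * pad_len
    trailer = len(frame).to_bytes(4, "big") + zlib.crc32(padded).to_bytes(4, "big")
    pdu = padded + trailer
    return [pdu[i:i + CELL_PAYLOAD] for i in range(0, len(pdu), CELL_PAYLOAD)]

def aal5_reassemble(cells: list[bytes]) -> bytes:
    """Rebuild the PDU, verify the CRC, and strip the padding using the length field."""
    pdu = b"".join(cells)
    length = int.from_bytes(pdu[-8:-4], "big")
    crc = int.from_bytes(pdu[-4:], "big")
    assert zlib.crc32(pdu[:-8]) == crc, "cell loss or corruption detected"
    return pdu[:length]

cells = aal5_segment(b"an IP datagram, say, riding over ATM")
print(len(cells), "cell(s);", aal5_reassemble(cells))
```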

The ATM Forum

For the latest information on which ATM standards are supported, are in the works, and are completely formalized, the ATM Forum (www.atmforum.com) is the best resource.

The ATM Transmission Path

I've mentioned the virtual path and the virtual channel several times, and Figure 7.20 gives them a bit more context. Think of a virtual channel as an individual conversation: each voice, video, data, and image transmission has its own unique virtual channel. The number of that channel will change between any two switches, depending on what was assigned at the time the session was negotiated.

Figure 7.20. The relationship of VP, VC, and transmission path


All similar virtual channels, that is, all those that have the same QoS request, are bundled into a common virtual path. Virtual Path 1 might be all real-time voice, which has a very low tolerance for delay and loss; Virtual Path 2 might be for streaming media, which requires continuous bandwidth, minimum delay, and no loss; and Virtual Path 3 might be for non-mission-critical data, where best-effort service is fine. ATM is very elastic in its tolerance of losses and delays and in its allocation of bandwidth, and it provides an easy means for the network operator to administer QoS. Instead of having to manage each channel individually in order to guarantee the service class requested, the manager can do it on a path basis, thereby easing the network management process.

To illustrate how the paths and channels to be taken are identified within the cell, Figure 7.21 shows the ATM cell structure. The header includes information on the virtual path between Switch A and Switch B, the channel assignment, the type of payload, and the loss tolerance. In essence, the header provides the QoS metric, and the payload makes up the other 48 bytes of the cell. QoS is one of the great strengths of ATM, and ATM defines a series of specific QoS parameters that tailor cells to fit the video, data, voice, and mixed-media traffic.

Figure 7.21. ATM cell structure

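The 5-byte UNI header layout is fixed (a 4-bit generic flow control field, 8-bit VPI, 16-bit VCI, 3-bit payload type, the cell loss priority bit, and an 8-bit header error control byte), so it can be unpacked with straightforward bit arithmetic. A minimal sketch follows; the HEC check itself, a CRC-8 over the first four bytes, is omitted here:

```python
# Unpack a 5-byte ATM UNI cell header: GFC | VPI | VCI | PT | CLP | HEC.
# The HEC (a CRC-8 over the first 4 bytes) is extracted but not verified here.

def decode_atm_uni_header(h: bytes) -> dict:
    word = int.from_bytes(h[:4], "big")  # first 32 bits hold GFC/VPI/VCI/PT/CLP
    return {
        "gfc": (word >> 28) & 0xF,       # generic flow control (UNI only)
        "vpi": (word >> 20) & 0xFF,      # virtual path identifier
        "vci": (word >> 4) & 0xFFFF,     # virtual channel identifier
        "pt":  (word >> 1) & 0x7,        # payload type (e.g., user data vs. OAM)
        "clp": word & 0x1,               # cell loss priority (1 = discard first)
        "hec": h[4],                     # header error control byte
    }

# Example: VPI 1, VCI 32, user data, CLP 0.
header = ((1 << 20) | (32 << 4)).to_bytes(4, "big") + b"\x00"
print(decode_atm_uni_header(header))
```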

Where ATM Fits in the Network

When does ATM fit into the network, and in what part does it fit? The way technologies seem to evolve is that they find their first placement in the core network, where high traffic volumes justify the investments required for the new technology. Therefore, much of today's ATM equipment is in core networks, whether of ISPs, telcos, or other network operators. ATM then filters into the access network and the metropolitan area network. Typically, you find it first where there are concentrations of early-adopter customers. Ultimately, a new technology makes its way into the LAN, where you must reengineer the local enterprise to provide QoS, not just high bandwidth (see Figure 7.22).

Figure 7.22. The ATM infrastructure


ATM Markets

According to Vertical Systems Consulting (www.verticalsystems.com), by 2002 the global market for ATM equipment and services is expected to be roughly US$13 billion, with about half of that, US$6.9 billion, outside the United States. Equipment revenues by 2002 are expected to be around US$9.4 billion, and service revenues around US$3.9 billion. The United States today accounts for 75% of the worldwide revenues. Europe is the second largest market, and the United Kingdom accounts for most of the ATM ports. Canada is next biggest, and Asia Pacific is the fastest growing. Some 50 providers currently offer ATM UNI services.

Advantages and Disadvantages of ATM

ATM's benefits can be summarized as follows:

         Provides hardware switching, which results in high performance

         Allows dynamic bandwidth for bursty data

         Provides CoS and QoS support for multimedia

         Scales in speed and network size

         Provides a common LAN/WAN architecture

         Provides opportunities for simplification via its virtual circuit architecture

         Has strong traffic engineering and network management capabilities

         Currently used by some 1,500 U.S. enterprises

The following are disadvantages of ATM:

         Has small cell size

         Has high overhead

         Has high service costs

         Requires new equipment

         Requires new technical expertise

Another disadvantage of ATM is the confusion that arises when some of its capabilities begin to be offered by other approaches, such as Multiprotocol Label Switching (MPLS), as discussed in Chapter 10.

For more learning resources, quizzes, and discussion forums on concepts related to this chapter, see www.telecomessentials.com/learningcenter.

 


