Packet switching was developed as a solution for the communications implications of interactive processing—it was designed to support
With packet switching, packets are routed through a series of intermediate nodes, often involving multiple networks; they travel in a store-and-forward manner through a series of packet switches (that is, routers) that ultimately lead to the destination. Information is divided into packets that include a destination address and a sequence number.

Let's look at an analogy. Think about telecommunications as a transportation network in which the physical roadway is a gravelly, potholed, single-lane alley. This is the traditional voice channel—twisted-pair deployed in a limited spectrum to support voice. We can pave that alleyway and give it a slicker surface, which equates to DSL. The road can now accommodate some additional information and move traffic along at a faster rate. We could then build a street with four lanes, the equivalent of coaxial cable. We could even build a much more sophisticated interstate turnpike, with eight
Over these roadways travel vehicles—that is, packets, such as X.25, Frame Relay, Internet Protocol (IP), and ATM. The vehicles can carry different
The secret to understanding the various packet formulas is
that packet switching deals with containerized, labeled entities we generically call packets, which vary in
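A minimal sketch in Python can make the "containerized, labeled entity" idea concrete: each packet carries a destination address and a sequence number so the receiver can reorder whatever arrives. The class and field names here are purely illustrative, not any real protocol format:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    dest: str       # destination address carried in every packet
    seq: int        # sequence number, so the receiver can reorder
    payload: bytes  # a slice of the original message

def packetize(message: bytes, dest: str, size: int) -> list:
    """Divide a message into labeled packets of at most `size` bytes."""
    return [Packet(dest, i, message[off:off + size])
            for i, off in enumerate(range(0, len(message), size))]

def reassemble(packets: list) -> bytes:
    """Receiver side: sort by sequence number and rejoin the payloads."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.seq))
```

Because every packet is self-describing, packets that arrive out of order still reassemble into the original message.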
Because packet switching is a store-and-forward process of relaying through a series of intermediate nodes, latency and packet loss can considerably degrade real-time applications. In fact, the first generation of packet switching, X.25, dealt with data only. It could not handle voice or video. As discussed later in this chapter,
In general, in the traditional mode, packet switching
A packet-switched network is a data-centric environment, and instead of switching millions of physical circuits, as happens in the circuit-switched environment, the data-centric network switches packets over permanent or switched virtual circuits. Aggregation of these physical circuits tends to happen at the edge of the carrier network. The first packet switch in the network immediately converts the physical circuit to a virtual circuit, or a stream of packets. As you can see in
, multiple packets are statistically multiplexed as they come into the packet switch; a routing table is consulted, and an appropriate
The speed of the transmission facilities between the switches directly affects the performance of packet-switched networks; this is why many new-generation packet switches—IP and ATM switches, for instance—are now shipping with high-speed interfaces, such as OC-48 (that is, 2.5Gbps) interfaces. OC-48 interfaces on a switch could
, "Establishing Communications Channels," that there are two main types of packet-switched networks: connection-oriented and connectionless networks. In a connection-oriented environment (such as X.25, Frame Relay, ATM, and VPNs that are based on Frame Relay or ATM networks), a call is set up end-to-end at the onset of the communication. Only one call request packet that contains the source and destination address is necessary. That initial call request packet establishes a virtual circuit to the destination so that subsequent packets need only be read for the marking information that defines the virtual circuit to be taken. The intermediate nodes do not need to look at the addressing information in order to calculate a path for each packet independently. This
As discussed in
, the connectionless environment (which includes the public Internet, private IP-based backbones, and LANs) can be likened to the postal service, in which a message is relayed from point to point, with each relay getting one step closer to its ultimate destination. In a connectionless environment, each packet of a message is an independent unit that contains the source and destination address. Each packet is independently routed at each intermediate node it crosses. The more hops it goes through, the greater the delays that are
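The operational difference between the two modes can be sketched in Python. Everything here is a toy model with invented node names and tables, not a real routing implementation: in the connectionless case every hop resolves the full destination address, while in the connection-oriented case one setup phase installs a short virtual-circuit label at each hop, and data packets carry only that label.

```python
# Connectionless: every node runs a routing decision per packet,
# based on the destination address the packet itself carries.
ROUTES = {               # next hop toward each destination, per node
    "A": {"D": "B"},
    "B": {"D": "C"},
    "C": {"D": "D"},
}

def route_connectionless(node: str, dest: str) -> str:
    """Each packet is routed independently at each intermediate node."""
    return ROUTES[node][dest]

# Connection-oriented: a single call-request establishes the path.
def setup_virtual_circuit(path: list, vc_id: int) -> dict:
    """Call setup: install 'vc_id -> next hop' in every node's VC table."""
    tables = {}
    for here, nxt in zip(path, path[1:]):
        tables.setdefault(here, {})[vc_id] = nxt
    return tables

def route_connection_oriented(tables: dict, node: str, vc_id: int) -> str:
    """Data transfer: a cheap label lookup, no address parsing per packet."""
    return tables[node][vc_id]
```

After setup, the intermediate nodes never look at addressing information again, which is exactly why the connection-oriented mode saves per-packet work.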
X.25, the first generation of packet switching, was standardized by the CCITT (now the ITU-T) in 1976 and was adopted by early public data networks such as Tymnet and Telenet. X.25
The X.25 packet-switching technique emerged out of a need to address the characteristics of interactive processing, which had been introduced in the late 1960s. As mentioned earlier in the chapter, interactive processing is a bursty data flow that implies long connect times but low data volumes. X.25 provided a technique for many conversations to share a communications channel.
Because of when X.25 was created, it was based on an analog network infrastructure. A big problem with analog networks is the accumulation of noise through the amplification points, which leads to the very high error rate associated with analog transmission. So one of the value-added services provided by X.25 networks was error control as a function within the network. Because packet switching is a store-and-forward technique, every intermediate node at which an X.25 packet was halted would perform an error check on the packet. If everything in the packet was correct, the intermediate node would return an acknowledgment to the transmitting node, requesting it to forward the next packet. If the packet was not correct, the node would send a message requesting a retransmission. Thus, at any point in the routing and relaying of those packets, if noise
Remember that what is beneficial or not beneficial about a particular network depends on the prevailing conditions, so in an analog infrastructure, where noise was an issue, error control was a highly desirable feature. But performing that error control procedure on every packet at every node in addition to developing routing instructions at each intermediate node for the next point to which to relay the packet increased the delays that were encountered end-to-end in the transmission of information. Because X.25 packet-switching networks were for data only, it was not important to be able to tightly control delays or losses.
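The hop-by-hop error-control procedure can be sketched in Python. This uses a CRC-32 as a stand-in for the actual X.25 frame check sequence, and the function names are invented for illustration: every store-and-forward node verifies the check before acknowledging and forwarding, which is exactly the per-node work that added end-to-end delay.

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 (a stand-in for the frame check sequence)."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_frame(frame: bytes) -> bool:
    """Recompute the CRC over the payload and compare with the stored one."""
    payload, crc = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == crc

def relay(frame: bytes, hops: int) -> str:
    """Store-and-forward: each hop checks before forwarding.

    A passing check means 'ACK, send the next packet'; a failing check
    means 'request a retransmission from the previous node'.
    """
    for _ in range(hops):
        if not check_frame(frame):
            return "REJ"   # request retransmission
    return "ACK"
```

Repeating the check at every hop is what made X.25 reliable over noisy analog links, and also what made it slow.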
Another early attribute of X.25 was the size of its packet. It used relatively small packets,
Packet Size in X.25, Frame Relay, and ATM
The next generation of packet-switched networks after X.25 is Frame Relay. In Frame Relay, the packet sizes are variable, but they can be up to 4,096 bytes. Frame Relay operates over a digital network, where noise is not much of an issue because regenerative repeaters eliminate the noise that may accumulate on the signal during transmission. As a result, all the error control procedures are removed from Frame Relay networks in order to make them faster. Furthermore, in Frame Relay we're not very
If we jump ahead one more generation, to ATM switching, we find that packets are called
You can see that the optimum packet size really depends on a number of factors, such as the performance of the network, the cost of the bandwidth, and the demands of the applications it's serving.
Like the PSTN, the X.25 packet-switched network has a hierarchy. There are two main categories of packet switches. Some packet switches (such as the square packet switches in
) are close to the customer—in fact, they are at the points where the customers access the network. These packet switches are involved with routing and relaying packets and with error detection and correction. This is also where added intelligence would reside in order to be able to convert between different protocols being used by different computers, or to convert between different operating speeds or between different coding schemes. In other words, this is another value-added piece of the equation. One benefit of X.25 was that it allowed you to create a network that provided connectivity between unlike equipment because it could perform the necessary conversions on your
Again, the links that join the packet switches throughout the network could be combinations of analog and digital facilities. In advanced network infrastructures, they would be digital, but there are still infrastructures around the world that have not yet been reengineered or upgraded, and this is where X.25 still has an application. If you're only concerned with networking locations that are
Another of the original benefits of X.25 was that it could handle alternative routing. X.25 was built with the
To connect to an X.25 network, you need a packet assembler/disassembler (PAD) to interface non-X.25 devices to an X.25 network. PADs convert protocols into packets, as prescribed by the X.25 standard, so that the data can travel across an X.25 packet network. These PADs may reside either at the customer
· X.28 is the standard protocol between the terminal and the PAD.
· X.29 is the standard protocol between the PAD and a remote packet-mode host (DTE).
· X.75 is the gateway protocol that defines how to interconnect two or more packet-switched data networks. One could be a private packet data network and the other a public packet data network, or they could be two different network operators' networks, and so on. (Gateway protocols, which imply a means by which you can cross into other people's backyards, are discussed in Chapter 9 , "The Internet: Infrastructure and Service Providers.")
The advantages of X.25 are as follows:
· Powerful addressing facilities, because X.25 is the first approach to providing Layer 3 networking address information to enable routing and relaying through a series of intermediate nodes and networks
· Better bandwidth utilization, thanks to statistical multiplexing
· Improved congestion control because it enables packets to circumvent
· Improved error control, performed continually at each intermediate node in the network, even in the face of all sorts of failures
· High availability in the face of node and line failures because rerouting is possible
The disadvantages of X.25 are as follows:
· Queuing delays
· Lower-speed communications links
· Smaller packet sizes, which means it doesn't make use of bandwidth as well as some of the newer protocols that involve larger
· No QoS guarantees, so delay-sensitive applications will likely suffer
· For data only, and today we are striving for integrated solutions
The second generation of packet switching, Frame Relay, was introduced in 1991. Frame Relay assumes that there's a digital infrastructure in place and that few errors will result from network noise. Therefore, the entire error detection and correction process has been removed from the Frame Relay network, and error control is done entirely in the endpoints. This means that traffic is not delayed by being
The lack of error control in the network also means that it is possible to carry voice and video over a Frame Relay network. However, Frame Relay is not innately designed to do that. The packet sizes enabled under Frame Relay are large—up to 4,096 bytes—and variable, which means that there could be a 100-byte packet going through a network node, with a 4,000-byte packet right behind it. When you have packets of varying sizes, you can't predict the delay in processing those packets through the network, and when you can't predict the delay, you can't properly address the latency requirements of real-time voice or video.

Yet we do, in fact, run voice and video over Frame Relay networks, by tweaking the system in one of several ways. For example, we could provision separate links to carry the voice and the data traffic, and thus some excess data bursting wouldn't affect any real-time telephony, for instance, that is under way. We could prioritize traffic by application and in that way enable access to bandwidth, based on priority. In public Frame Relay networks, we often convert frames to equal-
The types of links that connect the Frame Relay switching points operate at high speeds—they run the full range of the wide
The standards for Frame Relay come from the ITU-T, which defines Frame Relay as "a conversational communication service provided by a subnetwork for high-speed bursty data." This definition implies that we have a two-way capability (it is "conversational") and that Frame Relay is not an end-to-end solution (it is a "subnetwork"). So we don't look for a Frame Relay device such as a Frame Relay telephone; instead, we look at Frame Relay to serve as the cloud—that is, the WAN solution that links together computer networks that are distributed across a country or across the world. And "high-speed bursty data" suggests that Frame Relay's primary application is in support of data and, specifically, LAN-to-LAN internetworking.
What type of an environment might be a candidate for Frame Relay? One such environment is a hub-and-spoke network, in which traffic from remote locations
Frame Relay is also used to give a network some bandwidth flexibility—that is, bandwidth on demand. Because the main application of Frame Relay is LAN internetworking, and because LANs produce highly unpredictable traffic flows, paying for a subscribed set of bandwidth whether you're using it or not may not be very cost-effective. Frame Relay provides the capability to burst above what you've committed to
Frame Relay is also useful in a multiprotocol environment. Although IP seems to rule the world, it is not the only protocol in use. It is a multiprotocol world. There are SNA networks in place, still making use of IBM's Synchronous Data Link Control (SDLC). The largest legacy networks today are some of the billing systems run by the world's telco operators. Frame Relay is used by more than 60,000
Closed user groups—where you want to know who has access in and out of your network—can be achieved with Frame Relay, unlike with the public Internet, where you have no idea who's on there at any point in time. Frame Relay also allows you to predict the level of the network's performance, so it enables you to set metrics. This makes it an
Frame Relay is an interface specification that defines how information must be packaged in order for the Frame Relay network to act on it and to deliver it to its destination. Therefore, it is not
The Frame Relay interface takes the native data stream, no matter what the protocol (for example, TCP/IP, SDLC, X.25), and puts it inside a Frame Relay envelope. Essentially, Frame Relay puts the native data into an encapsulated form, using Link Access Protocol D (LAPD), that the Frame Relay switches can act on.
The Frame Relay header format, LAPD, is shown in
. A beginning flag marks the start of the frame. The Frame Relay header is the critical part of the envelope, carrying the addressing information. The user data is the native block of information. Next, the frame-check sequence
In a Frame Relay network, the customer environment includes the full complement of information resources that the customer wants to use on this network. Next, the CPE—which could be a router, bridge, FRAD, mux, or switch—contains the interface that formats packets into the Frame Relay frames. From the CPE, an access line (called a user network interface [UNI]) connects to the Frame Relay provider switch. That UNI could be a leased line, such as 56Kbps/64Kbps, or T-1/E-1, an ISDN line, or an analog dialup line.
The UNI then leads to the Frame Relay switch, which is basically a statistical multiplexer. Based on the type of subscription in place, the traffic is sent out either over a permanent virtual circuit (PVC) or over a switched virtual circuit (SVC). Recall from Chapter 2 that a PVC is analogous to a leased line. It is predetermined, and it is manually configured and entered into a network management system so that it stays between two locations until it is reprogrammed. SVCs, on the other hand, are like the dialup scenario; they are dynamically provisioned via signaling on an as-needed basis. Figure 7.16 illustrates the use of PVCs. When a packet goes through the interface in the DTE (probably a router or a FRAD), it is put into the LAPD format, and then the LAPD frame is passed to the switching point. The switching point reads the data link connection identifier (DLCI) and looks it up in its table to determine over which circuit or virtual circuit to send the frame.
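The switching point's table lookup can be sketched in Python. The port numbers and DLCI values below are hypothetical; the point is that a DLCI has only local significance on each link, so a switch maps an incoming (port, DLCI) pair to an outgoing (port, DLCI) pair and relabels the frame as it forwards it.

```python
# Hypothetical forwarding table for one Frame Relay switch:
# (incoming port, incoming DLCI) -> (outgoing port, outgoing DLCI).
DLCI_TABLE = {
    (1, 100): (3, 205),   # PVC from site A toward site B
    (2, 101): (3, 206),   # PVC from site C toward site B
}

def forward_frame(in_port: int, dlci: int) -> tuple:
    """Look up the incoming DLCI and relabel the frame for the next link."""
    return DLCI_TABLE[(in_port, dlci)]
```

Configuring a PVC amounts to installing one such entry in every switch along the path.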
The Frame Relay Forum
If you want to know the latest on what standards are mature, available, and
Subscribers specify the port speed and the committed information rate (CIR) in a Frame Relay network. Port prices are based on bandwidth, which determines the speed of the interface into the network. The PVC charges are based on the CIR and the distance. The CIR refers to the PVC's guaranteed minimum bandwidth under normal conditions. It is generally less than the access rate into the network, which in turn determines the maximum amount of bandwidth you can use.
Figure 7.17 illustrates the bandwidth-on-demand flexibility mentioned earlier in this chapter. Say you have an access line that allows 2.048Mbps, an E-1, to your carrier's switching point. Between these two locations of the network, you have contracted for a PVC that is essentially 1Mbps. In this environment, bandwidth-on-demand works like this: You are allowed to burst above your PVC's CIR of 1Mbps, up to the rate of your access line, or port speed, which is 2Mbps. In other words, you are paying for 1Mbps, but you're actually allowed to transmit at 2Mbps for short periods of time.
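The CIR/port-speed relationship can be checked with a few lines of Python. This is a deliberately simplified per-second model (real Frame Relay policing counts committed and excess bytes over a measurement interval), using the example numbers above:

```python
def police(offered_bps: float, cir_bps: float, port_bps: float) -> str:
    """Classify offered traffic against the CIR and the port speed."""
    if offered_bps <= cir_bps:
        return "committed"          # within the CIR: delivered normally
    if offered_bps <= port_bps:
        return "burst (DE-marked)"  # above CIR, below port speed: discard eligible
    return "clipped at port speed"  # the access line is the hard ceiling
```

With a 1Mbps CIR on a 2.048Mbps E-1 port, a 1.5Mbps flow is an allowed burst, while nothing above 2.048Mbps can ever enter the network.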
If you try to keep transmitting at your burst rate over a sustained period, the network will do one of two things. It might start dropping frames, which is another reason voice and video might suffer over Frame Relay. Or there might be a software mechanism that allows the excess traffic to be captured so that you can be billed for overtime. But the carrier is banking on the fact that not everybody is making use of the CIR at all times. Again, LAN traffic is quite unpredictable, so there are lulls in the day when you're not transmitting anything and other times when you need twice your CIR, and
With SVCs, the connections are established on demand, so the routing tables do not store path identifiers—just the address of each site. Users can connect to any site, as long as the address is programmed into the router and SVC capacity is available. Subscribers control call setup via their own routers or FRADs. The router programming, then, controls allocation of the aggregate bandwidth. SVCs share bandwidth, and they do so either on a first-come, first-served basis or on a custom basis, where
You need to consider a number of performance issues with Frame Relay:
· Likelihood of bottlenecks— This depends on whether the operator has oversubscribed the backbone.
· Ability to handle bursts— Does the operator let you burst above your CIR for sufficient periods, or are the bursts so limited that you really don't get bandwidth-on-demand?
· Level of network delay— Operators commit to different maximum delays on different routes, so if you are going to be handling delay-sensitive traffic, you especially need to address this issue.
· Network availability guarantees— You need to determine to what level you can get a service-level agreement (SLA) that guarantees network availability. This depends on the vendor, not on technology.
As far as Frame Relay QoS goes, you can expect to be able to have classes of service (CoSs), where you specify your CIR and your maximum burst rate, as well as some minor traffic parameters, such as the discard eligibility bits and the congestion
VoFR has been gaining interest among both
The Frame Relay Forum has specified the FRF.11 standard for how to deploy VoFR. It provides bandwidth-efficient networking of digital voice and Group 3 fax communications over Frame Relay. It defines multiplexed virtual connections, up to 255 subchannels on a single Frame Relay DLCI, and it defines support of data
The ITU has defined some VoFR compression standards:
· ITU G.711 PCM— Regular PCM is the compression standard that was part and
· ITU G.726/G.727 ADPCM— In the PSTN, we also went to Adaptive Differential PCM (ADPCM), which reduced the data rate to 32Kbps.
· ITU G.723.1 MP-MLQ— With Frame Relay networks we can apply Multipulse-Maximum Likelihood Quantization (MP-MLQ), which reduces voice to 4.8Kbps and can permit up to 10 voice channels on a single 64Kbps connection.
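The arithmetic behind these compression standards is easy to verify. Note that the raw division below ignores framing and signaling overhead, which is why a real deployment fits about 10 MP-MLQ channels, rather than 13, on a single 64Kbps connection:

```python
PCM = 64_000      # ITU G.711 PCM rate, bps
ADPCM = 32_000    # ITU G.726/G.727 ADPCM rate, bps
MPMLQ = 4_800     # ITU G.723.1 MP-MLQ rate, bps

# ADPCM halves the PCM rate, so two ADPCM channels fit in one PCM slot.
adpcm_channels = PCM // ADPCM     # 2

# Raw channel count for MP-MLQ on a 64Kbps connection, before overhead.
raw_mpmlq_channels = PCM // MPMLQ # 13
```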
Another important VoFR feature is voice activity detection (VAD). VAD algorithms reduce the amount of information needed to re-create the voice at the destination by removing silent periods and redundant information found in human speech; this also helps with compression.
Jitter is another quality issue
FRF.12 addresses the fragmentation of both data frames and VoFR frames. It reduces delay variation, segments voice signals into smaller data bundles, and, ultimately, provides better performance. Because bundles are smaller, when some get lost, the network feels less impact.
Another VoFR consideration is the ability to prioritize voice traffic, which, of course, is very delay sensitive. Echo cancellation is another consideration: round-trip delay makes echo perceptible, and echo cancellation is required on voice circuits longer than 500 miles (800 kilometers). A final consideration is voice interpolation. Equipment is needed to re-create lost voice information so that retransmissions don't need to be performed, because voice retransmissions would be
The advantages of Frame Relay are as follows:
· Provides cost savings compared to leased lines
· Runs on multiprotocol networks
· Provides control over the user community
· Gives predictable performance and reliability
· Provides network management and control
· Provides greater bandwidth flexibility
· Currently used by some 60,000 companies and provided about US$8 billion in revenue in 2000
Disadvantages of Frame Relay include the following:
· Provides weak network management ability
· Inherently unsuitable for delay-sensitive traffic, such as voice and video
· Not entirely standardized
Overall, Frame Relay represents a
ATM is a series of standards that was first introduced by the ITU-T, in 1988, as part of a larger vision for the future of networks called Broadband ISDN. Broadband ISDN defined a new
A huge number of networks—
By definition, ATM is a high-bandwidth, fast packet-switching and multiplexing technique that enables the seamless end-to-end transmission of voice, data, image, and video traffic. It's a high-capacity, low-latency switching fabric that's adaptable for multiservice and
ATM switches characteristically have large capacities. They range from 10Gbps to 160Gbps, and new products are emerging in the Tbps range. (In comparison, IP routers typically offer capacities
The key advantages of ATM are its robust QoS and its high-speed interfaces. ATM was the first networking approach to support high-speed interfaces at both 155Mbps and 622Mbps, so for an enterprise that wanted to reengineer its campus network for higher bandwidth, ATM presented a viable solution. The 1997 introduction of Gigabit Ethernet presented a more economical approach, and today ATM is implemented in the enterprise because it offers the capability to administer QoS for multimedia and real-time traffic. Of course, over time other solutions and architectures also begin to
ATM enables access bandwidth to be shared among multiple sources, and it enables network resources to be shared among multiple users. It allows different services to be combined within a single access channel (see Figure 7.18 ).
There are many key applications for ATM. The ATM standard began in the carrier community, as a means of reengineering the PSTN to meet the demands of future applications. As Frame Relay networks began to see the demand to accommodate voice and video, they also
There's also a need for ATM in VPNs that need to carry multimedia traffic, and where you want to reengineer the network environment to be integrated—for example, replacing individual PBXs for voice and LAN switches for data with an enterprise network switch that can integrate all your traffic into one point at the customer edge.
Finally, ATM can be used to enhance or expand campus and workgroup networks; that is, it can be used to upgrade LANs. In the early days of ATM, one of the first
Early adopters of ATM included the U.S. Navy, universities, and health care campuses. Today, ISPs are the biggest customers of ATM, followed by financial institutions, manufacturers, health care, government, education, research labs, and other enterprises that use broadband applications.
ATM drivers include the capability to consolidate multiple data, voice, and video applications onto a common transport network with specified QoS on a per-application basis. It is also being used to replace multiple point-to-point leased lines, which were used to support individual applications' networks. In addition, ATM is being used to extend Frame Relay services to speeds above T-1 and E-1.
The major inhibitor of ATM is the high service cost. Remember that one of the benefits of Frame Relay is that it is an upgrade of existing technology, so it doesn't require an entirely new set of skills and an investment in new equipment. With ATM you do have an entirely new generation of equipment that needs to be
ATM is a very high-bandwidth, high-performance system that uses a uniform 53-byte cell: 5 bytes of addressing information and 48 bytes of payload. The benefit of the small cell size is reduced latency in transmitting through the network nodes. The
ATM is a connection-oriented network, which for purposes of real-time, multimedia, and time-sensitive traffic is very important because it allows controlled latencies. It operates over a virtual circuit path, which leads to great efficiency in terms of network management. Payload error control is done at the endpoints, and some limited error control procedures are performed on the headers of the cells within the network itself. ATM supports asynchronous information access: Some applications consume a high percentage of capacity (for instance, video-on-demand) and others
As discussed in the following sections, ATM has three main layers (see Figure 7.19 ): the physical layer, the ATM layer, and the ATM adaptation layer.
The Physical Layer
The physical layer basically defines what transmission media are supported, what transmission rates are supported, what physical interfaces are supported, and what the electrical and optical coding schemes are for the ones and zeros. Like the OSI physical layer, it's a definition of the physical elements of getting the ones and zeros over the network.
The ATM Layer
The ATM switch performs activities at the ATM layer. It performs four main functions: switching, routing, congestion management, and multiplexing.
The ATM Adaptation Layer
The ATM adaptation layer (AAL) is the segmentation and reassembly layer. The native stream (whether it's real-time, analog, voice, MPEG-2 compressed video, or TCP/IP) goes through the adaptation layer, where it is segmented into 48-byte cells. Those 48-byte cells are then passed to the first ATM switch in the network, which applies the header information that defines on which path and which channel the conversation is to take place. (This speaks, again, to the connection orientation of ATM.)
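The segmentation step can be sketched in Python. This is an illustrative model of slicing a native stream into fixed 48-byte payloads, together with the "cell tax" implied by the 5-byte header; it is not a real AAL implementation (real AALs add their own trailers and sequence bytes, as the AAL 1 description below shows).

```python
CELL = 53                 # ATM cell size in bytes
HEADER = 5                # addressing/header bytes per cell
PAYLOAD = CELL - HEADER   # 48 bytes of payload per cell

def segment(stream: bytes) -> list:
    """Slice a native stream into 48-byte payloads, padding the last one
    so that every cell is exactly the same size."""
    cells = []
    for off in range(0, len(stream), PAYLOAD):
        chunk = stream[off:off + PAYLOAD]
        cells.append(chunk + b"\x00" * (PAYLOAD - len(chunk)))
    return cells

# The fixed header is the "cell tax": 5 of every 53 bytes are overhead,
# which is the high-overhead disadvantage cited later in the chapter.
CELL_TAX = HEADER / CELL   # about 9.4%
```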
At the onset of the call, there is a negotiation phase, and each switch that's required to complete the call to the destination gets involved with determining whether it has a path and channel of the proper QoS to deliver on the requested call. If it does, at that time it makes a table entry that identifies what path and channel the call will take between the two switches. If along the way one of the switches can't guarantee the QoS being
Within the AAL are a number of options:
· AAL 0 (Null AAL)— When a customer's network equipment takes care of all the AAL-related functions, the network uses a Null AAL (also known as AAL 0). This means that no services are performed and that cells are transferred between the service interface and the ATM network
· AAL 1— AAL 1 is designed to meet the needs of isochronous, constant bit rate (CBR) services, such as digital voice and video, and is used for applications that are sensitive to both cell loss and delay and to emulate conventional leased lines. It requires an additional byte of header information for sequence numbering, leaving 47 bytes for payload. This adaptation layer corresponds to
· AAL 2— AAL 2 is for isochronous variable-bit-rate (VBR) services such as packetized video. It allows ATM cells to be transmitted before the payload is full to accommodate an application's timing requirements.
· AAL 3/4— AAL 3/4 supports VBR data, such as LAN applications, or bursty connection-oriented traffic, such as error messages. It is designed for traffic that can
· AAL 5— AAL 5 is intended to accommodate bursty LAN data traffic with less overhead than AAL 3/4. It is also known as SEAL (simple and efficient adaptation layer). Its major feature is that it uses information in the cell header to identify the first and last cells of a frame, so that it doesn't need to consume any of the cell payload to perform this function. AAL 5 uses a per-frame CRC to detect both transmission and cell-loss errors, and it is expected to be required by ITU-T for the support of call-control signaling and Frame Relay interworking.
The ATM Forum
For the latest information on which ATM standards are supported, are in the works, and are completely formalized, the ATM Forum ( www.atmforum.com ) is the best resource.
I've mentioned several times the virtual path and the virtual channel, and Figure 7.20 gives them a bit more context. Think of the virtual channel as an individual conversation, so that each voice, video, data, and image transmission has its own unique virtual channel. The number of that channel will change between any two switches, depending on what was assigned at the time the session was negotiated.
All similar virtual channels—that is, all those that have the same QoS request—are bundled into a common virtual path. Virtual Path 1 might be all real-time voice that has a very low tolerance for delay and loss; Virtual Path 2 might be for streaming media, which requires continuous bandwidth, minimum delay, and no loss; and Virtual Path 3 might be for non-mission-critical data, so best-effort service is fine. ATM is very elastic in terms of its tolerance of any losses and delays and allocation of bandwidth. It provides an easy means for the network operator to administrate QoS. Instead of having to manage each channel individually in order to guarantee the service class requested, the manager can do it on a path basis, thereby easing the network management process.
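The bundling idea can be sketched in Python. The channel names and QoS classes below are hypothetical; the point is that grouping channels into one virtual path per QoS class lets the operator manage a handful of paths instead of every channel individually.

```python
# Hypothetical virtual channels, each tagged with a QoS class.
channels = [
    ("vc-1", "real-time-voice"),
    ("vc-2", "streaming"),
    ("vc-3", "real-time-voice"),
    ("vc-4", "best-effort"),
]

def bundle_into_paths(chans: list) -> dict:
    """Bundle all channels with the same QoS class into one virtual path."""
    paths = {}
    for vc, qos in chans:
        paths.setdefault(qos, []).append(vc)
    return paths
```

QoS can then be administered once per path (for example, a low-delay guarantee for the whole real-time-voice path) rather than once per channel.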
When does ATM fit into the network, and in what part does it fit? The way technologies seem to
According to Vertical Systems Consulting (
), by 2002 the global market for equipment and services is expected to be roughly US$13 billion, with about half of that, US$6.9 billion, outside the
. Equipment revenues by 2002 are expected to be around US$9.4 billion, and service revenues around US$3.9 billion. The United States today accounts for 75% of the worldwide
ATM's benefits can be summarized as follows:
· Provides hardware switching, which results in high performance
· Allows dynamic bandwidth for bursty data
· Provides CoS and QoS support for multimedia
· Scales in speed and network size
· Provides a common LAN/WAN architecture
· Provides opportunities for simplification via its virtual circuit architecture
· Has strong traffic engineering and network management capabilities
· Currently used by some 1,500 U.S. enterprises
The following are disadvantages of ATM:
· Has small cell size
· Has high overhead
· Has high service costs
· Requires new equipment
· Requires new technical expertise
Another disadvantage of ATM is that confusion arises when some of the capabilities of ATM begin to be offered by other approaches, such as Multiprotocol Label Switching (MPLS), as discussed in Chapter 10 .
For more learning resources, quizzes, and discussion forums on concepts related to this chapter, see www.telecomessentials.com/learningcenter .