Establishing Connections: Switching Modes and Networking Modes

For messages to travel across a network, a transmission path must be established to either switch or route the messages to their final destinations. Therefore, network providers need a mechanism that allows them to deliver the proper connections when and where a customer requests them. When, as you can imagine, is ideally right now, that is, bandwidth on demand. Where has two components: path calculation, which entails establishing the proper physical or logical connection to the ultimate destination, and forwarding, which is concerned with how to actually guide the traffic across the backbone so that it uses that physical or logical connection to best advantage.

The networking techniques that evolved over time to handle the when and where came about because traditionally, relatively few high-capacity backbone cables existed. Those few backbone cables had to be manipulated to meet the needs of many individual customers, all of whom had varied bandwidth needs. Two networking techniques arose:

- Networking modes: There are two networking modes, connection oriented and connectionless.

- Switching modes: There are also two switching modes, circuit switching and packet switching. Both of these switching modes offer forms of bandwidth on demand. (But remember that the connection speed can never be greater than the speed of the customer's access line; the fastest connection you can get into the network is what your access line supports.) As you'll learn later in this chapter, circuit switching and packet switching have different ways of performing path calculations and forwarding functions.

The following sections describe networking modes and switching modes in detail.

Networking Modes

When most people are evaluating a network, they concentrate on circuit switching versus packet switching. But it's also very important to consider the networking mode, which can be either connection oriented or connectionless.

Connection-Oriented Networking

As time-sensitive applications become more important, connection-oriented networks are becoming increasingly desirable. In a connection-oriented network, the connection setup is performed before information transfer occurs. Information about the connections in the networks helps to provide service guarantees and makes it possible to most efficiently use network bandwidth by switching transmissions to appropriate connections as the connections are set up. In other words, the path is conceived at the outset, and after the path is determined, all the subsequent information follows the same path to the destination. In a connection-oriented network, there can be some delay up front while the connection is being set up; but because the path is predetermined, there is no delay at intermediate nodes in this type of network after the connection is set up.

Connection-oriented networks can actually operate in either switching mode: They can be either circuit switched or packet switched. Connection-oriented circuit-switched networks include the PSTN (covered later in this chapter and in detail in Chapter 5, "The PSTN"), SDH/SONET (covered in more detail in Chapter 5), and DWDM (covered in detail in Chapter 12, "Optical Networking") networks. Connection-oriented packet-switched networks (covered later in this chapter and in detail in Chapter 7, "Wide Area Networking") include X.25, Frame Relay, and ATM networks.

Connection-oriented networks can be operated in two modes:

- Provisioned: In provisioned networks, the connections can be set up ahead of time based on expected traffic. These connections are known as permanent virtual circuits (PVCs).

- Switched: In switched networks, the connections are set up on demand and released after the data exchange is complete. These connections are known as switched virtual circuits (SVCs).

Connectionless Networking

In a connectionless network, no explicit connection setup is performed before data is transmitted. Instead, each data packet is routed to its destination based on information contained in the header. In other words, there is no preconceived path. Rather, each fragment (that is, packet) of the overall traffic stream is individually addressed and individually routed. In a connectionless network, the delay in the overall transit time is increased because each packet has to be individually routed at each intermediate node. Applications that are time sensitive would suffer on a connectionless network because the path is not guaranteed, and therefore it is impossible to calculate the potential delays or latencies that might be encountered.

Connectionless networks imply the use of packet switches, so only packet-switched networks are connectionless. An example of a connectionless packet-switched network is the public Internet, that wild and woolly place over which absolutely no one has any control. It's a virtual network that consists of more than 150,000 separate subnetworks and some 10,000 Internet service providers (ISPs), so being able to guarantee performance is nearly impossible at this time. One solution is to use private internets (that is, Internet Protocol [IP] backbones), which achieve cost-efficiencies and, because they are private, provide the ability to control their performance and thereby support business-class services. For example, a large carrier (such as AT&T or British Telecom) might own its own internet infrastructure over a very wide geographic area. Because it owns and controls those networks end to end, it can provision and engineer the networks so that business customers can get the proper service-level agreements and can guarantee the performance of their virtual private networks and streaming media networks. The downside in this situation is reliance on one vendor for the entire network.

Switching Modes

Let's start our discussion of switching modes by talking about switching and routing. Switching is the process of physically moving bits through a network node, from an input port to an output port. (A network node is any point on the network where communications lines interface. So a network node might be a PBX, a local exchange, a multiplexer, a modem, a host computer, or one of a number of other devices.) Switching elements are specialized computers that are used to connect two or more transmission lines. The switching process is based on information that's gathered through a routing process. A switching element might consult a table to determine, based on number dialed, the most cost-effective trunk over which to forward a call. This switching process is relatively straightforward compared to the type of path determination that IP routers in the Internet might use, which can be very complex.
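To make that table lookup concrete, here is a minimal sketch of how a switch might pick the most cost-effective trunk for a dialed number. The prefixes, trunk names, and per-minute costs are invented for illustration; a real switch's translation tables are far more elaborate.

```python
# Hypothetical trunk-selection lookup: given a dialed number, pick the
# cheapest trunk group whose prefix matches. All values are illustrative.

TRUNK_TABLE = {
    "1212": [("trunk-A", 0.8), ("trunk-B", 1.1)],   # prefix -> (trunk, cost per minute)
    "1415": [("trunk-C", 0.9)],
    "44":   [("trunk-D", 2.5), ("trunk-E", 2.1)],
}

def select_trunk(dialed_number: str):
    """Return the cheapest trunk for the longest matching prefix."""
    for length in range(len(dialed_number), 0, -1):
        candidates = TRUNK_TABLE.get(dialed_number[:length])
        if candidates:
            return min(candidates, key=lambda entry: entry[1])
    return None  # no route to destination

print(select_trunk("442071234567"))   # ('trunk-E', 2.1)
```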

Routing, on the other hand, involves moving information from a source to a destination across an internetwork, which means moving information across networks. In general, routing involves at least one intermediate node along the way, and it usually involves numerous intermediate nodes and networks. Routing involves two basic activities: determining the optimal path and transporting information through an internetwork. Routing algorithms are necessary to initialize and maintain routing tables. Routing algorithms work with a whole slew of information, called metrics, which they use to determine the best path to the destination. Some examples of the metrics that a routing algorithm might use are path length, destination, next-hop associations, reliability, delay, bandwidth, load, and communication cost. A router could use several variables to calculate the best path for a packet, to get it to a node that's one step closer to its destination. The route information varies depending on the algorithm used, and the algorithms vary depending on the routing protocol chosen. Most manufacturers today support the key standards, including Routing Information Protocol (RIP), Open Shortest Path First (OSPF), and Intermediate System to Intermediate System (IS-IS). Network engineers generally decide which of these protocols to use. Routing protocols can also be designed to automatically detect and respond to network changes. (Protocols and metrics are discussed in detail in Chapter 9, "The Internet: Infrastructure and Service Providers.")
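As a rough illustration of the path-determination half of routing, the following sketch computes a least-cost path over a toy topology using Dijkstra's algorithm, with a single additive cost standing in for whichever metric (hop count, delay, administrator-assigned cost) the routing protocol actually uses. The topology and link costs are assumptions made up for this example.

```python
import heapq

# Invented four-node topology: node -> {neighbor: link cost}
TOPOLOGY = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

def best_path(source, destination):
    """Dijkstra's algorithm: returns (total cost, list of nodes on the path)."""
    queue = [(0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, link_cost in TOPOLOGY[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return float("inf"), []

print(best_path("A", "D"))   # (4, ['A', 'B', 'C', 'D'])
```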

There are two types of routers that you should be familiar with: static routers and dynamic routers. A static router knows only its own table; it has no idea what the routing tables of its upstream neighbors look like, and it does not have the capability of communicating with its upstream neighbors. If a link goes down in a network that uses static routers, the network administrator has to manually reconfigure the static routers' routing tables to take the downed trunk out of service. This reconfiguration would not effect any change in the upstream routers, so technicians at those locations would then also have to incorporate the change. A dynamic router, on the other hand, can communicate with its upstream neighbors, so if a change occurs to its routing table, it forwards that change so that the upstream routers can also adjust their routing tables. Furthermore, a dynamic router not only has a view of its own routing table, but it can also see those of its neighbors, or of the entire network or routing area, depending on the protocol. It therefore works much better in addressing the dynamic traffic patterns that are common in today's networks.
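The contrast can be sketched in a few lines: a static table changes only when someone edits it, whereas a dynamic router recomputes its table from whatever its neighbors currently advertise. The router names, hop counts, and distance-vector style exchange below are illustrative assumptions, not any particular protocol.

```python
# Each router's table: destination -> (next_hop, hop_count)
table_r1 = {"netX": ("R2", 2)}          # the "before" state on router R1

# What R1's neighbors currently advertise (invented figures)
neighbor_advertisements = {
    "R2": {"netX": 1},   # R2 says it can reach netX in 1 hop
    "R3": {"netX": 3},   # R3 says it can reach netX in 3 hops
}

def recompute(neighbors):
    """Dynamic router: pick the best advertised route for each destination."""
    best = {}
    for neighbor, routes in neighbors.items():
        for dest, hops in routes.items():
            if dest not in best or hops + 1 < best[dest][1]:
                best[dest] = (neighbor, hops + 1)
    return best

# The link to R2 fails. A static router would keep pointing at R2 until an
# administrator edits the table; a dynamic router simply recomputes.
del neighbor_advertisements["R2"]
table_r1 = recompute(neighbor_advertisements)
print(table_r1)   # {'netX': ('R3', 4)} -- rerouted without manual intervention
```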

As noted earlier in the chapter, there are two switching modes: circuit switching and packet switching. Circuit switches are position based; that is, bits arrive in a certain position and are switched to a different position. The position to which bits are switched is determined by a combination of one or more of three dimensions: space (that is, the interface or port number), time, and wavelength. Packet switching is based on labels; addressing information in the packet headers, or labels, helps to determine how to switch or forward a packet through the network node.
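The position-based versus label-based distinction boils down to two very different lookup tables, as in the toy sketch below. The port, time-slot, and label values are invented; real switches add the time and wavelength dimensions and vastly larger tables.

```python
# Circuit switching is position based: (input port, time slot) maps to
# (output port, time slot) for the life of the call.
circuit_map = {(1, 3): (7, 5), (2, 0): (4, 2)}

# Packet switching is label based: a label carried in the packet header
# maps to an outgoing interface, looked up per packet.
label_map = {17: "if-2", 42: "if-0"}

def switch_circuit(position):
    return circuit_map[position]          # fixed for the duration of the call

def switch_packet(packet):
    return label_map[packet["label"]]     # consulted for every packet

print(switch_circuit((1, 3)))                            # (7, 5)
print(switch_packet({"label": 42, "payload": "..."}))    # 'if-0'
```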

Circuit Switching

Circuit switching has been the basis of voice networks worldwide for many years. You can apply three terms to the nature of a circuit-switched call to help remember what this is: continuous, exclusive, and temporary. One of the key attributes of a circuit-switched connection is that it is a reserved network resource that is yours and only yours for the full duration of a conversation. But when that conversation is over, the connection is released. A circuit-switched environment requires that an end-to-end circuit be set up before a call can begin. A fixed share of network resources is reserved for the call, and no other call can use those resources until the original connection is closed. A call request signal must travel to the destination and be acknowledged before any transmission can actually begin. As Figure 4.1 illustrates, you can trace the path from one end of the call to the other end; that path would not vary for the full duration of the call, and the capacity provisioned on that path would be yours and yours alone.

Figure 4.1. A circuit-switched call


Advantages and Disadvantages of Circuit Switching Circuit switching uses many lines to economize on switching and routing computation. When a call is set up, a line is dedicated to it, so no further routing calculations are needed.

Since they were introduced in the mid-1980s, digital cross-connect systems (DCSs) have greatly eased the process of reconfiguring circuit-switched networks and responding to conditions such as congestion and failure. DCSs create predefined circuit capacity, and then voice switches are used to route calls over circuits that are set up by these DCSs. DCSs are analogous to the old patch panels. You may have seen a main distribution frame (MDF) on which twisted-pair wiring is terminated. The MDF is a manual patch panel, and before DCSs were introduced, when it was necessary to reconfigure a network because of an outage, congestion, or customer demand resulting from shifting traffic patterns, technicians had to spend days or even weeks manually making changes at the MDF. The DCS is a software patch panel, and within the software are databases that define alternate routes, that is, alternate connections that can be activated in the event that the network encounters a condition that requires some form of manipulation. DCSs are one of the elements of the PSTN that contribute to its reliability: When network conditions change, a DCS can reconfigure the network around those changes in a matter of minutes. With such tools, the PSTN is able to offer five 9s reliability, in other words, 99.999% guaranteed uptime. (DCSs are discussed in more detail in Chapter 5.)

Circuit switching offers the benefits of low latency and minimal delays because the routing calculation on the path is made only once, at the beginning of the call, and no further delays are incurred in calculating the next hop that should be taken. Traditionally, this dedication of resources was sometimes seen as a disadvantage because it meant that the circuits might not be used as efficiently as possible. Around half of most voice calls is silence: Most people breathe and occasionally pause in their speech. So, when voice communications are conducted over a circuit that's being continuously held, and half the time nothing is being transmitted, the circuit is not being used very efficiently. But remember that this is an issue only when bandwidth is constrained. And as mentioned earlier in the book, through the optical revolution, bandwidth is being released at an astounding rate, so the efficient use of circuits under bandwidth constraints will not present the same sort of issue in the future that it once did. Hence, the low latency and minimal delay that circuit switching guarantees are more important than its potential drawbacks in bandwidth efficiency.

Circuit switching has been optimized for real-time voice traffic for which Quality of Service (QoS) is needed. Because it involves path calculation at the front end, you know how many switches and cables you're going to go through, so you can use a pricing mechanism that's based on distance and time. The more resources you use, either over time or over distance, the greater the cost. Again, developments in fiber economics are changing some of the old rules, and distance is no longer necessarily an added cost element. (QoS is discussed in more detail in Chapter 10, "Next-Generation Networks.")

Generations of Circuit Switches Circuit switches have been around for quite some time. We've already been through three basic generations, and we're beginning to see a fourth generation.

The History of the Strowger Switch

The Strowger switch has a rather amusing history, and as it's so rare that we have really amusing stories in telecommunications, I'll share it with you. Once upon a time in the wild west, there was a young man named Almon B. Strowger who wasn't a telecommunications engineer by trade. He was a mortician. As life would have it, he had a competitor in town. During this period, there were no dial pads to use when making a telephone call. Instead, you had to talk with the town telephone operator, and she would extend the connection on your behalf. Mr. Strowger's competitor's wife was the town telephone operator. So, needless to say, anytime there was gossip about a gun battle about to brew on Main Street, she let her husband know, and he was there to collect the bodies before Mr. Strowger got a chance. Mr. Strowger decided to use technology to get a competitive advantage, and he invented the Strowger switch. The new switch meant that you could dial a number directly from your phone and thereby bypass the town telephone operator.

The first generation of circuit switches was introduced in 1888. It was referred to as the step relay switch, the step-by-step switch, or the Strowger switch, in honor of the man who invented it (see Figure 4.2).

Figure 4.2. A step relay switch


In 1935 the second generation of circuit switches was introduced: crossbar switches (see Figure 4.3). Crossbar switches were electromechanical, but each one could service a larger number of subscribers. Both step relay and crossbar switches still exist in the world. Of course, they are generally in underdeveloped areas, but they're not all relegated to museums quite yet. Every year you hear about one or two being decommissioned somewhere in the world.

Figure 4.3. A crossbar switch


The third generation of circuit switches, stored program control (also referred to as electronic common control), was introduced in 1968. A stored program control is a computer-driven, software-controlled switch. Because this type of switch is electronic, there are no moving parts, and the switch has a longer life than earlier generations of switches. Because it is software controlled, it offers more guarantees against obsolescence, easier upgradability to enhanced feature sets, and better control over user features and costs because everything can be programmed into databases that facilitate the call control process (see Figure 4.4).

Figure 4.4. A stored program control switch


The three generations of circuit switches are in place and operating at various levels of activity. With each new generation of switches, we've basically added more connection-oriented features, features that somehow help in making connections (for example, customer features such as call forwarding and call waiting). Circuit switches in the future will likely be able to define connections based on a requested service class. Examples of variables that define a service class are the amount of delay that can be tolerated end-to-end, as well as between components, and the maximum loss that can be tolerated before the transmission is greatly hampered. Hence, we will be able to build connections to meet a particular service class and thereby aid in ensuring the proper performance of an application.

Customer premises equipment (CPE) circuit switches include PBXs. In the PSTN, circuit switches include the local exchanges with which subscribers access the network, the tandem or junction switches that interconnect numbers of local exchanges throughout a metro area, the toll or transit switches used for national long-distance communications, and international gateways used for cross-country communications. A large number of vendors sell these circuit switches, as well as more specialized niche products.

A fourth generation of switches, optical networking switches, is emerging now (see Chapter 13, "Broadband Access Solutions"). Often, these optical networking elements are referred to as wavelength routers or optical switches. The idea is to be able to provision a very high-speed path, at OC-48 (that is, 2.5Gbps), end-to-end across a network of dense wavelength division multiplexers. This will be increasingly important in providing communications interfaces to the high-speed switches that have become available.

Circuit switches double their performance:cost ratio approximately every 40 to 80 months (that is, normally the ratio improves every 80 months, although sometimes new generations are created more rapidly, every 40 months). Major architectural changes in circuit switches occur relatively infrequently. Network switches are responsible for doing all the work of setting up and tearing down calls, as well as for addressing and providing the features that are requested. They provide a very high level of functionality on a very centralized basis within the network, and that enables the end stations to be very cheap and very dumb (for example, a single-line telephone). Again, when intelligence was extremely expensive, there was something to be gained by centralizing it in a monolithic switch because that allowed consumers to access the network and to participate as users at a very low entry point. Until recently, if you wanted to spend time on the Internet, you had to have a PC, which costs considerably more than a single-line telephone. On the other hand, costs are dropping in electronics and appliances all the time, so this is becoming less of an issue, and perhaps in this way, too, distributing the intelligence makes sense. This is the age-old argument about smart core/dumb edge versus dumb core/smart edge, and it speaks to the differences in philosophies between classic telecommunications engineers (affectionately referred to as "bell heads") and modern-day data communications engineers ("net heads"). Chapter 11 talks more about the evolution of the intelligent edge.

Packet Switching

Whereas circuit switching was invented to facilitate voice telephony, packet switching has its origin in data communications. In fact, packet switching was developed specifically as a solution for the communications implications of a form of data processing called interactive processing.

The first generation of data processing was batch processing, in which a data entry clerk would sit down at a job entry terminal and key a volume of data onto an intermediate medium, initially key punch cards and later tape or disk. The data was thus accumulated on an intermediate medium, and at some later point a job would be scheduled and a link would be established to the host that would be responsible for processing the data. When you began to transmit this preaccumulated volume, you had a steady stream of continuous high-volume data, so batch processing made quite effective use of a circuit-switched environment.

In contrast to batch processing, in interactive processing, data entry occurs online; in essence, data is transmitted only when you press the Enter key, but while you're looking at the screen or filling in a spreadsheet, nothing is being transmitted. Thus, interactive processing involves a traffic stream that's described as being bursty in nature, and bursty implies long connect times but low data volumes. Therefore, interactive processing does not make efficient use of circuit-switched links: The connection would be established and held for a long period of time, with only a little data passed. Packet switching was developed to increase the efficiencies associated with bursty transmission. Packet switching involves the multiplexing of multiple packets over one virtual circuit (that is, the end-to-end logical connection that creates a complete path across the network from source to destination node; see Chapter 2, "Telecommunications Technology Fundamentals"). It also involves decentralizing the network intelligence: The intelligence for setting up, maintaining, and tearing down connections no longer resides only in centralized switches; the endpoints also participate in the control of the end-to-end session.
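A small simulation helps show why bursty sources suit statistical multiplexing: if each terminal transmits only a small fraction of the time, many terminals can share one line far smaller than the sum of their peak demands. The 20 terminals, the 10% activity figure, and the link capacity below are assumptions chosen purely for illustration.

```python
import random

random.seed(1)

TERMINALS = 20
ACTIVE_PROBABILITY = 0.10      # each terminal transmits in 10% of time slots
SLOTS = 1000
LINK_CAPACITY = 5              # packets the shared line can carry per slot

overflow_slots = 0
for _ in range(SLOTS):
    # how many terminals happen to burst in this slot
    offered = sum(random.random() < ACTIVE_PROBABILITY for _ in range(TERMINALS))
    if offered > LINK_CAPACITY:
        overflow_slots += 1    # excess packets would be queued, adding delay

print(f"Slots where queuing occurred: {overflow_slots} of {SLOTS}")
# With circuit switching, the same 20 terminals would need 20 dedicated circuits,
# most of which would sit idle most of the time.
```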

Packets A packet is basically a container for bits. We also use terms such as blocks, frames, cells, and datagrams to depict the same concept. A packet can come in a number of sizes, contain different numbers of bits, and carry varying amounts of navigational control that the network nodes can use to route the packet. (Chapter 7 discusses some of the different types of packets and the techniques that use them.) In general, the features of a packet reflect the considerations that shaped the protocol that defines it. Each protocol, as it's developed over time, makes certain assumptions about whether bandwidth is available, whether there's too much noise and therefore too much retransmission needed, or whether the key issue is latency. Packets of different sizes may therefore perform differently in different environments.

A packet is, in essence, a store-and-forward mechanism for transmitting information. Packets are forwarded through a series of packet switches, also known as routers, that ultimately lead to the destination. Information is divided into packets that carry two very important pieces of information: the destination address and the sequence number. The original forms of packet switching (developed in the late 1960s and early 1970s) were connectionless infrastructures. In a connectionless environment, each packet is routed individually, and the packets might not all take the same path to the destination point, and hence they may arrive out of sequence. Therefore, the sequence number is very important; the terminating point needs it to be able to reassemble the message in its proper order.
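The role of those two header fields can be sketched as follows: a message is split into packets, the packets are shuffled to mimic out-of-order arrival in a connectionless network, and the receiver restores the original order from the sequence numbers. The field names and packet size are illustrative assumptions.

```python
import random

def packetize(message, destination, size=8):
    """Split a message into packets carrying a destination and a sequence number."""
    return [
        {"dest": destination, "seq": i, "payload": message[offset:offset + size]}
        for i, offset in enumerate(range(0, len(message), size))
    ]

packets = packetize("Packet switching was built for bursty data.", "host-B")
random.shuffle(packets)                       # simulate out-of-order arrival

# The terminating point sorts by sequence number to reassemble the message.
reassembled = "".join(p["payload"] for p in sorted(packets, key=lambda p: p["seq"]))
print(reassembled)   # original message restored at the destination
```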

Generally, in packet switching, packets from many different sources are statistically multiplexed and sent on to their destinations over virtual circuits. Multiple connections share transmission lines, which means the packet switches or routers must do many more routing calculations. Figure 4.5 illustrates a packet-switched network that uses virtual circuits. You can see that packets are queued up at the various nodes, based on availability of the virtual circuits, and that this queuing can impose delays. The first generation of packet-switched networks could support only data; it could not support voice or video at all because there was so much delay associated with those networks. As packet-switched environments are evolving, we are developing techniques to be able to separate and prioritize those traffic types. (Chapter 10 talks about those issues in depth.)

Figure 4.5. A packet-switched network


Connectionless Versus Connection-Oriented Packet-Switched Networks There are two forms of packet-switched networks: connectionless and connection oriented.

Connectionless Packet-Switched Networks You can picture connectionless networks by using a postal service metaphor: I write a letter, I put it in an envelope, and I address the envelope. My carrier does not care in the least what it says on my envelope because she knows where she is taking that envelope. It's going to the next point of presence, which is the local post office. The local post office will be concerned with the destination zip code, but it isn't concerned at all about the name or street address on the envelope. It simply wants to know what regional center to route it to. The regional center cares about the destination city, and the destination local post office cares about the actual street address because it needs to assign the letter to the right carrier. The carrier needs to care about the name so that the letter finds its way into the right mailbox. If you end up with someone else's letter in your box, the ultimate responsibility for error control is yours because you are the endpoint.

A connectionless environment worries about getting a packet one step closer to the destination (see Figure 4.6). It doesn't worry about having an end-to-end view of the path over which the message will flow; this is the fundamental difference between connection-oriented and connectionless environments, and, hence, between infrastructures such as the PSTN and the Internet. Examples of connectionless packet-switched networks include the public Internet, private IP backbones or networks, Internet-based VPNs, and LANs. Again, each packet (referred to as a datagram) is an independent unit that contains the source and destination address, which increases the overhead. That's one of the issues with connectionless packet-switched networks: If we have to address each packet, then the overall percentage of control information relative to the actual data being transported rises.

Figure 4.6. A connectionless network


Each router performs a path calculation function independently, and each relies on its own type of routing protocols (for example, Open Shortest Path First [OSPF], Intermediate System to Intermediate System [IS-IS], or Border Gateway Protocol [BGP]). Each router calculates the appropriate next hop for each destination, which is generally based on the smallest number of hops (although some routing protocols use an abstract notion of "cost," as defined by the network administrator, in making their decisions). Packets are forwarded, then, on a hop-by-hop basis rather than as part of an end-to-end connection. Each packet must be individually routed, which increases delays, and the more hops, the greater the delay. Therefore, connectionless environments provide less control over ensuring QoS because of unknown latencies, unknown retransmissions, and unknown sequences in which the packets will arrive.
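The hop-by-hop behavior described above can be sketched as a chain of independent table lookups: each router consults only its own forwarding table to pick the next hop for the packet's destination. The router names, the prefixes, and the deliberately crude prefix-matching function below are assumptions for illustration only.

```python
# Invented per-router forwarding tables: prefix -> next hop (or local delivery)
FORWARDING_TABLES = {
    "R1": {"10.1.0.0/16": "R2", "10.2.0.0/16": "R3"},
    "R2": {"10.1.0.0/16": "R4", "10.2.0.0/16": "R3"},
    "R3": {"10.2.0.0/16": "deliver"},
    "R4": {"10.1.0.0/16": "deliver"},
}

def matches(prefix, address):
    """Crude prefix match: compare only the whole octets covered by the mask."""
    network, bits = prefix.split("/")
    octets = int(bits) // 8
    return address.split(".")[:octets] == network.split(".")[:octets]

def route(packet, router="R1", hops=0):
    """Each router independently chooses the next hop for this one packet."""
    for prefix, next_hop in FORWARDING_TABLES[router].items():
        if matches(prefix, packet["dest"]):
            if next_hop == "deliver":
                return f"delivered at {router} after {hops} hops"
            return route(packet, next_hop, hops + 1)
    return "no route"

print(route({"dest": "10.1.5.9", "payload": "..."}))   # delivered at R4 after 2 hops
```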

Connection-Oriented Packet-Switched Networks The connection-oriented packet-switched environment is something like a telephone network, in which a call setup is performed end-to-end. X.25, Frame Relay, ATM, and Multiprotocol Label Switching (MPLS) are all connection-oriented techniques. In a connection-oriented packet-switched network, only the call request packet contains the source and destination address (see Figure 4.7). Therefore, the subsequent packets don't have to contain the address information, which reduces the overall overhead. The call request packet establishes the virtual circuit. Each individual switch along the path then forwards traffic to the appropriate next switch until the packets all arrive at the destination. With connection-oriented networks, we do not need to route each individual packet. Instead, each packet is marked as belonging to some specific flow that identifies which virtual circuit it belongs to. Thus, the switch needs only to look at the mark and forward the packet to the correct interface because the flow is already set up in the switch's table. No repeated per-packet computation is required; consequently, connection-oriented networks reduce latencies, or delays.

Figure 4.7. A connection-oriented network


In the connection-oriented environment, the entry node contains the routing table, where the path is calculated and determined, and all packets follow that same path on to the destination node, thereby offering a better guarantee of service.
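By contrast with the connectionless sketch above, a connection-oriented packet switch can be pictured as a table that is populated once, at call setup, and then consulted via a short virtual-circuit identifier in every subsequent packet. The switch names, the precomputed path, and the VC number below are invented for the example.

```python
switch_tables = {"S1": {}, "S2": {}, "S3": {}}
PATH = ["S1", "S2", "S3"]        # path chosen when the call request packet was routed

def set_up_virtual_circuit(vc_id, destination):
    """Install the virtual circuit in every switch along the precomputed path."""
    for i, switch in enumerate(PATH):
        next_hop = PATH[i + 1] if i + 1 < len(PATH) else destination
        switch_tables[switch][vc_id] = next_hop

def forward(packet):
    """Data packets carry only the VC id, not full addresses."""
    location = PATH[0]
    while location in switch_tables:
        location = switch_tables[location][packet["vc"]]   # single table lookup per switch
    return location

set_up_virtual_circuit(vc_id=7, destination="host-B")
print(forward({"vc": 7, "payload": "..."}))   # host-B
```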

Advantages and Disadvantages of Packet Switching Packet-switching techniques have a number of limitations, including the following:

- Latencies occur because packet switching is a store-and-forward mechanism.

- Jitter (that is, variable delay, or variation in the delay in moving the bits between any two switches) occurs. There are two main types of delay: jitter and entry-to-exit-point delay. Say that your end-to-end delay meets the desired maximum of 150 milliseconds, but between Switches 1 and 2 the delay is 20 milliseconds and between Switches 2 and 3 it's 130 milliseconds. That variation in delay, or jitter, will hamper some applications, so it needs to be controlled so that the network can support demanding applications. (A worked version of this example appears in the sketch after this list.)

- Packet loss occurs when there is congestion at the packet switches or routers, and it can considerably degrade real-time applications. For example, if a few packets of a voice call are lost, you'll hear pops and clicks, but if the loss climbs into the 30% to 40% range, the voice might sound like "ye ah ng ng ah mm mm ah." This is the experience many people today have at times when using the public Internet for telephony, where at peak periods of the day, packet loss can be as great as 40%.
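Here is the worked version of the jitter example promised in the list above, using the same 20 ms and 130 ms hop delays; it simply shows that an acceptable end-to-end figure can hide a large hop-to-hop variation.

```python
hop_delays_ms = [20, 130]                           # Switch 1->2 and Switch 2->3

end_to_end = sum(hop_delays_ms)                     # 150 ms in total
jitter = max(hop_delays_ms) - min(hop_delays_ms)    # 110 ms variation between hops

print(f"End-to-end delay: {end_to_end} ms, inter-hop variation: {jitter} ms")
# Even though 150 ms may be tolerable end to end, a 110 ms swing between hops
# can disrupt real-time applications unless it is controlled.
```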

Given these drawbacks and the way packet-switched networks evolved, packet-switched networks originally gave no QoS guarantees; they offered only best-effort QoS. But they guaranteed high reliability because packets could be routed through alternate nodes or pathways if they encountered failed links along the way; thus, you were guaranteed that information would be transported, but not within metrics such as latency and packet loss. Currently, protocols are being developed that will enable real-time applications such as voice, video, audio, and interactive multimedia to perform properly on packet-switched networks.

The pricing mechanism that evolved with packet-switched networks was a bit different from that used for circuit-switched networks. It was not based on time and distance but on usage. You're either billed based on the volume of packets or the amount of bandwidth that you subscribe to. Distance insensitivity is a part of the packet-switched networking environment.

Generations of Packet Switches Similar to circuit switches, packet switches have gone through three basic generations: X.25 switches (first generation), routers (second generation), and Frame Relay and cell switches (third generation). Each generation of packet switching has increased the efficiency of packet processing and the speed of the interfaces it supports. In effect, the size of the pipes and the size of the interfaces dictate how effectively the packet-switched network performs. In packet switching, the processing is being pushed outside the network to the end nodes, so you need more intelligent software at the end nodes, which get involved in session setup, maintenance, and teardown, as well as flow control from end to end.

Besides X.25 switches, routers, and Frame Relay switches, packet switches include ATM switches and a new breed, called Tbps (terabits per second) switch routers. A large number of vendors sell these packet switches, and it seems that more companies jump on the bandwagon each day.

Packet switches are doubling their performance:cost ratio every 10 to 20 months, so we see new entries in the product line much more rapidly in this environment than in the circuit-switched world. However, here again we rely on expensive end stations, PCs or other computers, to finish the job of communication in packet switching. These end stations have to rely on protocols such as Transmission Control Protocol/Internet Protocol (TCP/IP), an open standard for internetworking that performs the equivalent of call setup/teardown and ensures the correct receipt of data. (TCP/IP is discussed in Chapter 9.) These end stations also have to ensure that all the data has been received and that it has been received correctly.

Comparing Circuit Switching and Packet Switching

What does the future hold for circuit switching and packet switching? Circuit switching is superior to packet switching in terms of eliminating queuing delays, which results in completely predictable latency and jitter in the backbone. Given the trend toward real-time visual and sensory communication streams, this seems to be the most important characteristic for us to strive toward. With the large capacities that are afforded with the new DWDM systems and other optical network elements, minimizing latency becomes more important than optimizing bandwidth via statistical multiplexing. (DWDM and other forms of multiplexing are discussed in Chapter 2.) We're likely to see the use of statistical multiplexing continue to increase at the edge and at the customer premises, as a means of economically integrating and aggregating traffic from the enterprise to present it over the access link to the network. In the core, fiber-based and circuit-switched networks are likely to prevail.

Table 4.1 is a brief comparison of circuit switching and packet switching. As you look at the table, keep in mind that as we get more bandwidth, circuit-switched networks do not have to be so concerned with bandwidth efficiency. And as QoS is added to packet-switched networks, these networks are able to support real-time applications. Again, the prevailing conditions have a lot to do with what is best in a given network.

Table 4.1. Circuit Switching Versus Packet Switching

Characteristic | Circuit Switching | Packet Switching
Origin | Voice telephony | Data networking
Connectionless or connection oriented | Connection oriented | Both
Key applications | Real-time voice, streaming media, videoconferencing, video-on-demand, and other delay- and loss-sensitive traffic applications | Bursty data traffic that has long connect times but low data volumes; applications that are delay and loss tolerant
Latency/delay/jitter | Low latency and minimal delays | Subject to latency, delay, and jitter because of its store-and-forward nature
Network intelligence | Centralized | Decentralized
Bandwidth efficiency | Low | High
Packet loss | Low | High


