Network technologies (also called LAN technologies or network specifications) are used to run the basic unit of all internetworks: the LAN segment. The most widely known network technology is Ethernet, but there are several others, including Token Ring, Asynchronous Transfer Mode (ATM), wireless (WiFi), and FDDI.
Network technologies are implemented at the data-link layer (layer 2) of the seven-layer OSI reference model. Put another way, network technologies are largely characterized by the physical media they share and how they control access to the shared medium. This makes sense, if you think about it: networking is connectivity, but to be connected, order must somehow be maintained among the users doing the sharing. For that reason, layer 2 (the data-link layer) is also called the media access control layer, or MAC layer for short. The message unit format at this level is the data frame, or frame.
As such, most network technologies, such as Ethernet, Token Ring, and FDDI, can in and of themselves deal only with MAC addresses (the serial-number-like device identifiers mentioned earlier). A network layer protocol such as IP is needed to route messages through the internetwork. Network technologies alone can support only switched internetwork operation, which is good only for local areas or simple paths over longer distances, where not much guidance is needed. Network technologies are used at opposite ends of the spectrum:
Access LANs (Distribution) Accept cabling from devices, tie workgroups together, and share resources, such as departmental printers and servers
Backbone LANs (Core) Link access LANs and share resources such as database servers, mail servers, and so on
Access LANs, formed by hubs or access switches, give users and devices connectivity to the network at the local level, usually within a floor in an office building. Backbone LANs, formed by routers or LAN switches, tie together access LANs, usually within a building or office campus. Routed internetworks are typically used to distribute traffic between the two.
Version 1 of Ethernet was developed by Xerox Corporation during the early 1970s. Over the subsequent decade, Xerox teamed with Intel and Digital Equipment Corporation to release Version 2 in 1982. Since that time, Ethernet has become the dominant network technology standard. Thanks mostly to economies of scale, the average cost per Ethernet port is now far lower than that of a Token Ring port. Indeed, it has become so much a de facto standard that many manufacturers are integrating Ethernet NICs into computer motherboards in an attempt to do away with the need for separate NIC modules.
Ethernet operates by contention. Devices sharing an Ethernet LAN segment listen for traffic being carried over the wire and defer transmitting a message until the medium is clear. If two stations send at about the same time and their packets collide, both transmissions are aborted, and the stations back off and wait a random period of time before retransmitting. Ethernet uses the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) algorithm to listen to traffic, sense collisions, and abort transmissions. CSMA/CD is the traffic cop that controls what would otherwise be random traffic. It restricts access to the wire in order to ensure the integrity of transmissions. Figure 2-3 illustrates the CSMA/CD process.
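The "back off and wait a random period" step follows a well-known rule called truncated binary exponential backoff: after the nth collision, a station waits a random number of slot times between 0 and 2^min(n, 10) - 1, so the wait range doubles with each repeated collision. The short Python sketch below illustrates just that rule; the function name and the slot abstraction are ours, not part of any standard API.

```python
import random

def backoff_slots(collision_count, max_exponent=10):
    """Truncated binary exponential backoff: after the nth collision,
    wait a random number of slot times in [0, 2**min(n, 10) - 1]."""
    exponent = min(collision_count, max_exponent)
    return random.randrange(2 ** exponent)

# After the 1st collision, a station waits 0 or 1 slot times;
# after the 3rd, anywhere from 0 to 7.
```

In real Ethernet, a slot time is 512 bit times and a station gives up after 16 collisions; the sketch leaves those details out.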
Figure 2-3: Ethernet access is controlled by carrier sensing and detecting frame collisions
Because the medium is shared, every device on an Ethernet LAN segment receives the message and checks whether the destination address matches its own. If it does, the message is accepted and processed up the seven-layer stack, and a network connection is made. If the address doesn't match, the frame is dropped.
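The address check each NIC performs amounts to a simple comparison. This toy sketch (the function and constant names are hypothetical, not a real driver API) shows the decision: accept a frame addressed to this station or to the all-ones broadcast address, drop everything else.

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"   # the all-stations MAC address

def accept_frame(dest_mac, my_mac):
    """A station keeps a frame only if it is addressed to this NIC
    (unicast match) or to every station (broadcast); otherwise drop."""
    dest = dest_mac.lower()
    return dest == my_mac.lower() or dest == BROADCAST
```

Real NICs also accept multicast groups they have joined, a wrinkle omitted here for brevity.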
An algorithm is a structured sequence of rules designed to handle variable processes in an orderly manner automatically. Algorithms are commonplace in computing and networking, because things move so fast that there isn't time for a human to intervene.
Even apart from economies of scale, Ethernet is inherently less expensive, thanks to the random nature of its architecture. In other words, the electronics needed to run Ethernet are easier to manufacture because Ethernet doesn't try to control everything. In a general sense, it only worries about collisions.
Ethernet has several implementation options. The original experimental Ethernet ran at about 3 Mbps over coaxial cable; the standardized version runs at 10 Mbps over coaxial or 10BaseT twisted-pair cable (T stands for twisted-pair; we'll cover cabling specifications in Chapter 6). Fast Ethernet runs at 100 Mbps over 100BaseTX twisted-pair or 100BaseFX fiber-optic cable (F stands for fiber). Gigabit Ethernet runs at 1,000 Mbps (or 1 Gbps) over 1000BaseT copper or 1000BaseX fiber-optic cable. A popular configuration choice right now is Fast Ethernet access LANs interconnected through a Gigabit Ethernet backbone LAN.
Gigabit Ethernet is a 1,000-Mbps extension of the Ethernet standard. Gigabit Ethernet is sometimes also referred to as 1000BaseX in reference to the specification for the required copper or fiber-optic cabling. Gigabit Ethernet is being promoted by the Gigabit Ethernet Alliance, a nonprofit industry group much like the ATM Forum. The push for Gigabit Ethernet is largely motivated by its inherent compatibility with other Ethernet specifications (the original 10-Mbps Ethernet and 100-Mbps Fast Ethernet).
Gigabit Ethernet is ATM's main competition to replace FDDI as the backbone of choice. Its greatest advantage is familiarity, given that Ethernet is the pervasive technology. Although originally designed as a LAN technology, at 1,000 Mbps Gigabit Ethernet can scale to WAN configurations. Ethernet uses variable frame sizing (ranging between 64 and 1,518 bytes per frame), and it does not enjoy the QoS (Quality of Service) features built into the ATM protocol. However, many network managers are biased in favor of Gigabit Ethernet because it's easy to configure and deploy. Also, it presumably doesn't introduce the added layer of complexity that LANE (LAN emulation) adaptation requires. Like ATM, Gigabit Ethernet backbones operate over a variety of fiber-optic cable types.
As speedy as a Gigabit Ethernet connection is, there is an even faster connection available on some Cisco gear, namely 10 Gigabit Ethernet. As the name suggests, 10 Gigabit Ethernet runs ten times faster than Gigabit Ethernet, at 10,000 Mbps. Whereas Gigabit Ethernet was a natural extension of Fast Ethernet, 10 Gigabit Ethernet differs in an important way: the physical medium required to transport its packets. Unlike its Ethernet brethren, 10 Gigabit Ethernet does not run across twisted-pair cable. Rather, because of its sheer speed, single-mode or multimode fiber optics is normally required.
As of this writing, 10 Gigabit Ethernet over copper is in development, and Cisco already has models available.
At this time, 10 Gigabit Ethernet would be deployed in WAN, MAN, and datacenter backbone environments, anywhere large amounts of data need to be transported. As with anything that's new and super-fast, 10 Gigabit Ethernet devices don't come cheap. At the high end, the list price for a Catalyst 10GBase-EX4 Metro 10 Gigabit Ethernet module is $79,995.
The benefit of these modular implementations of Ethernet is that network administrators can leverage their investments in Ethernet networks. In many cases, an organization can upgrade from 10/100 Mbps up to Gigabit and 10 Gigabit Ethernet without having to reinvest as much in their core infrastructure.
Now almost totally displaced by Ethernet, Token Ring was at one time Ethernet's main competition as a LAN standard. It is an interesting protocol with some noteworthy features, and it differs sharply from Ethernet in its architectural approach.
As its own LAN standard, Token Ring is incompatible with Ethernet in terms of the type of NICs and software that must be used. Although Token Ring was widely installed in enterprises dominated by IBM, it never caught on as an open standard.
Token Ring takes its name from the fact that it organizes attached hosts into a logical ring. We say logical because the LAN segment behaves like a ring, passing signals in round-robin fashion as if the devices were actually attached to a looped cable. Physically, though, Token Ring LANs may be configured in a hub-and-spoke arrangement called a star topology. Figure 2-4 shows this. Note that in Token Ring parlance, the access concentrator is called a multistation access unit (MAU) instead of a hub.
Figure 2-4: Token ring LANs are logical rings, not actual physical loops
Token Ring avoids contention over a LAN segment by a token-passing protocol, which regulates traffic flow by passing a frame called a token around the ring. Only the host in possession of the token is allowed to transmit, thereby eliminating packet collisions. Token Ring's architecture in effect trades wait-time for collisions, because each station must wait its turn before capturing the token in order to transmit. Nonetheless, eliminating packet collisions greatly increases Token Ring's effective utilization of raw bandwidth. Tests show that Token Ring can use up to 75 percent of raw bandwidth, compared to Ethernet's theoretical maximum of about 37 percent. The trouble is that Token Ring only begins to pay off above certain traffic volumes.
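The token-passing discipline described above can be illustrated with a toy simulation (the function and data layout are our own invention, just to show the round-robin rule): the token circles the ring, and only the current holder may transmit one pending frame per visit, so transmissions never overlap.

```python
from collections import deque

def run_token_ring(stations, pending, rounds):
    """Pass the token around the ring `rounds` times; only the token
    holder transmits, one queued frame per visit, so collisions are
    impossible by construction."""
    ring = deque(stations)
    transmitted = []
    for _ in range(rounds * len(stations)):
        holder = ring[0]
        if pending.get(holder):              # holder has a frame queued
            transmitted.append((holder, pending[holder].pop(0)))
        ring.rotate(-1)                      # pass the token to the next station
    return transmitted
```

Running it with three stations where only A and C have traffic shows the trade-off: C's second frame must wait a full circuit of the token, even though B is idle the whole time.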
Persuading the market to accept a new technology as a de facto standard requires volume. Token Ring is a great technology, but it lost out to Ethernet's immense edge in total number of LANs installed. From a market standpoint, Token Ring's need for large LAN sizes to pay off was probably fatal. Most LANs are small because most enterprises are small. Moreover, even big companies have mostly small LANs, whether in branch offices or even departments in large buildings. Remember, we're talking LAN segments in this context-the actual shared medium, not the "local network" that is a collection of all LAN segments attached to the LAN backbone.
Another problem is that Token Ring requires expensive electronics to operate its deterministic processes. If you think about it, it's only natural that manufacturing a NIC that transmits packets "at will" would be cheaper than making one to participate in an orderly regimen in which a token is required. Cost is one of the key reasons Token Ring lost out to Ethernet as the popular standard.
When Token Ring was introduced, it ran at 4 Mbps, but most LANs were upgraded to 16-Mbps media. If this seems slow compared to Fast Ethernet's 100-Mbps speeds, keep in mind that Token Ring yields far more effective bandwidth from rated wire speed. A 100-Mbps Token Ring specification was also offered to the public, but was not broadly deployed.
ATM (Asynchronous Transfer Mode) is a data-link network technology that, like Ethernet, Token Ring, and FDDI, is specified at layer 2 of the OSI model. But that's where the similarities end. ATM transmissions send 53-byte cells instead of packets. A cell is a fixed-length message unit. Like packets, cells are pieces of a message, but the fixed-length format gives them certain characteristics:
Virtual circuit orientation Cell-based networks run better in point-to-point mode, in which the receiving station is ready to actively receive and process the cells.
Speed The hardware knows exactly where the header ends and data starts in every cell, thereby speeding up processing operations. Currently, ATM networks run at speeds of up to 40 Gbps.
Quality of Service (QoS) Predictable throughput rates and virtual circuits enable cell-based networks to better guarantee service levels for priority traffic types.
ATM doesn't have a media access control technology, per se. ATM is a switching technology, in which a so-called virtual circuit is set up before a transmission starts. This differs sharply from LAN technologies such as Ethernet and Token Ring, which simply transmit a message without prior notification to the receiving host, leaving it up to switches and routers to figure out the best path to take to get there.
ATM cells are much smaller than Ethernet packets. Ethernet packet size can range from 64 bytes to over 1,500 bytes, making the message unit up to roughly 28 times larger than a 53-byte cell. By being so much more granular, ATM becomes that much more controllable.
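Each 53-byte cell carries 48 bytes of payload behind a 5-byte header, so a large packet is chopped into fixed 48-byte slices. The sketch below shows the idea; the zero-padding of the final slice is a simplification of what ATM's adaptation layers (AALs) actually do.

```python
def segment_into_cells(payload: bytes, cell_payload=48):
    """Chop a packet into fixed 48-byte payload chunks, zero-padding
    the last chunk; a 5-byte header on each would then yield the
    53-byte ATM cell."""
    cells = []
    for i in range(0, len(payload), cell_payload):
        chunk = payload[i:i + cell_payload]
        cells.append(chunk.ljust(cell_payload, b"\x00"))
    return cells

# A 1,500-byte Ethernet payload becomes 32 cells: 31 full plus 1 padded.
```

The fixed slice size is what makes ATM hardware fast: every cell boundary and header offset is known in advance.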
ATM is designed to run over fiber-optic cable operating under the SONET (Synchronous Optical Network) specification. SONET is an ANSI standard specifying the physical interfaces that connect to fiber-optic cable at various speeds. SONET specifications are defined for various cable speeds called optical carrier levels, or OC for short. The following is a list of commonly used speeds, past and present:
OC-1 52-Mbps fiber-optic cable
OC-3 155-Mbps fiber-optic cable
OC-12 622-Mbps fiber-optic cable
OC-24 1.2-Gbps fiber-optic cable
OC-48 2.5-Gbps fiber-optic cable
OC-96 4.9-Gbps fiber-optic cable
OC-192 10-Gbps fiber-optic cable
OC-256 13.27-Gbps fiber-optic cable
OC-768 40-Gbps fiber-optic cable
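The pattern in this list is no accident: every optical carrier level is a multiple of the OC-1 base rate of 51.84 Mbps, so the figures above are rounded versions of n * 51.84. A one-line sketch:

```python
def oc_rate_mbps(level: int) -> float:
    """SONET optical carrier rates are multiples of the OC-1
    base rate of 51.84 Mbps: OC-n runs at n * 51.84 Mbps."""
    return level * 51.84

# OC-3  -> 155.52 Mbps      OC-12  -> 622.08 Mbps
# OC-48 -> 2,488.32 Mbps    OC-192 -> 9,953.28 Mbps (~10 Gbps)
```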
Like token-passing architectures, ATM's deterministic design yields high effective bandwidth from its raw wire speed. In fact, ATM's effective yield is said to be well above even Token Ring's 75 percent. Most ATM backbone LANs run OC-3 or OC-12. Most intercity links run OC-12, although major Internet backbone providers are now wiring OC-48 and higher to meet ever-increasing bandwidth demands.
ATM is regarded by many as the answer to the Internet's bandwidth shortage. It's been installed by many enterprises to replace overtaxed FDDI backbones. Not only is ATM fast, but its inherent predictability lends itself to guaranteeing QoS, especially for multimedia applications, which by nature cannot tolerate latency. However, because of the need for LANE adaptation (an encapsulation technique that breaks Ethernet or Token Ring packets down into ATM cells), ATM has a relatively high cost per port. It also introduces a new protocol into the mix, thereby increasing complexity.
FDDI stands for Fiber Distributed Data Interface, a 100-Mbps protocol that runs over fiber-optic cable media. Like IBM's Token Ring, FDDI uses a token-passing architecture to control media access, yielding high effective bandwidth from its 100-Mbps wire speed. If you've never heard of FDDI, you're not alone. FDDI has experienced a relatively low public profile because it has traditionally been used for backbones, not access LANs.
For years, FDDI was the network technology of choice for backbone LANs. This could be partially attributed to its speed. During the era of its introduction, FDDI was the first major fiber-based network technology, and its 100-Mbps speed set the standard.
However, backbone LANs are mission-critical. If the backbone goes down, the access LANs can't internetwork. For this reason, the FDDI specification was designed from the ground up for guaranteed availability, with a physical architecture built on dual-redundant fiber-optic rings. Each station is connected to both rings, which has two effects:
The station can failover to the backup ring if the primary ring fails.
The station nearest the point of failure on the primary ring serves as a loop-back connector, effectively becoming a ring-connecting device that keeps the ring unbroken.
While each station should be connected to both rings for safety's sake, you certainly could have a single-ring deployment.
FDDI's architecture made it attractive for use as backbone LANs, especially for office campuses and other large area applications. The dual rings provide redundant paths. Under normal operation, the secondary ring sits idle, passing only enough frames to keep itself running. The secondary ring goes into action when the primary ring fails. (Failures are usually caused by a break in the fiber or a faulty NIC somewhere in the network.) As Figure 2-5 shows, FDDI isolates the damaged station by wrapping around to the secondary ring and looping back in the other direction, thus keeping the ring intact.
Figure 2-5: FDDI was the backbone choice for years because of its speed and redundancy
Because of its design, an FDDI network can have as much as 100 kilometers (about 60 miles) of fiber-optic cabling configured, a scale sometimes referred to as a metropolitan area network (MAN). The distance reach comes from the combined use of fiber-optic cabling and token-passing media access, both of which inherently support longer distances. In reality, though, most FDDI networks are located within a building or office campus. The few MANs that do exist generally belong to electrical utilities, which use them to centrally manage their power grids. FDDI is chosen for its fail-safe characteristics.
In an attempt to expand market acceptance, FDDI was adapted to run over unshielded twisted-pair media using a technology called CDDI (Copper Distributed Data Interface). CDDI is actually a trade name, not a standard, and Cisco acquired CDDI from Crescendo Communications in 1993. Even with Cisco's imprimatur, CDDI never really caught on as an alternative to Ethernet LANs. After a fairly long run of popularity as the backbone of choice with reliability-conscious large enterprises, FDDI itself has started to fade from the scene, with ATM and Gigabit Ethernet usurping it as the backbone technologies of choice.
On the LAN front, one of the more exciting and utilitarian technologies of recent years has been the development and growth of wireless networking. At first, clients were able to connect at modest 11-Mbps speeds. However, with recent advances in the technology, clients are able to connect at 54 Mbps, and faster speeds are in development.
There are now a variety of standards for wireless networking, including the ones everybody hears about and many have in their homes: 802.11b and 802.11g (802.11a is less common).
The first wireless LAN (WLAN) standard was devised in 1997 by the Institute of Electrical and Electronics Engineers (IEEE). It was slow by today's standards, running at a maximum of 2 Mbps, and it never really caught on, so the IEEE followed up in 1999 with another standard, referred to as 802.11b. This commonly deployed standard runs at 11 Mbps.
While working on 802.11b, the IEEE also came up with another specification called 802.11a. This standard runs in the 5-GHz range of the wireless spectrum, whereas 802.11b runs at 2.4 GHz.
Because it runs at a higher frequency, 802.11a can support speeds up to 54 Mbps. The downside is that high frequencies tend to be sensitive to obstructions like furniture, floors, walls, and even trees. So the trade-off is often faster speeds for less range.
More recently, the newer 802.11g standard was finally settled on. In a nutshell, it's like 802.11b, but faster. It operates on the 2.4-GHz part of the wireless spectrum, just like 802.11b, but provides speeds up to 54 Mbps (some vendors have "extended" the standard to support even higher speeds). As many may have noticed, 802.11g equipment can also support 802.11b if the connecting equipment can't run the newer "g" standard.
One of the most prevalent places where wireless is seen is in a LAN environment. Beyond the "hey, cool" factor of wireless (and it is pretty cool), wireless LANs offer the sheer functionality of being able to connect without having to string Cat 5 cabling all over the place. This is useful in old buildings that might not have suspended ceilings, or anywhere else stringing cables would be prohibitive.
Connectivity is made possible using a wireless adapter in a client computer and an access point (AP) connected to a LAN switch. Computers then use the wireless adapter to communicate with the AP and, through it, with the rest of the network.
Wireless isn't limited just to the LAN world. In fact, wireless WANs are becoming a great option for businesses, especially those with buildings within a few miles of each other.
For example, using a couple of Cisco Aironet 1300 bridges and directional antennas, you can connect two networks with a minimum of fuss. You needn't apply for special licenses for radio broadcasts and-even better-you don't have to pay for a leased T1 or T3 line.
Wireless networking is a rich subject-so rich, in fact, that we've dedicated an entire chapter to it. Flip ahead to Chapter 8 for more information about wireless technologies in general and Cisco's wireless products in particular.