7.3 Network Interfaces

This section discusses the physical layers that interconnect network devices; we'll focus primarily on Ethernet.

7.3.1 Ethernet

The origins of Ethernet date back to 1973, when it was invented by Bob Metcalfe [5] and David Boggs, then of Xerox PARC (Palo Alto Research Center). The first implementation ran at 3 Mb/s. About 5,000 machines were built with this interface; most were deployed within Xerox, although some went to other corporations (particularly Boeing), government, and academia. The initial Ethernet standard was developed by a consortium, formed in 1979, of Xerox, Digital Equipment Corporation, and Intel. Their work was sent to a newly formed IEEE working group, which was later named IEEE Project 802. This working group was divided into three groups:

[5] Who later went on to found 3Com Corporation and evangelize Ethernet as a multivendor standard.

  • The High Level Interface (HILI) group, focusing on internetwork protocols and management (which later became 802.1).

  • The Logical Link Control (LLC) group, which emphasized end-to-end connectivity and the media-independent layers (which later became 802.2).

  • The Data Link and Medium Access Control (DLMAC) group, responsible for the medium access protocol itself.

In 1982, the DLMAC group further split into three committees: 802.3 for Ethernet, driven by Xerox, DEC, and Intel; 802.4 for Token Bus, championed by Burroughs, Honeywell, and Western Digital; and 802.5 for Token Ring, which was the domain of IBM. The resulting 802.3 Ethernet standards committee developed a series of specifications for 10 Mb/s Ethernet to support different kinds of physical media. Initially, Ethernet ran on thick coaxial cable only; later, thin coaxial cable, unshielded twisted pair, and fiber optic cables were supported.

There was little initial emphasis on performance monitoring in the development of Ethernet, for two reasons. First, networks were generally small and traffic was nowhere near the bandwidth limits, so sophisticated performance monitoring was unnecessary. Second, it was technically difficult: implementing advanced metering and control capabilities would have added significant complexity and cost.

As it became apparent that the 10 Mb/s data rate was becoming too slow, an effort began to increase the data rate by an order of magnitude. However, due to political considerations, two groups formed with different approaches. The first group aimed to retain the existing protocol as much as possible, and make only minor interface changes, focusing on Category 5 UTP cable initially. This would leverage the "Ethernet" name as well as the large base of existing engineering expertise. This group, which received a huge marketing and public relations push, became formally known as the 802.3u Task Force. The other group wanted to define a new protocol, with the intent of building a more efficient transmission method, and focused on reusing existing Category 3 UTP cable. This group became the 802.12 Task Force. Both groups fought on the technical and marketing fronts during the standards development process, and eventually 802.3u and 802.12 were both passed as official standards by the IEEE Standards Board on June 13, 1995. Whatever advantage 802.12 may have held, the court of public opinion clearly favored 802.3u, and virtually all mainstream LAN vendors adopted the 802.3u standard.

It might be assumed that the adoption of 100 Mb/s networking products would have immediately killed the 10 Mb/s Ethernet marketplace. This was definitely not the case, for several reasons: the higher bandwidth was not required in all cases, there was a massive installed base of 10 Mb/s devices, and there was a substantial initial cost premium associated with the newer devices.

Shortly after the adoption of the Fast Ethernet standard, people began to be interested in increasing Ethernet performance by another order of magnitude. This need was driven by the widespread deployment of 100BASE-T networks, which put an immense load on backbone infrastructures. While there were existing technologies, such as ATM, that were capable of much higher data rates (622 Mb/s at the time; currently the figure is closer to 2.4 Gb/s), deploying them alongside the wildly popular Ethernet would have required changing frame formats, which is a very expensive thing to do quickly. An effort therefore began to evaluate 1 Gb/s networking within the Ethernet model. The standard was finalized in June of 1998, and is beginning to see deployment as of 2000.

7.3.1.1 Fundamentals of Ethernet signaling

In 10 Mb/s Ethernet, the actual signals placed on the wire use a technique known as Manchester encoding, which allows the clock signal and the data to be transmitted in one logical parcel. One way of thinking about this process is that it is like differential signaling (see Section 5.2.3.6), but using one wire instead of two. This parcel, formally called a bit-symbol, consists of the logical inverse of the encoded bit followed by the actual value of the encoded bit, so that there is always a signal transition in the middle of the bit-symbol. For example, the bit "0" would be encoded in Manchester as the bit-symbol "10." This seems silly, since it appears to double the amount of work required to send a bit of data, but just like differential signaling, it is useful in long-distance communications. Its biggest disadvantage is that it generates signal changes on the wire twice as fast as the data rate, which makes the transmission more susceptible to interference. As a result, 100 Mb/s Ethernet does not use Manchester encoding.
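Under the convention just described (the inverse of the bit, then the bit itself), the encoding is simple enough to sketch in a few lines of Python. This is purely an illustration of the rule; real transceivers do this in hardware.

    def manchester_encode(bits):
        """Expand 0/1 data bits into Manchester bit-symbols (inverse, then value)."""
        symbols = []
        for bit in bits:
            symbols.append(1 - bit)   # first half of the bit-symbol: logical inverse
            symbols.append(bit)       # second half: the bit itself
        return symbols

    def manchester_decode(symbols):
        """Recover the data bits by keeping the second half of each bit-symbol."""
        return [symbols[i + 1] for i in range(0, len(symbols), 2)]

    data = [1, 0, 1, 1, 0]
    encoded = manchester_encode(data)
    assert manchester_decode(encoded) == data
    print(data, "->", encoded)   # twice as many signal elements as data bits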

The data encapsulation for Ethernet is rather nontrivial. The data, which is a minimum of 46 bytes and a maximum of 1,500 bytes, [6] has a 22-byte header and a 4-byte trailing block attached. The contents of the header and trailer are described in Table 7-2.

[6] The MTU size of Ethernet (1,500 bytes) measures only the payload capacity of an Ethernet frame. The Ethernet headers are not counted against the 1,500-byte payload limit.

Table 7-2. Ethernet header

    Bits       Content
    0-55       Preamble
    56-63      Start Frame Delimiter
    64-111     Destination address
    112-159    Source address
    160-175    Type
    (varies)   Data
    Last 32    FCS

The preamble is a sequence of alternating ones and zeroes. This provides a single frequency on the network that the receiver can use to "lock on" to the incoming bit stream, and it is not passed through to the host system. It turns out that the preamble plays a critical role in collision detection: if two devices are transmitting at the same time, no matter how the waveforms line up, the interference can be detected, because the combined signal no longer matches the expected alternating pattern. The preamble is followed by a start-of-frame marker, which is called the Start Frame Delimiter (SFD). The destination address is the 48-bit hardware address of the destination (see Section 7.1 earlier in this chapter), and the source address is the 48-bit hardware address of the source. The type field describes the protocol the message is using.
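To make the layout in Table 7-2 concrete, here is a small Python sketch that assembles a frame in that order. The field widths come from the table; the particular preamble and SFD byte values, and the use of zlib's standard CRC-32 for the FCS, are assumptions about details the text does not specify, so treat the byte-level output as illustrative only.

    import struct
    import zlib

    def build_frame(dest, src, ethertype, payload):
        """Assemble preamble + SFD + addresses + type + data + FCS (illustrative)."""
        preamble = b"\xaa" * 7                  # bits 0-55: alternating ones and zeroes
        sfd = b"\xab"                           # bits 56-63: Start Frame Delimiter
        header = dest + src + struct.pack("!H", ethertype)   # 6 + 6 + 2 bytes
        if len(payload) < 46:                   # pad up to the 46-byte minimum
            payload = payload + b"\x00" * (46 - len(payload))
        fcs = struct.pack("<I", zlib.crc32(header + payload))  # assumed CRC-32 FCS
        return preamble + sfd + header + payload + fcs

    frame = build_frame(dest=b"\xff" * 6,                 # broadcast destination
                        src=b"\x00\x01\x02\x03\x04\x05",
                        ethertype=0x0800,                 # 0x0800 = IPv4
                        payload=b"hello")
    print(len(frame), "bytes on the wire")   # 22-byte header + 46 + 4-byte FCS = 72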

7.3.1.2 Topologies

There is a set of defined Ethernet topologies, each described by a specific nomenclature. The original standard used thick coaxial cable and was known as 10BASE5. The "10" encodes the network data rate in Mb/s, "BASE" refers to the use of a signaling method known as baseband, and the "5" gives the maximum segment length in 100-meter increments. The coaxial cable serves as a linear bus to which all nodes are connected; the center conductor is used to transmit the signal and the shield is used as a ground reference. The coaxial cable used in 10BASE5 is quite thick (about a centimeter) and not very flexible. The actual connection between a given node and the cable is made by an attachment unit interface (AUI) cable, which has a maximum length of 50 meters and uses the common 15-pin D-type connector. The AUI cable is connected to the medium attachment unit (MAU), which serves as the interface between the node and the coaxial medium. There are two methods for making this connection. The first method, known as an extrusive tap, involves the MAU clamping onto the thick coaxial cable and inserting a probe into it. [7] The second method is an intrusive tap, which involves cutting the coaxial cable and inserting the MAU in the middle. There may be no less than 2.5 meters between MAUs, and the distance between them must be a multiple of 2.5 meters. In today's environments, this topology is a nightmare to work with: minor changes can easily involve rerouting the cable, vampire tap mechanisms are notoriously difficult to get right and tend to fail, and intrusive taps involve network downtime. Despite its limitations, this topology can still be found.

[7] This is also known as a vampire tap.
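The nomenclature is regular enough that it can be decoded mechanically. The helper below is hypothetical and simply restates the naming convention described above; note that the segment-length digit is nominal (10BASE2, for instance, actually tops out at 185 meters).

    import re

    def decode_ethernet_name(name):
        """Split a name like '10BASE5' or '100BASE-TX' into its components."""
        rate, _, suffix = re.match(r"(\d+)(BASE)-?(\w+)", name).groups()
        info = {"data_rate_mbps": int(rate), "signaling": "baseband"}
        if suffix.isdigit():
            info["nominal_segment_m"] = int(suffix) * 100   # e.g., 10BASE5 -> 500 m
        else:
            info["medium"] = suffix                         # e.g., T = twisted pair
        return info

    print(decode_ethernet_name("10BASE5"))      # data rate 10 Mb/s, 500 m segments
    print(decode_ethernet_name("100BASE-TX"))   # data rate 100 Mb/s, medium 'TX'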

In order to alleviate many of the installation difficulties of 10BASE5, the 10BASE2 standard was defined to allow the use of thin (less than 5 mm) coaxial cable, commonly known as "Thinnet." The thin coaxial cable is more flexible and easier to work with, and the attachment is much simpler, requiring only a BNC connector. [8] However, the use of the lower-cost cable reduced the maximum cable length to 185 meters. Due to reliability problems, 10BASE2 was slowly eclipsed in the early 1990s by 10BASE-T.

[8] BNC is from Bayonet-Neill-Concelman.

7.3.1.3 10BASE-T

10BASE-T, more commonly known as "twisted-pair Ethernet," allows the use of standard Category 3 cable with two pairs (one pair for data transmission, one for reception; a total of four active wires). Because separate transmit and receive paths are implicit in such a cabling system, there must be some way for either end to make sure the link is physically connected. This is known as the Link Integrity test, and is accomplished by each end transmitting a link test pulse when there is no data to be sent. If a link test pulse is not received, the Ethernet interface will mark the line as link fail and (in theory) refuse to transmit data.

This topology, unlike 10BASE2 and 10BASE5, is not a linear bus; instead, it relies on a star topology, where each host in the network is connected to a central hub, which acts as a repeater. Each link may be up to 100 meters long (that is, the distance from any host to the hub must be no more than 100 meters).

7.3.1.4 100BASE-T4

The 100BASE-T4 protocol is the result of an effort to provide 100 Mb/s data rates over widely installed Category 3 wiring. It uses a star topology and requires the continuous use of four pairs of wire: three are used to transmit data, and the remaining pair listens for simultaneous activity from the other end of the link (which would indicate a collision). Because three of the four pairs are required to transmit data, 100BASE-T4 is a half-duplex medium only. One problem with this signaling scheme is a phenomenon known as pair skew: the signals sent on the three pairs spread apart slightly in time as they travel down the wire, because of small variations in the lengths of the pairs. While 100BASE-T4 hardware is designed to cope with this condition, it is important that all pairs for a particular cable reside in the same jacket. Although 100BASE-T4 is well supported, it is not widely used.

7.3.1.5 100BASE-TX

The overall topology of 100BASE-TX, which provides 100 Mb/s data rates over distances of up to 100 meters using Category 5 unshielded twisted-pair cable, is similar to that of the other twisted-pair standards. It differs only in the type of cable required and in some esoteric signaling mechanisms that we will not discuss here. 100BASE-TX has quickly become the dominant 100 Mb/s Ethernet technology in the marketplace.

7.3.1.6 Gigabit Ethernet topologies

There are currently four topology standards for Gigabit Ethernet: 1000BASE-SX, short-wavelength laser light over multi-mode fiber; 1000BASE-LX, long-wavelength laser light over multi-mode or single-mode fiber; 1000BASE-CX, short-haul copper; and 1000BASE-T, which uses Category 5 UTP cable. The difference between 1000BASE-SX and 1000BASE-LX is primarily one of distance, as illustrated in Table 7-3.

Table 7-3. Distance specifications for Gigabit Ethernet over fiber

    Type          MMF (50 μm core)   MMF (62.5 μm core)   SMF (10 μm core)
    1000BASE-SX   ~500 m             ~220 m               Not available
    1000BASE-LX   ~500 m             ~500 m               ~5 km

The intent of 1000BASE-CX is to cheaply enable interconnections between physically proximate devices, such as in a wiring closet. 1000BASE-CX has a range of approximately 25 meters. 1000BASE-T follows the same guidelines as 100BASE-T, but it suffers from technical complexity because of the huge impact of interference when using copper-based cables at such high data rates.

7.3.1.7 The 5-4-3 rule

There is an IEEE rule that describes how repeated segments can be attached to each other. This rule, called the 5-4-3 rule, defines the maximum size of an unrouted/unswitched network. It says that there can be no more than five repeated segments and no more than four repeaters between any two Ethernet hosts, and that of those five segments, only three may be populated (that is, have devices other than bridges attached to them). These restrictions can be very limiting. Since switches regenerate and reframe packets, this rule does not apply to them.
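A toy check of the rule as stated above might look like the following Python sketch. It validates a single repeater path between two hosts, not an entire topology, and the function name is purely illustrative.

    def check_5_4_3(segments_on_path, populated_flags):
        """segments_on_path: segments between the two hosts.
        populated_flags: one bool per segment, True if end stations attach to it."""
        repeaters = segments_on_path - 1        # a repeater joins adjacent segments
        populated = sum(1 for p in populated_flags if p)
        return segments_on_path <= 5 and repeaters <= 4 and populated <= 3

    # Five segments with only three populated: legal.
    print(check_5_4_3(5, [True, False, True, False, True]))   # True
    # Five segments with four populated: violates the "3" part of the rule.
    print(check_5_4_3(5, [True, True, True, False, True]))    # False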

7.3.1.8 Collisions

A collision occurs when two devices on the network sense that the network is idle and end up trying to send data at the same time. A collision may only be detected during the transmission of the preamble (see Section 7.3.1.1 earlier in this chapter). Since only one device can transmit at a time, both devices back off and try to retransmit. The retransmission algorithm mandates that each device wait a random amount of time, so the two are not likely to collide in their retransmission attempts. If a packet collides 16 consecutive times, it is aborted. Some collisions are normal on a repeated network, but excess collisions can cause serious performance problems. Unfortunately, there is no single good rule of thumb to say "this number of collisions is too many." As a general guideline, however, you can use Table 7-4.
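Before turning to the guideline numbers in Table 7-4, the retransmission behavior just described can be sketched. The scheme used by Ethernet hardware is commonly known as truncated binary exponential backoff; the 16-attempt limit matches the text, while the cap of 10 doublings and the 512-bit-time slot are assumptions about the standard algorithm, included only to make the sketch concrete.

    import random

    SLOT_TIME_BIT_TIMES = 512     # assumed slot time, in bit times
    MAX_ATTEMPTS = 16             # after 16 consecutive collisions the frame is dropped

    def backoff_delay(attempt):
        """Random delay (in slot times) to wait after the Nth consecutive collision."""
        k = min(attempt, 10)                    # the exponent is capped ("truncated")
        return random.randint(0, 2 ** k - 1)

    for attempt in range(1, MAX_ATTEMPTS + 1):
        delay = backoff_delay(attempt)
        print(f"collision #{attempt}: wait {delay} slot times "
              f"({delay * SLOT_TIME_BIT_TIMES} bit times)")
    # If the frame collides again on attempt 16, the interface gives up and
    # reports an excessive-collision error.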

Table 7-4. Collision rate guidelines

    Network type   Cautionary level   Severe level
    Unswitched     5-10%              15%
    Switched       2-4%               5%

Resolving severe collision problems on unswitched networks is best done by moving to a switched design. The most likely cause of high collision rates on switched networks is a duplex mismatch between the switch and a host; that is, the switch thinks that the link is full-duplex, but the host is running at half-duplex, or vice versa.

A related problem is a late collision, which happens when a collision occurs but is not detected in time for the usual collision resolution protocol to take place. This happens when the time required to propagate the signal across the network is longer than the time needed to push the data onto the wire, so the two devices responsible for the collision never see each other's data until they have put all of their own data on the network. Late collisions cause serious network performance problems, especially in NFS Version 2 environments. Late collisions are usually due to excessively long segments, faulty connectors, or defective devices.

The presence of late collisions on a network is a very strong indication of serious network problems. Normal collisions don't significantly impact performance until they exceed 10% on an unswitched network or 5% on a switched network.
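One way to apply the thresholds in Table 7-4 is to compute a collision rate from interface counters, such as the collision and output-packet counts your monitoring tools report. The sketch below assumes the rate is collisions per output packet, which is one common convention; check what your own tools actually measure before trusting the percentages.

    THRESHOLDS = {                       # (cautionary, severe), in percent
        "unswitched": (5.0, 15.0),
        "switched":   (2.0, 5.0),
    }

    def collision_severity(collisions, output_packets, network_type="switched"):
        if output_packets == 0:
            return "no traffic"
        rate = 100.0 * collisions / output_packets
        cautionary, severe = THRESHOLDS[network_type]
        if rate >= severe:
            return f"{rate:.1f}%: severe"
        if rate >= cautionary:
            return f"{rate:.1f}%: cautionary"
        return f"{rate:.1f}%: normal"

    # e.g., counters scraped from a switched host's interface statistics:
    print(collision_severity(collisions=1200, output_packets=20000))   # 6.0%: severe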

7.3.1.9 Autonegotiation

When 100 Mb/s Ethernet was introduced, there was an obvious requirement for vendors to support the older 10 Mb/s standard in the new devices, especially in adapter cards and switches. 100BASE-T defines an auto-negotiation [9] feature that allows devices to interoperate with both 10 Mb/s and 100 Mb/s installations. Auto-negotiation also determines the capability of the device as half or full duplex.

[9] Sometimes called auto-sensing, which is misleading.

Duplex Modes

The duplex mode of a network link describes whether both parties can transmit concurrently. In a full-duplex link, both sides can transmit concurrently, effectively doubling the throughput on a well-balanced link. [a] In a half-duplex link, only one device can transmit at once.

[a] The ratio of transmitted data to received data is approximately 1.

So, auto-negotiation asks two questions, to each of which there can be two answers:

  • What speed can the interface run at? (10 or 100 Mb/s)

  • What duplex mode can the interface use? (Full or half)

Unfortunately, while the auto-negotiation mechanism looks great on paper, it can have some implementation issues. Although the final 100BASE-T standard defines a way of performing auto-negotiation, this was one of the last sections to be completed, and many vendors had shipping 100 Mb/s implementations before the standard was finished.

Let's look at an example where auto-negotiation works properly. Here, both the local network interface and the upstream network interface (e.g., on a workgroup switch) have auto-negotiation enabled. When the interfaces detect the link, each one transmits a list of the modes in which it can operate. With this information, each side can independently pick the best common mode. The critical piece of information is that auto-negotiation is an active method of determining the mode for the link: each interface is expected to transmit the modes it can use in a specific format. If an interface does not receive this information, it assumes the other side is not an auto-negotiating device. The other important thing to understand is that auto-negotiation relies upon both devices picking the same choice from the list of common modes. This generally works, but it is not 100% reliable.
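The "pick the best common mode" step can be sketched as follows. Both sides must walk the same priority list for their independent choices to agree; the ordering used here (faster before slower, full duplex before half) reflects the usual 802.3 priority table, but treat it as an assumption of this sketch rather than a quotation from the standard.

    PRIORITY = [                      # highest preference first (assumed ordering)
        "100BASE-TX full-duplex",
        "100BASE-T4",
        "100BASE-TX half-duplex",
        "10BASE-T full-duplex",
        "10BASE-T half-duplex",
    ]

    def best_common_mode(local_advertised, remote_advertised):
        common = set(local_advertised) & set(remote_advertised)
        for mode in PRIORITY:
            if mode in common:
                return mode
        return None                   # no common mode: the link cannot come up

    local = {"100BASE-TX full-duplex", "100BASE-TX half-duplex", "10BASE-T half-duplex"}
    remote = {"100BASE-TX half-duplex", "10BASE-T full-duplex", "10BASE-T half-duplex"}
    print(best_common_mode(local, remote))   # 100BASE-TX half-duplex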

Many of the pre-standard interfaces use auto-sensing; they look at the incoming data and make a guess about what the other end of the link is capable of. There is no explicit communication between the interfaces to negotiate the link mode. Passively detecting the speed is easy: each Ethernet packet starts with a sequence of bits that alternate between 1 and 0, so to determine the link speed, the interface simply measures the time between these state changes. If an interface works from the fastest mode down, the other interface can pick the first speed it understands. Unfortunately, there is no reliable passive method of selecting a duplex mode. This problem is compounded by the fact that an Ethernet link (mostly) still functions even if the duplex modes are mismatched. [10]

[10] Simultaneous transmission in both directions is legal in full-duplex mode, but the same behavior is treated as a collision by a half-duplex interface.

In order to get around these problems, it is common practice to force an interface to a specific mode and not rely on any auto-negotiation or auto-sensing systems. This practice can cause problems of its own, however. The classic example is when the upstream interface is forced into 100 Mb/s full-duplex operation, and the workstation at the other end of the link settles into 100 Mb/s half-duplex mode and starts to generate a high number of network errors. The reason for this is that most new interfaces use auto-negotiation; if auto-negotiation is turned off on the upstream side, the host on the other side of the link will send its own auto-negotiation information, but the switch will not send any of its own, because its auto-negotiation functionality is disabled. The downstream host will then assume that the upstream interface is not capable of full-duplex mode, and will use passive auto-sensing methods to select the transfer rate.

As a result, you must either trust auto-negotiation to succeed, or you must hand-configure both sides of the link. This is especially true if you intend to use full-duplex mode: if one side of the link has auto-negotiation disabled, the other side will always find the right speed, but it will also default to half-duplex mode.

7.3.1.10 Displaying and setting modes

On Solaris systems with the hme Ethernet driver, [11] you can view the current status of the Ethernet interface by using ndd -get device flag:

[11] hme is actually short for Happy Meal Ethernet. The predecessor 10/100 card, which was called be , is short for BigMac Ethernet (not Broken Ethernet, as is the common joke).

    # ndd -get /dev/hme link_status
    1
    # ndd -get /dev/hme link_speed
    1
    # ndd -get /dev/hme link_mode
    1

The meanings of the returned values are defined in Table 7-5.

Table 7-5. ndd return values

    Flag          Meaning of "0"   Meaning of "1"
    link_status   Link down        Link up
    link_speed    10 Mb/s          100 Mb/s
    link_mode     Half-duplex      Full-duplex

The easiest way to change the configuration parameters of an interface is a utility called hmeconfig, which a coworker of mine wrote when we were at the University of Illinois. It is available from http://arrakis.cso.uiuc.edu/jak/code/hmeconfig.html and is self-documenting.

You can also use ndd to force the interface into a specific mode of operation. This can be done at runtime, as described in the next example, but please note that the order of these commands is important. For example, this command sequence will force an hme interface to 100 Mb/s full-duplex operation:

    # ndd -set /dev/hme instance 0
    # ndd -set /dev/hme adv_100T4_cap 0
    # ndd -set /dev/hme adv_100fdx_cap 1
    # ndd -set /dev/hme adv_100hdx_cap 0
    # ndd -set /dev/hme adv_10fdx_cap 0
    # ndd -set /dev/hme adv_10hdx_cap 0
    # ndd -set /dev/hme adv_autoneg_cap 0

If you want to preserve the changes across a reboot, you must change /etc/system:

    set hme:hme_adv_autoneg_cap=0
    set hme:hme_adv_100T4_cap=0
    set hme:hme_adv_100fdx_cap=1
    set hme:hme_adv_100hdx_cap=0
    set hme:hme_adv_10fdx_cap=0
    set hme:hme_adv_10hdx_cap=0

Each of the parameters has a clearly defined meaning, summarized in Table 7-6.

Table 7-6. hme network interface parameters

    Variable          Meaning [12]
    adv_autoneg_cap   Use auto-negotiation.
    adv_100T4_cap     The device is capable of using the T4 cabling standard.
    adv_100fdx_cap    The device is capable of running at 100 Mb/s full-duplex.
    adv_100hdx_cap    The device is capable of running at 100 Mb/s half-duplex.
    adv_10fdx_cap     The device is capable of running at 10 Mb/s full-duplex.
    adv_10hdx_cap     The device is capable of running at 10 Mb/s half-duplex.

[12] For all of these, a value of 0 means "off" and a value of 1 means "on."

For example, forcing an interface into 10 Mb/s half-duplex operation is a matter of setting all of these variables to zero except for adv_10hdx_cap, which should be set to 1.
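If you have to do this on more than one machine, it is straightforward to wrap the ndd sequence in a script. The Python sketch below is a hypothetical helper, not a supported tool: it assumes a Solaris host, root privileges, the hme driver, and the parameter names from Table 7-6.

    import subprocess

    CAPABILITIES = ["adv_100T4_cap", "adv_100fdx_cap", "adv_100hdx_cap",
                    "adv_10fdx_cap", "adv_10hdx_cap"]

    def force_mode(wanted, device="/dev/hme", instance=0):
        """Enable exactly one capability (e.g. 'adv_10hdx_cap') and disable the rest."""
        def ndd_set(param, value):
            subprocess.run(["ndd", "-set", device, param, str(value)], check=True)
        ndd_set("instance", instance)
        for cap in CAPABILITIES:
            ndd_set(cap, 1 if cap == wanted else 0)
        ndd_set("adv_autoneg_cap", 0)       # turn auto-negotiation off last

    # force_mode("adv_10hdx_cap")           # 10 Mb/s half-duplex, as described above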

You can also view the characteristics advertised by the other end of the connection by using the following commands:

    # ndd -get /dev/hme lp_autoneg_cap
    1
    # ndd -get /dev/hme lp_100fdx_cap
    1
    # ndd -get /dev/hme lp_100hdx_cap
    1
    # ndd -get /dev/hme lp_10fdx_cap
    1
    # ndd -get /dev/hme lp_10hdx_cap
    1

In this case, the upstream host is auto-negotiating and able to use all of the possible network modes.

Unfortunately, due to the vast array of different network interface cards supported by Linux, there is no generic method of forcing an interface. A careful reading of the documentation for your particular driver should elucidate how this can be done.

On multi-module Cisco switches running Version 5.5 of the Switching System, such as the Catalyst 5500 series, the show port status card/number command can be used to display the status of a particular port:

    Switch> (enable) show port status 2/6
    Port  Name               Status     Vlan       Level  Duplex Speed Type
    ----- ------------------ ---------- ---------- ------ ------ ----- ------------
     2/6  Sample Connection  connected  trunk      normal full     100 100BaseTX

The duplex mode of a port can be changed by using set port duplex card/number value, where value is either "half" or "full." Similarly, you can set the speed or auto-negotiation behavior of a port by using set port speed card/number value, where value is either "100," "10," or "auto." [13] Here's an example where we set port 3 on module 4 to run at 10 Mb/s in half-duplex mode:

[13] An interface set to auto-negotiate speed will also auto-negotiate its duplex mode.

    Switch> (enable) set port speed 4/3 10
    Port 4/3 speed set to 10 Mbps.
    Switch> (enable) set port duplex 4/3 half
    Port 4/3 set to half-duplex.

Workgroup-class Cisco switches -- such as the Catalyst 2900 and 3900 series -- use a different configuration interface that is very much like Cisco's IOS operating system, which runs on most of their router products. If you have one of these switches, you can change a port by using the following command sequence:

    Switch# configure terminal
    configure# interface interface
    configure# speed {10 | 100 | auto}
    configure# duplex {half | full | auto}
    configure# end

Changing these interface parameters usually involves the network interface being brought down and then up again. It can be dangerous to do this if you are connected to the system through the interface in question...

7.3.2 FDDI

The Fiber Distributed Data Interface (FDDI) was developed by ANSI in the mid-1980s in response to growing pressure on the capabilities of existing network technologies. It had become clear that a new medium was needed that could support the most recent high-performance workstations, as well as provide high levels of network reliability. These two characteristics -- performance and reliability -- shaped the new protocol.

FDDI is a 100 Mb/s architecture in which two continuous loops, or rings, connect to every host on the network. The rings are counter-rotating: traffic in each ring flows in the opposite direction from the other. One of these rings is dedicated as the primary, and during normal operation it carries all of the traffic, while the other ring remains idle. This provides a remarkable amount of fault tolerance.

If a station attached to both rings fails, or if the cable is damaged, the ring is automatically wrapped (that is, the "loose" ends on either side of the fault are "connected" by the FDDI hardware), which transforms the two rings into a single ring. Network performance is not affected. However, FDDI is only tolerant of a single failure: if two or more failures occur, the ring will segment into multiple, independent rings that cannot communicate. The solution to this is the optical bypass switch, which is, in essence, a device that sits on the ring in place of the host. The optical bypass relays the signals on the FDDI ring to the host by means of mirrors; in the event of failure, the optical bypass will pass the light through itself and so maintain ring integrity.

However, a device need not be on both rings. One unique characteristic of FDDI is that there are multiple ways to connect devices. There are three defined devices: a concentrator, a single-attachment station (SAS), and a dual-attachment station (DAS). A dual-attachment station has two ports and is attached to both rings. In some cases, if a dual-attachment station is powered off, the ring incurs a fault, but modern FDDI interfaces have built-in optical bypass switches.

A concentrator, which is also called a dual-attachment concentrator (DAC), connects to both rings of an FDDI network and ensures that the failure of any attached station does not compromise a ring. In essence, a concentrator is the equivalent of an Ethernet hub. A single-attachment station has a single port and connects to only one ring, through a concentrator; the primary advantage is that if the device is disconnected or powered off, the ring integrity is not compromised.

The FDDI frame format borrows heavily from the Token Ring frame format. An FDDI frame can be as large as 4,500 bytes. We won't discuss the protocol in detail, but the most important idea is that only one station (the one holding the "token") can transmit at a time. When a station is done transmitting, it passes the token along to the next host. FDDI also includes very powerful network management protocols.

FDDI, as the name implies, is usually run over optical fiber. However, the FDDI protocol can also be deployed over copper cables, formally known as the Twisted Pair Distributed Data Interface (TP-DDI) but more commonly referred to as the Copper Distributed Data Interface (CDDI). CDDI supports a distance from the concentrator to a host of about 100 meters, whereas FDDI over multi-mode fiber is capable of 2 kilometers; using single-mode fiber gives an even larger distance.

FDDI was used extensively in backbones in the early 1990s because it was extremely fast for the time, capable of running over long distances, and fault-tolerant. With the appearance of ATM and Fast Ethernet at a lower price point, however, it has fallen out of favor.

7.3.3 ATM

Another common high-performance network design, Asynchronous Transfer Mode (ATM), offers an intriguing vision of a truly universally integrated services network, in which voice, data, and video can all flow over a single cost-effective infrastructure. Unfortunately, ATM is not without its downsides, and it has largely been overshadowed by Gigabit Ethernet, which offers faster data transfer rates and much less complex configuration.

ATM is a connection-oriented technology (it establishes explicit connections, called circuits, between endpoints) that uses a cell of 53 bytes to transfer information. This cell consists of 5 header bytes and 48 bytes of data. [14] Despite what is implied by its name, ATM does not transfer packets asynchronously: cells are transmitted continuously, even when no data is being sent. When there is no traffic, each cell is filled with a specific bit pattern that indicates that the cell is empty. The "asynchronous" part comes about because there is no set time at which the first cell of a connection may start. The use of these small, fixed-length cells provides two definite advantages: it allows for simple, fast hardware switching, even at very high speeds, and it means that buffers can be allocated in exact segments, which prevents wasted memory.

[14] Some readers might wonder why the data size is 48 bytes. Recall that ATM was designed to support both data and voice communications. Rumour has it that this number was settled upon because it was the average of what the voice engineers wanted (32 bytes) and the data engineers wanted (at least 64 bytes). The usual conclusion to this rumour is ". . . so everyone got screwed in the end."
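The cell arithmetic implied by this format is easy to work out: every 53-byte cell carries at most 48 bytes of payload, so the 5-byte header imposes a fixed overhead of roughly 9.4% before any padding or higher-layer framing is counted. A small sketch:

    import math

    CELL_SIZE, PAYLOAD = 53, 48

    def cells_needed(data_bytes):
        return math.ceil(data_bytes / PAYLOAD)

    def wire_overhead(data_bytes):
        """Fraction of transmitted bytes that is not user data (headers + padding)."""
        cells = cells_needed(data_bytes)
        return 1 - data_bytes / (cells * CELL_SIZE)

    for size in (48, 1500):
        print(size, "bytes ->", cells_needed(size), "cells,",
              f"{wire_overhead(size):.1%} overhead")
    # 1500 bytes needs 32 cells; the partly filled last cell pushes the
    # overhead to about 11.6%.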

ATM can be run over many sorts of physical medium at varying data rates, as summarized in Table 7-7.

Table 7-7. ATM physical media for various data rates

    Data rate   Media
    25 Mb/s     Category 3, 4, and 5 UTP
    155 Mb/s    Category 5 UTP, multi-mode fiber
    622 Mb/s    Multi-mode and single-mode fiber

One of the chief benefits of ATM technology, at least on paper, is its extensive quality of service (QoS) support, which allows applications to reserve specific amounts of bandwidth and gives some applications priority handling of their cells.

There are three different approaches to transferring IP packets over ATM: classical IP over ATM, LAN Emulation (LANE), and Multiprotocol over ATM (MPOA). Classical IP maps IP addresses to ATM addresses, which allows ATM devices to carry IP traffic. Unfortunately, classical IP over ATM supports only IP proper (not IPX, AppleTalk, or any other network protocol), and it does not scale well. LANE takes a different approach: intended to accelerate the deployment of ATM, it attempts to make the ATM network invisible to legacy Ethernet LANs, so that the LANE interface looks like a standard Ethernet interface. Unfortunately, because LANE is so transparent, it loses the ability to use ATM's quality of service features.

7.3.4 Ethernet Versus ATM/FDDI

Before the widespread deployment of switched Fast Ethernet, ATM and FDDI had two big advantages: their performance decays more gracefully under load than that of repeated Ethernet, and they are collision-free. In the modern world, however, switched Ethernet networks remove almost all of these advantages. As a consequence, FDDI has fallen almost completely out of use, and ATM has been relegated to a niche market. From a price/performance perspective, Fast or Gigabit Ethernet is hard to beat. ATM does have specific technical advantages that may make it worthwhile in certain environments, however.


