6.1 Media Access Control Protocol

6.1.1 Frame Format

Figure 6.1 shows the format of an Ethernet frame as defined in the original IEEE 802.3 standard. An Ethernet frame starts with a preamble and ends with a Frame Check Sequence (FCS).

Figure 6.1. Ethernet Frame Format

Preamble (7 bytes) | SFD (1 byte) | Destination Address (6 bytes) | Source Address (6 bytes) | Length/Type (2 bytes) | LLC Data (0-1500 bytes) | Pad (0-46 bytes) | FCS (4 bytes)

The Preamble is a sequence of 56 bits with alternating 1 and 0 values used for synchronization. It gives components in the network time to detect the presence of a signal and to begin reading the signal before the frame data arrives. SFD stands for Start Frame Delimiter; it is an 8-bit sequence with the bit configuration 10101011 that marks the start of the frame. The Destination Address field identifies the station or stations that are to receive the frame, and the Source Address identifies the station that originated the frame. The 802.3 standard permits these address fields to be either 2 or 6 bytes in length, but virtually all Ethernet implementations in existence today use 6-byte addresses. A Destination Address may specify either an individual address, destined for a single station, or a multicast address, destined for a group of stations. A Destination Address of all 1 bits refers to all stations on the LAN and is called a broadcast address. The Length/Type field normally indicates the number of bytes in the subsequent LLC (Logical Link Control) Data field; if its value is greater than or equal to 1536 (0x0600 in hexadecimal), it instead identifies the protocol type carried in the LLC Data field. LLC Data contains the data transferred from the source station to the destination station or stations. The maximum size of this field is 1500 bytes. If this field is shorter than 46 bytes, the subsequent Pad field is used: extra bytes are appended to bring the frame up to its minimum length of 64 bytes, counted from the Destination Address field through the Frame Check Sequence.
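To make the padding rule concrete, the following MATLAB sketch (our illustration; the function name ethernet_frame_length is not from the standard or from the program attached at the end of this chapter) computes the pad length and resulting frame length for a given amount of LLC data, assuming 6-byte addresses.

    % Pad and frame length for a given LLC Data size, assuming 6-byte addresses.
    % Header = Destination Address (6) + Source Address (6) + Length/Type (2) = 14 bytes;
    % FCS = 4 bytes. LLC Data plus Pad must be at least 46 bytes so that the frame,
    % counted from the Destination Address through the FCS, is at least 64 bytes.
    function [padLen, frameLen] = ethernet_frame_length(dataLen)
        if dataLen > 1500
            error('LLC Data field is limited to 1500 bytes');
        end
        padLen   = max(0, 46 - dataLen);
        frameLen = 14 + dataLen + padLen + 4;   % Preamble and SFD are not counted
    end

For example, ethernet_frame_length(20) returns a pad of 26 bytes and a frame length of 64 bytes, the minimum frame size.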

FCS stands for Frame Check Sequence and contains a 4-byte Cyclic Redundancy Check (CRC) value used for error checking. When a source station assembles a frame, it performs a CRC calculation on all the bits in the frame from the Destination Address through the Pad field (that is, all fields except the Preamble, SFD, and FCS). The source station stores the resulting value in this field and transmits it as part of the frame. When the destination station receives the frame, it performs the same calculation. If the calculated value does not match the value in this field, the destination station assumes that an error occurred during transmission and discards the frame.

The operation of the CRC is defined by the following polynomial:

Equation 6.1

G(x) = x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1


Based on this polynomial, the CRC value is generated with the following procedure.

  1. The first 32 bits of the frame are complemented to avoid initial zeros normally found in the Destination Address.

  2. The n bits of the frame are considered to be the coefficients of a polynomial of degree n - 1.

  3. The degree of the frame polynomial is raised to n + 31 by multiplying the original polynomial by x^32.

  4. The frame polynomial is then divided by G(x) to produce a remainder of degree less than 32.

  5. The 32-bit remainder sequence is complemented to become the CRC.

Since the coefficients of these polynomials are binary, the Exclusive OR operation is used when remainders are calculated in the division process. The following expression shows a simple example of dividing a polynomial of degree 6, x^6 + x^4 + x + 1, by a polynomial of degree 5, x^5 + x^3 + x^2 + 1, using the Exclusive OR operation.

x^6 + x^4 + x + 1 = x (x^5 + x^3 + x^2 + 1) + (x^3 + 1), with addition performed modulo 2, so the quotient is x and the remainder is x^3 + 1.


This expression can also be shown in the binary format as follows:

1010011 XOR 1011010 = 0001001; that is, dividing 1010011 by 101101 gives a quotient of 10 and a remainder of 1001.


Since we are only interested in the remainder, the CRC operation can be implemented directly on the frame in binary form with the following procedure (a code sketch follows the list):

  1. Complement the first 32 bits of the frame and append 32 zero bits at its end (the multiplication by x^32 in the preceding procedure).

  2. Exclusive OR the first 33 bits of the result with the G(x) binary sequence 100000100110000010001110110110111.

  3. Discard the zeros to the left of the first 1 (counting from the left) and append the rest of the bit sequence (everything beyond the first 33 bits) to form a reduced-length sequence.

  4. If the new sequence contains 33 or more bits, repeat from Step 2.

  5. Pad the remainder with zeros on the left to make it a 32-bit sequence, and complement this sequence to obtain the CRC.
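The procedure above can be sketched in MATLAB as follows. This is our illustration rather than the program attached at the end of the chapter: the function name crc32_bits is assumed, the frame is expected as a row vector of 0s and 1s in the bit order defined by the standard, and the multiplication by x^32 is made explicit by appending 32 zero bits before dividing.

    % CRC computation following the bit-level division procedure described above.
    function crc = crc32_bits(frame)
        % Generator polynomial G(x) of Equation 6.1 as a 33-bit coefficient sequence
        G = [1 0 0 0 0 0 1 0 0 1 1 0 0 0 0 0 1 0 0 0 1 1 1 0 1 1 0 1 1 0 1 1 1];

        r = [frame, zeros(1, 32)];   % Step 1: append 32 zero bits (multiplication by x^32)
        r(1:32) = 1 - r(1:32);       %         and complement the first 32 bits of the frame

        % Steps 2 to 4: Exclusive OR with G(x) aligned under each leading 1
        for k = 1:length(r) - 32
            if r(k) == 1
                r(k:k+32) = xor(r(k:k+32), G);
            end
        end

        % Step 5: the last 32 bits are the remainder; complement them to obtain the CRC
        crc = 1 - r(end-31:end);
    end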

6.1.2 Carrier Sense Multiple Access/Collision Detection

The CSMA/CD protocol of Ethernet belongs to the family of random access techniques. Other members of this family include the Aloha and Slotted Aloha protocols. Aloha was developed before Ethernet and inspired its creation. Let us first examine the features and performance characteristics of the Aloha and Slotted Aloha protocols as preparation for a better understanding of the CSMA/CD technique. The campuses of the University of Hawaii were scattered across several islands, and communication between computers at different campuses was carried out through radio transmission stations. The Aloha protocol was developed in the early 1970s to enable multiple campuses to share the same radio transmission medium. With the Aloha protocol, a station is allowed to access the radio transmission medium whenever it has data to transmit. Because of the threat of collision, each station must either monitor its own transmission or await an acknowledgment from the destination station. By comparing the transmitted packet with the received packet, or by noting the lack of an acknowledgment, the transmitting station can determine whether the transmission succeeded. If the transmission was unsuccessful, the packet is resent after a random amount of time to reduce the probability of another collision.

In a centrally managed communication system such as the telephone network, the available capacity is not always utilized 100 percent. Instead, extra capacity is built in to handle peak traffic loads. Sometimes, however, the demand can still exceed the total available capacity (for example, the calling traffic on Mother's Day), and many callers experience busy signals. Similar behavior can be observed in a random access system; instead of a busy tone, transmissions are blocked and delayed to avoid excessive collisions [1]. To analyze the behavior of the Aloha protocol, let there be N stations contending for the use of the channel. Each station transmits λ packets per second on average. For simplicity, we also assume that each packet takes m units of time to transmit. The traffic intensity can then be expressed as

Equation 6.2

S = N λ m


Because of collisions, the average number of packets each station must transmit, including retransmissions, becomes λ' > λ. The traffic intensity with collisions can be expressed by

Equation 6.3

G = N λ' m


The ratio S/G represents the fraction of packets transmitted without collision. A packet of length m collides with any packet whose transmission starts within a window of 2m around its own start time, so for Poisson arrivals the probability of no collision is e^(-2G). Therefore, we have the following relationship describing the behavior of the Aloha protocol:

Equation 6.4

S = G e^(-2G)


Setting the derivative of Equation 6.4 to zero shows that the maximum occurs at G = 0.5, where S = 0.5 e^(-1), or about 0.184; that is, at most about 18.4 percent of the available transmission throughput can be used by newly generated packets.
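As a quick numerical check of this maximum (our own sketch, not part of the chapter's attached program), Equation 6.4 can be evaluated over a range of offered loads:

    % Throughput of the pure Aloha protocol, S = G*exp(-2G) (Equation 6.4),
    % evaluated over a range of offered loads G to locate the maximum.
    G = 0:0.001:3;            % normalized offered load
    S = G .* exp(-2 * G);
    [Smax, idx] = max(S);
    fprintf('Maximum throughput S = %.3f at G = %.2f\n', Smax, G(idx));
    % Prints: Maximum throughput S = 0.184 at G = 0.50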

By making a small restriction on the transmission freedom of the individual stations, the throughput of the Aloha protocol can be doubled. The transmission time is broken into slots of length m, and stations are only allowed to begin transmitting at slot boundaries. When packets collide, they overlap completely instead of partially. This changes the probability of no collision to e^(-G), and the resulting protocol has come to be known as Slotted Aloha. The traffic behavior of Slotted Aloha is described by

Equation 6.5

S = G e^(-G)


Here the maximum occurs at G = 1, where S = e^(-1), or about 0.368; about 36.8 percent of the available transmission throughput can be used by newly generated packets. Compared with the Aloha protocol, however, Slotted Aloha requires synchronization among all stations. The preceding expressions show only the general behavior of these two protocols; detailed studies can be conducted with computer simulation. We can write a simulation program with the following inputs: number of stations, physical transmission throughput, average packet length, and average packet arrival rate. With additional specifications of protocol type and simulation running time, we can observe the number of collisions, the actual transmission throughput, and the transmission delay under different input values. Figure 6.2 shows simulated results plotted against the analytical expressions, for the Aloha protocol on the left and Slotted Aloha on the right (a simplified simulation sketch is given after the figure). The efficiency of Aloha tends to improve when the random waiting-time window is expanded.

Figure 6.2. Aloha Analytical and Simulation Results

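The following MATLAB sketch illustrates the kind of simulation described above for Slotted Aloha. It is a simplified illustration rather than the chapter's attached program: the number of stations is assumed, each station transmits independently in each slot with probability G/N, a slot is counted as successful only when exactly one station transmits, and backlog and retransmission delay are ignored.

    % Simplified Slotted Aloha simulation: throughput versus offered load.
    nSlots   = 1e5;             % number of simulated slots
    nStation = 50;              % number of contending stations (assumed)
    Gvals    = 0.1:0.1:3.0;     % offered loads, in packets per slot
    Ssim     = zeros(size(Gvals));
    for i = 1:length(Gvals)
        p  = Gvals(i) / nStation;              % per-station transmit probability
        tx = rand(nSlots, nStation) < p;       % which stations transmit in each slot
        Ssim(i) = mean(sum(tx, 2) == 1);       % success only when exactly one transmits
    end
    plot(Gvals, Ssim, 'o', Gvals, Gvals .* exp(-Gvals), '-');
    xlabel('Offered load G'); ylabel('Throughput S');
    legend('Simulation', 'Analytical G e^{-G}');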

The operation of the CSMA/CD protocol can be explained by first looking at the basic rules for transmitting an Ethernet frame. Before transmission, the station monitors the transmission medium for a carrier. If a carrier is detected, the transmission is deferred, and the station continues to monitor the network until the carrier ceases. If no carrier is detected, and the period of no carrier has lasted at least the interframe gap (IFG), the station immediately begins transmission of the frame. The IFG provides a brief recovery time between frames to allow devices to prepare for reception of the next frame. While sending the frame, the transmitting station monitors the medium for a collision. If a collision is detected, the transmitting station stops sending the frame data and sends a 32-bit jam sequence. If the collision is detected very early in the frame transmission, the transmitting station completes sending the frame preamble before starting transmission of the jam sequence. The jam sequence ensures that the duration of the collision is long enough to be noticed by the other transmitting stations. After sending the jam sequence, the transmitting station waits a random period of time, chosen with a random number generator, before restarting the transmission process from the carrier-sense step. Having the colliding stations wait random periods of time reduces the probability of a repeated collision.

The CSMA/CD protocol is similar to Aloha and Slotted Aloha in that all of them back off and retransmit when a collision is detected. However, the details are quite different. CSMA/CD does not use synchronized slot boundaries to start a transmission. Instead, a slot time of 512 bits, equal to the minimum size of an Ethernet frame, is defined for transmission throughputs of 10 and 100 Mbps; the slot time is 4096 bits for Gigabit Ethernet. The slot time is chosen so that a collision can be detected even between stations located farthest apart at opposite ends of the transmission medium. Sensing the carrier before transmitting and detecting collisions during transmission make CSMA/CD an efficient random access protocol: many collisions are avoided altogether, and a transmission is aborted almost as soon as a collision is detected. In addition, the backoff procedure of Ethernet reduces the chance of repeated collisions. The backoff algorithm implemented in Ethernet is known as truncated binary exponential backoff. Following a collision, each station generates a random number that falls within a specified range of values and waits that number of slot times before attempting retransmission. The range of values doubles after each failed retransmission, up to a maximum, which is why the algorithm is called truncated (a sketch of the backoff computation follows).
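The backoff computation can be sketched as follows. The function name backoff_slots is ours, and the limits (the range is truncated at 2^10, and a frame is discarded after 16 attempts) follow the commonly cited 802.3 rule.

    % Truncated binary exponential backoff after the n-th consecutive collision.
    function k = backoff_slots(n)
        if n > 16
            error('Excessive collisions: the frame is discarded');
        end
        limit = 2^min(n, 10);          % the range doubles each time, truncated at 2^10
        k = randi([0, limit - 1]);     % number of slot times to wait, chosen uniformly
    end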

The transmission efficiency of the CSMA/CD protocol can be analyzed by comparing the average message length m, expressed in units of time, with the average time it takes a message to pass through the transmission medium in the presence of collisions. For a message of length m, it takes a time t to travel from a transmitter to a receiver, 2t to detect a collision, and on average 2nt to resolve the contention, where n is the average number of contention periods, each lasting 2t, that precede a successful transmission. Therefore, the average time for a message to pass through the transmission medium successfully is tv = m + t + 2nt = m[1 + a(1 + 2n)], where a = t/m is the ratio of the transmission delay to the message length. It has been found that the value of n approaches e as the number of contending stations grows. The transmission efficiency is then expressed by

Equation 6.6

S = m / tv = 1 / [1 + a(1 + 2e)] ≈ 1 / (1 + 6.44a)


This shows that, for a given network topology, the transmission efficiency of the CSMA/CD protocol depends on the packet length. For an average packet size of 800 bytes, or 6400 bits, and a transmission delay of 200 bits, the transmission efficiency is 0.832. When the average packet length decreases to 200 bytes, or 1600 bits, the transmission efficiency drops to about 0.55. Equation 6.6 shows the general behavior of the CSMA/CD protocol; detailed studies can also be conducted with computer simulation. Figure 6.3 shows some simulated Ethernet traffic efficiency results obtained using the MATLAB program attached at the end of this chapter. In this simulation, a collision can be detected after one simulation iteration, which corresponds to about 1000 bits; the transmission delay is therefore t = 500 bits, giving a = t/m = 0.0625, and the transmission efficiency is S = 1/(1 + 6.44a) = 0.713.
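Equation 6.6 can be evaluated directly for the examples above (a small check of the arithmetic, not part of the chapter's attached program):

    % Transmission efficiency of CSMA/CD from Equation 6.6, S = 1/(1 + (1 + 2e)a).
    S = @(a) 1 ./ (1 + (1 + 2*exp(1)) .* a);   % 1 + 2e is approximately 6.44

    S(200/6400)   % 800-byte (6400-bit) packets, 200-bit delay: about 0.83
    S(200/1600)   % 200-byte (1600-bit) packets, 200-bit delay: about 0.55
    S(0.0625)     % the simulation of Figure 6.3: about 0.71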

Figure 6.3. Ethernet Analytical and Simulation Results


This analysis is derived for a bus topology, where every transceiver is directly connected to the same transmission medium. For twisted pair based Ethernet, the transmission is relayed through a centralized repeater called a hub. A conventional hub repeats a message received on one port to all the other ports, so every port can sense whether the hub, and hence the shared medium, is busy. If the hub receives messages from more than one port at the same time, it transmits the jam sequence to every port to emulate a collision on the bus topology. The traffic behavior of twisted pair based Ethernet with a conventional hub is therefore the same as that derived for the bus topology.

Since the ports of a hub are not physically connected to each other, traffic between different transceivers can be intelligently controlled. Collisions can be avoided if messages from different ports to the same destination port are queued and multiple connections are established simultaneously between pairs of transceivers. This leads to the general idea of a switched hub. There is no standard regulating the detailed architecture of a switched hub, but if implemented properly, a switched hub can interoperate with conventional twisted pair based Ethernet transceivers and significantly improve transmission efficiency. A switched hub can bring the transmission efficiency close to 1 on each individual port, and its aggregate throughput can exceed that of a single shared medium.


