6.1 Media Access and Control Protocol
6.1.1 Frame Format
Figure 6.1 shows the format of an Ethernet frame as defined in the original IEEE 802.3 standard. An Ethernet frame starts with a preamble and ends with a Frame Check Sequence (FCS).
Figure 6.1. Ethernet Frame Format
The Preamble is a sequence of 56 bits with alternating 1 and 0 values that are used for synchronization. They give stations
in the network time to detect the presence of a signal, and to begin reading the signal before the frame data arrives.
SFD stands for Start Frame Delimiter; it is a sequence of 8 bits with the bit configuration 10101011 that indicates the start of the frame. The Destination Address field identifies the station or stations that are to receive the frame. The Source Address identifies the station that originated the frame. The 802.3 standard allows these address fields to be either 2 bytes or 6 bytes in length, but virtually all Ethernet implementations in existence today use 6-byte addresses. A Destination Address may specify either an individual address destined for a single station or a multicast address destined for a group of stations. A Destination Address of all 1 bits refers to all stations on the LAN and is called a broadcast address. The Length/Type field normally indicates the number of bytes in the LLC (Logical Link Control) Data field. The Length/Type field can also indicate the protocol type of LLC Data if its value is equal to or larger than 1536 (0600 in hexadecimal). LLC Data contains the data transferred from the source station to the destination station or stations. The maximum size of this field is 1500 bytes. If the size of this field is less than 46 bytes, the subsequent Pad field must be used to bring the frame size up to the minimum length: extra data bytes are appended in the Pad field as necessary to bring the frame length up to its minimum size. The minimum Ethernet frame size is 64 bytes, counted from the Destination Address field through the Frame Check Sequence.
FCS stands for Frame Check Sequence and contains a 4-byte Cyclic Redundancy Check (CRC) value used for error checking. When a source station assembles a frame, it performs a CRC calculation on all the bits in the frame from the Destination Address through the Pad fields (that is, all fields except the Preamble, SFD, and FCS). The source station stores the value in this field and transmits it as part of the frame. When the frame is received by the destination station, it performs an identical check. If the calculated value does not match the value in this field, the destination station assumes that an error has occurred during transmission and discards the frame.
The operation of the CRC is defined by the following degree-32 generator polynomial:

G(x) = x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1
Relying on this polynomial, the CRC values are generated with the following procedure.
1. The first 32 bits of the frame are complemented to avoid the initial zeros normally found in the Destination Address.
2. The n bits of the frame are taken to be the coefficients of a polynomial of degree n - 1.
3. The degree of the frame polynomial is raised to n + 31 by multiplying the original polynomial by x^32.
4. The frame polynomial is then divided by G(x) to produce a remainder of degree less than 32.
5. The remainder sequence of 32 bits is complemented to become the CRC.
Since the coefficients of these polynomials are binary, the Exclusive OR operation is used when remainders are calculated in the division process. The following expression shows a simple example of dividing a polynomial of degree 6, x^6 + x^4 + x + 1, by a polynomial of degree 5, x^5 + x^2 + 1, using the Exclusive OR operation:

x^6 + x^4 + x + 1 = x(x^5 + x^2 + 1) + (x^4 + x^3 + 1)

This expression can also be shown in binary format: dividing 1010011 by 100101 leaves the remainder 11001.
Since we are only interested in the remainder, the operation of the CRC can be implemented directly on the binary-format frame (extended with 32 zero bits at the right, corresponding to the multiplication by x^32) with the following procedure:

Step 1. Complement the first 32 bits of the frame.
Step 2. Exclusive OR the first 33 bits of the frame with the 33-bit binary sequence 100000100110000010001110110110111.
Step 3. Throw away the zeros to the left of the first one (counting from the left) and combine the remaining binary sequence with the rest of the frame (the frame before Step 2 minus 33 bits) to form a reduced-length frame.
Step 4. If the number of bits in the new frame is larger than or equal to 33, start from Step 2 again.
Step 5. Fill in zeros at the left side to make the remainder a 32-bit binary sequence, and complement this sequence to obtain the CRC.
6.1.2 Carrier Sense Multiple Access/Collision Detection
The CSMA/CD protocol of Ethernet belongs to the random access group of techniques. Other protocols in this group include the Aloha and Slotted Aloha protocols. Aloha was developed before Ethernet and inspired its creation. Let us first examine the features and performance characteristics of the Aloha and Slotted Aloha protocols as preparation for a better understanding of the CSMA/CD technique. The campuses of the University of Hawaii were scattered over many different islands. The communication between computers at different campuses was carried out through radio transmission stations. The Aloha protocol was developed in the early 1970s to enable multiple campuses to share the same radio transmission medium. With the Aloha protocol, stations are allowed to access the radio transmission medium whenever they have data to transmit. Because the threat of data collision exists, each station must either monitor its transmission or await an acknowledgment from the destination station. By comparing the transmitted packet with the received packet, or by the lack of an acknowledgment, the transmitting station can determine the success of the transmitted packet. If the transmission was unsuccessful, the packet is retransmitted after a random amount of time to reduce the probability of recollision.
In a managed communication system such as the telephone network, the available capacity is not always 100% utilized. Instead, extra capacity is built in to handle peak traffic loads. However, sometimes the demand can still exceed the total available capacity (e.g., the calling traffic on Mother's Day), so that many callers experience busy signals. Similar behavior can also be observed in a random access system. Instead of a busy tone, transmissions can be blocked and delayed to avoid further collisions.

To analyze the behavior of the Aloha protocol, let there be N stations contending for the use of the channel. Each station transmits λ packets per second on average. For simplicity, we also assume that each packet has a length of t in units of time. The traffic intensity then can be expressed as

S = Nλt

Because of collisions, the average number of packets to be transmitted from each station becomes λ/P. The traffic intensity with collision can be expressed by

G = S/P

where P represents the fraction of messages transmitted without collision. The probability of no collision for packets of length t and Poisson arrival rate G/t is P = e^(-2G). Therefore, we have the following relationship describing the behavior of the Aloha protocol:

S = Ge^(-2G)

We can find that the maximum traffic intensity is about 0.184 of the available transmission throughput (the maximum occurs at G = 1/2, where S = 1/(2e)) as far as each newly generated packet is concerned.
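The 0.184 figure is easy to verify numerically. The short sketch below assumes the pure Aloha relationship S = Ge^(-2G) and scans the offered load G for the maximum (the function name is ours):

```python
import math

def aloha_throughput(G):
    """Pure Aloha throughput: S = G * exp(-2G)."""
    return G * math.exp(-2.0 * G)

# Scan the offered load G for the largest achievable throughput S.
best_S = max(aloha_throughput(g / 1000.0) for g in range(1, 2001))
# The maximum occurs at G = 1/2, where S = 1/(2e), about 0.184.
```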
By making a small restriction on the transmission freedom of the individual stations, the throughput of the Aloha protocol can be doubled. The transmission time is broken into slots of length t. Stations are only allowed to transmit at slot boundaries. When packets collide, they overlap completely instead of partially. This has the effect of changing the probability of no collision to P = e^(-G), and the resulting protocol has come to be known as Slotted Aloha. The traffic behavior of Slotted Aloha is described by

S = Ge^(-G)

We can find that the maximum traffic intensity is about 0.368 of the available transmission throughput (the maximum occurs at G = 1, where S = 1/e) as far as each newly generated packet is concerned. Compared with the Aloha protocol, however, synchronization is now required among all stations. The above
expressions only show the general behaviors of these two protocols. Detailed studies can be conducted with computer simulation. We can write a computer simulation program with the following inputs: number of stations, physical transmission throughput, average length of packets, and average packet arrival rate. With additional specifications of protocol type and simulation running time, we can observe simulated numbers of collisions, actual transmission throughput, and transmission delay under different input values. Figure 6.2 shows simulated results plotted against the analytical expressions, for the Aloha protocol on the left and Slotted Aloha on the right. The efficiency of Aloha tends to improve when the random waiting time window is expanded.
Figure 6.2. Aloha Analytical and Simulation Results
The operation of the CSMA/CD protocol can be explained by first looking at the basic rules for transmitting an Ethernet frame. Before transmission, the station monitors the transmission medium for a carrier. If a carrier is detected, the transmission is deferred. The station continues to monitor the network until the carrier ceases. If a carrier is not detected, and the period of no carrier is equal to or greater than the interframe gap (IFG), the station immediately begins transmission of the frame. The IFG provides a brief recovery time between frames to allow devices to prepare for the next frame. While sending the frame, the transmitting station monitors the medium for a collision. If a collision is detected, the transmitting station stops sending the frame data and sends a 32-bit jam sequence. If the collision is detected very early in the frame transmission, the transmitting station completes sending of the frame preamble before starting transmission of the jam sequence. The jam sequence is transmitted to ensure that the length of the collision is sufficient to be noticed by the other transmitting stations. After sending the jam sequence, the transmitting station waits a random period of time chosen using a random number generator before restarting the transmission process. The probability of a repeated collision is reduced by having the colliding stations wait a random period of time before retransmitting.
The CSMA/CD protocol is similar to Aloha and Slotted Aloha in that all of them back off and retransmit if a collision is detected. However, the details are quite different. CSMA/CD does not follow a synchronized slot time to start transmission. On the other hand, a slot time of 512 bits is defined as the minimum size of an Ethernet frame for transmission throughputs of 10 and 100 Mbps. The slot time is 4096 bits for Gigabit Ethernet. The size of the slot time is designed such that collisions can be detected between stations located furthest away at opposite ends of the transmission medium. The carrier sense before transmission makes CSMA/CD an efficient random access protocol because it avoids some avoidable collisions and ends a transmission almost immediately after a collision is detected. In addition, the backoff procedure of Ethernet also reduces the chance of repeated collisions. The backoff algorithm implemented in Ethernet is known as truncated binary exponential backoff. Following a collision, each station generates a random number that falls within a specified range of values. It then waits that number of slot times before attempting retransmission. The range of values grows exponentially after each failed retransmission.
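A minimal sketch of truncated binary exponential backoff in Python (the function name is ours; the cap of 10 doublings and the 16-attempt limit are the values used in 802.3 practice):

```python
import random

BACKOFF_LIMIT = 10   # the window stops doubling after 10 collisions
ATTEMPT_LIMIT = 16   # the frame is discarded after 16 failed attempts

def backoff_slots(collisions, rng=random):
    """Slot times to wait after the n-th successive collision of one frame.

    The station draws uniformly from 0 .. 2^k - 1 with k = min(n, 10);
    'truncated' refers to the window no longer growing past 2^10 - 1.
    """
    if collisions > ATTEMPT_LIMIT:
        raise RuntimeError("excessive collisions, frame discarded")
    k = min(collisions, BACKOFF_LIMIT)
    return rng.randrange(2 ** k)
```

After the first collision a station waits 0 or 1 slot times; after the third, 0 to 7; from the tenth collision on, 0 to 1023.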
The transmission efficiency of the CSMA/CD protocol can be estimated by comparing a message of average length t_m, in units of time, with the average time it takes such a message to pass through the transmission medium under a given probability of collisions. For a message of length t_m, it takes time t_d to travel from a transmitter to a receiver, and it can take up to 2t_d to detect a collision. Furthermore, it takes on average about e (≈ 2.718) contention intervals of length 2t_d to resolve a collision. Therefore, the average time for a message to pass the transmission medium successfully is t_m(1 + a + 2ea), where a = t_d/t_m is the ratio of transmission delay to message length. It has been found that the value of 1 + 2e is about 6.44. The transmission efficiency is then expressed by

E = t_m / [t_m(1 + a + 2ea)] = 1/(1 + 6.44a)     (6.6)
This shows that the transmission efficiency of the CSMA/CD protocol depends on the packet length for a given network topology. For an average packet size of 800 bytes, or 6400 bits, and a transmission delay of 200 bits, the transmission efficiency is 0.832. When the average packet length decreases to 200 bytes, or 1600 bits, the transmission efficiency becomes 0.554. Expression 6.6 shows the general behavior of the CSMA/CD protocol. Detailed studies can also be conducted with computer simulation. Figure 6.3 shows some simulated Ethernet traffic efficiency results obtained using the MATLAB program attached at the end of this chapter. In this simulation, a collision can be detected after one simulation iteration, which is about 1000 bits; the one-way transmission delay is therefore t_d = 500 bits. With an average packet length of 8000 bits, a = 500/8000 = 0.0625, and the transmission efficiency is E = 1/(1 + 6.44a) = 0.713.
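The arithmetic in these examples follows directly from Expression 6.6 and can be checked with a short Python sketch (the function name is ours):

```python
def csmacd_efficiency(frame_bits, delay_bits):
    """CSMA/CD transmission efficiency, Expression 6.6: E = 1/(1 + 6.44a)."""
    a = delay_bits / frame_bits   # a = ratio of transmission delay to message length
    return 1.0 / (1.0 + 6.44 * a)

# 800-byte (6400-bit) frames with a 200-bit delay, and the simulated
# case with t_d = 500 bits, a = 500/8000 = 0.0625:
print(round(csmacd_efficiency(6400, 200), 3))   # 0.832
print(round(csmacd_efficiency(8000, 500), 3))   # 0.713
```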
Figure 6.3. Ethernet Analytical and Simulation Results
This analysis is derived for a bus topology in which every transceiver is directly connected to the same transmission medium. For twisted pair-based Ethernet, the transmission is relayed through a centralized repeater called a hub. A conventional hub repeats a message it receives on one port to all of the other ports. Every port thus has the opportunity to sense whether the hub, and hence the shared medium, is busy. If the hub receives messages from more than one port at the same time, it transmits the jam sequence to every port to emulate a collision on the bus topology. The traffic behavior of twisted pair-based Ethernet with a conventional hub is therefore the same as that derived for the bus topology.
Since the ports of a hub are not physically connected to each other, traffic between different transceivers can be intelligently controlled. Collisions can be avoided if messages from different ports to the same destination port are queued and multiple connections are established simultaneously between pairs of transceivers. This leads to the general idea of a switched hub. There is no standard regulating the detailed architecture of a switched hub. If implemented properly, a switched hub should be able to talk to conventional twisted pair-based Ethernet transceivers and make a significant improvement in transmission efficiency. A switched hub can bring the transmission efficiency close to 1 on each individual port and above 1 in the aggregate.