9.3. Data Link Layer Design

The data link layer moves a frame from a transmitting node to one or more receiving nodes that are within the radio range as determined by the physical layer. The data link layer is commonly defined as having two sublayers, the logical link control (LLC) sublayer and the medium access control (MAC) sublayer. The MAC sublayer allows multiple devices to share a single medium, while the LLC is responsible for realizing a point-to-point link between endpoints and can, optionally, provide error detection and control functions. The MAC is a particularly important component of the data link layer as it is an important factor in determining the performance of communication between adjacent nodes. MAC techniques are generally well understood, but UWB introduces some new constraints and changes some underlying assumptions, as discussed in this section.

9.3.1. Objectives of the Data Link Layer

In general, a data link layer should meet the following four key objectives:

  1. Reliable data delivery, including addressing and framing, via a single communication channel, as provided by the physical layer protocol.

  2. Point-to-point flow control to prevent buffer overflow at the receiver.

  3. Power conservation to minimize power consumption at the sender and receiver and to reduce the likelihood of interference.

  4. Fair and efficient resource sharing between participating nodes.

Reliable Data Delivery

A wireless link, UWB or otherwise, is an unreliable medium, so packet error rates can be much higher than in wired links. If the data link layer does not provide a reliable data delivery service, errors are detected only on an end-to-end basis, and any corrupted packets have to be retransmitted on an end-to-end basis by the transport layer or, perhaps, by the application layer. This is clearly inefficient if there is a sufficiently large probability of lost packets in the path between the source and destination nodes. Error detection and retransmission at the data link layer results in corrupted packets being retransmitted more quickly and over only a single communication link. For a multiple hop network with one or more unreliable links, as in a wireless network, retransmission at the data link layer rather than at the transport layer can significantly reduce overhead and decrease latency by correcting errors at the link where they occur. Thus, the data link layer in a wireless network that has sufficiently high error rates needs to provide reliable data delivery for efficient operation.

Reliable data delivery can be achieved using acknowledgment schemes, where a receiver sends back an acknowledgment (ACK) packet to the sender to acknowledge the successful reception of one or more data packets. If no ACK is received before some time-out expires or if a negative acknowledgment (NACK) is received, the sender retransmits the lost data packet or packets. For some applications with stringent delay requirements, such as real-time audio or video, even link level retransmissions introduce too much latency and should not be used. In addition to acknowledgment schemes, forward error correction (FEC) schemes may be used at the data link layer or physical layer to reduce the effective error rate as seen by higher layer protocols. FEC can reliably correct corrupted data bits in a frame, up to a number of bits limited by the particular error correction coding scheme employed. Because FEC consumes link capacity by transmitting redundant information and increases the complexity of the transmitter and receiver, some data link layer implementations do not use FEC mechanisms.

Point-to-Point Flow Control

If a sender transmits data frames across a link faster than the receiver can receive and process them, the receiver must store the frames in a buffer. Buffering can accommodate a short burst in which frames arrive faster than the receiver can process them. However, if this situation persists for too long, the buffers at the receiver fill up and buffer overflow occurs, leading to frames being lost. Lost frames result in performance degradation, possibly triggering retransmissions at the data link, transport, or application layer. To prevent buffer overflow, the link layer can introduce flow control to throttle the sender so that it sends data frames no faster than the receiver can process them. Flow control provides a feedback mechanism to make the sender aware of how many additional frames the receiver can handle. Flow control is a relatively straightforward process for one-to-one connections, but is problematic for one-to-many and many-to-one communication patterns because coordination is required among more than two nodes.
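To make the feedback idea concrete, the following sketch shows a simple credit-based flow control loop in Python, in which the receiver repeatedly tells the sender how much buffer space remains. The class names, buffer size, and processing model are illustrative assumptions, not part of any UWB or IEEE specification.

# A minimal sketch of credit-based link-layer flow control (illustrative only;
# class and method names are not taken from any UWB or IEEE standard).

from collections import deque


class Receiver:
    def __init__(self, buffer_size):
        self.buffer = deque()
        self.buffer_size = buffer_size

    def credits(self):
        # Feedback to the sender: how many more frames we can currently buffer.
        return self.buffer_size - len(self.buffer)

    def accept(self, frame):
        assert len(self.buffer) < self.buffer_size, "flow control prevents overflow"
        self.buffer.append(frame)

    def process_one(self):
        if self.buffer:
            self.buffer.popleft()            # frame handed to the higher layer


class Sender:
    def __init__(self, receiver):
        self.receiver = receiver

    def send(self, frames):
        pending = deque(frames)
        while pending:
            # Send no more frames than the receiver has advertised room for.
            for _ in range(min(self.receiver.credits(), len(pending))):
                self.receiver.accept(pending.popleft())
            self.receiver.process_one()      # receiver drains its buffer over time


rx = Receiver(buffer_size=4)
Sender(rx).send(range(10))                   # completes without buffer overflow

Because the sender only transmits when credits are available, the receiver's buffer never overflows, illustrating the throttling behavior described above.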

Power Conservation

Because wireless nodes are often battery powered, device- and network-level power conservation to extend battery life is an important design consideration. Many wireless data link layer standards define power-saving "sleep" or "snooze" modes. Such power-saving modes allow a node to temporarily turn off certain components, such as its transmitter and receiver, when it is not actively engaged in communication. Power-saving modes defined in the IEEE 802.15.3 standard are described in detail in Appendix 10.B, "UWB Standards for WPANs."

Fair and Efficient Resource Sharing

The main responsibility of the data link layer's medium access control sublayer is to ensure fair and efficient resource sharing. Because a wireless link is, essentially, a broadcast medium, sharing radio resources among the devices in the network is an important issue. There are two major categories of medium access control schemes: contention-based protocols and collision-free channel partition protocols. In contention-based protocols, no central control node is needed for allocating channel resources to other nodes in the network. To transmit, each node must contend for radio resources. Collisions result when more than one node tries to transmit at the same time. To resolve persistent conflicts in transmission, contention-based protocols often use random backoff schemes after (or, in some cases, even before) collisions are detected. Because each node transmits at will without the benefit of global coordination, contention-based protocols are also called random access protocols. In this chapter, we use the term contention-based protocols for consistency. Examples of well-known contention-based protocols include Aloha [8], Slotted Aloha [9], Carrier Sense Multiple Access with Collision Detection (CSMA/CD) [10] as used in Ethernet, and Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) [1] as used in IEEE 802.11's MAC sublayer.

To eliminate collisions, collision-free protocols assign dedicated channel resources to each node that wishes to communicate. This works well for constant bit rate traffic, such as uncompressed voice traffic. However, for variable bit rate traffic, which is typical for data applications, channel resources will be wasted if there is no packet queued for transmission. Therefore, the utilization of channel capacity can be low for bursty data traffic. Examples of channel partition protocols include Time Division Multiple Access (TDMA), Frequency Division Multiple Access (FDMA), and Code Division Multiple Access (CDMA).

For a UWB network, interference avoidance or mitigation is another important design objective. Because a UWB signal occupies such significant bandwidth, it may interfere with other wireless devices occupying some part of the same band, thus reducing their effective bandwidth. Moreover, other radio frequency (RF) transmissions that fall in the frequency band of a UWB transmitter may corrupt communications in a UWB network. Interference mitigation may also lead to reduced transmission power and, thus, can help nodes achieve longer battery life. Several interference mitigation mechanisms have been proposed for the emerging IEEE 802.15.3 standard. For instance, when a UWB network detects either an interferer or a non-IEEE 802.15.3 network operating in the network's current channel or overlapping with the current channel, it may either change channels to an unoccupied band or reduce the maximum transmission power in the network to avoid interference. In addition to these two schemes, an IEEE 802.15.3 network can merge with another IEEE 802.15.3 network if it detects interference coming from the other network. Another proposed technique considers interference avoidance or mitigation during the network formation phase [11]. This heuristic clustering algorithm forms clusters of nodes in a manner that minimizes interference subject to constraints on radio range and multiple access capability.

A UWB network may use a variation or a combination of different medium access schemes. The following sections introduce popular contention-based and channel partition or collision-free medium access protocols and discuss medium access schemes that are appropriate for UWB networks.

9.3.2. Contention-Based Medium Access Control

In contention-based or random access MAC schemes, nodes independently decide when to transmit, so contention may occur, which leads to frame collisions. The simplest contention-based MAC protocol is pure Aloha, which is described next. Slotted Aloha improves on the performance of pure Aloha by synchronizing the possible transmission times for nodes and, as described next, reduces the probability of collisions. Carrier sensing, as used in Carrier Sense Multiple Access (CSMA), yields even greater improvement by requiring nodes to detect if the channel is idle or not (by sensing the transmission carrier) and to not transmit if they detect that the channel is in use [12]. CSMA is extended with schemes to further reduce the likelihood of collision in Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA).

Aloha

The earliest contention-based medium access scheme, appropriately called Aloha, was developed in the early 1970s by Abramson at the University of Hawaii [8]. The basic operation of Aloha is simple, yet elegant: stations can transmit whenever they have a packet that needs to be sent. If a collision occurs, the data packet is corrupted. The receiver can acknowledge successful receipt of the data packet. If the sender does not receive an acknowledgment within a certain time-out period, the sender assumes that there was a collision. The sender then waits a random amount of time and sends the packet again in another frame.
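The sender behavior described above can be summarized in a few lines of Python. The sketch below assumes a hypothetical channel object with transmit() and ack_received() methods, and the timeout and backoff bounds are arbitrary illustrative values.

import random
import time

# Sketch of a pure Aloha sender. The channel object (transmit(), ack_received())
# is a hypothetical abstraction, and the timeout and backoff bounds are
# arbitrary illustrative values.

ACK_TIMEOUT = 0.05   # seconds to wait for an acknowledgment
MAX_BACKOFF = 0.5    # upper bound on the random retransmission delay


def aloha_send(channel, frame):
    while True:
        channel.transmit(frame)              # transmit whenever a packet is ready
        deadline = time.monotonic() + ACK_TIMEOUT
        while time.monotonic() < deadline:
            if channel.ack_received(frame):  # receiver confirmed the frame
                return
        # No ACK before the timeout: assume a collision, back off, and retry.
        time.sleep(random.uniform(0, MAX_BACKOFF))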

If one or more other frames are sent during the transmission of a frame, the frame experiences a collision, as illustrated in Figure 9.2. Since a station transmits a frame any time that it has a packet to send, two frames will collide if packets arrive at two or more stations within an interval that is less than the time to transmit one frame. The success or failure of a frame transmission in Aloha can be thought of in terms of a vulnerable period. Let t be the time to transmit a frame. The vulnerable period for a given frame begins at time t prior to the start of transmission of the frame and extends to the end of the frame's transmission. Thus, the vulnerable period is of length 2t. Any packet that arrives or any retransmission that is scheduled within the vulnerable period will collide with the frame. As illustrated in Figure 9.2, either Frame B, whose transmission begins within time t before the start of Frame A, or Frame C, whose transmission begins during the transmission of Frame A, will collide with Frame A. Assuming all frames have a fixed length and packets arrive at the data link layer according to a Poisson process, we can calculate the probability that a frame is transmitted without a collision as the probability that no other transmission begins during the vulnerable period. If G denotes the offered load, that is, the average number of transmission attempts per frame transmission time t, the throughput S_{Aloha}, expressed as a fraction of frame transmission times, is the offered load multiplied by the probability that no other transmission occurs during two frame transmission times [8]:

Equation 9.1

S_{Aloha} = G e^{-2G}

Figure 9.2. Vulnerable Period for a Frame in Aloha.


Note that throughput S is the fraction of the channel used for successful transmission if the time to transmit one frame is normalized to t = 1. Figure 9.3 shows the throughput for Aloha. As seen, the maximum achievable throughput is only 18 percent of the channel capacity and occurs when the offered load is G = 0.5. While simple, Aloha fails to effectively use channel resources. Aloha also suffers from stability problems that can occur when a large number of stations have backlogged frames that need to be transmitted [13].
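As a quick check of Equation 9.1, the short Python snippet below evaluates the Aloha throughput and confirms the peak of roughly 18 percent at G = 0.5.

import math


def throughput_aloha(G):
    """Equation 9.1: throughput of pure Aloha at offered load G."""
    return G * math.exp(-2 * G)


# The peak occurs at G = 0.5 and equals 1/(2e), roughly 18 percent.
print(round(throughput_aloha(0.5), 3))   # 0.184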

Figure 9.3. Theoretical Throughput Versus Offered Load for Aloha, Slotted Aloha, and CSMA.


Slotted Aloha

Slotted Aloha, as the name implies, adds the concept of time slots to Aloha [14]. Nodes are synchronized so as to implement discrete time slots, with the length of each slot being the time to transmit one frame. When a node has one or more packets to send, it must wait for the beginning of the next time slot to begin transmission.

By restricting the starting time of frame transmissions, collisions can occur only when two frames are transmitted in the same time slot, as illustrated in Figure 9.4. A collision will occur in slot j if frames become available for transmission at two or more nodes during slot j - 1. Thus, the vulnerable period for slotted Aloha is one time slot or frame transmission time, versus two frame transmission times as in Aloha. Any frame that becomes ready for transmission in slot j must wait for slot j + 1 to be transmitted. In Figure 9.4, Frames A and B both become ready for transmission during slot j - 1 (from time t0 to t0 + t), which will result in a collision. However, Frame C that becomes ready for transmission during slot j (from time t0 + t to t0 + 2t) will not collide with either Frame A or Frame B.

Figure 9.4. Vulnerable Period for a Frame in Slotted Aloha.


Because the vulnerable period is half that of pure Aloha, the probability of a collision in Slotted Aloha is reduced, and the maximum achievable throughput, S_{Slotted Aloha}, is doubled, as indicated in the following expression and illustrated in Figure 9.3:

Equation 9.2

S_{Slotted Aloha} = G e^{-G}

However, throughput still has an exponential dependence on the offered load, G, so a small increase in the offered load can dramatically decrease system performance. This performance degradation occurs mainly because all nodes transmit at will without considering transmissions at other nodes.
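The snippet below evaluates Equation 9.2 at a few offered loads to illustrate this sensitivity: throughput peaks at 1/e (about 37 percent) at G = 1 and falls off quickly as the load grows.

import math


def throughput_slotted_aloha(G):
    """Equation 9.2: throughput of slotted Aloha at offered load G."""
    return G * math.exp(-G)


# Peak throughput is 1/e (about 0.368) at G = 1; overload degrades it quickly.
for G in (0.5, 1.0, 2.0, 3.0):
    print(G, round(throughput_slotted_aloha(G), 3))   # 0.303, 0.368, 0.271, 0.149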

Carrier Sense Multiple Access Protocols

If a node can detect whether or not other nodes are currently transmitting, it can adapt its behavior accordingly. Carrier Sense Multiple Access (CSMA) is based on this idea.

A family of CSMA protocols was proposed in the 1970s by Kleinrock and Tobagi [12]. The basic idea behind a CSMA protocol is simple: a node first senses the channel to make sure it is idle before starting to transmit a frame. This behavior is sometimes called "listen before talking." If the channel is not busy, the node can transmit. If the channel is busy, the node will defer transmission. The exact behavior of a node that senses a busy channel leads to different versions of CSMA.

In nonpersistent or 0-persistent CSMA, a node with a frame ready for transmission first senses the channel. If the channel is idle, the node immediately transmits the packet. If the channel is busy, the node waits for a random amount of time, the backoff interval, and then senses the channel again. The throughput, S_{0-persistent CSMA}, for 0-persistent CSMA is as follows [12]:

Equation 9.3

S_{0-persistent CSMA} = G e^{-aG} / [G(1 + 2a) + e^{-aG}]

Here, a is the ratio of propagation delay to packet transmission time. Long propagation delays degrade the performance of CSMA protocols because they increase the chance that a node ready to transmit has not yet heard a transmission already in progress at a distant node. CSMA schemes vary in the behavior of a node when a packet arrives and the channel is busy [12]. As noted above, in 0-persistent CSMA, the node always defers transmission of the packet for a random backoff interval that has the same distribution as the backoff interval following a packet collision. In p-persistent CSMA, the distribution of the random backoff interval for new packets differs from the distribution of the backoff interval for backlogged packets; the parameter p determines the backoff interval distribution for new packets. In 1-persistent CSMA, a new packet arriving when the channel is busy is transmitted as soon as the channel next becomes idle.
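The following Python sketch contrasts the persistence strategies as sender-side decision logic. The channel interface (busy(), transmit()) is a hypothetical abstraction, and the p-persistent branch follows the common textbook formulation (transmit with probability p when the channel is idle) rather than a particular standard.

import random
import time

# Sender-side sketch of the CSMA persistence strategies. The channel object
# is a hypothetical abstraction; the backoff bound is an arbitrary value.


def csma_send(channel, frame, mode="0-persistent", p=0.1, max_backoff=0.01):
    while True:
        if not channel.busy():
            if mode == "p-persistent" and random.random() >= p:
                time.sleep(max_backoff)        # defer briefly, then sense again
                continue
            channel.transmit(frame)            # idle channel: transmit
            return
        if mode == "1-persistent":
            continue                           # keep sensing; send as soon as idle
        # 0-persistent (and p-persistent on a busy channel): random backoff
        time.sleep(random.uniform(0, max_backoff))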

Figure 9.3 shows that CSMA usually yields better channel utilization than Aloha or Slotted Aloha because carrier sensing avoids collisions with stations that are already transmitting. Some more recent random access MAC schemes also use carrier sensing to increase throughput. Examples include Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) [1] and Carrier Sense Multiple Access with Collision Detection (CSMA/CD) [10].
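For reference, the snippet below evaluates Equation 9.3 for a small normalized propagation delay; even at moderate load, nonpersistent CSMA exceeds the peak throughput of slotted Aloha, consistent with Figure 9.3. The value a = 0.01 is an assumed example.

import math


def throughput_csma_nonpersistent(G, a=0.01):
    """Equation 9.3, with a = propagation delay / frame transmission time."""
    return G * math.exp(-a * G) / (G * (1 + 2 * a) + math.exp(-a * G))


# At G = 1 this already exceeds slotted Aloha's 0.368 peak, as in Figure 9.3.
print(round(throughput_csma_nonpersistent(1.0), 3))   # 0.493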

CSMA/CA

CSMA/CA is a commonly used protocol in wireless local area networks, including the IEEE 802.11 MAC standard [1]. CSMA/CA retains the performance benefits of CSMA, but extends it to further reduce the likelihood of a collision. CSMA/CA does not use collision detection, as CSMA/CD does in wired Ethernet local area networks. Collision detection is not practical in a wireless environment because a node's own transmission will typically obscure any transmissions from other nodes that may cause a collision at a receiver. Additionally, it is impossible to ensure that all transmitters detect a collision if one occurs at an intended receiver, as CSMA/CD requires.

CSMA/CA can use a request to send (RTS) and clear to send (CTS) protocol to largely avoid the "hidden terminal" problem. Because radio signals attenuate over distance, simultaneous transmissions may lead to collisions at the receiver even though both senders have sensed an idle channel.

Figure 9.5 illustrates the hidden terminal problem. As illustrated, nodes A and C are out of each other's radio range. Thus, neither node A nor node C can hear whether the other node is currently transmitting. If node A is transmitting to node B, node C may still sense that the channel is idle. Therefore, node C starts transmitting, and packets from nodes A and C collide at node B. Node C is a hidden terminal with respect to node A, and vice versa.

However, because of the technical difficulties involved in carrier sensing in UWB systems (a UWB signal's very low power spectral density, and the absence of a conventional carrier in I-UWB, make sensing an ongoing transmission unreliable), CSMA is not an effective choice for a random access MAC protocol. Instead, slotted Aloha may be used for exchanging control frames and for node association with a central control node. However, the inefficiencies of slotted Aloha make it unsuitable for normal, high data rate data transfers. A channel partition or collision-free MAC scheme is better suited for data transfer in most UWB networks.

Figure 9.5. Illustration of the Hidden Terminal Problem.


9.3.3. Channel Partition Medium Access Control

TDMA, FDMA, and CDMA are commonly used and widely investigated collision-free medium access control protocols. They differ in how they partition physical layer resources among nodes. TDMA partitions the physical layer channel into a set of predetermined time slots (also often called channels) and assigns different time slots to different nodes in the network. Data transmissions from different nodes are sent at different times, but share the same frequencies in a TDMA system. FDMA partitions the allocated bandwidth into frequency channels and assigns these channels to nodes in the network. In an FDMA system, data transmissions occur at different frequencies, but can occur at the same time. While TDMA and FDMA assign time slots and frequency channels, respectively, to nodes, CDMA assigns different spreading codes to different nodes. Therefore, CDMA allows simultaneous transmissions within the same frequency band, provided that the transmitters use different spreading codes. Note that the assignment of time slots, frequency channels, or spreading code lengths can be adjusted to provide different quality of service (QoS) levels in TDMA, FDMA, and CDMA, respectively.
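The toy Python fragment below summarizes the three partitioning strategies as simple per-node assignment tables. The slot indices, band edges, and code labels are arbitrary placeholders chosen only to illustrate the contrast.

# Toy illustration of how the three schemes partition the channel among four
# nodes. The slot indices, band edges, and code labels are arbitrary placeholders.

nodes = ["A", "B", "C", "D"]

# TDMA: same band, different times.
tdma = {node: {"slot": i} for i, node in enumerate(nodes)}
# FDMA: different sub-bands, any time (band edges in MHz are made up).
fdma = {node: {"band_mhz": 3100 + 500 * i} for i, node in enumerate(nodes)}
# CDMA: same band and time, different spreading codes.
cdma = {node: {"code": f"PN-{i}"} for i, node in enumerate(nodes)}

print(tdma["B"], fdma["B"], cdma["B"])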

Transmitters and receivers must be synchronized in I-UWB and DS-UWB for efficient communication. Therefore, TDMA becomes a natural choice for the medium access scheme in I-UWB and DS-UWB systems. To use TDMA, UWB nodes need to be synchronized by a central control node. Finer synchronization is achieved through the preamble sequence transmitted along with each data packet. The IEEE 802.15.3 standard defines a TDMA scheme for data communications [15]. Appendix 10.B describes the medium access schemes used in IEEE 802.15.3 in more detail.

9.3.4. Multiple Access Protocols for UWB Networks

This section describes a multiple access protocol that is appropriate for UWB networks. As stated previously, a collision-free medium access control scheme is well-suited for the high data rates that can be achieved by the UWB physical layer. A random access scheme is better suited for control signaling and initial association of a node with a control node because collision-free schemes require prior coordination between a node and the control node.

Collision-Free Medium Access Schemes for Data Communications

In IEEE 802.15.3, a pure TDMA scheme is defined for multiple access among nodes within a piconet, regardless of the modulation scheme that is used. However, for different UWB physical layers, different multiple access schemes have been proposed to assign separate channels to simultaneously operating piconets that are in close proximity. Interpiconet channels are separated by frequency hopping codes for MC-UWB and by spreading codes for DS-UWB. Thus, current proposals for the IEEE 802.15.3 standard use TDMA within a piconet, and CDMA among different piconets.

As an alternative for consideration, a hybrid TDMA/CDMA medium access control scheme within a piconet can also be a good choice. For example, the current IEEE 802.15.3a DS-UWB proposal defines six spreading code sets that allow six independent piconets to be collocated within each other's interference range. In cases where fewer than six piconets are collocated, unused code sets can be used to provide higher data rates within one or more of the piconets.

The major advantages of a hybrid CDMA/TDMA scheme are greater flexibility and increased adaptability. Flexibility allows networks to be configured in different ways, while adaptability implies that the network can dynamically modify its configuration to accommodate different channel, network, and application environments. A UWB network is a likely choice for multimedia or other applications that have high data rate requirements or that require differentiated QoS. To meet different QoS requirements and to efficiently use channel resources, the medium access control scheme needs to be as flexible as possible. A pure CDMA scheme assigns one or more spreading codes to a single user for the duration of its connection, while a pure TDMA scheme only allows one user to transmit during a particular time slot. Pure CDMA or pure TDMA can achieve only one "degree of freedom," meaning channel partitioning can be based only on the assignment of spreading codes or time slots.

A hybrid TDMA/CDMA scheme is more flexible since it can achieve two degrees of freedom. This flexibility can be used to adapt to different conditions. A hybrid TDMA/CDMA scheme may assign spreading codes to a user only at certain times, such as when the user has queued packets, thus allowing multiple users to transmit during the same time slot. A central controller may assign the same spreading code to different users in different time slots or it may dynamically assign different time slots to users, which ensures great adaptability. The central control node can broadcast information about the assignment of time slots and codes to other nodes in the network to ensure coordination. Also, different sets of spreading or frequency hopping codes, different time slots, and different frequency bands (for MC-UWB systems) may be allocated to neighboring UWB networks to reduce or eliminate interference.
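As an illustration of the two degrees of freedom, the sketch below shows a central controller granting (time slot, spreading code) pairs from a hybrid TDMA/CDMA superframe. The slot count, code-set size, request format, and first-fit policy are all illustrative assumptions, not values taken from IEEE 802.15.3.

# Sketch of a central controller granting (time slot, spreading code) pairs in a
# hybrid TDMA/CDMA superframe. The slot count, code-set size, request format,
# and first-fit policy are illustrative assumptions, not IEEE 802.15.3 values.

from itertools import product

NUM_SLOTS = 8
CODES = ["C0", "C1", "C2", "C3"]


def schedule(requests):
    """requests maps node_id -> number of (slot, code) channel units wanted."""
    # Two degrees of freedom: several nodes can share a time slot on different
    # codes, and a code can be reused by different nodes in different slots.
    free = list(product(range(NUM_SLOTS), CODES))
    grants = {node: [] for node in requests}
    for node, wanted in sorted(requests.items()):
        while wanted > 0 and free:
            grants[node].append(free.pop(0))
            wanted -= 1
    return grants


print(schedule({"node1": 3, "node2": 5}))

In a real controller the grants would be broadcast to the other nodes, as described above, so that all transmitters agree on the slot and code assignments.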

Random Access Schemes for Control Signaling

While a collision-free multiple access control scheme is desirable for the transmission of data frames, it is typically not feasible for some control signaling. Before a node is associated with a central control node, it cannot obtain a time slot or a code allocation because the central control node does not know of the existence of the node. A random access scheme, such as slotted Aloha, can be used to enable control frames to be sent to establish an association between a node and a control node. Random access can also be used to "elect" a control node when a new network is formed or if the previously designated control node can no longer perform that function due to node failure or node movement. The central control node listens during all uplink time slots dedicated for random access, and detects control frames sent by any nodes that want to associate with the central control node. A suitable scheme for many applications is to use slotted Aloha for random access in the uplink (frames sent to the controller node) and TDMA for the downlink (frames sent from the controller node) [16]. This scheme is illustrated in Figure 9.6. The scheme uses a beacon signal generated by the central control node to advertise the existence of the control node and to synchronize other nodes that wish to use the control node. The beacon interval is followed by the random access interval, which consists of several uplink time slots (UPi) where nodes use slotted Aloha to send frames to the central control node, and the same number of downlink slots (DNj) where the central control node uses TDMA to send frames to the other nodes. Normal data frames are sent during the data communication interval that could use, for example, the hybrid TDMA/CDMA scheme discussed previously. Guard times separate the three intervals to prevent overlaps in the intervals due, for example, to synchronization differences between nodes.

Figure 9.6. Example Communications Structure for a UWB Network.


Using this scheme, the association procedure for a node is as follows (a node-side sketch of these steps appears after the list):

  1. The node first acquires downlink synchronization with the central control node using the beacon transmitted by the control node.

  2. The node then randomly picks one of the n uplink time slots in the random access interval and transmits its association request during the selected slot.

  3. If the kth uplink time slot is picked, the node attempts to receive the response from the controller in the kth downlink time slot.

  4. If a valid association response is not received in the downlink slot, the node assumes a collision has occurred, and resends its association request in the subsequent random access interval that follows the next beacon interval.
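A minimal node-side version of this four-step procedure is sketched below. The radio interface (beacon synchronization, slot-addressed transmit and receive) is an assumed abstraction rather than an API defined by IEEE 802.15.3 or [16].

import random

# Node-side association logic for the slotted Aloha uplink / TDMA downlink
# scheme. The radio interface (sync_to_beacon, send_in_uplink_slot,
# receive_in_downlink_slot) is an assumed abstraction, not a standard API.


def associate(radio, node_id, num_slots):
    while True:
        radio.sync_to_beacon()                         # step 1: downlink sync
        k = random.randrange(num_slots)                # step 2: pick uplink slot k
        radio.send_in_uplink_slot(k, ("ASSOC_REQ", node_id))
        response = radio.receive_in_downlink_slot(k)   # step 3: listen in slot k
        if response is not None and response.get("status") == "ok":
            return response                            # association granted
        # step 4: assume a collision and retry after the next beacon interval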

To reduce association latency and to increase channel utilization, a hybrid TDMA/CDMA scheme may be used, as shown in Figure 9.7 [16]. Slotted Aloha is used during the uplink time slots of the random access interval, as in the scheme illustrated in Figure 9.6. However, there is only one downlink time slot. The system uses CDMA to separate different channels during the downlink time slot. Note that the uplink and downlink time slots are separated using TDMA because the uplink slots must occur prior to the downlink slots.

Figure 9.7. Modified Communications Structure for a UWB Network.


Most steps of the association procedure for this second scheme are the same as for the first scheme, except step 3. In step 3, if the kth uplink time slot is picked, the node attempts to receive the response from the controller in the downlink time slot using the kth spreading code.

Because this modified scheme uses only one downlink time slot, some radio resources are freed and channel utilization can be improved. The time for a node to associate with the central control node is also reduced because the time to receive a response is shorter. However, the central control node must be able to transmit on multiple spreading codes simultaneously, which increases the complexity of the PHY layer.

9.3.5. Forward Error Correction and Automatic Repeat Request

Link reliability is an important consideration in the design of a wireless network. Error recovery is a critical aspect of the data link layer. Two types of error recovery schemes can be used: forward error correction (FEC), where coding is used to allow the receiver to extract the correct packet information from the frame even in the event of certain bit errors; and automatic repeat request (ARQ), where the sender and receiver coordinate to retransmit lost frames or frames received in error.

A wireless system can use error correction coding at both the physical layer and the data link layer to enhance error correction capability. However, because FEC necessarily introduces redundancy in data packets, it may result in wasted bandwidth, especially when the wireless channel conditions are good and the data error rate is low.

Compared to FEC, ARQ is easier to implement, and retransmission overhead is incurred only when there is an error. The basic idea of ARQ is quite simple. For every correctly received packet, the data link layer at the receiver sends back a positive acknowledgment (ACK). If the received packet is corrupted, the receiver sends back a negative acknowledgment (NACK); when the sender receives a NACK, or times out waiting for an ACK, it retransmits the packet. However, when the error rate is high, ARQ may introduce excessive delay and waste bandwidth.
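A stop-and-wait ARQ sender with a one-bit alternating sequence number captures this idea; the sketch below assumes a hypothetical link object and an arbitrary timeout value.

# Stop-and-wait ARQ sketch with a one-bit alternating sequence number. The
# link interface (send, wait_for_feedback) and the timeout value are assumptions.

ARQ_TIMEOUT = 0.02   # seconds (illustrative)


def arq_send(link, packets):
    seq = 0
    for packet in packets:
        while True:
            link.send({"seq": seq, "data": packet})
            feedback = link.wait_for_feedback(timeout=ARQ_TIMEOUT)
            if feedback == ("ACK", seq):
                break                   # delivered; move on to the next packet
            # NACK received or timeout expired: retransmit the same frame
        seq ^= 1                        # alternate the one-bit sequence number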

A pure ARQ scheme may be sufficient for a small UWB network or a point-to-point UWB link due to the short radio range, robustness to multipath, and resilience to RF interference of UWB. For a larger UWB network, especially for a multihop ad hoc network, FEC combined with ARQ is likely a better choice. Cross-layer design techniques can be used to make the data link layer aware of the radio range, network size, and the interference level. One way to do this is to have the physical layer measure and predict future wireless channel conditions, for example, in terms of effective bit error rate, and then report the prediction to the data link layer. Based on the physical layer's prediction, the data link layer can adaptively choose an appropriate error coding rate, and decide whether or not ARQ should be activated. Adaptive error correction coding with ARQ has the advantages of both FEC and ARQ under different channel conditions, but comes at the cost of increased complexity at all nodes.
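One possible form of such a cross-layer policy is sketched below: the data link layer maps the physical layer's predicted bit error rate to an FEC code rate and an ARQ on/off decision. The thresholds and code rates are illustrative assumptions, not values from the text.

# Sketch of a cross-layer policy: the data link layer maps the physical layer's
# predicted bit error rate to an FEC code rate and an ARQ on/off decision.
# The thresholds and code rates are illustrative assumptions.

def choose_link_policy(predicted_ber):
    if predicted_ber < 1e-6:
        return {"code_rate": 1.0, "arq": True}    # clean channel: no FEC, ARQ only
    if predicted_ber < 1e-4:
        return {"code_rate": 3 / 4, "arq": True}  # light coding plus retransmission
    if predicted_ber < 1e-2:
        return {"code_rate": 1 / 2, "arq": True}  # stronger coding
    return {"code_rate": 1 / 3, "arq": False}     # poor channel: rely on FEC and
                                                  # avoid retransmission delay


print(choose_link_policy(5e-5))   # {'code_rate': 0.75, 'arq': True}

Such a policy combines the strengths of FEC and ARQ under different channel conditions, at the cost of the added complexity noted above.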


