
3.4. Router Structure

Routers are the building blocks of wide area networks. Figure 3.10 shows an abstract model of a router as a layer 3 switch. Packets arrive at n input ports and are routed out from n output ports. The system consists of four main parts: input port processors, output port processors, the switch fabric (switching network), and the switch controller.

Figure 3.10. Overview of a typical router

3.4.1. Input Port Processor (IPP)

Input and output port processors, as interfaces to the switch fabric, are commercially implemented together in router line cards, which carry out some of the tasks of the physical and data link layers. The functionality of the data link layer is implemented as a separate chip in the IPP, which also provides a buffer to match the speed between the input and the switch fabric. Switch performance is limited by processing capability, storage elements, and bus bandwidth. The processing capability dictates the maximum rate of the switch. Owing to the speed mismatch between the rate at which packets arrive at the switch and the processing speed of the switch fabric, the input packet rate dictates the amount of buffering storage required. The bus bandwidth determines the time taken for a packet to be transferred between the input and output ports.

An input port processor (IPP) typically consists of several main modules, as shown in Figure 3.11. These modules are packet fragmentation, main buffer, multicast process, routing table, packet encapsulator, and a comprehensive QoS unit.

Figure 3.11. Overview of a typical IPP in routers


Packet Fragmentation

The packet fragmentation unit converts packets to smaller sizes. Large packets cause various issues at the network and link layers. One obvious application of packet fragmentation occurs in typical LANs, in which large packets must be fragmented into smaller frames. Another example occurs when large packets must be buffered at the input port interface of a router, as buffer slots are usually only 512 bytes long. One solution to this problem is to partition packets into smaller fragments and then reassemble them at the output port processor (OPP) after they have been processed in the switching system. Figure 3.12 shows simple packet fragmentation at the input buffer side of a switch. It is always desirable to find the optimum packet size that minimizes the delay.
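The fragmentation and reassembly idea can be sketched as follows. This is a minimal illustration, not the actual line-card logic; the 512-byte slot size matches the buffer-slot size mentioned above.

```python
def fragment(packet: bytes, slot_size: int = 512) -> list[bytes]:
    """Split a packet into fragments no larger than slot_size bytes."""
    return [packet[i:i + slot_size] for i in range(0, len(packet), slot_size)]

def reassemble(fragments: list[bytes]) -> bytes:
    """Recombine in-order fragments at the OPP."""
    return b"".join(fragments)

# A 1300-byte packet fills two full 512-byte slots plus one partial slot.
parts = fragment(b"x" * 1300)
assert [len(p) for p in parts] == [512, 512, 276]
assert reassemble(parts) == b"x" * 1300
```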

Figure 3.12. Packet fragmentation: (a) without fragmentation; (b) with fragmentation


Routing Table

The routing table is a look-up table containing all available destination addresses and the corresponding switch output ports. An external algorithm fills this look-up table. Thus, the purpose of the routing table is to look up the entry corresponding to the destination address of an incoming packet and to provide the output network port. As soon as a routing decision is made, all the information should be saved in the routing table. When a packet enters the IPP, the destination port of the switch is chosen based on the destination address of the incoming packet. This destination port is then appended to the incoming packet as part of the switch header.
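The lookup-and-tag step can be sketched as below; the table entries and packet fields are hypothetical, and a real router would match on address prefixes rather than exact addresses.

```python
# Hypothetical routing table: destination address -> switch output port
routing_table = {"182.15.0.0": 2, "135.41.8.0": 1}

def tag_packet(packet: dict, table: dict) -> dict:
    """Look up the output port for the packet's destination and
    append it to the packet as a switch header field."""
    packet["switch_port"] = table[packet["dst"]]
    return packet

pkt = tag_packet({"dst": "182.15.0.0", "payload": b"data"}, routing_table)
assert pkt["switch_port"] == 2
```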

The look-up table management strategy takes advantage of first in, first out (FIFO) queues' speed and memory robustness. To increase memory performance, queue sizes are fixed to reduce control logic. Since network packets can be of various lengths, a memory device is needed to store packet payloads while a fixed-length header travels through the system. Since packets can arrive and leave the network in a different order, a memory monitor is necessary to keep track of which locations in memory are free for use. Borrowing a concept from operating systems principles, a free-memory list serves as a memory manager implemented by a stack of pointers. When a packet carrying a destination address arrives from a given link i, its destination address is used to identify the corresponding output port j.
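The free-memory list described above can be sketched as a stack of slot pointers; this is a simplified model of the memory monitor, with slot counts chosen for illustration.

```python
class FreeMemoryList:
    """Memory monitor: a free-memory list implemented as a stack of
    pointers to unused buffer slots."""
    def __init__(self, num_slots: int):
        self.free = list(range(num_slots))    # all slots free initially

    def allocate(self) -> int:
        return self.free.pop()                # pop a pointer to a free slot

    def release(self, slot: int) -> None:
        self.free.append(slot)                # push the slot back once the packet leaves

mon = FreeMemoryList(4)
a = mon.allocate()      # takes the slot on top of the stack
b = mon.allocate()
mon.release(a)          # that slot is free again and is reused next
```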

Figure 3.13 shows an example of routing tables at routers between hosts A and B. Assume that host B's address is requested by a packet with destination address 182.15.0.0/22 arriving at router 1. The routing table of this router stores the best-possible path for each destination. Assume that for a given time, this destination is found in entry row 5. The routing table then indicates that port 2 of the router is the right output to use. The table makes the routing decision based on the estimated cost of the link, which is also stated in the corresponding entry. The cost of each link, as described in Chapter 7, is a measure of the load on each link. When the packet arrives at router 2, that router performs the same procedure.
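A longest-prefix lookup over such a table can be sketched as follows. The prefixes, ports, and costs here are invented for illustration; only the 182.15.0.0/22 destination and port 2 come from the example above.

```python
import ipaddress

# Hypothetical table for router 1: (destination prefix, output port, link cost)
table = [
    ("182.15.0.0/22", 2, 3),
    ("182.15.0.0/16", 1, 5),
    ("0.0.0.0/0", 3, 10),      # default route
]

def lookup(dst: str) -> tuple[int, int]:
    """Return (output port, cost) for the longest matching prefix."""
    addr = ipaddress.ip_address(dst)
    best = None
    for prefix, port, cost in table:
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, port, cost)
    return best[1], best[2]

# 182.15.1.7 falls inside 182.15.0.0/22, the most specific entry.
assert lookup("182.15.1.7") == (2, 3)
```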

Figure 3.13. Routing tables at routers

Multicast Process

A multicast process is necessary for copying packets when multiple copies of a packet are expected to be made on a switching node. Using a memory module for storage, copying is done efficiently. The copying function can easily be achieved by appending a counter field to memory locations to signify the needed number of copies of that location. The memory module is used to store packets and then duplicate multicast packets by holding memory until all instances of the multicast packet have exited the IPP. Writing to memory takes two passes for a multicast packet and only one pass for a unicast packet. In order to keep track of how many copies a multicast packet needs, the packet counter in the memory module must be augmented after the multicast packet has been written to memory. Each entry in the memory module consists of a valid bit, a counter value, and memory data. Multicast techniques and protocols are described in greater detail in Chapter 15.
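The counter-per-entry idea can be sketched as below. This is a simplified model: real hardware would manage fixed slots and the two-pass write, while here the counter is simply set at write time and decremented on each read until the entry is invalidated.

```python
class MemoryEntry:
    """One memory-module entry: valid bit, copy counter, and data."""
    def __init__(self, data: bytes, copies: int):
        self.valid = True
        self.counter = copies      # copies still to be read out
        self.data = data

class PacketMemory:
    def __init__(self):
        self.slots: dict[int, MemoryEntry] = {}
        self.next_addr = 0

    def write(self, data: bytes, copies: int = 1) -> int:
        """Store a packet; copies > 1 marks it as multicast."""
        addr = self.next_addr
        self.slots[addr] = MemoryEntry(data, copies)
        self.next_addr += 1
        return addr

    def read(self, addr: int) -> bytes:
        """Read out one copy; invalidate the slot when the last copy leaves."""
        entry = self.slots[addr]
        entry.counter -= 1
        if entry.counter == 0:
            entry.valid = False    # all copies sent; slot can be reclaimed
        return entry.data
```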

Packet Encapsulation

Packet encapsulation instantiates the routing table module, performs the routing table lookups, and inserts the switch output port number into the network header. The serial-to-parallel multiplexing unit converts an incoming serial byte stream into a fully parallel data stream. This unit also processes the incoming IP header to determine whether the packet is unicast or multicast and extracts the type-of-service field. Once the full packet is received, it is stored in memory. The packet encapsulation unit formats the incoming packet with a header before forwarding the packet to the crossbar.

Congestion Controller

The congestion controller module shields the switching node from any disorders in the traffic flow. Congestion can be controlled in several ways. Sending a reverse-warning packet to the upstream node to prevent excess traffic is one common technique installed in the structure of advanced switching systems. Realistically, spacing between incoming packets is irregular. This irregularity may cause congestion in many cases. Congestion control is explained in Chapters 7, 8, and 12.

3.4.2. Switch Fabric

In the switch fabric of a router, packets are routed from input ports to the desired output ports. A packet can also be multicast to more than one output. Finally, in the output port processors, packets are buffered and resequenced in order to avoid packet misordering. In addition, a number of other important processes and functions take place in each of the blocks mentioned.

Figure 3.14 shows an abstract model of a virtual-circuit switching router, another example of switching systems. This model can work for ATM technology: Cells (packets) arrive at n input ports and are routed out from n output ports. When a cell carrying VCI b arrives from a given link i , the cell's VCI is used to index a virtual-circuit translation table (VXT) in the corresponding input port processor to identify the output link address j and a new VCI c . In the switching network, cells are routed to the desired outputs. As shown in Figure 3.14, a cell can also be multicast to more than one output. Finally, in output port processors, cells are buffered; in some switch architectures, cells are resequenced in order to avoid misordering.
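The VXT lookup described above can be sketched as a simple table; the VCI and link numbers here are invented for illustration.

```python
# Hypothetical VXT for one input port:
# incoming VCI b -> (output link address j, new VCI c)
vxt = {
    7: (3, 12),   # cells arriving with VCI 7 leave on link 3 carrying VCI 12
    9: (1, 4),
}

def translate(vci_b: int) -> tuple[int, int]:
    """Index the VXT with the arriving cell's VCI to obtain the
    output link and the outgoing VCI."""
    j, c = vxt[vci_b]
    return j, c

assert translate(7) == (3, 12)
```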

Figure 3.14. Interaction between an IPP and its switch fabric in a virtual-circuit switching router

3.4.3. Switch Controller

The controller part of a switching system makes decisions leading to the transmission of packets to the requested output(s). The details of the controller are illustrated in Figure 3.15. The controller receives packets from an IPP, but only the headers of packets are processed in the controller. In the controller, the header decoder first converts the control information of an arriving packet into an initial requested output vector. This bit vector carries the information pertaining to the replication of a packet: each 1 bit represents a request for the corresponding switch output.
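The request vector can be sketched as below; the port numbers and vector width are arbitrary choices for illustration.

```python
def request_vector(output_ports: list[int], n: int) -> list[int]:
    """Build the requested output vector for an n-output switch:
    bit i is 1 if the packet requests switch output i.
    A multicast packet sets more than one bit."""
    vec = [0] * n
    for port in output_ports:
        vec[port] = 1
    return vec

assert request_vector([2], 4) == [0, 0, 1, 0]        # unicast to output 2
assert request_vector([0, 3], 4) == [1, 0, 0, 1]     # multicast to outputs 0 and 3
```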

Figure 3.15. Overview of a switching system controller

The initial request vector ultimately gets routed to the buffer control unit, which generates a priority value for each packet to enable it for arbitration. This information, along with the request vector, enters an array of arbitration elements in the contention resolution unit. Each packet in one column of an arbitration array contends with other packets on a shared bus to access the switch output associated with that column. After a packet wins the contention, its identity (buffer index number) is transmitted out to an OPP. This identity and the buffer-control bit explained earlier are also transferred to the switching fabric (network), signaling them to release the packet. This mechanism ensures that a losing packet in the competition remains in the buffer. The buffer-control unit then raises the priority of the losing packet by 1 so that it can compete in the next round of contention with a higher chance of winning. This process is repeated until, eventually, the packet wins.

The identities of winning packets are transmitted to the switch fabric if traffic flow control signals from downstream neighboring nodes are active. The upstream grant processor in turn generates a corresponding set of traffic flow control signals, which are sent to the upstream neighboring nodes. This signal is an indication that the switch is prepared to receive a packet on the upstream node. This way, network congestion comes under control.

3.4.4. Output Port Processors (OPP)

An output port processor (OPP) includes parallel-to-serial multiplexing, a main buffer, a local packet resequencer, a global packet resequencer, an error checker, and a packet reassembler, as shown in Figure 3.16. Similar to the IPP, the OPP also contributes to congestion control. The parallel-to-serial multiplexing unit converts the parallel packet format into serial packet format.

Figure 3.16. Overview of a typical OPP in routers


Main Buffer

The buffer unit serves as the OPP's central shift register. The purpose of this buffer is to control the rate of the outgoing packets, which impacts the quality of service. After collecting signals serially from the switch fabric, the buffer forwards packets to the resequencers. The queue runs on a clock driven by the link interface between the switch and an external link. This buffer must have features that support both real-time and non-real-time data.

Reassembler and Resequencer

The output port processor receives a stream of packet fragments and has to identify and sort out all the related ones. The OPP reassembles them into a single packet, based on the information obtained from the fragment field of the headers. For this process, the OPP must be able to handle the arrival of individual fragments at any time and in any order. Fragments may arrive out of order for many reasons. Misordering can occur because individual fragments are routed independently through a switch fabric composed of a fairly large number of interconnections with different delay times.

A packet reassembler buffer is used to combine fragments of IP packets. This unit resequences received packet fragments before transmitting them to external circuits, updates the total-length field of the IP header, and decapsulates all the local headers. The resequencer's internal buffer stores misordered fragments until a complete sequence is obtained. The in-sequence fragments are reassembled and transmitted to the external circuit. A global packet resequencer uses this same procedure to enforce another reordering, this time on sequences, not fragments, of packets that belong to a single user.

Error Checker and CRC

When a user sends a packet or a frame, a cyclic redundancy check (CRC) field is appended to the packet. The CRC is generated from an algorithm and is based on the data being carried in the packet. CRC algorithms divide the message by a fixed binary number, represented in polynomial form, producing a checksum as the remainder. The message receiver can perform the same division and compare the remainder with the received checksum. The error checker applies a series of error-checking processes on packets to ensure that there are no errors in the packets and creates a stream of bits of a given length, called frames. Each frame produces a checksum, called the frame check sequence, which is attached to the data when transmitted.
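The modulo-2 polynomial division can be sketched bit by bit as below. The 8-bit generator polynomial (x^8 + x^2 + x + 1) is chosen only for illustration; real frame check sequences typically use wider polynomials such as the 32-bit CRC of Ethernet.

```python
def crc_remainder(message: bytes, poly: int = 0x107, width: int = 8) -> int:
    """Modulo-2 polynomial division: shift the message left by `width` bits
    (append zero bits) and divide by the generator polynomial `poly`;
    the remainder is the frame check sequence."""
    reg = 0
    for byte in message:
        for i in range(7, -1, -1):
            reg = (reg << 1) | ((byte >> i) & 1)
            if reg & (1 << width):     # degree reached that of the generator
                reg ^= poly            # subtract (XOR) the generator
    for _ in range(width):             # the appended zero bits
        reg <<= 1
        if reg & (1 << width):
            reg ^= poly
    return reg

fcs = crc_remainder(b"hello")
# The receiver repeats the division over message plus FCS:
# a zero remainder means no error was detected.
assert crc_remainder(b"hello" + bytes([fcs])) == 0
```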



Computer and Communication Networks
ISBN: 0131389106
Year: 2007
Pages: 211
Author: Nader F. Mir