Building a Network Road Map


The PNNI routing discussion commences with a survey of routing components common to both flat and hierarchical PNNI networks. Then we'll discuss the specifics of each network type.

Common PNNI Routing Components

Before we delve into the differences between flat and hierarchical PNNI networks, the following sections discuss node IDs, routing control channels, and the hello protocol. These three routing components are common to both network types.

Node IDs: Combining Levels and Addresses

An individual node's 22-byte node ID is formed by concatenating the level indicator byte, a fixed-value byte of decimal 160, and the node's 20-byte ATM end system address. The format of a logical group node's (LGN's) node ID is slightly different. The level indicator denotes the LGN's peer group level. Following the level indicator is the 14-byte peer group ID of the LGN's child peer group. Next comes the 6-byte ESI of the physical node acting as the LGN and, finally, a single octet set to 0. Figure 8-14 shows both node ID formats.

Figure 8-14. Two Node ID Formats
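
To make the two formats concrete, the following Python sketch assembles both node ID variants from their parts. The address, ESI, and prefix values it uses are made up for illustration.

def lowest_level_node_id(level, atm_address):
    # Level indicator + fixed byte 160 + 20-byte ATM end system address = 22 bytes.
    assert 1 <= level <= 104 and len(atm_address) == 20
    return bytes([level, 160]) + atm_address

def lgn_node_id(lgn_level, child_peer_group_id, esi):
    # Level indicator + 14-byte child peer group ID + 6-byte ESI + one zero octet = 22 bytes.
    assert len(child_peer_group_id) == 14 and len(esi) == 6
    return bytes([lgn_level]) + child_peer_group_id + esi + bytes([0])

address = bytes.fromhex("47009181000000000000000000" "00d058ac2b01" "01")  # 20 bytes, made up
print(len(lowest_level_node_id(56, address)))   # 22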


Routing Control Channels

Before two PNNI-controlled neighbor nodes can begin exchanging information, a communications channel must be opened between them. PVC or SVC connections called Routing Control Channels (RCCs) serve this purpose. When physical links connect two neighbor nodes, a PVC with VPI=0 and VCI=18 is used for the RCC. However, when a PVP connects two nodes, the RCC's VPI is the same as the PVP, and the VCI remains set at 18. LGNs connect with peer LGNs using SVC-based RCCs. The VPI and VCI values for these SVC-based RCCs are determined during the SVC setup process.
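
The channel-selection rules can be reduced to a small sketch such as the following; the PVP value shown is hypothetical.

def rcc_channel(link_type, pvp_vpi=0):
    # Returns (VPI, VCI) for the RCC, or None when signalling assigns the values.
    if link_type == "physical":       # physical link: well-known PVC
        return (0, 18)
    if link_type == "virtual_path":   # PVP: the RCC uses the PVP's VPI
        return (pvp_vpi, 18)
    if link_type == "lgn":            # LGN-to-LGN: SVC-based RCC, chosen at setup time
        return None
    raise ValueError(link_type)

print(rcc_channel("physical"))          # (0, 18)
print(rcc_channel("virtual_path", 5))   # (5, 18) -- a PVP using VPI 5 (hypothetical value)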

The Hello Protocol

Just as two strangers do not exchange personal information before they meet, PNNI nodes do not exchange topology information before using the hello protocol to negotiate PNNI version and peer group, node, and port IDs. Hello packets are exchanged over RCCs. RCCs are configured as PVCs if two nodes reside at the lowest layer of the routing hierarchy or are configured as SVCs if the nodes peer at any higher layer. If multiple links exist between two nodes, each link has its own RCC and hello exchange.

Nodes transmit hello packets periodically based on the expiration of an interval timer. Each link has its own timer, and that timer is reset each time a hello packet is sent. Besides timers, each node keeps a separate per-link database that contains link-state data as well as pertinent information about the neighbor node attached to the other end of the link. Independently, two node interfaces connected to the same link progress through a series of phases or states before topology exchange begins. The interfaces go from down to two-way inside (for nodes in common peer groups) or common outside (for nodes in different peer groups). The complete hello protocol Finite State Machine (FSM) is shown in Figure 8-15.

Figure 8-15. Hello Protocol FSM


In the down phase, no PNNI packets are transmitted across the link. Typically, a lower-level link failure forces an interface into this state. As soon as physical connectivity is established, an interface moves to the one-way inside state when it receives a neighbor's hello packet that contains the same peer group ID but no values for the remote node and port IDs. (In a received hello packet, the remote node and port ID fields refer to the local node and port IDs of the node receiving the hello.)

When a node receives a hello and sees its own node and port IDs listed in the incoming packet's remote node and port ID fields, the receiving node knows that one-way communication with the remote neighbor node is functioning. If that same hello packet contains a different peer group ID, the interface transitions to the one-way outside state instead. If the nodes reside in the same peer group, the transition to full two-way communication occurs when an interface receives a hello carrying the same peer group ID as well as values for the remote node and port IDs. For links in a common peer group, this phase is called two-way inside. See Figure 8-16.

Figure 8-16. Hello Protocol in a Single Peer Group


For links that connect nodes in different peer groups, as soon as a hello packet is received with remote port and node IDs, the two connected node interfaces move to the two-way outside phase, but full two-way communication is still not established. Full communication establishment occurs when the two nodes connected by this link agree to communicate on a common routing hierarchy level above the lowest level. At this point, these two nodes reach the common outside state. As soon as this phase is reached, full two-way communication begins and, in turn, topology exchange commences between these nodes in different peer groups. This is shown in Figure 8-17.

Figure 8-17. Hello Protocol Between Peer Groups
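
The state progression shown in Figures 8-15 through 8-17 can be approximated with a simple sketch like the one below. It is a simplification: timer handling and error events are omitted, and the event names are paraphrased from this discussion rather than taken from the PNNI specification.

def next_state(state, same_pg, ids_echoed, common_level_agreed=False):
    # same_pg: the received hello carries our own peer group ID.
    # ids_echoed: the hello's remote node/port ID fields contain our node and port IDs.
    if same_pg:
        if ids_echoed:
            return "two-way inside"        # topology exchange may begin
        return "one-way inside" if state == "down" else state
    # Different peer group ID: both nodes are border nodes.
    if not ids_echoed:
        return "one-way outside" if state == "down" else state
    if common_level_agreed:
        return "common outside"            # full two-way communication across peer groups
    return "two-way outside"

state = "down"
state = next_state(state, same_pg=True, ids_echoed=False)   # -> one-way inside
state = next_state(state, same_pg=True, ids_echoed=True)    # -> two-way inside
print(state)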


Routing Within a Flat PNNI Network

A PNNI network is considered flat when all the nodes in the PNNI domain have equal and complete knowledge of the topology. In short, every node knows the state of every connecting link and the health of every routing node in the network. As the network grows, so do the per-node memory requirements to hold all this link-state and node health information. A hierarchical PNNI network design slows the growth of memory requirements by segmenting the network and limiting a given node's view of the physical network. Hierarchical PNNI is discussed later in the "Routing in a Hierarchical PNNI Network" section.

How Nodes Share Topology Information

Before a flat or hierarchical PNNI network begins operating, each routing node must determine the operational state of each of its directly connected links and then discover all its neighbor nodes attached to its active links. As soon as nodes identify and establish communication with their directly connected neighbor nodes, the exchange of PNNI topology information begins. PNNI topology state packets (PTSPs) act as virtual shipping containers for packaging topology updates that contain link-state and node health information for transport throughout the PNNI routing network.

Besides carrying node and link-state information, PTSPs also carry information regarding the ATM address destinations that each routing node can reach. To identify the node originating the topology update within a PTSP and the scope of the update, the PTSP header contains an originating node ID and the originating node's peer group ID.

PTSPs transport one or more PNNI topology state elements (PTSEs), with each element packaging link, node, or reachable-destination information. All the PTSEs in a PTSP originate from the same routing node, and the 22-byte node ID inside the PTSP identifies that node. The 2-byte Type field in each PTSE header identifies the kind of information the PTSE carries. Table 8-6 lists the types.

Table 8-6. PTSE Types

PTSE Name                            PTSE Value
Nodal state parameters               96
Nodal information group              97
Outgoing resource availability       128
Incoming resource availability       129
Next-higher-level binding            192
Optional GCAC parameters             160
Internal reachable ATM addresses     224
Exterior reachable ATM addresses     256
Horizontal links                     288
Uplinks                              289
Transit network ID                   304
System capabilities                  640
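
For illustration, the type codes in Table 8-6 and the PTSE header fields discussed in this section might be represented as in the following sketch. The header class names only the fields mentioned in the text and does not imply the full on-the-wire layout.

from dataclasses import dataclass

PTSE_TYPES = {
    96: "Nodal state parameters",
    97: "Nodal information group",
    128: "Outgoing resource availability",
    129: "Incoming resource availability",
    160: "Optional GCAC parameters",
    192: "Next-higher-level binding",
    224: "Internal reachable ATM addresses",
    256: "Exterior reachable ATM addresses",
    288: "Horizontal links",
    289: "Uplinks",
    304: "Transit network ID",
    640: "System capabilities",
}

@dataclass
class PtseHeader:
    ptse_type: int        # the 2-byte Type field (see Table 8-6)
    identifier: int       # distinguishes PTSEs of the same type from one originator
    sequence_number: int  # higher numbers identify more recent instances
    remaining_life: int   # how much longer the contents remain valid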


Link, node, and reachability information can change at any moment during the continuous operation of a PNNI network; as such, PTSE contents are considered current only for a finite period. Each PTSE header has fields for sequence number and remaining life that determine which PTSE of a given type and from a given node is most recent and indicate how much longer a given PTSE's contents are valid. The remaining-life counter is uniformly decremented even after the PTSE is inserted into a node's topology database. Because a node cannot build a route using PTSEs with expired remaining-life counters, this constant decrementing and subsequent flushing ensures that routes are built using the most up-to-date information.
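
A minimal sketch of the aging rule, assuming a simple dictionary-based topology database and one-second ticks (both are illustrative simplifications):

def age_database(topology_db, tick_seconds=1):
    # topology_db maps (origin_node_id, ptse_type, ptse_id) -> {"remaining_life": int, ...}
    expired = []
    for key, ptse in list(topology_db.items()):
        ptse["remaining_life"] -= tick_seconds
        if ptse["remaining_life"] <= 0:
            expired.append(key)
            del topology_db[key]          # a route is never built from an expired PTSE
    return expired

db = {("node-A", 97, 1): {"remaining_life": 2}}
age_database(db)
age_database(db)
print(db)                                 # {} -- the entry aged out and was flushed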

PTSE Flooding

All PTSEs must be disseminated throughout a PNNI peer group for every node in the group to have a clear and consistent view of the routing domain. Flooding is the process used to spread each node's PTSEs through the peer group. During flooding, every node acts as a PTSE relay agent for every other node. When a node receives a PTSE from one of its neighbor nodes, it bundles that PTSE, along with other PTSEs from the same neighbor, into a PTSP and forwards this PTSP to all its other neighbors. Clearly, the PTSEs originated by each node arrive at every other node in the peer group as long as each node follows this PTSE flooding algorithm. Figure 8-18 shows the flooding process.

Figure 8-18. The Flooding Procedure
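
The relay rule can be sketched as follows: re-bundle PTSEs that share an originator into a PTSP and forward it to every neighbor except the one they came from. Sequence-number checks and acknowledgments are omitted here and covered next; the node structure shown is hypothetical.

def relay(node, received_from, originating_node_id, ptses):
    # Bundle PTSEs that share one originator into a PTSP and forward it to every
    # neighbor except the one they arrived from.
    ptsp = {"originating_node_id": originating_node_id, "ptses": list(ptses)}
    for neighbor in node["neighbors"]:
        if neighbor != received_from:
            node["send"](neighbor, ptsp)   # 'send' stands in for transmission over the RCC

node = {"neighbors": ["B", "C", "D"],
        "send": lambda nbr, ptsp: print("to", nbr, ":", ptsp["originating_node_id"])}
relay(node, received_from="B", originating_node_id="node-X", ptses=[{"type": 97}])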


Acknowledgment, Aging, and Expiration of PTSEs

To ensure PTSE receipt, all receiving nodes send PTSE acknowledgments to the originating nodes. The identifier field in each PTSE is copied into an acknowledgment list destined for the node that originates all the PTSEs in the list. These acknowledgment lists can be sent immediately or after a delay. Typically, PTSEs received with no remaining life or dated sequence numbers are acknowledged immediately, but acknowledgments for valid PTSEs are often bundled and sent after a delay. This delay is bounded, though. An expiring acknowledgment timer forces a node to check its list of PTSEs needing acknowledgment and to send an update packet to the originating node. After expiration, the timer is reset. Just as any receiving node must keep track of all PTSEs it must acknowledge, transmitting nodes must keep track of all PTSEs sent but not acknowledged. Transmitting nodes periodically resend unacknowledged PTSEs based on internal timer expiration.
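
The delayed-acknowledgment behavior might look like the following sketch. The one-second delay is a placeholder, not a value from the specification.

import time

class AckState:
    def __init__(self, delay_seconds=1.0):            # placeholder delay, not a spec value
        self.pending = {}                              # origin node -> list of PTSE identifiers
        self.deadline = None
        self.delay = delay_seconds

    def on_ptse_received(self, origin, identifier, remaining_life, is_newer):
        if remaining_life == 0 or not is_newer:
            self.send_ack(origin, [identifier])        # expired or dated: acknowledge at once
            return
        self.pending.setdefault(origin, []).append(identifier)
        if self.deadline is None:                      # start the bounded delay
            self.deadline = time.monotonic() + self.delay

    def on_timer(self):
        if self.deadline is not None and time.monotonic() >= self.deadline:
            for origin, ids in self.pending.items():
                self.send_ack(origin, ids)             # one bundled acknowledgment list per origin
            self.pending.clear()
            self.deadline = None

    def send_ack(self, origin, identifiers):
        print("ack to", origin, ":", identifiers)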

PTSE aging and expiration, through either periodic or forced decrement of the remaining-life counter, cause nodes to request PTSE refreshes or to automatically transmit updated PTSE versions. A node can force the remaining-life counter to zero, thereby prematurely aging a PTSE in its topology database, but only if it originated that PTSE. PTSEs are prematurely aged for a variety of reasons that all stem from changes in a node's hardware or link states that invalidate the contents of the PTSE. Premature aging of a PTSE is often followed by a triggered PTSE update.

Unscheduled Topology Updates

Asynchronously triggered updates are invoked when a node's information groups change. Information groups segment a node's characteristics into classes such as internally or externally reachable addresses, link parameters, resource availability, or the node's hardware state. Note that triggered updates can be encapsulated in PTSPs and flooded into the network at only a limited rate, thus preventing the CPU processing power of receiving nodes from being overwhelmed if a node has multiple neighbors simultaneously sending it triggered updates. Triggered PTSE updates are acknowledged in the same fashion as PTSEs received periodically. Figure 8-19 lists the pertinent parameters per information group that trigger PTSEs. Note that any change in some parameters, such as maximum cell rate in the resource availability information group, triggers a PTSE, whereas other parameters in the same group, such as average cell rate, require the absolute percentage change to reach a certain threshold before a node fires a PTSE update.

Figure 8-19. Significant Events Per Information Group Triggering PTSE Updates
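
The difference between any-change parameters and threshold-based parameters can be sketched as follows; the 25 percent threshold is an illustrative placeholder rather than a configured default.

ANY_CHANGE_PARAMETERS = {"maximum_cell_rate"}    # any change at all triggers an update
THRESHOLD_PARAMETERS = {"average_cell_rate"}     # change must exceed a percentage threshold

def significant_change(name, old, new, threshold_pct=25.0):   # threshold is a placeholder
    if name in ANY_CHANGE_PARAMETERS:
        return old != new
    if name in THRESHOLD_PARAMETERS and old:
        return abs(new - old) / abs(old) * 100.0 >= threshold_pct
    return False

print(significant_change("maximum_cell_rate", 353207, 353000))   # True: any change triggers
print(significant_change("average_cell_rate", 100000, 110000))   # False: only a 10% change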


Database Synchronization Between Peer Nodes

Any two neighbor nodes are synchronized when they agree on the state of the network topology. An exchange of database summary packets initiates the synchronization process.

Database synchronization begins when the neighbor nodes reach the two-way inside state. The add port event is triggered by this state, and a node commences synchronization. Conversely, if neighbor nodes exit the two-way inside state for any reason, including physical link or RCC failure, the drop port event is launched, and a node removes all database information for the newly disconnected neighbor. Figure 8-20 shows all the states that any pair of neighbor nodes traverses as it moves toward full synchronization.

Figure 8-20. Node States During the Synchronization Process


A database summary (DS) packet contains a complete list of the PTSEs in a node's topology database; DS packets include the header portion of each PTSE. Initially, a neighbor node on one end of a two-way inside link ignites the database synchronization process by forwarding a DS packet toward its neighbor on the far end of the link. A single synchronization process exists between two nodes even if multiple two-way inside links exist between the pair. The node starting the process is considered the synchronization master for a given neighbor node pair. The slave acknowledges the master's DS packet by transmitting a DS packet of its own. A master node can have only one unacknowledged DS packet outstanding at a time.

As soon as the DS packet exchange between two neighbor nodes is finished, the two nodes begin requesting all the PTSEs listed in the DS packets they just received. PTSEs are exchanged using PTSPs as previously described. Figure 8-21 shows the database synchronization process between the centrally located switch and its two adjacent neighbor nodes.

Figure 8-21. Database Synchronization Process
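
A much-simplified sketch of the comparison each node performs after the DS exchange: the keys stand in for PTSE header summaries, and windowing beyond the one-outstanding-packet rule is ignored.

def ds_summary(topology_db):
    # The DS packet body: identifying header information for every PTSE the node holds.
    return set(topology_db.keys())

def compare_summaries(local_db, remote_db):
    # Return the PTSE keys each side must request after the DS exchange completes.
    local, remote = ds_summary(local_db), ds_summary(remote_db)
    return remote - local, local - remote            # (we request, neighbor requests)

a = {("A", 97, 1): "nodal info", ("A", 288, 1): "horizontal link"}
b = {("B", 97, 1): "nodal info"}
print(compare_summaries(a, b))   # each side then pulls the missing PTSEs via PTSPs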


Routing in a Hierarchical PNNI Network

A rising number of nodes and alternative paths in the routing domain forces the network administrator to find ways to contain the routing complexity in each peer group. To do so, many network administrators choose to implement hierarchical PNNI. Hierarchical PNNI's multilevel routing scheme slows the growth of per-node memory requirements by limiting each node's view of the network topology. Levels of the hierarchy are configured, peer group leaders (PGLs) are elected, and peer groups are represented by LGNs at the next-higher level in the hierarchy. Figure 8-22 depicts a multilevel hierarchy, complete with PGLs and LGNs. As shown in the figure, an LGN can simultaneously represent its lower-level child peer group and serve as the PGL of the peer group in which it resides.

Figure 8-22. LGNs and PGLs


Peer Group Leaders

Peer groups elect PGLs to perform the duties of LGN at the next-higher level in the routing hierarchy. Each level of the routing hierarchy above the lowest level has an LGN. An LGN is not an additional physical ATM switch in the network. Instead, it is a software data structure resident in the memory of one of the nodes in the lower-level peer group. The node elected PGL at the lowest level serves as the LGN representing its lower-level peer group at the next-higher level, but it does not necessarily serve as PGL of this higher-level peer group. PGLs are chosen using a preconfigured leadership priority value in each node.

A node can have a different leadership priority value configured for each level in the hierarchy. The node with the highest leadership priority value becomes the PGL, but a node must exist in a peer group before it attempts to be the group leader. If all nodes in the lowest-level peer group have leadership priority values equal to zero, no PGL is elected. By default, Cisco multiservice switches have leadership priority values equal to zero.

Electing a PGL

Nodes in a peer group elect a leader using the leadership priority values and 22-byte node IDs found in the nodal PTSE they receive from every other peer in the group. As discussed, each node determines its preferred PGL by finding the node in its topology database that has the highest leadership priority. Cisco's multiservice switch family has a configurable leadership priority range of 0 to 200. If there is a tie for PGL based on leadership priorities, a node converts the 22-byte node IDs of the nodes that tied into unsigned integers and then selects the node with the highest integer value to become its preferred PGL. If a node calculates that it is PGL, it sets the "I am leader" bit in all transmitted nodal information group PTSEs. As soon as a node is elected PGL, it increments its leadership priority value by a network-wide constant. This action promotes stability within the peer group when multiple peers have similar leadership priority values.
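
The selection rule reduces to comparing (leadership priority, node ID as an unsigned integer) pairs, as in this sketch with made-up node IDs:

def preferred_pgl(candidates):
    # candidates: list of (leadership_priority, node_id_bytes) taken from nodal PTSEs.
    eligible = [c for c in candidates if c[0] > 0]   # priority 0 nodes never become PGL
    if not eligible:
        return None                                  # all zeros: no PGL is elected
    # Highest priority wins; ties are broken by the numerically largest 22-byte node ID.
    return max(eligible, key=lambda c: (c[0], int.from_bytes(c[1], "big")))

nodes = [(50, bytes.fromhex("38a0" + "47" * 20)),
         (50, bytes.fromhex("38a0" + "39" * 20)),
         (0, bytes.fromhex("38a0" + "45" * 20))]
print(preferred_pgl(nodes)[1].hex()[:8])   # 38a04747...: tied priority, larger node ID wins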

Identifying the Hierarchy Levels

The most significant 13 bytes of the ATM address comprise all the possible levels in the PNNI routing hierarchy. From a routing perspective, these 13 bytes allow up to 104 levels of network partitioning. The 104th level represents the lowest possible level, and the first level signifies the highest. The level indicator specifies, in bits, how many of the 104 most significant ATM end system address or node ID bits identify the routing hierarchy level. For example, if a node's level indicator equals 56, the most significant 56 bits of the ATM end system address identify this node's peer group.

The 1-byte level indicator combined with the first 13 bytes of the ATM address forms the 14-byte peer group identifier (ID). Note that the first 13 bytes of the ATM address constitute all the possible 104 levels in the PNNI routing hierarchy. Each node generates a peer group ID separate from its 20-byte ATM address. If a node is located above the 104th level, at level x where x is less than 104, the rightmost 104 - x bits of the prefix within the peer group ID are set to 0. This peer group ID unambiguously denotes the node's hierarchy level and peer group.
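
Deriving a peer group ID can be sketched as masking the 13-byte prefix down to the leading level bits and prepending the level byte; the prefix below is hypothetical.

def peer_group_id(level, address_prefix):
    # level: 1..104 significant bits; address_prefix: the first 13 bytes of the address.
    assert 1 <= level <= 104 and len(address_prefix) == 13
    value = int.from_bytes(address_prefix, "big")
    mask = ((1 << level) - 1) << (104 - level)        # keep only the leading `level` bits
    masked = (value & mask).to_bytes(13, "big")       # the remaining bits are zero-filled
    return bytes([level]) + masked                    # 1-byte level + 13-byte prefix = 14 bytes

prefix = bytes.fromhex("47009181aabbccddeeff112233")  # hypothetical 13-byte prefix
print(peer_group_id(56, prefix).hex())
# 3847009181aabbcc000000000000 -- level 56 keeps exactly 14 hexadecimal prefix characters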

Higher-level or "parent" peer groups must have level indicator values less than the child's level indicator. Conversely, child peer groups always have level indicator values greater than their parents do. For example, a node at level 56 could have a parent peer group above it at level 36 and a child peer group below it at level 80, as shown in Figure 8-23.

Figure 8-23. Sample Hierarchy Levels


Note that peer group levels do not have a predefined starting point within the 104 possible choices. Likewise, peer group levels do not need to be consecutive in a given PNNI network. In short, no requirement states that a middle level in a routing hierarchy must sit at some value x, where x is a number between 1 and 104, with the level below it at x + 1 and the level above it at x - 1.

In fact, because PNNI node IDs, like ATM end system addresses, are commonly displayed in hexadecimal format, it is easier to discern the peer group ID if the level indicators are always multiples of 4. This numbering convention ensures that the significant portion of the node ID dictated by the level indicator, which forms the peer group ID, always spans an integral number of hexadecimal characters.

LGN Hellos

Just as lowest-level PNNI nodes have RCCs connecting them to their peers, each LGN has SVCC-based RCCs that connect it to the other LGNs in its peer group. To recap, Figure 8-24 shows RCCs between physical nodes as well as LGNs.

Figure 8-24. RCC Types


These RCCs serve the same purpose regardless of level: They provide passage for neighbor-to-neighbor hellos and PNNI routing exchanges via PTSPs. If a link fails in one of the paths connecting any two LGNs, the SVCC-based RCC might fail as well. However, the LGNs do not acknowledge this loss of connectivity until their own intranode timers expire without hello packet or PTSP receipt. As such, the SVCC might fail and be rebuilt with no significant loss of connectivity between the two LGNs.

Unlike hello packet exchanges between any two nodes at the lowest level of the routing hierarchy, hello packets between LGNs do not contain port IDs, and the remote port ID field is set to all 1s in these hello packets. An LGN does not send hello packets to another LGN until it has a PTSE containing the node ID of the remote LGN. An LGN receiving hello packets must discard them all until it has a PTSE from its neighbor border node's LGN containing a remote node ID. At this point, the receiving node accepts the hello packet if the remote node ID field corresponds to the one listed in its uplink PTSE. (Uplink is defined in the next section.)

Information Exchange Across the Different Link Types

PNNI introduces three link types:

  • Horizontal link: A link between any pair of logical or physical nodes in the same peer group

  • Outside link: A link between any pair of physical nodes in different peer groups

  • Uplink: A link between a border node and the LGN of its neighbor border node

Figure 8-25 depicts the three link types.

Figure 8-25. Link Types


Horizontal links, often called inside links, exist in either flat or hierarchical topologies, whereas outside links are unique to hierarchical PNNI. These outside links exist only between pairs of physical border nodes. If two lowest-level nodes determine during the hello exchange that they reside in different peer groups, both are border nodes. Figure 8-26 shows border nodes. The entry and exit terms that preface two of the border nodes in Figure 8-26 relate to the call setup message traversing the peer groups. When traversing a peer group, a call setup message must enter at one border node and exit the peer group at another border node.

Figure 8-26. Border Nodes


Unlike horizontal links, outside links do not carry packets full of PTSEs. Instead, as soon as the neighbors determine that they reside in different peer groups, both exchange hello packets containing uplink information attributes (ULIAs). ULIAs carry the same topology state parameters as PTSPs, but these parameters describe the link state and condition of the border node at the far end of the outside link, as well as the complete list of hierarchy levels above and below the remote border node's peer group. In addition to the list of hierarchy levels, a border node also advertises connectivity and reachability information for all the hierarchy levels above it.

Using each other's list of hierarchy levels, both border nodes determine the nearest higher-level peer group they have in common. Next, each border node advertises a logical link to its neighbor border node's LGN residing in this common higher-level peer group. This logical link is called an uplink. A border node advertises uplinks only within its own peer group. Lower-level nodes use the connectivity and reachability information advertised in uplinks to route connections to end systems in remote peer groups.
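
Finding the nearest common higher-level peer group from the two advertised level lists might be sketched as follows. Remember that a numerically larger level indicator sits lower in the hierarchy, so the nearest common level is the largest shared value; the level numbers shown are hypothetical.

def nearest_common_level(my_levels, neighbor_levels):
    # Each list holds the hierarchy levels known to one border node (own level plus ancestors).
    # The shared level with the largest indicator is the nearest common higher-level peer group.
    common = set(my_levels) & set(neighbor_levels)
    return max(common) if common else None

# Hypothetical hierarchies: 80 -> 56 -> 36 on one side, 72 -> 56 -> 36 on the other.
print(nearest_common_level([80, 56, 36], [72, 56, 36]))   # 56: each border node advertises
                                                          # an uplink to the LGN at level 56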

Link Aggregation

When several outside links connect two border nodes, the PNNI routing algorithm summarizes the links into a single aggregate link that is placed in the PGL's topology database if the component links have the same aggregation token values. These 32-bit tokens, exchanged via the hello protocol on outside links, correlate multiple parallel links connecting two border nodes. Because the default aggregation token for all links is 0, the PNNI routing algorithm automatically begins summarizing parallel links unless the administrator intervenes by configuring different token values on opposite ends of each parallel link.
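
Grouping parallel outside links by aggregation token before they are advertised upward might be sketched like this; the token and bandwidth values are made up.

from collections import defaultdict

links = [
    {"remote": "border-B", "token": 0, "kbps": 155_000},
    {"remote": "border-B", "token": 0, "kbps": 155_000},
    {"remote": "border-B", "token": 7, "kbps": 622_000},   # kept separate by configuration
]

groups = defaultdict(list)
for link in links:
    groups[(link["remote"], link["token"])].append(link)

for (remote, token), members in groups.items():
    # Each group becomes one aggregate logical link advertised upward by the LGN.
    print(remote, "token", token, ":", len(members), "link(s) aggregated")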

Aggregate link information is fed upward to each border node's LGN. The LGN advertises only the aggregate logical link within its peer group. Additionally, if multiple logical links with the same aggregate token exist between two border LGNs, those links are summarized by LGNs in the next-higher layer of the routing hierarchy using the method just described. Link aggregation lets PNNI support expansive networks because upper-level or parent nodes do not know the detailed topology of their lower-level child peer groups.

Address Summarization

Besides links, LGNs representing the lowest-level peer groups must aggregate and summarize the ATM addresses of end systems directly attached to the PNNI-controlled switches inside those peer groups. By default, an LGN advertises a summary address equal to the peer group ID of the peer group it represents. This summary address is advertised in the "internally reachable ATM address" information group PTSEs distributed by the LGN. "Internally reachable" implies that the addresses originate from end systems or nodes within the same PNNI domain as the LGN. Addresses learned through connections to other PNNI networks are summarized just like internal addresses but use the "exterior reachable ATM address" information group PTSEs for distribution.

In most instances, all end systems and nodes in a peer group share the peer group's address prefix, so the summary address accurately accounts for all devices in the peer group. As such, all SVCs or SPVCs initiated to addresses inside the summary are sent to the LGN that advertises that summary address. If an end system has a unique ATM end system address (AESA) that does not align with its peer group's address format, that AESA cannot be summarized.
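
Deciding whether an individual AESA falls under the default summary is a prefix comparison, sketched below with hypothetical addresses and a 56-bit summary.

def covered_by_summary(aesa_hex, summary_prefix_hex, level_bits):
    # Compare the leading `level_bits` of the AESA against the summary prefix.
    aesa = int(aesa_hex, 16) >> (len(aesa_hex) * 4 - level_bits)
    summary = int(summary_prefix_hex, 16) >> (len(summary_prefix_hex) * 4 - level_bits)
    return aesa == summary

summary = "47009181aabbcc"                                            # 56-bit summary, made up
print(covered_by_summary("47009181aabbcc" + "0" * 26, summary, 56))   # True: summarized
print(covered_by_summary("47111111111111" + "0" * 26, summary, 56))   # False: not covered
                                                                      # by this summary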

The address summarization process just described is repeated at every level in the routing hierarchy above the lowest level. This continual process of constructing summaries of summaries ensures that higher-level LGNs do not have to store and distribute unabridged lists of every AESA in the domain.

From the address summarization process, you can infer the importance of a consistent and structured addressing plan for address summarization and nodal aggregation, as well as for avoiding service-affecting address changes down the road.

Another process called scoping limits a summary address's range of advertisement. Scope is a 1-byte field inside the internally reachable ATM address information group (IG) PTSE that dictates the highest PNNI routing level at which the summary addresses in that PTSE can be advertised. For example, if a PNNI domain has four hierarchy levels (80, 56, 40, and 36) and the PTSE generated by the LGN at level 56 has a scope of 40, the summary addresses in the same PTSE cannot be advertised by the parent LGN residing at level 36. Each summary address has a scope, and LGNs typically group summary addresses with equal scopes into the same PTSEs. A scope value of zero means that the address can be advertised at the highest level of the routing hierarchy.
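
The scope check described above can be sketched as a simple comparison, recalling that smaller level indicators sit higher in the hierarchy and that a scope of zero imposes no restriction:

def advertisable(scope, advertising_level):
    # scope: highest level (numerically smallest) at which the address may be advertised.
    if scope == 0:
        return True                      # zero: may reach the top of the hierarchy
    return advertising_level >= scope

# Domain levels 80, 56, 40, and 36; a PTSE generated at level 56 carries scope 40:
print(advertisable(40, 40))   # True: the LGN at level 40 may re-advertise the summary
print(advertisable(40, 36))   # False: the parent LGN at level 36 must not advertise it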

Passing Down Higher-Level Topology Information

As each LGN receives PTSEs from its peer LGNs, it sends this topology information downward to the PGL of the lower-level peer group it represents. Remember, in most instances, an LGN is also the PGL of that lower-level peer group. The PGL immediately starts flooding this upper-level topology information to its peers. When the PGL is not in the lowest level of the routing hierarchy, the PGL is an LGN for some lower-layer peer group. In this case, the PGL floods the summarized higher-level topology state information it received from its LGN along with the topology information from its peer group down through its child peer groups.



