Setting up Connections in the PNNI Routing Domain


PNNI's routing capabilities are widely known, but its call signaling capabilities are equally important. Whereas PNNI's routing capabilities spread an agreed-upon "road map" to destinations across a PNNI peer group, its call signaling capabilities allocate bandwidth on the paths connecting two or more destinations.

Although PNNI signaling is based on the UNI 4.0 specification, it adds features that facilitate expedient call routing through an ATM switch topology previously determined by PNNI's routing function. These additional features are as follows:

  • DTLs that contain hop-by-hop routes through the hierarchical PNNI network to the ATM destination

  • The ability to "crank back" or route around link failures in the PNNI-controlled ATM switch network

  • Support for SPVCs

  • Associated signaling to facilitate SPVC or SVC setup over any VPC logically connecting two PNNI-controlled ATM switches

The major PNNI IEs, including crankback and DTL, are described in Table 8-5.

Table 8-5. Major PNNI Information Elements

Information Element            Maximum Length (octets)   Maximum Number of Occurrences
Broadband bearer capability    7                         1
QoS parameter                  6                         1
Calling party number           26                        1
Called party number            25                        1
ABR setup parameters           36                        1
Called party soft PVC/PVCC     11                        1
Calling party soft PVC/PVCC    10                        1
Crankback                      72                        1
DTL                            546                       10


Note that one limiting factor on the depth of a hierarchical PNNI network is that the DTL IE (Designated Transit List information element) can occur at most 10 times in a signaling message: the DTL stack carries one DTL per hierarchical level along the route.

The following sections expound on each of these features.

DTL Construction

A complete source-originated route through a PNNI domain provides an ordered stack of DTLs. An individual DTL contains a sequential list of node ID and port ID pairs that unambiguously identify each succeeding next hop in the path from the source PNNI-controlled switch to the destination switch.
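A DTL can be modeled minimally as an ordered list of (node ID, port ID) pairs plus a pointer to the hop currently being processed. The class and identifiers below are illustrative, not taken from any PNNI implementation:

```python
from dataclasses import dataclass

@dataclass
class DTL:
    """One Designated Transit List: an ordered hop-by-hop route
    through a single peer group."""
    hops: list        # sequence of (node_id, port_id) pairs
    current: int = 0  # index of the hop currently being processed

    def next_hop(self):
        """Return the (node_id, port_id) pair the setup is headed for."""
        return self.hops[self.current]

    def advance(self):
        """Move the transit pointer to the next hop in this DTL."""
        self.current += 1

    def exhausted(self):
        """True once the setup has reached the last node in this DTL."""
        return self.current >= len(self.hops) - 1

# A three-hop route through one peer group (node and port IDs made up):
dtl = DTL(hops=[("A.1", 1), ("A.2", 3), ("A.3", 0)])
dtl.advance()
print(dtl.next_hop())   # ('A.2', 3)
```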

In both flat and hierarchical PNNI networks, a PNNI-controlled ATM switch (called a node for short) has detailed topology knowledge of the nodes and connecting links within its own PNNI peer group. However, a logical group node (LGN) representing a lower-level peer group at the next-higher level in a hierarchical network has only summary knowledge of other peer group topologies; how much detail that summary conveys depends on which node representation scheme is used. Simple node representation depicts a lower-level peer group as a single-cost hop in the next-higher-level peer group. This scheme is adequate when a lower-level peer group is small and has few paths through it. However, as the number of paths through the lower-level peer group increases and the cost to traverse each path varies significantly, implementing complex node representation allows for more accurate routing.

At higher levels in the hierarchy, a complex node models a lower-level peer group as a nucleus with attached spokes or radii. Each spoke has an associated cost or metric and represents an exit point from the lower-level peer group. The complex logical node advertises the cost of traversing its radii to its fellow complex logical nodes within its higher-level peer group. Typically, a complex logical node advertises the same metric for all its radii unless some have significantly different costs. The complex node advertises an exception metric for spokes that have significantly different costs.

During source route calculation, at least two radii must be crossed to traverse a lower-level peer group represented as a complex node unless the complex node advertises exception bypass links. Exception bypass links directly connect two spokes in the complex logical node and, if available, enable routes that traverse a complex node in a single hop instead of passing through the nucleus.
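The traversal cost of a complex node can be sketched as follows. The cost model, function name, and spoke numbering are illustrative assumptions: crossing via the nucleus sums the metrics of the entry and exit radii (using an exception metric where one is advertised), while an advertised bypass link replaces both radius costs with its own:

```python
def traversal_cost(entry_spoke, exit_spoke, default_metric,
                   exceptions=None, bypass_links=None):
    """Cost to cross a complex logical node from one spoke to another.

    Without a bypass link the path runs entry spoke -> nucleus ->
    exit spoke, so two radii are crossed. A bypass link directly
    joins two spokes and replaces both radius costs with its own.
    """
    exceptions = exceptions or {}      # spoke -> exception metric
    bypass_links = bypass_links or {}  # (spoke, spoke) -> metric
    if (entry_spoke, exit_spoke) in bypass_links:
        return bypass_links[(entry_spoke, exit_spoke)]
    cost_in = exceptions.get(entry_spoke, default_metric)
    cost_out = exceptions.get(exit_spoke, default_metric)
    return cost_in + cost_out

# Nucleus path crosses two radii: 10 + 10 = 20. A bypass link between
# spokes 1 and 3 advertised at cost 12 shortcuts the nucleus.
print(traversal_cost(1, 2, default_metric=10))                    # 20
print(traversal_cost(1, 3, default_metric=10,
                     bypass_links={(1, 3): 12}))                  # 12
```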

Regardless of the node representation type implemented at upper layers of the routing hierarchy, if a node in one peer group must build a virtual circuit to a node in another peer group, the source routing node constructs a stack of DTLs with the most general DTL on the bottom of the stack and the most detailed on the top. The most general DTL needs nothing more than the node ID of the gateway switch used to reach the destination peer group, whereas the top DTL in the stack contains the detailed information necessary to traverse the source originating node's own peer group. Figure 8-13 shows an example of such a DTL using simple node representation.

Figure 8-13. DTL Stack Manipulation Across Multiple Peer Groups


As soon as a virtual circuit setup message reaches the last node in the current DTL, that DTL is popped off the top of the DTL stack, and the message is forwarded according to the DTL now at the top of the stack. A stack of multiple DTLs means that the PNNI network is hierarchical and that the required route to the destination traverses multiple PNNI peer groups. If the DTL at the top of the stack does not contain the detailed information needed to navigate the next peer group, the entry node into the new peer group must generate a new DTL and place it on top of the stack. If the destination node resides in the same peer group as the entry node, this new DTL must contain the requisite detailed information (node ID and port ID pairs) to reach the destination node. Otherwise, the new DTL must contain a list of node ID and port ID pairs that lets the VC setup message cross the current peer group and exit at the proper node.
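The pop-and-push behavior at a peer group boundary can be sketched as follows. Here each DTL is simply a list of (node ID, port ID) pairs, the top of the stack is the last element, and compute_local_route is a hypothetical stand-in for the entry node's source-route computation across its own peer group:

```python
def advance_dtl_stack(dtl_stack, compute_local_route):
    """Pop the exhausted top DTL; if another DTL remains, push a
    detailed DTL for the peer group just entered and return it.

    dtl_stack: list of DTLs, topmost last; each DTL is a list of
    (node_id, port_id) pairs. compute_local_route(target) is an
    illustrative stand-in for the entry node's path computation.
    """
    dtl_stack.pop()                  # discard the finished DTL
    if not dtl_stack:
        return None                  # destination peer group reached
    next_target, _port = dtl_stack[-1][0]
    # The entry node into the new peer group builds the detailed
    # hop-by-hop route toward the node named by the summary DTL
    # and pushes it onto the top of the stack.
    dtl_stack.append(compute_local_route(next_target))
    return dtl_stack[-1]

# The summary DTL toward peer group B sits at the bottom; the detailed
# DTL for peer group A has just been completed.
stack = [[("B.gateway", 0)], [("A.1", 1), ("A.4", 2)]]
route = advance_dtl_stack(stack, lambda t: [("B.1", 5), ("B.2", 7)])
print(route)   # [('B.1', 5), ('B.2', 7)]
```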

DTLs are always constructed in a dynamic routing and switching environment. Any of the ATM switches or their associated links might fail after the DTL is built. Likewise, other PNNI border nodes might route virtual circuits through a given interior transit switch, occupying resources previously thought to be free.

Either case can result in virtual circuit setup failure while trying to transit the hop-by-hop route inside a given DTL. When a setup failure occurs, the setup and accompanying DTL are returned or cranked back to the originating node. The originating node is responsible for finding a new path through peer groups and revising the routing information in the DTL. If a new route cannot be found, the PNNI node notifies the attached ATM end device initiating the virtual connection that this particular VC setup failed.
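The originating node's retry loop can be sketched as follows. Both callables are hypothetical stand-ins: compute_route(blocked) models the node's path computation excluding elements reported blocked, and try_setup(path) models sending the setup, returning None on success or the blocked element carried back in the crankback IE:

```python
def reroute_on_crankback(compute_route, try_setup, max_attempts=3):
    """Sketch of DTL-originator behavior when a setup is cranked back.

    compute_route(blocked) returns a path avoiding the blocked
    elements, or None if no alternative route exists.
    try_setup(path) returns None on success, or the blocked
    node/link reported in the crankback IE on failure.
    """
    blocked = set()
    for _ in range(max_attempts):
        path = compute_route(blocked)
        if path is None:
            return None          # no alternative route: VC setup fails
        failure = try_setup(path)
        if failure is None:
            return path          # virtual circuit established
        blocked.add(failure)     # exclude the blocked element and retry
    return None                  # attempts exhausted: notify end device

# Link "A-B" fails once; the second attempt routes around it.
routes = {frozenset(): ["A", "B", "D"], frozenset({"A-B"}): ["A", "C", "D"]}
compute = lambda blocked: routes.get(frozenset(blocked))
attempt = lambda path: "A-B" if path == ["A", "B", "D"] else None
print(reroute_on_crankback(compute, attempt))   # ['A', 'C', 'D']
```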

Crankback: Getting Around the Problem in the Network

When calls fail to reach their destination because of unforeseen faults in the PNNI routing domain, the routing nodes that originated the DTL containing the faulty node or link must find an alternative path to the destination or called party. The crankback information element (CBIE) is contained in release, release complete, and add party reject (solely for multipoint connections) call-clearing messages. The CBIE contains blocked node or blocked link information. The DTL-originating node receiving one of these three call-clearing messages containing a CBIE must take appropriate action and reroute the call setup message around the failure and toward the destination. Note that calls rejected by the called party are not cranked back and rerouted toward the called party.

Reasons for crankback fall into two main areas, reachability failures and resource errors:

  • Reachability failures occur when a transit path to the destination cannot be found. These failures include destination unreachable, transit network unreachable, and next node unreachable.

  • Resource errors occur when some of the call setup requirements cannot be fulfilled along the path to the called party. Resource errors include service category not supported, traffic or quality of service parameters not supported, and requested VPI/VCI not available.
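The two cause groups above can be captured in a small lookup table. The strings are paraphrases of the causes listed in the text, not the numeric cause codes defined in the PNNI specification:

```python
# Each crankback cause maps to one of the two groups described above.
CRANKBACK_CAUSE_GROUP = {
    "destination unreachable":              "reachability failure",
    "transit network unreachable":          "reachability failure",
    "next node unreachable":                "reachability failure",
    "service category not supported":       "resource error",
    "traffic/QoS parameters not supported": "resource error",
    "requested VPI/VCI not available":      "resource error",
}

def is_crankback_cause(cause):
    """True when the failure reason is one a DTL originator would
    receive in a crankback IE (an illustrative simplification)."""
    return cause in CRANKBACK_CAUSE_GROUP
```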

Soft Virtual Circuits

Establishing soft PVPs and soft PVCs using PNNI routing and signaling lets network administrators specify the two static endpoints of a point-to-point ATM connection without having to manually set up each switch cross-connect that is part of the connection. These connections are called soft because PNNI signaling software in the master node containing the calling endpoint must set up, release, and reestablish the path to the called endpoint. Besides avoiding the laborious task of building each ATM cross-connect by hand, soft PVPs and PVCs take advantage of PNNI's rerouting and crankback mechanisms to dynamically reroute around failures in the infrastructure connecting the two static endpoints.




Cisco Multiservice Switching Networks
ISBN: 1587050684
Year: 2002