Peer Group Design Considerations


Providing adequate user services requires the PNNI network designer to estimate address range, UNI bandwidth, and NNI bandwidth requirements. If the network designer expects the PNNI network to expand into a hierarchical structure, the designer must also consider the computing power and memory capacity of the switches designated to assume peer group leader (PGL) status.

Constructing a peer group requires considering the following:

  • Volume of topology updates As the number of nodes in the peer group increases, the volume of PNNI topology state packets (PTSPs) flooded through the group begins consuming significant bandwidth and per-node processing power.

  • Convergence The process of generating, evaluating, and synchronizing all the PNNI topology state elements (PTSEs) sent and received in PTSPs lengthens the time it takes for all nodes to agree or converge on a common view of the network topology.

  • Span The PNNI standard dictates that the maximum number of entries in a Designated Transit List (DTL) be no more than 20, which implies that nodes within a peer group may be no more than 20 hops apart. That limits the peer group diameter. The standard also mandates that the maximum number of DTL IEs in a setup message not exceed ten, limiting the number of hierarchical levels. (The sketch after this list illustrates both checks.)

  • Growth If a network is constructed as a single peer group and the number of contained nodes is near the per-peer-group maximum, how will the addition of extra nodes be handled? On the other hand, if a hierarchical network begins at level 104, the lowest possible PNNI level, what happens if lower levels need to be added in the future?

  • Resiliency What happens if a PGL fails? The network operator should ensure that the outcome of the peer group leader election is deterministic by configuring appropriate leadership priorities. Network stability should also be pursued by avoiding unnecessary PGL re-elections.

  • Administrative A peer group might need to exist within an administrative boundary in order to obey geographical, political, business, or security considerations.

  • Redundancy Redundancy considerations include minimizing single points of failure, provisioning parallel links, and creating multiple paths.

  • Clocking Synchronization and timing distribution must be designed carefully. Whenever possible, choose automatic clock distribution protocols, such as Network Clock Distribution Protocol (NCDP) in MGX-8850 and MGX-8950 nodes and AutoRoute (AR) in BPX-SES nodes.
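
The span limits above are easy to check mechanically. The following is a minimal sketch, not a Cisco API: the two constants come from the PNNI limits just described, while the helper name and data layout are assumptions made for illustration.

```python
# Minimal sketch of the two span limits described above (hypothetical
# helper, not a Cisco API): a DTL may hold at most 20 entries, and a
# SETUP message may carry at most 10 DTL IEs (one per hierarchy level).

MAX_DTL_ENTRIES = 20   # hops recorded in one Designated Transit List
MAX_DTL_IES = 10       # DTL IEs (hierarchy levels) in one SETUP message

def dtl_stack_is_valid(dtl_stack):
    """dtl_stack: list of DTLs; each DTL is a list of (node, port) hops."""
    if len(dtl_stack) > MAX_DTL_IES:
        return False   # too many hierarchical levels
    return all(len(dtl) <= MAX_DTL_ENTRIES for dtl in dtl_stack)

# A two-level route: three hops at the lowest level, two at the next.
stack = [[("node_a", 1), ("node_b", 2), ("node_c", 1)],
         [("lgn_x", 4), ("lgn_y", 1)]]
print(dtl_stack_is_valid(stack))   # True
```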

During the hierarchical design process, administrators must choose between the higher routing granularity offered by a peer group's complex node representation and the simplified routing and path selection inherent in simple node representation. Complex node representation allows the logical group node (LGN) to inject node and link-state information from the lower-level peer group it summarizes into the LGN's peer group. This additional information carried in PTSEs from LGNs configured for complex node representation lets nodes in other lower-level peer groups compute DTLs that better adhere to the traffic parameters requested by the end system. Simple node representation applies a single, constant cost to traverse any foreign peer group on the way to the destination.
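
The difference between the two representations is easiest to see as cost arithmetic. In the sketch below, the port names and weights are invented; only the nucleus/spoke/bypass structure of complex node representation and the commonly used 5040 default administrative weight come from PNNI.

```python
# Illustrative cost arithmetic (made-up ports and weights): crossing a
# foreign peer group under simple versus complex node representation.

# Simple node representation: one constant cost, whatever ports are used.
SIMPLE_TRANSIT_COST = 5040   # assuming the common default AW

def simple_cost(entry_port, exit_port):
    return SIMPLE_TRANSIT_COST

# Complex node representation: the LGN advertises a nucleus with per-port
# spokes, plus optional exception bypasses between port pairs, so the
# cost depends on where the call enters and leaves the peer group.
spoke_aw = {"p1": 100, "p2": 300, "p3": 900}   # port -> spoke AW
bypass_aw = {("p1", "p3"): 400}                # direct exception path

def complex_cost(entry_port, exit_port):
    via_nucleus = spoke_aw[entry_port] + spoke_aw[exit_port]
    direct = bypass_aw.get((entry_port, exit_port), via_nucleus)
    return min(direct, via_nucleus)

print(simple_cost("p1", "p3"))    # 5040 regardless of ports
print(complex_cost("p1", "p3"))   # 400: the bypass beats 100 + 900
print(complex_cost("p2", "p3"))   # 1200: spokes through the nucleus
```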

You should do the following when selecting a PGL to carry out the duties of an LGN at all higher levels of the hierarchy (the scoring sketch after this list shows one way to weigh these criteria):

  • Select a switch that has greater processing power and memory than its peers.

  • Select a switch that is not destined to be a border node. Performing both roles puts undue strain on a single switch's processing power and memory. Border nodes already perform DTL and uplink origination duties.

  • Select a switch that is a transit node and not an edge node that builds connections and associated DTLs.

  • Select a switch with fewer signaling links (links running Service-Specific Connection-Oriented Protocol (SSCOP), such as PNNI and UNI links).

  • Select a switch with fewer connections, based on traffic engineering (TE) projections or empirical results.

  • Select a switch that has links to multiple nodes in the peer group. This minimizes the risk of the PGL becoming separated from the peer group if one of its links fails.

  • Select a switch that is not an NMS gateway.
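
One way to make these comparisons repeatable is to score each candidate against the checklist. The sketch below is purely illustrative: the field names, weights, and penalty values are assumptions for the example, not a Cisco tool or recommended figures.

```python
# Purely illustrative scoring of PGL candidates against the checklist
# above; field names and weights are assumptions, not a Cisco tool.

CANDIDATES = [
    {"name": "sw1", "headroom": 0.8, "border": False, "edge": False,
     "nms_gw": False, "sscop_links": 6, "connections": 2000, "peer_links": 4},
    {"name": "sw2", "headroom": 0.9, "border": True, "edge": False,
     "nms_gw": False, "sscop_links": 14, "connections": 9000, "peer_links": 5},
]

def pgl_score(c):
    score = 100 * c["headroom"]          # spare CPU/memory helps most
    score += 10 * c["peer_links"]        # multiple links avoid isolation
    score -= 2 * c["sscop_links"]        # SSCOP links cost controller CPU
    score -= c["connections"] / 1000     # so does connection management
    if c["border"] or c["edge"] or c["nms_gw"]:
        score -= 50                      # roles the text says not to combine
    return score

print(max(CANDIDATES, key=pgl_score)["name"])   # sw1
```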

NOTE

Running ILMI does not impose any CPU load on the controller card. ILMI is a distributed application that runs on the line cards (AXSM or BXM cards), not on the controller card.


These same considerations apply when you choose peers to act as border nodes.

A single PGL is also a single point of failure. Depending on resiliency requirements, it may be desirable to configure primary and secondary peer group leaders, each chosen based on the selection criteria above. The sketch below shows how staggered leadership priorities make the election, and the failover, deterministic.
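
PNNI elects as PGL the node advertising the highest leadership priority, breaking ties in favor of the higher node identifier; a priority of 0 keeps a node out of the election entirely. The node names and priority values below are invented for illustration, not switch CLI output.

```python
# Sketch of the PGL election rule: highest leadership priority wins,
# ties broken by node ID (illustrative names, not a switch CLI).

def elect_pgl(priorities):
    """priorities: node_id -> leadership priority (0 = never PGL)."""
    eligible = {n: p for n, p in priorities.items() if p > 0}
    if not eligible:
        return None
    # Comparing (priority, node_id) tuples encodes the tie-break rule.
    return max(eligible, key=lambda n: (eligible[n], n))

peers = {"node_a": 200,   # intended primary PGL
         "node_b": 150,   # intended secondary PGL
         "node_c": 0}     # priority 0: never participates as PGL

print(elect_pgl(peers))        # node_a
peers.pop("node_a")            # primary fails
print(elect_pgl(peers))        # node_b takes over deterministically
```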

Optimizing PNNI Network Parameters

Resource restrictions, routing summarization, and Quality of Service (QoS) parameters are all adjustable. Here are some of the commonly tuned parameters:

  • Administrative weight (AW) This is a per-link routing metric used by routing nodes that construct DTLs to determine the cost of traversing a given path through the PNNI network. Lower total-cost paths are preferred. By manipulating the AW of individual links, the administrator can influence switched virtual circuit (SVC) and soft permanent virtual circuit (SPVC) path selection. AW can be configured per ATM class of service. Other metrics and attributes can also be tuned, such as cell transfer delay (CTD) and cell loss ratio (CLR). (AW-based selection is illustrated in the route-selection sketch that follows the parameter list below.)

  • Aggregation token A default link aggregation token of 0 results in all parallel horizontal links appearing as one at higher levels of the hierarchy. The administrator may force the PNNI routing algorithm to differentiate between multiple parallel links by applying different aggregation tokens to each link or combinations of links.

  • Bandwidth overbooking factor This factor directly controls the available cell rate (AvCR). The booking factor determines the AvCR that a node advertises for a particular PNNI port. A booking factor of 1 percent reserves only 1 percent of the bandwidth requested in the SVC setup. This is overbooking: two connections could each request 100 percent of the bandwidth, but a configured booking factor of less than 100 percent, such as 50 percent, would allow both of these connections to be built along the same route even though each nominally needs 100 percent of the bandwidth. In turn, connection setups prefer this link because it appears to have bandwidth available. Cisco multiservice switches support overbooking per interface or per Class of Service (CoS), such as constant bit rate (CBR) or Variable Bit Rate-Nonreal Time (VBR-NRT). Although overbooking raises link utilization, over time more connections might crank back because of a lack of resources. (A worked example follows this list.)

  • Cell delay variation (CDV) This represents a measurement of the variation in cell transfer delay (CTD) over links and through nodes. The PNNI route selection algorithm does not optimize for the lowest CDV; instead, CDV determines which routes are eligible for selection. It is important not to confuse CDV with the GCRA term cell delay variation tolerance (CDVT).

  • Node transit restrictions To force a particular distribution of SVCs or SPVCs through the PNNI domain, the administrator can enable the node transit restriction on any physical or virtual node, denying it participation in virtual circuit (VC) routing. The node with transit restriction enabled may still terminate and originate VCs. This restriction is encapsulated in nodal PTSEs and is flooded to other peers.

  • Link selection Different algorithms and policies for choosing among parallel links can be configured to perform TE tasks.
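
Here is the overbooking arithmetic from the AvCR discussion above as a worked example. The link rate and booking factor are illustrative; the only rule carried over from the text is that the booking factor scales how much of each request is debited from the advertised AvCR.

```python
# Worked overbooking arithmetic (illustrative numbers): the booking
# factor scales how much of each connection's requested bandwidth is
# debited from the AvCR the node advertises for the port.

LINK_CAPACITY = 353_208    # approximate OC-3 payload, cells per second
BOOKING_FACTOR = 0.5       # 50 percent: debit half of each request

def advertised_avcr(requested_rates):
    reserved = sum(r * BOOKING_FACTOR for r in requested_rates)
    return max(LINK_CAPACITY - reserved, 0)

# Two connections each request the full link rate. With a 50 percent
# booking factor both are built along the same route, even though
# together they nominally need 200 percent of the link.
print(advertised_avcr([]))                    # 353208: empty link
print(advertised_avcr([353_208]))             # 176604.0: half still free
print(advertised_avcr([353_208, 353_208]))    # 0.0: link now looks full
```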

MGX and service expansion shelf (SES) PNNI switches select routes through the PNNI network using the following parameters:

  • Destination address

  • Administrative weight (AW)

  • Maximum cell rate (MaxCR)

  • AvCR

  • CTD

  • CDV

  • CLR0

  • CLR0+1
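
To tie several of these parameters together, here is a simplified sketch of constraint-based route selection (real PNNI platforms use generic connection admission control and compute DTLs on demand; the topology, rates, and delays below are invented). Links that cannot satisfy the requested AvCR, or that would push accumulated CDV past the caller's bound, are pruned first; among the eligible paths, the search minimizes total AW.

```python
# Simplified route selection: prune links that fail the AvCR/CDV
# constraints, then find the lowest total-AW path (invented topology).

import heapq

# link: (neighbor, aw, avcr_cells_per_sec, cdv_microseconds)
TOPOLOGY = {
    "A": [("B", 5040, 100_000, 200), ("C", 1000, 20_000, 50)],
    "B": [("D", 5040, 100_000, 200)],
    "C": [("D", 1000, 20_000, 50)],
    "D": [],
}

def lowest_aw_route(src, dst, need_avcr, max_cdv):
    heap, settled = [(0, 0, src, [src])], {}
    while heap:
        aw, cdv, node, path = heapq.heappop(heap)
        if node == dst:
            return aw, path
        if settled.get(node, float("inf")) <= aw:
            continue
        settled[node] = aw
        for nbr, l_aw, l_avcr, l_cdv in TOPOLOGY[node]:
            # Eligibility first: AvCR must cover the request and the
            # accumulated CDV must stay within bound (CDV is a filter,
            # not the quantity being minimized).
            if l_avcr >= need_avcr and cdv + l_cdv <= max_cdv:
                heapq.heappush(heap, (aw + l_aw, cdv + l_cdv, nbr, path + [nbr]))
    return None

print(lowest_aw_route("A", "D", need_avcr=10_000, max_cdv=150))
# (2000, ['A', 'C', 'D']): lowest AW among eligible paths
print(lowest_aw_route("A", "D", need_avcr=50_000, max_cdv=1_000))
# (10080, ['A', 'B', 'D']): the cheap path lacks AvCR, so it is pruned
```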

Table 9-1 shows the connection parameters used by the MGX and SES to fulfill end-user service requests.

Table 9-1. Connection Parameters Per Service Class

Service Class  Destination Address  AW        MaxCR     AvCR      CTD       CDV       CLR0      CLR0+1
CBR            Required             Required  Optional  Required  Required  Required  Required  Required
VBR-RT         Required             Required  Optional  Required  Required  Required  Required  Required
VBR-NRT        Required             Required  Optional  Required  Required  -         Required  Required
ABR            Required             Required  Required  Required  -         -         -         -
UBR            Required             Required  Required  -         -         -         -         -

A dash (-) indicates that the parameter is not used for that service class.




