The provisioning of traditional Layer 2 VPNs involves setting up PVCs or soft PVCs (SPVCs) between customer sites or CEs. If the network supports only PVCs, the PVC must be configured on every switch or network element along the path connecting the two CE sites. If the network supports dynamic signaling, SPVCs can be set up using point-and-click provisioning: in the ATM SPVC setup, the network operator clicks the two end points between which the connection must be built, and the network elements dynamically signal a path using the PNNI signaling protocol and set up the SPVC between the end points. This eases the provisioning of ATM networks at the expense of network control.

Manual provisioning has the advantage of explicit control of the network with respect to the placement of PVCs. PVCs can be routed along explicit paths, and resiliency can be built in using diverse-path routing. With dynamic provisioning, some level of diverse-path routing can be achieved using constraint-based routing mechanisms, although this is not as accurate or comprehensive as manual provisioning. The advantage, however, is that while provisioning an SPVC, there is no need to touch all the network elements along the path, because the paths are computed by the network elements and set up dynamically via the signaling protocol.

MPLS-based Layer 2 VPN provisioning is similar to ATM SPVC provisioning. Using network management applications, pseudowire and attachment circuit provisioning can be done. As in traditional Layer 2 VPNs, the PVC is configured between the CPE and the PE. As stated earlier, the configuration of the PE triggers signaling in the network core for the establishment of the pseudowire. If the standard pseudowire emulation edge-to-edge (PWE3) signaling method is used, LDP signaling is initiated, labels are exchanged, and the pseudowire is set up without any additional configuration on the network core routers. This is equivalent to the dynamic signaling of SPVCs in an ATM PNNI network. In the MPLS case, there is no explicit placement of pseudowires in the network core; in fact, the network core devices do not have any knowledge of the pseudowires. As explained earlier, this is achieved by label stacking, where the network core routers know only about the IGP label and have no visibility into the payload unless it is destined for them.

Comparing MPLS-based provisioning to the traditional provisioning models for other Layer 2 transport types, such as Ethernet, Frame Relay, PPP, and HDLC, we see that provisioning these circuits in an IP/MPLS network is easier than in a traditional network. The SPVC-like model of dynamically signaled pseudowires applies to all Layer 2 transport types, including Ethernet, PPP, and HDLC. For example, in a traditional PPP or HDLC setup, the TDM channel is manually set up along the entire path (from one CE through the network to the other CE) on ADMs, DACs, and NTUs, and then PPP or HDLC encapsulation is enabled on the two end points of the link. In IP/MPLS-based Layer 2 VPNs, the circuit is provisioned only in the access layer, between the PE and the CE, and the edge provisioning follows the same procedure as described for the ATM or Frame Relay case. Although the gain realized might not be significant, delivering a PPP or HDLC service is simpler on the IP/MPLS network.
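To make the edge-only provisioning model concrete, the following minimal Python sketch (an illustrative model with invented names, not vendor CLI and not an LDP implementation) shows two PEs each being provisioned with an attachment circuit and a pseudowire toward the other; the label binding is exchanged only between the two end points, and no core router appears anywhere in the process.

from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Xconnect:
    peer: str              # loopback address of the remote PE
    pw_id: int             # pseudowire ID; must match on both ends
    pw_type: str           # "ethernet", "ppp", "hdlc", "frame-relay", "atm", ...
    local_label: int       # label advertised to the peer over directed LDP
    remote_label: int | None = None

class PE:
    def __init__(self, name: str, loopback: str):
        self.name = name
        self.loopback = loopback
        self.next_label = 16                      # labels allocated dynamically
        self.xconnects: dict[tuple[str, int], Xconnect] = {}

    def provision(self, ac_name: str, peer: str, pw_id: int, pw_type: str) -> None:
        """Configure an attachment circuit and bind it to a pseudowire.
        This is the only provisioning step; no core router is touched."""
        label, self.next_label = self.next_label, self.next_label + 1
        self.xconnects[(peer, pw_id)] = Xconnect(peer, pw_id, pw_type, label)
        print(f"{self.name}: AC {ac_name} -> pw {pw_id} to {peer}, local label {label}")

    def receive_mapping(self, peer: PE, pw_id: int, pw_type: str, label: int) -> None:
        """Label mapping received over the directed LDP session from 'peer'."""
        xc = self.xconnects.get((peer.loopback, pw_id))
        if xc and xc.pw_type == pw_type:          # the Layer 2 FEC types must match
            xc.remote_label = label
            print(f"{self.name}: pw {pw_id} to {peer.loopback} is UP")

def signal(a: PE, b: PE) -> None:
    """Exchange label bindings between the two end points only."""
    for (peer, pw_id), xc in a.xconnects.items():
        if peer == b.loopback:
            b.receive_mapping(a, pw_id, xc.pw_type, xc.local_label)
    for (peer, pw_id), xc in b.xconnects.items():
        if peer == a.loopback:
            a.receive_mapping(b, pw_id, xc.pw_type, xc.local_label)

pe1, pe2 = PE("PE1", "10.0.0.1"), PE("PE2", "10.0.0.2")
pe1.provision("Serial0/0", "10.0.0.2", 100, "hdlc")
pe2.provision("Serial1/0", "10.0.0.1", 100, "hdlc")
signal(pe1, pe2)                                  # the P routers never appear

Running the sketch prints one provisioning line per PE and then reports the pseudowire as up on both ends, mirroring the SPVC-like behavior described above: only the two edge devices are ever touched.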
At the time of writing this book, a hot debate is raging in the industry about which signaling protocol is better suited for pseudowire signaling. There are two proposals in the IETF. One uses LDP, supported by Cisco and a host of other vendors, and the other uses Border Gateway Protocol (BGP), supported by Juniper Networks. The pros and cons of each are discussed in the following sections.

LDP Signaling

LDP is a simple protocol for exchanging the labels needed to set up pseudowires. As described in the operation of the pseudowire reference model, directed LDP is used for signaling and exchanging labels between the pseudowire end points. LDP exchanges the Layer 2 FEC type and sets up the label binding for the pseudowire. The Layer 2 FEC type determines the type of pseudowire, such as Frame Relay, ATM, Ethernet, PPP, or HDLC. The labels are used to identify the pseudowires and to determine the Layer 2 circuits to which the frames must be forwarded.

LDP is by nature a point-to-point protocol: the information exchanged between the LDP end points is relevant only to the two peers that exchange it. With LDP signaling, the PE routers that form the ends of a pseudowire have a peering relationship with each other. This one-to-one relationship between PE routers has several pros and cons:

Scalability: LDP signaling uses a directed LDP session between each pair of routers that must establish a pseudowire. If the number of routers, say n, is large, potentially n-1 LDP sessions are required from any one router to all the other routers, and a large number of LDP sessions can become a bottleneck, depending on the platform. However, this might not be a concern: the network sizes seen today are about 700 PE devices, which requires a PE to support approximately 700 LDP sessions, and this is easily achievable.

QoS provisioning: QoS is easy to provision because of the simple point-to-point configuration. Pseudowires are easily identifiable and are explicitly provisioned, so the network operator has explicit control of the pseudowires and their QoS characteristics.

Failure notifications: LDP signaling can easily be tied into the LMI/ILMI signaling or the OAM capability on the attachment circuit. This ensures prompt notification of the signaling status to the end points. For example, if a circuit goes down on the remote end, the remote PE can immediately withdraw the label via LDP, and the local PE can take one of three actions to signal to the local CE device that the remote end of the Layer 2 circuit is down: send an OAM cell in the case of an ATM VC/VP, use LMI/ILMI in the case of Frame Relay/ATM, or shut off carrier in the case of Ethernet. This signaling is immediate with LDP, unlike BGP, which relies on dampening, scanner, and timer mechanisms.
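As a rough illustration of this behavior (the structure and names below are assumptions, not an implementation of any LDP or OAM standard), the following sketch maps the attachment-circuit type to the local action a PE might take when its directed LDP peer withdraws a pseudowire label:

# Sketch: local AC notification when the remote PE withdraws a pseudowire label.
# The actions follow the text above; the function and dictionary are invented.

NOTIFY_ACTION = {
    "atm":         "send an OAM cell (or use ILMI) toward the CE",
    "frame-relay": "signal the DLCI as inactive via LMI toward the CE",
    "ethernet":    "shut off carrier on the attachment port toward the CE",
}

def on_label_withdraw(pw_id: int, ac_type: str) -> str:
    """Called when the directed LDP peer withdraws the label for pw_id."""
    action = NOTIFY_ACTION.get(ac_type, "log the event and hold the AC down")
    return f"pw {pw_id}: remote end down -> {action}"

print(on_label_withdraw(100, "ethernet"))
# pw 100: remote end down -> shut off carrier on the attachment port toward the CE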
Simple paradigm with low overhead: LDP signaling follows the simple and well-understood paradigm of traditional Layer 2 VPNs. Only the two end devices that need to communicate on a particular pseudowire exchange messages, without any third device hearing or snooping them. Because no unnecessary information is sent or received, LDP signaling is efficient, with low CPU and memory overhead in an operational network. Each device must maintain a directed LDP session with every peer with which it needs to build one or more pseudowires; thus, the number of directed LDP sessions required on any given network element equals the number of devices it must talk to in order to build its pseudowires. Thousands of directed LDP sessions can be created on Cisco routers today, allowing for a large-scale setup of pseudowire services.

No broadcast of information: No information is replicated across all the directed LDP sessions, and there are no broadcast messages.

Label management: Labels are dynamically allocated and withdrawn by LDP. No preconfiguration of label information is required. The network element dynamically manages labels in the most efficient manner, reusing labels that are freed. The results are better scalability and easier management.

Contiguous labels or attachment circuit values: Neither the label values nor the attachment circuit values need to be contiguous. Any arbitrary AC number can be bound to any label/pseudowire and can be made part of any Layer 2 VPN, which makes the scheme extremely efficient and flexible. No label, DLCI, or VPI/VCI space is wasted.

Hub-and-spoke topology: Most Layer 2 VPNs are hub and spoke. With a point-to-point setup procedure, building a hub-and-spoke or an arbitrary mesh topology is much easier because the operator has explicit control over the pseudowire setup.

BGP Signaling

An alternative way of signaling Layer 2 information to the PEs is to use BGP. Because BGP is good at taking a piece of information and communicating it to everyone, it can be used to optimize Layer 2 signaling or the pseudowire setup in some cases; for example, you can use BGP signaling with VPLS. In this method of signaling, all attachment circuit information and label information is preallocated and sent to the PEs in the BGP update. An ordered list of ACs is created in each PE, and a label block is allocated. Each PE then uses its site ID as an index, retrieves the label information from the label block, and programs the hardware or forwarding tables. No information is exchanged between PEs on a per-pseudowire basis. However, when BGP signaling is used for VPLS only, no AC list needs to be sent, because a single VSI services all the ACs within a VPLS domain.
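The indexing idea can be shown with a short sketch. The numbers and function names below are invented, and real encodings (for example, BGP-signaled VPLS) carry an additional block offset and other attributes, so treat this strictly as an illustration of how one advertisement can replace many per-pseudowire label exchanges.

# Simplified illustration of label-block signaling: one BGP advertisement
# carries (label_base, block_size) for a site, and every remote PE derives
# the label it should use by indexing into that block with its own site ID.

def label_for(remote_label_base: int, remote_block_size: int, local_site_id: int) -> int:
    """Label this PE uses toward the advertising site, derived by indexing the
    advertised label block with the local site ID (simplified; no block offset)."""
    if not 0 <= local_site_id < remote_block_size:
        raise ValueError("local site ID falls outside the advertised label block")
    return remote_label_base + local_site_id

# Site 3 advertises a block of 8 labels starting at 1000 in a single update.
base, size = 1000, 8

# Every other PE programs its forwarding entry toward site 3 without any
# per-pseudowire exchange:
for site_id in (0, 1, 2, 4, 5):
    print(f"site {site_id} -> site 3 uses label {label_for(base, size, site_id)}")

The price of this compactness is that the block must be sized in advance; the sketch that follows the label-management discussion below shows what happens when a site ID no longer fits.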
Here are some pros and cons of this method of signaling:

Scalability: Although there are no PE-PE LDP sessions in this case, the pseudowire information (label bindings and the AC list) is carried in the extended community attribute of the BGP updates. This can be made to work with PE-PE iBGP sessions or via route reflectors. The number of PEs supported in this configuration scales with the number of BGP sessions (in the case of an iBGP full mesh) or with route-reflector capacity when route reflectors are used. However, scalability is not only about BGP sessions; it is also about the memory needed to store the information and the CPU needed to process the BGP updates. The information about any pseudowire (the AC list and its label binding) is sent to everyone, so each PE must filter the BGP updates to look only at the pertinent information. If a Layer 2 VPN contains a large number of sites, the entire AC list and the label information are sent to all the PEs in the network, regardless of whether they apply to a given PE. In an environment with a large number of Layer 2 VPNs and a large number of sites per VPN, this causes a huge processing overhead, resulting in lower overall scalability of the equipment.

QoS: As explained earlier, QoS guarantees are almost taken for granted in Layer 2 VPNs. This means the requirements for QoS and bandwidth guarantees on the pseudowire must be met, and there might be a requirement for different QoS characteristics on each pseudowire. With BGP signaling, the QoS information must also be sent in the BGP extended community. This can unnecessarily increase the amount of information sent, thereby reducing the overall scalability. Moreover, there is no way of dynamically updating the QoS on a pseudowire. The BGP signaling model assumes that there is no requirement for different QoS characteristics on the pseudowires; in effect, it works on the assumption that the same QoS characteristics apply to all of them. Ways to make it work with different QoS characteristics exist, but they become extremely complicated with a large number of sites and a large number of VPNs, so the savings in provisioning are offset by the management requirements.

Failure notifications: Failure notifications must be sent in BGP updates. If an update is triggered each time a failure occurs, significant churn can result, because the failure information is propagated to everyone: all the PEs peer with each other or with a common route reflector. In any large operational network, link or VC failures occur often, especially when the network spans large geographical regions, so the overhead of failure notification is significant and becomes worse as the failure rate increases. Moreover, it is not clear why the failure information must be sent to everyone. For example, if the AC or pseudowire fails between routers A and B, the only affected components of the Layer 2 service are routers A and B; other routers in the network, such as routers C, D, or E, do not need to know about the failure or the changes in the Layer 2 service affecting A and B. With BGP signaling, however, all routers are sent the failure updates, resulting in high processing overhead and poor scalability.

Complicated paradigm: One of the arguments made in favor of BGP signaling is that it is already used for Layer 3 VPNs and that, with a small set of changes, Layer 2 VPNs can easily be added to the network. However, as networks grow larger, the number of policies and the amount of filtering required to accommodate both Layer 2 and Layer 3 VPNs become complex. Managing all the changes, especially when new sites are added and removed, becomes an issue as the number of sites and the number of VPNs grow. QoS policies, therefore, cannot be applied easily and uniformly.

Label management: To avoid flooding the individual labels of all the pseudowires to every PE in the network, a handy "hack" was developed. It involves sending a block of labels, with each site using its site ID as an index into the block to set up the pseudowire. This requires operators to statically allocate a block of labels for a given number of sites. If the number of sites changes and no more labels are left in the label block, either a new label block must be allocated and stitched to the previous block, or the label block must be renumbered. This can also mean an interruption of service to the entire VPN when a new site is added, and it can fragment the label space, creating management overhead to maintain and manage it. This overhead can easily offset any gains achieved through the ease of provisioning full-mesh pseudowires.
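To illustrate the label-block exhaustion just described, here is a toy sketch (all values invented) in which a VPN is provisioned with one block sized for eight sites; adding a ninth site forces the operator either to stitch on a second, non-contiguous block, fragmenting the label space, or to renumber the existing block.

# Sketch of label-block exhaustion (toy model; values are invented).

blocks = [(1000, 8)]           # list of (label_base, block_size) advertisements

def label_for_site(site_id: int) -> int:
    offset = 0
    for base, size in blocks:
        if site_id < offset + size:
            return base + (site_id - offset)
        offset += size
    raise LookupError("site ID not covered by any advertised label block")

print(label_for_site(7))       # last site that fits the original block -> 1007

try:
    label_for_site(8)          # ninth site: the original block is exhausted
except LookupError as err:
    print("provisioning fails:", err)
    blocks.append((2048, 8))   # operator stitches on a second, non-contiguous block
    print("after stitching, site 8 ->", label_for_site(8))   # 2048

Every PE in the VPN must learn about the new block, which is why adding a site can disrupt the whole VPN and leave the label space fragmented, as noted above.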
Contiguous VC values: Contiguous VC values must be allocated to attachment circuits to keep track of which AC connects to which site. Sifting through the list of VC/DLCI values each time a problem must be traced becomes difficult, and with a large number of sites, separate external applications must be used to track the DLCIs, sites, and VPNs. It is not impossible, but it is certainly cumbersome. Much like label-block assignment, when the contiguous values run out, VC blocks must be stitched or reallocated. Thus, over time, with many VPNs and sites added and deleted, provisioning the DLCI/VC values without collisions becomes difficult, and automated tools must be used to keep track of the changes when provisioning such values.

Hub-and-spoke topology: BGP-based signaling makes the setup of a full mesh of pseudowires easier. To set up hub-and-spoke VPNs, additional policies must be applied to filter the advertisement of BGP updates so that pseudowires are not built between spoke sites. This becomes more complicated as the number of sites grows. (The sketch at the end of this section compares the pseudowire counts of the two topologies.)

Management

Irrespective of the signaling protocol used, after the pseudowires are set up, managing them is similar in both cases. The PWE3 MIB provides indices and tables so that packet/byte counts can be obtained, in addition to pseudowire types and parameters. However, debugging pseudowires signaled using LDP might be easier than debugging pseudowires set up using BGP. As explained earlier, BGP-signaled pseudowires are tracked using the labels in the preallocated block, whereas LDP-signaled pseudowires are tracked by looking up the LDP table. Additionally, OAM message mapping and status signaling have not yet been defined by the standards bodies.

Some of the previously mentioned drawbacks disappear if BGP signaling is used solely for VPLS setup. For VPLS, there is no requirement for point-to-point pseudowires; therefore, the complexity of contiguous VC assignment and label-block allocation is unnecessary. With VPLS, the one or more Ethernet segments attached to the PE are usually part of the same bridging domain; hence, the VC allocation is typically one or more VLANs that belong to the same bridging domain. The label assignment is also done per VSI, so no label blocks are needed, because a single label can now be advertised to all PEs belonging to the same bridging domain. This results in a considerable reduction in complexity. However, other issues, such as failure notification and OAM message mapping, remain with this method of signaling.
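To put the hub-and-spoke point in numbers, here is a small arithmetic sketch (pure counting, no protocol behavior) comparing the number of pseudowires required by a full-mesh and a hub-and-spoke topology; the difference is exactly the set of spoke-to-spoke pseudowires that BGP advertisement filtering must suppress, whereas with point-to-point LDP provisioning the operator simply never configures them.

# Back-of-the-envelope comparison of pseudowire counts for n sites.

def full_mesh_pws(n: int) -> int:
    """One pseudowire per pair of sites."""
    return n * (n - 1) // 2

def hub_and_spoke_pws(n: int) -> int:
    """One pseudowire per spoke toward the hub."""
    return n - 1

for n in (10, 100, 700):
    # Each PE in the full mesh also needs n - 1 directed LDP sessions
    # (or the equivalent BGP peerings / route-reflector sessions).
    print(f"n={n:4d}: full mesh {full_mesh_pws(n):7d} pseudowires, "
          f"hub and spoke {hub_and_spoke_pws(n):4d} pseudowires")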