What Happens When There Is No MVPN Support?


If the MPLS carrier cannot provide a native multicast solution such as MVPN, the following configuration is required on WAN links that use MPLS VPN-based services. This workaround delivers multicast to sites connected over MPLS VPNs by creating a GRE tunnel dedicated to carrying multicast traffic between sites. Although MPLS VPN-based clouds typically support any-to-any communication within the cloud, the tunnels must be set up point-to-point.

Example 6-3 shows the tunnel configuration that should be applied to both the hub-site and remote-site routers terminating MPLS VPN-based connectivity.

Example 6-3. Remote-Site Configuration

! REMOTE SITE / SPOKE router configuration:
!
! Configure a tunnel to the hub site
!
interface Tunnel0
 description IP multicast tunnel for MPLS VPN
 ip address <tunnel-/30-address> 255.255.255.252
 ip pim sparse-dense-mode
 ip multicast boundary <acl-name>
 ip multicast rate-limit out <max-mcast-bw-in-kbps>
 tunnel source <loopback0>
 tunnel destination <hubsite-router-loopback0-ip>
!
! Ensure that the tunnel interface is not used for unicast routing
!
router eigrp 109
 passive-interface Tunnel0
!
! mroutes for local office networks must point to the proper LAN interfaces
! to source multicast from the remote site
! the default mroute is set to point to the Tunnel interface
!
ip mroute <lan-subnet> <lan-mask> <lan-interface>
ip mroute 0.0.0.0 0.0.0.0 Tunnel0

Hub-Site Configuration

! HUB SITE / HUB router configuration:
!
! Create a tunnel interface to each of the remote offices
!
interface Tunnel<tunnel#>
 description IP multicast tunnel for <remote-router-name>
 ip address <tunnel-/30-address> 255.255.255.252
 ip pim sparse-dense-mode
 ip multicast boundary <acl-name>
 ip multicast rate-limit out <max-mcast-bw-in-kbps>
 tunnel source <loopback0>
 tunnel destination <remotesite-router-loopback0-ip>
!
! Ensure that the tunnel interface is not used for unicast routing
!
router eigrp 109
 passive-interface Tunnel<tunnel#>
!
! Add a static mroute for each site's subnets pointing to the tunnel
! so that IP multicast will be directed to flow through the tunnel
!
ip mroute <remote-subnet> <remote-mask> Tunnel<tunnel#>

Other Considerations and Challenges

Acme and its service provider found a problem with MVPN and the MTU of multicast packets during implementation. To solve it, the packet size on the IPTV servers must be reduced from the default of 1452 bytes to 1280 bytes. The reasons for this decision are as follows:

PE routers running MVPN encapsulate multicast traffic in an MDT. The encapsulation can be either GRE or IP-in-IP; the default is GRE, which adds 24 bytes (a 20-byte outer IP header plus a 4-byte GRE header) to the multicast packet. Consequently, any multicast packet larger than 1476 bytes exceeds 1500 bytes once the GRE overhead is added. Because the service provider uses Gigabit Ethernet, with its default 1500-byte interface MTU, between the PE and P routers, such packets must be fragmented.
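For context, the MDT that performs this encapsulation is defined per VRF on each PE. The following is a minimal sketch of such a PE configuration; the VRF name, route distinguisher, and group addresses are illustrative assumptions, not taken from Acme's deployment:

! Minimal MVPN sketch on a PE router (names and addresses are illustrative)
ip multicast-routing
ip multicast-routing vrf ACME
!
ip vrf ACME
 rd 65000:1
 route-target export 65000:1
 route-target import 65000:1
 ! Default MDT group: all PEs in the VPN join this group, and customer
 ! multicast is GRE-encapsulated toward it by default
 mdt default 239.1.1.1
 ! Optional data MDT range for high-rate sources (threshold in kbps)
 mdt data 239.1.2.0 0.0.0.255 threshold 100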

Fragmentation is done on the route processor. In MVPN, a number of other tasks are also handled by the route processor, such as PIM registers, asserts, and state creation. Selective Packet Discard (SPD) allows you to rate-limit the number of packets that are punted to the route processor from the VIP/PXF; the default is 1 kbps. The objective of SPD is simple: don't trash the route processor.
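On platforms that expose SPD tuning, the input-queue thresholds can be adjusted. The commands below are a sketch only, assuming a 7500/VIP-class router; the threshold values are illustrative, and the defaults vary by platform and release:

! Sketch: SPD tuning on a 7500/VIP-class router (values illustrative)
ip spd mode aggressive
ip spd queue min-threshold 73
ip spd queue max-threshold 74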

The issue the service provider identified is that, by default, the IPTV application generates 1480-byte packets. Once the PE adds the 24-byte GRE header, the resulting 1504-byte packet exceeds the 1500-byte MTU and must be fragmented by the route processor. It does not take long before the SPD punt limit is reached and the PE starts to discard packets to protect the route processor.

The service provider identified three options to address this problem:

  1. Modify the MTU of the Ethernet interfaces on PE and P routers.

    Increasing the interface MTU of the Ethernet interfaces to 1524 bytes would resolve the problem (see the sketch after this list). However, although Gigabit Ethernet cards support a 1524-byte interface MTU, the SP also uses 7200VXR routers (route reflector, sub-AS BGP border router) and 3600 routers (terminal server) that do not, so it would start to lose IS-IS adjacencies and LSPs to those routers. This option is therefore not viable.

  2. Use IP-in-IP encapsulation for MDT.

    With IP-in-IP encapsulation for the MDTs, the tunnel header is only 20 bytes, so the 1480-byte packets generated by IPTV would not require fragmentation. The problem with this approach is that IP-in-IP encapsulation is currently not done in distributed Cisco Express Forwarding (dCEF) but is fast-switched. Moreover, IP-in-IP would not solve the problem for packets of 1481 bytes or more. This option is also not viable.

  3. Reduce the size of packets sourced by IPTV.

    The last option is to reduce the packet size generated by the IPTV source servers. Within the application, a file called iptv.ini allows you to configure the packet size; by default, it is set to 1452 bytes. That default budgets 20 bytes for an IP tunnel header, 20 bytes for the IP header, and 8 bytes for UDP. Excluding the 20-byte tunnel allowance, the 1452-byte payload + 20-byte IP header + 8-byte UDP = 1480 bytes, which becomes 1504 bytes once the 24-byte GRE MDT header is added. Reducing the payload to 1280 bytes yields 1308-byte packets, leaving ample headroom for the GRE overhead. This is the option the service provider chose; a sketch of the change follows this list.
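For illustration, option 1 would have amounted to the following change on every PE and P Ethernet interface in the path. This is a sketch only, with an assumed interface name, and as noted above the SP rejected it:

! Sketch of option 1 (rejected): raise the interface MTU on PE/P links
interface GigabitEthernet1/0
 mtu 1524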
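A sketch of the chosen change on the IPTV source servers follows. The text names the iptv.ini file but does not reproduce its contents, so the parameter name below is an assumption for illustration only:

; iptv.ini fragment (parameter name assumed, not confirmed by the source)
; 1280-byte payload + 20-byte IP header + 8-byte UDP = 1308 bytes,
; which stays under the 1500-byte MTU even after the 24-byte GRE MDT header
PacketSize=1280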



