Chapter 9 - Multicast Support Commands

  Overview  
  The previous chapters have covered the operation and configuration of Cisco-supported IP multicast protocols. In this chapter, we will look at a number of multicast scenarios and multicast support commands. The support commands are not specific to any multicast routing protocols but are used to fine-tune your network.
Multicast Boundaries  
  The unicast IP address allocation reserved three sets of IP addresses for private use. An address block was reserved in each of the IP classes A, B, and C, as shown.  
10.0.0.0 - 10.255.255.255
172.16.0.0 - 172.31.255.255
192.168.0.0 - 192.168.255.255
 
  If these networks are used in a private intranet, then care must be taken not to advertise these networks on the Internet. Because multiple intranets may be using the same private IP address space, advertising them globally would cause confusion (see Figure 9-1). To prevent such confusion, private addresses should not be advertised outside the local intranet. Company A and Company B in Figure 9-1 would have to use Network Address Translation on their border routers to allow internal users Internet access. What has effectively been done is to form a boundary around the private addressed networks to prevent these addresses from being accessed through the Internet.  
   
  Figure 9-1: If private IP addresses are advertised over the Internet, then routing confusion can occur. For this reason, private IP addresses should not be advertised globally.  
  The multicast address space has a block of addresses assigned that are analogous to the private IP unicast address blocks. The block of Class D addresses from 239.0.0.0 to 239.255.255.255 are referred to as administratively scoped; the block is further subdivided, as shown in Table 9-1. Assume that in your company each department (finance, engineering, and marketing) wants to deploy multicasting, but they do not want to receive multicast traffic from the other departments. For this scenario, a multicast boundary will need to be set up around each department to prevent multicast traffic from crossing departmental boundaries (see Figure 9-2).  
  Table 9-1: Administratively Scoped Multicast Address Block  
 
 
239.0.0.0 - 239.255.255.255      Administratively Scoped (entire block)
239.0.0.0 - 239.63.255.255       Reserved
239.64.0.0 - 239.127.255.255     Reserved
239.128.0.0 - 239.191.255.255    Reserved
239.192.0.0 - 239.251.255.255    Organization Local Scope
239.252.0.0 - 239.252.255.255    Site-Local Scope (Reserved)
239.253.0.0 - 239.253.255.255    Site-Local Scope (Reserved)
239.254.0.0 - 239.254.255.255    Site-Local Scope (Reserved)
239.255.0.0 - 239.255.255.255    Site-Local Scope
 
To configure a multicast boundary, use the interface command
ip multicast boundary access-list-number
no ip multicast boundary access-list-number
access-list-number
Standard IP access list (1-99).
When configured on an interface, the ip multicast boundary command prevents multicast packets identified by the access list from flowing into or out of the interface. Each of the interfaces that connect border routers in Figure 9-2 would have the configuration shown below.
   
  Figure 9-2: Multicast boundaries need to be established on the department border routers.  
  interface serial n  
  ip multicast boundary 1  
     
  access-list 1 deny 239.0.0.0 0.255.255.255  
  access-list 1 permit 224.0.0.0 15.255.255.255  
The permit statement in the access list is required because every access list has an implicit deny any at the end of the list. In Chapter 7, we used the interface command ip pim border to prevent Bootstrap messages from passing through the interface while allowing all other multicast traffic to pass. The ip multicast boundary command can be used in the same manner with regard to Auto-RP.
  interface serial n  
  ip multicast boundary 1  
     
  access-list 1 deny 224.0.1.39  
access-list 1 deny 224.0.1.40
  access-list 1 permit 224.0.0.0 15.255.255.255  
The ip multicast boundary command in this configuration blocks Auto-RP announcement (224.0.1.39) and Mapping Agent discovery (224.0.1.40) messages from crossing the interface but allows all other multicast traffic. Although the ip multicast boundary command is usually used in conjunction with the administratively scoped block of multicast addresses, it can be used to block any multicast address on an interface.
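As a sketch of that last point, a boundary can filter a single arbitrary group; the interface, access-list number, and group address here are hypothetical, not from the text:

```
! Hypothetical example: keep group 224.1.1.1 from crossing serial 0
! while permitting all other multicast groups
interface serial 0
 ip multicast boundary 3

access-list 3 deny 224.1.1.1
access-list 3 permit 224.0.0.0 15.255.255.255
```

As before, the final permit is needed because of the implicit deny any at the end of the access list.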
Broadcast/Multicast Conversion  
  Assume that you have an application on a host that does not support IP multicast, only IP unicast and broadcast. Further assume that the application wants to send to a receiver or multiple receivers on a different subnet. We have seen in Chapter 2, Internet Protocol (IP) Addresses, that this is not possible, at least not yet. Using IP unicast only allows the sender to send to one host, and IP broadcast only allows the sender to send to hosts on the same subnet. What we need is a way to turn a broadcast into a multicast for delivery to the receivers. Now if the receivers cannot receive multicast traffic, then the multicast stream would need to be converted back to a broadcast stream on the receiving subnet (see Figure 9-3).  
   
Figure 9-3: A broadcast-to-multicast-to-broadcast conversion is needed to enable a non-multicast sender to send to a non-multicast receiver.
  To enable the broadcast-to-multicast conversion and the multicast-to-broadcast conversion, use the following interface configuration command on the router attached to the sender, or first hop router:  
  ip multicast helper-map broadcast multicast-address extended-acl  
broadcast
Specifies that the traffic being converted is broadcast traffic.

multicast-address
Multicast group address to which the broadcast traffic is converted.

extended-acl
IP extended access list used to determine which broadcast packets are to be converted to multicast, based on the UDP port number.
 
  Use the following form of the command on the router attached to the receiver or last hop router:  
  ip multicast helper-map group-address IP-broadcast-address extended-acl  
  group-address  
  Multicast group address of traffic to be converted to broadcast traffic.  
 
  IP-broadcast-address  
  IP broadcast address to which broadcast traffic is sent.  
 
  extended-acl  
  IP extended access list used to determine which multicast packets are to be converted to broadcast. Based on the UDP port number.  
 
  For the network in Figure 9-3, the first hop and last hop routers would have the configuration listed below:  
Router A (First Hop Router)

interface Ethernet 0
ip directed-broadcast
ip multicast helper-map broadcast 239.1.2.3 100
ip pim dense-mode

access-list 100 permit udp any any eq 2000
access-list 100 deny udp any any

ip forward-protocol udp 2000
Router D (Last Hop Router)

interface Ethernet 0
ip directed-broadcast
ip igmp join-group 239.1.2.3
ip multicast helper-map 239.1.2.3 172.16.1.255 100
ip pim dense-mode

access-list 100 permit udp any any eq 2000
access-list 100 deny udp any any

ip forward-protocol udp 2000
As configured, router A translates broadcasts to UDP port 2000 into multicasts to address 239.1.2.3, while router D translates traffic for multicast group 239.1.2.3 to the IP broadcast address for the subnet. The ip igmp join-group command on the last hop router is automatically configured when the ip multicast helper-map command is used. The ip forward-protocol command is necessary to disable fast switching, which does not perform the conversions from broadcast to multicast and from multicast to broadcast.
Session Directory  
Session Directory (SDR) is an MBONE scheduling system used to announce and schedule multimedia conferences. SDR uses the Session Directory Announcement Protocol (SDAP), which periodically multicasts a session announcement packet describing a particular session. SDAP announcement packets can be received by joining the well-known group 224.2.127.254. A user can then choose to receive traffic for a multicast group using the SDR tool (see Figure 9-4).
   
  Figure 9-4: Sample output for the Session Directory  
  To enable the reception of Session Directory Protocol announcements on an interface, use the interface command  
  ip sdr listen  
This command enables the router to accept SDAP packets on the interface, and the router joins the multicast group 224.2.127.254. SDR entries are cached on the router, and the time that an entry remains in the cache is configured using the global configuration command:
  ip sdr cache-timeout minutes  
  minutes  
The amount of time an SDR cache entry stays active in the cache. A value of 0 indicates the entry will never time out. The default value is 24 hours.
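Putting the two SDR commands together, a minimal sketch; the interface and the 60-minute timeout are illustrative values:

```
! Illustrative sketch: listen for SDAP announcements on ethernet 0
! and age cache entries out after 60 minutes instead of the
! 24-hour default
interface ethernet 0
 ip sdr listen

ip sdr cache-timeout 60
```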
 
  The remaining commands pertaining to SDR are listed below.  
  debug ip sdr  
  The above command enables logging of received SDR announcements.  
show ip sdr [group | session-name | detail]
no parameters given
A sorted list of cached session names is displayed.

group
Detailed information is displayed for the multicast group.

session-name
Detailed information is displayed for the named session.

detail
Displays sessions in detailed format.

This command displays the entries in the SDR cache if the router is configured to listen to SDR announcements.
clear ip sdr [group-address | session-name]
no parameters given
Clears the entire SDR cache.

group-address
Clears all sessions associated with the given group address.

session-name
Clears the cache entry for the given session name.
IP Multicast Rate Limiting  
The amount of bandwidth that multicast traffic uses on a link can be controlled using the interface command
ip multicast rate-limit in | out [video | whiteboard] [group-list access-list] [source-list access-list] [kbps]
  in
  Only packets at the rate of kbps or slower are accepted on the interface.
 
  out  
  Only a maximum of kbps is transmitted on the interface.  
 
  video  
  Optional. Rate-limiting is performed based on the UDP port number used by video traffic, which is identified by consulting the SDR cache.  
 
  whiteboard  
  Optional. Rate limiting is performed based on the UDP port number used by whiteboard traffic, which is identified by consulting the SDR cache.  
 
group-list access-list
Optional. An access list that is used to determine which multicast groups will be constrained by the rate limit.

source-list access-list
Optional. An access list that is used to determine which senders will be constrained by the rate limit.

kbps
Rate limit in kilobits per second. Packets sent at a rate greater than kbps are discarded. If no value is given, then the default rate is 0 kilobits per second, in which case no multicast traffic is permitted.
 
  This command requires that ip sdr listen be enabled so port numbers can be obtained from the SDR cache. If SDR is not enabled, then no limiting occurs.
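As an illustrative sketch, the rate limit might be combined with a group list; the interface, access-list number, group range, and 128-kbps rate are hypothetical values:

```
! Hypothetical example: limit outbound multicast traffic for the
! 239.192.0.0 - 239.255.255.255 range to 128 kbps on serial 0.
! Per the text, ip sdr listen must be enabled for rate limiting
! to take effect.
interface serial 0
 ip sdr listen
 ip multicast rate-limit out group-list 10 128

access-list 10 permit 239.192.0.0 0.63.255.255
```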
Stub Multicast Routing  
  Networks that have remote sites connected in a hub and spoke arrangement over lower speed links can benefit by configuring the spoke routers as stub networks (see Figure 9-5). If PIM-Dense or Sparse-Dense mode is configured on the main campus network, then without additional configuration, multicast traffic would periodically be flooded to the stub network. PIM-Dense mode can also flood multicast traffic on links where a PIM neighbor has been discovered. To prevent this periodic flooding of traffic, the PIM neighbor relationship must be prevented and an IGMP proxy needs to be configured. If PIM-Sparse mode is being employed on the campus, a stub network would not need to know RP-group mappings.  
   
  Figure 9-5: A stub multicast network is configured with an IGMP proxy because the PIM neighbor relationship has been prevented from forming.  
  The configurations for the routers in Figure 9-5 that are needed to create a stub network are listed below:  
  Router A  
     
  ip multicast-routing  
     
  interface serial 0  
  ip address 172.16.1.1 255.255.255.0  
  ip pim dense-mode  
  ip pim neighbor-filter 5  
     
  access-list 5 deny host 172.16.1.2  
  Router stub  
     
  ip multicast-routing  
     
  interface e0  
  ip address 172.16.2.1 255.255.255.0  
  ip pim dense-mode  
  ip igmp helper-address 172.16.1.1  
     
  interface serial 0  
  ip address 172.16.1.2 255.255.255.0  
  ip pim dense-mode  
The stub router forwards IGMP messages from hosts on the Ethernet network to router A, which has an access list that blocks the PIM neighbor relationship from forming between the two routers. Only multicast traffic for a group that has been joined on the stub router is forwarded by router A, reducing the multicast traffic on the link.
Load Balancing  
  When two equal cost paths exist for a destination, an IP unicast routing protocol, such as OSPF, will load-balance unicast traffic over the two links. Load-balancing, without additional configuration, is not possible with multicast routing protocols. The reason that load-balancing does not occur for multicast traffic over equal cost links is because of the selection of the RPF interface. Only one RPF interface can be selected for a multicast source and therefore all multicast traffic must flow over that link. Multicast traffic flowing on the other link will be rejected because it does not arrive on the RPF interface (see Figure 9-6).  
   
  Figure 9-6: Multicast traffic is only accepted on one link.  
  In order to achieve multicast load-balancing, we need to configure a tunnel between routers A and B in Figure 9-6. All multicast traffic will flow across the tunnel and the unicast routing protocols will load-balance across the actual physical links (see Figure 9-7). Load-balancing occurs because we are encapsulating the multicast traffic in unicast IP packets. Multicasting needs to be disabled on the physical interfaces and enabled on the tunnel interface.  
   
  Figure 9-7: Load-balancing multicast traffic using a tunnel.  
  The configurations for routers A and B are listed below:  
  Router A  
     
  interface ethernet 0  
  ip address 172.16.2.1 255.255.255.0  
     
  interface serial 0  
  ip address 172.16.1.1 255.255.255.252  
  bandwidth 200  
  clock rate 200000  
     
  interface serial 1  
  ip address 172.16.1.5 255.255.255.252  
  bandwidth 200  
  clock rate 200000  
     
  interface tunnel 0  
  ip unnumbered ethernet 0  
  ip pim dense-mode (or sparse or sparse-dense mode)  
  tunnel source ethernet 0  
  tunnel destination 172.16.3.1  
  Router B  
     
  interface ethernet 0  
  ip address 172.16.3.1 255.255.255.0  
     
  interface serial 0  
  ip address 172.16.1.2 255.255.255.252  
  bandwidth 200  
     
  interface serial 1  
  ip address 172.16.1.6 255.255.255.252  
  bandwidth 200  
     
  interface tunnel 0  
  ip unnumbered ethernet 0  
  ip pim dense-mode (or sparse or sparse-dense mode)  
  tunnel source ethernet 0  
  tunnel destination 172.16.2.1  
  Load-balancing will now occur over the two serial links, but the mechanisms will be different, depending on whether the routers are process-switching or fast-switching. For process-switching, the load-balancing occurs with each packet using a round-robin method. Also, the packet counts on each link will be the same. For fast-switching, load-balancing occurs with each multicast flow because an (S,G) flow will be assigned to one of the physical interfaces.
Multicast Static Routes  
When using PIM, unicast and multicast routes are congruent. In other words, the unicast and multicast packets follow the same path. This makes sense because PIM uses the unicast routing table to make multicast routing decisions. Occasions can arise where you may want the unicast and multicast routing tables to diverge. To accomplish this route divergence, use a static multicast route (mroute).
  ip mroute source mask [protocol process-number] rpf-address | interface [distance]  
source mask
IP address and mask of the multicast source (or range of sources).
 
protocol
Optional. The unicast routing protocol (OSPF, EIGRP, and so on).
 
  process-number  
  Optional. The process number of the routing protocol that is being used.  
 
rpf-address
The Reverse Path Forwarding address used for sources matching the mroute. If rpf-address is a PIM neighbor, PIM Joins, Grafts, and Prunes are sent to it.
 
  interface  
  The interface type and number for the mroute (ethernet 0, serial 1, and so on).  
 
  distance  
Optional. This determines whether a unicast route, a DVMRP route, or a static mroute should be used for the RPF lookup. Lower distances are preferred. If the static mroute has the same distance as the other two RPF sources, the static mroute takes precedence. The default is 0.
 
  Static multicast routes are not exported or redistributed; they are local to the router on which they were configured. The first example of a static mroute is in a network in which a tunnel is used to maneuver around a non-multicast capable router (see Figure 9-8).  
   
  Figure 9-8: A static mroute is needed to direct multicast traffic over the tunnel.  
  Routers A and C would be configured with an mroute that directs multicast traffic to the tunnel.  
  ip mroute 0.0.0.0 0.0.0.0 tunnel 0  
  The next example involves a tunnel that drops multicast traffic right in the middle of your network from an external source (see Figure 9-9).  
   
  Figure 9-9: Static mroute needed for multicast traffic not originating in the internal network  
  When the RPF check is made, routes are looked up in the unicast and the static mroute tables. If we use a simple default mroute like we did in the last example, all RPF checks would point to the tunnel. We may also have internal multicast sources in our network and we would want the RPF interface to be determined from the unicast routing table and not the static mroute table. The way to accomplish this is with the following router commands:  
  ip mroute 172.16.0.0 255.255.0.0 null0 255  
  ip mroute 0.0.0.0 0.0.0.0 tunnel 0  
For sources in the 172.16.0.0 network, we will have an RPF route from both the unicast routing table and the mroute table. The administrative distance for the mroute is greater than that of the unicast routing table, so the unicast route will be used for the RPF check. Because there is a match in the mroute table, there is no need to check any other mroutes, so the default mroute will not take effect.
  For external sources, there is no route in the unicast routing table and the first mroute does not match, so the default mroute will be used. This technique is a bit strange, but it does come in handy. If you only wanted to check a particular unicast (OSPF, EIGRP, IGRP, RIP) routing protocol, use the following form:  
  ip mroute 0.0.0.0 0.0.0.0 ospf 100 null0 255  
ip mroute 0.0.0.0 0.0.0.0 tunnel 0
  Be careful, because if you reverse the order of the ip mroute statements, then the default route will always be taken.
Multicasting and Non-Broadcast Multi-Access Networks  
  A non-broadcast multi-access (NBMA) network, such as frame relay, needs special consideration in regards to multicast traffic. The network in Figure 9-10 is a partially meshed frame relay network configured as a hub and spoke arrangement.  
   
  Figure 9-10: Partially meshed Non-Broadcast Multi-Access (NBMA) network  
If the hub router needs to send a broadcast to every spoke router, then the broadcast packet needs to be replicated and sent four times, once to each spoke router. This is not a problem with an occasional broadcast packet, but with multicast traffic this method of operation can dramatically affect the bandwidth utilization on the frame relay network. For example, assume the hub router receives multicast traffic for groups that only routers B and C have joined. The multicast traffic would be replicated and sent to routers A, B, C, and D, even though A and D do not have receivers. We also assume here that all four spoke routers are running PIM. To override this behavior, configure the interface in NBMA mode.
  interface serial 0  
  ip pim nbma-mode  
  ip pim sparse-mode  
  When the hub router receives a Join from one of the spoke routers, the router records the group and the address of the joiner. Therefore, when the hub router receives a multicast packet to be forwarded over the frame relay network, the packet is only sent to the spoke routers that have joined the group. When a spoke router sends a Prune to leave the group, the forwarding entry is then deleted on the hub router. This command only works with PIM-Sparse Mode.
Multicast over ATM  
If the frame relay network in Figure 9-10 is replaced by an ATM network, then we can use multipoint virtual circuits (VCs) to limit the replication of multicast packets. By default, PIM establishes a static multipoint VC that provides a connection to each PIM neighbor. If the hub receives a multicast packet that only one PIM neighbor needs, it is still sent to all PIM neighbors.
Let's say, for instance, we would like to modify this behavior so that a multicast packet is only forwarded to those neighbors that want to receive it. Assume the routers in the network are all running PIM Sparse-Mode and the hub router is the RP. When router A sends a Join for a multicast group to the hub, the hub router sets up a multipoint VC for the group. If another spoke router joins the same group, the hub router just adds the spoke router to the multipoint VC. When traffic for the group is received by the hub, the router only needs to send one copy on the multipoint VC that was established for the group. The ATM switches between the hub and spoke routers are then responsible for replicating and delivering the packets. This feature is configured using the interface command:
  ip pim multipoint-signaling  
  This command can only be used on an ATM interface. To limit the maximum number of VCs that PIM can open for multicast traffic, use the interface command  
  ip pim vc-count number  
  number  
  Maximum number of VCs PIM can open. Default value is 200.  
 
  If the router needs to open another VC that causes the router to exceed the configured maximum VC count, then the VC with the least amount of activity is deleted. If there are multiple VCs with the same minimum amount of activity, then the VC that is connected to the fewest neighbors is deleted first. The activity level is measured in packets per second and by default all activity levels are considered when a VC needs to be deleted. To configure the activity level that determines whether VCs will be considered for deletion, use the interface command  
  ip pim minimum-vc-rate pps  
  pps  
  Set the minimum packets per second rate to the value given by pps.  
 
  If the number of VCs open already equals the maximum number allowed, then packets for new groups are sent over the static multicast VC.  
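A minimal sketch combining the three ATM commands; the interface number, VC limit, and minimum rate are illustrative values, not recommendations:

```
! Illustrative sketch: open per-group multipoint VCs on an ATM
! interface, cap PIM at 150 open VCs, and only consider VCs
! carrying fewer than 10 packets per second for deletion
interface atm 0
 ip pim sparse-mode
 ip pim multipoint-signaling
 ip pim vc-count 150
 ip pim minimum-vc-rate 10
```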

 


 
 


Cisco Multicast Routing and Switching
ISBN: 0071346473
Year: 1999