Configuring Your RPs

Because your RPs are responsible for forwarding multicast traffic to segments with participating end stations, they play the most critical role in your multicast design. Choosing an inappropriate multicast routing protocol can have serious performance ramifications for your campus network. To help you with these issues, Cisco's IOS on its RPs supports IGMP, PIM, and CGMP. The following sections discuss their configuration.

Basic PIM Configuration

To configure PIM, you'll need to enable multicast routing as well as PIM on each interface, like this:

 Switch(config)# ip multicast-routing
 Switch(config)# interface interface_type [slot_#/]port_#
 Switch(config-if)# ip pim dense-mode|sparse-mode|sparse-dense-mode

Note

Remember that an RP can be a traditional router or a Layer 3 switch. The configurations in this chapter assume the latter. Also, remember that a Layer 3 switch can operate its interfaces using either a Layer 2 or a Layer 3 process. This chapter focuses on the latter. For switches whose physical interfaces operate at Layer 2, place these configuration commands on logical VLAN interfaces.


Exam Alert

The ip multicast-routing command allows the RP to perform multicast operations. However, note that you must enable multicasting on an interface with the ip pim command to have that interface process and forward multicast traffic. Executing the ip pim command on the interface also enables IGMP. There's no default mode setting on the interface; multicast traffic is disabled on all interfaces until you enable it. To view the multicast routing table, use the show ip mroute command.


dense-mode

If you choose dense-mode, the RP adds the interface to its multicast routing table and forwards multicast traffic out of all interfaces with PIM dense mode enabled. Through a flood-and-prune discovery process, segments without any participating end stations are eventually pruned from the distribution tree.
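Here's a minimal sketch of enabling dense mode, assuming a Layer 3 switch with a VLAN 10 interface (the VLAN number is hypothetical):

 Switch(config)# ip multicast-routing
 Switch(config)# interface vlan 10
 Switch(config-if)# ip pim dense-mode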

sparse-mode

If you enter sparse-mode, interfaces are included in the table only if downstream join messages are received from other PIM RPs or if IGMP reports are received in response to the RP's IGMP queries. Forwarding occurs for multicast traffic only if a rendezvous point is known. When the rendezvous point is known, the RP connected to the multicast server encapsulates the multicast packets into unicast packets and forwards them to the rendezvous point. On receiving these encapsulated multicasts, the rendezvous point strips off the encapsulation and forwards the multicast traffic; the rendezvous point essentially acts as a central point of distribution for the multicast traffic. If there's no known rendezvous point, the RP acts in a dense-mode fashion. Therefore, when you configure interfaces in sparse-mode, you'll need to set up at least one rendezvous point.
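Here's a minimal sparse-mode sketch, assuming a hypothetical VLAN 20 interface and a rendezvous point at 192.168.1.1; the ip pim rp-address command used to identify the rendezvous point is covered later in this chapter:

 Switch(config)# ip multicast-routing
 Switch(config)# interface vlan 20
 Switch(config-if)# ip pim sparse-mode
 Switch(config-if)# exit
 Switch(config)# ip pim rp-address 192.168.1.1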

sparse-dense-mode

When you're configuring the mode on the interface, specifying sparse-mode or dense-mode forces the interface to act accordingly. However, this might not be very efficient in some campus networks: There might be certain parts of your campus where dense mode is appropriate and other parts where sparse mode is more desirable. If you configure the interface in sparse-dense-mode, the interface operates in dense mode for multicast groups running in dense mode and in sparse mode for groups running in sparse mode. Note that for you to use sparse mode, you must configure a rendezvous point.
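The following sketch, assuming a hypothetical VLAN 30 interface, lets the interface choose its mode on a per-group basis:

 Switch(config)# interface vlan 30
 Switch(config-if)# ip pim sparse-dense-mode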

Designated Routers

PIM uses designated routers (DRs) on a segment to reduce the number of IGMP queries created and the number of IGMP reports sent back in response. Each PIM-enabled interface on an RP periodically generates a PIM router-query message. The PIM RP on a LAN segment with the highest IP address is automatically elected as the DR. If the DR fails, a new DR is elected using the same election process. As you'll see in the show ip pim interface output, there's no need for DRs on point-to-point links such as serial connections; they're needed only for multiaccess segments such as Ethernet. Show commands are discussed in more depth later in this chapter.

The DR's responsibility is to generate IGMP queries to determine which, if any, end stations are participating in any multicast applications. Note that only the DR will generate IGMP queries, but all RPs on the segment will process the responding IGMP reports from participating clients. To view the list of neighbors for a PIM RP, use the show ip pim neighbor command.

Configuring Rendezvous Points

In sparse-mode configurations, you need at least one rendezvous point RP to disseminate your multicast traffic. All your leaf and branch RPs must know the IP address of the rendezvous point. Leaf RPs are RPs connected to multicast end stations. Branch RPs make up the distribution tree.

To provide a more efficient distribution of your multicast traffic, you can have more than one rendezvous point for a multicast group. This also provides redundancy. You have two ways to specify which RP is the rendezvous point RP: You can hard-code this on your RPs or you can use the auto-discovery process.

Specifying Rendezvous Points Manually

If you choose to hard-code the rendezvous point on your RPs, use the following command:

 Switch(config)# ip pim rp-address rendezvous_point's_IP_address
                 [multicast_group_access_list_number] [override]

The multicast_group_access_list_number optional parameter is a standard IP access list number. In this access list, you list the multicast application addresses that you want this rendezvous point RP to be responsible for. If you've hard-coded these addresses and also have learned of a rendezvous point RP for the same multicast group via auto-discovery, the override parameter forces the RP to use the hard-coded rendezvous point RP.

One problem with manually configuring these IP addresses is that the process is prone to error in large-scale campuses. Also, efficiently propagating your multicast traffic requires a lot of configuration and management on your part.

Here's a simple example of specifying a rendezvous point:

 Switch(config)# ip pim rp-address 192.168.1.1 

In this example, 192.168.1.1 is the rendezvous point for all sparse-mode multicast streams.
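If you want the hard-coded rendezvous point to handle only certain groups, combine the command with a standard IP access list. In this sketch, access list 10 and the 225.1.1.0/24 group range are hypothetical, and override is included only to show its placement:

 Switch(config)# access-list 10 permit 225.1.1.0 0.0.0.255
 Switch(config)# ip pim rp-address 192.168.1.1 10 override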

Auto-Discovery of Rendezvous Points

For auto-discovery to work correctly, you'll have to determine which RPs will be rendezvous points for which multicast addresses and then configure auto-discovery on the RPs in your campus network; auto-discovery is disabled by default. The first step is to configure the following command on the RPs you've chosen as rendezvous points:

 Switch(config)# ip pim send-rp-announce interface_type interface_number
                 scope time_to_live [group-list access_list_number]

The interface_type and interface_number fields indicate which IP address on the specified interface of the RP will be used in its announcements of its capability to perform as a rendezvous point. All messages sent to or from the rendezvous point use this IP address. The scope parameter specifies the number of hops that the announcement messages the rendezvous point creates can propagate. You can use the group-list parameter to specify which multicast addresses this RP can perform rendezvous point functions for. This is a standard IP access list.

With this command, you can have an RP be responsible for a range of multicast addresses, where different RPs can be rendezvous points for different multicast addresses, thereby more efficiently disseminating your multicast traffic. Likewise, you can have primary and backup rendezvous points for redundancy. The auto-rendezvous point will periodically send out announcement messages on the Cisco reserved multicast address (224.0.1.39), announcing its candidacy for becoming a rendezvous point; this is called a Cisco RP announce message.

Here is a simple example:

 Switch(config)# ip pim send-rp-announce ethernet 0/1 scope 4 

In this example, the rendezvous point generates announcement messages on Ethernet0/1, which travel no more than four hops. For each interface that you want to include in the multicast tree, specify it with the preceding command.
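Extending that example, you can restrict the announcement to a particular group range with the group-list parameter. Here, access list 20 and the 226.1.1.0/24 range are hypothetical:

 Switch(config)# access-list 20 permit 226.1.1.0 0.0.0.255
 Switch(config)# ip pim send-rp-announce ethernet 0/1 scope 4 group-list 20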

A mapping agent is an RP that can dynamically discover who the rendezvous point is and the multicast addresses for which it's responsible. It does this by listening for the announcement messages from the rendezvous point(s). This information can be passed downstream to other mapping agents and, eventually, to designated RPs on segments that have multicast clients. The mapping agents do this by creating a multicast message called a Cisco RP discovery message. The multicast address used for this is 224.0.1.40. DRs listen for this message to determine which rendezvous point(s) they can get their multicast information from. They then go through the process of building a branch back to the rendezvous point to become part of the distribution tree.

When configured, mapping agents listen for announcements that candidate rendezvous points generate on 224.0.1.39. The mapping agent then takes this information and forwards it in a discovery message on 224.0.1.40 so that designated RPs can learn about the rendezvous points. By default, RPs are not mapping agents. To configure an RP as a mapping agent, execute the following command:

 Switch(config)# ip pim send-rp-discovery scope time_to_live

The scope parameter is used to keep the discovery messages within a certain hop count, perhaps preventing these messages from leaving the campus to a remote location. For example, to restrain the messages from traveling more than four hops, use this configuration:

 Switch(config)# ip pim send-rp-discovery scope 4 

Configuring PIMv2

PIMv2 is an extension of PIMv1 and is currently on track to become an IETF standard. It has the following enhancements:

  • Sparse and dense modes are defined per group, not per interface.

  • PIM uses its own packet format instead of IGMP to transport routing information.

  • Dynamic rendezvous point discovery is provided by a bootstrap router (BSR), which also provides fault tolerance.

  • PIM uses hello messages instead of queries.

Auto-discovery of rendezvous points (auto-RP) and BSR in PIMv2 are mutually exclusive. Auto-RP is Cisco-proprietary, whereas BSR will shortly be an IETF standard. Using BSR is recommended if you have only PIMv2 routers; otherwise, use auto-RP.

Interoperability

If you have a mixture of PIMv1 and v2 RPs in the same network, the v2 RPs downgrade themselves to v1. This enables you to slowly migrate from v1 to v2. During this process, you should follow these guidelines (a combined configuration sketch appears after the list):

  • Use sparse-dense mode for PIM.

  • Use auto-RP.

  • For a rendezvous point, use a v2 or v1 PIM RP; however, in a mixed environment, Cisco recommends using a v2 RP.
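Putting the first two guidelines together, here's a sketch of a migration configuration, assuming hypothetical VLAN 10 and Ethernet0/1 interfaces and a four-hop scope:

 Switch(config)# interface vlan 10
 Switch(config-if)# ip pim sparse-dense-mode
 Switch(config-if)# exit
 Switch(config)# ip pim send-rp-announce ethernet 0/1 scope 4
 Switch(config)# ip pim send-rp-discovery scope 4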

Configuration

Use the following configuration to set up PIMv2:

 Switch(config)# interface interface_type [slot_#/]port_#
 Switch(config-if)# ip pim version 1|2

You can specify either version 1 or version 2. For example, if you want to run PIM version 1 on Ethernet0/1, use this configuration:

 Switch(config)# interface ethernet 0/1
 Switch(config-if)# ip pim version 1

After you've configured the version, you need to configure the rendezvous point candidates as well as the BSR candidates:

 Switch(config)# ip pim rp-candidate interface_type interface_number
                 time_to_live [group-list access_list_number]
 Switch(config)# ip pim bsr-candidate interface_type interface_number
                 [priority]

For example, if you had only PIMv2 routers, use the latter command, like this:

 Switch(config)# ip pim bsr-candidate ethernet 0/1
 Switch(config)# ip pim bsr-candidate ethernet 0/2

This enables the PIMv2 BSR function on both Ethernet interfaces of this RP.
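To advertise the RP as a rendezvous point candidate to the BSR, you'd use the former command, following the syntax shown previously. In this sketch, access list 30, the 227.1.1.0/24 group range, and the time-to-live of 4 are hypothetical:

 Switch(config)# access-list 30 permit 227.1.1.0 0.0.0.255
 Switch(config)# ip pim rp-candidate ethernet 0/1 4 group-list 30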

Configuring CGMP

Configuring CGMP on an RP is a simple process. On your RP, configure the following:

 Switch(config)# interface interface_type [slot_#/]port_#
 Switch(config-if)# ip cgmp

Here is a simple example of setting up CGMP on an RP's Ethernet interface:

 Switch(config)# interface ethernet 0/1
 Switch(config-if)# ip cgmp

To verify the RP's configuration, use show ip igmp interface, which we discussed in the previous section.

You do not need to configure CGMP on your Catalyst switch; it's already enabled by default.

Verifying Your Multicast Configuration

To verify that the commands you entered actually enabled PIM, use the following show command:

 Switch# show ip pim interface

 Address       Interface  Mode    Neighbor  Query     DR
                                  Count     Interval
 192.168.1.1   VLAN10     Dense   1         30        192.168.1.2
 192.168.3.1   VLAN20     Dense   1         30        192.168.3.2
 192.168.4.1   VLAN30     Dense   1         30        192.168.4.2

The first IP address listed is the address of the RP's own interface, which is named in the Interface field that follows it. The Mode field describes the mode in which the interface on the RP is operating. The Neighbor Count field lists the number of downstream and upstream PIM neighbors off this interface. The Query Interval field lists how often, in seconds, the RP generates PIM router-query messages on the interface; the default is 30 seconds. The last field, DR, lists the designated RP for the LAN segment. This is important for determining which RP on a LAN segment will be generating IGMP query messages. Serial links do not have DRs; therefore, you would see an IP address of 0.0.0.0.

To see a list of PIM neighbors, use the show ip pim neighbor command. Here is an example:

 Switch# show ip pim neighbor
 PIM Neighbor Table
 Neighbor      Interface    Uptime/Expires     Ver  DR
 Address                                            Prio/Mode
 192.168.1.2   Ethernet0/1  00:02:20/250 msec  v2   1 / S

In this example, the router has one PIM neighbor (192.168.1.2) off of Ethernet0/1. This neighbor has been up for 2 minutes and 20 seconds. The neighbor is running PIM version 2 (v2) in sparse mode (S).

To verify that PIM is learning about multicast groups and updating its routing table correctly, use the show ip mroute command. In the following example, you'll examine the multicast routing table of an RSM that has dense-mode interfaces:

 Switch# show ip mroute
 IP Multicast Routing Table
 Flags: D - Dense, S - Sparse, C - Connected, L - Local,
        P - Pruned, R - RP-bit set, F - Register flag,
        T - SPT-bit set
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop, State/Mode

 (*, 224.0.252.1), uptime 1:37:38, expires 0:01:43,
                   RP is 0.0.0.0, flags: DC
   Incoming interface: Null, RPF neighbor 0.0.0.0
   Outgoing interface list:
     VLAN10, Forward/Dense, 0:15:31/0:00:00
     VLAN20, Forward/Dense, 0:15:45/0:00:00
     VLAN30, Forward/Dense, 0:16:37/0:00:00

 (192.168.1.1/32, 224.0.252.1), uptime 2:00:21, expires 0:03:32,
                   flags: C
   Incoming interface: Vlan10, RPF neighbor 192.168.3.17
   Outgoing interface list:
     VLAN20, Forward/Dense, 20:20:00/0:02:52
     VLAN30, Forward/Dense, 20:19:37/0:03:21

There are two entries in parentheses for each multicast route. The first entry is the IP address of the source, followed by the IP address of the multicast application. An asterisk (*), as in the first entry, means that the entry matches any source: The source router is unknown at this point, and the RP will flood the multicast traffic out all of its interfaces. The second listing knows the source router, which is 192.168.1.1.

There are two types of timers. The uptime timer displays the amount of time since the multicast application was discovered, and the expires timer displays how long until the entry is removed from the routing table if no information is received from a downstream RP or IGMP-capable end station. The RP field after the expires timer identifies the rendezvous point RP, if known; this will more than likely contain an entry if the mode specified is sparse-mode. The flags following this describe the type of route. In the case of the first one, DC, the route is dense mode and is directly connected to the RP.

The Incoming Interface field describes the expected source interface for the multicast packet, given the listed multicast application address. If the packet is not received on this interface, it's discarded. The RP assumes in this instance that the incoming interface is where the multicast server is located and that any other interfaces are branches from this root interface. The RPF neighbor field is the IP address of the next upstream RP that's closest to the source multicast server.

The Outgoing interface list field lists the interfaces out which the multicast packets will be forwarded. The fields listed contain the outgoing interface, the forwarding mode, and the uptime and expiration timers.


