IBGP Route Reflection

Route reflection eliminates the need for a full IBGP mesh by allowing a router configured with a cluster ID in a given peer group to readvertise IBGP-learned routes to the clients associated with that group. A Juniper Networks route reflector can have multiple cluster IDs configured, and it can act as a reflector for one group of clients while also acting as a client of a higher-level route reflector as part of a hierarchical route reflection topology.
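In JUNOS software terms, reflection is enabled simply by adding the cluster statement to an IBGP peer group; the neighbors listed in that group are then treated as clients. A minimal sketch is shown below; the hostname, group name, cluster ID, and neighbor address are hypothetical placeholders rather than part of this chapter's topology:

[edit protocols bgp]
lab@host# set group rr-clients type internal
lab@host# set group rr-clients cluster 10.255.0.1
lab@host# set group rr-clients neighbor 10.255.0.2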

Configure Route Reflection

To complete this configuration example, you will need to configure a route reflection topology that complies with the following criteria:

  • You must configure at least three clusters and at least two route reflectors.

  • You must use physical address peering in at least one of your clusters.

  • The failure of any link must not break the route reflection topology.

  • The route reflection topology must not impose suboptimal routing or black holes.

  • Authentication and logging settings from the previous section must remain in effect.

Before starting your configuration, it is suggested that you first design a route reflection topology that meets all specified criteria. Figure 5.2 provides an example of such a topology.

Figure 5.2: Suggested route reflection topology

The three cluster IDs shown meet the required number of route reflection clusters, and having r3, r4, and r5 act as route reflectors also complies with the example's design requirements. The tricky aspect to this design is the requirement that you IBGP-peer to physical addresses in one of your clusters, while also ensuring that the failure of a single link will not break the reflection topology. Having r1 and r2 IBGP-peer with both r3 and r4 allows them to maintain one of their IBGP sessions in the event of an interface failure at either r3 or r4. But placing both r3 and r4 in the same cluster would be a mistake in this case, because doing so will cause r3 and r4 to ignore updates that carry their common cluster ID. This will result in missing routes on one of the reflectors should a peering interface fail on one of the two route reflectors that serve clients r1 or r2, which would violate the redundancy aspects of your design requirements.

In addition to the above points, you will need to maintain a full mesh of IBGP sessions between the route reflectors (r3, r4, and r5) while eliminating the unneeded IBGP sessions among route reflector clients. The arrows in Figure 5.2 represent the required IBGP peering relationships for this particular design.

While r5 obviously represents a single point of failure for cluster 2.2.2.2, the requirements state that your network must survive only the failure of individual links. If r4 and r5 are both configured with a cluster ID of 2.2.2.2, the presence of a cluster ID attribute with value 2.2.2.2 in route updates sent from r5 will cause the corresponding route to be ignored by r4, which in turn means that r4 will not be able to reflect these routes into clusters 1.1.1.1 or 3.3.3.3. The "no black holes" aspect of this example would be difficult, if not impossible, to achieve with this particular design, should r4 be configured with a cluster ID of 2.2.2.2 with the IBGP peering sessions shown, because this would result in black holes for the IBGP routes originated by r6 and r7 from the perspective of r4. This problem could be resolved by configuring r6 and r7 to also IBGP-peer with r4, but why bother with additional peering sessions when it is far simpler to just omit the cluster-id statement from r4?

Sidebar: Physical Interface Peering for IBGP?

The requirement that you peer to physical addresses in one of the clusters, while still providing tolerance for interface failures, is a bit unrealistic, in that the current best practice for IBGP designs would recommend that you always use loopback-based peering. This restriction is present so that an evaluation can be made regarding the candidate's understanding of route reflection design, and the different ways that cluster IDs can be assigned to route reflectors that serve a common set of clients. Being able to design a reflection topology that compensates for a less-than-ideal choice of peering interface is a valid skill, and one that all true IBGP 'experts' should possess.


The design shown in Figure 5.2 is perhaps the more straightforward solution given these requirements, but other designs are certainly possible. For example, Figure 5.3 provides an alternative, if somewhat more complex, topology that also meets all specified restrictions as part of a hierarchical reflection design.

Figure 5.3: An alternative route reflection topology

In the alternative topology, cluster 1.1.1.1 clients r1 and r2 have a single lo0-based peering session to one of the route reflectors that serve cluster 1.1.1.1. While this design does not tolerate a failure of either r3 or r4, it does provide the requisite link failure tolerance due to the use of lo0 peering. r5 provides reflection services for the network's core, which eliminates the need for an IBGP peering session between r3 and r4. You must use lo0-based IBGP peering in cluster 2.2.2.2 to meet the link failure redundancy requirements.

Clusters 3.3.3.3 and 4.4.4.4 are where things get interesting. The need to accommodate interface-based peering in at least one of your clusters forces a somewhat complex design, in which r6 acts as a reflector for client r7 using cluster ID 3.3.3.3 while r7 acts as a reflector for client r6 using cluster ID 4.4.4.4. r5 views both r6 and r7 as non-clients, while r6 and r7 both view r5 as a non-client.
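A minimal sketch of the r6/r7 portion of this alternative design is shown below. This design is not deployed in this chapter, only the reflection-related statements are shown, and the addresses on the r6-r7 link are hypothetical placeholders:

[edit]
lab@r6# set protocols bgp group cluster-3333 type internal
lab@r6# set protocols bgp group cluster-3333 cluster 3.3.3.3
lab@r6# set protocols bgp group cluster-3333 neighbor 10.0.8.2

[edit]
lab@r7# set protocols bgp group cluster-4444 type internal
lab@r7# set protocols bgp group cluster-4444 cluster 4.4.4.4
lab@r7# set protocols bgp group cluster-4444 neighbor 10.0.8.1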

While a design that pairs two route reflectors in such fashion is a bit contrived, it is a viable route reflection topology that meets all provided restrictions. Besides, you have to admit that the operator who is capable of devising such a solution has more than proven their understanding of route reflector operation through a demonstration of their ability to design around aspects of a network that may be beyond their direct control.

Configure Clusters 1.1.1.1 and 3.3.3.3

The following commands correctly configure r4 as a route reflector for cluster ID 3.3.3.3, eliminate the now-unneeded IBGP peering definitions, and reconfigure the remaining neighbor statements to reflect interface-based peering according to the topology shown earlier in Figure 5.2. You begin by renaming the existing peer group to cluster-3333 to avoid possible confusion down the road:

[edit protocols bgp]
lab@r4# rename group internal to group cluster-3333

[edit protocols bgp]
lab@r4# edit group cluster-3333

[edit protocols bgp group cluster-3333]
lab@r4# delete neighbor 10.0.3.3

[edit protocols bgp group cluster-3333]
lab@r4# delete neighbor 10.0.3.5

[edit protocols bgp group cluster-3333]
lab@r4# delete neighbor 10.0.9.6

[edit protocols bgp group cluster-3333]
lab@r4# delete neighbor 10.0.9.7

After deleting the unneeded neighbor statements, you use the CLI's rename function to configure the interface-based peering definitions for r1 and r2:

[edit protocols bgp group cluster-3333]
lab@r4# rename neighbor 10.0.6.1 to neighbor 10.0.4.5

[edit protocols bgp group cluster-3333]
lab@r4# rename neighbor 10.0.6.2 to neighbor 10.0.4.10

And now the cluster ID is assigned:

[edit protocols bgp group cluster-3333]
lab@r4# set cluster 3.3.3.3
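One additional step is assumed here: the original internal group presumably carried a lo0-based local-address statement (it is not shown being deleted in the captures), and it must be removed now that the sessions peer to physical addresses. A sketch of the removal:

[edit protocols bgp group cluster-3333]
lab@r4# delete local-address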

The configuration of r4's cluster 3.3.3.3 peer group is shown next. Note that the local-address option is no longer required due to the use of interface peering; in fact, leaving local-address set to r4's lo0 address would now prevent the establishment of the IBGP sessions in cluster 3.3.3.3:

[edit protocols bgp group cluster-3333]
lab@r4# show
type internal;
traceoptions {
    file r4-bgp;
    flag state detail;
}
authentication-key "$9$G3jkPpu1Icl"; # SECRET-DATA
export ibgp;
cluster 3.3.3.3;
neighbor 10.0.4.5;
neighbor 10.0.4.10;

The following stanza correctly configures r3 for operation as a route reflector for cluster 1.1.1.1 and also specifies interface-based peering to r1 and r2:

[edit protocols bgp group cluster-1111]
lab@r3# show
type internal;
traceoptions {
    file r3-bgp;
    flag state detail;
}
authentication-key "$9$G3jkPpu1Icl"; # SECRET-DATA
export ibgp;
cluster 1.1.1.1;
neighbor 10.0.4.14;
neighbor 10.0.4.2;

The interface peering definitions differ slightly between r3 and r4 because each router should peer to the "closest" interface possible while avoiding the use of the same physical interface for both peering sessions. To provide the required redundancy in the face of single interface or link failures, r4 is told to peer to r1's fe-0/0/2 address while r3 peers to the address associated with r1's fe-0/0/1.200 interface in this example.

With both route reflectors now configured, you can delete the unneeded neighbor statements from the cluster 1.1.1.1 and 3.3.3.3 clients. A working configuration for r1 is shown here. Note that the only changes required on the route reflector clients are the deletion of neighbor statements that do not relate to the client's route reflectors, and the redefinition of the remaining peering sessions to effect the required interface-based peering:

[edit]
lab@r1# show protocols bgp
group internal {
    type internal;
    traceoptions {
        file r1-bgp;
        flag state detail;
    }
    authentication-key "$9$oGZDk9Cu0Ic"; # SECRET-DATA
    export ibgp;
    neighbor 10.0.4.13;
    neighbor 10.0.4.9;
}

The configuration of r2 is similar to that of r1, but you will need to configure r2's peering definitions so they are compatible with the interface peering statements in place on r3 and r4.
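For reference, r2's resulting configuration should look something like the following sketch. The neighbor addresses (10.0.4.1 toward r3 and 10.0.4.9 toward r4) match the peering seen in the verification captures later in this section; the trace file name is assumed to follow the pattern used on the other routers, and the key hash is shown as a placeholder:

[edit]
lab@r2# show protocols bgp
group internal {
    type internal;
    traceoptions {
        file r2-bgp;
        flag state detail;
    }
    authentication-key "$9$..."; # SECRET-DATA
    export ibgp;
    neighbor 10.0.4.1;
    neighbor 10.0.4.9;
}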

Configure Cluster 2.2.2.2

The following commands correctly reconfigure r5 as a route reflector for cluster ID 2.2.2.2, and eliminate the now-unneeded IBGP peering definitions. Once again we start by renaming the existing group to cluster-2222 to avoid confusion down the road:

[edit protocols bgp]
lab@r5# rename group internal to group cluster-2222

[edit protocols bgp]
lab@r5# edit group cluster-2222

[edit protocols bgp group cluster-2222]
lab@r5# delete neighbor 10.0.3.3

[edit protocols bgp group cluster-2222]
lab@r5# delete neighbor 10.0.3.4

[edit protocols bgp group cluster-2222]
lab@r5# delete neighbor 10.0.6.1

[edit protocols bgp group cluster-2222]
lab@r5# delete neighbor 10.0.6.2

[edit protocols bgp group cluster-2222]
lab@r5# set cluster 2.2.2.2

The cluster-2222 peer group is now displayed:

[edit protocols bgp group cluster-2222]
lab@r5# show
type internal;
traceoptions {
    file r5-bgp;
    flag state detail;
}
local-address 10.0.3.5;
authentication-key "$9$km5FIRSyK8"; # SECRET-DATA
export ibgp;
cluster 2.2.2.2;
neighbor 10.0.9.6;
neighbor 10.0.9.7;

The only change needed for clients in cluster 2.2.2.2 is the removal of the now-unnecessary neighbor statements. A working configuration for r6 is shown:

[edit protocols bgp group internal]
lab@r6# show
type internal;
traceoptions {
    file r6-bgp;
    flag state detail;
}
local-address 10.0.9.6;
authentication-key "$9$6xvJCpBW87Nb2"; # SECRET-DATA
export ibgp;
neighbor 10.0.3.5;

r7 requires a similar configuration before you proceed to the next section.
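For completeness, r7's group should end up looking much like r6's; a sketch follows. The lo0 address 10.0.9.7 and the group name match the prompts and outputs that appear later in this section, while the trace file name is assumed and the key hash is shown as a placeholder:

[edit protocols bgp group cluster-2222]
lab@r7# show
type internal;
traceoptions {
    file r7-bgp;
    flag state detail;
}
local-address 10.0.9.7;
authentication-key "$9$..."; # SECRET-DATA
export ibgp;
neighbor 10.0.3.5;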

Configure IBGP in the Core

With all three clusters correctly configured, you now define a new IBGP group for the full mesh of IBGP connections needed in the core. The following commands correctly create and configure the core peer group on r3:

[edit protocols bgp]
lab@r3# edit group core

[edit protocols bgp group core]
lab@r3# set type internal local-address 10.0.3.3

[edit protocols bgp group core]
lab@r3# set neighbor 10.0.3.4

[edit protocols bgp group core]
lab@r3# set neighbor 10.0.3.5

[edit protocols bgp group core]
lab@r3# set authentication-key jni

You will also need to apply the BGP export policy to the core peer group, because failing to do so will cause the 192.168.x/24 routes owned by the core routers to be omitted from the updates they send to the other core routers:

[edit protocols bgp]
lab@r3# set group core export ibgp
Tip 

Do not forget to add the required IBGP state transition tracing to the newly created core group, because you must still log all IBGP state changes. Details of this type are easy to overlook while pounding away on the keyboard, so always be on your guard.
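On r3, the group-level tracing can be added with commands along these lines, mirroring the settings already used in the cluster groups:

[edit protocols bgp]
lab@r3# set group core traceoptions file r3-bgp

[edit protocols bgp]
lab@r3# set group core traceoptions flag state detail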

The new peer group for IBGP core routers is shown next:

[edit protocols bgp]
lab@r3# show group core
type internal;
traceoptions {
    file r3-bgp;
    flag state detail;
}
local-address 10.0.3.3;
authentication-key "$9$tOrr01h7Nbs2a"; # SECRET-DATA
export ibgp;
neighbor 10.0.3.4;
neighbor 10.0.3.5;

Before moving to the following verification section, you should create and commit similar core router IBGP group definitions on r4 and r5.
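For reference, the resulting core group on r5 would be along these lines. This is a sketch: r5's lo0 address of 10.0.3.5 appears in the earlier cluster-2222 listing, the trace file name is assumed, and the key hash is a placeholder:

[edit protocols bgp]
lab@r5# show group core
type internal;
traceoptions {
    file r5-bgp;
    flag state detail;
}
local-address 10.0.3.5;
authentication-key "$9$..."; # SECRET-DATA
export ibgp;
neighbor 10.0.3.3;
neighbor 10.0.3.4;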

Verify Route Reflection

The following commands are used to verify your route reflection topology. We start by verifying that all BGP sessions are established:

[edit]
lab@r3# run show bgp summary
Groups: 2 Peers: 4 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
inet.0                 8          6          0          0          0          0
Peer               AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn  State|#Active/Received/Damped...
10.0.3.4        65412        237        236       0       0     1:52:42  1/3/0      0/0/0
10.0.3.5        65412        228        234       0       0     1:52:38  3/3/0      0/0/0
10.0.4.2        65412        116        124       0       0       57:04  1/1/0      0/0/0
10.0.4.14       65412         49         55       0       2       23:08  1/1/0      0/0/0

The output confirms that all of r3's IBGP sessions have been correctly established, which is a good sign to be sure. Though the results are not shown here, you should issue the same command on all of the routers to verify that each shows the expected number of sessions, and that the sessions are in the established state.

You next confirm that no routes have been lost as a result of the reconfiguration and deployment of route reflection, beginning with r1:

[edit]
lab@r1# run show route protocol bgp 192/8 | match 192.168 | count
Count: 6 lines

The results indicate that r1 has learned six 192.168-related prefixes through the BGP protocol. Considering that there are six other routers in the test bed, and that they should all be sending a single 192.168.x/24 static route through IBGP, the output confirms that r1 is not missing routes due to the deployment of route reflection. The same command is now issued on a few other routers. It is recommended that you test all routers before deciding your network is healthy:

[edit protocols bgp]
lab@r5# run show route protocol bgp 192/8 | match 192.168 | count
Count: 6 lines

r5 also has all the IBGP routes. You now perform a quick check on r6 and r7:

[edit protocols bgp group internal]
lab@r6# run show route protocol bgp 192/8 | match 192.168 | count
Count: 6 lines

[edit protocols bgp group cluster-2222]
lab@r7# run show route protocol bgp 192/8 | match 192.168 | count
Count: 6 lines

Great, all routers now display the expected number of BGP routes. You then confirm that, thanks to your route reflection design, the failure of r3's fe-0/0/0 interface will not result in black holes. We begin by verifying that r3 is receiving two IBGP advertisements for routes that belong to clusters 1.1.1.1 and 3.3.3.3:

[edit]
lab@r3# run show route protocol bgp 192.168.10/24

inet.0: 31 destinations, 33 routes (31 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.10.0/24    *[BGP/170] 00:33:59, MED 0, localpref 100
                      AS path: I
                    > to 10.0.4.14 via fe-0/0/0.0
                    [BGP/170] 00:04:14, MED 0, localpref 100, from 10.0.3.4
                      AS path: I
                    > to 10.0.4.14 via fe-0/0/0.0

r3 is receiving advertisements for the 192.168.10/24 prefix from r1 directly, and through the route reflection services of r4. Had you assigned the same cluster ID to both r3 and r4, you would only see the route learned directly from r1.

[edit]
lab@r3# run show route protocol bgp 192.168.20/24

inet.0: 31 destinations, 33 routes (31 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.20.0/24    *[BGP/170] 01:07:58, MED 0, localpref 100
                      AS path: I
                    > to 10.0.4.2 via fe-0/0/1.0
                    [BGP/170] 00:04:17, MED 0, localpref 100, from 10.0.3.4
                      AS path: I
                    > to 10.0.4.2 via fe-0/0/1.0

The 192.168.20/24 route owned by r2 also correctly shows two viable IBGP advertisements. You next deactivate r3's fe-0/0/0 interface to break the r1-r3 IBGP peering session, with the goal of confirming that r3 will install the 192.168.10/24 route learned from r4 as the active route:

[edit]
lab@r3# deactivate interfaces fe-0/0/0

[edit]
lab@r3# commit
commit complete

[edit]
lab@r3# run show route protocol bgp 192.168.10/24 detail

inet.0: 30 destinations, 31 routes (30 active, 0 holddown, 0 hidden)
192.168.10.0/24 (1 entry, 1 announced)
        *BGP    Preference: 170/-101
                Source: 10.0.3.4
                Nexthop: 10.0.4.2 via fe-0/0/1.0, selected
                Protocol Nexthop: 10.0.4.5 Indirect nexthop: 83b9198 28
                State: <Active Int Ext>
                Local AS: 65412 Peer AS: 65412
                Age: 4:32   Metric: 0   Metric2: 10
                Task: BGP_65412.10.0.3.4+1070
                Announcement bits (3): 0-KRT 2-BGP.0.0.0.0+179 3-Resolve inet.0
                AS path: I (Originator)
                Cluster list: 3.3.3.3
                AS path: Originator ID: 10.0.6.1
                Localpref: 100
                Router ID: 10.0.3.4

As planned, r3 still has a route to 192.168.10/24. Because the protocol next hop of 10.0.4.5 is unchanged by the reflection activities of r4, we expect that a traceroute to the 192.168.10/24 prefix will succeed, albeit with an extra hop through r2:

[edit]
lab@r3# run traceroute 192.168.10.1
traceroute to 192.168.10.1 (192.168.10.1), 30 hops max, 40 byte packets
 1  10.0.4.2 (10.0.4.2)  0.646 ms  0.454 ms  0.402 ms
 2  10.0.4.5 (10.0.4.5)  0.493 ms  0.492 ms  0.464 ms
 3  10.0.4.5 (10.0.4.5)  0.485 ms !H  0.495 ms !H  0.470 ms !H

Good, the use of two different cluster IDs has resulted in the required network redundancy, despite the use of IBGP interface-based peering, which is normally not recommended due to the very lack of fault tolerance that this design compensates for. You should activate r3's fe-0/0/0 interface before proceeding.
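Reactivating the interface is simply a matter of reversing the earlier change and committing:

[edit]
lab@r3# activate interfaces fe-0/0/0

[edit]
lab@r3# commit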

You now verify that both r3 and r4 are reflecting routes from cluster 2.2.2.2 to the clients in clusters 1.1.1.1 and 3.3.3.3. Recall that a decision was made earlier to not associate r4 with cluster 2.2.2.2 to produce this very behavior:

[edit]
lab@r2# run show route protocol bgp 192.168.70.1

inet.0: 22 destinations, 29 routes (22 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.70.0/24    *[BGP/170] 03:37:10, MED 0, localpref 100, from 10.0.4.1
                      AS path: I
                    > to 10.0.4.9 via fe-0/0/1.0
                    [BGP/170] 02:33:24, MED 0, localpref 100
                      AS path: I
                    > to 10.0.4.9 via fe-0/0/1.0

Excellent! Both r3 and r4 are advertising routes from cluster 2.2.2.2 to the clients in clusters 1.1.1.1 and 3.3.3.3. In this topology, the definition of cluster ID 2.2.2.2 on r4 would have resulted in r2 seeing only one IBGP advertisement for the 192.168.70/24 prefix, which would be coming from r3, and a complete lack of cluster 2.2.2.2 routes on r4. You now deactivate r5's ATM interface to confirm that your route reflection topology does not produce suboptimal routing:

[edit]
lab@r5# deactivate interfaces at-0/2/1

[edit]
lab@r5# commit
commit complete

Once again, we examine r2's route to 192.168.70/24:

[edit]
lab@r2# run show route protocol bgp 192.168.70.1

inet.0: 22 destinations, 29 routes (22 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.70.0/24    *[BGP/170] 03:37:10, MED 0, localpref 100, from 10.0.4.1
                      AS path: I
                    > to 10.0.4.9 via fe-0/0/1.0
                    [BGP/170] 02:33:24, MED 0, localpref 100
                      AS path: I
                    > to 10.0.4.9 via fe-0/0/1.0

Hmm. Nothing has changed. With the ATM link down between r3 and r5, and r3's advertisement selected as the active route, you might anticipate that traffic sourced from r2 and destined to 192.168.70/24 destinations will now experience an extra hop through r3.

Does this mean there is a flaw in your reflection topology or is this situation a non-issue? It is not uncommon to see test requirements that may cause a candidate's attention to be focused on what appears to be a problem with the current task, when in reality the issue may relate to a completely different aspect of your configuration, or might even be perfectly normal behavior, and therefore not an issue at all.

This 'symptom' appears because both r3 and r4 are generating equal-cost IS-IS default routes (through the attached bit in their LSPs) into area 49.0002. Since backbone routes are not leaked into area 49.0002, the loss of r5's ATM interface goes undetected by r1 and r2, because they rely on the IS-IS default route to reach all inter-area destinations anyway. With two equal-cost default routes directing all traffic out of area 49.0002, r2 cannot differentiate the IBGP paths based on IGP metric, so the tie is broken in favor of the advertisement received from the BGP speaker with the lowest RID. Because r3's RID is lower than r4's, r2 (and r1) will always select r3's BGP advertisements over r4's for destinations that lie outside their IS-IS level 1 area. Displaying detailed information for the 192.168.70/24 route confirms this theory:

[edit]
lab@r2# run show route protocol bgp 192.168.70.1 detail

inet.0: 22 destinations, 29 routes (22 active, 0 holddown, 0 hidden)
192.168.70.0/24 (2 entries, 1 announced)
        *BGP    Preference: 170/-101
                Source: 10.0.4.1
                Nexthop: 10.0.4.9 via fe-0/0/1.0, selected
                Protocol Nexthop: 10.0.9.7 Indirect nexthop: 83d83b8 51
                State: <Active Int Ext>
                Local AS: 65412 Peer AS: 65412
                Age: 3:48:08   Metric: 0   Metric2: 5
                Task: BGP_65412.10.0.4.1+179
                Announcement bits (2): 0-KRT 2-Resolve inet.0
                AS path: I (Originator)
                Cluster list: 1.1.1.1 2.2.2.2
                AS path: Originator ID: 10.0.9.7
                Localpref: 100
                Router ID: 10.0.3.3
         BGP    Preference: 170/-101
                Source: 10.0.4.9
                Nexthop: 10.0.4.9 via fe-0/0/1.0, selected
                Protocol Nexthop: 10.0.9.7 Indirect nexthop: 83d83b8 51
                State: <NotBest Int Ext>
                Inactive reason: Router ID
                Local AS: 65412 Peer AS: 65412
                Age: 2:44:22   Metric: 0   Metric2: 5
                Task: BGP_65412.10.0.4.9+1069
                AS path: I (Originator)
                Cluster list: 3.3.3.3 2.2.2.2
                AS path: Originator ID: 10.0.9.7
                Localpref: 100
                Router ID: 10.0.3.4

The real irony here is that forwarding from r2 to the 192.168.70/24 prefix is unaffected by r2's choice of which BGP update to install as active because both updates indicate the same protocol next hop of 10.0.9.7! This is one reason why extra thought should be given to rewriting the BGP next hop for IBGP routes. Doing so on r3 in this example would have resulted in r2 forwarding packets to r3, and this behavior can result in suboptimal forwarding paths. The results of a traceroute from r2 to 192.168.70/24 confirm that r3 is not actually in the forwarding path, despite its advertisement being selected as the active route. So, like the old saying goes, "if it's not broken, don't fix it!"

[edit]
lab@r2# run traceroute 192.168.70.1
traceroute to 192.168.70.1 (192.168.70.1), 30 hops max, 40 byte packets
 1  10.0.4.9 (10.0.4.9)  0.348 ms  0.248 ms  0.219 ms
 2  10.0.2.9 (10.0.2.9)  0.288 ms  0.247 ms  0.235 ms
 3  10.0.8.10 (10.0.8.10)  0.235 ms  0.203 ms  0.191 ms
 4  10.0.8.10 (10.0.8.10)  0.194 ms !H  0.204 ms !H  0.193 ms !H
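As an aside, for readers wondering what rewriting the BGP next hop would involve, a hypothetical export policy is sketched below. It is shown purely for illustration and is deliberately not applied anywhere in this chapter; the policy and term names are arbitrary:

[edit policy-options]
lab@r3# show policy-statement nhs
term rewrite-nexthop {
    from protocol bgp;
    then {
        next-hop self;
    }
}

Applying such a policy as an export policy on r3's client-facing peer groups would cause the reflected routes to carry r3's address as the protocol next hop, producing exactly the extra forwarding hop described above.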

With r5's ATM interface still deactivated, we confirm that all routers still show the required number of BGP routes, and test forwarding with a traceroute or two:

[edit protocols bgp group cluster-2222]
lab@r7# run show route protocol bgp 192/8 | match 192.168 | count
Count: 6 lines

[edit]
lab@r3# run show route protocol bgp 192/8 | match 192.168 | count
Count: 6 lines

[edit]
lab@r1# run show route protocol bgp 192/8 | match 192.168 | count
Count: 6 lines

The routers still show all the IBGP routes, which is good. You now issue some traceroutes to determine the forwarding paths through your network:

[edit]
lab@r7# run traceroute 192.168.10.1
traceroute to 192.168.10.1 (192.168.10.1), 30 hops max, 40 byte packets
 1  10.0.8.9 (10.0.8.9)  0.337 ms  0.250 ms  0.218 ms
 2  10.0.2.10 (10.0.2.10)  0.276 ms  0.246 ms  0.235 ms
 3  10.0.4.10 (10.0.4.10)  0.182 ms  0.168 ms  0.156 ms
 4  10.0.4.5 (10.0.4.5)  0.716 ms  0.686 ms  0.361 ms
 5  10.0.4.5 (10.0.4.5)  0.357 ms !H  0.679 ms !H  0.371 ms !H

[edit]
lab@r7# run traceroute 192.168.20.1
traceroute to 192.168.20.1 (192.168.20.1), 30 hops max, 40 byte packets
 1  10.0.8.9 (10.0.8.9)  0.361 ms  0.248 ms  0.219 ms
 2  10.0.2.10 (10.0.2.10)  0.287 ms  0.251 ms  0.238 ms
 3  10.0.4.10 (10.0.4.10)  0.183 ms  0.169 ms  0.157 ms
 4  10.0.4.10 (10.0.4.10)  0.161 ms !H  0.171 ms !H  0.157 ms !H

You are likely to think that the 'extra' hop encountered in the traceroute to the 192.168.10/24 prefix is the result of the ATM interface outage on r5, but reactivating the interface will not change the forwarding path to cluster 1.1.1.1 and 3.3.3.3 destinations due to r5 seeing a lower metric for the 10.0.4/22 summary through its aggregated SONET link to r4. You next trace the route to 192.168.30/24, which is owned by r3:

[edit]
lab@r7# run traceroute 192.168.30.1
traceroute to 192.168.30.1 (192.168.30.1), 30 hops max, 40 byte packets
 1  10.0.8.9 (10.0.8.9)  0.352 ms  0.247 ms  0.217 ms
 2  10.0.2.10 (10.0.2.10)  0.279 ms  0.249 ms  0.235 ms
 3  10.0.2.5 (10.0.2.5)  0.308 ms  0.266 ms  0.252 ms
 4  10.0.2.5 (10.0.2.5)  0.260 ms !N  0.267 ms !N  0.257 ms !N

The extra hop encountered when tracing to the 192.168.30/24 route originated by r3 is attributable to the ATM interface deactivation, however. Results like these indicate that you have met the specified requirements for the route reflection example.

Tip 

Do not forget to reactivate r5's ATM interface when you are satisfied that your design meets the stated redundancy requirements, because its absence may cause problems in subsequent lab steps. Forgetting to re-enable an interface in the actual lab could lead to massive point loss and a resulting lack of joy at the conclusion of your day. You might try using the confirmed option when you commit these types of changes, because the router will then automatically roll back to (and commit) the previous configuration unless you confirm the change within the specified timeout. This technique can be a real lifesaver in the event that you get distracted and forget to issue the rollback yourself.
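A sketch of how the confirmed option might be used when committing the interface deactivation (the 5-minute timeout is an arbitrary choice):

[edit]
lab@r5# deactivate interfaces at-0/2/1

[edit]
lab@r5# commit confirmed 5

If a confirming commit is not issued within the 5 minutes, the router automatically reverts to the previous configuration, restoring the ATM interface.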



