Case Study: IBGP

The IBGP case study is designed to emulate a typical JNCIP-level IBGP configuration scenario. In the interest of 'mixing things up,' you will be adding the IBGP-related case study configuration to the interface and OSPF-related case study configurations produced at the end of Chapters 2 and 3, respectively. The multi-area OSPF topology is shown in Figure 5.7 so you can reacquaint yourself with it.

Figure 5.7: OSPF case study topology

Because you will now be using OSPF as your IGP, you should load and commit your saved OSPF case study configurations from Chapter 3 to ensure that your routers will look and behave like the examples shown here. Before starting the IBGP case study, you should verify the correct operation of all routers, interfaces, OSPF, and the RIP router, using the confirmation steps outlined in the case study at the end of Chapter 3. You may also want to review the OSPF case study requirements to refresh your memory as to the specifics of the IGP that will now support your IBGP configurations.
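If you want a quick spot check of this baseline before proceeding, operational commands along the following lines will do; this is only a suggested sketch, and the exact interfaces and outputs will vary by router:

lab@r3> show interfaces terse
lab@r3> show ospf neighbor
lab@r3> show route protocol rip

These commands should confirm the interface assignments, OSPF adjacencies, and (on r6 and r7) the RIP routes that you verified at the end of Chapter 3.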

You will need to refer to the criteria listing and to Figure 5.8, the case study topology, for the information required to complete this case study. It is expected that a JNCIP exam candidate will be able to complete this case study in approximately one hour, with the result being an IBGP design and configuration that exhibits no serious operational problems. Sample configurations from all seven routers are provided at the end of the case study for comparison with your own. Because multiple solutions are usually possible, differences between the provided examples and your own configurations do not necessarily indicate that mistakes have been made. Because you are graded on the overall functionality of your network and its conformity to the specified configuration criteria, various operational mode commands are included so that you may compare the behavior of your network to a known good example.

Figure 5.8: IBGP case study topology

To complete this case study, your IBGP configuration must meet the following criteria:

  • Your IBGP design must be added to the OSPF case study configuration from Chapter 3.

  • Deploy two confederations using AS 65000 and 65001.

  • Use route reflection in each confederation. Your design must use exactly two unique cluster IDs and no more than three route reflectors.

  • Suppress reflection in one of the clusters.

  • Ensure that your design will tolerate the failure of any single link/interface, and the failure of either r3 or r4.

  • You must use loopback-based IBGP peering within each cluster, and interface peering for C-BGP links.

  • Authenticate all IBGP sessions in one of the clusters with key jnx.

  • Redistribute the static routes shown earlier in Figure 5.7 into IBGP, and using communities, ensure that all routers prefer r2's 192.168.100/24 IBGP route. You must not alter the default local preference of this route.

  • Your IBGP design cannot result in any black holes or suboptimal routing.

  • Redistribute a summary of the RIP routes into IBGP from both r6 and r7.

  • r5 must IBGP load-balance to the summary route representing the RIP prefixes.

  • Make sure that r1 and r2 receive the RIP route summary from r3 and r4 through IBGP. You cannot change any protocol preference values on r3 or r4, and the BGP protocol next hop as seen by r1 and r2 must be the same as that seen by r5.

  • Configure r7 to be passive, and ensure that its IBGP sessions operate with a 45-second keepalive interval. No other session keepalive intervals should be modified.

In this case study, the RIP router is preconfigured to advertise the 192.168.0-3/24 routes to both r6 and r7. Please refer back to Chapter 3 for the details of its configuration, if you are curious.

IBGP Case Study Analysis

Each configuration requirement for the case study will now be matched to one or more valid router configurations and commands that can be used to confirm whether your network is operating within the specified case study guidelines. We begin with these criteria because they serve to establish a baseline for your BGP connectivity:

  • Deploy two confederations using AS 65000 and 65001.

  • Use route reflection in each confederation. Your design must use exactly two unique cluster IDs and no more than three route reflectors.

  • Suppress reflection in one of the clusters.

  • Ensure that your design will tolerate the failure of any single link/interface, and the failure of either r3 or r4.

  • You must use loopback-based IBGP peering within each cluster, and interface peering for C-BGP links.

Before you start pounding away on your routers, it is suggested that you take a few moments to think through your various design alternatives, and that you document your design on paper so that confusion and mistakes are less likely once you begin the act of configuring the network. Figure 5.9 illustrates a workable design that meets all requirements posed for this case study.

Figure 5.9: Suggested IBGP design

Figure 5.9 shows that r1, r2, r3, and r4 will belong to confederation 65000, and that both r3 and r4 will act as route reflectors for this cluster. r5, r6, and r7 will be in confederation 65001 with r5 acting as the cluster's route reflector. The extra IBGP peering session shown between r6 and r7 will accommodate the lack of reflection in this cluster. Lastly, the C-BGP sessions between r3, r4, and r5 will connect the two subconfederations with the required level of redundancy, despite the fact that your C-BGP connections must use interface peering.

The two C-BGP connections in the core provide the required redundancy due to the way a route reflector handles the reflection of routes from non-clients. Because a route reflector only adds its cluster ID when it is reflecting routes received from one of its clients, r4 will not add its cluster ID when it reflects the routes received from r5 into cluster 1.1.1.1. This means that r3 will not lose any BGP routes if the r3-r5 C-BGP session should fail because it will receive the same routes through the reflection services of r4, and the lack of cluster ID 1.1.1.1 in these routes allows them to be accepted by r3. Routes received from cluster 1.1.1.1 clients will have cluster ID 1.1.1.1 attached when reflected between r3 and r4, but this is not an issue because each of the cluster's route reflectors will also receive these routes directly over its IBGP peering sessions to the cluster's clients.

Although other logical BGP topologies that meet these requirements almost certainly exist, the solution shown in Figure 5.9 is one of the most straightforward designs.
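Whatever design you settle on, every router will share the same confederation skeleton in routing-options; a minimal sketch of the common fragment is shown here (the autonomous-system value is 65000 on r1 through r4 and 65001 on r5 through r7):

[edit routing-options]
autonomous-system 65000;
confederation 65412 members [ 65000 65001 ];

Getting this fragment onto all seven routers first means the remaining work is purely a matter of BGP group and neighbor definitions.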

start sidebar
C-BGP Peering to Physical Interfaces?

While there is nothing 'illegal' about configuring C-BGP sessions to peer with physical addresses, this is normally not done because of the added redundancy gained from loopback-based peering. A previous configuration example in this chapter demonstrated lo0-based C-BGP peering, so in the interest of full coverage, and to keep you on your toes, this case study requires that you use interface-based C-BGP peering. Requirements such as these are designed to force a candidate to deviate from 'run-of-the-mill' network designs, which in turn gives the proctors a mechanism for objectively gauging the candidate's depth of understanding of a given protocol's operation and of network design alternatives.

end sidebar

We begin our analysis with the basic IBGP configuration for r1:

[edit]
lab@r1# show routing-options
static {
    route 10.0.200.0/24 {
        next-hop 10.0.1.102;
        no-readvertise;
    }
}
autonomous-system 65000;
confederation 65412 members [ 65000 65001 ];

[edit]
lab@r1# show protocols bgp
group 65000 {
    type internal;
    local-address 10.0.6.1;
    neighbor 10.0.3.3;
    neighbor 10.0.3.4;
}

There is nothing really surprising here. r1 must IBGP-peer with both r3 and r4 using loopback-based peering, and it has been correctly configured to do so with the appropriate local-address and neighbor definitions. r1 belongs to AS 65000, which has been properly configured as a subconfederation of AS 65412. The configuration of r2 is virtually identical and is therefore not shown here. We next look at the basic IBGP and C-BGP configuration for r4:

[edit]
lab@r4# show routing-options
static {
    route 10.0.200.0/24 {
        next-hop 10.0.1.102;
        no-readvertise;
    }
}
autonomous-system 65000;
confederation 65412 members [ 65000 65001 ];

[edit]
lab@r4# show protocols bgp
group 65000 {
    type internal;
    local-address 10.0.3.4;
    cluster 1.1.1.1;
    neighbor 10.0.3.3;
    neighbor 10.0.6.1;
    neighbor 10.0.6.2;
}
group c-bgp {
    type external;
    neighbor 10.0.2.9 {
        peer-as 65001;
    }
}

The configuration of r4 shows that it has an IBGP group named 65000, and that it will function as a route reflector for this group using cluster ID 1.1.1.1. You can also see that r4 will have three IBGP peering sessions, one each to r1, r2, and r3. The c-bgp group has been correctly set as an external peer group, and the neighbor definition accommodates the interface peering required in this case study. The use of interface-based peering for C-BGP means that the local-address and multihop options are not needed. In fact, configuring a local address that differs from the default behavior of using the egress interface's IP address will prevent the C-BGP sessions from being established! The correct peer AS number has also been configured for peer 10.0.2.9 in this example. Because the configuration of r3 is virtually identical, it is not shown here.

r5's basic IBGP and C-BGP configuration is examined next:

[edit]
lab@r5# show routing-options
static {
    route 10.0.200.0/24 {
        next-hop 10.0.1.102;
        no-readvertise;
    }
}
autonomous-system 65001;
confederation 65412 members [ 65000 65001 ];

[edit]
lab@r5# show protocols bgp
group 65001 {
    type internal;
    local-address 10.0.3.5;
    cluster 2.2.2.2;
    no-client-reflect;
    neighbor 10.0.9.6;
    neighbor 10.0.9.7;
}
group c-bgp {
    type external;
    neighbor 10.0.2.2 {
        peer-as 65000;
    }
    neighbor 10.0.2.10 {
        peer-as 65000;
    }
}

The confederation-related configuration of r5 is similar to those of r3 and r4. The primary difference is the use of 65001 as r5's autonomous system number. Note that all routers in the test bed have been configured with a common list of AS 65412 confederation members. The 65001 internal peer group has been assigned cluster ID 2.2.2.2, making r5 a route reflector. The no-client-reflect option has been enabled to prevent r5 from reflecting routes to the clients in this group. The c-bgp group lists the interface addresses of both r3 and r4, which provides the necessary redundancy should a failure of r3, r4, or one of the peering interfaces occur.

Finally, we look at the basic IBGP configuration for r6. The configuration of r7 will be nearly identical.

[edit]
lab@r6# show routing-options
static {
    route 10.0.200.0/24 {
        next-hop 10.0.1.102;
        no-readvertise;
    }
}
aggregate {
    route 192.168.0.0/22;
}
autonomous-system 65001;
confederation 65412 members [ 65000 65001 ];

[edit]
lab@r6# show protocols bgp
group 65001 {
    type internal;
    local-address 10.0.9.6;
    neighbor 10.0.3.5;
    neighbor 10.0.9.7;
}

Because cluster 2.2.2.2's route reflector has been configured not to reflect routes, you must include an IBGP peering session between r6 and r7, as highlighted, to avoid black holes. Before proceeding, you should confirm that all IBGP and C-BGP peering sessions have been correctly established. This capture, taken from r3, shows the expected number of IBGP and C-BGP sessions and confirms that they are in the established state. Because no BGP export policy is in place yet, no routes are being advertised over the BGP peering sessions at this time, however.

[edit]
lab@r3# run show bgp summary
Groups: 2 Peers: 4 Down peers: 0
Table          Tot Paths  Act Paths Suppressed    History Damp State    Pending
inet.0                22          9          0          0          0          0
Peer               AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Damped...
10.0.2.1        65001         13         14       0       2           9 1/4/0                0/0/0
10.0.3.4        65000       5493       5501       0       0       53:12 3/5/0                0/0/0
10.0.6.1        65000        108       3389       0       0       53:04 1/2/0                0/0/0
10.0.6.2        65000        112       3384       0       0       53:00 4/11/0               0/0/0

You will now add authentication to your network in accordance with the following requirement:

  • Authenticate all IBGP sessions in one of the clusters with key jnx.

In this case, this author has opted to add authentication to cluster 2.2.2.2, simply because it will involve less typing overall. The following capture highlights the authentication information that has been added to r5's 65001 IBGP group; similar statements are needed on r6 and r7. Use caution to ensure that you do not inadvertently add authentication to the c-bgp group, as would occur if you carelessly applied the authentication-key statement at the global level.

[edit protocols bgp]
lab@r5# show
group 65001 {
    type internal;
    local-address 10.0.3.5;
    authentication-key "$9$GsjkPpu1Icl"; # SECRET-DATA
    cluster 2.2.2.2;
    no-client-reflect;
    neighbor 10.0.9.6;
    neighbor 10.0.9.7;
}
group c-bgp {
    type external;
    neighbor 10.0.2.2 {
        peer-as 65000;
    }
    neighbor 10.0.2.10 {
        peer-as 65000;
    }
}
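For contrast, the careless application warned about above would look something like the following sketch; entered at this level, the single statement would govern the c-bgp sessions as well as the sessions in cluster 2.2.2.2:

[edit protocols bgp]
lab@r5# set authentication-key jnx

Because r3 and r4 would have no matching key configured on their ends, the C-BGP sessions would then fail to reestablish.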

You can determine that authentication is working properly when you see that the IBGP sessions have been correctly reestablished in subconfederation 65001. You will now address the following case study requirement:

  • Redistribute the static routes shown earlier in Figure 5.7 into IBGP, and using communities, ensure that all routers prefer r2's 192.168.100/24 IBGP route. You must not alter the default local preference of this route.

You begin this task by defining each router's static route(s), and by defining a basic BGP export policy such as the example shown here:

[edit policy-options]
lab@r1# show policy-statement ibgp
term 1 {
    from {
        protocol static;
        route-filter 192.168.0.0/16 longer;
    }
    then accept;
}
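The capture above shows only the policy definition; the application itself is a single statement. A sketch of the global application on r1 follows (the same statement is needed on every router, as noted in the next paragraph):

[edit]
lab@r1# set protocols bgp export ibgp

[edit]
lab@r1# show protocols bgp | match export
export ibgp;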

After defining all static routes and applying your BGP export policy on all routers, verifying the presence of the 192.168.x/24 and 192.168.100/24 routes quickly assesses the general health of your network. In this case study, this author has opted to apply the export policy once, at the global level, which causes static route redistribution for all IBGP and C-BGP peers:

[edit]
lab@r4# run show route protocol bgp | match 192.168 | count
Count: 7 lines

The indication that there are seven 192.168/16 BGP routes confirms your static route definitions and the proper operation of your basic IBGP export policy. To quickly validate r3's redistribution of its static route, and to verify the particulars of what routes are being advertised, the terse switch is used on r5 as shown next:

[edit]
lab@r5# run show route protocol bgp terse

inet.0: 34 destinations, 41 routes (34 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

A Destination        P Prf   Metric 1   Metric 2  Next hop        AS path
* 192.168.10.0/24    B 170        100          0 >as1.0           (65000) I
                                                  at-0/2/1.35
                     B 170        100          0 >as1.0           (65000) I
                                                  at-0/2/1.35
* 192.168.20.0/24    B 170        100          0  as1.0           (65000) I
                                                 >at-0/2/1.35
                     B 170        100          0  as1.0           (65000) I
                                                 >at-0/2/1.35
* 192.168.30.0/24    B 170        100          0 >10.0.2.2        (65000) I
                     B 170        100          0 >at-0/2/1.35     (65000) I
* 192.168.40.0/24    B 170        100          0 >10.0.2.10       (65000) I
                     B 170        100          0 >as1.0           (65000) I
* 192.168.60.0/24    B 170        100          0 >10.0.8.5        I
* 192.168.70.0/24    B 170        100          0 >10.0.8.10       I
* 192.168.100.0/24   B 170        100          0 >as1.0           (65000) I
                                                  at-0/2/1.35
                     B 170        100          0 >as1.0           (65000) I
                                                  at-0/2/1.35

With basic redistribution working, you now tackle the 'routers must prefer r2's advertisement using communities' issue, without modifying the route's local preference. This is accomplished by defining a unique community tag at r2 according to the convention <as-number:router-number>, and by instructing r2 to attach this community only to the 192.168.100/24 route through the ibgp export modifications shown with highlights. Other routers will match on this community tag to set the route's preference to a value lower (and therefore more preferred) than the default BGP preference of 170, as described later in this section.

[edit]
lab@r2# show policy-options policy-statement ibgp
term 1 {
    from {
        protocol static;
        route-filter 192.168.0.0/16 longer;
        route-filter 192.168.100.0/24 exact next term;
    }
    then accept;
}
term 2 {
    from {
        route-filter 192.168.100.0/24 exact;
    }
    then {
        community add r2;
        accept;
    }
}

[edit]
lab@r2# show policy-options community r2
members 65412:2;

Next, you define the r2 community on r3 and r4, and you write a BGP import policy that matches on this community and sets a preference of less than 170 on these routers. Because r1 and r2 will use their local static routes in lieu of any learned BGP routes, there is no need to address their configurations further. Similarly, r6 and r7 will fall into line with whatever route is chosen as active by r5, and because r5 will receive updates only for the active routes selected by r3 and r4, you will not need the community definition or prefer-2 import policy on r5, r6, or r7. Defining the r2 community and prefer-2 import policy on all routers should not cause any operational impact, however. The modifications to r3's configuration are shown next. Similar changes are needed for r4.

[edit]
lab@r3# show policy-options policy-statement prefer-2
term 1 {
    from community r2;
    then {
        preference 20;
    }
}

[edit]
lab@r3# show policy-options community r2
members 65412:2;

[edit]
lab@r3# show protocols bgp import
import prefer-2;

In this example, the prefer-2 policy has been applied globally, even though it is needed only on the peering session to r2. This global application should cause no harm, because only r2 is attaching this community. To confirm the correct behavior, we display r3's route table entry for the 192.168.100/24 route:

[edit]
lab@r3# run show route 192.168.100/24

inet.0: 38 destinations, 50 routes (37 active, 0 holddown, 2 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.100.0/24   *[BGP/20] 00:44:33, MED 0, localpref 100, from 10.0.6.2
                      AS path: I
                    > to 10.0.4.2 via fe-0/0/1.0
                    [BGP/170] 00:45:51, MED 0, localpref 100, from 10.0.6.1
                      AS path: I
                    > to 10.0.4.14 via fe-0/0/0.0

These results confirm that r2's route is preferred, despite r1 having the lower, and therefore more preferred, RID assignment. Also note that both routes correctly display the default local preference setting, as required by your case study restrictions. The results from r5 confirm that both r3 and r4 have selected r2's route, which means r5 will automatically fall into line, which is good because it really has little other choice in the matter:

[edit]
lab@r5# run show route 192.168.100/24

inet.0: 34 destinations, 41 routes (34 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.100.0/24   *[BGP/170] 00:09:27, MED 0, localpref 100, from 10.0.2.2
                      AS path: (65000) I
                    > via as1.0
                      via at-0/2/1.35
                    [BGP/170] 00:09:23, MED 0, localpref 100, from 10.0.2.10
                      AS path: (65000) I
                    > via as1.0
                      via at-0/2/1.35

Though not shown in this display, the protocol next hop for both routes is 10.0.6.2, which proves that r2's advertisement has beaten out r1's. The correct protocol next hop is confirmed on r7 with use of the detail switch:

[edit]
lab@r7# run show route 192.168.100/24 detail

inet.0: 37 destinations, 38 routes (37 active, 0 holddown, 0 hidden)
192.168.100.0/24 (1 entry, 1 announced)
        *BGP    Preference: 170/-101
                Source: 10.0.3.5
                Nexthop: 10.0.8.9 via fe-0/3/1.0, selected
                Protocol Nexthop: 10.0.6.2 Indirect nexthop: 846e198 51
                State: <Active Int Ext>
                Local AS: 65001 Peer AS: 65001
                Age: 10:37      Metric: 0       Metric2: 46
                Task: BGP_65001.10.0.3.5+1055
                Announcement bits (2): 0-KRT 6-Resolve inet.0
                AS path: (65000) I
                Communities: 65412:2
                Localpref: 100
                Router ID: 10.0.3.5

Before considering this task complete, you should temporarily remove r2's 192.168.100/24 static route to confirm that packets start going to r1:

[edit]
lab@r2# delete routing-options static route 192.168.100.0/24

[edit]
lab@r2# commit confirmed 5
commit complete

With r2 no longer advertising the 192.168.100/24 route (at least for the next five minutes or so), you reissue the show route command on r7:

[edit]
lab@r7# run show route 192.168.100/24 detail

inet.0: 37 destinations, 38 routes (37 active, 0 holddown, 0 hidden)
192.168.100.0/24 (1 entry, 1 announced)
        *BGP    Preference: 170/-101
                Source: 10.0.3.5
                Nexthop: 10.0.8.9 via fe-0/3/1.0, selected
                Protocol Nexthop: 10.0.6.1 Indirect nexthop: 846e110 50
                State: <Active Int Ext>
                Local AS: 65001 Peer AS: 65001
                Age: 12:11      Metric: 0       Metric2: 46
                Task: BGP_65001.10.0.3.5+1055
                Announcement bits (2): 0-KRT 6-Resolve inet.0
                AS path: (65000) I
                Localpref: 100
                Router ID: 10.0.3.5

The output confirms that a failure of r2 will cause the route advertised by r1 to become active. You may notice some extra hops when you trace the route from r6 or r7 to the 192.168.100/24 prefix, as shown here:

[edit]
lab@r7# run traceroute 192.168.100.1
traceroute to 192.168.100.1 (192.168.100.1), 30 hops max, 40 byte packets
 1  10.0.8.9 (10.0.8.9)  0.339 ms  0.247 ms  0.216 ms
 2  10.0.2.10 (10.0.2.10)  0.286 ms  0.245 ms  0.231 ms
 3  10.0.4.10 (10.0.4.10)  0.173 ms  0.166 ms  0.169 ms
 4  10.0.4.5 (10.0.4.5)  0.483 ms  0.700 ms  0.379 ms
 5  10.0.4.5 (10.0.4.5)  0.363 ms !H  0.700 ms !H  0.372 ms !H

This behavior can be attributed to r5's OSPF load-balancing behavior to the 10.0.4/22 summary, which in this case has resulted in the use of r4 and the aggregated SONET interface when forwarding to the protocol next hop 10.0.6.1, as shown next:

[edit]
lab@r5# run show route 192.168.100/24

inet.0: 35 destinations, 44 routes (35 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.100.0/24   *[BGP/170] 00:28:54, MED 0, localpref 100, from 10.0.3.3
                      AS path: (65000) I
                    > via as1.0
                      via at-0/2/1.35
                    [BGP/170] 00:11:19, MED 0, localpref 100, from 10.0.3.4
                      AS path: (65000) I
                    > via as1.0
                      via at-0/2/1.35

These results confirm that the extra hops in the traceroute are not the result of your IBGP design or its implementation. The next criterion to be addressed in the case study is:

  • Redistribute a summary of the RIP routes into IBGP from both r6 and r7.

This requirement is easily accomplished with the highlighted modifications to the ibgp export policy on r6 and r7. Note that a 192.168.0/22 aggregate was already defined as part of the OSPF case study for redistribution into OSPF:

[edit policy-options policy-statement ibgp]
lab@r6# show
term 1 {
    from {
        protocol static;
        route-filter 192.168.0.0/16 longer;
    }
    then accept;
}
term 2 {
    from {
        protocol aggregate;
        route-filter 192.168.0.0/22 exact;
    }
    then accept;
}

We confirm the results at r5, where we expect to see BGP advertisements from both r6 and r7 reporting the summary route for the RIP prefixes:

[edit]
lab@r5# run show route 192.168.0/22

inet.0: 34 destinations, 44 routes (34 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.0.0/22     *[OSPF/150] 00:00:06, metric 0, tag 0
                    > to 10.0.8.5 via fe-0/0/0.0
                      to 10.0.8.10 via fe-0/0/1.0
                    [BGP/170] 00:03:34, localpref 100, from 10.0.9.6
                      AS path: I
                    > to 10.0.8.5 via fe-0/0/0.0
                    [BGP/170] 00:03:30, localpref 100, from 10.0.9.7
                      AS path: I
                    > to 10.0.8.10 via fe-0/0/1.0

With r6 and r7 correctly sending the RIP summary to r5 through IBGP, the next task is to address the load-balancing behavior at r5, in accordance with this requirement:

  • r5 must IBGP load-balance to the summary route representing the RIP prefixes.

With equal-cost IGP paths already in place between r5 and the sources of the RIP routes, you will need to enable multipath in the 65001 peer group to facilitate the desired load balancing for IBGP. You will also need to adjust protocol preferences so that r5 prefers the BGP routing source over OSPF externals, because the default preference assignments result in r5 choosing the OSPF routing source. The necessary changes to r5's configuration are highlighted here:

[edit]
lab@r5# show protocols bgp group 65001
type internal;
local-address 10.0.3.5;
authentication-key "$9$GsjkPpu1Icl"; # SECRET-DATA
cluster 2.2.2.2;
no-client-reflect;
multipath;
neighbor 10.0.9.6;
neighbor 10.0.9.7;

[edit]
lab@r5# show protocols ospf external-preference
external-preference 171;

Note that you could have achieved identical results by lowering the BGP preference instead of raising the OSPF external preference. Proper operation is confirmed by verifying that the BGP route is now active, and that it has installed both of the IGP next hops that result from the recursive lookup of the protocol next hops 10.0.9.6 and 10.0.9.7, as advertised by r6 and r7, respectively:

[edit]
lab@r5# run show route 192.168.0/22

inet.0: 34 destinations, 43 routes (34 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

192.168.0.0/22     *[BGP/170] 00:06:31, localpref 100, from 10.0.9.6
                      AS path: I
                    > to 10.0.8.5 via fe-0/0/0.0
                      to 10.0.8.10 via fe-0/0/1.0
                    [BGP/170] 00:06:03, localpref 100, from 10.0.9.7
                      AS path: I
                    > to 10.0.8.10 via fe-0/0/1.0
                    [OSPF/171] 00:06:15, metric 0, tag 0
                    > to 10.0.8.5 via fe-0/0/0.0
                      to 10.0.8.10 via fe-0/0/1.0

Compared to the previous display, the additional IGP next hops used to support IBGP load balancing are clearly evident. Your next challenge is to accommodate this stipulation:

  • Make sure that r1 and r2 receive the RIP route summary from r3 and r4 through IBGP. You cannot change any protocol preference values on r3 or r4, and the BGP protocol next hop as seen by r1 and r2 must be the same as that seen by r5.

Because you cannot alter the global preference of BGP or OSPF on r3 and r4, you will have a problem getting the 192.168.0/22 BGP route received from r5 to be considered active. Simply redistributing the route from OSPF would result in r3 and r4 listing their own RIDs in the protocol next-hop field, thereby causing the protocol next hop at r1 and r2 to differ from the true protocol next hop as advertised by r6 and r7.

The best way out of this dilemma is to deploy the advertise-inactive knob to goad r3 and r4 into advertising a BGP route that is not active due to route preference. Policy is not needed here, because the default IBGP export policy on the route reflectors causes BGP routes received from non-clients to be advertised to their clients, as listed in the BGP groups that are enabled for route reflection via the cluster statement. The default BGP export behavior will therefore get the route to r1 and r2 as required. Besides, no amount of policy can ever force the advertisement of a non-active route anyway.

The highlighted change to r3's configuration should also be added to r4's configuration:

[edit]
lab@r3# show protocols bgp group 65000
type internal;
local-address 10.0.3.3;
advertise-inactive;
cluster 1.1.1.1;
neighbor 10.0.6.1;
neighbor 10.0.6.2;
neighbor 10.0.3.4;

Confirmation of this task is quite easy. It is suggested that you first verify that r3 and r4 still prefer the OSPF route, which confirms that global preference values have not been altered (or at least that such alterations have had no net effect on your network), and then confirm that the routes are being advertised to r1 and r2 with the protocol next hop unchanged:

[edit]
lab@r3# run show route 192.168.0/22 detail

inet.0: 37 destinations, 43 routes (37 active, 0 holddown, 0 hidden)
192.168.0.0/22 (2 entries, 1 announced)
        *OSPF   Preference: 150
                Nexthop: via at-0/1/0.35, selected
                State: <Active Int Ext>
                Local AS: 65000
                Age: 3:38       Metric: 0       Tag: 0
                Task: OSPF
                Announcement bits (3): 0-KRT 3-BGP.0.0.0.0+179 4-Resolve inet.0
                AS path: I
         BGP    Preference: 170/-101
                Source: 10.0.2.1
                Nexthop: via at-0/1/0.35, selected
                Protocol Nexthop: 10.0.9.6 Indirect nexthop: 842b220 59
                State: <Int Ext>
                Inactive reason: Route Preference
                Local AS: 65000 Peer AS: 65001
                Age: 10:45      Metric2: 26
                Task: BGP_65001.10.0.2.1+179
                AS path: (65001) I Aggregator: 65001 10.0.9.6
                Localpref: 100
                Router ID: 10.0.3.5

You can see that the BGP route is inactive in deference to the lower preference value associated with OSPF, and that the protocol next hop identifies r6 as the originator of the route. You now confirm that the 192.168.0/22 summary route is present in r1 as a BGP route with the protocol next hop as originally advertised by r6:

lab@r1> show route 192.168.0/22 detail

inet.0: 26 destinations, 36 routes (26 active, 0 holddown, 0 hidden)
192.168.0.0/22 (2 entries, 1 announced)
        *BGP    Preference: 170/-101
                Source: 10.0.3.3
                Nexthop: 10.0.4.13 via fe-0/0/1.200, selected
                Protocol Nexthop: 10.0.9.6 Indirect nexthop: 83e1198 44
                State: <Active Int Ext>
                Local AS: 65000 Peer AS: 65000
                Age: 13:32      Metric2: 11
                Task: BGP_65000.10.0.3.3+179
                Announcement bits (2): 0-KRT 2-Resolve inet.0
                AS path: (65001) I Aggregator: 65001 10.0.9.6
                Localpref: 100
                Router ID: 10.0.3.3
         BGP    Preference: 170/-101
                Source: 10.0.3.4
                Nexthop: 10.0.4.13 via fe-0/0/1.200, selected
                Protocol Nexthop: 10.0.9.6 Indirect nexthop: 83e1198 44
                State: <NotBest Int Ext>
                Inactive reason: Cluster list length
                Local AS: 65000 Peer AS: 65000
                Age: 3:54       Metric2: 11
                Task: BGP_65000.10.0.3.4+179
                AS path: (65001) I (Originator) Aggregator: 65001 10.0.9.6
                Cluster list: 1.1.1.1
                Originator ID: 10.0.3.3
                Localpref: 100
                Router ID: 10.0.3.4

Good going: both r3 and r4 are advertising the route to client r1 in accordance with the specified restrictions. To complete the case study, you must now address the last requirement:

  • Configure r7 to be passive, and ensure that its IBGP sessions operate with a 45-second keepalive interval. No other session keepalive intervals should be modified.

The changes needed on r5, r6, and r7 are highlighted next. Because JUNOS software sets the keepalive interval to one-third of the session's hold time, a 135-second hold time yields the required 45-second keepalive interval. You will need to adjust the BGP hold-time on every router that IBGP-peers with r7, being careful not to modify the hold-time at the group or global levels on r5 and r6, because you are not supposed to alter the keepalive behavior of sessions that do not involve r7. You must modify the hold-time parameters on both ends of each peering session because BGP uses the lesser of the hold times proposed during session negotiation:

[edit]
lab@r7# show protocols bgp
export ibgp;
group 65001 {
    type internal;
    local-address 10.0.9.7;
    hold-time 135;
    passive;
    authentication-key "$9$.fQnEhrlMX"; # SECRET-DATA
    neighbor 10.0.3.5;
    neighbor 10.0.9.6;
}

The required change to r6's configuration is highlighted next; an equivalent neighbor-level hold-time statement is needed on r5 for its session to r7:

[edit protocols bgp]
lab@r6# show
export ibgp;
group 65001 {
    type internal;
    local-address 10.0.9.6;
    authentication-key "$9$TF6ArlMWxd"; # SECRET-DATA
    neighbor 10.0.3.5;
    neighbor 10.0.9.7 {
        hold-time 135;
    }
}

The correct session hold time is easy enough to verify:

[edit protocols bgp]
lab@r7# run show bgp neighbor | match hold
  Options: <Preference LocalAddress HoldTime AuthKey Refresh Confed>
  Local Address: 10.0.9.7 Holdtime: 135 Preference: 170
  Peer ID: 10.0.3.5        Local ID: 10.0.9.7        Active Holdtime: 135
  Options: <Preference LocalAddress HoldTime AuthKey Refresh Confed>
  Local Address: 10.0.9.7 Holdtime: 135 Preference: 170
  Peer ID: 10.0.9.6        Local ID: 10.0.9.7        Active Holdtime: 135

You should compare these results to those displayed for sessions that do not involve r7 to make sure you have not mistakenly altered the hold time of other sessions. The hold time for all other BGP sessions should remain at the 90-second default, which yields the default 30-second keepalive interval.
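A spot check of one of the untouched sessions might look like the following sketch, taken here from r3 (any session not involving r7 will do); the Active Holdtime field should report the 90-second default:

[edit]
lab@r3# run show bgp neighbor 10.0.6.1 | match Holdtime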

IBGP Case Study Configurations

The configuration stanzas modified to complete the IBGP case study are shown in Listings 5.1 through 5.7, one listing for each router in the test bed, with changes highlighted.

Listing 5.1: r1 IBGP-Related Configuration

start example
[edit]
lab@r1# show protocols bgp
export ibgp;
group 65000 {
    type internal;
    local-address 10.0.6.1;
    neighbor 10.0.3.3;
    neighbor 10.0.3.4;
}

[edit]
lab@r1# show routing-options
static {
    route 10.0.200.0/24 {
        next-hop 10.0.1.102;
        no-readvertise;
    }
    route 192.168.10.0/24 reject;
    route 192.168.100.0/24 reject;
}
autonomous-system 65000;
confederation 65412 members [ 65000 65001 ];

[edit]
lab@r1# show policy-options
policy-statement external {
    term 1 {
        from {
            route-filter 10.0.5.0/24 exact;
        }
        then {
            metric 10;
            tag 420;
            external {
                type 1;
            }
            accept;
        }
    }
    term 2 {
        then reject;
    }
}
policy-statement ibgp {
    term 1 {
        from {
            protocol static;
            route-filter 192.168.0.0/16 longer;
        }
        then accept;
    }
}
end example

Listing 5.2: r2 IBGP-Related Configuration

start example
[edit]
lab@r2# show protocols bgp
export ibgp;
group 65000 {
    type internal;
    local-address 10.0.6.2;
    neighbor 10.0.3.3;
    neighbor 10.0.3.4;
}

[edit]
lab@r2# show routing-options
static {
    route 10.0.200.0/24 {
        next-hop 10.0.1.102;
        no-readvertise;
    }
    route 192.168.20.0/24 reject;
    route 192.168.100.0/24 reject;
}
autonomous-system 65000;
confederation 65412 members [ 65000 65001 ];

[edit]
lab@r2# show policy-options
policy-statement external {
    term 1 {
        from {
            route-filter 10.0.5.0/24 exact;
        }
        then {
            metric 10;
            tag 420;
            external {
                type 1;
            }
            accept;
        }
    }
    term 2 {
        then reject;
    }
}
policy-statement ibgp {
    term 1 {
        from {
            protocol static;
            route-filter 192.168.0.0/16 longer;
            route-filter 192.168.100.0/24 exact next term;
        }
        then accept;
    }
    term 2 {
        from {
            route-filter 192.168.100.0/24 exact;
        }
        then {
            community add r2;
            accept;
        }
    }
}
community r2 members 65412:2;
end example

Listing 5.3: r3 IBGP-Related Configuration

start example
[edit]
lab@r3# show protocols bgp
import prefer-2;
export ibgp;
group 65000 {
    type internal;
    local-address 10.0.3.3;
    advertise-inactive;
    cluster 1.1.1.1;
    neighbor 10.0.6.1;
    neighbor 10.0.6.2;
    neighbor 10.0.3.4;
}
group c-bgp {
    type external;
    multihop;
    neighbor 10.0.2.1 {
        peer-as 65001;
    }
}

[edit]
lab@r3# show routing-options
static {
    route 10.0.200.0/24 {
        next-hop 10.0.1.102;
        no-readvertise;
    }
    route 192.168.30.0/24 reject;
}
autonomous-system 65000;
confederation 65412 members [ 65000 65001 ];

[edit]
lab@r3# show policy-options
policy-statement ibgp {
    term 1 {
        from {
            protocol static;
            route-filter 192.168.0.0/16 longer;
        }
        then accept;
    }
}
policy-statement prefer-2 {
    term 1 {
        from community r2;
        then {
            preference 20;
        }
    }
}
community r2 members 65412:2;
end example

Listing 5.4: r4 IBGP-Related Configuration

start example
[edit]
lab@r4# show protocols bgp
import prefer-2;
export ibgp;
group 65000 {
    type internal;
    local-address 10.0.3.4;
    advertise-inactive;
    cluster 1.1.1.1;
    neighbor 10.0.3.3;
    neighbor 10.0.6.1;
    neighbor 10.0.6.2;
}
group c-bgp {
    type external;
    neighbor 10.0.2.9 {
        peer-as 65001;
    }
}

[edit]
lab@r4# show routing-options
static {
    route 10.0.200.0/24 {
        next-hop 10.0.1.102;
        no-readvertise;
    }
    route 192.168.40.0/24 reject;
}
autonomous-system 65000;
confederation 65412 members [ 65000 65001 ];

[edit]
lab@r4# show policy-options
policy-statement ibgp {
    term 1 {
        from {
            protocol static;
            route-filter 192.168.0.0/16 longer;
        }
        then accept;
    }
}
policy-statement prefer-2 {
    term 1 {
        from community r2;
        then {
            preference 20;
        }
    }
}
community r2 members 65412:2;
end example

Listing 5.5: r5 IBGP-Related Configuration

start example
[edit]
lab@r5# show protocols
bgp {
    export ibgp;
    group 65001 {
        type internal;
        local-address 10.0.3.5;
        authentication-key "$9$GsjkPpu1Icl"; # SECRET-DATA
        cluster 2.2.2.2;
        no-client-reflect;
        multipath;
        neighbor 10.0.9.6;
        neighbor 10.0.9.7 {
            hold-time 135;
        }
    }
    group c-bgp {
        type external;
        neighbor 10.0.2.2 {
            peer-as 65000;
        }
        neighbor 10.0.2.10 {
            peer-as 65000;
        }
    }
}
ospf {
    external-preference 171;
    reference-bandwidth 1g;
    area 0.0.0.0 {
        authentication-type md5; # SECRET-DATA
        interface lo0.0;
        interface at-0/2/1.35 {
            authentication-key "$9$LaA7dskqf5F/" key-id 1; # SECRET-DATA
        }
        interface as1.0 {
            metric 6;
            authentication-key "$9$LBa7dskqf5F/" key-id 1; # SECRET-DATA
        }
    }
    area 0.0.0.20 {
        area-range 172.16.40.0/28;
        area-range 10.0.8.0/21;
        interface fe-0/0/0.0;
        interface fe-0/0/1.0;
    }
}

[edit]
lab@r5# show routing-options
static {
    route 10.0.200.0/24 {
        next-hop 10.0.1.102;
        no-readvertise;
    }
    route 192.168.50.0/24 reject;
}
autonomous-system 65001;
confederation 65412 members [ 65000 65001 ];

[edit]
lab@r5# show policy-options
policy-statement ibgp {
    term 1 {
        from {
            protocol static;
            route-filter 192.168.0.0/16 longer;
        }
        then accept;
    }
}
end example

Listing 5.6: r6 IBGP-Related Configuration

start example
[edit]
lab@r6# show protocols bgp
export ibgp;
group 65001 {
    type internal;
    local-address 10.0.9.6;
    authentication-key "$9$TF6ArlMWxd"; # SECRET-DATA
    neighbor 10.0.3.5;
    neighbor 10.0.9.7 {
        hold-time 135;
    }
}

[edit]
lab@r6# show routing-options
static {
    route 10.0.200.0/24 {
        next-hop 10.0.1.102;
        no-readvertise;
    }
    route 192.168.60.0/24 reject;
}
aggregate {
    route 192.168.0.0/22;
}
autonomous-system 65001;
confederation 65412 members [ 65000 65001 ];

[edit]
lab@r6# show policy-options
policy-statement rip-in {
    term 1 {
        from {
            protocol rip;
            route-filter 192.168.0.0/22 orlonger;
        }
        then accept;
    }
    term 2 {
        then reject;
    }
}
policy-statement ospf-out {
    term 1 {
        from {
            protocol aggregate;
            route-filter 192.168.0.0/22 exact;
        }
        then accept;
    }
    term 2 {
        then reject;
    }
}
policy-statement rip-out {
    term 1 {
        from {
            protocol ospf;
            route-filter 10.0.5.0/24 exact;
        }
        then accept;
    }
    term 2 {
        then reject;
    }
}
policy-statement ibgp {
    term 1 {
        from {
            protocol static;
            route-filter 192.168.0.0/16 longer;
        }
        then accept;
    }
    term 2 {
        from {
            protocol aggregate;
            route-filter 192.168.0.0/22 exact;
        }
        then accept;
    }
}
end example

Listing 5.7: r7 IBGP-Related Configuration

start example
[edit]
lab@r7# show protocols bgp
export ibgp;
group 65001 {
    type internal;
    local-address 10.0.9.7;
    hold-time 135;
    passive;
    authentication-key "$9$.fQnEhrlMX"; # SECRET-DATA
    neighbor 10.0.3.5;
    neighbor 10.0.9.6;
}

[edit]
lab@r7# show routing-options
static {
    route 10.0.200.0/24 {
        next-hop 10.0.1.102;
        no-readvertise;
    }
    route 192.168.70.0/24 reject;
}
aggregate {
    route 192.168.0.0/22;
}
autonomous-system 65001;
confederation 65412 members [ 65000 65001 ];

[edit]
lab@r7# show policy-options
policy-statement rip-in {
    term 1 {
        from {
            protocol rip;
            route-filter 192.168.0.0/22 orlonger;
        }
        then accept;
    }
    term 2 {
        then reject;
    }
}
policy-statement ospf-out {
    term 1 {
        from {
            protocol aggregate;
            route-filter 192.168.0.0/22 exact;
        }
        then accept;
    }
    term 2 {
        then reject;
    }
}
policy-statement rip-out {
    term 1 {
        from {
            protocol ospf;
            route-filter 10.0.5.0/24 exact;
        }
        then accept;
    }
    term 2 {
        then reject;
    }
}
policy-statement ibgp {
    term 1 {
        from {
            protocol static;
            route-filter 192.168.0.0/16 longer;
        }
        then accept;
    }
    term 2 {
        from {
            protocol aggregate;
            route-filter 192.168.0.0/22 exact;
        }
        then accept;
    }
}
end example



