EBGP and Policy

Various policy applications have been demonstrated throughout this book. Chapter 5 provided an overview of the default BGP policy and the rules of applying policies at the global, group, and neighbor levels of the BGP configuration hierarchy. You should recall that the default policy for BGP is to accept all received BGP routes that pass incoming sanity checks (no AS path loops), and to advertise all active BGP routes to all BGP peers while obeying IBGP rules that prevent a BGP speaker from readvertising IBGP-learned routes to other IBGP speakers except where exempted by the use of route reflection.

This chapter will focus on the use of routing policy for common EBGP-related issues such as route damping, setting next hop self, and the filtering and tagging of routes that are learned from other ASs.

Note that all of your routers currently have, at a minimum, a simple static route redistribution policy that should be applied only to your IBGP and C-BGP connections. If you have applied your IBGP policy globally, you will need to apply your EBGP policy at the EBGP group or neighbor level to ensure that EBGP peers are not subjected to your IBGP policy. In this section, you will create policies for use with your EBGP peers and modify your existing IBGP policy as needed. Bear in mind that policy cannot activate a route that is hidden, nor will it allow you to advertise a hidden route. Also, a given route can be accepted or rejected only once during policy processing, a fact that can make the ordering of your policies (or policy terms) extremely important: once a route is rejected (or accepted), it cannot be subjected to additional policy evaluation unless you are using Boolean policy groupings. Boolean policy groupings are rarely used in production networks and are therefore considered outside the scope of this book.
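Recall from Chapter 5 that a policy applied at a more specific level of the BGP hierarchy replaces, rather than adds to, a policy applied at a less specific level. As a sketch only (the ebgp-out policy name is illustrative; the t1-t2 group name matches the transit group used later in this chapter), either of the following commands attaches an EBGP-specific export policy at the group or neighbor level, overriding a globally applied IBGP policy for that group or neighbor:

[edit protocols bgp]
lab@r3# set group t1-t2 export ebgp-out

[edit protocols bgp]
lab@r3# set group t1-t2 neighbor 172.16.0.14 export ebgp-out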

EBGP Import Policy

This section outlines various requirements that can be achieved with one or more policies that are applied as input to the EBGP peering sessions in your AS. In some cases, a given policy can be applied as import, export, or as both import and export for a particular peering session. Although some of the policies outlined in this section could be applied to EBGP as an export policy, this chapter breaks down the policy assignments into import and export sections to improve the chapter's structure and organization.

Route Damping

Route damping is used to suppress the advertisement of a prefix that has been advertised and withdrawn, or has had its attributes changed, too often in a given period of time. Route damping can be applied only to EBGP due to the need for consistent routing within a given AS. Route damping requires the use of policy when the default damping parameters need to be modified or when you wish to damp only certain routes. To complete this configuration example, you must add EBGP damping according to the following requirements.

  • Damp routes received from T1 and T2 according to these criteria:

    • Damp all prefixes with a mask length equal to or greater than 17 more aggressively than routes with a mask length between 9 and 16, inclusive.

    • Damp routes with a mask length between 0 and 8, inclusive, less than routes with a mask length greater than 8.

    • Do not damp the 17.128.0.0/9 prefix at all.

It would be wise to begin this task with knowledge of the default damping parameters in JUNOS software because the default damping class can be used to save yourself some work. As of JUNOS software release 5.2, the default damping parameters are:

  • Decay half-life (when reachable) = 15 minutes

  • Maximum hold-down time = 60 minutes

  • Reuse threshold = 750

  • Cut-off (suppress) threshold = 3000

By creating three custom damping profiles, one that is more aggressive, one that is less aggressive, and another that disables damping altogether, you can use these default parameters to achieve the four damping classes required in this task. The following commands create the aggressive damping profile:

[edit policy-options]
lab@r3# set damping aggressive half-life 30

[edit policy-options]
lab@r3# set damping aggressive suppress 2500

In this example, increasing the half-life parameter or decreasing the suppress parameter, relative to those in the default damping class, is sufficient to achieve your aggressive damping goal. Doing both simply creates a profile that is all the more aggressive. Rather than a lower suppress setting, you could also opt to configure a reuse threshold that is lower (and therefore harder to reach) than the default. The next command creates the timid damping profile that will cause associated routes to have their figure of merit decreased by half every five minutes, which allows their figure of merit to decay below the profile's default reuse setting of 750 significantly faster than routes subjected to the default or aggressive damping classes:

[edit policy-options]
lab@r3# set damping timid half-life 5
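To put the timid profile in perspective, recall that a route's figure of merit is cut in half once per half-life. As a rough worked example (assuming a merit value sitting right at the default cut-off of 3000 and no further flaps), two half-lives are needed to decay below the default reuse threshold of 750:

3000 x 2^(-t/5)  <= 750  =>  t >= 10 minutes (timid)
3000 x 2^(-t/15) <= 750  =>  t >= 30 minutes (default)
3000 x 2^(-t/30) <= 750  =>  t >= 60 minutes (aggressive)

The actual reuse times vary with how high the merit has climbed, but the relative relationships between the profiles hold.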

This next statement creates a damping profile called dry with damping disabled. This is important because all EBGP routes are subject to damping once it is enabled with the damping keyword, thus making a disabled damping profile the only way to exempt certain EBGP routes from the wrath of the damping daemon (which is technically part of rpd):

[edit policy-options]
lab@r3# set damping dry disable

The three custom damping profiles are now displayed:

[edit policy-options]
lab@r3# show | find damp
damping aggressive {
    half-life 30;
    suppress 2500;
}
damping timid {
    half-life 5;
}
damping dry {
    disable;
}

You now write a policy called damp that employs route filters to match prefix lengths to one of the custom damping classes. Remember that a prefix can only be subjected to one set of damping parameters; therefore the ordering of your terms can be the difference between success and failure:

[edit policy-options policy-statement damp term 1]
lab@r3# set from route-filter 17.128.0.0/9 exact damping dry

[edit policy-options policy-statement damp term 1]
lab@r3# set from route-filter 0/0 prefix-length-range /0-/8 damping timid

[edit policy-options policy-statement damp term 1]
lab@r3# set from route-filter 0/0 prefix-length-range /17-/32 damping aggressive

The damp policy is now displayed:

[edit policy-options policy-statement damp term 1]
lab@r3# show
from {
    route-filter 17.128.0.0/9 exact damping dry;
    route-filter 0.0.0.0/0 prefix-length-range /0-/8 damping timid;
    route-filter 0.0.0.0/0 prefix-length-range /17-/32 damping aggressive;
}

Note that routes with a prefix length in the range between /9 and /16 will not match any of the route filter statements, causing them to be subjected to the default damping profile. This configuration provides the required low, medium, and high damping behavior relative to a route's prefix length while also providing the necessary exemption for the 17.128/9 prefix. You must now enable EBGP damping, and apply the damp policy as import to all routers that have EBGP peering sessions to routers T1 and T2:

[edit protocols bgp]
lab@r3# set damping

[edit protocols bgp]
lab@r3# set group t1-t2 import damp

The modified configuration for r3 is shown next with the changes highlighted:

[edit protocols bgp]
lab@r3# show
damping;
import prefer-2;
export ibgp;
group 65000 {
    type internal;
    local-address 10.0.3.3;
    advertise-inactive;
    cluster 1.1.1.1;
    neighbor 10.0.6.1;
    neighbor 10.0.6.2;
    neighbor 10.0.3.4;
}
group c-bgp {
    type external;
    neighbor 10.0.2.1 {
        peer-as 65001;
    }
}
group t1-t2 {
    type external;
    import damp;
    peer-as 65222;
    multipath;
    neighbor 172.16.0.14;
    neighbor 172.16.0.18;
}

You will need to define the same damping profiles and damping-related policy on r6 before proceeding. Do not forget to enable damping and apply the policy as shown for r3!

Tip 

It is worth noting that the damp policy does not make use of an accept terminating action, so the routes remain candidates for further policy manipulation after being subjected to one of the damping profiles. It is suggested that you avoid the use of accept when writing a BGP policy intended to process BGP routes, because all such routes that are not explicitly rejected will be accepted by the default policy after all user policies have been processed. Including an accept action in the damp policy would cause all of the routes to be accepted, thereby making them immune to any additional policies that you later apply as part of a policy chain. If you prefer not to rely on default actions, you can always add an explicit "accept all BGP routes" policy at the end of your policy chains with similar effect.
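If you do choose that approach, a minimal sketch of such a terminating policy follows (the policy name is purely illustrative); applied as the last entry in a chain, it simply makes the default acceptance of BGP routes explicit:

[edit policy-options]
lab@r3# set policy-statement accept-all-bgp term 1 from protocol bgp

[edit policy-options]
lab@r3# set policy-statement accept-all-bgp term 1 then accept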

Verify Damping

The verification of damping is easy when you have control over the remote system that is advertising the routes. In this case, you can modify the advertising router's policy to effect the advertisement and withdrawal of a given prefix while monitoring its figure of merit and damping status. Having a live Internet feed almost guarantees that a certain degree of route flap will always be present, and the following command verifies that routes are being hidden due to damping:

[edit] lab@r3# run show route damping suppressed inet.0: 111327 destinations, 222600 routes (111198 active, 0 holddown, 256 hidden) + = Active Route, - = Last Active, * = Both 62.109.0.0/19        [BGP ] 01:59:22, localpref 100                       AS path: 65222 10458 14203 701 6453 8470 15487 I                     > to 172.16.0.14 via fe-0/0/2.0                      [BGP ] 01:59:08, localpref 100 . . . 

The predicted collapse of the Internet due to routing instability must surely be imminent because over 250 routes have been hidden due to damping! Displaying the details of damped routes provides useful information:

[edit] lab@r3# run show route damping suppressed 62.109.0.0/19 detail inet.0: 111328 destinations, 222602 routes (111150 active, 0 holddown, 354 hidden) 62.109.0.0/19 (2 entries, 0 announced)           BGP                       /-101                Source: 172.16.0.14                Nexthop: 172.16.0.14 via fe-0/0/2.0, selected                Protocol Nexthop: 172.16.0.14 Indirect nexthop: c4f3440 76                State: <Hidden Ext>                Local AS: 65000 Peer AS: 65222                Age: 2:02:24 Metric2: 0                Task: BGP_65222.172.16.0.14+179                AS path: 65222 10458 14203 701 6453 8470 15487 I                Localpref: 100                Router ID: 130.130.0.1                Merit (last update/now): 2976/1873                damping-parameters: aggressive                Last update: 00:20:09 First update: 00:45:22                Flaps: 6                Suppressed. Reusable in: 00:39:40 

The highlights in this capture indicate that the displayed route has a mask length that is equal to or greater than a /17, and confirms that it has been correctly mapped to the aggressive damping profile. You can also see the route's current (and last) figure of merit value, and when the route is expected to become active if it remains stable. Locating a damped route with a /16 mask confirms that the default parameters are in effect:

[edit] lab@r3# run show route damping suppressed detail | match 0/16 138.184.0.0/16 (2 entries, 0 announced) 139.179.0.0/16 (2 entries, 0 announced) 146.249.0.0/16 (2 entries, 0 announced) 147.248.0.0/16 (2 entries, 0 announced) 150.184.0.0/16 (2 entries, 0 announced) [edit] lab@r3# run show route damping suppressed 139.179.0.0/16 detail inet.0: 111329 destinations, 222604 routes (111029 active, 0 holddown, 598 hidden) 139.179.0.0/16 (2 entries, 0 announced)           BGP                /-101                 Source: 172.16.0.18                 Nexthop: 172.16.0.18 via fe-0/0/3.0, selected                 Protocol Nexthop: 172.16.0.18 Indirect nexthop: c4f34c8 77                 State: <Hidden Ext>                 Local AS: 65000 Peer AS: 65222                 Age: 2:06:34 Metric2: 0                 Task: BGP_65222.172.16.0.18+179                 AS path: 65222 10458 14203 3967 3561 701 11331 13263 8466 I                 Localpref: 100                 Router ID: 130.130.0.2                 Merit (last update/now): 5199/3761                 Default damping parameters used                 Last update:      00:07:05 First update:      00:50:11                 Flaps: 11                 Suppressed. Reusable in:      00:35:00                 Preference will be: 170                 History entry. Expires in:       01:00:20

You might also try some fancy logical OR groupings or cascaded piping to simplify the determination of what damping profile is being used for routes with a given mask length:

lab@r3> show route damping suppressed detail | match "0 announced|damp" 62.176.64.0/22 (2 entries, 0 announced)                 damping-parameters: aggressive                 damping-parameters: aggressive 62.176.96.0/21 (2 entries, 0 announced)                 damping-parameters: aggressive                 damping-parameters: aggressive 62.181.64.0/18 (2 entries, 0 announced)                 damping-parameters: aggressive                 damping-parameters: aggressive 63.233.200.0/24 (2 entries, 0 announced)                 damping-parameters: aggressive                 damping-parameters: aggressive 64.94.183.0/24 (2 entries, 0 announced)                 damping-parameters: aggressive                 damping-parameters: aggressive . . . 

In a pinch, you can clear your EBGP neighbors a few times to simulate flaps, and watch the damped route count shoot skyward. This capture was taken after clearing r3's BGP neighbors four times:

lab@r3> show bgp summary Groups: 3 Peers: 6 Down peers: 0 Table          Tot Paths  Act Paths Suppressed    History Damp State     Pending inet.0            222585         29     222188        544     222548           0 Peer               AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn  State|#Active/Received/Damped... 172.16.0.14     65222      21493         22       0       4           49 6/ 111275/111092       0/0/0 172.16.0.18     65222      19588         22       0       4           49 6/ 111275/111092       0/0/0 10.0.2.1        65001         13         22       0       4          49 0/4/4                 0/0/0 10.0.3.4        65000         15         20       0       4          49 4/17/0                0/0/0 10.0.6.1        65000          5         23       0       6          49 11/12/0               0/0/0 10.0.6.2        65000          4         23       0       4          49 2/2/0                 0/0/0

This display would indicate that you have managed to damp virtually every route in the global Internet. When satisfied that your EBGP routes are correctly associated with a damping profile, you can issue the clear bgp damping operational mode command to restore an active status to your damped routes, which will return your customer's Internet connectivity to a semblance of normal operation.

Martian Filtering

The next configuration task involves the application of route filters and AS path regular expression matching to control the routes to be accepted from your EBGP peers. A JNCIP exam candidate is expected to know that RFC 1918 defines a set of well-known, local use-only addresses, and that ISPs filter such routes upon ingress to avoid wasting resources processing routes that no one on the public Internet can ever hope to reach. Further, routes that identify the network with all 0's or 1's, such as 128.0/16 and 128.255/16, are reserved by the IANA and are sometimes called 'guard nets' because they are the first and last network numbers within a particular classful addressing space. These routes are often referred to as Martians because they are considered unroutable and do not belong in a well-designed network, so their presence indicates some type of alien invasion from the perspective of a service provider!

Some ISPs define a superset of routes they consider bogus, with the individual members of this group termed bogons, which is interpreted as 'the elementary particles of bogosity,' as defined by http://info.astrian.net/jargon/terms/b/bogon.html. The group of bogons will normally include things like RFC 1918 routes, prefixes with masks longer than a /24, prefixes from the 0-127 (class "A") space with masks less than a /8, default routes, and so on.

Juniper Networks routers come from the factory with a Martian table containing well-known bogons such as the all 1's and all 0's guard nets. While the default entries cannot be removed, the operator can override them by adding specific entries with an allow action. The show route martians command can be used to view the list of predefined Martian routes.
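For example, the following statement, shown purely as an illustration rather than as part of this chapter's requirements, would override the default Martian entry for the 128.0/16 guard net so that routes falling under that prefix could be accepted:

[edit routing-options]
lab@r3# set martians 128.0.0.0/16 exact allow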

To complete this section, you must address the following filtering requirements.

  • Filter routes from EBGP peers according to these criteria:

    • No default route or any 0.x prefix with a mask length up to /7

    • No RFC 1918 routes

    • No prefixes longer than /24 from peer and transit sites; customer sites may send prefixes up to /28

    • No customer routes that do not originate in that customer's AS

    • No 0-127 routes with prefix lengths less than /8

The following policy will do the job for non-customer-attached routers such as r1 and r3:

[edit]
lab@r1# show policy-options policy-statement bogons
term 1 {
    from {
        route-filter 0.0.0.0/0 through 0.0.0.0/7 reject;
        route-filter 0.0.0.0/1 prefix-length-range /1-/7 reject;
    }
}
term 2 {
    from {
        route-filter 0.0.0.0/0 prefix-length-range /25-/32 reject;
        route-filter 172.16.0.0/12 orlonger reject;
        route-filter 192.168.0.0/16 orlonger reject;
        route-filter 10.0.0.0/8 orlonger reject;
    }
}

The first term eliminates the default route and the all-zeros prefixes with mask lengths up to /7 with the classic use, and pretty much only recommended use, of the through match type; it also rejects any prefix from the 0-127 space with a mask length between /1 and /7. Term 2 starts by rejecting any routes with mask lengths of /25 or longer and also eliminates RFC 1918 routes. The bogons policy must be applied to r1's EBGP peer as an import policy before it can take effect:

[edit]
lab@r1# show protocols bgp group p1
type external;
local-address 10.0.5.200;
import bogons;
peer-as 65050;
neighbor 10.0.5.254;

A similar policy should be applied to the EBGP peers on r2, r3, and r6 before moving on. The following policy addresses the needs of customer-attached routers like r7. The highlights call out the modifications made to support a customer-attached router:

[edit policy-options policy-statement bogons]
lab@r7# show
term 1 {
    from {
        route-filter 0.0.0.0/0 through 0.0.0.0/7 reject;
        route-filter 0.0.0.0/1 prefix-length-range /1-/7 reject;
    }
}
term 2 {
    from {
        route-filter 0.0.0.0/0 prefix-length-range /29-/32 reject;
        route-filter 172.16.0.0/12 orlonger reject;
        route-filter 192.168.0.0/16 orlonger reject;
        route-filter 10.0.0.0/8 orlonger reject;
    }
}
term 3 {
    from as-path c2;
    then next policy;
}
term 4 {
    then reject;
}

[edit policy-options]
lab@r7# show as-path c2
".* 65020";

The modifications to the bogons filter permit prefixes up to 28 bits in length and define an AS path regular expression that matches any route with AS 65020 as the last entry in the AS path attribute, because the originating AS is always listed last in a route's AS path. The regular expression begins with a "match all" wildcard sequence to ensure that AS path prepending at C2 will not result in the rejection of C2's routes. Term 4 rejects all routes not matching term 3. The route-filtering policy is applied as import to r7's EBGP peering session:

[edit protocols bgp]
lab@r7# set group c2 import bogons

[edit protocols bgp]
lab@r7# show group c2
type external;
import bogons;
peer-as 65020;
neighbor 172.16.0.26;

The highlighted changes shown here are needed for the application of the bogons policy to r4:

policy-statement bogons {
    term 1 {
        from {
            route-filter 0.0.0.0/0 through 0.0.0.0/7 reject;
            route-filter 0.0.0.0/1 prefix-length-range /1-/7 reject;
        }
    }
    term 2 {
        from {
            route-filter 0.0.0.0/0 prefix-length-range /29-/32 reject;
            route-filter 172.16.0.0/12 orlonger reject;
            route-filter 192.168.0.0/16 orlonger reject;
            route-filter 10.0.0.0/8 orlonger reject;
        }
    }
    term 3 {
        from as-path c1;
        then next policy;
    }
    term 4 {
        then reject;
    }
}
as-path c1 ".* 65010";

Do not forget to apply the bogons policy as import to r4's EBGP peering session before moving on to the verification section.
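Assuming r4's EBGP peering to C1 is configured in a group named c1 (the group name here is an assumption; use whatever name exists in your configuration), the application would look like this:

[edit protocols bgp]
lab@r4# set group c1 import bogons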

Verify Martian Filters

The operation of your Martian filters can be verified by displaying the routes received from your EBGP peers, both with and without the hidden switch. Routes that are rejected by your import policy will be hidden and listed among any other routes that may be hidden for different reasons. To demonstrate, consider the case of the EBGP peering session to the P1 router, where you start by displaying the received routes that are not hidden:

[edit] lab@r1# run show route source-gateway 10.0.5.254 inet.0: 110887 destinations, 110908 routes (110880 active, 0 holddown, 8 hidden) + = Active Route, - = Last Active, * = Both 3.4.0.0/20           *[BGP/170] 04:19:30, MED 0, localpref 100                         AS path: 65050 I                       > to 10.0.5.254 via fe-0/0/0.0 120.120.0.0/24       *[BGP/170] 04:19:30, MED 0, localpref 100                         AS path: 65050 I                       > to 10.0.5.254 via fe-0/0/0.0 120.120.1.0/24       *[BGP/170] 04:19:30, MED 0, localpref 100                         AS path: 65050 I                       > to 10.0.5.254 via fe-0/0/0.0 120.120.2.0/24       *[BGP/170] 04:19:30, MED 0, localpref 100                         AS path: 65050 I                       > to 10.0.5.254 via fe-0/0/0.0 120.120.3.0/24       *[BGP/170] 04:19:30, MED 0, localpref 100                         AS path: 65050 I                       > to 10.0.5.254 via fe-0/0/0.0 120.120.4.0/24       *[BGP/170] 04:19:30, MED 0, localpref 100                         AS path: 65050 I                       > to 10.0.5.254 via fe-0/0/0.0 120.120.5.0/24       *[BGP/170] 04:19:30, MED 0, localpref 100                         AS path: 65050 I                       > to 10.0.5.254 via fe-0/0/0.0 120.120.6.0/24       *[BGP/170] 04:19:30, MED 0, localpref 100                         AS path: 65050 I                       > to 10.0.5.254 via fe-0/0/0.0 120.120.7.0/24       *[BGP/170] 04:19:30, MED 0, localpref 100                         AS path: 65050 I                       > to 10.0.5.254 via fe-0/0/0.0 

The prefixes listed do not match any of the entries on your AS's bogon list, which indicates a good start. Now display the hidden routes that are received over the P1 EBGP peering session:

[edit] lab@r1# run show route source-gateway 10.0.5.254 hidden inet.0: 110860 destinations, 110881 routes (110853 active, 0 holddown, 8 hidden) + = Active Route, - = Last Active, * = Both 0.0.0.0/0            [BGP ] 04:21:10, MED 0, localpref 100                        AS path: 65050 I                      > to 10.0.5.254 via fe-0/0/0.0 0.0.0.0/4            [BGP ] 04:21:10, MED 0, localpref 100                        AS path: 65050 I                      > to 10.0.5.254 via fe-0/0/0.0 6.0.0.0/7            [BGP ] 04:21:10, MED 0, localpref 100                        AS path: 65050 I                      > to 10.0.5.254 via fe-0/0/0.0 120.120.69.128/25    [BGP ] 04:21:10, MED 0, localpref 100                        AS path: 65050 I                      > to 10.0.5.254 via fe-0/0/0.0 172.17.0.0/24        [BGP ] 04:21:10, MED 0, localpref 100                        AS path: 65050 I                      > to 10.0.5.254 via fe-0/0/0.0 192.168.4.0/24       [BGP ] 04:21:10, MED 0, localpref 100                        AS path: 65050 I                      > to 10.0.5.254 via fe-0/0/0.0

The routes that are hidden include a default route, a 0/4 prefix, some RFC 1918 prefixes, and a 120.120.69.128/25 prefix that exceeds the specified 24-bit mask length. This output confirms the correct operation of the bogons policy at r1.

The same commands should be used to verify the customer-attached router bogons policy at r4 and r7. This output shows the hidden routes on r7. Use of the source-gateway switch ensures that the output will be limited to the routes being advertised by the C2 router:

[edit protocols bgp] lab@r7# run show route source-gateway 172.16.0.26 hidden inet.0: 110882 destinations, 110885 routes (53 active, 0 holddown, 110830 hidden) + = Active Route, - = Last Active, * = Both 0.0.0.0/0           [BGP ] 01:41:44, MED 0, localpref 100                       AS path: 65020 65020 62 39 I                     > to 172.16.0.26 via fe-0/3/2.0 64.0.0.0/7          [BGP ] 01:41:45, MED 0, localpref 100                       AS path: 65020 65020 I                     > to 172.16.0.26 via fe-0/3/2.0 201.201.0.7/32      [BGP ] 01:34:09, MED 0, localpref 100                       AS path: 65020 65020 I                     > to 172.16.0.26 via fe-0/3/2.0 210.210.16.128/26   [BGP ] 01:32:34, MED 0, localpref 100                       AS path: 65020 65020 65010 I                     > to 172.16.0.26 via fe-0/3/2.0

All of the hidden routes match the defined bogon criteria; the 210.210.16.128/26 route has been filtered because of the indication that it originated in AS 65010.

Community Tagging

Your next configuration task requires that you add community strings to the routes learned over your EBGP peering sessions based on the site's designation as transit, peer, or customer. To complete this section, you must configure your routers in accordance with the following requirement:

  • Tag all EBGP routes based on the peer type that advertises them

To minimize the potential for mistakes and confusion down the road, you should, before starting your configuration, establish a plan detailing all the community values you will use. Table 6.1 lists the community values chosen for this example.

Table 6.1: Community Value Assignments

Peer Designation     Community Value
Transit              65412:100
Peer                 65412:200
Customer             65412:300

With the community plan firmly established, start by defining all three communities on each router. Do this up front, because only defined communities can be referenced in a policy statement and you will likely need to use policy-based community matches on one or more of the community strings at some future point. The following commands define all three communities on r6:

[edit policy-options]
lab@r6# set community transit members 65412:100

[edit policy-options]
lab@r6# set community peers members 65412:200

[edit policy-options]
lab@r6# set community customers members 65412:300

The community definitions are shown next:

[edit policy-options]
lab@r6# show | find comm
community customers members 65412:300;
community peers members 65412:200;
community r2 members 65412:2;
community transit members 65412:100;
. . .

These community definitions should be replicated on all remaining EBGP-peering routers before proceeding. The next step is to write an import policy that adds the appropriate community tag based on the EBGP peer's designation. The following policy accommodates routers that connect to customer sites:

[edit policy-options policy-statement community]
lab@r7# show
term 1 {
    from protocol bgp;
    then {
        community add customers;
    }
}

The community policy is applied to r7's EBGP peering session, creating a chain of import policies. Ordering of the individual policies is not significant in this case due to the lack of accept actions in the policies created thus far:

[edit]
lab@r7# show protocols bgp group c2
type external;
import [ bogons community ];
peer-as 65020;
neighbor 172.16.0.26;
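The corresponding policy for the transit-attached routers, r3 and r6, differs only in the community that is added. A sketch for r3 is shown next; appending the policy to the existing import list with a set command leaves the previously applied damp and bogons policies in place, so the resulting chain would be something like import [ damp bogons community ]:

[edit policy-options policy-statement community]
lab@r3# show
term 1 {
    from protocol bgp;
    then {
        community add transit;
    }
}

[edit protocols bgp]
lab@r3# set group t1-t2 import community

The same approach applies on the routers peering with P1, substituting the peers community.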

Verify Community Tagging

The verification of community tagging policies is a simple process. The only complicating factor is that you should now have numerous hidden routes on many of your routers, which means that you may have trouble finding all community tags on all routers, at least with the current state of your network. You can check the local routing table of each EBGP peering router to confirm that the routes were correctly tagged upon ingress. It is safe to assume that the community strings will still be present when the routes are exported to your IBGP peers, unless, of course, you decide to explicitly strip the communities using an IBGP export policy. The following command verifies correct route tagging at r3, which peers with the transit routers T1 and T2:

[edit] lab@r3# run show route source-gateway 172.16.0.14 detail inet.0: 111798 destinations, 223542 routes (110827 active, 0 holddown, 1937 hidden) 3.0.0.0/8 (2 entries, 1 announced)         *BGP Preference: 170/-101               Source: 172.16.0.14              Nexthop: 172.16.0.14 via fe-0/0/2.0, selected              Nexthop: 172.16.0.18 via fe-0/0/3.0              Protocol Nexthop: 172.16.0.14 Indirect nexthop: 849d000 53              Protocol Nexthop: 172.16.0.18 Indirect nexthop: 849d110 56              State: <Active Ext>              Local AS: 65000 Peer AS: 65222              Age: 6:18:44 Metric2: 0              Task: BGP_65222.172.16.0.14+1230              Announcement bits (3): 0-KRT 3-BGP.0.0.0.0+179 4-Resolve inet.0              AS path: 65222 10458 14203 701 1239 80 I              Communities: 65412:100              Localpref: 100              Router ID: 130.130.0.1

You can also use the CLI match function to display community tags and route prefixes as shown next:

[edit] lab@r3# run show route source-gateway 172.16.0.14 detail | match  "announced|comm" 3.0.0.0/8 (2 entries, 1 announced)                 Communities: 65412:100 4.0.0.0/8 (2 entries, 1 announced)                 Communities: 65412:100 6.0.0.0/20 (2 entries, 1 announced)                 Communities: 65412:100 6.3.0.0/18 (2 entries, 1 announced)                 Communities: 65412:100 . . . 

To confirm that your IBGP export policies are not altering the community tags, you can display the routes being advertised or received over IBGP sessions, or simply display an IBGP peer's routing table. In this example, the hidden switch is needed on r5 because the routes it has received from r7 are hidden due to next hop problems:

[edit]
lab@r5# run show route 201.201/16 hidden detail | match com
            Communities: 65412:300
            Communities: 65412:300
            Communities: 65412:300
            Communities: 65412:300
            Communities: 65412:300
            Communities: 65412:300
            Communities: 65412:300

These results confirm that the community tagging at r3 and r7 is working as per your instructions. You should test the remaining routers before moving to the next section.
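As an additional check from the advertising side, a command such as the following on r3 (output omitted here) displays the communities attached to the routes being sent to an IBGP peer; 10.0.3.4 is one of the neighbors in r3's internal peer group:

[edit]
lab@r3# run show route advertising-protocol bgp 10.0.3.4 detail | match "announced|comm"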

EBGP (and IBGP) Export Policy

The majority of this section outlines configuration requirements that can be achieved with one or more policies that are applied as export to the various EBGP peering sessions in your AS. Once you have successfully advertised an aggregate for your AS's address space to your EBGP peers, you should be able to conduct ping testing to verify the forwarding paths through your network.

You begin this section by adjusting your IBGP export policy to eliminate the numerous hidden routes present on most routers.

Repair Hidden Routes

To complete this task, you must configure IBGP export policies that meet the following requirements:

  • You can have no black holes.

  • All routers must be able to route to all EBGP destinations.

  • You cannot have suboptimal routing.

  • The 172.16.x.x prefixes must not appear in your IGP.

You start by confirming that you currently have a hidden route issue by displaying r5's route table:

lab@r5# run show route inet.0: 112321 destinations, 224602 routes (43 active, 0 holddown, 224542 hidden) + = Active Route, - = Last Active, * = Both 3.4.0.0/20         *[BGP/170] 00:27:01, MED 0, localpref 100, from 10.0.2.2                        AS path: (65000) 65050 I                      > via as1.0                     [BGP/170] 00:27:53, MED 0, localpref 100, from 10.0.2.10                        AS path: (65000) 65050 I                      > via as1.0 . . .

Yes, with 224,542 hidden routes on r5 it would seem that your network does indeed have some. Currently, the majority of these routes are hidden due to lack of reachability within your AS for the 172.16.x.x prefixes used to support your EBGP peering sessions. This theory is confirmed with the following command:

[edit]
lab@r5# run show route resolution unresolved detail
Table inet.0
144.160.0.0/24
        Protocol Nexthop: 172.16.0.14
        Indirect nexthop: 0
. . .

The first entry shows a protocol next hop of 172.16.0.14, which identifies T1 as the source of this particular route. The issue here is that your IGP cannot resolve this prefix, so the route is unusable:

[edit]
lab@r5# run show route 172.16.0.14

[edit]
lab@r5#

These routes are not hidden on r3 or r6 because those routers are directly connected to the EBGP peering links and can therefore resolve the associated EBGP next hops without the need for a recursive IGP lookup. While you could resolve this problem by running a passive IGP on your EBGP interfaces, or by redistributing the direct EBGP interface routes into your IGP, both approaches would place 172.16.x.x routes in your IGP, and this is not permitted in this example. Static routes could also fix the problem but, being bad form, they too are not allowed in this case. This leaves only one choice: adjusting your IBGP export policy to set next hop self on the routes learned from external peers.

The incorrect use of a next hop self policy can result in suboptimal routing, or hidden routes when incorrectly applied as an import policy, so you should put some thought into your next hop self plan before boldly embarking on this task. For example, consider Figure 6.3, which shows a simplified route reflection topology and a sample policy for setting next hop self.

Figure 6.3: Incorrect use of a next hop self policy

In Figure 6.3, a simple next hop self policy has been applied as export to the IBGP group containing r1 and r2. In this example, r3 is a route reflector serving r1 and r2 as clients. The problem with this particular policy is the lack of distinction between BGP routes learned internally vs. those learned externally, with the result being that r3 is now overwriting the BGP next hop in the routes it reflects between clients r1 and r2. This results in the clients having to forward through the route reflector, which is suboptimal routing in the face of the direct link that exists between r1 and r2.

The highlighted changes in the ibgp policy circumvent this problem by including the neighbor keyword as part of the match criteria for term 3 in the ibgp policy. Another workable alternative for this example would be to use community-based match criteria to determine when the BGP next hop should be overwritten. The policy shown next is designed for use by r3:

[edit policy-options policy-statement ibgp]
lab@r3# show
term 1 {
    from {
        protocol static;
        route-filter 192.168.0.0/16 longer;
    }
    then accept;
}
term 2 {
    from {
        protocol aggregate;
        route-filter 10.0.0.0/8 exact;
    }
    then accept;
}
term 3 {
    from {
        protocol bgp;
        neighbor [ 172.16.0.14 172.16.0.18 ];
    }
    then {
        next-hop self;
    }
}

You should create similar policies on all EBGP-peering routers before proceeding. This example has you modify the existing ibgp policy so there is no need to apply another policy to your IBGP peer groups. Also, because the 10.0.5/24 network used to support the EBGP peering to P1 is being redistributed into your IGP, next hop self policies are not required on r1 and r2. Applying a next hop self policy to r1 and r2 will not impair the operation of your network, however. Make sure that you specify a neighbor value of 200.200.0.1 in r4's next hop self policy to accommodate r4's loopback-based peering session to C1.
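For example, r4's version of the next hop self term might look like the following sketch; the policy name and term number simply mirror r3's ibgp policy and are assumptions here, while the 200.200.0.1 neighbor address reflects r4's loopback-based peering to C1:

[edit policy-options policy-statement ibgp]
lab@r4# show term 3
from {
    protocol bgp;
    neighbor 200.200.0.1;
}
then {
    next-hop self;
}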

Verify Next Hop Self

After committing the new policy on r3, you again examine the number of hidden routes on r5:

[edit] lab@r5# run show route inet.0: 112138 destinations, 336318 routes (112129 active, 0 holddown, 112085 hidden) + = Active Route, - = Last Active, * = Both 3.0.0.0/8          *[BGP/170] 02:36:12, localpref 100                       AS path: (65000) 65222 10458 3944 2914 7018 80 I                     > to 10.0.2.2 via at-0/2/1.35                       [BGP/170] 00:02:13, localpref 100, from 10.0.2.10                       AS path: (65000) 65222 10458 3944 2914 7018 80 I                     > via at-0/2/1.35 

Good, the current count of 112,085 hidden routes is a far sight better than the previous 224,542! Looking at one of the route's details confirms that r3 is advertising its IBGP peering address as the protocol next hop for the routes it is learning from T1 and T2:

[edit] lab@r5# run show route 3/8 detail inet.0: 112138 destinations, 336320 routes (112129 active, 0 holddown, 112087 hidden) 3.0.0.0/8 (3 entries, 1 announced)         *BGP Preference: 170/-101              Source: 10.0.2.2              Nexthop: 10.0.2.2 via at-0/2/1.35, selected              Protocol Nexthop: 10.0.2.2 Indirect nexthop: 8466330 55              State: <Active Int Ext>              Local AS: 65001 Peer AS: 65000              Age: 2:38:18 Metric2: 0              Task: BGP_65000.10.0.2.2+179              Announcement bits (3): 0-KRT 3-BGP.0.0.0.0+179 4-Resolve inet.0              AS path: (65000) 65222 10458 3944 2914 7018 80 I              Communities: 2914:420 3944:380 65412:100              Localpref: 100              Router ID: 10.0.3.3 . . .

After committing the IBGP next hop self policy changes on all routers with EBGP peers, you again look at the number of hidden routes on r5:

[edit] lab@r5# run show route inet.0: 112155 destinations, 336373 routes (112155 active, 0 holddown, 0 hidden) + = Active Route, - = Last Active, * = Both 3.0.0.0/8          *[BGP/170] 02:44:03, localpref 100                       AS path: (65000) 65222 10458 3944 2914 7018 80 I                     > to 10.0.2.2 via at-0/2/1.35                     [BGP/170] 00:10:04, localpref 100, from 10.0.2.10                       AS path: (65000) 65222 10458 3944 2914 7018 80 I                     > via at-0/2/1.35                     [BGP/170] 02:44:01, localpref 100, from 10.0.9.6                       AS path: 65222 10458 3944 2914 7018 80 I                     > to 10.0.8.5 via fe-0/0/0.0 . . . 

r5 is now free of hidden routes, which is a very good sign. You should inspect all routers for hidden routes before moving on to be sure that all of your policy modifications are working as expected. To complete this section, we verify that route reflectors are not overwriting IBGP next hops by examining the 192.168.10/24 route's next hop after it has been reflected through r3:

[edit] lab@r2# run show route 192.168.10/24 detail inet.0: 112162 destinations, 224307 routes (112161 active, 0 holddown, 2 hidden) 192.168.10.0/24 (2 entries, 1 announced)          *BGP   Preference: 170/-101                 Source: 10.0.3.3                 Nexthop: 10.0.4.5 via fe-0/0/3.0, selected                 Protocol Nexthop: 10.0.6.1 Indirect nexthop: 8466198 56                 State: <Active Int Ext>                 Local AS: 65000 Peer AS: 65000                 Age: 2:47:20 Metric: 0 Metric2: 10                 Task: BGP_65000.10.0.3.3+1035                 Announcement bits (2): 0-KRT 4-Resolve inet.0                 AS path: I (Originator)Cluster list: 1.1.1.1                 AS path: Originator ID: 10.0.6.1                 Localpref: 100                 Router ID: 10.0.3.3 . . .

The output confirms that the selective nature of your next hop policies will not result in suboptimal routing through your AS.

Route Filtering

To complete this task, you must configure policies that meet the following requirements:

  • Advertise an aggregate route for the 10/8 space to all peers

  • Advertise customer routes to all EBGP peers

  • Advertise transit and peer routes to all customers

  • Advertise peer routes to transit providers

  • Filter all other routes to EBGP peers

Customer Site Export Policy

To get the ball rolling, you start with the export policy needed by your customer-attached routers, r4 and r7. This policy addresses the route advertisement requirements for routers attached to customer sites:

[edit policy-options policy-statement cust-export]
lab@r7# show
term 1 {
    from {
        protocol aggregate;
        route-filter 10.0.0.0/8 exact;
    }
    then accept;
}
term 2 {
    from community [ customers transit peers ];
    then next policy;
}

The cust-export policy example uses term 1 to match on and explicitly accept a local aggregate route definition. The accept action is needed for this route because the default BGP policy will not accept non-BGP routes. Term 2 leverages your community tagging efforts to simplify the act of ensuring that customer sites will get the routes learned from transit, peer, and customer EBGP peering sessions. Route filters could have also been used, but this approach involves more work and is somewhat prone to error when compared to the community-based matching example shown here.

Note that because you need to advertise all BGP routes to your customers, you really needed only term 1 in this policy. The routes learned from other customer, transit, and peer locations would be advertised to C2 by default.

The cust-export policy is now applied as export to r7's c2 peer group:

[edit]
lab@r7# set protocols bgp group c2 export cust-export

Before committing the changes, you also define the 10/8 aggregate:

[edit]
lab@r7# set routing-options aggregate route 10/8

The cust-export policy can be applied to r4 with no modifications. You may want to use the load merge terminal option to reduce typing requirements. Do not forget to define the aggregate and apply the policy as export on r4 before proceeding.
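A sketch of the load merge terminal approach on r4 follows; you paste the policy text at the terminal and end the input with Ctrl+d:

[edit]
lab@r4# load merge terminal
[Type ^D at a new line to end input]
policy-options {
    policy-statement cust-export {
        term 1 {
            from {
                protocol aggregate;
                route-filter 10.0.0.0/8 exact;
            }
            then accept;
        }
        term 2 {
            from community [ customers transit peers ];
            then next policy;
        }
    }
}
load complete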

Verify Customer Site Export Policy

To verify your policy, you display the routes advertised to a customer router. Keep in mind that hidden routes cannot be advertised, so you should make sure that any hidden route problems have been resolved before verifying your EBGP export policies. Start by determining whether the aggregate for your AS is being correctly advertised:

[edit]
lab@r7# run show route advertising-protocol bgp 172.16.0.26 10/8
inet.0: 110886 destinations, 110889 routes (54 active, 0 holddown, 110833 hidden)
+ = Active Route, - = Last Active, * = Both
10.0.0.0/8 Self  I

The aggregate is present, so the presence of peer routes is now verified:

[edit] lab@r7# run show route advertising-protocol bgp 172.16.0.26 120.120/16 inet.0: 110889 destinations, 110892 routes (54 active, 0 holddown, 110836 hidden) + = Active Route, - = Last Active, * = Both 120.120.0.0/24 120.120.0.0/24 Self                                   (65000) 65050 I 120.120.1.0/24 Self                                   (65000) 65050 I 120.120.2.0/24 Self                                   (65000) 65050 I . . .

Peer routes are going out as required. You now verify that C1's routes are also being sent to C2:

[edit] lab@r7# run show route advertising-protocol bgp 172.16.0.26 200.200/16 inet.0: 112183 destinations, 224301 routes (112179 active, 0 holddown, 4 hidden) + = Active Route, - = Last Active, * = Both 200.200.0.0/16 Self                                  65222 10458 14203 701 4230 I 200.200.0.0/24 Self                                  (65000) 65010 I 200.200.0.0/28 Self                                   (65000) 65010 I 200.200.1.0/24 Self                                   (65000) 65010 I . . . 

C1's routes are being sent to C2 as required. You now verify that transit provider routes are also being advertised:

[edit]
lab@r7# run show route advertising-protocol bgp 172.16.0.26 130.130/16
inet.0: 112184 destinations, 224303 routes (112180 active, 0 holddown, 4 hidden)
+ = Active Route, - = Last Active, * = Both
130.130.0.0/16 Self                                  65222 I

With the advertisement of all EBGP routes and your network's aggregate confirmed, the last check is to make sure that you are not advertising any 192.168/16 prefixes:

[edit] lab@r7# run show route advertising-protocol bgp 172.16.0.26 192.168/16 inet.0: 110843 destinations, 110845 routes (44 active, 0 holddown, 110799 hidden) + = Active Route, - = Last Active, * = Both 192.168.20.0/24 Self                                   (65000) I 192.168.30.0/24 Self                                   (65000) I 192.168.40.0/24 Self                                   (65000) I . . .

Whoa! The 192.168/16 routes are not supposed to be going out to customers! Catching 'simple' mistakes like this before grading commences can make a fair bit of difference in your final score. r7 is advertising the 192.168/16 routes it has learned through its IBGP sessions, because your customer export policy never explicitly rejected them, which results in their acceptance by BGP's default policy. Adding a term quickly rectifies this situation:

[edit policy-options policy-statement cust-export]
lab@r7# set term 3 from route-filter 192.168/16 orlonger reject

[edit policy-options policy-statement cust-export]
lab@r7# commit
commit complete

Wise operators always take a moment to verify their fixes:

[edit]
lab@r7# run show route advertising-protocol bgp 172.16.0.26 192.168/16

[edit]
lab@r7#

Ah, perfect. Don't forget to make this change on r4's cust-export policy also!

Peer Site Export Policy

Based on the relative success of your customer export policy, you decide to use the same approach for routers that peer with P1. The following policy looks like it will meet the export requirements for the EBGP peering session to peer router P1:

[edit policy-options policy-statement peer-export]
lab@r1# show
term 1 {
    from {
        protocol aggregate;
        route-filter 10.0.0.0/8 exact;
    }
    then accept;
}
term 2 {
    from community transit;
    then reject;
}
term 3 {
    from {
        route-filter 192.168.0.0/16 orlonger reject;
    }
}

In this example, term 1 serves to advertise the local aggregate while term 2 explicitly rejects any routes carrying the transit community string. Term 3, on the other hand, reflects your ability to learn from the mistakes that were made on r7's customer export policy by explicitly filtering the 192.168/16 routes. Routes from customer and other peer locations (if there were additional peers in the test bed) will not match any of the peer-export policy terms, and will therefore fall through to the default BGP policy, where they will be advertised to P1. You now apply the peer-export policy to r1's EBGP peer group as export:

[edit protocols]
lab@r1# set bgp group p1 export peer-export

To complete this task, you define the 10/8 aggregate as was done for r7:

[edit]
lab@r1# set routing-options aggregate route 10/8

You commit your changes and decide to test the policy before you configure r2 with the same policy solution, which turns out to be a smart move on your part.

Verify Peer Site Export Policy

You display the routes being sent to the P1 router to verify that the peer-export policy meets all specified restrictions. Once again, hidden routes will not be advertised so their presence can impact the results of your verification tests. You start by verifying the aggregate for your AS is being advertised:

[edit]
lab@r1# run show route advertising-protocol bgp 10.0.5.254 10/8
inet.0: 110845 destinations, 110866 routes (110838 active, 0 holddown, 8 hidden)
+ = Active Route, - = Last Active, * = Both
10.0.0.0/8 Self                                 I

Good, the aggregate is going out as required. You next verify that you are not sending any 192.168/16 routes to the P1 router:

[edit]
lab@r1# run show route advertising-protocol bgp 10.0.5.254 192.168/16

[edit]
lab@r1#

Good, no 192.168/16 prefixes are being sent, which is in accordance with the specified restrictions. Next, display active BGP routes on r1 to get an idea of what other prefixes can be used to test your peer export policy:

 [edit] lab@r1# run show route protocol bgp inet.0: 33 destinations, 35 routes (28 active, 0 holddown, 6 hidden) + = Active Route, - = Last Active, * = Both 3.4.0.0/20          *[BGP/170] 06:50:12, MED 0, localpref 100                        AS path: 65050 I                      > to 10.0.5.254 via fe-0/0/0.0 120.120.0.0/24      *[BGP/170] 06:50:12, MED 0, localpref 100                        AS path: 65050 I                      > to 10.0.5.254 via fe-0/0/0.0 . . . 120.120.7.0/24      *[BGP/170] 06:50:12, MED 0, localpref 100                        AS path: 65050 I

This output seems a bit odd, because r1 claims to have only 35 routes, of which only 6 are hidden. Further, the only BGP routes returned are those that are learned from P1. What has happened to all your IBGP routes? Deciding to investigate this a bit further, you display BGP summary information and find yourself wishing you had worn some protective undergarments into the lab:

[edit] lab@r1# run show bgp summary Groups: 2 Peers: 3 Down peers: 2 Table          Tot Paths  Act Paths Suppressed    History Damp State     Pending inet.0                15          9          0          0          0           0 Peer               AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn  State|#Active/Received/Damped... 10.0.3.3        65000      48738        653       0       4        24:20 Active 10.0.3.4        65000        670        653       0       4        24:21 Active 10.0.5.254      65050      47387      47313       0       2     6:54:15 9/15/0 0/0/0

It would seem that, somehow, your recent actions have disrupted your IBGP sessions, or that you have experienced a hardware failure. Ping tests to the lo0-based IBGP peering addresses of r3 and r4 fail, but pings to the addresses within the OSPF area succeed:

 [edit] lab@r1# run ping 10.0.3.3 PING 10.0.3.3 (10.0.3.3): 56 data bytes ping: sendto: No route to host  ^C  --- 10.0.3.3 ping statistics --- 1 packets transmitted, 0 packets received, 100% packet loss [edit] lab@r1# run ping 10.0.4.1 count 1 PING 10.0.4.1 (10.0.4.1): 56 data bytes 64 bytes from 10.0.4.1: icmp_seq=0 ttl=254 time=0.615 ms --- 10.0.4.1 ping statistics --- 1 packets transmitted, 1 packets received, 0% packet loss round-trip min/avg/max/stddev = 0.615/0.615/0.615/0.000 ms

Displaying the route to the unreachable loopback addresses illuminates the nature of your dilemma:

[edit] lab@r1# run show route 10.0.3.3 inet.0: 33 destinations, 35 routes (28 active, 0 holddown, 6 hidden) + = Active Route, - = Last Active, * = Both 10.0.0.0/8         *[Aggregate/130] 00:29:22                       Reject 

It would seem that the local aggregate definition, which worked so well for r4 and r7, has resulted in a serious problem when applied to routers in OSPF area 1. This is because area 1 is functioning as an NSSA with summaries filtered at the ABRs, which in turn has caused the local aggregate to come back as the longest, and therefore the best, match for all destinations outside of area 1. Because an aggregate route can point to only discard or reject next hops, you have managed to black-hole your connectivity to destinations in other OSPF areas. Looks like you will need to find another way to get the 10/8 summary route into r1 and r2.

The first reaction of many candidates is to try adjusting global protocol preferences, but this approach is futile at best, because routers always use the longest match when forwarding packets, regardless of the route's preference. After wasting time with preference adjustments, many candidates will decide to define the aggregate on an ABR, because these routers carry full OSPF routing information, which results in the aggregate route not being used in deference to the more specific routes they carry. To get things moving, you delete the 10/8 aggregate route definition from r1:

[edit]
lab@r1# delete routing-options aggregate route 10.0.0.0/8

You next define the same aggregate route on r3 and adjust r3's IBGP export policy to advertise the aggregate, as shown next with highlights:

[edit routing-options]
lab@r3# set aggregate route 10/8

[edit policy-options policy-statement ibgp]
lab@r3# show
term 1 {
    from {
        protocol static;
        route-filter 192.168.0.0/16 longer;
    }
    then accept;
}
term 2 {
    from {
        protocol aggregate;
        route-filter 10.0.0.0/8 exact;
    }
    then accept;
}

After waiting for the reestablishment of the r1-r3 IBGP session, you confirm the summary route's presence on r1 as a BGP route:

[edit]
lab@r1# run show route 10/8 | match 10.0.0.0/8

[edit]

The aggregate route is not returned. You might want to use the hidden switch to see if it is actually being sent to r1:

[edit] lab@r1# run show route 10.0.0.0 hidden detail inet.0: 110836 destinations, 110858 routes (110828 active, 0 holddown, 10 hidden) 10.0.0.0/8 (2 entries, 0 announced)         BGP     Preference: 170/-101                 Next hop type: Unusable                 State: <Hidden Int Ext>                 Local AS: 65000 Peer AS: 65000                 Age: 7:36                 Task: BGP_65000.10.0.3.4+1070                 AS path: I (Originator)Aggregator: 65000 10.0.3.3                 AS path: Cluster list: 1.1.1.1                 AS path: Originator ID: 10.0.3.3                 Localpref: 100                 Router ID: 10.0.3.4             BGP Preference: 170/-101                 Next hop type: Unusable                 State: <Hidden Int Ext>                 Local AS: 65000 Peer AS: 65000                 Age: 7:36                 Task: BGP_65000.10.0.3.3+1069                 AS path: IAggregator: 65000 10.0.3.3                 Localpref: 100                 Router ID: 10.0.3.3 

The 10/8 summary route has been sent to r1, but unfortunately it has been hidden due to next hop resolution problems. Newer versions of JUNOS software will not allow a route's BGP next hop to be resolved through that very route, because this behavior could lead to recursion loops. If the aggregate route were to be installed as active on r1, the following conditions would exist:

  • 10/8 is reachable through 10.0.3.3.

  • 10.0.3.3 is reachable through 10/8.

This situation would be unproductive at best, so the router chooses to hide the 10/8 prefix to prevent the formation of a recursion loop.

Your best bet at extricating yourself from this quagmire is to define a generated route on r1 (and ultimately r2) that, unlike an aggregate route, can forward over the next hop associated with the primary contributing route, thus preventing the formation of a black hole. The generated route is defined on r1:

[edit]
lab@r1# set routing-options generate route 10/8

After the commit, you confirm that area 0 loopback addresses are no longer being black-holed, and that you have an active 10/8 summary available for export to P1:

[edit] lab@r1# run show route protocol aggregate inet.0: 110848 destinations, 110871 routes (110840 active, 1 holddown, 10 hidden) + = Active Route, - = Last Active, * = Both 10.0.0.0/8          *[Aggregate/130] 00:00:04                     > to 10.0.4.6 via fe-0/0/2.0 [edit] lab@r1# run show route 10.0.3.3 inet.0: 110847 destinations, 110870 routes (110840 active, 0 holddown, 10 hidden) + = Active Route, - = Last Active, * = Both 10.0.0.0/8           *[Aggregate/130] 00:00:12                       > to 10.0.4.6 via fe-0/0/2.0 

The results indicate that you have definitely taken a step in the right direction. The only issue now is the fact that the generated route's primary contributing route (10.0.4.0/30) causes traffic to be forwarded from r1 to r3 by way of r2:

[edit] lab@r1# run show route protocol aggregate detail inet.0: 110856 destinations, 110879 routes (110849 active, 0 holddown, 10 hidden) 10.0.0.0/8 (3 entries, 1 announced)          *Aggregate Preference: 130                   Nexthop: 10.0.4.6 via fe-0/0/2.0, selected                   State: <Active Int Ext>                   Age: 4:03                   Task: Aggregate                   Announcement bits (3): 0-KRT 3-BGP.0.0.0.0+179 4-Resolve inet.0                   AS path: I                                    Flags: Generate Depth: 0 Active                   Contributing Routes (3):                               10.0.4.0/30 proto OSPF                               10.0.4.8/30 proto OSPF                               10.0.6.2/32 proto OSPF [edit] lab@r1# run traceroute 10.0.3.3 traceroute to 10.0.3.3 (10.0.3.3), 30 hops max, 40 byte packets  1 10.0.4.6 (10.0.4.6) 0.222 ms 0.186 ms 0.114 ms  2 10.0.3.3 (10.0.3.3) 0.479 ms 0.432 ms 0.409 ms

This is a real dilemma, because a route can contribute to a generated route only when it is associated with a forwarding next hop. The direct routes for r1's connected broadcast interfaces therefore cannot contribute, as indicated by their absence in the output of the show route protocol aggregate detail command. Because the numerically lowest route with a usable next hop is chosen as the generated route's primary contributor, r1 forwards packets that match the generated route as if they were destined for the 10.0.4.0/30 primary contributor's subnet.
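As an aside, JUNOS software also lets you attach a policy to a generated route to control which routes are allowed to contribute. The following sketch (the no-r2-contrib policy name is hypothetical, and it assumes the remaining contributors resolve toward r3) would prevent 10.0.4.0/30 from being selected as the primary contributor; this chapter instead uses the static route approach described next:

set policy-options policy-statement no-r2-contrib term 1 from route-filter 10.0.4.0/30 exact
set policy-options policy-statement no-r2-contrib term 1 then reject
set policy-options policy-statement no-r2-contrib term 2 then accept
set routing-options generate route 10.0.0.0/8 policy no-r2-contrib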

The preferred way to resolve the suboptimal routing problem would be to define a static route to the 10.0.4.0/30 subnet that points to r3 as a next hop, because forcing 10.0.4.0/30 traffic from r1 through r3 is really no worse than having the same traffic transit r2. Before applying any static route, you should first verify whether static routes are permitted in your lab scenario because the capricious use of static routing may result in point loss. In this example, a single static route on r1 and r2 to circumvent this problem is permitted, so the route is defined:

[edit routing-options]
lab@r1# set static route 10.0.4.0/30 next-hop 10.0.4.13

[edit]
lab@r1# commit
commit complete

And now to confirm the results:

[edit]
lab@r1# run show route protocol aggregate detail

inet.0: 110855 destinations, 110879 routes (110848 active, 0 holddown, 10 hidden)
10.0.0.0/8 (3 entries, 1 announced)
        *Aggregate Preference: 130
                Nexthop: 10.0.4.13 via fe-0/0/1.200, selected
                State: <Active Int Ext>
                Age: 18:53
                Task: Aggregate
                Announcement bits (3): 0-KRT 3-BGP.0.0.0.0+179 4-Resolve inet.0
                AS path: I
                Flags: Generate Depth: 0 Active
                Contributing Routes (3):
                        10.0.4.0/30 proto Static
                        10.0.4.8/30 proto OSPF
                        10.0.6.2/32 proto OSPF

In an effort to savor the sweet smell of your hard-fought success, you again verify that the 10/8 summary is being correctly advertised to P1:

[edit]
lab@r1# run show route advertising-protocol bgp 10.0.5.254 10.0.0.0

inet.0: 110861 destinations, 110885 routes (110854 active, 0 holddown, 10 hidden)
+ = Active Route, - = Last Active, * = Both
10.0.0.0/8              Self                           I

Sidebar: Another Solution to the 'Aggregate Route Problem'

Another way to solve the hidden aggregate route problem described in this section, without resorting to generated or static routes, is to use policy to alter the BGP next hop associated with the aggregate route so that it no longer resolves through the summary route itself. The following policy term, when applied to r3 as part of an IBGP export policy, sets the BGP next hop for the 10/8 aggregate route to 10.0.4.13. Because this address is present in the IS-IS level 1 area (and in an OSPF totally stubby area), the advertised BGP next hop no longer needs to recurse through the 10/8 aggregate, allowing the route to become active:

[edit policy-options policy-statement ibgp-export term agg-route]
lab@r3# show
from {
    route-filter 10.0.0.0/8 exact;
}
then {
    next-hop 10.0.4.13;
}
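A sketch of how such a term might be put into service, assuming r3's IBGP peer group is named int and that ibgp-export is the router's existing IBGP export policy chain (both names are assumptions used for illustration only):

set protocols bgp group int export ibgp-export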

End of sidebar.

Because of the next hop self policy deployed at the beginning of this section, there should be no hidden routes on r1, so you can now test the remainder of your peer export policies.
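As a refresher, a next hop self policy of the kind assumed here is typically applied as export toward IBGP peers on the routers that hold EBGP sessions. A minimal sketch follows; the policy name nhs and the group name int are placeholders rather than the chapter's actual names:

policy-statement nhs {
    term 1 {
        from protocol bgp;
        then {
            next-hop self;
        }
    }
}

Such a policy would be applied with something along the lines of set protocols bgp group int export nhs. With next hop self in place, you begin by confirming that no transit provider routes are being advertised to P1: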

[edit]
lab@r1# run show route advertising-protocol bgp 10.0.5.254 130.130/16

[edit]

The 130.130/16 route advertised by your transit provider is active on r1 and is correctly omitted from r1's EBGP advertisements to P1. You now verify that customer routes are being correctly advertised:

[edit]
lab@r1# run show route advertising-protocol bgp 10.0.5.254 200.200/16

inet.0: 112180 destinations, 224336 routes (112175 active, 0 holddown, 8 hidden)
+ = Active Route, - = Last Active, * = Both
200.200.0.0/24          Self                           65010 I
200.200.0.0/28          Self                           65010 I
. . .

[edit]
lab@r1# run show route advertising-protocol bgp 10.0.5.254 201.201/16

inet.0: 112184 destinations, 224344 routes (112179 active, 0 holddown, 8 hidden)
+ = Active Route, - = Last Active, * = Both
201.201.1.0/24          Self                           (65001) 65020 65020 I
201.201.2.0/24          Self                           (65001) 65020 65020 I
. . .

The output confirms that C1 and C2 routes are being sent to P1 in accordance with the example criteria.

Transit Provider Export Policy

To complete this task, you need to define and apply your transit provider EBGP export policy at r3 and r6. A working policy for r3's T1 and T2 peering sessions is shown next:

[edit policy-options]
lab@r3# show policy-statement transit-export
term 1 {
    from {
        protocol aggregate;
        route-filter 10.0.0.0/8 exact;
    }
    then accept;
}
term 2 {
    from community [ peers transit ];
    then reject;
}
term 3 {
    from {
        route-filter 192.168.0.0/16 orlonger reject;
    }
}

Rather than accepting customer-tagged routes and explicitly rejecting all other BGP routes, this policy matches on, and rejects, BGP routes carrying the transit or peers communities. This is why the 192.168/16 route filter in term 3 is needed: it suppresses advertisement of the 192.168.x/24 routes that r3 has learned from its IBGP peers. Including the transit community in term 2 overrides the default JUNOS software behavior of advertising routes back to the EBGP peer from which they were received, and it ensures that transit routes exchanged between r3 and r6 over IBGP are not inadvertently re-advertised to your transit providers.
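Note that term 2 assumes the peers and transit communities were defined under policy-options earlier in the chapter. Representative definitions would look like the following; the member values shown are placeholders for illustration, not the chapter's actual values:

set policy-options community peers members 65412:2000
set policy-options community transit members 65412:1000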

The following capture shows the changes needed in r3's configuration to apply the transit-export policy and define the local 10/8 aggregate:

[edit]
lab@r3# show protocols bgp group t1-t2
type external;
import [ damp bogons community ];
export transit-export;
peer-as 65222;
multipath;
neighbor 172.16.0.14;
neighbor 172.16.0.18;

[edit]
lab@r3# show routing-options
static {
    route 10.0.200.0/24 {
        next-hop 10.0.1.102;
        no-readvertise;
    }
    route 192.168.30.0/24 reject;
}
aggregate {
    route 10.0.0.0/8;
}
autonomous-system 65000;
confederation 65412 members [ 65000 65001 ];

You should adapt these policy changes and the related configuration changes for use in r6 before proceeding.
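A minimal sketch of the corresponding changes on r6, assuming its transit peering group is also named t1-t2 and that the transit-export policy and its supporting communities have been replicated on r6 verbatim (for example, with load merge terminal):

set routing-options aggregate route 10.0.0.0/8
set protocols bgp group t1-t2 export transit-export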

Verify Transit Provider Export Policy

You can verify the transit provider export policy in the same manner as demonstrated for the peer and customer sites. The policy should result in the advertisement of the 10/8 aggregate along with the customer routes falling within 200.200/16 and 201.201/16, as shown next in this edited capture:

[edit]
lab@r6# run show route advertising-protocol bgp 172.16.0.22

inet.0: 117782 destinations, 229860 routes (112139 active, 2 holddown, 5645 hidden)
+ = Active Route, - = Last Active, * = Both
10.0.0.0/8              Self                           I
200.200.0.0/24          Self                           (65000) 65010 I
. . .
201.201.4.0/24          Self                           65020 65020 I
201.201.5.0/24          Self                           65020 65020 I
. . .

As required, only the 10/8 aggregate and the routes learned from the C1 and C2 peering sessions are being sent to T1.

Confirm Forwarding Paths

Before calling your EBGP policy efforts a success, you should take a few moments to trace routes from various points in your AS to the destinations advertised by your EBGP peers. Most exam candidates opt to trace the route to each EBGP peer's loopback address, because the other routes coming from your EBGP peers may point to discard next hops that prevent the trace from completing. The aggregate for your AS must be correctly advertised before traceroute and ping testing can succeed. Also, because your test bed lacks actual Internet connectivity, you will not be able to reach Internet destinations despite carrying a full Internet routing table. When tracing the routes, pay attention to the forwarding paths taken through your network, because problems with next hop self policies or with the state of your BGP sessions are often revealed when traces fail or take convoluted paths. The following captures, taken from r5, show successful traceroutes to various EBGP peer loopback addresses.

[edit]
lab@r5# run traceroute 120.120.0.1
traceroute to 120.120.0.1 (120.120.0.1), 30 hops max, 40 byte packets
 1  10.0.2.10 (10.0.2.10)  0.661 ms  0.543 ms  0.507 ms
 2  10.0.4.10 (10.0.4.10)  0.444 ms  0.446 ms  0.428 ms
 3  120.120.0.1 (120.120.0.1)  0.613 ms  0.590 ms  0.569 ms

[edit]
lab@r5# run traceroute 130.130.0.1
traceroute to 130.130.0.1 (130.130.0.1), 30 hops max, 40 byte packets
 1  10.0.2.2 (10.0.2.2)  1.165 ms  1.115 ms  1.081 ms
 2  130.130.0.1 (130.130.0.1)  0.989 ms  1.503 ms  1.074 ms

[edit]
lab@r5# run traceroute 130.130.0.2
traceroute to 130.130.0.2 (130.130.0.2), 30 hops max, 40 byte packets
 1  10.0.2.2 (10.0.2.2)  1.296 ms  1.129 ms  1.086 ms
 2  172.16.0.14 (172.16.0.14)  0.991 ms  0.976 ms  1.097 ms
 3  130.130.0.2 (130.130.0.2)  0.819 ms  0.965 ms  1.083 ms

[edit]
lab@r5# run traceroute 201.201.0.1
traceroute to 201.201.0.1 (201.201.0.1), 30 hops max, 40 byte packets
 1  10.0.8.5 (10.0.8.5)  0.624 ms  0.443 ms  0.405 ms
 2  10.0.8.2 (10.0.8.2)  0.498 ms  0.490 ms  0.474 ms
 3  201.201.0.1 (201.201.0.1)  0.587 ms  0.568 ms  0.549 ms



